It led to an interesting discussion. In the absence of actual observations on Rainier, the only real data to look at is proxy data from soundings... but IMO it's inconclusive which location has a higher mean wind speed with the limited data available. Hopefully this won't spur another debate, but if anyone is interested, I did some more analysis of the Mount Rainier data compared against the University of Washington data, which I have posted in the link below. Camp Muir on Mount Rainier does have a weather station at 10,110 feet, operated by the Northwest Avalanche Center. Averages for Camp Muir are reported, but unfortunately the period of record is not given. Also, January 12-24 seems to have some weird readings (if those readings are eliminated, the January average is actually 14.3F for Camp Muir and -0.3F for the summit; I did not adjust the data). Here is what I came up with: Yellow are the actual values for the Camp Muir weather station. Green are the values interpolated from Camp Muir to the summit of Mount Rainier. Blue is the data obtained from the University of Washington study. To interpolate the green temperature values, I took the weather stations around Mount Rainier and calculated the average temperature change between them for every thousand feet of altitude change (the Longmire station was eliminated because its valley-bottom location is subject to radiative cooling). I applied the resulting figure (a 14.6F drop over the elevation change between Camp Muir and the summit) and compared those values with the data from the University study. The average annual temperature difference between the two was only 0.2F, which is insignificant.
Of note, the interpolated winter and spring averages were a little cooler, and the interpolated summer and fall values a bit warmer, than the University's measured values. This seems to make sense, since windy mountaintop locations usually experience a bit less seasonal variation than other locations. The interpolated wind values are only a ballpark figure and shouldn't be considered measured values. To get them, I simply tracked the forecasted wind speeds over the past few weeks to come up with a value for the difference. Over that period, forecasted wind speeds were 1.25 to 2.25 times greater on the summit of Rainier than at Camp Muir. I came up with an average forecast ratio of 1.77 and applied that figure to the average measured wind speeds at Camp Muir. Obviously a lot more data is needed, and my estimation wasn't that scientific; the interpolation is just speculation based on a short period of forecasts and by no means should be considered accurate. I don't claim the number is accurate, but it was interesting. I plan on tracking the difference in forecasted wind speeds over the next few years.
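For anyone who wants to replicate the arithmetic, here it is as a tiny script. The lapse figure and wind ratio are the ones quoted above; the elevations are the station height from the post and the published summit height, and they appear only as documentation.

```python
MUIR_ELEV = 10_110     # feet (from the post)
SUMMIT_ELEV = 14_411   # feet (published summit elevation of Mount Rainier)

LAPSE_TOTAL_F = 14.6   # total F drop interpolated from Camp Muir to the summit
WIND_RATIO = 1.77      # average forecast ratio of summit wind to Camp Muir wind

def summit_temp(muir_temp_f):
    # Apply the interpolated temperature drop to a Camp Muir reading.
    return muir_temp_f - LAPSE_TOTAL_F

def summit_wind(muir_wind_mph):
    # Scale a Camp Muir wind reading by the forecast-derived ratio.
    return muir_wind_mph * WIND_RATIO

# The adjusted January average of 14.3F at Camp Muir maps to -0.3F at the
# summit, matching the adjusted January figure quoted above.
jan_summit = round(summit_temp(14.3), 1)
```

Like the numbers themselves, this is back-of-the-envelope estimation, not a substitute for actual summit measurements.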
Brian Kulis (1) and Kristen Grauman (2) (1) UC Berkeley EECS and ICSI, Berkeley, CA (2) University of Texas, Department of Computer Sciences, Austin, TX Fast indexing and search for large databases is critical to content-based image and video retrieval---particularly given the ever-increasing availability of visual data in a variety of interesting domains, such as scientific image data, community photo collections on the Web, news photo collections, or surveillance archives. The most basic but essential task in image search is the "nearest neighbor" problem: to take a query image and accurately find the examples that are most similar to it within a large database. A naive solution to finding neighbors entails searching over all n database items and sorting them according to their similarity to the query, but this becomes prohibitively expensive when n is large or when the individual similarity function evaluations are expensive to compute. For vision applications, this complexity is amplified by the fact that often the most effective representations are high-dimensional or structured, and the best-known distance functions can require considerable computation to compare even a single pair of objects. To make large-scale search practical, vision researchers have recently explored approximate similarity search techniques, most notably locality-sensitive hashing (Indyk and Motwani 1998, Charikar 2002), where a predictable loss in accuracy is accepted in order to allow fast queries even for high-dimensional inputs. In spite of hashing's success for visual similarity search tasks, existing techniques have some important restrictions. Current methods generally assume that the data to be hashed comes from a multidimensional vector space, and require that the underlying embedding of the data be explicitly known and computable. For example, LSH relies on random projections with input vectors; spectral hashing (Weiss et al. 
NIPS 2008) assumes vectors with a known probability distribution. This is a problematic limitation, given that many recent successful vision results employ kernel functions for which the underlying embedding is known only implicitly (i.e., only the kernel function is computable). It is thus far impossible to apply LSH and its variants to search data with a number of powerful kernels---including many kernels designed specifically for image comparisons, as well as some basic, widely used functions like a Gaussian RBF. Further, since visual representations are often most naturally encoded with structured inputs (e.g., sets, graphs, trees), the lack of fast search methods with performance guarantees for flexible kernels is inconvenient. In this work, we present an LSH-based technique for performing fast similarity searches over arbitrary kernel functions. The problem is as follows: given a kernel function and a database of n objects, how can we quickly find the most similar item to a query object in terms of the kernel function? Like standard LSH, our hash functions involve computing random projections; however, unlike standard LSH, these random projections are constructed using only the kernel function and a sparse set of examples from the database itself. Our main technical contribution is to formulate the random projections necessary for LSH in kernel space. Our construction relies on an appropriate use of the central limit theorem, which allows us to approximate a random vector using items from our database. The resulting scheme, which we call kernelized LSH (KLSH), generalizes LSH to scenarios where the feature space embeddings are either unknown or incomputable. The main idea is to construct a random hyperplane hash function, as in standard LSH, but to perform all computations purely in kernel space.
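For reference, the random-hyperplane hashing that standard LSH builds on (Charikar 2002) can be sketched in a few lines; the dimensions, seed, and vectors below are illustrative only, not taken from any of the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 64, 32                    # illustrative input dimension and code length
R = rng.standard_normal((n_bits, d))  # one random Gaussian hyperplane per hash bit

def hash_bits(v):
    # Charikar's random-hyperplane hash: one bit per projection sign.
    # Two vectors agree on a bit with probability 1 - angle(u, v) / pi.
    return (R @ v > 0).astype(np.uint8)

x = rng.standard_normal(d)
y = x + 0.1 * rng.standard_normal(d)  # a near-duplicate of x
z = rng.standard_normal(d)            # an unrelated vector
# Near-duplicates agree on almost all bits; unrelated vectors on about half.
```

This explicit-vector form is exactly what KLSH cannot use directly, since the random vector R would have to live in the (implicit) feature space of the kernel.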
The central limit theorem states that, under very mild conditions, the mean of a set of objects from some underlying distribution will be Gaussian distributed in the limit as more objects are included in the set. Since for LSH we require a random vector from a particular Gaussian distribution---that of a zero-mean, identity covariance Gaussian---we can use the central limit theorem, along with an appropriate mean-shift and whitening, to form an approximate random vector from a zero-mean, identity covariance Gaussian. By performing this construction appropriately, the algorithm can be applied entirely in kernel space, and can also be applied efficiently over very large data sets. Once we have computed the hash functions, we use standard LSH techniques to retrieve a query's nearest neighbors from the database in sublinear time. In particular, we employ the method of Charikar for obtaining a small set of candidate approximate nearest neighbors, and then these are sorted using the kernel function to yield a list of hashed nearest neighbors. There are some limitations to the method. The random vector constructed by the KLSH routine is only approximately random; general bounds on the central limit theorem are unknown, so it is not clear how many database objects are required to get a sufficiently random vector for hashing. Further, we implicitly assume that the objects from the database selected to form the random vectors span the subspace from which the queries are drawn. That said, in practice the method is robust to the number of database objects chosen for the construction of the random vectors, and behaves comparably to standard LSH on non-kernelized data. 80 Million Tiny Images. We ran KLSH over the 80 million images in the Tiny Image data set. We used the extracted Gist features from these images, and applied a nearest neighbor search on top of a Gaussian kernel. The top left image in each set is the query. 
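A simplified sketch of the kernel-space construction follows; the sizes, kernel, and helper name are illustrative, and this is not the released KLSH code, which handles these steps more carefully:

```python
import numpy as np

def klsh_hash_bits(X_sample, kernel, t=30, n_bits=8, seed=0):
    """Build hash functions purely from kernel evaluations over a sample of
    p database items (a sketch of the KLSH idea, not the authors' code)."""
    rng = np.random.default_rng(seed)
    p = len(X_sample)
    K = np.array([[kernel(a, b) for b in X_sample] for a in X_sample])
    # Mean-shift: center the kernel matrix, i.e., subtract the sample mean
    # in the implicit feature space.
    H = np.eye(p) - np.ones((p, p)) / p
    Kc = H @ K @ H
    # Whitening: K^{-1/2} via eigendecomposition (tiny eigenvalues clipped
    # for numerical stability).
    vals, vecs = np.linalg.eigh(Kc)
    vals = np.clip(vals, 1e-8, None)
    K_inv_sqrt = (vecs * vals ** -0.5) @ vecs.T
    # Each hash function averages t randomly chosen sample items; by the
    # central limit theorem this approximates a Gaussian random direction.
    W = np.zeros((n_bits, p))
    for i in range(n_bits):
        e = np.zeros(p)
        e[rng.choice(p, size=t, replace=False)] = 1.0 / t
        W[i] = K_inv_sqrt @ e
    def hash_bits(q):
        # Hashing a query needs only kernel values against the sample.
        k_q = np.array([kernel(q, x) for x in X_sample])
        return (W @ k_q > 0).astype(np.uint8)
    return hash_bits

# Toy usage with a Gaussian RBF kernel (sizes are illustrative).
rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))
data = np.random.default_rng(1).standard_normal((100, 5))
h = klsh_hash_bits(list(data), rbf)
```

The key point is that neither the construction nor the query hashing ever touches an explicit feature embedding: everything goes through the kernel function.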
The remainder of the top row shows the top nearest neighbor using a linear scan (with the Gaussian kernel) and the second row shows the nearest neighbor using KLSH. Note that, with this data set, the hashing technique searched less than 1 percent of the database, and nearest neighbors were extracted in approximately 0.57 seconds (versus 45 seconds for a linear scan). Typically the hashing results appear qualitatively similar to (or match exactly) the linear scan results. We can see quantitatively how the results of the nearest neighbors extracted from KLSH compare to the linear scan nearest neighbors in the above plot. It shows, for 10, 20, and 30 hashing nearest neighbors, how many linear scan nearest neighbors are required to cover the hashing nearest neighbors. Flickr Scene Recognition. We performed a similar experiment with a set of Flickr images containing tourist photos from a set of landmarks. Here, we applied a chi-squared kernel on top of SIFT features for the nearest neighbor search. Note that these results did not appear in the conference paper. We can also measure how the accuracy of a k-nearest neighbor classifier with KLSH approaches the accuracy of a linear scan k-NN classifier on this data set. The above plot shows that, as epsilon decreases, the hashing accuracy approaches the linear scan accuracy. Object Recognition on Caltech-101. We applied our method to Caltech-101 for object recognition, as there have been several recent kernel functions for images that have shown very good performance for object recognition, but have unknown or very complex feature embeddings. This data set also allowed us to test how changes in parameters affect hashing results. The parameters p, t, and the number of hash bits only affect hashing accuracy marginally. The main parameter of interest is epsilon, a parameter from standard LSH which trades off speed for accuracy. Local Patch Indexing with the Photo Tourism Data Set. 
Finally, we applied KLSH over a data set of 100,000 image patches from the Photo Tourism data set. We compared a standard Euclidean distance function (linear scan and hashing) with a learned kernel (linear scan and hashing). The particular learned kernel we used has no simple, explicit feature embedding (see the paper for details), but the linear scan retrieval results are significantly better than the baseline Euclidean distance, thus providing another example where KLSH is useful for retrieval. The results indicate that the hashing schemes do not degrade retrieval performance considerably on this data. Summary. We have shown that hashing can be performed over arbitrary kernels to yield significant speed-ups for similarity searches with little loss in accuracy. In experiments, we have applied KLSH over several kernels, and over several domains: Gaussian kernel (Tiny Images) Chi-squared kernel (Flickr) Correspondence kernel (Caltech-101) Learned kernel (Photo Tourism) The code is available here. NOTE: the code was updated July 5, 2010 and September 23, 2010 to correct bugs in createHashTable.m. Please use the most recent version. Kernelized Locality-Sensitive Hashing for Scalable Image Search Brian Kulis & Kristen Grauman In Proc. 12th International Conference on Computer Vision (ICCV), 2009. Also see the following related papers, which apply LSH to learned Mahalanobis metrics: Fast Similarity Search for Learned Metrics Brian Kulis, Prateek Jain, & Kristen Grauman IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2143--2157, 2009. Fast Image Search for Learned Metrics Prateek Jain, Brian Kulis, & Kristen Grauman In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
The Caffe deep learning framework is relatively straightforward to use, but getting CUDA and cuDNN to play nicely can be daunting for some. Here I'll show you how to set up an Amazon EC2 instance with CUDA 7.0, cuDNN v3, the NVIDIA fork of Caffe, and DIGITS v3. DIGITS is a web application written in Python that provides a clean GUI for interfacing with Caffe. Setting Up An EC2 Instance This is not an AWS tutorial, but setting up an EC2 instance is pretty self-explanatory. If you haven't, navigate to https://aws.amazon.com and create an account. Sign into the console and navigate to the EC2 dashboard. There are two gotchas to be aware of already: - You can have two AWS accounts with the same email. When I first started I had two accounts without realizing it. - Take note of your region (top right). When you create an EC2 instance it is located in a particular region, and you won't be able to view your instances from a different region. If you haven't set up an EC2 instance before, the steps are as follows: - Click Launch Instance on the EC2 dashboard. Select "Ubuntu Server 14.04 LTS ..." and on the next page select a GPU-enabled instance type (g2.2xlarge or g2.8xlarge). - Now press Next until you get to Add Storage. Increase the storage size of the root volume. I suggest 20GB; 8GB is not enough. - Click Next twice to get to Configure Security Group. Click Add Rule, enter 5000 in the port range, and set Source to Anywhere. - Now select Review and Launch. Select Launch again, and you should be prompted to download a keypair. - Select Create a new key pair from the drop-down box, name it, and hit Download. I'm naming mine keypair. Move this file to your home directory. Once the instance has a status of running, get the public IP; we can then ssh into it after changing the permissions on our keypair file: $ chmod 600 ~/keypair.pem; $ ssh -i ~/keypair.pem ubuntu@<public ip>; We are now in control of the Ubuntu machine. 
Installing NVIDIA Drivers Update apt-get, install preliminaries, and download the CUDA 7.0 installer using these commands: $ sudo apt-get update && sudo apt-get upgrade; $ sudo apt-get install build-essential; $ wget http://developer.download.nvidia.com/compute/cuda/7_0/Prod/local_installers/cuda_7.0.28_linux.run; This package also includes an NVIDIA proprietary driver. Now we extract the CUDA 7.0 installer: $ chmod +x cuda_7.0.28_linux.run; $ mkdir nvidia_installers; $ ./cuda_7.0.28_linux.run -extract=`pwd`/nvidia_installers; Before we install the driver, we need to update the machine: $ sudo apt-get install linux-image-extra-virtual; And we need to disable the currently installed open-source driver so it doesn't interfere: $ sudo nano /etc/modprobe.d/blacklist-nouveau.conf; Add the following lines: blacklist nouveau blacklist lbm-nouveau options nouveau modeset=0 alias nouveau off alias lbm-nouveau off Then run: $ echo options nouveau modeset=0 | sudo tee -a /etc/modprobe.d/nouveau-kms.conf; $ sudo update-initramfs -u; $ sudo reboot; SSH back into the machine after waiting for it to reboot, and run the following commands to complete the driver installation: $ sudo apt-get install linux-source; $ sudo apt-get install linux-headers-`uname -r`; $ cd nvidia_installers; $ sudo ./NVIDIA-Linux-x86_64-346.46.run; Accept the EULA. You may get a few warnings; just select OK. When asked about the nvidia-xconfig utility I selected Yes. The NVIDIA proprietary drivers are now installed. Run nvidia-smi for information about your GPU and driver version. Next, install the CUDA toolkit and samples: $ sudo modprobe nvidia; $ sudo apt-get install build-essential; $ sudo ./cuda-linux64-rel-7.0.28-19326674.run; $ sudo ./cuda-samples-linux-7.0.28-19326674.run; Accept the EULA and press enter to accept any defaults. Now we update our path variables. 
Run nano ~/.bashrc and add the following lines: export PATH=$PATH:/usr/local/cuda-7.0/bin export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.0/lib64 To download cuDNN you must register with NVIDIA's Accelerated Computing Developer Program. Do so here: https://developer.nvidia.com/cudnn. Select cuDNN v3 Library for Linux and download it to the home folder of your local machine. To get the library onto your Ubuntu VM, use the following command on your local machine: $ scp -i ~/keypair.pem ~/cudnn-7.0-linux-x64-v3.0-prod.tgz ubuntu@<public ip>:/home/ubuntu/cudnn-7.0-linux-x64-v3.0-prod.tgz; Back in the Ubuntu VM, run the following commands: $ cd; $ tar -zxf cudnn-7.0-linux-x64-v3.0-prod.tgz; $ cd cuda; $ sudo cp lib64/libcudnn.so.7.0.64 /usr/local/cuda/lib64/; $ sudo cp include/cudnn.h /usr/local/cuda/include/; $ cd /usr/local/cuda/lib64; $ sudo ln -s libcudnn.so.7.0.64 libcudnn.so.7.0; $ sudo ln -s libcudnn.so.7.0 libcudnn.so; The cuDNN libraries are now available. Install the dependencies: $ sudo apt-get install -y libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev protobuf-compiler gfortran libjpeg62 libfreeimage-dev libatlas-base-dev git python-dev python-pip libgoogle-glog-dev libbz2-dev libxml2-dev libxslt-dev libffi-dev libssl-dev libgflags-dev liblmdb-dev python-yaml python-numpy; $ sudo easy_install pillow; Now we download Caffe: $ cd ~; $ git clone https://github.com/NVIDIA/caffe.git nv-caffe; We need to install Caffe's python dependencies. This can take a while (for me, up to half an hour). $ cd nv-caffe; $ cat python/requirements.txt | xargs -L 1 sudo pip install; Copy the example build configuration with cp Makefile.config.example Makefile.config and uncomment the line USE_CUDNN := 1 using nano Makefile.config. Use the command htop (you may need to install it with sudo apt-get install htop) to check how many CPU cores you have; then we can compile Caffe. 
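If you prefer not to edit files in nano, the ~/.bashrc additions above can be appended non-interactively. The snippet below writes to a scratch file so it is safe to try; point BASHRC at ~/.bashrc when doing it for real (the CUDA paths assume the default 7.0 install locations used in this guide):

```shell
# Scratch file for demonstration; use BASHRC=$HOME/.bashrc for real.
BASHRC=/tmp/demo_bashrc
: > "$BASHRC"
# Only append if the entries are not already present (idempotent).
grep -q 'cuda-7.0/bin' "$BASHRC" || {
  echo 'export PATH=$PATH:/usr/local/cuda-7.0/bin' >> "$BASHRC"
  echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-7.0/lib64' >> "$BASHRC"
}
grep -c 'cuda-7.0' "$BASHRC"   # prints 2
```

The single quotes keep $PATH and $LD_LIBRARY_PATH literal so they are expanded when the shell sources the file, not when you append the lines.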
Execute the following commands with X as your number of CPU cores: $ make all -jX; $ make test -jX; $ make pycaffe -jX; Run Caffe's test suite (make runtest -jX) to confirm everything is working properly. Errors are common here; here are some bugfixes: # Error: 'build/examples/mnist/convert_mnist_data.bin: error while loading shared libraries: libcudart.so.7.0: cannot open shared object file: No such file or directory' $ sudo ldconfig /usr/local/cuda/lib64; # Error: 'libdc1394 error: Failed to initialize libdc1394' $ sudo ln /dev/null /dev/raw1394; # Can't complete the train script: $ nvidia-modprobe -u -c=0; Download DIGITS v3: $ cd; $ git clone https://github.com/NVIDIA/DIGITS.git digits; $ cd digits; DIGITS is written in Python, so we have to install some python dependencies. We'll do this inside of a virtual environment. $ sudo apt-get install python-pil python-numpy python-scipy python-protobuf python-gevent python-flask python-flaskext.wtf gunicorn python-h5py; # Will take up to half an hour $ sudo pip install virtualenv; $ virtualenv venv; $ source venv/bin/activate; $ cat requirements.txt | xargs -L 1 pip install; Use the command deactivate when you're ready to leave the virtual python environment (not now). Finally, start the DIGITS server script and direct it to ~/nv-caffe for the Caffe installation directory. You may need to execute: $ sudo ln /dev/null /dev/raw1394; Open DIGITS in a local web browser by accessing the URL http://<public ip>:5000. That's it. You now have a DIGITS web server that you can use to train image classification models.
"use strict";

import chalk from 'chalk';
import Discordie from 'discordie';
import config from "./config/config.json";
import responses from "./resources/responses.json";

var commands = require("./plugins");

const client = new Discordie();
let inited = false;
let helpText = "";

function onMessage(ev) {
    // Ignore empty messages and messages from this bot
    if (!ev.message) return;
    if (client.User.id === ev.message.author.id) return;

    let msg = ev.message;

    if (msg.content[0] === config.prefix) {
        let command = msg.content.toLowerCase().split(' ')[0].substring(1);

        // Print the help message
        if (command == "help") {
            printHelpMsg(ev);
            return;
        }

        let params = msg.content.substring(command.length + 2).split(' ')
            .filter(function(el) { return el.length != 0; });
        let cmd = commands.default[command];

        // If the command was found among the plugins, call its function
        if (cmd) {
            cmd.func(client, ev, params);
        } else {
            let user = msg.author;
            msg.channel.sendMessage(user.nickMention + ", I do not know that command. Type *!help* to see all available commands.");
        }
        return;
    }

    // See if the name of the bot was mentioned
    if (client.User.isMentioned(msg)) {
        console.log(chalk.cyan('Bot mentioned!'));
        var ans = responses.answers[Math.floor(Math.random() * responses.answers.length)];
        msg.channel.sendMessage(ans);
        return;
    }
}

// TODO: Does this get called for each server individually?
function onPresence(ev) {
    let user = ev.user;
    if (user.previousGameName != "Overwatch" && user.gameName == "Overwatch") {
        let greet = responses.greetings[Math.floor(Math.random() * responses.greetings.length)];
        var presence = Math.random() < 0.5
            ? "."
            : ", " + responses.presence[Math.floor(Math.random() * responses.presence.length)];
        //console.log(greet + " " + ev.user.nickMention + " just started playing Overwatch" + presence);

        // TODO: Clean this up.
        if (ev.guild.textChannels.length > 0) {
            let c = ev.guild.textChannels[0];
            c.sendMessage(greet + " " + ev.user.nickMention + " just started playing Overwatch" + presence);
        }
    }
}

function buildHelpText() {
    let c = commands.default;
    for (var key in c) {
        helpText += "**!" + key + " " + c[key].usage + "**\n" + " " + c[key].desc + "\n";
    }
}

function printHelpMsg(ev) {
    let user = ev.message.author;
    ev.message.channel.sendMessage(user.nickMention + ", here are all the available commands\n" + helpText);
}

function connect() {
    if (config.token == "" || config.bot_id == "") {
        console.error('Watcherino needs token and bot_id to be set up in config.js!');
        process.exit(1);
    }
    buildHelpText();
    client.connect({ token: config.token });
}

function forceFetchUsers() {
    console.log("Force fetching users..");
    client.Users.fetchMembers();
}

// Listen for events on Discord
client.Dispatcher.on('GATEWAY_READY', () => {
    console.log("Started successfully.");
    setTimeout(() => forceFetchUsers(), 45000);
    if (!inited) {
        inited = true;
        // Set up handlers
        client.Dispatcher.on('MESSAGE_CREATE', onMessage);
        client.Dispatcher.on('MESSAGE_UPDATE', onMessage);
        client.Dispatcher.on('PRESENCE_UPDATE', onPresence);
    }
});

client.Dispatcher.on('DISCONNECTED', () => {
    console.log("Disconnected. Reconnecting..");
    setTimeout(() => { connect(); }, 2000);
});

connect();
Finding file name/location of bash script running on server using PID As a training exercise for the new company I work for, a buddy put a script on my webserver that displays ( ͡° ͜ʖ ͡°) in the top right-hand corner of my CLI screen, then shelled into the server. Using ps -ef I have found the running script: root 20071 1 0 Oct07 ? 00:03:04 bash I have attempted to run: ps -p 20071 -o comm= which outputs bash. I have also attempted: ls -la /proc/20071/exe which outputs: lrwxrwxrwx 1 root root 0 Oct 7 21:03 /proc/20071/exe -> /bin/bash* I am in /usr/bin/, where I believe the script to be located, but I cannot seem to isolate it, as I do not see bash referenced within that folder. I am fairly new to the CLI, so I know I must be missing something obvious. Is the script itself called bash, or is that just showing the interpreter? I am assuming the script is a .sh file, but I am unsure. Is there a way to determine the name of the script that is running and where it is located, either using the PID or another method? Try man motd. I opened that file with nano and vim but it is blank in both cases. I think you need to provide some more details. That face appears in your terminal when you log in. Does it stay there as you execute commands, or does it scroll off the screen as you enter commands? Thanks for getting back, Glenn. The message scrolls away and I see other instances of it popping up. There seem to be 7 or 8 instances of the process running. Not sure what triggers it, but the processes all have the same name and date as the example in the original post. This is a bash script, so the process is executing /bin/bash. That's normal. The script is held open by the bash process. Use ls -l /proc/20071/fd or lsof -p 20071 to list the files open by that process. You'll find the script on file descriptor 255 by default. 
cat /proc/20071/fd/255 (Whether this is what's causing the phonetic symbols and diacritics to appear on your terminal, and how it's doing it, are separate matters for which you do not have sufficient information at this time.)
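To see the fd-255 behavior for yourself, here is a self-contained demonstration (the /tmp paths are only for illustration):

```shell
# bash keeps the script it is executing open on file descriptor 255,
# so /proc reveals the script's real location.
cat > /tmp/demo_script.sh <<'EOF'
#!/bin/bash
sleep 30
EOF
chmod +x /tmp/demo_script.sh
bash /tmp/demo_script.sh &
pid=$!
sleep 1
readlink "/proc/$pid/fd/255" > /tmp/demo_out
cat /tmp/demo_out      # -> /tmp/demo_script.sh
kill "$pid"
```

On the server in question you would substitute the real PID (20071) for $pid.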
Ever since I started collecting and reporting HTML5test scores, I have noticed the enormous diversity of browsers on mobile. There are not just a handful of popular browsers, but literally dozens. And even worse, you can run the same browser on the same OS on two different devices and still get significantly different test scores. On desktop, testing a website is pretty easy. You just install a couple of different browsers and a couple of virtual machines running some old versions of Internet Explorer, and you're basically done. On mobile it is a bit more complicated. You'll need devices. And devices cost money. If you're a big company you can probably afford to build your own device lab, but for an independent designer that can be very costly. And even if you can afford to buy all those devices, not many companies actually realize this is a problem. So what happens is that designers usually check websites on their own phone, and maybe on the phone of a friend, and are done with it. And that is why we have so many 'mobile' sites that only work well on iPhones. One of the solutions to this problem is getting devices into the hands of developers. The HTML5test Device Lab A little over a year ago I read an article on Smashing Magazine about Jeremy Keith opening up his own device lab to other developers, and ever since I've been thinking about doing something similar myself. When the subject of the Open Device Lab movement was brought up during PhoneGap Day last September, I decided to step up and do the same. As of today I am running an Open Device Lab: the HTML5test Device Lab. If you want to test your site on a large range of different mobile devices, all you have to do is make an appointment and visit my lab. The coffee and Wi-Fi are free! Over the last couple of years I have managed to collect a substantial number of devices for my day job and for testing the HTML5test site itself. 
A couple of weeks ago I reached out to a number of companies with my idea and I’ve gotten some very good responses. Last week BlackBerry sent me some devices I did not have yet and I’ve gotten some pledges and support from other companies too. Want to help out? Please read our wish list. Thanks! Right now the lab has 71 devices available for testing. If you want to know how your website looks on an ancient BlackBerry, on a Nokia N9 or a Firefox OS device – we’ve got them. Need to test on iOS? No problem, we’ve got devices running iOS 2.2 to 7.0 and you’re welcome to use them. Other Open Device Labs The HTML5test Device Lab isn’t the only one. There are more than 70 Open Device Labs across 22 countries. Head on over to OpenDeviceLab.com for more information and locate the one closest to you.
For a while, I have wanted to write a little bit of advice for those of you wanting to move from Microsoft Windows to a Linux-based desktop computer. Today, I'll run down a few of the points that are incentives to switch and some that might be incentives to stick with Windows. Open Source. First and foremost in my mind is open source versus closed source. Linux is, of course, open source, which means that anyone can see the source code and freely modify it. Any good computer programmer can contribute to the project. This might sound like an uncertain idea at first, but in general the collaboration goes much deeper than in closed source projects due to the diversity of the programmers, and bugs are resolved pretty quickly, as anyone, including high-level users, can fix them. Open source projects are volunteer based, but that doesn't mean that no one is supporting them. The tools at the heart of open source projects are actively and passionately supported by dedicated programming teams. Closed source software, like Microsoft Windows, on the other hand, typically has its code held in secrecy. End users cannot see the internals of the programs, which leaves bug fixing at the mercy of the software company. Furthermore, algorithms cannot be improved by savvy end users, as the internals of the program are hidden. To most people, though, what matters is whether their programs work well. Open source and closed source both have good programs, but in my experience open source programs typically work better and fix bugs more quickly. Licensing. With Linux, you are granted a license to the code. You own it in the sense that you own your television or your sofa. Windows is licensed to you; it is rented to you with fixed terms and conditions. Essentially, Linux is like owning your house, whereas with Windows you are renting your house from a landlord. Price. What matters more to most people is price. Cold hard cash. 
Open source projects are licensed under free licenses and are available for free. Closed source programs are sold for money. Many people will claim, "I didn't pay for Windows, it just came on my computer!" This is untrue: the cost of Windows is built into the price tag, in varying amounts, but it is still a significant chunk of the final price. I could write a whole column about the money debate, but it basically boils down to this: open source is free and closed source costs money. Stability. Stability is another big issue in computers. LINUX IS NOT A NEW THING! It has its foundations deep in UNIX computing traditions dating back to the late 1960s. Personally, I find it odd that a relative newcomer like Windows edged the more traditional UNIX-based systems out of the desktop market. Being based in such deep computing experience, Linux is very stable. It is not uncommon to have your computer up and running for many, many weeks without any problems whatsoever. This is the reason that Linux and UNIX-based systems are the foundation of many Internet servers, which must always run without fault. Whenever I run Windows, I find that reboots must be more frequent than I would like. Ease of use is something most people also care about in using their computers. They just want it to work: plug it in, turn it on, and work. Unfortunately, given the complexity of the modern computer, some configuration and computer know-how is typically required from any user. Linux is extremely configurable, which means there are many ways to configure Linux that aren't palatable. Windows is less configurable in general, but for many people this makes their life easier. Linux is much better at working out of the box than it used to be, and most of the time it will work out of the box. 
The biggest issue, though, is simply adapting to Linux configuration from Windows configuration, which I will discuss later. Adaptation period. There will be a definite adaptation period when leaving Windows and moving to Linux. You'll have to say goodbye to your C:\ drive and say hello to a / partition. The concept isn't harder, it's just a little different. Subsequent articles of mine will go into this more deeply, but the adaptation can definitely be done. This was just a quick rundown of why you might want to make the switch. Linux is freedom of use in your computer. I would highly recommend the migration, but many people are comfortable with Windows and prefer to stay that way. If you are comfortable with the Windows system, you may want to stick with it. If you appreciate high-quality, free software and have an open mind, Linux is the place to be. At any rate, you should give Linux a try and see what it's all about. Here are links to Live CDs. These CDs do not touch your hard drive at all, but boot up a Linux system for you to try. Download one, burn it to a CD, and reboot your computer. When it starts, you'll be in a Linux system. Turn it off again, remove the CD, and your computer will be just the way it used to be. These are different varieties of Linux for you to try out; any one of them will give you a feel for what Linux is about.
A roundup of apps, tutorials, and resources for building chat applications over Wi-Fi and Bluetooth on Android:
- Nearby WiFi Chat (e-Meet β), available on Aptoide.
- How to build Android chat apps using Xamarin and Twilio, and how to build a simple chat application in Android.
- Learn how to build an Android chat app in 10 minutes, add it to your existing app, or build the next WhatsApp.
- Facebook for Android, which now lets everyone find free Wi-Fi through the app.
- Serval Mesh (Android 2.2+), which lets you chat with other mobile phones without a phone network.
- WiFi Chat APK 1.0, for communication between devices joined to the same Wi-Fi network.
- A step-by-step tutorial for building your own Android chat app with Parse integration.
- The Wi-Fi Direct™ APIs, which allow applications to connect to nearby devices quickly, without needing to join a network or hotspot.
- Android SDK guides on preparing applications for Wi-Fi and 4G LTE networks on Nexus-family and other official Android devices.
- WiFi Direct Group Chat 1.0.1, available on Aptoide.
- Freelance listings for Android Wi-Fi voice chat projects.
- App roundups such as aVibrim, Android hotspot tools, and Chat Lock.
- PubNub APIs and infrastructure for building chat and collaboration applications with message history, user detection, and typing indicators.
- Apps that let you chat between iOS and Android even without reception, using Wi-Fi or Bluetooth (Adam Clark Estes, 6/24/14).
- Google Talk, which provides quality Wi-Fi-based video chat on Android tablets.
- Notes on how typical Wi-Fi networks struggle to support the video chat apps that ship with some Android tablets, especially once a WAN hop is added to the call.
- A sample application allowing two Android devices to carry out two-way text chat over Bluetooth, demonstrating the fundamental Bluetooth API capabilities.
- Android Bluetooth tutorials covering environment setup and application components.
- Wi-Fi online chatrooms where several people can chat at once.
- A tutorial and example for two-way text chat over Bluetooth in Android.
- The android-bluetoothchat project on GitHub.
- Guides on making a simple Wi-Fi Direct chat application in Android.
- Mobile messaging apps that let you send texts, share photos and videos, and make voice and video calls while avoiding SMS and call charges.
- Ideas for turning the Wi-Fi of an Android device into an online friend finder.
- A developer asking what packages and techniques are needed to build a FireChat-like app for educational purposes.
- WiFi Chat & File Share Groups 1.4, available on Aptoide.
- Related software: Android Sync Manager WiFi, 123 Flash Chat Server, and Android Line-to-iPhone transfer tools.
- AppCrawlr's top 100 chat-over-Wi-Fi apps for iOS, free and paid.
- BoscoChat, a free Wi-Fi chat room Android application.
- Creating P2P connections with Wi-Fi (Android's Wi-Fi P2P framework complies with Wi-Fi Direct).
- A simple chat application using ListView in Android, from Trinity Tuts.
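Stripped of platform specifics, the core of most of these "chat over the same Wi-Fi network" apps is ordinary socket programming. As a minimal, hedged sketch (in Python rather than Android Java, and using the loopback address so the example is self-contained), one peer binds a UDP socket and another sends it a message:

```python
import socket

# Receiver: bind to an OS-assigned port. On a real LAN you would bind
# to the device's Wi-Fi address instead of loopback.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
port = recv.getsockname()[1]

# Sender: another "device" on the same network sends a datagram.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hi from the same network", ("127.0.0.1", port))

msg, _ = recv.recvfrom(1024)
print(msg.decode())  # hi from the same network
```

On a real network you would also need peer discovery (e.g. UDP broadcast), which is essentially what the Wi-Fi Direct and hotspot-based apps above automate.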
SPEAK application for non-Sitecore items I'm new to SPEAK and would like to develop an application for my current project, which is in Sitecore 8.2. I did go through the SPEAK developer's cookbook for 7.2 and some other examples available online, but they are all similar and talk about displaying lists of Sitecore items. My requirement is to have pages which will be used by content authors or administrators, for example: Show a list of all Sitecore users, with search functionality to search by name or domain. Show a list of all orders with pagination, search, edit, etc. The orders here are not Sitecore items; they are maintained in another, non-Sitecore database. A page to allow creation of fake items under a parent. I have already developed these pages using custom controller renderings (which use Ajax calls to save/retrieve data). The page items are now in the master DB (of course set to never publish) and are working fine. As you can see, the content for some tasks is NOT from Sitecore items. I would now like to have these pages as SPEAK apps, mainly to be in line with the CMS design, look & feel. After going through the material, I feel it is easier and quicker for me to write code and develop such apps this way, instead of making all the configuration SPEAK requires. Can anyone please point me to any examples that show how to develop SPEAK apps using the regular controller rendering approach? Answer The direct answer to this question is that there is no way to build a SPEAK application without using Sitecore items. This is because using items out of the core database is foundational to how and why SPEAK was written. It was intended to be a pattern that uses Sitecore structure, association to datasources, and presentation renderings to build HTML structure without the "need to code views". Background The way SPEAK uses controllers is for data fetching and data saving only.
All UI is managed by Sitecore item structure in the core database and flavored with referenced CSS files and PageCode (i.e. a JavaScript include). There is no concept of a view in SPEAK per se. SPEAK is built upon the idea of Atomic Design, so Razor view renderings exist at a discrete element level and are precompiled into the DLL. It is possible to create your own SPEAK view rendering by following Sitecore's examples for the built-in components; however, it still requires the use of Sitecore items to build structure. As mentioned in Daniil's answer, the way some folks have gotten around this is to create the bare-bones items needed to start a SPEAK app, and then use a bunch of custom Web API calls to display content inside a single SPEAK component. Closest Solution The closest thing I have seen to what you are looking for is the Express Profile Tab module that Jeff Darchuk wrote. In it, he uses code to create the underlying SPEAK wiring needed and then creates an additional tab via normal view renderings. Study the Express Profile Tab closely to understand how he injects this. I am not sure whether this method is completely usable for the use case presented in this question, as the Experience Profile already exists as a full SPEAK application. Online Tutorial The best tutorial that I have used to learn and understand SPEAK is the 6-part tutorial that Martina Welander made. https://mhwelander.net/2014/06/27/speak-for-newbies-part-1-creating-a-new-application/ I had similar requirements a couple of times and found that the most efficient approach is to create an application wrapper with SPEAK 1/2 - header, footer, sidebars, OK/Cancel buttons, etc. - and then develop all core functionality as a single big SPEAK component with a bunch of custom Web API controllers and simple require.js/jQuery on the UI. So it seems like it isn't straightforward and simple.
Many common software applications are built on the work of our predecessors; indeed, this is the very heart of the Free Software movement. Recently a representative of Microsoft™ Corp., Jim Allchin, publicly attacked this movement, calling it "un-American" and stating, "I can't imagine something that could be worse than this for the software business and the intellectual-property business." However, it is common knowledge that the Windows operating system has relied upon the efforts of those who have been working in the Open Source community. Cited most commonly is ftp.exe. Running strings on the binary* returns the text: @(#) Copyright (c) 1983 The Regents of the University of California. The same can also be seen in nslookup.exe: @(#) Copyright (c) 1985,1989 Regents of the University of California. All rights reserved. @(#)nslookup.c 5.39 (Berkeley) 6/24/90 @(#)commands.l 5.13 (Berkeley) 7/24/90 finger.exe, rcp.exe and rsh.exe all contain similar notices as well. Now, it's easy to become angry at this point, but remember that under the provisions of the BSD Licence, Microsoft is on firm legal ground in what they are doing. I'm not saying that what they are doing is wrong; it's merely hypocritical. One could argue that it would be little effort for a monolithic corporation like Microsoft to get some in-house programmer to write these simple utilities; one would think it a small matter. But given the scrutiny Microsoft is under, you'd think they'd be a bit more careful about what they say, and not find themselves in such a trap. If one supposes that Mr. Allchin was making his comments solely in reference to the GNU General Public Licence (which requires users of source code from GPL'd software to, in turn, reveal their source code), then it becomes clear that Microsoft wants to be able to take from the community without having to give anything in return. Of course they do; they are out to make money.
It is also possible that Microsoft paid the University of California for use of this code, although I find it unlikely, as the BSD Licence does not require that. These utilities have been around a long time and have the benefit of having been worked on by an entire generation of programmers. One could figure that Microsoft probably likes the convenience that the BSD Licence provides and has experienced frustration with the aspects of the GPL that require disclosure of source code. The comments made by Mr. Allchin could have been in response to finding some beautiful code for a feature and being unable to use it due to the provisions of the GPL. With the likes of Sun Microsystems, IBM and Hewlett-Packard all throwing their hats into the ring of Free Software, combined with the efforts of The GNOME Foundation to create a stable and easy-to-use GUI, I can see how the company could feel threatened. * All comments extracted from binaries included with Windows 2000. tftv256 has pointed out that the winsock.h that comes with VC++ 6 also contains the Berkeley copyright.
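The strings check described above is easy to reproduce. The sketch below reimplements the relevant part of the Unix strings(1) utility in Python and runs it on a stand-in byte blob; the embedded copyright line is the one quoted above, but the blob itself is an invented example, not bytes from the real ftp.exe:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII at least min_len long, like strings(1)."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data)]

# Stand-in for a few bytes of a binary: some junk, then the Berkeley notice.
blob = (b"\x00\x01MZ\x90"
        b"@(#) Copyright (c) 1983 The Regents of the University of California."
        b"\x00\xff")
print(extract_strings(blob))
```

Running the real strings utility over ftp.exe, nslookup.exe, finger.exe, rcp.exe or rsh.exe from a Windows 2000 system surfaces the notices quoted in the article.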
Why difficulty? In many games, the player is sucked in by the challenge of making it through the game. The sense of accomplishment that comes from solving a difficult problem has a serious short-term consequence: it increases the player's confidence, pushing him to continue through the game, or to persist if he fails a few times. But difficulty is a double-edged sword: if it is not controlled correctly by the designer, it may destroy the player's motivation because the game is too hard, or lose his interest because the game is too easy. Pleasing the player therefore means taming the difficulty. Anyone in the gaming world is more or less familiar with platform games: games that present the player with a world to explore, with an emphasis on movement (such as jumping from platform to platform) rather than the killing of enemies. Some of these games insist on the brain-teasing dimension of their gameplay (such as the Oddworld games) while others insist on the dexterity required to artfully dodge the traps, reach the ledges, and make it through the world unharmed (for instance, Crash Bandicoot). In the former case, the difficulty consists in finding a solution to the puzzles in the game, rather than in executing it. Once the player figures out what to do, he is usually able to move past the obstacle without problems. In dexterity-based platform games, on the other hand, the solution is usually close at hand, and the difficulty lies in actually implementing it: a series of well-timed jumps over gaps, traps, pits, and other devious mechanisms. There are, of course, rules that difficulty follows when emerging from such simple building bricks, and this article is meant to cover them. The typical platform game can be divided into areas, each having its own way of behaving and interacting with the player.
Some are neutral ground: nothing happens while you are standing there, so you might as well go and fetch some beer from the fridge without even pausing the game. Others, however, can be as dangerous as killing you instantly when you enter them and returning you to your latest checkpoint or saved game. From a gameplay and programming point of view, these areas are best represented by their behaviour and interaction. From the player's point of view, these areas must be recognizable. Which brings forth our… There is absolutely no point in surprising the player: if the player does not know how a particular thing reacts, he may either trigger it, which can be frustrating, or avoid it unknowingly - no satisfaction taken in the deed. That's an all-lose situation for the designer, because he cannot amuse the player either way. All features should be introduced to the player before actually being used to create difficulty. This introduction can be done in a few ways: some features are simply obvious about the way they function, because the way they are represented in the game strikes a chord in the player's mind that somehow relates them to their behaviour. Others are not, and the only way the player can actually see their purpose is by testing them. Spare the player that unnecessary frustration, and make the first time easy. It is good, overall, to introduce new features alone, and to make a few variations on them so the player fully understands what they are all about, before moving on to more complex situations. We want the player to know what's expected of him: this way, before making even the first jump in a level, he knows every single way in which he can fail, and this adds to the feeling of satisfaction he gets when overcoming the obstacle. This leads naturally to… Failure, in turn, implies the loss of something. When the loss increases, so does the overall perceived difficulty.
Jumping over a pit is not hard per se, but jumping over a pit knowing that you have encountered no checkpoints for an hour or so makes the whole thing a lot harder. The loss comes in three flavours: Difficulty does not only depend on the amount lost by the player should he fail; it also depends on the chances of that failure happening. Now, what are the actual tools for making the player fail? Each obstacle has two aspects: the triggering method and the reason. The reason is all about the design and the player: we can't simply tell him "OK, you triggered this obstacle, so you lose one life and move back to the beginning"; he has to understand by himself why. The ground falls, a mine explodes, a rock drops, or something similar, which provides the "reason" the character had to die like that. This leads back to our first rule: if the player understands what happened, he won't fall for it again… or will die trying. There are various triggering methods. Some are player-dependent, some are not. Some are dynamic, some are not. I like to put them into a handful of categories. Once these tools are available, one should wonder how to combine them into devious traps.
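To make the trigger/reason split concrete, here is a hedged sketch; the class and field names are mine, not from any particular engine, but they mirror the model described above: each obstacle pairs a triggering method (when does it fire?) with a player-visible reason (so failure is understandable) and a loss.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Obstacle:
    trigger: Callable[[dict], bool]   # player-dependent or timed predicate
    reason: str                       # visible cause of death, e.g. "a mine explodes"
    loss: int                         # how much the player loses on failure

def step(player: dict, obstacles: list) -> list:
    """Return the reasons for every obstacle the player just triggered."""
    return [o.reason for o in obstacles if o.trigger(player)]

# A pit that kills a grounded player standing at x == 3.
pit = Obstacle(trigger=lambda p: p["x"] == 3 and not p["jumping"],
               reason="the ground falls away", loss=1)
print(step({"x": 3, "jumping": False}, [pit]))  # ['the ground falls away']
```

A well-timed jump (`"jumping": True`) over the same tile triggers nothing, which is exactly the dexterity-based difficulty the article describes.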
As Artificial Intelligence (AI) becomes an integral part of how governments, organizations and societies defend themselves against cyber attacks, not much attention is given to how people with intentions to harm your organization could use these same technologies to their advantage. Adversarial AI is the development and deployment of advanced technologies associated with human intellectual behaviour. This includes, but is not limited to, learning from past experiences and reasoning out critical meanings from complex data sets. A typical A-AI attack causes machine learning models to misinterpret inputs to the system and behave in a way that is favorable to the attacker. To produce this behaviour, the attacker creates 'adversarial examples' that resemble normal inputs but break the model's performance. Adversarial AI is hugely dependent on deep learning, and the two have a symbiotic relationship. Deep learning's effectiveness is based on the large number of interactions between neurons that take place in a network. Creating these adversarial examples is a complex venture. Often the best way to do so is to use deep learning to learn how inputs can be manipulated in the attacked system. Using GANs (Generative Adversarial Networks) can help. In fact, most adversarial attacks make use of GANs to create these examples, fooling the attacked model into producing the desired outcome. Adversarial AI attacks pose a threat to multiple technologies that make use of machine learning and/or deep learning to obtain their results. Some of these core technologies are: - Computer vision: Advanced computer vision is enabled by deep learning methodologies. From image classification to the creation of self-deciding components, computer vision uses deep learning as an essential part of its pipeline. Adversarial attacks can cause OCR readings to be misinterpreted.
Finance and banking applications that use OCR as an essential part of their e-verification processes, in India and overseas, are vulnerable. - Natural Language Processing (NLP): Applications of deep learning in NLP are also vulnerable to A-AI attacks. Unlike images, which usually have continuous pixel densities, text data is largely discrete. This makes the optimization needed to find adversarial examples more challenging. - Industrial Control Systems: Many control systems use estimations and approximations to reduce computational complexity, meaning that some interactions are not captured in the control equations. By creating GANs that make minute manipulations to control systems' inputs, attackers can cause unexpected behaviors with a wide array of outcomes: from simple system degradation, to increased wear-and-tear, to catastrophic failure. Countering the challenges Although AI attack surfaces are only now emerging, organizations' security strategies should take the challenges of Adversarial AI into consideration. The prime emphasis should be on engineering powerful and resilient models, structuring them so they can withstand adversarial attempts. - Be aware of current threats. Understanding the effects of Adversarial AI requires a deep understanding of your organization's current structure and where the implementation of a defense system could help. - Audit your business processes & structure. Conduct an audit to determine which sections of your business processes leverage AI. You can either do this with your in-house cyber security teams or outsource audits to companies like Aphelion Labs, where our experts critically analyse your business process to determine areas that need your attention. Critically analyse the received information with these points: • Is the process visible to the outside world? • Can users/clients create their own inputs and obtain results from the model?
• Are there any open-source models or frameworks used in this process? • What outcomes could a potential attacker derive from the process? - Create an action plan for the most vulnerable processes. Prioritize your plans for the models that seem most vulnerable to potential A-AI attacks, and create a plan to strengthen the structures that seem to be at high risk of attack. Create matrices that compare each process' criticality against the amount of risk it carries.
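As an illustration of the "adversarial example" idea described above, here is a minimal gradient-sign (FGSM-style) sketch. The article itself doesn't name a specific method, and a one-layer linear score stands in for a trained network, so treat this purely as a toy:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)            # "model" weights
x = rng.normal(size=16)            # a normal-looking input
score = w @ x                      # the model's raw score for x

# Nudge the input a small amount against the gradient of the score.
# For this linear model, d(score)/dx is simply w.
eps = 0.5
x_adv = x - eps * np.sign(w)
adv_score = w @ x_adv              # score drops by exactly eps * sum(|w|)

print(score, adv_score)
```

The perturbation is bounded by eps per coordinate, so x_adv still "resembles a normal input", yet the score moves in whatever direction the attacker chose; deep networks are attacked the same way, just with gradients computed by backpropagation.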
convert openCV image into PIL Image in Python (for use with Zbar library) I'm trying to use the Zbar library's QR code detection methods on images I extract with OpenCV's camera methods. Normally the QR code detection methods work with image files (jpg, png, etc.) on my computer, but I guess the captured frames of OpenCV are different. Is there a way of turning the captured frame into a PIL Image? Thank you. from PIL import Image import zbar import cv2.cv as cv capture = cv.CaptureFromCAM(1) imgSize = cv.GetSize(cv.QueryFrame(capture)) img = cv.QueryFrame(capture) #SOMETHING GOES HERE TO TURN FRAME INTO IMAGE img = img.convert('L') width, height = img.size scanner = zbar.ImageScanner() scanner.parse_config('enable') zbar_img = zbar.Image(width, height, 'Y800', img.tostring()) # scan the image for barcodes scanner.scan(zbar_img) for symbol in zbar_img: print symbol.data With the python cv2 module, you can also do this: import Image, cv2 cap = cv2.VideoCapture(0) # capture an image from a webcam _,cv2_im = cap.read() cv2_im = cv2.cvtColor(cv2_im,cv2.COLOR_BGR2RGB) pil_im = Image.fromarray(cv2_im) pil_im.show() I think I may have found the answer. I'll edit later with results. OpenCV to PIL Image import Image, cv cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 1) pi = Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring()) Source: http://opencv.willowgarage.com/documentation/python/cookbook.html So far I'm having some trouble in which the converted image is not really the image I captured. Hey, I had the same problem as you and this worked, but the actual accepted answer didn't. You should mark this as accepted. Are you trying to obtain an RGB image?
If that is the case, you need to change your parameters from this: cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 1) pi = Image.fromstring("L", cv.GetSize(cv_im), cv_im.tostring()) to this: cv_im = cv.CreateImage((320,200), cv.IPL_DEPTH_8U, 3) pi = Image.fromstring("RGB", cv.GetSize(cv_im), cv_im.tostring()) It is documented almost nowhere, but the 'L' parameter of Image.fromstring is for 8-bit B&W images. Besides, you need to change the last argument of your cv.CreateImage call from 1 (single-channel image) to 3 (3 channels = RGB). Hope it works for you. Cheers A simple way is to directly swap the channels. Suppose you are trying to convert a 3-channel image file between OpenCV format and PIL format. You can just use: img[...,[0,2]]=img[...,[2,0]] This way, you won't be bothered with cv2.cvtColor, as that function only works on images with certain depths.
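The channel-swap trick in the last answer is worth sanity-checking: for a 3-channel image, swapping channels 0 and 2 is exactly a BGR/RGB conversion, which is what cv2.cvtColor(..., cv2.COLOR_BGR2RGB) does. A NumPy-only check (no camera or cv2 needed), using a tiny fake image:

```python
import numpy as np

img = np.arange(24, dtype=np.uint8).reshape(2, 4, 3)  # tiny fake BGR image
rgb = img[..., ::-1].copy()                            # reference BGR -> RGB

swapped = img.copy()
swapped[..., [0, 2]] = swapped[..., [2, 0]]            # the in-place trick

print(np.array_equal(swapped, rgb))  # True
```

The fancy-indexed right-hand side is evaluated as a copy before assignment, which is why the simultaneous swap works without a temporary.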
In a previous post, I wrote about controlling my OWI robot arm with Elixir. Well, I decided to port that to Swift! My robot hobby is picking up steam. After going through the Python examples that came with the GoPiGo, I got inspired by some embedded Erlang videos on YouTube and decided to see if I could control the GoPiGo with Elixir. After watching Elixir Sips, I learned of a project called Elixir/Ale, an Elixir library for embedded programming. With Elixir/Ale, you can talk to the GPIO ports on the Raspberry Pi and to some common hardware bus protocols: I2C and SPI. I2C is used by the GoPiGo board, so Elixir/Ale looked like what I needed. So I looked up some Elixir docs, typed mix new exgopigo, and started coding. The result is ExGoPiGo. All it lets you do right now is turn the robot's two front LEDs on and off, but it does that by writing to the I2C bus. Controlling the motor and such shouldn't be too hard. I also ordered a USB kit for my robot arm. This lets you plug the arm into your computer. The software that comes with it only runs on Windows, but the Internet is a wonderful thing. Somebody reverse engineered the protocol and wrote a C program to talk to the arm via the USB port. Somebody else wrote a Mac OS X app using IOKit to talk to the arm via USB. There's also a Python project that uses libusb to talk to the arm. At first, it didn't seem like I could use Elixir to control it, but I found an article on Elixir's Ports and NIFs, which let Elixir talk to external code. Using that example as a base, I modified the C program to talk to the USB port like the one in the reverse-engineered-protocol article and created an Elixir module to talk to my new C program via a port. The result is ExRobotArm, which enables you to fully control the arm with Elixir. I tried it on both Mac OS X and my Raspberry Pi. Now it is totally plausible to attach the arm to whatever robot has a Raspberry Pi controlling it.
My compass and GPS module for the GoPiGo have arrived, so that's next. Then more work on ExGoPiGo. I've been getting into robots lately. For a long time I was searching for the perfect robot kit to use with my Arduino. Then I decided I'd rather use my Raspberry Pi to control my robots so I could use better programming languages like Clojure, Elixir, Scala, Ruby, etc. So I was looking around for the perfect Raspberry Pi robot kit. For Christmas, my parents gave me the OWI Robotic Arm Edge. For the longest time, I didn't like it. I wanted a moving robot and something I could program. Finally in April, I decided to pull the trigger on the GoPiGo, a Raspberry Pi robot kit. It was a little challenging to put the motors together at first, and one of the encoders broke. I asked for a new one and they sent it right out. I ran the examples, which simply let me remote-control the GoPiGo by ssh'ing into the Raspberry Pi on the robot. I had no sensors or anything else. My goal is to create an autonomous robot. No remote control. So I ordered an ultrasonic sensor and a Raspberry Pi camera and waited. It took probably a month before those all came in. In the meantime, I decided to put the robot arm together. 48 steps! Took me 2 days, but it wasn't that hard. I ended up with this. Then I went to our local Microcontroller meetup hosted by Make Salt Lake. There were lots of cool projects everybody was working on, and I talked a little about my GoPiGo. I expressed my desire to make it autonomous and also to have a battery that I could recharge. Somebody mentioned the iRobot Create and that got me thinking. A couple of weeks later, I ordered one, and a week later it showed up. Finally, my GoPiGo parts came and I added those to my robot. That was a little bit of a challenge as well. So here's the GoPiGo: I ran the examples demonstrating how to use the sensor and camera. They worked.
I decided to order a compass and a GPS module to add to the GoPiGo, because there are examples that use them. I want to make this thing as smart as possible. So lots of things to do. I want to:
- Control the robot arm wirelessly
- Make the GoPiGo roam around on its own
- Use Clojure to program the Roomba (iRobot Create) and see what I can do with that
Someday maybe I'll add a Kinect to it, or attach the robot arm to it. I don't know where all of this is going, but I'm just having fun exploring and seeing what I can do. I'll keep you informed.
Life After Git: Spitballing the Next Generation of Source Control October 12, 2016 / Ben DiFrancesco What kind of version control system will eventually replace Git? When I pose this question to fellow developers, I get one of two responses. Some are shocked by the idea that anything will ever replace Git. "Version control is a solved problem and Git is the solution!" Others imagine what I call Git++, a system that is essentially the same as Git, but with some of the common problems and annoyances resolved. Neither of these is likely to be the case. > Why Git Won To imagine what might come after Git, we have to remember what Git replaced and why. Git supplanted Subversion not by improving upon it, but by rethinking one of Subversion's core assumptions. Subversion assumed that revision control had to be centralized. How else could it work, after all? The result of this assumption was a tool where branching was discouraged, merging was painful, and forking was unimaginable. Git flipped this on its head. "Version control should be distributed! Anyone should be able to clone a repo, modify it locally, and propose the changes to others!" Because of this inversion, we got a tool where branching is trivial, merging is manageable, and forking is a feature. Git isn't an easy tool to grok at first exposure, but it gained widespread adoption in spite of this, specifically because of these characteristics. Git offered benefits to solo developers, in stark contrast to SVN, but it made teams orders of magnitude more productive. Cause and effect are hard to tease apart, but it's no coincidence that Git's adoption corresponded with a Cambrian explosion of mainstream open source projects. > What's Next? Whatever replaces Git, be it next year or next decade, will follow a similar path in doing so. It will not be a small improvement over the model Git already provides. Instead, it will succeed by rethinking one of Git's foundational principles.
In doing so, it will provide orders of magnitude greater productivity for its adopters. > Just Text? Git assumes code, and everything else, is just diffable text. Git is really good, and really really fast, at diffing text. This is what makes Git great, but it's also the assumption that gives a future system an opportunity. Code is not just text. Code is highly structured text, conforming to specific lexical grammars. Even in weakly, dynamically typed languages, there is a ton of information in that code a computer can know about statically. What would a version control system look like that knew something about your code, beyond just textual diffing? Git will happily check in a syntax error: what if it didn't? A commit in Git is just a commit, regardless of whether you added a code comment, tweaked a unit test, refactored a method name, added a small bit of functionality, or completely changed the behavior of your entire program. What if these kinds of changes were represented differently? Git doesn't know anything about your package manager or semantic versioning. Instead, you check in some kind of configuration file for your package manager of choice. That file defines dependencies and declares a version, and it's your job to keep these things in sync and to make judgement calls about version bumps. In any given language you may have 2 or 3 or 12 package managers to choose from and support, each with its own config file, which also has to be checked in and kept in sync. What if, instead, our version control system was also our package manager, and it understood and tracked our dependencies and their versions? What if, since it knew about the nature of the changes made to code, it automatically enforced actual distinctions between bugfix, patch, and breaking changes? Or maybe, if this is being analyzed and enforced by computers rather than humans, those distinctions become less meaningful. Imagine a library your app depended on released a new version.
In that version, they refactored a method name, but didn't make any changes to behavior. Our hypothetical next-gen system would provably know this. Is that still a breaking change? What if the system automatically refactored your code, updating to the new method name, in conjunction with the commit representing the update to the dependency? Sounds scary, but in reality, this is much safer than what we accept today. Sure, a library owner may have kept the public interface the same. "Not a breaking change!" Your code may still compile or run without a source change. "See, non-breaking!" But… if they've completely changed the method's behavior… you're still screwed, and nothing is enforcing that they didn't except social contract. > Stay Tuned These ideas just scratch the surface of what a more context-aware version control system could do, but as the title suggests, I'm just spitballing here. There are lots of challenges to implementing a system like this, many with non-obvious solutions that may take years to develop. Ultimately, though, you can bet on one thing being true: something radically different, and radically better, will eventually come along to replace Git.
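The "what if Git refused to check in a syntax error?" idea is trivial to prototype for a single language. Here is a hedged sketch, Python-only and using the standard library ast module; a real system would need a parser per language:

```python
import ast

def commit_ok(source: str) -> bool:
    """A syntax-aware pre-commit gate: reject a change that doesn't even parse."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(commit_ok("def f(x):\n    return x + 1\n"))  # True
print(commit_ok("def f(x:\n    return x + 1\n"))   # False: unclosed paren
```

The same parse tree could feed the richer ideas in the post: diffing ASTs instead of lines would let the tool distinguish a renamed method from changed behavior.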
This predicate picks one domain variable in Vars based on some selection criterion. The selected entry is returned in X. Vars is either a collection of domain variables/terms containing domain variables, or a handle representing domain variables returned in Handle from a previous call to select_var/5. This predicate provides similar functionality to delete/5 of gfd_search, and is designed to be used in a similar way -- the selection is done on the variables represented in Vars, while Handle is then passed as the next Vars argument for variable selection in the next call to select_var/5, as is done with Rest for delete/5. select_var/5 can thus be used as a replacement for delete/5 in gfd_search:search/6. The main difference with delete/5 is that Handle is a low-level representation of all the domain variables in Vars, rather than the rest of the domain variables with the selected variable removed as in Rest for delete/5. This allows select_var/5 to be used for both the 'indomain' style labelling (where a selected variable is labelled to different values on backtracking), or the more Gecode-like labelling (where variable selection and a binary value choice is performed for each labelling step). Unlike delete/5, a domain variable that is instantiated will not be selected, and the search is complete when select_var/5 fails because all the domain variables in Vars are instantiated. When select_var/5 is called with Vars being a collection, the domain variables in the collection are extracted according to Arg in the same way as delete/5, i.e. the Arg'th argument of each element in the collection is the domain variable for that element. In addition to creating the low-level handle representation of the domain variables in Handle, additional initialisation is done for some selection methods that have initialisation parameters (i.e. those involving weighted degree or activity).
When select_var/5 is called with Vars being a handle created from a previous call to select_var/5, then Arg and any initialisation parameters given with Select are ignored. Select is one of the following predefined selection methods: input_order, occurrence, anti_occurrence, smallest, largest, smallest_upb, largest_lwb, first_fail, anti_first_fail, most_constrained, most_constrained_per_value, least_constrained_per_value, max_regret, max_regret_lwb, min_regret_lwb, max_regret_upb, max_weighted_degree, min_weighted_degree, max_weighted_degree_per_value, min_weighted_degree_per_value, max_activity, min_activity, max_activity_per_value, min_activity_per_value. These are essentially the same selection methods supported when using Gecode's search engine (search/6), except for random, which is not supported here. For methods that use activity or weighted degree, Select can include an optional argument in the form of a list, where each list item is a parameter setting. If a parameter is not specified in the list, the default setting for the parameter will be used. These parameters are: For weighted degree:

    % Simple labelling implemented using select_var/5 and indomain/2
    labelling1(Vars, Select, Choice) :-
        ( select_var(V, Vars, Rest, 0, Select) ->
            indomain(V, Choice),
            labelling1(Rest, Select, Choice)
        ;
            true
        ).

    % Variant using select_var/5 and try_value/2
    labelling2(Vars, Select, Choice) :-
        ( select_var(V, Vars, Rest, 0, Select) ->
            try_value(V, Choice),
            labelling2(Rest, Select, Choice)
        ;
            true
        ).

    % A call to max_activity with parameters
    select_var(V, Vars, Rest, 0, max_activity([init(degree), decay(0.9)])),
Checking whether UI_USER_INTERFACE_IDIOM exists at runtime

I am working on a universal app that should be able to run on iPad and iPhone. The Apple iPad docs say to use UI_USER_INTERFACE_IDIOM() to check if I am running on iPad or iPhone, but our iPhone is 3.1.2 and will not have UI_USER_INTERFACE_IDIOM() defined. As such, this code breaks:

    //iPhone should not be flipped upside down. iPad can have any
    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            return YES; //are we on an iPad?
        } else {
            return interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown;
        }
    }

In Apple's SDK Compatibility Guide they suggest doing the following to check if the function exists:

    //iPhone should not be flipped upside down. iPad can have any
    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        if (UI_USER_INTERFACE_IDIOM() != NULL && UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            return YES; //are we on an iPad?
        } else {
            return interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown;
        }
    }

This works, but results in the compiler warning: "Comparison between pointer and integer." After digging around I figured out that I can make the compiler warning disappear with the following cast to (void *):

    //iPhone should not be flipped upside down. iPad can have any
    - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
        if ((void *)UI_USER_INTERFACE_IDIOM() != NULL && UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
            return YES; //are we on an iPad?
        } else {
            return interfaceOrientation != UIInterfaceOrientationPortraitUpsideDown;
        }
    }

My question is this: Is the last code block here okay/acceptable/standard practice? I couldn't find anyone else doing something like this with quick searching, which makes me wonder if I missed a gotcha or something similar. Thanks.
UI_USER_INTERFACE_IDIOM is a compile-time macro. It doesn't "exist" at runtime. That doesn't make this question downvote-worthy. You need to build apps for iPad against the 3.2 SDK. As such it will build correctly, and the UI_USER_INTERFACE_IDIOM() macro will still work. If you want to know how/why, look it up in the docs -- it is a #define, which will be understood by the compiler and compile into code that will run correctly on 3.1 (etc). Okay, yeah, that worked. I figured out what happened: I was initially setting up the project to build two different apps and then switched over to using a universal app. The code in the first block was for the old way, and I apparently never ran it when building against 3.2. Thanks for the help!
Should null ever be used? I've recently read a few books on clean code and refactoring, and especially the former tend to advise the reader to neither return null, nor to pass it into any function (foreign, (for you) immutable code, such as the official libraries or external frameworks, excluded). See for example Robert C. Martin - Clean Code: A Handbook of Agile Software Craftsmanship, pages 110-112 (6th printing). From my experience, this generally makes sense. Instead of null, you can usually either throw an exception, return an empty list or array, or use some creative solution to avoid the possibility of a NullPointerException. Granted, you might then have to catch your own exceptions - but that's still more expressive than a generic NullPointerException. I then thought about whether you should maybe avoid null entirely - i.e. not just in function calls or return statements, but everywhere. And from what I can tell, this should be both possible and reasonable. However - still being a student - I am not entirely sure if that assumption / guideline is always correct. Therefore my question: Should you always try to avoid using null at any cost? Or are there cases where using it would be the more practical solution, despite the risk of NullPointerExceptions? Ignoring cases where you have to deal with foreign code that you can't influence, such as the official libraries. I can see why you would vote to close this as "primarily opinion-based". However, a simple counter-example would be sufficient to answer the question with "No". Just because a question cannot be backed up with a link to the oracle docs doesn't mean it's not a valid question. "External APIs excluded"-- why exclude external APIs? What is different between an external API and an internal one? (Hint: nothing.) If it's ok for an external API, it's ok for an internal one. null should generally be avoided, but not "at any cost." That is clearly excessive. 
the difference is that, if an external API returns null, then I can't do **** about it. I can, however, avoid using null in my own code. No, that's not right. If it's fine for an external API to use it, then it's OK for an internal one. There's no real difference. @markspace well, it's not okay for an external API to use it. I just can't do anything about it, so if the framework method I'm using returns null, I'll have to deal with it. You might want to check whether questions about best practices are on topic on [softwareengineering.se], and ask there instead if it is. @Dukeling valid point - I wasn't aware of that stack exchange site (there's just too many :D). Can questions be moved to other stack exchange sites? But that's wrong. It IS OK for an API to use null values, that's why so many do so. Your attempts to justify your position -- that use of null is wrong -- are unreasonable and clearly contradicted by current practice. @PixelMaster You can "migrate" questions by flagging (assuming whichever mod looks at it agrees that it would be on topic there). @markspace sure, many people use null. Many people also use comments instead of small methods and readable names, or they have duplicate code, no tests or at least bad tests, and name their variables a, b or c. If people didn't do these things, there would be no reason for literature about clean code to even exist. @PixelMaster What's wrong with comments? There's theory, and then there's experience with real, complex systems and their constraints. Regarding null, there's a broad range of situations where you need to return null and let the client decide what they should do with your return value, be it throwing an exception or some specific processing or... @MehdiB. comments are not wrong per se. However, in many cases comments are used in place of good method names, which makes them redundant and reduces the readability of the code.
I recommend reading literature about clean code, which discusses this topic more in-depth than would be appropriate for this site. In general you can and should avoid null; however, if avoiding null makes the code harder to maintain and read, you should probably just use it. There are a few rare instances, particularly in Android programming, where null can be used to denote that an object has yet to be initialized or an action hasn't been taken by the user to set it, and some default action needs to be performed; this seems cleaner than throwing in a bunch of boolean flags that aren't directly linked to the object or writing a more complex solution. @Dukeling over there it would likely be closed as a duplicate of Are null references really a bad thing? or one of the dozen questions linked to it (and probably additionally voted down for the lack of research effort) @PixelMaster I just came across this coincidentally https://en.wikipedia.org/wiki/Tony_Hoare#Apologies_and_retractions :D I think generally you should probably avoid using it, but I think it could have some uses for memory management. For example, you may want to set some short-lived objects to null when you are done with them to signal to the garbage collector that you want that memory cleaned up. Setting a field to null to make the object eligible for collection by the GC sooner is often a misleading approach that can be broken by code changes. Reducing the scope of the created objects by refactoring the code (splitting the processing into two methods, for example) is very often a better approach. Besides, JVM optimizations may also detect that a local variable is no longer used in subsequent statements and may thus make it eligible for collection. @davidxxx I think under most circumstances I would agree with you. I could see, however, that in some situations refactoring the code into another method may not be ideal, depending on the situation.
@davidxxx overall I think the takeaway from this answer is that the null reference should be avoided in 99% of the situations you come across; however, there may be some edge cases that permit it. It is more opinion-based, but since the language specification allows for the usage of null, and given the existence of native methods returning null, e.g. HashMap#get(Object), it is not feasible to avoid it entirely. as mentioned - "external APIs excluded". I counted the official libraries into those, but I will edit my question to make that clearer. @PixelMaster Regardless of whether you include the standard API in your question, it makes use of nulls, so obviously it's not such a bad idea (at least in the opinion of the API designers at the time). @Dukeling Java is over 20 years old. Just because something seemed like a good idea decades ago doesn't mean it still is. Modern IDEs and libraries provide way more functionality than 20 years ago. Sometimes it is used to initialize mutable objects... yes, it is. But should it be? That's the question. Yes, it's not mandatory, as every object reference is initially null; you are right, but you can use it just to maintain standards and to increase the readability of the code.
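As the question itself suggests, the usual alternatives to returning null are an empty collection or an explicit "maybe" type such as java.util.Optional. A minimal sketch in Java (the class and method names here are made up for illustration, not from any library under discussion):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

class UserRepository {
    private final Map<String, String> emails = new HashMap<>();

    void add(String user, String email) {
        emails.put(user, email);
    }

    // Instead of returning null for an unknown user, return an Optional:
    // the caller is forced to handle the absent case explicitly.
    Optional<String> findEmail(String user) {
        return Optional.ofNullable(emails.get(user));
    }

    // Instead of returning null for "no results", return an empty list.
    List<String> allEmails() {
        return new ArrayList<>(emails.values());
    }
}
```

With this shape, a caller writes `repo.findEmail("bob").orElse("unknown")` rather than a null check, and iterating over `allEmails()` is always safe, so the NullPointerException risk the thread worries about never arises at the call site.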
Performance with large number of Collapse in v4.1

I have a page with 6 collapsibles on it, each one with more than 20 in them, so I end up with about 140 collapsibles in total. The idea is to click on one of the outside collapsibles to see an overview list of collapsible items, then click on one of the inner ones to see details of that item. Unfortunately the performance is very bad: opening one the first time takes about 10 seconds! I dug down to the constructor of the class Collapse:

    var tabToggles = $$$1(Selector.DATA_TOGGLE);
    for (var i = 0; i < tabToggles.length; i++) {
      var elem = tabToggles[i];
      var selector = Util.getSelectorFromElement(elem);
      if (selector !== null && $$$1(selector).filter(element).length > 0) {
        this._selector = selector;
        this._triggerArray.push(elem);
      }
    }

Nearly all time is spent in the for loop. tabToggles is an array of all 140 elements with [data-target="collapse"]. First question: why do I need all of them when I click on one? Then for each one it will get the selector and filter the list of all matching elements against the element parameter. And that is where all the time is spent, almost equally distributed between getSelectorFromElement and $$$1(selector).filter(element). getSelectorFromElement will select a list of matching elements and return either the selector, if some elements were found, or null if not. And the next statement in the code will do the same, then apply the filter. Second question: Could we save time if we had a function like getSelectorFromElement which could do the filter(element), too? I tried to pass the element to getSelectorFromElement and, if present, do the filtering there. This cut the time in half. But still, very slow. Can you provide to us a CodePen? And you can try that:

    var tabToggles = $.makeArray(document.querySelectorAll(Selector.DATA_TOGGLE))
      .filter(function (elem) {
        return !!elem.getAttribute('data-target')
      })

I'm trying to cut it down on JSfiddle.
So far the (almost) original data are at https://jsfiddle.net/chtheis/8mxntvLj/ An edited version, where I try to simulate the number of collapsibles and elements, is at https://jsfiddle.net/chtheis/zou2st3e/1/ On the original, click on a time and be patient. A table with around 20 matches will open; each match is a collapsible. Click on a match and after some time match details are shown. I have other data where the matches don't have details, and there opening a time is fast(er). I think the selectors are slow. Calling them 1 time is still OK, but calling them many times in a loop makes things bad. And for your answer: I understand. And each data-toggle could have a specific data-target which, in the end, could include the same element. If so, maybe another type of optimization would be to hint whether this element is supposed to be the only one or whether multi-target could apply. And I tried var tabToggles = $.makeArray(document.querySelectorAll(Selector.DATA_TOGGLE)) but it didn't make a difference. I guess the problem is not in the one-time call outside the loop but in the many calls inside. Yep, maybe we should add some options to know if a collapse is multi-target or not 🤔 BTW I'll push some modification in Vanilla JS, maybe it'll be faster On 30.04.2018 at 16:30, Johann-S wrote: BTW I'll push some modification in Vanilla JS, maybe it'll be faster I'll try them. But note, we are not talking about 10% faster but at least 10 times faster :) It is much faster :) But the mistake was on my side. I used a target like data-webgen-match=137 which worked in your old version, but the new one discovered that this is invalid syntax and it should instead read data-webgen-match="137" I updated my small Fiddle In the beginning of createInner you see both syntaxes, the wrong one and the correct one. I don't know (yet) why the wrong syntax slows things down so much; are there exceptions thrown and caught?
Update: I found out that without the quotes it is not a valid CSS selector, and thus jQuery will use its own selector engine (which does support it) instead of the native browser API. There is room for improvement in util.getSelectorFromElement(). Look at this line:

    const $selector = $(document).find(selector)

Without $(document) being cached between calls of the function, in essence jQuery() is called twice. And even if $(document) is being cached, the results of find() are not inspiring. With ideas and loops from the following old conversation at https://stackoverflow.com/questions/1854859/jquery-performance-wise-what-is-faster-getelementbyid-or-jquery-selector and the code provided in this thread to build the collapsibles, we can see some interesting stats. The loop is like this in my environment:

    var j = 0;
    for (i = 0; i < 1000000; i++) {
      if ($(document).find('#outer-9').length > 0) {
        j++;
      }
    }

This takes 758 milliseconds. Introducing caching:

    var doc = $(document);
    ...
    doc.find('#outer-9').length > 0

gets down to 562 milliseconds. Can we do better? Yes. The next 2 ways are probably not feasible, as they depend on the data-target being an ID:

    $('#outer-9').length > 0

takes 282 milliseconds. Not bad, but can we do better in jQuery? Yes. Passing jQuery a (DOM) Element is almost twice as fast as passing a string:

    $(document.getElementById('outer-9')).length > 0

This takes 146 milliseconds. If speed is king then that code right above is a good starting point. To keep the ability to use other lookup methods, the following seems so far the best compromise:

    $(document.querySelector('#outer-9')).length > 0

This takes 214 milliseconds. Overall the best solution IMO. More than 3 times faster than it is right now. If things have to rely on

    $(document.querySelectorAll('#outer-9')).length > 0

however, that takes 1222 milliseconds. Not good. However, should collapsible have multiple targets?
So template developers might want to prefer using ids, and Bootstrap changing that line in util.getSelectorFromElement() to

    const $selector = $(document.querySelector(selector))

might be an improvement, as it is compatible down to IE8. And if there are similar uses throughout Bootstrap, even more might be gained. Fixed by: #26422
Walmart worker trucks out TVs for a Black Friday sale. (GANNETT NEWS) - We've been working on Black Friday since June, and this year, with the help of our friends at DealGuppy.com, we've gathered and verified Black Friday Stock Lists for 30 major retailers. The lists showcase which products will be on sale Black Friday, and the products marked (Early Bird) are limited quantity items. In some cases this year, items on "sale" are not even reduced, which you'll find on the Lowe's stock list. Please keep in mind that stock of the hottest items will sell out extremely quickly, may vary by location, and stores sometimes make last-minute adjustments. Unlike those circulating ad-scans, every price point is sorted for you and hand-checked, as we detail the deals by the thousands. A reminder: we'll have every single big Black Friday Deal online, complete with additional savings and built-in coupons, right here beginning at 12:01 a.m. Black Friday, so more will be added. Also, this list does include some stores not in West Michigan, but we know that some of you will be traveling to other areas for the holiday weekend. Plan your Black Friday shopping now by clicking the store names below and downloading the detailed stock lists. Ace Hardware - http://bit.ly/18rC5ig A.C. Moore - http://bit.ly/1c76P7n Bass Pro - http://bit.ly/18TGKpN Belk - http://bit.ly/18bAcby Best Buy - http://bit.ly/I61BzE Best Buy Map - http://bit.ly/1ejJe4E BJ's Wholesale - http://bit.ly/1h5AKCE BonTon - http://bit.ly/1aUchrl Cabela's - http://bit.ly/1aUchHV CVS - http://bit.ly/1fnIkYO Dick's Sporting Goods - http://bit.ly/1bGa8kf Dick's Sporting Goods Map - http://bit.ly/1i8mHO0 Dollar General - http://bit.ly/1c77c1M Fred Meyer - http://bit.ly/I61Qe5 Game Stop - http://bit.ly/18TH2wX Gander Mountain - http://bit.ly/I2m1t9 Gordmans - http://bit.ly/I7oFOP Harbor Freight - http://bit.ly/1baECzR h.h. gregg - http://bit.ly/18bAD5I J.C.
Penney - http://bit.ly/1ayN1Lu Jo-Ann Fabrics - http://bit.ly/I2maN2 Just Deals - http://bit.ly/IiGolv Kmart - http://bit.ly/I2mdsl Kmart Map - http://bit.ly/Icxruq Kohl's - http://bit.ly/1ayN7Tq Lowe's - http://bit.ly/1aUctqC Macy's - http://bit.ly/I7oP92 Meijer - http://bit.ly/1c77IwR Sam's Club Map - http://bit.ly/18j3FQZ Sears - http://bit.ly/18bAHCC Sears Map - http://bit.ly/1aSxmpz Staples Map - http://bit.ly/1aSxp4J Target - http://bit.ly/IiGrOg Target Map - http://bit.ly/1iK8NiU Walgreens - http://bit.ly/19LyWG8 Walmart - http://bit.ly/17RlHnL Walmart Map - http://bit.ly/17TNZDb
Getting Started with the Research Notepad

Use this guide to learn more about the updated Research Notepad feature and how to use it to facilitate your research and send useful information to your peers. The Research Notepad feature allows you to save documents, searches and reports (referred to as bookmarks) to research topics – essentially a collection of material relevant to you. Each research topic can be downloaded or shared with your peers so they can have access to the same collection of documents.

Viewing all topics

From the main navigation area, you will see an option for "Research Notepad". If you click this, you will be brought to a page where you can either create your first research notepad topic, or browse through all of your existing topics. Use the search filter if you have many topics, to quickly find the exact topic you are looking for. See "Searching, sorting and filtering within a notepad topic" for more. If your research topic has more than one bookmark (document/search/report) inside of it, you will be able to click on the title of the topic to view all of the entries saved within that topic. From the main research topics page or within your specific topic, you'll be able to edit your topic name by clicking the pencil icon beside the name.

Viewing a topic

Once within your research topic, you will be able to see all of the saved bookmarks that you have added to the topic. If you are looking for a specific bookmark, you can use the search field within the topic to find it. Use the pencil icons beside the topic name and description to modify the title and description of this topic. Comments added on individual bookmarks can also be modified in this same way: simply scroll to the bookmark you would like to edit, and click on the pencil icon beside the label "comments".
Depending on the types of bookmarks you have added, you will notice there are two main types of bookmarks:

- Entire Documents: These are for when you have saved a whole document/report/search rather than just a specific portion of a document.
- Pinpoint Documents: These are for when you have saved a specific excerpt from the document. You will notice these types have an additional area on the bookmark card to view the excerpt of text that corresponds to the saved excerpt.

If you would like to view the document/report, simply select the citation, which will bring you to the document view where you can view all relevant excerpts across the different research tools in the document. See "Overview of Dispute Document view" to learn more. To help you categorize your different bookmarks, you will see a label at the top of each card that says which research tool the document was added from. To remove a bookmark from a topic, simply select the "x" icon at the top right corner. To download a topic, select the "Download" button at the top of the bookmarks list. If you have Pinpoint reference type bookmarks, you will be presented with an option to select how you would like to download those bookmarks. If you select "Excerpts Only", this will download just the excerpts that you've saved to your bookmark. If you select "Entire Document", you will download the whole document for those excerpts that you've added. Once you've made your selection, your download will initiate.
Nowadays the number of road accidents is increasing. Whenever a vehicle accident occurs on the road, there is a high possibility of a traffic jam. In such cases, if we could move that accidental vehicle off the road, we could easily avoid the traffic. To make this possible, we can build an accidental vehicle lifting robot using an embedded system. You can build its prototype using a small vehicle, which you can easily find in a toy store.

Accidental Vehicle Lifting Robot

Working of Accidental Vehicle Lifting Robot

The working of this project is based on a microcontroller which is connected to a motor to move the accidental vehicle. The microcontroller is also connected to a chain-based rod to lift the vehicle. The complete circuitry of this project contains a microcontroller board with a robotic platform, a keypad, and electric motors for moving the vehicle and lifting the chain-based rod. These motors are moved according to the instructions given through keys connected to the microcontroller.

Motor for lifting
12 V battery
RIDE/KEIL to write code
ISP to burn the chip

Advantages of Accidental Vehicle Lifting Robot: Low power consumption

I hope you liked this project idea. Please like our facebook page and subscribe to our newsletter for upcoming projects. If you have any queries, feel free to ask in the comment section below. Have a nice day!

Hi friends, in a previous article we saw a Zigbee and GPS project which tracks a vehicle. Today we will build another innovative electronic project which will send an SMS from a No Signal Area. There are many locations where we get poor range or completely no range. So using this embedded system we can send an SMS from such locations. The only condition we need here is that we should have a mobile network at the receiving end of the Zigbee module. This is a low-cost and highly innovative project. You can build such projects for your final year engineering submissions also.
Sending SMS from No Signal Area

The main objective of this microcontroller project is to send an SMS from a No Signal area, also known as a Black Spot area, using Zigbee and a GSM module.

8051 family development board
RIDE to write code
ISP to burn the chip

Zigbee Transmitter Block Diagram
Zigbee Receiver Block Diagram

As already stated, this project is useful for creating a signal; using the GSM module we can send an SMS through that signal to the destination. In this project we are using two different frequencies: Zigbee has a frequency of 2.4 GHz and GSM has a frequency of 1800 MHz. The main circuitry of this project contains two embedded development boards. One contains a Zigbee module and a keypad, and the other contains a Zigbee module and a GSM module. We need to place the first board in the No Signal (Black Spot) area. The other development board, which contains the Zigbee receiver and the GSM module, is kept in an area where there is mobile network coverage. When you type a message using the keypad and hit enter from the No Signal area, the Zigbee transmitter will send a signal with the message to the receiver end. The receiver end of Zigbee also has a GSM module which will send that SMS to the destination mobile. Watch this Video: I hope you liked this project. Please share it with your friends and like our facebook page for future updates. If you have any queries, please feel free to ask in the comment section below. Have a nice day!

How to create and burn a HEX file for the 8051 microcontroller in Keil: Hello friends, today I am going to tell you how to create a microcontroller program file (.HEX format) and how to burn the HEX program file into our 8051 microcontroller for any desired project. Let me tell you one thing: in this tutorial I am not going to teach you C programming; I am just telling you how to create and burn a .hex program file into an 8051 microcontroller (assuming that you have the C program with you). For programming a microcontroller we are going to use one of the best pieces of microcontroller programming software, called "Keil".
Using this software you can compile your 'C' program and check whether there are any errors in your program or not. After removing all errors (if any), you can create the program file, also known as a .hex file, which we are going to use for programming our microcontroller. So let us learn how to create a hex file for the 8051 microcontroller using the Keil software, step by step. (Before proceeding to our main tutorial, make sure that you have your C program file (.c format or in a word document), which we are going to convert into a .hex file using the 'Keil' software.)

Step 1: Download 'Keil uVision3': Click here to download the Keil uVision software. (After downloading, install it on your computer.)

Step 2: Open the Keil software; you will see the following window.

Step 3: Now be ready for your first microcontroller project using the Keil software. We create our new project using the following steps: Click on 'Project' then 'New project'. A new window will appear on the screen (Create New Project). Simply type your project name (in my case it is 'my first keil project') and click 'Save'. When you click on the save button, a new window will appear (Select Device for Target 'Target 1'); here we are required to tell which microcontroller we are going to use. (For example, if we are using the famous 8051 family or AT89C51, then double click on 'Atmel'; here you will see all the microcontrollers made by 'Atmel'. Click on the one (in my case it is AT89C51) which you are going to program.) Then click on OK. After that, another window will appear asking "Copy Standard 8051 Startup Code to Project Folder and Add File to Project?" Click on 'Yes'. If you observe the 'Project workspace', which is located at the left side, you will see the 'STARTUP.A51' file is there. It is the file which contains the assembly language startup commands of the 8051 microcontroller.

Step 4: Now we are required to configure the option values of our microcontroller project.
To do this, click on 'Project' then "Options for Target 'Target 1'". Select the 'Target' tab to configure the MCU target values: configure the X-TAL to be 12 MHz (which is initially 24 MHz). Select the 'Output' tab and click on the checkbox "Create HEX file". Click OK.

Step 5: Now we are ready to write our first C program. Click on 'File' and then click on 'New'. A new window will appear in which we are going to write our C program. If you already have one, simply paste it into this window. After completing your C program, click on 'File' and then 'Save' (shortcut 'Ctrl+S'). We are required to save this file with the extension '.c'. Don't forget to write .c after the name of the C program. The figure is shown below: To add files into the project file, click the command Project → Components, Environment, Books…, select the tab 'Project Components' and then select the desired 'Add Files to add into Project File'. The first time, we must set 'Files of type' to "C Source files (*.c)" and it will display the file names that are C language source code. Click the icon of the file named "my first keil project.c" and then click Add, then Close, then OK. Now, if the 'my first keil project.c' file is present in the Project workspace, which is at the upper left of the screen, you are on your way!

Step 6: Now this is the last step of this tutorial. Here in the last step we are going to check whether everything is fine, without errors, or not. We are checking our C program and converting it into a hex file. To do this, click on 'Project' and then click on 'Rebuild all target files' (there is also a shortcut for this command at the upper left). When you click on this button, you will see that your program is being compiled. If there is a message like ""my first keil project" – 0 Error(s), 0 Warning(s)", it means you don't have any errors in your program and you can use its HEX file for your microcontroller. Now close the software and open the directory where you saved your project. Generally it is in (C:\Keil\C51\Examples\……).
There you will find one file in .hex format; this is your program file. You can burn this program into your microcontroller using a microcontroller kit. tags: how to program an 8051 microcontroller. how to write a program for an 8051 microcontroller. Keil – microcontroller programming software. step-by-step tutorial for programming a microcontroller. How to burn a program into an 8051 microcontroller. How to create a hex file in Keil software for the 8051 microcontroller. User-input-based seven segment display using the AT89C51 microcontroller: This is a very simple microcontroller project with ten push buttons, 0 to 9, which displays the corresponding number on a seven segment display. For example, if we press the last, i.e. tenth, button it will display 9. The following figure shows the circuit diagram of the user-input-based seven segment display using the AT89C51 microcontroller. Click on the following button to download the .C and .HEX files for this project.
OPCFW_CODE
namespace RI.Framework.Utilities
{
    /// <summary>
    /// Provides utility/extension methods for the <see cref="double" /> type.
    /// </summary>
    /// <threadsafety static="false" instance="false" />
    public static class DoubleExtensions
    {
        #region Static Methods

        /// <summary>
        /// Gets the number or the default value (0.0) if a double precision floating point number is "NaN"/"Not-a-Number" or infinity (positive or negative).
        /// </summary>
        /// <param name="value"> The double precision floating point number. </param>
        /// <returns>
        /// Zero if the number is "NaN"/"Not-a-Number" or infinity (either positive or negative), <paramref name="value" /> otherwise.
        /// </returns>
        public static double GetValueOrDefault (this double value)
        {
            return value.IsNanOrInfinity() ? 0.0 : value;
        }

        /// <summary>
        /// Gets the number or a specified default value if a double precision floating point number is "NaN"/"Not-a-Number" or infinity (positive or negative).
        /// </summary>
        /// <param name="value"> The double precision floating point number. </param>
        /// <param name="valueIfNanOrInfinity"> The value to return when <paramref name="value" /> is "NaN"/"Not-a-Number" or infinity. </param>
        /// <returns>
        /// <paramref name="valueIfNanOrInfinity" /> if the number is "NaN"/"Not-a-Number" or infinity (either positive or negative), <paramref name="value" /> otherwise.
        /// </returns>
        public static double GetValueOrDefault (this double value, double valueIfNanOrInfinity)
        {
            return value.IsNanOrInfinity() ? valueIfNanOrInfinity : value;
        }

        /// <summary>
        /// Determines whether a double precision floating point number is infinity (positive or negative).
        /// </summary>
        /// <param name="value"> The double precision floating point number. </param>
        /// <returns>
        /// true if the number is infinity (either positive or negative), false otherwise.
        /// </returns>
        public static bool IsInfinity (this double value)
        {
            return double.IsInfinity(value);
        }

        /// <summary>
        /// Determines whether a double precision floating point number is "NaN"/"Not-a-Number".
        /// </summary>
        /// <param name="value"> The double precision floating point number. </param>
        /// <returns>
        /// true if the number is "NaN"/"Not-a-Number", false otherwise.
        /// </returns>
        public static bool IsNan (this double value)
        {
            return double.IsNaN(value);
        }

        /// <summary>
        /// Determines whether a double precision floating point number is "NaN"/"Not-a-Number" or infinity (positive or negative).
        /// </summary>
        /// <param name="value"> The double precision floating point number. </param>
        /// <returns>
        /// true if the number is "NaN"/"Not-a-Number" or infinity (either positive or negative), false otherwise.
        /// </returns>
        public static bool IsNanOrInfinity (this double value)
        {
            return double.IsNaN(value) || double.IsInfinity(value);
        }

        /// <summary>
        /// Determines whether a double precision floating point number is neither "NaN"/"Not-a-Number" nor infinity (positive or negative).
        /// </summary>
        /// <param name="value"> The double precision floating point number. </param>
        /// <returns>
        /// true if the number is neither "NaN"/"Not-a-Number" nor infinity (either positive or negative) but rather a real number, false otherwise.
        /// </returns>
        public static bool IsNumber (this double value)
        {
            return (!double.IsNaN(value)) && (!double.IsInfinity(value));
        }

        #endregion
    }
}
STACK_EDU
Insert, update, and delete records from a table using Access SQL: To remove all the records from a table, use the DELETE statement and specify the table or tables from which you want to delete all the records. DELETE FROM tblInvoices In most cases, you will want to qualify the DELETE statement with a WHERE clause to limit the number of records removed. Access VBA: delete table records with SQL using DoCmd: In order to delete the record (the whole row) for Apple, create a new query and add the Student table. Under the Design tab, click on the Delete button. This will create a Delete Query. Add Student ID to the field, then type “001” in the criteria, which is the student ID of Apple. To preview the result of the Delete Query (which records will be deleted), click on the View button under Design. Daniel Pineault is the owner of CARDA Consultants Inc. For 10+ years, this firm has specialized mainly in the development of custom IT solutions for business, ranging from databases, automated workbooks and documents, to websites and web applications. He is a regular contributor to many forums including Experts-Exchange, UtterAccess, Microsoft Answers and Microsoft MSDN, where he helps countless people. Access VBA: delete table using the DoCmd.DeleteObject method: In Access VBA, deleting a table can be done with the DoCmd.DeleteObject method. It is extremely simple and straightforward; the syntax is as below. In order to delete a table, use acTable in the ObjectType argument. DoCmd.DeleteObject (ObjectType, ObjectName) Solved: VBA - delete all records in table | Experts Exchange: (4) In the 'Add Table' dialog, double-click on the table from which you want to delete all records, then hit Close. 
You are now in Query Design, with your table in the upper section. (5) Double-click on the asterisk ( * ) and watch it drop down to the lower section. (* means all records.) Delete Record, Access VBA Recordset - VBA and VB.Net: Delete Record.Zip; See also: Read Data from Table, Access VBA; Append Data to Table, Access VBA; Microsoft MSDN: Delete Method (ADO Recordset); #Deleted Text in Tables, VBA Access; Recordset.AddNew Method VBA Access. If you need assistance with your code, or you are looking for a VBA programmer to hire, feel free to contact me. Command button to delete all records - Microsoft Community: Thanks for your response. It worked great. I decided I needed more. I have a text box on the form called "Tax Year" and it contains a 4-digit year. The table that I'm working with has a date field called "add date". I would like to delete only the records whose date's year matches the text box Tax Year. DELETE statement (Microsoft Access SQL) | Microsoft Docs: If you delete the table, however, the structure is lost. In contrast, when you use DELETE, only the data is deleted; the table structure and all of the table properties, such as field attributes and indexes, remain intact. You can use DELETE to remove records from tables that are in a one-to-many relationship with other tables.
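For the "Tax Year" question quoted above, one common approach is a qualified DELETE run from the button's click event. This is a hedged sketch only: the table name [MyTable] and the button name are hypothetical; the [Tax Year] control and [add date] field come from the question.

```vba
' Sketch only -- adjust [MyTable] and the control/field names to your database.
Private Sub cmdDeleteByYear_Click()
    Dim strSQL As String
    strSQL = "DELETE FROM [MyTable] " & _
             "WHERE Year([add date]) = " & Me![Tax Year]
    CurrentDb.Execute strSQL, dbFailOnError
End Sub
```

Using CurrentDb.Execute with dbFailOnError raises an error if the delete fails, instead of failing silently.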
OPCFW_CODE
# encoding: utf-8
"""
@author: BrikerMan
@contact: eliyar917@gmail.com
@blog: https://eliyar.biz
@version: 1.0
@license: Apache Licence
@file: macros.py
@time: 2019-05-17 11:38
"""
import os
import logging
from pathlib import Path

import tensorflow as tf

DATA_PATH = os.path.join(str(Path.home()), '.kashgari')
Path(DATA_PATH).mkdir(exist_ok=True, parents=True)


class TaskType(object):
    CLASSIFICATION = 'classification'
    LABELING = 'labeling'


class Config(object):
    def __init__(self):
        self._use_cudnn_cell = False
        self.disable_auto_summary = False
        if tf.test.is_gpu_available(cuda_only=True):
            logging.warning("CUDA GPU available, you can set `kashgari.config.use_cudnn_cell = True` "
                            "to use CuDNNCell. This will speed up the training, "
                            "but will make model incompatible with CPU device.")

    @property
    def use_cudnn_cell(self):
        return self._use_cudnn_cell

    @use_cudnn_cell.setter
    def use_cudnn_cell(self, value):
        self._use_cudnn_cell = value
        from kashgari.layers import L
        if value:
            if tf.test.is_gpu_available(cuda_only=True):
                L.LSTM = tf.compat.v1.keras.layers.CuDNNLSTM
                L.GRU = tf.compat.v1.keras.layers.CuDNNGRU
                logging.warning("CuDNN enabled, this will speed up the training, "
                                "but will make model incompatible with CPU device.")
            else:
                logging.warning("Unable to use CuDNN cell, no GPU available.")
        else:
            L.LSTM = tf.keras.layers.LSTM
            L.GRU = tf.keras.layers.GRU

    def to_dict(self):
        return {
            'use_cudnn_cell': self.use_cudnn_cell
        }


config = Config()

if __name__ == "__main__":
    print("Hello world")
STACK_EDU
ORIGINAL POST 3/6/2017: There's an interesting offensive player stat out there called PORPAG, which is an acronym for "Points Over Replacement Per Adjusted Game." PORPAG was created by MSU fans KJ and Spartan Dan back in 2009, and the basic idea is to estimate how many more points per game a player creates than a hypothetical "replacement player" would. The basic formula for PORPAG is: (OffRtg – 88) * %Poss * Min% * 65 The 88 is the presumed O-rating of the replacement player, and the 65 at the end is the constant for possessions in a game. These days we'd probably want to use something more like 68 to account for the tempo frenzy unleashed by the 30-second shot clock. Arguably, we could also increase the O-rating of the replacement player to account for increased average efficiency since 2009, but I'm leaving it at 88 because the Chinese consider 8 to be a very lucky number and my mom was born on August 8th. Michigan fan CT in TC keeps the PORPAG torch lit, and tweeted out the final conference-only numbers for the Big Ten today: Overall, this list tracks common sense reasonably well. Walton and Swanigan are pretty clearly the top two offensive players. And most of the other good offensive players in the conference make the cut. But one thing PORPAG definitely misses is a fundamental basketball fact: it is harder to be efficient at high usage than at low usage. Because PORPAG doesn't account for this, it shows a clear bias for lower-usage, high-efficiency players. This is how you get a guy like Abdur-Rahman (16.6% usage) over Ethan Happ (28.9%). To try to fix this problem, I'm proposing a new stat called PORPAGATU! This stands for PORPAG At That Usage, with an exclamation point for emphasis. The fundamental problem with PORPAG is that there simply are not "replacement players" who can come in and even function at Ethan Happ's 29% usage rate. 
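The low-usage bias is easy to demonstrate numerically. Here is a sketch of the base formula in Python, with two assumptions of mine: usage and minutes are fractions, and the per-100-possession rating gap is divided by 100 to land on a per-game scale (the post writes the formula without that scaling, but the /500 in the later version implies it). The two player lines are made-up numbers in the spirit of the Abdur-Rahman/Happ comparison, not their actual stats.

```python
def porpag(ortg, usage, min_pct, repl=88, poss_const=65):
    """Points Over Replacement Per Adjusted Game (base formula).

    ortg is per 100 possessions, so the rating gap is divided by 100;
    usage and min_pct are fractions (0.20 = 20% of possessions/minutes).
    """
    return (ortg - repl) * usage * min_pct * poss_const / 100

# A low-usage sniper edges out a high-usage workhorse despite carrying
# far less of the offense -- the bias described above.
low_usage = porpag(ortg=125, usage=0.17, min_pct=0.80)   # efficient, small role
high_usage = porpag(ortg=108, usage=0.29, min_pct=0.80)  # efficient for his load
```

With these inputs the low-usage player scores higher, even though no replacement could sustain the high-usage player's workload.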
To account for this, I propose adjusting the hypothetical replacement player's O-Rating in the formula as follows: Less than 10% usage: 93. Between 10% and 30% usage: 93 down to 83 on a sliding scale. More than 30% usage: 83. There's probably a better way to make this adjustment, and I'll take suggestions on that and think about it more. But with this fairly simple adjustment, here's what I get: Some of the more unsatisfying results of PORPAG are noticeably diminished. For example, Nigel Hayes appears at 25th and Zak Showalter drops out from 21st, as it should be. Showalter's dominance of this metric earlier in the season was what first got me thinking about this kind of adjustment. Other high-usage guys like Jok, Happ, Ward and Trimble all move up significantly, better reflecting their offensive value (I think). While still not perfect, my subjective take is that every difference between PORPAG and PORPAGATU! is pro-PORPAGATU! For the record, here's the Big Ten PORPAGATU! for the whole season (including non-conference): Again, not perfect, but a pretty decent proxy for offensive value in my opinion. Got a better idea for how to make the high-usage adjustment? Let me know. If I change the formula, I'll update this post. I'll also put up a PORPAGATU! page on the T-Rank site when I get a chance. I have made some further refinements to this would-be stat: - Instead of using the sliding scale to adjust the value of a replacement player, I will just use the player's "usage-adjusted O-Rating" as the basis for the formula. That is calculated as: O-Rating + ((Usage - 20) * 1.25). In other words, add or subtract 1.25 points for every point above or below an average usage of 20. - For these purposes, I will also adjust a player's O-Rating for the level of competition he has faced. This is done by comparing the average adjusted defensive efficiency of his opponents against the overall average defensive efficiency. 
So if the average efficiency is 103, and a player's opponents have an average adjusted defensive efficiency of 100, his O-Rating will be multiplied by 1.03 (103/100). - Because usage is now accounted for in the player's O-Rating itself, it is not necessary (or proper) to include usage later in the formula, and the total result must be divided by 20 to maintain the same scale. - Because I can, instead of using a constant for the tempo adjustment I'll just use whatever the average D1 tempo happens to be at a given moment. ((ORtg * (D1 Eff. / Opponents' Avg. Def. Eff.) + ((Usage - 20) * 1.25)) - 88) * Min% * D1 Avg. Tempo / 500 I've added the results of this to the 2018 team pages on the T-Rank site, and I hope to roll them into the rest of the site soon. Partially spurred by BTG's post on Grady Eifert, I've made some further refinements to the PORPAGATU! formula. Eifert was an example of an ultra-low-usage, very-high-efficiency loophole in the stat. You may recall that trying to account for that kind of guy was the original impetus for the "ATU" part of the acronym in the first place. Overall, it isn't too big a problem: just a handful of guys over 10+ years really slip through, and for some of them (like maybe Jon Diebler) you could argue that the stat was actually on to something. Also, to some extent the "problem" is unsolvable: ultimately usage and offensive rating tell us different things, and a single stat that tries to combine them is fundamentally a questionable analytical exercise. But let's face it, questionable analytics is totally my brand. Here are the changes: FIRST. I now apply the strength-of-schedule adjustment after the usage adjustment instead of before. For a guy like Grady Eifert, who was playing for a good team with a good strength of schedule, this means the SOS adjustment operates on a smaller base and therefore has less of an effect. 
Here's a simplified example using Eifert's 2019 numbers: Old: Adj O-Rtg = 144.7 * 1.077 + ((10.5 - 20) * 1.25) = 144.0 New: Adj O-Rtg = (144.7 + ((10.5 - 20) * 1.25)) * 1.077 = 143.0 Small beans, but a little better. SECOND. I subtract 1.5 per point under 20 usage instead of 1.25. So: Old: Adj O-Rtg = 144.7 * 1.077 + ((10.5 - 20) * 1.25) = 144.0 New: Adj O-Rtg = (144.7 + ((10.5 - 20) * 1.5)) * 1.077 = 140.5 THIRD. Now let's get nuts. One fact, I think, about the relationship between usage and efficiency is that efficiency becomes more "stable" with more usage. For example, when trying to project performance for a guy with a 120 O-Rating, you can be more confident he's going to sustain that if his usage is closer to 30 than to 10. This makes sense because the guy with higher usage has simply done more offensive stuff. And it's particularly true for the low-usage, high-efficiency guys, because what's often going on there is that they've shot an unsustainable percentage from three over not very many attempts. One way to visualize this fact is to look at the standard deviation of adjusted efficiencies at various usage levels, and the resulting trendline: The x-axis there is usage, and the y-axis is the standard deviation for adjusted offensive rating around that usage. Here's how I use this information: - Take the adjusted O-Rating and calculate the number of standard deviations above the mean (i.e., a Z-score), based on the trendline standard deviation for a given usage. The formula for that is: Adj. Z-Score = (Adj. O-Rating - Avg. Eff.) / (Usage * -.144 + 13.023) - Multiply this adjusted Z-score by 10.143 (the imputed standard deviation at 20 usage) and add it to the average efficiency. For Grady Eifert we get: Adj. Z-Score = (144.7 - 103.1) / (10.5 * -.144 + 13.023) = 3.62 Adj. O-Rating = 103.1 + (3.62 * 10.143) = 139.8 That 139.8 would then be the initial input for the formula above, so: Final Adj. O-Rating = (139.8 + ((10.5 - 20) * 1.5)) * 1.077 = 135.2 Basically, for low-usage guys their O-Ratings are regressed to the mean, and for high-usage guys their O-Ratings are stretched out from the mean. Is this mathematically, statistically, or analytically sound? Almost certainly not! I have no idea! But it makes the results more pleasing to me, therefore it is done. Final product: Grady Eifert's PRPG! falls to 4.3, from around 5.0, and from about 3rd to 8th in the Big Ten last year. Take that, Grady. While I'm on the topic, there are a couple of other minor tweaks I made a while back, both to make the stat more comparable across seasons: I use 69.4 for the tempo variable (rather than calculating it for the season in question) and I normalize the average efficiency to 104.9 (can't remember why I chose that number, but it's ultimately arbitrary).

o_adj = avg_eff / opp_de
uFactor = Usage * -.144 + 13.023
altZscore = (ORtg - avg_eff) / uFactor
xORtg = avg_eff + altZscore * 10.143
if Usage > 20:
    adjoe = (xORtg + ((Usage - 20) * 1.25)) * o_adj
otherwise:
    adjoe = (xORtg + ((Usage - 20) * 1.5)) * o_adj
porpag = (adjoe + (104.9 - avg_eff) - 88) * actual_Min_per * 69.4 / 500
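The pseudocode translates directly to runnable Python, and plugging in the Eifert numbers from the worked example reproduces the figures quoted in the text. The min_pct value below is a hypothetical stand-in, since his actual minutes share isn't given in the post.

```python
def usage_sd(usage):
    """Trendline standard deviation of adjusted O-Rating at a given usage
    (the post's fitted line: SD = usage * -.144 + 13.023)."""
    return usage * -0.144 + 13.023

def adjusted_ortg(ortg, usage, sos, avg_eff):
    """Usage-adjusted, SOS-adjusted O-Rating: regress low-usage ratings
    toward the mean (stretch high-usage ones out), apply the usage slope
    (1.25 per point above 20 usage, 1.5 below), then the SOS multiplier."""
    alt_z = (ortg - avg_eff) / usage_sd(usage)
    x_ortg = avg_eff + alt_z * 10.143      # 10.143 = imputed SD at 20 usage
    slope = 1.25 if usage > 20 else 1.5
    return (x_ortg + (usage - 20) * slope) * sos

def porpagatu(adj_ortg, min_pct, avg_eff, tempo=69.4):
    """Final PRPG! value, normalized to a 104.9 average efficiency."""
    return (adj_ortg + (104.9 - avg_eff) - 88) * min_pct * tempo / 500

# Grady Eifert's 2019 inputs from the worked example; min_pct is hypothetical.
eifert_adj = adjusted_ortg(ortg=144.7, usage=10.5, sos=1.077, avg_eff=103.1)
eifert_prpg = porpagatu(eifert_adj, min_pct=0.66, avg_eff=103.1)
```

eifert_adj comes out at about 135.2, matching the worked example above.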
OPCFW_CODE
Is it really a Google Killer, as dubbed by some people? Although my knowledge of Wolfram Alpha is less than limited, I am sure it is not. Some arguments supporting my view: 1. People's impressions are based on previous achievements of Stephen Wolfram, demos, an empty Web site and a blog. As far as marketing is concerned, it is a huge success: so many references on the Web before it was released. 2. Is it a search engine? A knowledge base? An encyclopedia? If it is not a search engine, it is not competing directly with Google. 3. Previous "killers" rarely killed. For example, IBM's 9370 computers, described in the nineties as VAX killers, faded away about 10 or 15 years before the VAX computers' end of life. An article I read many years ago was titled "RDBMS Death": Object-Oriented Databases (OODBMS) should replace them, according to the article. Today some people may argue that OODBMS are not dead, but RDBMS are still alive and kicking. I also read some articles about C# as a Java killer. Java is still with us and will be around for many years. 4. Only Alpha's first part will be introduced. I am quoting the Wolfram Alpha blog: "And—like Mathematica, or NKS—the project will never be finished. But I’m happy to say that we’ve almost reached the point where we feel we can expose the first part of it." 5. It aims at addressing theoretical scientific problems, e.g. natural language understanding. Translating a scientific breakthrough (and my scientific background is too limited for judging whether it is a breakthrough) into commercial products is always a challenge. What is Wolfram Alpha? My understanding is based upon Wolfram's blog, demo screenshots, a YouTube video including a demo presentation by Stephen Wolfram, and a few other Web items. It is a kind of search engine, but the search is based upon questions instead of keywords. The answer to a question summarizes data on the topic of that question. For example, if you ask: What is the GDP (Gross Domestic Product) of France? 
The answer will include a number representing that value, some histograms, and related variables' values. It is possible to drill down to more specific questions and get more detailed data. The knowledge supplied resembles Wikipedia more than Google's search engine. The following factors distinguish Wolfram Alpha from Wikipedia: - Alpha's knowledge base could be deeper and more scientific than encyclopedic data. - No open community is mentioned as participating in Wolfram Alpha. - Wikipedia's search mechanism is based on keywords. - No explicit digging or drill-down mechanism is inherent in Wikipedia. However, in some Wikipedia articles the text includes references and hyperlinks to other articles describing embedded topics. For example, an article on SOA may refer to an article on SOA Contracts or on SOA Governance. As already mentioned, the questions are asked in natural language (English in this specific example). The answers are based upon the vast knowledge available on the Web. Wolfram Alpha utilizes methods and algorithms of a previous project by the same company: Mathematica. Mathematica is used for mathematical computation, modeling, simulation, visualization, development, documentation, and deployment. The approach is based on calculation algorithms and is different from the Semantic Web approach. - Wolfram Alpha could be an innovative and successful product in the future, but it will not be a Google killer or even a Wikipedia killer. - It is true that, rarely, a new product may be a killer of an older one, but special conditions are required to support this process. I do not notice such special conditions in Wolfram's case. I can think of two well-known examples: 1. Microsoft's Internet Explorer as Netscape's browser killer. The special conditions in this case were a dominant and stronger vendor (Microsoft) and a wrong approach by the market leader (Netscape). 
Notice that after Microsoft's victory, its market share was cannibalized by newer solutions (the open source Firefox and Google's Chrome). 2. Google's victory over earlier search engines, which was based upon a better, innovative search algorithm (PageRank), Larry Page & Sergey Brin's decisiveness, and relationships which enabled sponsorship by adequate venture capital, as well as the hiring of an experienced CEO (Eric Schmidt). - Understanding natural language is quite a big challenge. The Wolfram blog refers to Alpha as something that "almost gets us to what people thought computers would be able to do 50 years ago!" Looking back only 25 years, I learned a little bit about academic disputes on computers' abilities to understand natural human languages. It is relatively easy to understand the syntactic layer, but understanding the semantics is more difficult. A simple example (I do not remember who the originator of that example is) can illustrate the difficulty. The following two syntactically identical sentences have different semantics: The glass fell on the table and it was broken. The rock fell on the table and it was broken. The first sentence tells us that the first object (a glass) was broken. The second sentence tells us that the last object (a table) was broken. I do not know if the Semantic Web is a good enough solution for understanding the semantics of natural languages, but it seems reasonable that a solution to that problem is required. - Wolfram Alpha may provide good understanding of mathematical and physical (and probably other scientific) questions, but probably less good understanding of natural-language questions in other fields. Questions in scientific fields are usually more accurate and formalized than questions on fuzzier topics. Therefore it is easier to understand and interpret questions in these fields. - A lot of data is available on the Web, but not all data was created equal. Some of it is knowledge or valuable information, and some of it is useless. 
The challenge is to distinguish between reliable information and unreliable data. I do not know yet how Wolfram Alpha is going to address it. In any case, it seems likely that its ability to distinguish between different kinds of data will be better in scientific fields than in other fields. - Google is not only a search engine company; therefore fierce competition from a new search engine would not necessarily kill it. For example, an improved search engine without proper advertising mechanisms and advertising mind share may require cooperation with Google in order to use its advertising expertise and tools. - A breakthrough in search engine algorithms could hurt Google even if it does not kill it. A breakthrough will probably be based upon some kind of semantic search instead of keyword search. Wolfram Alpha could be an example of a product including more semantic search and search-results capabilities. Expect other semantic search tools in the future. In order to survive (see my post Vendors Survival: Will Google survive until 2018?), Google should continue researching and inventing new search algorithms, as well as coping with new algorithms used by competitors. A final concluding remark: Wolfram Alpha may be a promising, inventive product, but looking back to 1957, it could also be another General Problem Solver, i.e. a pretentious general-purpose effort, capable of answering formalized mathematical questions but far away from answering real-world problems.
OPCFW_CODE
Feature request: selective heap/stack segment locking and address range specification Thank you so much for creating this repository! It's a really useful tool that I've found myself using quite frequently. I just had a suggestion for a potential feature that I think could be really helpful: the ability to selectively lock either heap segments or stack segments into the working set. This would allow users to save resources by only locking the segments that they need, rather than locking all of them at once. Additionally, it would be great if there was the option to specify a specific address range using command line options. This would allow users to be even more precise in terms of which segments they want to lock, and could be especially useful for larger projects where memory usage is a concern. Thanks again for all of your hard work on this repository - it's greatly appreciated! Awesome, all of those seem like great ideas - will try to implement those in the next month or so :) Cheers 
Hey @Ceiridge, I haven't forgotten about this; I had some free time this w-e so I started working on it. I've drafted the --ranges feature in https://github.com/0vercl0k/lockmem/commit/bc950551f0de974103e32186c97af833c7ce849e (binaries from the CI if you want to dogfood it: https://github.com/0vercl0k/lockmem/actions/runs/4239731588). I also know how to do the stack one, so it'll follow shortly, hopefully. For the heap I need to do some testing. Cheers Some updates: I have a first draft for handling stacks, and I now handle all those range filters better via an interval tree. It uses boost, which is way bigger than I'd like, so let's see if I can trim it down. It hasn't been extensively tested, but if you want to play w/ it you can grab a binary in 44df8776f9fd0be8e94a592a3f3f98f8d4b97de7 (https://github.com/0vercl0k/lockmem/actions/runs/4273459202). Cheers Haha of course not! Boost is hundreds of megs if I had to guess 🤣 I just pushed the ICL module which implements the interval_set that I am using to track ranges & overlaps, etc. I am not a fan of submodules; they require people to download them on checkout with a special command line, and if you want to patch the dependency it makes it hard, etc. I prefer subtrees. And yes, I understand your request; it is implemented under --stacks and you can see the code in lockmem.cc. I might try to trim its dependencies or implement my own, but feel free to ignore the implementation details for now :) Cheers Ok, very good! Haha, getting trolled hard here 😅 I wish there were more libraries that did what I want.. trust me. Looking into AVL tree libraries and maybe slapping my interval tree on top.. Anyways, will let you know! 
Cheers All right - I couldn't get boost.icl into a small enough state, so I implemented a ghetto version which works well enough for that small tool. Cheers
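Not the tool's actual code, but the range bookkeeping that boost.icl's interval_set provides (and that the "ghetto version" replaces) boils down to something like this merge-on-insert sketch:

```python
def add_range(ranges, start, end):
    """Insert the half-open range [start, end) into a sorted list of
    disjoint ranges, merging overlaps -- a toy stand-in for an interval set."""
    merged = []
    for s, e in sorted(ranges + [(start, end)]):
        if merged and s <= merged[-1][1]:   # overlaps or abuts the previous range
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def contains(ranges, addr):
    """True if addr falls inside any tracked range."""
    return any(s <= addr < e for s, e in ranges)

# Two overlapping --ranges arguments collapse into one lockable region.
regions = add_range([], 0x1000, 0x2000)
regions = add_range(regions, 0x1800, 0x3000)
```

A real implementation would use a balanced tree for O(log n) queries; the point here is just the overlap-merging semantics.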
GITHUB_ARCHIVE
TUESDAY, 10 A.M. Casey is walking through the newsroom when he sees Dana walking by. Dana is wearing her winter coat and has a big smile on her face. Whoa. Dana. She's happy. Casey: Hey Dana. Dana: Good morning to you too, Casey. Casey: You're in a good mood today. Dana continues to walk toward her office, while Casey decides to change his original course and follow Dana. Something's up with her. Dana greets several staffers, assistants, and producers along the way. Dana: Good morning, gentlemen. Dave: Good morning, Dana. Will: Morning, Dana. Dana: Good morning, Kim. Good job on last night's feature. Kim: Thanks, Dana. Here's a copy of last night's show. Hey, Casey. Kim hands Dana a tape. Dana: Thanks, Kim. Casey: Hey, Kim. She's in way too good of a mood today. Dana: Good morning, Natalie and Jeremy. How are you two doing? Natalie: Doing great. Something's definitely up, and I can venture a guess as to what it is. Casey: Hey, Kevin comes back today, doesn't he? Dana: Actually, he came back last night. Isaac walks by the two. Isaac: Good morning, you two. Dana, don't forget about coming up with some ideas. Dana: I haven't forgotten. Isaac: And Casey, stop stalking Dana. Casey takes a step back. Casey: I'm not stalking her. Go after her. Something's not right. Dana and Casey walk into her office. Casey: So he just showed up at your doorstep. Casey: Without any warning? What a bum. Completely unannounced. I would be ticked off over that. Dana sets the tape on her desk, then hangs her coat up. Casey: Was he waiting outside or did he let himself in? Dana: He was outside. Casey: Too much of a gentleman to just waltz on inside? Dana: How would he get in, Casey? It's not like I gave him a key. Casey: He doesn't have a key? Dana: No, Casey. He doesn't have a key! Good, Dana. Dana sits behind her desk. Casey: Well, what if you had a guest with you? Dana: But I didn't. Casey: Guess he lucked out. Dana: What's that supposed to mean? What is that supposed to mean, Casey? 
Dana looks down at her desk at her messages, prepared notes, and schedule. Casey: Anyway, Dan wanted me to remind you about the photo shoot tomorrow. The network is really promoting our nominees this year. Also, my neighbors are really noisy when I am home. Plus, we have a Pac-10 game tonight, which means high scoring, which means we won't be starting on time tonight. Casey looks at Dana reviewing her notes. Please look at me when I'm talking to you. Casey: You didn't hear a word I said. Dana: Yes, I did. Casey: No, you didn't. Dana: Casey, you were talking about the photo shoot, which I have marked right here on the calendar; you talked about your noisy neighbors, probably because they are having sex and you're not; and you reminded me about the probability of tonight's game running long. Did I miss anything? Casey stands in shock and blinks his eyes. Damn, she is good. She is really good. Casey: No, nothing. Dana: Are you all right? Casey: Yeah, I'm fine. No, I'm not. Dana: Are you sure? Work. Work. Use work. Casey: Yes, Dana, I'm sure. Look, I gotta get back to work. Dana: Okay, I'll see you later. Casey leaves her office and closes the door behind him. Damn it. She had a good time. Casey heads toward his office, when he runs into Natalie. Natalie: Casey! What's up? Casey: Nothing, Natalie. Natalie: Are you going to tell me, or am I going to have to find out from Dana? She'll twist your arm; better fess up. Casey: There's nothing to report other than Kevin is just so perfect. Natalie: Hang in there, Casey. I'll take care of it. Natalie jogs toward Dana's office. Right. Wait! Where is she going?! Casey: Natalie. Natalie! Casey goes into his office. As he walks over to his desk, he looks down at the phone and sees a message has been left. He checks the caller I.D. My agent called. Casey picks up the phone, pushes a few buttons, and listens to the message. There's a contract offer? Call him back this afternoon? Casey puts the phone down. 
Casey: Is it here or elsewhere?

Casey sits down and thinks about the possibility of a new contract.

He sounded good. He sounded excited. That's good news. A new contract.
* [HoTT] LICS 2023 Call for Papers and Call for Workshop Proposals

From: Sam Staton @ 2022-10-31 17:20 UTC (permalink / raw)
To: categories, GAMES, theorem-provers, concurrency, finite-model-theory, asl, agda, appsem, lfcs-interest, cade, prog-lang, linear, DMANET, fom, homotopytypetheory, rewriting, types-announce, coq-club, agda, ProofTheory@lists.bath.ac.uk

CALL FOR PAPERS and WORKSHOP PROPOSALS

Here is both a call for papers (18/23 Jan) and a call for workshop proposals (30 Nov) for LICS 2023 (June 2023).

Thirty-Eighth Annual ACM/IEEE Symposium on LOGIC IN COMPUTER SCIENCE (LICS)
Boston, June 2023

The LICS Symposium is an annual international forum on theoretical and practical topics in computer science that relate to logic, broadly construed. We invite submissions on topics that fit under that rubric. Suggested, but not exclusive, topics of interest include: automata theory, automated deduction, categorical models and logics, concurrency and distributed computation, constraint programming, constructive mathematics, database theory, decision procedures, description logics, domain theory, finite model theory, formal aspects of program analysis, formal methods, foundations of computability, foundations of probabilistic, real-time and hybrid systems, games and logic, higher-order logic, knowledge representation and reasoning, lambda and combinatory calculi, linear logic, logic programming, logical aspects of AI, logical aspects of bioinformatics, logical aspects of computational complexity, logical aspects of quantum computation, logical frameworks, logics of programs, modal and temporal logics, model checking, process calculi, programming language semantics, proof theory, reasoning about security and privacy, rewriting, type systems, type theory, and verification.
IMPORTANT DATES FOR PAPERS

Authors are required to submit a paper title and a short abstract of about 100 words in advance of submitting the extended abstract of the paper. The exact deadline time on these dates is given by anywhere on earth (AoE).

Titles and Short Abstracts Due: 18 January 2023
Full Papers Due: 23 January 2023
Author Feedback/Rebuttal Period: 15-19 March 2023
Author Notification: 5 April 2023
Conference: 26-29 June 2023

Submission deadlines are firm; late submissions will not be considered. All submissions will be electronic via EasyChair.

PAPER SUBMISSION INSTRUCTIONS

Every full paper must be submitted in the IEEE Proceedings 2-column 10pt format and may be at most 12 pages, excluding references. LaTeX style files and further submission information are at https://lics.siglog.org/lics23/cfp.php.

LICS 2023 will use a lightweight double-blind reviewing process. Please see the website for further details and requirements of the double-blind process.

The official publication date may differ from the first day of the conference. The official publication date may affect the deadline for any patent filings related to published work. We will clarify the official publication date in due course.

LICS 2023 Call for Workshop Proposals

Researchers and practitioners are invited to submit proposals for workshops on topics relating logic -- broadly construed -- to computer science or related fields. Typically, LICS workshops feature a number of invited speakers and a number of contributed presentations. LICS workshops do not usually produce formal proceedings. However, in the past there have been special issues of journals based in part on certain LICS workshops.

Proposals should include:
- A short scientific summary and justification of the proposed topic. This should include a discussion of the particular benefits of the topic to the LICS community.
- Potential invited speakers.
- Procedures for selecting participants and papers.
- Plans for dissemination (for example, special issues of journals).
- The proposed duration, which is one or two days.
- A discussion of the proposed format and agenda.
- Expected number of participants, providing data on previous years if the workshop has already been organised in the past.

Proposals should be sent to Valentin Blot: lics23-workshops at valentinblot.org

IMPORTANT DATES FOR WORKSHOP PROPOSALS

- Submission deadline: November 30, 2022
- Notification: mid-December 2022
- Program of the workshops ready: May 24, 2023
- Workshops: June 24-25, 2023
- LICS conference: June 26-29, 2023

The workshops selection committee consists of the LICS Workshops Chair, the LICS General Chair, the LICS PC Chair and the LICS Conference Chair.
Pydantic fails to parse object: invents a new irrelevant object instead

Checks
- [x] I added a descriptive title to this issue
- [x] I have searched (google, github) for similar issues and couldn't find anything
- [x] I have read and followed the docs and still think this is a bug

Bug

Output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())"`:

```
pydantic version: 1.8.1
pydantic compiled: True
install path: /Users/hansbrende/opt/miniconda3/envs/testenv/lib/python3.7/site-packages/pydantic
python version: 3.7.10 | packaged by conda-forge | (default, Feb 19 2021, 15:59:12) [Clang 11.0.1 ]
platform: Darwin-19.0.0-x86_64-i386-64bit
optional deps. installed: ['typing-extensions']
```

```python
import pydantic
from typing import Union, List

class Item(pydantic.BaseModel):
    name: str

class Items(pydantic.BaseModel):
    items: Union[Item, List[Item]]

print(Items(items=[{'name': 'MY NAME', 'irrelevant': 'foobar'}]))
```

OUTPUT: `items=Item(name='irrelevant')`
EXPECTED OUTPUT: `items=[Item(name='MY NAME')]`

Hi @HansBrende. This is a known issue that may be fixed in v1.9. See https://github.com/samuelcolvin/pydantic/pull/2092#issuecomment-763833122 for the proposal.

@PrettyWood Although the issue you linked to also involves unions, I don't believe this is the same issue AT ALL... unless I am totally missing something!

@PrettyWood To give you a better idea of how weird this is, if I change the dictionary order to:

```python
print(Items(items=[{'irrelevant': 'foobar', 'name': 'MY NAME'}]))
```

then the output is correct: `items=[Item(name='MY NAME')]`

If I add a second irrelevant key to the dictionary:

```python
print(Items(items=[{'name': 'MY NAME', 'irrelevant': 'foobar', 'irrelevant2': 'hello'}]))
```

then the output is still correct: `items=[Item(name='MY NAME')]`

Yes, I know it's the exact same issue that I linked, even though it's hard to see the relationship. Pydantic tries to match types in the order of the union, and for this it can coerce. The issue is that it first calls dict() to match a BaseModel.
And `dict([{a: 1, b: 2}]) == {a: b}`. If the number of items is odd the coercion fails, but if it's even it works. Hence the solution I linked, with cases like https://github.com/PrettyWood/pydantic/pull/66/files#diff-bd467a363c9f155a2e6a7716aeca0c351089135a53d09a66138cea4f16506ebdR2728-R2738

@PrettyWood Gotcha, that makes sense. Thanks for the explanation!

I keep it open since I'll need to add extra logic for this in the PR for SmartUnion.

@PrettyWood coming back to this now that I've had some time to mull it over: the real bug in this specific example (in my mind) is not so much the absence of SmartUnion but the ability of pydantic to parse such a contrived Item from a list containing a dict. So if we remove the Union type altogether to avoid the red herring, what this really boils down to is the following bug:

```python
import pydantic

class Item(pydantic.BaseModel):
    name: str

print(Item.parse_obj([{'name': 'MY NAME', 'irrelevant': 'foobar'}]))
```

OUTPUTS: `name='irrelevant'`
EXPECTED OUTPUT: Validation Error

Whereas, swapping the order of the dict to

```python
print(Item.parse_obj([{'irrelevant': 'foobar', 'name': 'MY NAME'}]))
```

we get the expected validation error, as we should:

```
pydantic.error_wrappers.ValidationError: 1 validation error for Item
name
  field required (type=value_error.missing)
```

Current behavior is very much non-deterministic... which should qualify as a bug in its own right (independent of how unions are implemented).

@PrettyWood here is a rough quick-fix that would completely eliminate this problem. Rather than unconditionally calling dict(input) on everything, why not do something along these lines?

```python
def convert_to_a_REASONABLE_dict(input) -> dict:
    if hasattr(input, '__iter__'):
        for x in input:
            if isinstance(x, dict):
                raise ValidationError
    return dict(input)
```

This will be done for v2: https://github.com/samuelcolvin/pydantic/issues/1268

@PrettyWood Great to hear!
In that case, my approach 2 or 3 (or something similar) could potentially be snuck in before 2.0, since it is not really a breaking change, correct?

Approach #4: use the existing validators.tuple_validator()! I just noticed as I was browsing the pydantic.validators source code that you already have a tuple_validator written. So a 4th approach would be to simply tweak your existing dict_validator via your existing tuple_validator (importantly, the tuple_validator does not allow dicts)!

```python
def dict_validator(v: Any) -> Dict[Any, Any]:
    if isinstance(v, dict):
        return v
    if not hasattr(v, 'keys'):
        v = (tuple_validator(t) for t in v)
    try:
        return dict(v)
    except (TypeError, ValueError, errors.TupleError):
        raise errors.DictError()
```

@PrettyWood @samuelcolvin What do you think? Should I submit a PR for one of these approaches?

This is fixed in v2.
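The dict() coercion at the root of this thread is easy to reproduce in plain Python, without pydantic at all (my illustration, not code from the issue):

```python
# dict() accepts an iterable of key/value pairs. A dict iterates as its keys,
# so a dict with exactly two keys looks like a single (key, value) pair.
print(dict([{'name': 'MY NAME', 'irrelevant': 'foobar'}]))
# -> {'name': 'irrelevant'}

# With an odd number of keys the pseudo-pair has the wrong length, so the
# coercion fails instead of silently succeeding:
try:
    dict([{'name': 'MY NAME', 'irrelevant': 'foobar', 'irrelevant2': 'hello'}])
except ValueError as err:
    print('coercion failed:', err)
```

This is exactly why adding a second irrelevant key (an odd total of three keys) restores the expected behaviour in the examples above.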
How's everyone doing? Have you been caught up in the social phenomenon of Pokemon Go? And more importantly, why are you not part of Team Valor? (This by no means reflects the view of CSUS as a whole.) As a quick foreword, we're gonna be moving to a bi-monthly newsletter! It turns out that sometimes we get a lot of news in the first two weeks of a month, with deadlines at the end of the month! As it's absolutely no use to you if you hear about this after the deadline, we're gonna move on to bi-monthly updates! Of course, this will come with the caveat that if we don't get anything of interest, there won't really be anything to mention, but hey, maybe we'll post about some new Pokemon we've seen! As a general heads up though, Dragonites have been spotted downtown by Prince's Island Park. "Go" to it! (You can groan here.) Pokemon aside, we've got some real news headed your way! First things first, the Cardboard VR workshop is in full motion! As we said in our last newsletter, we're helping host a Cardboard-focused VR workshop on the 31st! Learn to work in VR, make a game in Unity for said VR, and get a Cardboard headset for free! Full details can be found here; tonight is the last night for the early-bird sign-up, and we hope to see you there! Secondly, since the people at GE have graciously given us a blurb to pass out, they've saved me having to type up an explanation! "GE Canada is starting 2017 recruitment for our Edison Engineering Development Program (EEDP)! Celebrated as a diverse environment where great talent meets great opportunity – we invite you to join our upcoming Virtual Information Session on July 18th at 12pm EST to learn more about this full-time, paid opportunity. Register at http://invent.ge/29vXwSP. You can also view and apply directly to this Leadership Program at http://bit.ly/29oqFDX. Applications will be accepted until July 25th." But for real, GE is an absolutely wonderful company; if you're interested, take a look at the info session!
Yes, we know this is very short notice, but this is kinda what that foreword was talking about, no? ;) Now, for all of our third- and fourth-year ComSci students in InfoSec, have we got an opportunity for you! A fantastic unnamed benefactor has set up a bursary for you, and you guys only! The deadline to apply is August 1st; you can find details here, and hey, if I got $2200 for doing what I enjoyed, I'd be drowning in money, not Pidgeys. If this applies to you, definitely take a look! Lastly, a quick job opportunity! Honestly, the files do a much better job of describing what's expected of these jobs, so I'll let you read through them! It looks like they're currently looking for people to work on web development, plus an imaging software and server support role to go with it! Find the info here:
C++ Software Engineer – Server
C++ Software Engineer
Web Application Developer
That's everything for now! We'll see each other again soon (honestly, it's two weeks, give or take); until then, keep it real!
how to query the plan cache to find how healthy it is?

This morning, looking at my monitoring tool, I get a warning about High Compiles:

"Query plan compiles should generally be <15% of batches per second. Higher values indicate plan reuse is low, and will often correlate with high CPU, since plan compilation can be a CPU-intensive operation. High compiles may correlate with low plan cache hit ratios, and can be an indicator of memory pressure, since there may not be enough room to keep all plans in cache. If you see consistently high compiles, run a Quick Trace and sort the results by Cache Misses, then expand details to view actual compiling statements (SP:CacheMiss events, highlighted) along with the reason (SubClass) and procedure (Object)."

While doing a superficial investigation I notice the CPU is OK, according to the picture below.

But although I have set optimize for ad hoc workloads to 1, when I look at the SQL Server memory usage I see that lots of it is still used by the plan cache instead of the buffer cache, as can be seen in the picture below.

"The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that contain many single-use ad hoc batches. When this option is set to 1, the Database Engine stores a small compiled plan stub in the plan cache when a batch is compiled for the first time, instead of the full compiled plan. This helps to relieve memory pressure by not allowing the plan cache to become filled with compiled plans that are not reused."

The question: How can I find out what I can remove from the plan cache? Or at least, how would I start this investigation?

In case you haven't seen these articles, they talk about a few more things besides setting optimize for ad hoc workloads:
1. https://www.sqlskills.com/blogs/kimberly/plan-cache-and-optimizing-for-adhoc-workloads/
2.
https://www.sqlskills.com/blogs/kimberly/plan-cache-adhoc-workloads-and-clearing-the-single-use-plan-cache-bloat/

Starters

Setting the SQL Server option optimize for ad hoc workloads is not really a solution to fix high recompilation values in the query plan cache. It is, however, a good solution when your application is performing lots of ad hoc (hence the name) queries that run only once and which would otherwise pollute (waste) the query plan cache space. This can be, for example, an application that allows the users to dynamically select the columns of tables that they wish to see the results of.

Main Course

When the SQL Server Database Engine (DBE) executes a query and this query has never been executed before, the Database Engine has to determine how it will access the data. As soon as the DBE has determined the best way to access the data, it will store this information in the Query Plan Cache (QPC), so that the users will benefit the next time the application performs the same query again (albeit probably with slightly different values). The SQL Server DBE will search the QPC each time the application requires a statement be run. If the DBE finds an adequate Query Plan in the QPC, then it will select that Query Plan to retrieve the data. If, however, the DBE is unable to determine an adequate Query Plan in the QPC (not found, or timeout value reached for querying the QPC), then the DBE will create a new Query Plan. This is the resulting compiles/s you are observing.

Dessert

The root cause, however, can vary.

- SQL Server may be under memory pressure and is unable to store enough compiled Query Plans in the QPC. The DBE will kick out old plans and insert the new ones. (Solution: Add more memory to the SQL Server instance.)
- The application is indeed generating a large amount of queries that have never before been executed and/or have slightly different values than the Query Plans stored in the QPC.
(Solution: Remove complexity in the application.)

Answers to your questions

You can't. The query plans belong in the query plan cache. You can either clear the QPC or leave the DBE to do its best. (Selective deletion of query plans can be achieved with DBCC FREEPROCCACHE(plan_handle), but I wouldn't recommend this.)

Determine what query plans are stored in the QPC and optimise the application and/or the memory settings for the SQL Server instance. (See the script in the Cigar Lounge, and join with sys.dm_exec_sql_text and/or other DMVs according to the description for the plan_handle column in the sys.dm_exec_cached_plans documentation.)

Cigar Lounge

The following query will list all cached plans stored in the QPC, and can be linked to other relevant DMVs to retrieve additional information:

```sql
SELECT
    plan_handle,
    ecp.memory_object_address AS CompiledPlan_MemoryObject,
    omo.memory_object_address,
    pages_allocated_count,
    type,
    page_size_in_bytes
FROM sys.dm_exec_cached_plans AS ecp
JOIN sys.dm_os_memory_objects AS omo
    ON ecp.memory_object_address = omo.memory_object_address
    OR ecp.memory_object_address = omo.parent_address
WHERE cacheobjtype = 'Compiled Plan';
GO
```

Reference: sys.dm_exec_cached_plans (Transact-SQL) (Microsoft Docs)

"Returns a row for each query plan that is cached by SQL Server for faster query execution. You can use this dynamic management view to find cached query plans, cached query text, the amount of memory taken by cached plans, and the reuse count of the cached plans."

Reference Material

- Why would I NOT use the SQL Server option "optimize for ad hoc workloads"? (DBA Stack Exchange)
- sys.dm_exec_query_plan (Transact-SQL) (Microsoft Docs)
- sys.dm_exec_cached_plans (Transact-SQL) (Microsoft Docs)
- Troubleshooting Plan Cache Issues (Microsoft MSDN)
- sp_BlitzFirst® Result: High Compilations per Second (Brent Ozar)
- Forcing Query Plans (Microsoft Technet)
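As a concrete starting point for the investigation, a common follow-up pattern (a sketch built only on the documented DMV columns; adjust the filters to your workload) is to list single-use ad hoc plans together with their statement text, biggest first:

```sql
-- Single-use ad hoc plans: candidates for plan cache bloat.
SELECT TOP (50)
    cp.usecounts,
    cp.size_in_bytes,
    cp.objtype,
    st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.usecounts = 1
  AND cp.objtype = 'Adhoc'
ORDER BY cp.size_in_bytes DESC;
```

If this list is dominated by near-identical statements differing only in literal values, that points at the "large amount of queries that have never before been executed" cause above, and parameterisation in the application is the fix rather than clearing the cache.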
what should I include in the knowledge base to tell Prolog that the average of 0/0 is zero, to avoid a zero divisor?

These are the facts I entered in the knowledge base. average takes a list and returns the result, but when I pose the query "average([],X)." it returns X=0; then when I press ; it gives me a zero divisor error and I don't understand why. I tried posing the following 4 facts in the KB:

```prolog
average(0,0).
average([],0).
average(0/0,0).
average(0,0/0).
```

I'm not sure what you are trying to achieve by writing 0/0 (as a matter of fact I'm not sure what any of the facts other than average([],0). are there for), but clearly dividing 0 by 0 will cause a division-by-zero error. So that's your problem. Remove the occurrences of 0/0 and the error will disappear.

When I try posing the query average([],X). it returns X=0, which is true, but I can still press ; which gives a 0/0 division error.. that is why I tried writing 0/0 in the knowledge base, which did not work..

@Amrhussein: If average([], 0). is the only fact you have and you enter the query average([], 0). you most definitely do not get a 0/0 error. That only happens if you have another rule which uses division. I'm guessing that in addition to the 4 facts you said you had, you also have a rule which divides the sum of the given list by the length of the given list? In that case, that's your problem. Because if you do that for the empty list, you're dividing 0 (the sum of the empty list) by 0 (the length of the empty list).

What is the code of average/2? Assuming that the current code is:

```prolog
average(L,X):-
    sumlist(L,Sum),
    length(L,N),
    X is Sum/N.
```

then you should enter the special case like this:

```prolog
average([],0).
average(L,X):-
    sumlist(L,Sum),
    length(L,N),
    X is Sum/N.
```

This will have the behavior you described: "when I try posing the query average([],X). it returns X=0 which is true but I can still press ; which gives a 0/0 division error ..
" to avoid the second error you should prevent prolog from continuing to the second clause if the list is empty. you can do that either with a cut: average([],0):-!. average(L,X):- sumlist(L,Sum), length(L,N), X is Sum/N. or by checking the length of the list before dividing average([],0):-!. average(L,X):- sumlist(L,Sum), length(L,N), N>0, X is Sum/N. I can't comment on thanosQR's answer (insufficient rep) but you can avoid the cuts by pattern matching: average([], 0). average([H|T], X):- sumlist([H|T], Sum), length([H|T], N), X is Sum / N. or using the if -> then ; else construct: average(L, X):- ( L = [] -> X = 0 ; sumlist(L, Sum), length(L, N), X is Sum / N ).
Global page breaking as in TeXmacs

One of the most distinctive features of TeX is that the line breaking algorithm is "global", as achieved by minimising a penalty over the entire paragraph. This is one of the most important features of TeX which makes its typesetting quality so high with respect to other software. In TeXmacs, they implemented a similar global algorithm for page breaking. Indeed, in "plain" LaTeX, page breaks often occur at weird places. In TeXmacs, this tends to occur less often. Are there any variants of TeX or LaTeX that implement a global algorithm for page breaking? Is this doable? Taking into account some of the comments, I make my question a bit more precise: is there a LaTeX style package that enables global page breaking and that comes with style parameters in order to control some of the decisions about page breaks?

this question would be much better without unsubstantiated statements. define "weird", preferably with a complete test example document. As it is, with your personal biases expressed as facts, it is not really possible to give any objective answer.

Well, you understand that such documents are typically quite long. Just to mention a few instances of "weird": pages that start with a formula, pages that end with a section title, pages that start with the last line of a section, etc.

latex never makes a page break after a section title (unless you have made definitions that break the system). don't just throw in random unsubstantiated things that you say the system does wrong and some other system does better. Show an example document and someone will no doubt show you how to improve the tex markup and get a better result.

Well, maybe I misremembered this particular item (although there definitely exist broken style files indeed). But my point is that I am not interested in knowing how to improve my LaTeX markup.
What I want is for LaTeX to do things right automatically, or at least to be able to set some style variables to get something that is as good as possible. So once again: is there a style package that I can use that enables global page breaking, yes (which one) or no?

@Gérard Maybe it is not the best strategy to base your question on unfounded claims about "weird page breaking"? You have to torture latex a lot to get the output you describe. I'm sure there are also users who manage to produce bad results in texmacs; that does not mean texmacs itself is bad, does it? That's like saying: look, there is at least one person who produced at least one formula in texmacs with incorrect spacing around the decimal separator and the wrong font for the units https://i.sstatic.net/U0mds.png - let's conclude that all texmacs must be bad.

There are several mechanisms for doing global page breaking in tex. They haven't been used much in practice: in earlier times the available memory made this difficult, and now that memory is available there are only limited classes of document where there is sufficient flexibility in page breaking to make a difference. But, for example, this paper available from the latex project website describes a modern technique making use of the extra facilities in luatex: https://www.latex-project.org/publications/2018-01-FMi-CI-Journal-28454894_as_submitted.pdf

Thanks for the interesting pointer. Not very professional of the author of this paper, though, that TeXmacs is not mentioned; many of the ideas were implemented there right from the start around 2000 (I believe; to be checked).

Why are there "only limited classes of document where there is sufficient flexibility in page breaking to make a difference"? I find this feature to be highly useful, although it can be distracting when editing files in a wysiwyg manner in "page mode".

why should texmacs be mentioned in a paper discussing global page breaking in luatex?
why not lout or nroff or any other system you care to mention????

Why? Because every paragraph has word spaces, so it benefits from white space adjustment to justify lines; but in document classes that have mostly text, the line spacing is fixed, so there is no flexibility for local or global optimisation: you just need to break at the end of the page. So the situation is not at all the same. Magazine layouts with lots of floating figures and lots of headings and adjustable space are a different matter, but not usually the kind of document set with tex.

texmacs is far from the first to consider that; here's a 2000 paper on global page breaking with tex, and I suspect I could find some from the 1980s if I searched harder: http://www.tug.org/TUGboat/Articles/tb21-3/tb68fine.pdf

The paper has the format and appearance of a scientific paper. In that case, it should meet scientific citation standards and be aware whether similar ideas were already used before. In this case, they were implemented more than 20 years ago, so the proposed technique is not "modern" at all, as you misleadingly state.

Your comments are not particularly reasonable. The concept of global optimisation of page breaking is as old as typography, so not modern at all; that paper is discussing techniques that have relatively recently become available using luatex. The fact that some other system may or may not have some version of global optimisation of page breaks isn't relevant at all.

@Gérard OK, please search harder then. I am also interested in actual implementations. What should I do, concretely speaking, to use global page breaking in LaTeX? We are talking about automatic global page breaking and actual implementations, not about conceptual speculations.

@Gérard Plass's thesis (under supervision of Prof. Knuth) on a page-breaking version of tex's line breaker, 1981: https://tug.org/docs/plass/plass-thesis.pdf

Reply to "Why?": I don't agree.
Typical papers with mathematical formulas, sections, and theorems (for other types of papers, most people use Word) have lots of opportunities for vertical adjustments. Thanks for the link to Plass's thesis. That indeed seems to be a more serious scientific study of the problem. But my original question remains unanswered: how can we concretely use this from within LaTeX?

sorry you don't find Frank serious; he has led the team maintaining latex for the last 30 years. That is the state of the art. You are free to have other opinions, but opinion-based questions are explicitly discouraged on this network.

For a scientific article, it makes sense to do a bit more research on existing implementations. If Frank is not taking TeXmacs seriously, then that is not very nice and not very professional if he decides to write a scientific paper on such a topic. Which does not mean that I disrespect his work on LaTeX, but I don't see what his track record has to do with my question. By the way, the whole thing came up in the discussion (that you heated up), not at all in my question.

@Gérard you seem to have a thing for texmacs this week, but I can not see why anyone should expect texmacs (or 3b2 or lout or patoline or..) to be mentioned in a paper describing new features in luatex. It's not a matter of taking it seriously or not seriously; it simply isn't relevant to the subject at hand.

It is relevant for a paper on "global page breaking", I guess. How many actually usable implementations are there? And again, how to use this in LaTeX, concretely speaking? The whole discussion has become a tangent, and my actual question remains unanswered! Back to the topic. You say "There are several mechanisms for doing global page breaking in tex". That is not very precise. Could you be more specific? Is there some style package that we can use to make this happen?
Yes David, as I already stated during the discussion "where to ask questions about TeXmacs", I indeed have "a thing for TeXmacs this week". But it seems that you have something personal against me (or maybe against TeXmacs?), because I tend to get downvoted everywhere as soon as you appear. Not a very friendly welcome on this network.

@Gérard huh? Why do you write such nonsense? As you can see in my profile, I don't downvote. Of all the thousands of people on this site, I'm the one who's tried to answer your questions constructively even though they are ill-formed and only marginally on topic. But it seems that you are not interested in asking questions or having answers; you just want to use the Q&A format to promote texmacs. I don't downvote, but I did vote to close as "opinion based", which is a standard close reason on the platform.
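For completeness on the "style parameters" half of the question: stock LaTeX has no global page-break optimiser, but it does expose parameters that steer its greedy page breaker. A minimal sketch using only core TeX/LaTeX controls (not a global-breaking package):

```latex
% Standard page-breaking controls in plain LaTeX (local, not global):
\widowpenalty=10000   % forbid a paragraph's last line alone at the top of a page
\clubpenalty=10000    % forbid a paragraph's first line alone at the foot of a page
\raggedbottom         % let pages run short rather than stretching vertical glue
% \enlargethispage{\baselineskip} % manually give a single page one extra line
```

These influence individual break decisions page by page; they do not minimise a penalty over the whole document the way TeX's paragraph breaker does over a paragraph.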
Power laws with pooling: a more realistic model of venture returns TL;DR: In the last post I built some models of venture portfolios of different sizes based on the idea that venture outcomes are power-law distributed. The conclusion was, other things being equal, bigger portfolios should do better, with around 150 investments being table stakes. Those models assumed each investment was independent, with exactly one investor. The pooled model presented here is more realistic; overall it dampens expected returns without changing the overall pattern that more is better. Interesting implications are that dealflow and brand are key for VC. In the last post we simulated over 45M independent outcomes (sum(x=5->300) x*1000). But in practice portfolios are not independent and the universe of investible businesses is much smaller. Our first model is more like a simulation of venture builders, where the fund creates its own independent businesses. How about instead we create a pool of businesses that receive investment, and our portfolios sample from this pool? In other words, in the investing life of a fund, a finite number of fundable businesses will be created. During this period, every fund will be choosing from that finite set of businesses. How big should that pool be? Well, according to Crunchbase, in 2016/2017 around 3,500 angel and seed funding rounds happened globally per quarter: Let's say there's some under-counting, and that the number is growing, and round up to 5,000 rounds per quarter. So in the 3–5 year investing life of a typical fund, there's a universe of 60,000–100,000 companies that they theoretically could invest in (of course they wouldn't see that many deals, but that doesn't affect our model). Let's split the difference, and take a pool size of 80,000 companies. What I'm going to do next then is generate this pool from the same power law distribution as the previous post, and have each portfolio draw its investments at random from the pool.
Here’s a histogram of 10,000 companies drawn at random from this pool, compared to the correlation ventures data, and the independent draws data from the original post: Note this is only a gut-check since I don’t have access to the underlying data and so can’t fit the model, but it looks reasonable. I’ve also tried a variety of parameters for the power law distribution, and the overall trends are robust. Let’s use the same method as before, sampling 1000 portfolios of each size between 5 and 300 investments, and looking at average statistics across each set of 1000. We’re interested in the triple rate (how often a random portfolio could be expected to triple the fund) and the failure rate (how often it could be expected to return <1x). Compared to the original independent draws model: You can see that the pattern is very similar — it’s hard to tell the difference. The differences between the models come out when you look at the mean performance: This time the mean is behaving more as we would expect; it’s not affected by the occasional extreme outlier that we get in 45M+ independent draws. It still looks high however compared (anecdotally) to historical returns. This may again just be the effect of outlier portfolios — the small number that do really really well as you’d expect with samples from an underlying power law. Again, median may give a better sense of ‘typical’ results: This looks more in line with historical data (and again very similar to the unpooled model). Again, median performance continues to improve through this range. The pooled model of venture returns is intuitively more realistic than independent draws from a power law distribution, but the underlying trends remain the same: at least within this range, more investments are better, and 100–150 seem to be ‘table stakes’ in the sense that after this, no portfolio in 1000 samples loses money. 
Thinking about a cohort of ~80,000 investible companies within the investment lifetime of a fund leads to some interesting insights:
- Even at that scale, the biggest return within the cohort is a significant portion of the overall returns. In the pool that I generated in this post, the maximum return of any single investment is 18660x, comparable to Jerry Neumann's estimate of 10000x for the return on Andy Bechtolsheim's $100K cheque into Google, or the return on YC's investment in Dropbox.
- What's especially interesting is that that single return is over 5% of the entire return from those 80,000 companies. This implies that in any given cohort, fund performance will be dominated by whether you get into that one investment :). Any fund that does will tend to outperform every other fund in that cohort.
- In fact 5% may be a significant underestimate. I simulated 1000 cohorts of 80,000 companies each; the average ratio of the single most successful company's return to the return of the entire pool was just over 17%.

What would you conclude if you took this modelling seriously? Firstly, I think, deal flow is key: can you build a machine that can see and reliably process a high number of deals, while providing a great service to entrepreneurs? Secondly, and related, brand is enormously important. Success breeds success. Brand improves deal flow, and helps a fund get into deals that it likes. So the early years of an investment company should try to build up the brand. [If you want to read & experiment with the code that generated these graphs, you can copy it from Google Drive or grab it from GitHub. If you'd like to help me extend it or turn it into a more re-usable library, email me stevecrossan @ gmail or send me a CL :)] Next post: follow on strategy, fund returners.
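The top-company share is cheap to estimate under the same kind of assumed distribution. This sketch only illustrates the computation; since the post doesn't give its exact power-law parameters, the share it produces will differ from the 18660x and 17% figures above.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_share(pool_size=80_000, alpha=1.1):
    # Fraction of a cohort's total return captured by its single best company,
    # under an assumed Pareto tail (alpha is illustrative, not the post's).
    multiples = 1 + rng.pareto(alpha, pool_size)
    return float(multiples.max() / multiples.sum())

# Average the share over many simulated cohorts, as the post describes.
shares = [top_share() for _ in range(100)]
avg_share = float(np.mean(shares))
```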
OPCFW_CODE
Pantheon's platform is built for agencies and organizations who don't compromise on website security. We protect your Drupal and WordPress sites with secure infrastructure, carefully configured access to resources, and best practices around data safety and retention. The platform provides:
- Container-based infrastructure
- One-click core updates
- Denial of service protection
- Automated security monitoring
- Network intrusion protection
- HTTPS with custom certificate
- Role-based change management
- Automated backup and retention
- Secure code and database access
- Secure integration to resources
- Secure datacenters

Learn more about the fastest hosting platform on the planet: The Pantheon Web Hosting Platform. Pantheon runs their website infrastructure as if no single aspect of the web can be trusted. This approach helps ensure that all of their servers and services have the highest degree of isolation. Pantheon is built on a container-based cloud architecture. Unlike deployment of clusters or virtual private servers, containers allow lightweight partitioning of an operating system into isolated spaces where applications can safely run. Similar to what’s used by Google App Engine or Heroku and optimized to run Drupal and WordPress, our infrastructure can isolate resources while making it easy to scale and deploy fixes across the entire infrastructure. A single website vulnerability poses no risk to other sites on the platform—or even to the customer’s other sites. Pantheon uses control groups, a kernel-level facility for resource isolation for memory, disk, CPU, and other server resources. This means that process and memory-level isolation are effective for all customer processes, from PHP to MySQL. Pantheon’s distributed file system, Valhalla, is accessed over encrypted channels using client-server authentication. Once mounted, customer account files are protected through standard Linux permission controls.
System-level logs are isolated from customers on external logging systems, while customers' own logs are isolated with strict file permissions.

Automated Site Monitoring
Pantheon runs over a million checks a day to proactively monitor network, server, and application resources. Our status page shows a transparent, aggregated report of current and historical uptime across all Pantheon sites.

One-Click Core Updates
Update Drupal and WordPress core with a single click. Pantheon’s built-in dev, test, and live environments allow developers to push updates to production safely and quickly.

Network Intrusion Protection
Pantheon’s intrusion prevention system (IPS) provides an additional layer of protection against vulnerabilities by using an x.509-based public key infrastructure to add authentication and encryption to Rackspace’s own trusted network. Our edge routers tunnel traffic to origin servers, preventing circumvention of request validation, filtering, and caching. IPS runs for any services with user-chosen passwords, including the dashboard, SFTP, Git, and Drush, detecting failed logins via multiple ingress points. At the server layer, IPS detects and prevents unauthorized host access. Our logging infrastructure records the identity of blocked accounts for later investigation. Security logs from the servers are centrally collected, processed, and stored for a year.

Denial of Service Protection
Pantheon works with Rackspace to provide management of denial-of-service attacks, filtering ongoing attacks and isolating traffic streams through Brocade load balancers for each site and environment.

SAML and Two-Factor Authentication
Pantheon supports SAML integration, enabling additional security features like two-factor authentication and single sign-on. Customers who enforce SAML authentication can also enforce settings like minimum password strengths or authentication audit logs.
Role-Based Access to Site Resources
Pantheon’s Change Management feature allows site owners to manage organization-wide settings and selectively grant or deny developer access to deploy to production. Role-based access lets team members work on what they need to without introducing risk to other sites or infrastructure components. Pantheon servers run on a Linux OS, which is far less susceptible to compromise by malware. We use only trusted vendor repositories for software, verify package signatures, perform cryptographic validation of platform code, and maintain auditable change management. The platform runs user-published site software in containers with multiple layers of isolation. We run configurations that prevent direct execution, even within the containers, of files uploaded through the website. Antivirus protection is bundled into the platform to ensure our system's integrity and to prevent malware from spreading through customer websites. Pantheon provides the ClamAV antivirus engine with up-to-date databases for use by our customers.

Pantheon Employee Administrative Access
Pantheon grants access according to least privilege. Employees can interact with servers via a secure API without actual server access—when they do need it, SSH-key based authentication is used and activity is recorded in a central log.

Releasing Patches and Updates
Pantheon periodically deploys new container host instances with the latest supported kernel, OS, and packages. Containers are migrated to the updated instances automatically and the older systems are retired. Core CMS application updates and security patches are tested internally before being deployed to our customer base through our one-click update workflow.

Vulnerabilities and Incident Response
Security issues identified by Pantheon are immediately communicated to affected parties. Details of any significant disruption are posted at status.getpantheon.com and tweeted by @pantheonstatus.
We always conduct a post-incident review of security events to improve the effectiveness of our response to future incidents. Pantheon’s primary datacenter is managed by Rackspace. Rackspace provides 24/7 direct support access on any hardware issue. Access to data centers is granted through both keycard and biometric scanning protocols and protected by round-the-clock surveillance monitoring. Every Rackspace data center employee undergoes a thorough background security check before hiring. Many of Pantheon’s core components are fully redundant and highly available with no single point of failure: the internal Pantheon API, the edge routing layer, DNS, and files directory storage. Where redundancy is not feasible, we maintain automated tools to facilitate recovery. Pantheon’s internal services are designed to tolerate process and server-level failure. We maintain a minimal server footprint in multiple datacenters to facilitate restoration in the event of a datacenter-level failure. When possible, we use redundant providers for upstream services like DNS.

Customer Content Durability
Pantheon uses industry-standard practices for on-disk storage, including writing to multiple physical disks with hardware-level RAID. For further protection, customers can make automated backups on the platform. Backups have over 99.99% durability and availability, are stored in multiple datacenters, and are encrypted at rest. Backups can be automated or triggered manually. Each backup, containing all site-related customer data, is shipped to Amazon S3 as a compressed archive. Backups are encrypted during transfer and at rest with 256-bit Advanced Encryption Standard ciphers, storing private keys and encrypted backup data on separate servers. Users have the ability to test restoration via the dashboard for any site, for any manual or scheduled backup. They also have the ability to restore from a backup to a new site, on Pantheon or elsewhere.
Pantheon was designed to be highly available, resilient to single-component failures, recoverable in the unlikely event of data center failure, not reliant on the services of any single employee, and manageable remotely in case of the loss of Pantheon offices. Our technology is built upon best-in-class infrastructure providers, including Rackspace and Amazon Web Services, chosen for their outstanding track record and reputation. All Drupal and WordPress code, files, and database content can be scheduled for daily backup and stored with Amazon's multi-datacenter Simple Storage Service (S3). If the website's primary data center should become inaccessible, service will be restored from the most recent backups using an alternate data center. Users have full access to critical components so they can create backups to set up their own disaster recovery infrastructure.

The Family Educational Rights and Privacy Act (FERPA) Compliance
The Family Educational Rights and Privacy Act (FERPA) is a Federal law that protects the privacy of student education records. Pantheon's security policies and infrastructure allow clients to be FERPA compliant.

SOC 2 Type II, SOC 3, and ISO 27001
Pantheon's underlying infrastructure provider, Rackspace, has received global security certifications and compliance verifications for Service Organization Controls SOC 2 Type II and SOC 3, in addition to complying with the ISO 27001 standard. Rackspace security attestations and certifications provide assurance of the security of the infrastructure and network layers of Pantheon.

Privacy Shield & US-Swiss Safe Harbor
Pantheon complies with the requirements of the Privacy Shield and US-Swiss Safe Harbor frameworks on data privacy. To learn more about these programs, and to view Pantheon’s certification, please visit privacyshield.gov/participant_search and export.gov/safeharbor.
OPCFW_CODE
use crate::mir::{Expression, Id};
use rustc_hash::FxHashMap;

#[derive(Clone, Debug, Default, Eq, PartialEq)]
pub struct PurenessInsights {
    // TODO: Simplify to `FxHashSet<Id>`s.
    definition_pureness: FxHashMap<Id, bool>,
    definition_constness: FxHashMap<Id, bool>,
}

impl PurenessInsights {
    /// Whether the expression defined at the given ID is pure.
    ///
    /// E.g., a function definition is pure even if the defined function is not
    /// pure.
    pub fn is_definition_pure(&self, expression: &Expression) -> bool {
        match expression {
            Expression::Int(_)
            | Expression::Text(_)
            | Expression::Tag { .. }
            | Expression::Builtin(_) // TODO: Check if the builtin is pure.
            | Expression::List(_)
            | Expression::Struct(_)
            | Expression::Reference(_)
            | Expression::HirId(_)
            | Expression::Function { .. }
            | Expression::Parameter => true,
            // TODO: Check whether executing the function with the given
            // arguments is pure when we inspect data flow.
            Expression::Call { .. }
            | Expression::UseModule { .. }
            | Expression::Panic { .. } => false,
            Expression::TraceCallStarts { .. }
            | Expression::TraceCallEnds { .. }
            | Expression::TraceExpressionEvaluated { .. }
            | Expression::TraceFoundFuzzableFunction { .. } => false,
        }
    }

    /// Whether the value of this expression is pure and known at compile-time.
    ///
    /// This is useful for moving expressions around without changing the
    /// semantics.
    pub fn is_definition_const(&self, expression: &Expression) -> bool {
        self.is_definition_pure(expression)
            && expression.captured_ids().iter().all(|id| {
                *self
                    .definition_constness
                    .get(id)
                    .unwrap_or_else(|| panic!("Missing pureness information for {id}"))
            })
    }

    // Called after all optimizations are done for this `expression`.
    pub(super) fn visit_optimized(&mut self, id: Id, expression: &Expression) {
        let is_pure = self.is_definition_pure(expression);
        self.definition_pureness.insert(id, is_pure);
        let is_const = self.is_definition_const(expression);
        self.definition_constness.insert(id, is_const);
        // TODO: Don't optimize lifted constants again.
        // Then, we can also add asserts here about not visiting them twice.
    }

    pub(super) fn enter_function(&mut self, parameters: &[Id], responsible_parameter: Id) {
        self.definition_pureness
            .extend(parameters.iter().map(|id| (*id, true)));
        let _existing = self.definition_pureness.insert(responsible_parameter, true);
        // TODO: Handle lifted constants properly.
        // assert!(existing.is_none());

        self.definition_constness
            .extend(parameters.iter().map(|id| (*id, false)));
        let _existing = self
            .definition_constness
            .insert(responsible_parameter, false);
        // TODO: Handle lifted constants properly.
        // assert!(existing.is_none());
    }

    pub(super) fn on_normalize_ids(&mut self, mapping: &FxHashMap<Id, Id>) {
        fn update(values: &mut FxHashMap<Id, bool>, mapping: &FxHashMap<Id, Id>) {
            *values = values
                .iter()
                .filter_map(|(original_id, value)| {
                    let new_id = mapping.get(original_id)?;
                    Some((*new_id, *value))
                })
                .collect();
        }
        update(&mut self.definition_pureness, mapping);
        update(&mut self.definition_constness, mapping);
    }

    pub(super) fn include(&mut self, other: &PurenessInsights, mapping: &FxHashMap<Id, Id>) {
        fn insert(
            source: &FxHashMap<Id, bool>,
            mapping: &FxHashMap<Id, Id>,
            target: &mut FxHashMap<Id, bool>,
        ) {
            for (id, source) in source {
                assert!(target.insert(mapping[id], *source).is_none());
            }
        }
        // TODO: Can we avoid some of the cloning?
        insert(
            &other.definition_pureness,
            mapping,
            &mut self.definition_pureness,
        );
        insert(
            &other.definition_constness,
            mapping,
            &mut self.definition_constness,
        );
    }
}
STACK_EDU
require 'spec_helper'

describe Ramda::Math do
  let(:r) { described_class }

  context '#add' do
    it 'from docs' do
      expect(r.add(2, 3)).to be(5)
    end

    it 'is curried' do
      expect(r.add(2).call(3)).to be(5)
    end
  end

  context '#dec' do
    it 'from docs' do
      expect(r.dec(42)).to eq(41)
    end
  end

  context '#divide' do
    it 'from docs' do
      expect(r.divide(71, 100)).to eq(0.71)
    end

    it 'is curried' do
      expect(r.divide(1).call(4)).to eq(0.25)
    end
  end

  context '#inc' do
    it 'from docs' do
      expect(r.inc(42)).to eq(43)
    end
  end

  context '#multiply' do
    it 'from docs' do
      triple = r.multiply(3)
      expect(triple.call(4)).to be(12)
      expect(r.multiply(2, 5)).to be(10)
    end

    it 'is curried' do
      expect(r.multiply(2).call(5)).to eq(10)
    end
  end

  context '#product' do
    it 'from docs' do
      expect(r.product([2, 4, 6, 8, 100, 1])).to be(38_400)
    end
  end

  context '#subtract' do
    it 'from docs' do
      expect(r.subtract(10, 8)).to be(2)

      minus5 = r.subtract(17)
      expect(minus5.call(5)).to be(12)

      complementary_angle = r.subtract(90)
      expect(complementary_angle.call(30)).to be(60)
      expect(complementary_angle.call(72)).to be(18)
    end
  end

  context '#sum' do
    it 'from docs' do
      expect(r.sum([2, 4, 6, 8, 100, 1])).to be(121)
    end
  end
end
STACK_EDU
We decided to compile basic SSH commands for the convenience of our readers and clients. You can manage your VPS or server over SSH with these commands. This list of SSH commands will make your life easy. If you want to know how to access your server from your Windows OS with the help of PuTTY, you can read this: PuTTY Installation Guide

1. ls Command
The ls command shows a list of the directories and files on the server. This command will help you take a look at directories and files. You will see results like the following after you type ls and press Enter in Linux. There are also some additional options you can use to see more detailed information about files and directories:
-l — will show you vital information like file size, creation date, owner, and permissions.
-a — with this option, you will be able to see hidden files and directories on your server or VPS.

2. mkdir Command
After you have looked at your directory list, this command will create a new directory for you. Type mkdir and your folder name in lowercase and press Enter. You can see in the screen below that we created a directory called 'folder.' Type the ls command again, and you will be able to locate your new directory.

3. cd Command
After you create a new directory, use the cd command if you want to move into it. Type cd and the desired folder name in the Linux window and press Enter. In this screen you can see that we successfully entered our new directory called 'newfolder' with the cd command. To enter a subdirectory, give the full path after the cd command, like this: cd myfolder/subdirectory, and press Enter. Now, if you want to go back, simply type two dots after cd, like this: cd .. This command will take you one step back.

4. touch Command
Our fourth command is touch. This command will create a new file for you. Please make sure you go to the desired folder first using the cd command, and then give the touch command followed by a new file name.
For example, you can see in the screen below that we made a file named 'ewebguru' inside the myfolder directory we made earlier. You can see how we went into the myfolder directory from the root using the cd command, and then made a file with the name 'ewebguru.' We can see the created file with the ls command. You can remove an entire folder by giving this command: rm -r foldername

5. cp Command
This command will help you copy a file. Make sure you are in the folder where the source file is located, then type the command cp, the source file name, and the new copied file name. You can see in this example that we went to the myfolder directory and copied the file 'ewebguru' to 'ewebguru1'. After giving the ls command, both files are visible.

Ashok Arora is CEO and Founder of eWebGuru, a leading web hosting company of India. He is a tech enthusiast with more than 25 years of experience in Internet and Technology. Ashok holds a Master's in Electronics from a leading Indian university. Ashok loves to write on cloud, servers, datacenters, and virtualisation technology.
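The five commands above, put together as one example session (the directory and file names are just placeholders):

```shell
ls -la                          # list everything, including hidden files, with details
mkdir myfolder                  # create a new directory
cd myfolder                     # move into it
touch ewebguru.txt              # create an empty file
cp ewebguru.txt ewebguru1.txt   # copy the file
cd ..                           # go back up one level
rm -r myfolder                  # remove the directory and everything in it
```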
OPCFW_CODE
Model is not ready yet
This is a very peculiar one. I've already raised this issue in kserve#2882 but I think it may be appropriate here because it happens only with mlserver. I run an sklearn model as a KServe InferenceService with Knative's 'scale to zero' feature enabled. The first request to the service (the one that initializes the Pod) always gets: 400 {"error":"Model sklearn-iris is not ready yet."}

Log of the mlserver container:
Environment tarball not found at '/mnt/models/environment.tar.gz'
Environment not found at './envs/environment'
2023-05-07 15:50:18,121 [mlserver.parallel] DEBUG - Starting response processing loop...
2023-05-07 15:50:18,122 [mlserver.rest] INFO - HTTP server running on http://<IP_ADDRESS>:8080
INFO: Started server process [1]
INFO: Waiting for application startup.
2023-05-07 15:50:18,202 [mlserver.metrics] INFO - Metrics server running on http://<IP_ADDRESS>:8082
2023-05-07 15:50:18,202 [mlserver.metrics] INFO - Prometheus scraping endpoint can be accessed on http://<IP_ADDRESS>:8082/metrics
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
2023-05-07 15:50:19,895 [mlserver.grpc] INFO - gRPC server running on http://<IP_ADDRESS>:9000
INFO: Application startup complete.
INFO: Uvicorn running on http://<IP_ADDRESS>:8080 (Press CTRL+C to quit)
INFO: Uvicorn running on http://<IP_ADDRESS>:8082 (Press CTRL+C to quit)
INFO: <IP_ADDRESS>:0 - "POST /v2/models/sklearn-iris/infer HTTP/1.1" 400 Bad Request
2023-05-07 15:50:21,855 [mlserver] INFO - Loaded model 'sklearn-iris' succesfully.
2023-05-07 15:50:21,856 [mlserver] INFO - Loaded model 'sklearn-iris' succesfully.

Might it be a problem where mlserver mistakenly returns True from the /ready endpoint when the models aren't actually loaded yet? 🤔 P.S. If this helps, I load the model from an S3 bucket. Hey @MikhailKravets, while the model is still getting loaded, the /ready endpoint will return False.
In the latest version of MLServer, the /infer endpoint will also return a 400 error while the model is not yet ready (previously it would just let the /infer request go on). This would explain the difference in behaviour. Perhaps the readiness probe should just use the model's /ready endpoint? As in, the /v2/models/sklearn-iris/ready endpoint? This would ensure that when the request gets forwarded by the autoscaler the model is already up and running. Hi @adriangonz. Thanks for the response. I'm not sure that KServe / KNative requests MLServer's /ready endpoint at all. At least, I didn't find it anywhere in the logs. Hey @MikhailKravets , That would explain why the deployment gets marked as ready even though the model is not. The fix would probably be to use that endpoint as the readiness probe for the deployment in KServe. Since the fix is not related to MLServer, it may be best to open an issue in the KServe repo so that it can get tackled there. I'll close this one for now, but please feel free to cross-link this issue with the one on the KServe repo. Of course @adriangonz. Thanks for your support!
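One client-side workaround, while the probe question is sorted out on the KServe side, is to poll the model-level ready endpoint before sending inference traffic. This is only a sketch: the base URL and model name are placeholders, and it assumes the server exposes the standard V2 inference protocol route GET /v2/models/{name}/ready.

```python
import time
import urllib.request

def wait_until_ready(check, timeout=60.0, interval=1.0):
    """Poll `check()` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

def model_ready(base_url, model_name):
    # V2 inference protocol: the ready route returns 200 once the model is loaded.
    url = f"{base_url}/v2/models/{model_name}/ready"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused / timeout / HTTP error: treat as "not ready yet".
        return False

# Usage (placeholder URL and model name):
# wait_until_ready(lambda: model_ready("http://sklearn-iris.default.example.com",
#                                      "sklearn-iris"))
```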
GITHUB_ARCHIVE
DTO objects for each entity
I have inherited an application written in Java that uses JPA to access a database. The application uses a design pattern that I haven't come across before and I would really appreciate some guidance on why this pattern is used. Like many applications, we have a front end, middleware, and back end database. The database is accessed via DAOs. Each method on the DAO loads an entity-DTO, which is just a POJO with nothing but getters and setters, and that entity-DTO is then passed into an entity-proper that has other methods that change the entity state. An example [class names changed to protect the innocent]:

enum Gender { Male, Female }

class PersonDTO {
    private String mFirstName;
    private String mLastName;
    private Gender mGender;
    ...
    String getFirstName() { return this.mFirstName; }
    void setFirstName(String name) { this.mFirstName = name; }
    // etc
}

class Person {
    PersonDTO mDTO;

    Person(PersonDTO dto) { mDTO = dto; }

    String getFirstName() { return mDTO.getFirstName(); }
    void setFirstName(String name) { mDTO.setFirstName(name); }
    // and so on

    void marry(Person aNotherPerson) {
        if (this.getGender() == Gender.Female
                && aNotherPerson.getGender() == Gender.Male) {
            this.setLastName(aNotherPerson.getLastName());
        }
        aNotherPerson.marry(this);
    }
}

This is repeated across 30 or so entity classes, doubled to 60 with the DTOs, and I just can't get my head around why. I understand (bits) about separation of concerns and I also understand (bits) about the difference between an EAO-based design and, say, an active-record-based design. But does it really have to go this far? Should there always be at least one "DB" object that contains nothing but getters and setters that map to the DB fields? Disclaimer: there are varying opinions on this subject and depending on your system's architecture you might not have a choice. With that said...
I've seen this pattern implemented before, and I'm not a huge fan of it; in my opinion it duplicates large amounts of code without adding any real value. It seems to be particularly popular in systems with XML APIs like SOAP, where it might be difficult to map the XML structure directly to your object structure. In your particular case it seems to be even worse, because on top of the duplicate getFirstName()/getLastName() methods, there is business logic (which belongs in the service layer) coded right into a POJO (which should be a simple data transfer object like the DTO). Why should the POJO know that only people of opposite sex can get married? To help better understand why, can you explain where these DTOs come from? Is there a front-end submitting data to a controller, which then converts it to a DTO, which is then used to populate your entity-proper with data? The DTOs are created as a result of web service calls, all handled from one class run by a timer. It goes off, calls a web service, fills in DTOs, persists them, then creates the "entities" which are held in memory. The front end then accesses the entities and mutates them via a controller. As for "Why should the POJO know that only people of opposite sex can get married?" (a bad example in the 21st century!), isn't that a core part of OOP? So you can, say, have a CelebrityPerson that overrides marry() and doesn't take their spouse's name? Versus a switch on the type in your service layer that acts depending on the type? @JamesHobson - The question is why does Person handle marrying? Is there a divorce method as well? How about emailing folks to let them know the divorce has been finalized (now your Person class knows about emails)? The list of methods can get infinitely long as you add new functionality. That is why there should be a MarriageService which handles marry and divorce operations, so that the POJO doesn't have to know about every scenario in which it gets used. Separation of concerns would be the guiding OOP principle here.
As far as handling different marriage types, check out the Strategy pattern. @JamesHobson - I guess it boils down to the Person class looking suspiciously a lot like a Service would in a typical N-tier application. Consider writing a unit test. If Person were a service without getFirstName()/getLastName(), you'd just test marry(Person person1, Person person2). But in this setup you have to test your setters and getters as well. Marry is probably a bad example, since you don't really "tell" a person to marry and it may involve lots of other operations, not just two people. What I'm getting into, I suppose, is the old AnemicDomainModel argument: why can't the domain objects have some methods that change their own state beyond just basic get/set, possibly involving other objects? There is much on SO about all this, and I suppose, like anything, it's just opinion - and one that seems the more I read, the more confused I get ;) It could also be that they are using this just to separate the JPA annotations from the rich domain object. So I'm guessing that somebody didn't like having the JPA annotations and the rich domain object behaviour in one class. Somebody could also have argued that the JPA annotations and the rich domain object should not be in the same layer (because the annotations mix the concerns), so you would get this kind of separation if you won this argument. Another place where you'd see this kind of thing happening is when you want to abstract similar annotations away from the rich domain objects (like JAXB annotations in web services, for example). So the intent might be that the DTO serves as a sort of serialization mechanism from code to the database, which is very similar to the intent mentioned here by Martin Fowler. Thanks - I get that it's the why I am looking for in "didn't like having JPA annotations and the rich domain object behaviour in one class". This doesn't appear to be a known pattern.
In general it is common to maintain a separate object to represent the record in the database, referred to as a domain object. The CRUD operations on the object are part of a DAO class, and other business operations would be part of a Manager class, but none of these classes store the domain object as a member variable, i.e. neither DAO nor Manager carry state. They are just processing elements working on domain objects passed in as parameters. A DTO is used for communication between the front-end and back-end, to render data from the DB or to accept input from the end-user. DTOs are transformed to domain objects by the Manager class, where validations and modifications are performed per business rules. Such domain objects are persisted in the DB using the DAO class. The lack of a "Manager" class is I think what is confusing me. Would you say it would be more expected to have something like PersonManager.marryPeople(Person person1, Person person2), etc., rather than these operations within the entities? That's right @JamesHobson, the marry method should have been part of a manager class. I have worked on one project where we had DTOs for the sole purpose of transferring information from the front-end controller to some facade layer. The facade layer is then responsible for converting these DTOs to domain objects. The idea behind this layering is to decouple the front-end (view) from the domain. Sometimes DTOs can contain multiple domain objects for an aggregated view. But the domain layer always presents clean, reusable, cacheable (if required) objects.
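For comparison, here is a rough sketch of the service-layer shape suggested in the comments. MarriageService and this simplified Person are hypothetical names for illustration, not code from the application being discussed.

```java
// Illustrative only: the business rule moves into a stateless service,
// leaving Person as plain state with getters and setters.
enum Gender { Male, Female }

class Person {
    private String lastName;
    private final Gender gender;

    Person(String lastName, Gender gender) {
        this.lastName = lastName;
        this.gender = gender;
    }

    String getLastName() { return lastName; }
    void setLastName(String name) { lastName = name; }
    Gender getGender() { return gender; }
}

class MarriageService {
    // The rule the question's Person.marry() hard-coded now lives here.
    void marry(Person a, Person b) {
        if (a.getGender() == Gender.Female && b.getGender() == Gender.Male) {
            a.setLastName(b.getLastName());
        } else if (a.getGender() == Gender.Male && b.getGender() == Gender.Female) {
            b.setLastName(a.getLastName());
        }
    }
}

class Demo {
    public static void main(String[] args) {
        Person alice = new Person("Smith", Gender.Female);
        Person bob = new Person("Jones", Gender.Male);
        new MarriageService().marry(alice, bob);
        System.out.println(alice.getLastName()); // prints "Jones"
    }
}
```

Swapping in a Strategy implementation (as suggested above) would mean giving MarriageService a pluggable naming policy instead of the if/else.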
STACK_EXCHANGE
Novel: Release that Witch, Chapter 1351 "Crushed harass mountain"

"Shut up!" Hackzord interrupted him. "If they are lowlifes, then who are we, the ones who got outwitted by lowlifes? From now on, I don't want to hear you calling them 'lowlifes' anymore!"

Yet his fingers grasped only thin air. So that b*tch had been stopping and starting on purpose, fooling him into believing that she could only manage short bursts of faster flight because she was limited by her magical power?

The Sky Lord quickly pushed his astonishment to the back of his mind and stepped up to a higher position, taking in the whole battle area beneath his feet. The iron birds were clearly unable to follow his pace. However hard they tried to ascend, their stupid and clumsy bodies were slower than worms.

Feeling a growing sense of urgency, Hackzord expanded the Distortion Door to its maximum range, covering his enemy's entire attacking range! But before he could attack again, a thunderous roar erupted once more.

The Sky Lord immediately switched targets, opened a Distortion Door, and appeared before the witch. Just as he was about to rip them apart one by one, a passing bird suddenly transformed into a devilbeast and propelled itself towards him with its jaws wide open!

At last, it was his turn. "I'm going to break you all into pieces!" he roared for the first time in this battle.

There was a flash of flame. After what felt like two years and yet only a brief instant, a black shadowy cluster flashed across like lightning.
Innumerable cracks appeared on the Distortion Door and it shattered apart in a deafening blast like glass. Surprised, the Sky Lord looked toward where he sensed the gazes—numerous dark figures were coming in his direction, both from the horizon over the sea and from the land. Among them were iron birds and witches. His Eye Demons hadn’t regarded the birds commonly seen at sea as threats at all. Hackzord dodged abruptly, just avoiding the attack in time. Fuming, he raised his palm and a black streak of light instantly appeared in the space between them. This was also a Distortion Door, except its breadth was only a finger thick; any figure that passed through it would not come out in one piece. The iron birds, having finished redirecting themselves, surged straight towards him. Hackzord waved his left hand, directly opening a Distortion Door at his side, swallowing the metal bolts that shot at him; at the same time he opened the other side of the door near the iron birds. After the lethal metal bolts passed through the door they swept straight back towards where they had originated. Immediately, several iron birds were struck and their formation fell into disarray.
The golden-haired witch turned into a streak of brilliant light and shot straight towards him! Not having enough time to use the same tactic, Hackzord could only gather all the magical power in his body and convert it into a shield cocooning his entire body! He harrumphed and then chased after her again! The female before his eyes suddenly moved with explosive speed, tearing hundreds of yards away from him in a blink of an eye. At the same time, the shock wave from her movement slammed into Hackzord and the Parasitic Eye Demons like a wall. The spell blast glistened and rippled outwards before slowly dying away. Retreating now would be as simple as taking a breath. After the door, the next prey was the Eye Demon that had pushed him aside—the blue light on its body pulsated and blood, flesh, and organs squirted outwards, raising a wave of blue fog among the shattered pieces. The two events occurred almost simultaneously, so rapid that Hackzord couldn’t react. He then leaped up and flew toward the other end of the island. However, this was not what he was focusing on at the moment—the milk had been spilt. Staying there would not make his losses any smaller and would only add fuel to his rage. If he had the energy, he wanted to use it to make his foes pay.
But what infuriated Hackzord was that not only had the previous attack been a feint, the witch hadn’t shot anything at him at all but had accelerated downwards to catch the falling seabird.
OPCFW_CODE
Employment of NLP within Manufacturing

Natural Language Processing (NLP) is a subset of Artificial Intelligence that helps identify key elements in human instructions, extract relevant information, and process it in a manner that machines can understand. Integrating NLP technologies into a system helps machines understand human language and mimic human behaviour. For example, Amazon's Echo, Microsoft's Cortana and Apple's Siri make extensive use of NLP technologies to interact with their users. NLP technologies can help in the interaction with machines and speed up the operation of different types of manufacturing systems, cutting down response times. Imagine a scenario where a manufacturing company hires a data scientist to collect shopfloor worker information, analyse all the machine readings, and report any sort of problem. One disadvantage of this scheme is that, by the time management reads the report, a problem may already have occurred and damaged the entire process. If a computer or robot with embedded sensors and NLP technologies is employed, it can analyse information coming from the machines, reports from customers and information from workers in order to obtain relevant information about the process. This computer or robot might even communicate with users and accept input in natural language. Within the manufacturing industry, NLP might be adopted, for example, for the following tasks:

- Process Automation: The use of NLP technologies in the manufacturing process allows the automatic processing of information in natural language and the execution of repetitive tasks like paperwork and report analysis.

- Inventory Management: Analysing data about the stock, sales and user reports of certain products is essential to make the correct decisions for a company to optimise and maximise profits.
By leveraging NLP technologies the resulting benefits are: 1) the entire process becomes more comprehensive; 2) errors in the analysis of sales become less likely; 3) it is easier to analyse the manufactured products and discard those with low quality without affecting the supply chain and sales.

- Emotional Mapping: Sentiment analysis and emotion detection are among the most exciting applications of NLP. Early NLP systems allowed organisations to collect speech-to-text communication without accurately determining its full meaning. Today, NLP approaches can sort and understand the nuances and emotions in human voices and text, giving organisations unparalleled insight. Learning customer expectations and operators' viewpoints is a very important element in manufacturing. NLP technologies make it possible to identify emotions and the polarity of the opinions of customers and operators, and to suggest actions to improve products and processes. For example, knowing the expectations of customers is key to building a longer relationship and creating engagement with them.

- Operation Optimisation: Furthermore, NLP technologies can be employed to track the performance of equipment and improve the interaction with machines. This simplifies the operation of complex systems and can enable Human Machine Interaction, where the operator and the machine collaborate in order to optimise processes.

Therefore, by leveraging NLP technologies, both decision makers and operators can improve the collaboration between humans and machines within the manufacturing sector and increase their knowledge about their systems and processes. The STAR project aims to enable the deployment of secure, safe, reliable and trusted human-centric AI systems in manufacturing environments. Many of these AI systems require interaction with humans and machines and can often benefit from NLP techniques.
For example, Speech-to-Text and Text-to-Speech capabilities can enable multimodal interaction with the system, or sentiment analysis can evaluate the polarity of the messages the system receives and adapt to the user's mood. These user-centric ideas are within the NLP activities of STAR. By: Diego Reforgiato Recupero, Nino Cauli and Rubén Alonso / R2M Solution and University of Cagliari
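As a concrete illustration of the sentiment-analysis idea above, a minimal lexicon-based polarity scorer can be written in a few lines. The word lists and the scoring rule here are illustrative assumptions only; production systems use trained models and far larger lexicons, and this is not STAR project code:

```python
# Minimal lexicon-based polarity scorer (illustrative word lists only)
POSITIVE = {"good", "great", "excellent", "reliable", "fast", "happy"}
NEGATIVE = {"bad", "poor", "defective", "slow", "broken", "late"}

def polarity(text: str) -> float:
    """Return a score in [-1, 1]: >0 positive, <0 negative, 0 neutral."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(polarity("The new machine is fast and reliable"))        # 1.0
print(polarity("Delivery was late and the part is defective")) # -1.0
```

A real deployment would feed such scores from customer and operator messages into the decision-making loop described above.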
OPCFW_CODE
package manager

import (
	"net"
)

// Task modes handled by the backward manager's run loop
const (
	B_NEWBACKWARD = iota
	B_GETSEQCHAN
	B_ADDCONN
	B_GETDATACHAN
	B_GETDATACHAN_WITHOUTUUID
	B_CLOSETCP
	B_CLOSESINGLE
	B_CLOSESINGLEALL
	B_FORCESHUTDOWN
)

type backwardManager struct {
	backwardSeqMap   map[uint64]string
	backwardMap      map[string]*backward
	BackwardMessChan chan interface{}
	TaskChan         chan *BackwardTask
	ResultChan       chan *backwardResult
	SeqReady         chan bool
}

type BackwardTask struct {
	Mode           int
	Seq            uint64
	Listener       net.Listener
	RPort          string
	BackwardSocket net.Conn
}

type backwardResult struct {
	OK       bool
	SeqChan  chan uint64
	DataChan chan []byte
}

type backward struct {
	listener          net.Listener
	seqChan           chan uint64
	backwardStatusMap map[uint64]*backwardStatus
}

type backwardStatus struct {
	dataChan chan []byte
	conn     net.Conn
}

func newBackwardManager() *backwardManager {
	manager := new(backwardManager)
	manager.backwardSeqMap = make(map[uint64]string)
	manager.backwardMap = make(map[string]*backward)
	manager.BackwardMessChan = make(chan interface{}, 5)
	manager.ResultChan = make(chan *backwardResult)
	manager.TaskChan = make(chan *BackwardTask)
	manager.SeqReady = make(chan bool)
	return manager
}

// run serializes all map access through a single goroutine: every operation
// arrives as a task on TaskChan, so no additional locking is needed.
func (manager *backwardManager) run() {
	for {
		task := <-manager.TaskChan
		switch task.Mode {
		case B_NEWBACKWARD:
			manager.newBackward(task)
		case B_GETSEQCHAN:
			manager.getSeqChan(task)
		case B_ADDCONN:
			manager.addConn(task)
		case B_GETDATACHAN:
			manager.getDataChan(task)
		case B_GETDATACHAN_WITHOUTUUID:
			manager.getDatachanWithoutUUID(task)
		case B_CLOSETCP:
			manager.closeTCP(task)
		case B_CLOSESINGLE:
			manager.closeSingle(task)
		case B_CLOSESINGLEALL:
			manager.closeSingleAll()
		case B_FORCESHUTDOWN:
			manager.forceShutdown()
		}
	}
}

// newBackward registers a new backward listener keyed by its remote port.
func (manager *backwardManager) newBackward(task *BackwardTask) {
	manager.backwardMap[task.RPort] = new(backward)
	manager.backwardMap[task.RPort].listener = task.Listener
	manager.backwardMap[task.RPort].backwardStatusMap = make(map[uint64]*backwardStatus)
	manager.backwardMap[task.RPort].seqChan = make(chan uint64)
	manager.ResultChan <- &backwardResult{OK: true}
}

func (manager *backwardManager) getSeqChan(task *BackwardTask) {
	if _, ok := manager.backwardMap[task.RPort]; ok {
		manager.ResultChan <- &backwardResult{
			OK:      true,
			SeqChan: manager.backwardMap[task.RPort].seqChan,
		}
	} else {
		manager.ResultChan <- &backwardResult{OK: false}
	}
}

// addConn records a new connection under its sequence number and gives it
// a buffered data channel.
func (manager *backwardManager) addConn(task *BackwardTask) {
	if _, ok := manager.backwardMap[task.RPort]; ok {
		manager.backwardSeqMap[task.Seq] = task.RPort
		manager.backwardMap[task.RPort].backwardStatusMap[task.Seq] = new(backwardStatus)
		manager.backwardMap[task.RPort].backwardStatusMap[task.Seq].conn = task.BackwardSocket
		manager.backwardMap[task.RPort].backwardStatusMap[task.Seq].dataChan = make(chan []byte, 5)
		manager.ResultChan <- &backwardResult{OK: true}
	} else {
		manager.ResultChan <- &backwardResult{OK: false}
	}
}

func (manager *backwardManager) getDataChan(task *BackwardTask) {
	if _, ok := manager.backwardMap[task.RPort]; ok {
		if _, ok := manager.backwardMap[task.RPort].backwardStatusMap[task.Seq]; ok {
			manager.ResultChan <- &backwardResult{
				OK:       true,
				DataChan: manager.backwardMap[task.RPort].backwardStatusMap[task.Seq].dataChan,
			}
		} else {
			manager.ResultChan <- &backwardResult{OK: false}
		}
	} else {
		manager.ResultChan <- &backwardResult{OK: false}
	}
}

// getDatachanWithoutUUID resolves the remote port from the sequence number
// alone, for callers that don't know the port.
func (manager *backwardManager) getDatachanWithoutUUID(task *BackwardTask) {
	if _, ok := manager.backwardSeqMap[task.Seq]; !ok {
		manager.ResultChan <- &backwardResult{OK: false}
		return
	}
	rPort := manager.backwardSeqMap[task.Seq]
	if _, ok := manager.backwardMap[rPort]; ok {
		manager.ResultChan <- &backwardResult{
			OK:       true,
			DataChan: manager.backwardMap[rPort].backwardStatusMap[task.Seq].dataChan,
		}
	} else {
		manager.ResultChan <- &backwardResult{OK: false}
	}
}

// closeTCP tears down a single connection identified by sequence number.
func (manager *backwardManager) closeTCP(task *BackwardTask) {
	if _, ok := manager.backwardSeqMap[task.Seq]; !ok {
		return
	}
	rPort := manager.backwardSeqMap[task.Seq]
	manager.backwardMap[rPort].backwardStatusMap[task.Seq].conn.Close()
	close(manager.backwardMap[rPort].backwardStatusMap[task.Seq].dataChan)
	delete(manager.backwardMap[rPort].backwardStatusMap, task.Seq)
}

// closeSingle tears down one listener and all of its connections.
func (manager *backwardManager) closeSingle(task *BackwardTask) {
	manager.backwardMap[task.RPort].listener.Close()
	close(manager.backwardMap[task.RPort].seqChan)
	for seq, status := range manager.backwardMap[task.RPort].backwardStatusMap {
		status.conn.Close()
		close(status.dataChan)
		delete(manager.backwardMap[task.RPort].backwardStatusMap, seq)
	}
	delete(manager.backwardMap, task.RPort)
	for seq, rPort := range manager.backwardSeqMap {
		if rPort == task.RPort {
			delete(manager.backwardSeqMap, seq)
		}
	}
	manager.ResultChan <- &backwardResult{OK: true}
}

// closeSingleAll tears down every listener and connection.
func (manager *backwardManager) closeSingleAll() {
	for rPort, bw := range manager.backwardMap {
		bw.listener.Close()
		close(bw.seqChan)
		for seq, status := range bw.backwardStatusMap {
			status.conn.Close()
			close(status.dataChan)
			delete(manager.backwardMap[rPort].backwardStatusMap, seq)
		}
		delete(manager.backwardMap, rPort)
	}
	for seq := range manager.backwardSeqMap {
		delete(manager.backwardSeqMap, seq)
	}
	manager.ResultChan <- &backwardResult{OK: true}
}

func (manager *backwardManager) forceShutdown() {
	manager.closeSingleAll()
}
STACK_EDU
# import pandas
import nltk
from pandas import DataFrame, Series, read_csv, read_pickle
from re import sub
from nltk.stem import wordnet
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk import pos_tag
from sklearn.metrics import pairwise_distances
from nltk import word_tokenize
from nltk.corpus import stopwords
from clean_master_data import DataCleaner
# This is for the autocorrect functionality
from textblob import TextBlob
# This is for the Named Entity Recognition functionality
import spacy
import en_core_web_sm
from random import randint


class LaurAI:
    """A chatbot that reads in data and, given context, will produce a
    response based on the trained data"""

    def __init__(self, data, use_cleaned_data=True):
        self.data = data[["comment", "response"]]
        self.data_cleaner = DataCleaner()
        self.cleaned_data = DataFrame(columns=["Question", "Answer"])
        # use previously cleaned data if available
        if use_cleaned_data:
            self.cleaned_data = read_pickle("data/master_data_cleaned.pkl")
        if len(self.cleaned_data) != len(self.data):
            # if the data does not match, retrain
            print("New data found. Please wait as this data is processed")
            self.cleaned_data = self.data_cleaner.clean_data(self.data)
            # to improve speed, save to the master cleaned file
            self.cleaned_data.to_pickle("data/master_data_cleaned.pkl", protocol=4)
        self.finalText = DataFrame(columns=["Lemmas"])
        self.c = CountVectorizer()
        self.bag = None

    def clean_line(self, line):
        '''Make the line lowercase and remove anything that is not a
        lowercase letter or a space'''
        return sub(r'[^a-z ]', '', str(line).lower())

    def tokenize_and_tag_line(self, line):
        '''Tokenize the line, then part-of-speech tag the tokens'''
        return pos_tag(word_tokenize(line), None)

    def create_lemma_line(self, input_line):
        '''Lemmatize one tokenized and tagged line'''
        lemma = wordnet.WordNetLemmatizer()
        # accumulate the lemmas for the current line
        line = []
        for token, ttype in input_line:
            checks = ["a", "v", "r", "n"]
            if ttype[0].lower() not in checks:
                ttype = "n"
            line.append(lemma.lemmatize(token, ttype[0].lower()))
        return {"Lemmas": " ".join(line)}

    def create_lemma(self):
        '''Create lemmas for the cleaned data (a lemma is the base form
        of a word)'''
        lemmas = []
        for j in self.cleaned_data.iterrows():
            lemmas.append(self.create_lemma_line(j[1][0]))
        self.finalText = self.finalText.append(lemmas)

    def create_bag_of_words(self):
        '''Create a bag of words and save it in a dataframe with the same
        indices as the master data'''
        self.bag = DataFrame(
            self.c.fit_transform(self.finalText["Lemmas"]).toarray(),
            columns=self.c.get_feature_names(),
            index=self.data.index)

    def askQuestion(self, context):
        '''@param context: a string context given by the user
        Output a string response to the context: compute the most similar
        known context to the input and return the response stored for it.'''
        # correct the given input
        context = self.autocorrect(context)
        # Remove all "stop words"
        valid_words = []
        for i in context.split():
            if i not in stopwords.words("english"):
                valid_words.append(i)
        # Clean the input and get tokenized and tagged data
        valid_sentence = self.tokenize_and_tag_line(
            self.clean_line(" ".join(valid_words)))
        lemma_line = self.create_lemma_line(valid_sentence)
        try:
            index = self.determine_most_similar_context(lemma_line)
            if index != -1:
                # respond with the response to the most similar context
                return self.data.loc[index, "response"]
            # Otherwise respond with one of the tokens from the given context
            nlp = en_core_web_sm.load()
            nouns = nlp(context)
            # Pick a random token from the parsed input
            noun = nouns[randint(0, len(nouns) - 1)]
            return "Sorry :,( I don't know what " + str(noun) + " is!"
        except KeyError:
            # an unknown word was passed
            return "I am miss pwesident uwu"

    def autocorrect(self, text):
        # Create the named-entity recognizer
        nlp = en_core_web_sm.load()
        # Find all of the nouns in the input string
        nouns = nlp(text)
        finalText = ""
        for i in text.split(" "):
            # Autocorrect breaks on nouns, so only correct non-nouns
            if i not in str(nouns):
                finalText += str(TextBlob(i).correct()) + " "
            else:
                finalText += i + " "
        return finalText

    def determine_most_similar_context(self, lemma_line, similarity_threshold=0.05):
        '''@param lemma_line: a dictionary of lemmas from the input
        Return the index of the datapoint with the most similar context to
        the one given, or -1 if nothing is similar enough.'''
        # one-row dataframe of zeros representing the input lemmas
        valid_sentence = DataFrame(0, columns=self.bag.columns, index=[0])
        # set columns to 1 for words in the lemma line
        for i in lemma_line["Lemmas"].split(' '):
            if i in valid_sentence.columns:
                # laur.ai recognizes the word, so count it
                valid_sentence.loc[:, i] = 1
            else:
                # fall back to WordNet synonyms with a reduced weight
                found = False
                for syn in wn.synsets(i):
                    for name in syn.lemma_names():
                        if name in valid_sentence.columns:
                            valid_sentence.loc[:, name] = 0.1
                            found = True
                            break
                    if found:
                        break
        # find the cosine similarity between the input and every known context
        cosine = 1 - pairwise_distances(self.bag, valid_sentence, metric="cosine")
        # prepare the data to be used in a series with the data's index
        cosine = Series(cosine.reshape(1, -1)[0], index=self.data.index)
        # if no datapoint is similar enough, the input is not recognized
        # and the caller responds with a predefined message
        if cosine.max() < similarity_threshold:
            return -1
        # if multiple indices share the maximum value, pick one at random
        max_index = cosine[cosine.values == cosine.max()].index
        i = randint(0, len(max_index) - 1)
        return max_index[i]


print("Please wait as Laur.AI loads")
data_master = read_csv("data/master_data.csv")
laurBot = LaurAI(data_master)
# First we clean the data so it is all lower case, without special
# characters or numbers. We then tokenize the data, splitting each phrase
# into words, and tag each word with its type. Then we lemmatize, which
# means converting each word into its base form.
laurBot.create_lemma()
# Now we can create the bag of words
laurBot.create_bag_of_words()
# Then we can ask questions
print("Ask me anything :)")
print("Control C or Type \"Bye\" to quit")
while True:
    context = input("> ")
    if context.lower() == "bye":
        print("bye :))")
        break
    else:
        response = laurBot.askQuestion(context.lower())
        print(response)
print("Thank you for talking to Laur.AI")
STACK_EDU
How can I tell (easily) if my power supply is regulated/switched or unregulated/nonregulated? I found this great blog post about unregulated vs. regulated and switched power supplies. I've got a specialized need for a regulated 12v power supply and a handful of wall-warts and bricks lying around that I would like to re-use if possible. I have a volt-meter and a handful of basic electronic bits and wires. But I haven't found anything that would help me identify the type of power supply. Why the down vote? This question seems fine. Don't mind Leon, he says that about almost everything that gets posted. Using your voltmeter, just measure the output of the wall-wart without any load. You can generally stick one probe into the middle of the connector, and hold the other against the outside. With a few exceptions, the middle is positive, so use the red lead there, and use the black lead on the outside shell. Regulated supplies, without any load, should measure very close to the target voltage of 12v. Unregulated supplies will generally have a no-load voltage anywhere from a couple of volts to several volts higher. If they measured 12v without any load, they would have no headroom to take care of the drop due to the load. Works great! Looks like I went out to get new power supply bricks for no reason! Here is an EEVBlog video (episode #594) on measuring PSU ripple: https://www.youtube.com/watch?v=Edel3eduRj4 There are two basic differences between regulated and unregulated power supplies: ripple and output voltage variation. Both these things can be measured with an ordinary multimeter. First, measure the output voltage of the supply with no load with the meter set to DC. Record this as the no-load voltage. Then switch the meter to AC and record that as the no-load ripple. Second, put a load on the supply and make the same measurements again. The load should not add any of its own noise. A resistor would be good, but old-fashioned light bulbs can work too.
The load current should be near the maximum the supply is rated for, but not exceeding it. For example, if the supply is "12V 1A", then you want to draw a little less than 1 amp. Something like a 15 Ω resistor would be good, but keep in mind this resistor needs to be big enough to dissipate the power. In this example, it would dissipate about 10 watts. Enough "12V" lightbulbs to add up to 10 W would work too. In any case, record the DC measurement as the loaded voltage and the AC measurement as the loaded ripple. Well-regulated supplies will have little ripple. Anything over 100 mV is suspect. This is the case whether the supply is loaded or not. An unregulated supply could have a volt or a few of ripple, especially in the loaded case. Regulated supplies actively keep their output voltage the same over a wide range of load currents. If the supply maintains the output within a percent over the load range, then it is almost certainly regulated. Anything more than 5% is suspect for a regulated supply. Of course, ultimately it doesn't matter whether the supply is regulated or not, only what its output voltage does as a function of various conditions. If the output voltage stays reasonably steady with little ripple over the whole load range, it should really not matter to you whether that was achieved by regulation, a low-impedance transformer, or by a dead fish being waved over it in a mystic ceremony during production.
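The load-sizing arithmetic in the answer above can be checked in a few lines (using the answer's "12V 1A" example; the 0.8 A target is just the "a little less than 1 amp" suggestion made concrete):

```python
# Size a test load for a "12V 1A" supply: draw a little less than 1 A
V = 12.0      # rated output voltage (volts)
I_max = 1.0   # rated output current (amps)

R = V / (I_max * 0.8)   # aim for ~0.8 A to stay safely under the rating
P = V**2 / 15.0         # dissipation in the suggested 15-ohm resistor

print(round(R, 1))  # 15.0 -> matches the 15-ohm suggestion
print(round(P, 1))  # 9.6  -> roughly the 10 W quoted in the answer
```

So a 15 Ω resistor rated for at least 10 W (or a string of 12 V bulbs adding up to about 10 W) is a suitable test load.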
STACK_EXCHANGE
Since all the Institute action wrapped up in August and September, I’m back to some database projects that have been asking for my attention for a while. Back in May I posted about optical scanning. First order of business now is to take this project back up again and get rolling. Some backstory: I don’t directly work with our artifact collections, but we’re a very small shop. And since all information is connectable by state site number, it’s in my diabolical master plan to get all agency databases talking to each other. And, in what I’ve come to learn is a big part of the Agile1 philosophy of project management and software development, I wanted to build something as simple as possible that works at a base level. We can increase complexity later. So when there was funding available in the summer to hire folks to inventory boxes, I whipped together a plan with Collections colleagues to inventory boxes one by one (which had never actually been done before), give them each a scannable optical code identity, and rebuild the Access database into something relational and easy to update. We bought a thermal printer for box tags and the inventory folks recorded all site numbers in each box along with the boxes’ location. When incorporated with existing data, we’ll have a good handle on collections inventory. I ran into a hitch, though. Data Matrix encoding seems to be quite proprietary. This hurts my little open heart. I know my employer doesn’t have any more funds earmarked for this project, so I’m determined to do well with what we’ve got and look for open solutions. After some research, I came upon this fantastic paper in Biodiversity Data Journal (an open science journal) on the Makelabels code from Virginia Tech, available on GitHub. The software from the printer manufacturer does have this capability, but a) locating updated software releases from Zebra is a nightmare, and b) the codes look good, but DON’T SCAN. Blast. 
So my next order of business is to generate a bunch of codes that work and begin to tag boxes. After some research and considerations of our fiscal constraints, I decided to use non-adhesive tags and place them into existing sticky sleeves on each box, as well as a duplicate tag inside a polyethylene bag within. I’m also going to dive back in to restructuring the old Access database with inventory information and making things relate. Ultimately the agency would be very well served by a more robust system, but Access will do to meet immediate needs of tracking box locations, loan information, and the like. When this is wrapped up, we’ll be that much closer to being able to integrate all these datasets inside our agency and to really tap into some of the power of this information. Next stop, scanning, OCRing, and indexing the vast collection of nonstandard artifact catalogs. Visualizations, keyword queries, endless possibilities. A person can dream, right? - Agile side note: getting into research on project management strategy, I’ve come to find that I have pretty good intuition on how to set these kinds of projects up. Small, workable chunks, lots of demos, constant reworking. And this is totally obvious and boring to people who do anything tech related, but keep in mind that I’m an archaeologist in a bureaucracy, so, novel!
OPCFW_CODE
Update parent POM, plugins, LICENSE
- Update Apache parent POM to 24
- Take advantage of enforcer rules built in to the Apache parent POM
- Use minimalJavaBuildVersion and minimalMavenBuildVersion properties
- Remove redundant enforcer checks in our POM
- Remove redundant version information and plugin definitions from the Apache POM that aren't overridden
- Update build plugin versions to latest
- Sort BOM dependencies in dependencyManagement before others (a change in behavior with the latest sortpom-maven-plugin that's not possible to override, but this makes more sense anyway)
- Sort sortpom-maven-plugin's options, ensure blank lines are removed (a change in the default that is overridable), and ensure a space before the closing slash on empty elements to keep it consistent with other plugins that update the POM
- Update spotbugs-related

Random issues (and minor Random tweaks)
- Always assign SecureRandom objects to SecureRandom variables, so spotbugs doesn't flag them as insecure Random usages
- Allow SecureRandom objects to be reused using a static final instance for many classes, to avoid one-off object uses (usually private, except for a public instance for sharing across ITs)
- While fixing Random-related spotbugs issues, apply naming consistently
- Remove use of the explicit SHA1PRNG implementation of SecureRandom, preferring the non-blocking native implementation (the default), and relying on users to configure their SecureRandom provider through Java security settings if they want something different from the default
- Remove incorrect attempts to seed SecureRandom objects with a predictable seed (the SecureRandom default implementation isn't predictable, even with a specific seed, since it uses the OS's native random source)
- Remove unneeded spotbugs warnings suppressions
- Use a fixed-length stream of random numbers in several places where a loop was used to iterate a fixed number of times and the loop variable wasn't needed
- For the rare cases where we use a predictable random with a known seed for testing, pass the seed in the constructor to avoid useless initialization steps that do the initial seeding only to reinitialize immediately with a call to setSeed

Fix pom/license issues related to micrometer (re #2305)
- Move metrics-related dependency versions and transitive dependency exclusions into the project's parent pom's dependencyManagement section
- Update LICENSE to include CC0 artifacts used for metrics dependencies
- Remove the property for the micrometer version that is only used once, for the micrometer BOM

Other
- Rename incorrect filename RolllingStatsTest.java to RollingStatsTest.java to match the class name RollingStatsTest
- Update AuthenticationTokenTest to use IntStream.allMatch to loop until a byte array is generated that isn't all zeros, instead of using assertFalse, which has a (rare) chance of failing the test unnecessarily

LGTM - seeing all of the places where we create an instance of SecureRandom - would there be a significant benefit if a ServerContext created an instance and then it was reused anywhere that a random number was needed and context was available - possibly as a follow-on PR?

So, I thought about that... but there's a risk of having things block if the Java security provider for SecureRandom isn't thread-safe. So, I chose to limit the amount of sharing of these instances. If the implementation is thread-safe, there are just a few extra objects hanging around in the JVM, but if it's not thread-safe, the scope of any potential contention is limited.
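The static-final sharing pattern described in this PR can be sketched as follows (the `TokenGenerator` class and `newToken` method are illustrative, not actual Accumulo code):

```java
import java.security.SecureRandom;

// Illustrative holder: one reusable SecureRandom per class, declared with
// the SecureRandom type so static analyzers like spotbugs can see it is
// not a plain (insecure) java.util.Random.
class TokenGenerator {
    // A single static final instance avoids one-off object churn; the
    // default SecureRandom provider is safe for multithreaded use, though
    // (as the review discussion notes) sharing is deliberately kept
    // narrow in case a non-default provider contends under load.
    private static final SecureRandom random = new SecureRandom();

    static byte[] newToken(int length) {
        byte[] token = new byte[length];
        random.nextBytes(token); // default non-blocking native provider
        return token;
    }
}
```

Note there is no `setSeed` call and no explicit `SHA1PRNG` request, matching the changes above: the default implementation draws from the OS's native random source.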
GITHUB_ARCHIVE
Am I delusional? - Hidden Fun Stuff I might not be alone with this ... feeling or ... let's call it a condition for now. Yet it might affect everyone differently; And that in a way that has us be pissed off at each other for some reason. I so for my part keep coming back to the Matrix Phenomenon thing - wondering about why, for all I know, nothing has come of it so far. I keep wondering about what people might think, what theories there are - but at the end of the day I conclude that it's all nonsense given a) the amount of stuff I've done and b) the fact that it's relatively easy to debunk my claim. And there I see one of those ... nonsense issues. Like, say, it wouldn't be worth the effort to debunk me. That sounds reasonable - until you see it from my end, which is where I have to say that you CAN'T debunk me - because there's nothing to be debunked. And that's the kind of stuff that drives me nuts. I have to believe, somehow, that people are stuck on something along those lines - and I'm so used to nothing happening that it would surprise me if that were to change. Legitimately. Except ... that ... well ... 'any day now'. ... "Except today" ... I suppose. But more to the point am I thinking of that picture of that cloud I shared as opener to "the Experiment". There so is an event that tells a conclusive story. So, I'm not delusional - it's just that these kinds of "freak accidents" keep happening to me - telling a kind of story that lines up with the stuff I believe in ... but, because I seem to be the only one believing these things ... it's ... so far just stuck there in my own bubble. For all I know. And so the experiment didn't change much about it. I may feel a certain way about things - but at the end of the day I might just be suggestible to my own stuff/nonsense, as it were. While I have an objectively real example, but am not sure of the things in between the visible, I'm lost not knowing what to think of any of it. 
And that's pretty much what I think ... makes up a huge part of the mess we're in - where, everyone has different "anchors" in reality - and beyond that some are more and others less stubborn about the uncertainties in-between. The Magic Book So, one thing occurred to me after I had bound my first book. It felt good to have it. Another thing occurred to me after I had bound my second book. And that was ... that this book ... is different. Or so it seems. It would seem as though this book contains the power of the Matrix, we might say. In as far as I am the anomaly - regarding the Matrix Phenomenon - this book, so to speak, contains enough of my identity, relative to God, for it to ... effectively hold a part of that "spark". I am however not sure how far that goes or what it entails. And for reasons briefly touched upon above - there's this, let's call it: "Adversarial Reasoning" - the brief of which is: Closed-minded narration and engagement, hijacking of the topics to lend credibility to any counter narrative (i.e.: Demanding a counter to the counter, thus "invalidating" the original statement) - as to so generally evade the issue while pushing an alternative narrative that isn't in and of itself valid, but accepted as a counter narrative to the one provided. Such and such. And to that end ... there's no winning the debate. Not if it were up to "them". To say, what I have to say about it, what my narrative holds - would be irrelevant because it's just one more thing to ignore. And that is the brief of it. I have a sentence and the rest is chaos - give or take. And I've been thrown off by that - in my own reasoning - as I had to learn that ... there's no way to "gotcha" people who are subscribed to stupid takes. And I've been saying it for a while now; And it always kinda hurt to do so; But at the end of the day, it's best to then just ignore them. It hurt because it's against ... the spirit of producing an open and inclusive society. 
At least in the first instance of the principle. So, obviously there is nuance to that, but for what I'm concerned about here - there are those that have to be canceled from that realm of reasoning. With that in mind, let's return to the topic. Given all the possibilities I've come to think of, as for what people might be making of it, I have to stand in for my own. That other stuff is potentially going to pop up anyway - and to maintain a strong focus on the actual truth seems to be a winning strategy. But ... here's a little ... bit of a spoiler I guess: For the Matrix Phenomenon magic to work, your mind/cognition has to be a part of it. So, let's say a "bleep bloop" sound is happening - your mind would generally recognize it as pertaining to what- or wherever it comes from. People who fight with psychosis or delusions ... might recognize it as pertaining to something totally different; Per chance far outside of the realms of reason and rationality. Or you're immersed in some thoughts ... and the bleep bloop comes as a distraction. Maybe like a smoke-bomb because whatever it seemed to "confirm" is now "tainted". For me however, to have a free will and be 'the Anomaly', my normal cognitive functions need to be warped to some extent; Such that my thoughts and impulses line up with ... "the wave" let's call it. And here's the thing: Whenever I write of that sort of thing, I need you to understand that God can work through us, without us even recognizing that. And I suppose it's a huge point of contention - especially for armchair psychologists that wanna scream delusion at everything that doesn't meet their standards of sanity. And I guess that's however so the thing now. "The wave" - or "what it's like". So that the bleep bloop will simply pertain to the book, perchance. My hope then is that, given enough exposure, people will have a personally lived experience to talk about the matrix phenomenon from. Maybe it even works to do some matrixing effects. 
You'll have to check that out for yourself. With that lived experience, you can be scientifically cocky when it comes to challenging a conclusion, or whatever. And I'm sure there are plenty of smart people who ... maybe even already know that. People who came to the right conclusions because critical thinking actually works. In other words: Of course you'll need something for the effect to happen. Some music, some show or movie - whatever. And with that, you'll then make the corresponding experiences. One being, that the effect is strong enough to imply some kind of live interactive meta-power thing going on; But because the effect is dependent on happenstance rather than an actual interactive environment - that bubble is going to pop every so often. Something you would want to get a handle on is your own independence versus a growing dependency on environmental factors. So, to come back to the bleeps and bloops - as the magic does its thing, you might effectively start to hop onto the wave and in a sense ... try to anticipate the bleeps and bloops; As perhaps based on some inner need to have it be confirmed. And it's not entirely wrong. When going for matrixing effects for instance, there is ... some way to feel for the convergences. For sure. The timing then is simply a matter of "the divine jolt" let's call it. However, it is still ... a noobish mistake ... because the magic doesn't happen, per se, when you try to anticipate the bleeps and bloops. It happens when you '(re)claim your independence'. So, when you are at your "you-est" - when you defy what you think the wave is - effectively - so that the divine has the chance to "gotcha" on that end. Something else I've noticed so far is, that this effect may work like a sponge for bad mojo. That, because the book will try to maintain its top spot, thus sending all things unworthy of it into some negative space. 
As for the Ultimate Probability What I have to notice is, that thinking of the book is enough for these things to start happening. So, you don't even have to print it. But ... obviously ... that's where things be going. So, it's a teaser, of sorts, we might say. Teasing you to get the real deal for an enhanced effect. In that regard, strange things have happened while I was binding the XNG+DFA combo. Well. I'm not sure if it's generally an XNG thing, but it has ... like ... thorns, we might say. Say, it's not ... perfect. Anyway, I'd hurt myself - but well. So, I came to the part where the XNG quires got bound to the DFA quires - and everything was fine - but still some skepticism over whether or not the two are suited to be combined that way did grow - to a point where I per chance even felt like ripping the thing apart. And I guess, having the two separate is definitely nice. Because you can individually play around with them. However - these doubts, against combining the two, are like bubbles that grow; Around whatever conceived impurity or what have you. But you can lean against them. And then the opposite will happen. Those bubbles will become like glue rather than pockets of TNT - as it were. Whether you might come to that point of experience or not, it's still worth mentioning I think - as ... well, I guess for once it gives you some close up insight into the kinds of challenges I have to deal with. To not say that whatever I do is basically just ... devoid of difficulty and challenge. And that then also applies to working with God in general. If He does or doesn't do something - that is a He thing - whether we get it or not is secondary. And when things go smooth for a while, it can be a bit unsettling when it doesn't. Or the other way around ... I assume. In closing: You can merge the individual quire PDFs into one PDF ... if you want to print the whole thing in one go. 
Color prints can be somewhat expensive - so I think that eventually it'll be best to pitch into buying printer cartridges to just take it for as far as it goes. And the rest, I suppose, is up to you. Have a nice time! And may the Eternal Peace guide our paths!
OPCFW_CODE
CherryPy is a Python web framework which provides a friendly interface to the HTTP protocol for Python developers. It is also called a web application library. It allows developers to build web applications in much the same way they would build any other object-oriented Python program. This results in smaller source code developed in less time. This framework is mainly for developers who want to create portable database-driven web applications using Python, as it provides Create, Retrieve, Update and Delete functionalities. The basic requirements for installation of the CherryPy framework include: - Python version 2.4 or above - CherryPy 3.0 To install CherryPy, run the following command in a terminal: pip install cherrypy A simple application – a CherryPy application typically looks like this: return "Hello World!" Project to upload a file and read its content – Steps taken to upload a file and read its content using CherryPy: - Create any text file to read, or an existing file can also be used. A Geeks.txt file is used in the program. - Create a user interface that uploads a file from the system. - Write a CherryPy program that reads the content of the file and shows its content. To stop the engine, use the following code: Output (before file upload): Output (after uploading the file):
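The read step in the upload-and-read project above is plain Python once CherryPy has handed the uploaded file to the handler (CherryPy exposes it as an object with .filename and .file attributes). A minimal sketch, with no server needed — the FakeUpload class and read_upload function are hypothetical illustrations, not CherryPy APIs:

```python
import io

class FakeUpload:
    """Hypothetical stand-in for CherryPy's uploaded-file part,
    which exposes .filename and .file attributes."""
    def __init__(self, filename, data):
        self.filename = filename
        self.file = io.BytesIO(data)

def read_upload(part):
    # Read the whole uploaded file and decode it for display,
    # as the article's upload handler would.
    return part.file.read().decode("utf-8")

demo = FakeUpload("Geeks.txt", b"Hello from Geeks.txt")
print(read_upload(demo))
```

In a real handler the same two lines of read-and-decode logic would run against the object CherryPy passes in.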
OPCFW_CODE
I come from a non-technical background (Nursing and Geography) and transitioned into a technical background as a Computer Science major in university, and I am currently studying Software Engineering online at Launch School. Throughout this journey, I pondered the question of whether one needs to be good at mathematics in order to become a great programmer. I have heard this question from many other students as well. It's not a surprise, because if we look at the typical Computer Science curriculum in post-secondary education, it includes many mathematics courses. After being a Computer Science student for a while, I found mathematics and programming very interconnected. After all, mathematicians are the people who invented the concept of the computer and the algorithm. Does this mean one needs to be good at mathematics such as Calculus or Linear Algebra, or mathematics in general, in order to become a great programmer? I do not think so. Let me explain. First, I want to answer the question: what is mathematics? Generally speaking and in my own experience, mathematics taught in school is narrow in scope and technical in character; this is quite different from the nature of the discipline itself. As a result, many students dislike mathematics and are uninspired by it. Yes, there are technical aspects to mathematics that are tedious and boring. However, this technical foundation lays the ground for the true subject of mathematics, which is the creation and study of patterns and structures. With mathematics we can study the beautiful patterns of the universe or create a structured theorem to find the next largest prime number faster. In my opinion, programmers are already mathematicians 😉. Why is that? Programmers create new patterns and structures through algorithms and code. Programmers also read existing code, where they will discover the structure of the problem, untangle its patterns and hypothesize a solution. 
This process is known as computational thinking, or breaking down a problem. This type of thinking is similar in both mathematics and programming, but within the discipline of mathematics we use algebra and within programming we use code. In programming, one must become fluent with the syntax and the rules of the language. I would refer to these as the boring technical aspects, which can be analogous to remembering the logarithm or exponential rules in Algebra or the properties of the derivative in Calculus. In order to do the really fun and rewarding things in programming we need to remember the technical things first! After that, we can be wildly creative in solving complex software engineering problems. Equivalently, once we've become fluent with algebra we can be incredibly clever and creative when writing mathematical proofs. Overall, I do not think one needs to be good at mathematics in order to become a great programmer. Programmers practice computational thinking, which is a skill that becomes better with practice and can be learned and developed outside of the "math/calculus/algebra" world. It takes practice to break down a complex problem into smaller manageable parts. Eventually, it'll start to feel more natural. Practicing the boring or tedious parts of mathematics, such as completing hundreds of algebra exercises, made me familiar with the mathematical language and enabled me to think at a higher level when computing a mathematical proof. This is similar to remembering the rules and syntax of a programming language so it becomes easier to create the algorithm and to code it out. One can think at a higher level because they're already familiar with the technicalities of the mathematical or programming language. 
Without being comfortable with the syntax and rules first, students struggle with what Launch School calls the Two Layer Problem: a challenge that beginners to programming frequently encounter: learning to solve problems while simultaneously memorizing the syntax of a particular language. One way to practice computational thinking is to trace out code. We can be meticulous about what the computer is doing. We can go slowly, step by step in a loop, and write out what is happening on each iteration. Once we have practiced breaking down the structure, we are able to build a solid mental model of what looping does. This can enable us to move faster without writing out the steps every time. Launch School has many wonderful assignments that help their students practice the computational thinking process slowly and thoroughly. Launch School teaches their students how to break down problems by using the PEDAC framework, and I am a big fan of PEDAC. There are really amazing TA-led study sessions at Launch School that can help students practice this skill too. If you're a Launch School student, don't be shy; please take advantage of these resources because the TAs really do want to help you! A final note on how perspective and mindset can help strengthen computational thinking. In my opinion, we can strengthen our computational thinking by practicing. Furthermore, it helps to be in a safe and supportive environment that fosters growth and encourages making mistakes, with plenty of opportunities to learn from our mistakes. I believe that Launch School is a wonderful community for this type of growth and for feeling more confident and comfortable making mistakes in one's programming journey. I hope this post was helpful. If you're new to programming, know that it isn't an unattainable skill or a natural talent that one is born with. It is a skill that can be acquired through time and practice. Just like with everything else, we become better through patience and perseverance. 
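The loop-tracing exercise described above can be made concrete. Here is a small sketch (in Python, purely as an illustration; the numbers are made up) that writes out the state on each iteration, exactly the slow step-by-step habit the post recommends:

```python
# Trace a loop by recording the state at every iteration:
# index, current element, and running total.
numbers = [3, 1, 4]
total = 0
for i, n in enumerate(numbers):
    total += n
    print(f"iteration {i}: n = {n}, total so far = {total}")
print(f"final total: {total}")
```

Writing the trace by hand first, then checking it against the printed output, is one way to build the mental model before trusting yourself to skip the steps.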
Additionally, having a supportive environment can help develop a positive growth mindset which helps with mastering the skill of computational thinking. I wish you the best wherever you may be right now and I hope you have a blast learning how to program! 🚀
OPCFW_CODE
TPM 2.0 Simulator Extraction Script The purpose of this script is to extract the source code from the publicly available PDF versions 01.16 and 01.38 of the Trusted Platform Module Library Specification published by the Trusted Computing Group (TCG). The result of the extraction scripts is a complete set of the source files for a Trusted Platform Module (TPM) 2.0 Simulator, which runs under Windows, Linux, as well as Genode (by applying the appropriate patches). Note: The extraction script also works with a Microsoft Word-based FODT version of the more recent specifications (e.g., version 01.19), which are however only available to TCG members. License: The files of this project are licensed under the BSD 2-Clause License (except where indicated otherwise). Make sure the following packages are installed on your system: patch cmake build-essential python-bs4 python-pip python-dev Also install the Python module "pyastyle" for formatted output: pip install pyastyle Extracting the source code Open a terminal and navigate to the project folder. Edit the configuration settings in the file (FIRMWARE_V1) and change SET = False to SET = True when finished. Create a folder named build and run one of the following commands inside: cmake -G "Unix Makefiles" ../cmake -DCMAKE_BUILD_TYPE=Debug -DSPEC_VERSION=116 cmake -G "Unix Makefiles" ../cmake -DCMAKE_BUILD_TYPE=Debug -DSPEC_VERSION=138 Running this command: - runs the Python script to extract the simulator source code - patches files containing the source code - generates a Makefile used for building the simulator Building and running the simulator - Build the simulator - Run the simulator: (If there are any error messages at startup, restart the simulator) In order to test if the simulator is working correctly, we use IBM's TPM 2.0 TSS. Open a terminal and start the TPM simulator. Open another terminal and navigate to the project folder. Build the TSS: - Run the tests: The following table shows which version of the TPM Simulator works with which version of IBM's TPM 
2.0 TSS. Specification version | Used document type | TSS version | Results 1: The option -116 has to be added to line 88 in /utils/regtests/testaes.sh. 2: The policy tests (-18 for version 755 of the TSS, -21 for version 996 of the TSS) cannot be executed separately. They only work if they are executed with the other tests using the option -a (all) in the TSS. 3: The lines 66-68 in /utils/regtests/initkeys.sh have to be removed. Only the tests which are not for version 138 of the TPM specification can be executed (which tests are affected can be retrieved by calling the TSS with the help argument -h). The tests have to be executed separately by using the option -n$TESTNUMBER with the TSS. 4: The TSS fails when running it the first time, but not in any subsequent run. The clock test fails. IBM's TPM 2.0 TSS was created by Ken Goldman and is licensed under the Berkeley Software Distribution (BSD) License. We'd like to thank Ken for implementing and providing a TSS that also includes test cases, which we could use to verify the extracted source code of the TPM 2.0 simulator.
OPCFW_CODE
package org.hy.common.ftp.junit;

import static org.junit.Assert.assertTrue;

import org.hy.common.StringHelp;
import org.hy.common.ftp.FTPHelp;
import org.hy.common.ftp.FTPInfo;

/**
 * Tests the FTP download functionality.
 *
 * @author ZhengWei(HY)
 * @version V1.0 2012-08-17
 */
public class FTPHelpTest
{

    @SuppressWarnings("unused")
    // @Test
    public void testDownload()
    {
        FTPInfo v_FTPInfo = new FTPInfo();
        v_FTPInfo.setIp("133.64.89.12");
        v_FTPInfo.setPort(21);
        v_FTPInfo.setUser("ftp");
        v_FTPInfo.setPassword("ftp");
        v_FTPInfo.setLocalPassiveMode(false);

        FTPHelp v_FTPHelp = new FTPHelp(v_FTPInfo);
        v_FTPHelp.connect();

        String v_Ret          = null;
        int    v_SucceedCount = 0;
        int    v_FailCount    = 0;

        for (int v_Index=0; v_Index<100; v_Index++)
        {
            v_Ret = v_FTPHelp.download("/share/c1/1/0/20120817/1997/1000016.V3"
                                      ,"C:\\Ftp_Download.test" + v_Index);
            System.out.println("-- " + v_Index + ": " + v_Ret);

            if ( v_Ret == null )
            {
                v_SucceedCount++;
            }
            else
            {
                v_FailCount++;
            }
        }

        v_FTPHelp.close();
        assertTrue(v_Ret == null);
    }

    // @Test
    public void testDownloadToString()
    {
        FTPInfo v_FTPInfo = new FTPInfo();
        v_FTPInfo.setIp("133.64.89.12");
        v_FTPInfo.setPort(21);
        v_FTPInfo.setUser("ftp");
        v_FTPInfo.setPassword("ftp");
        v_FTPInfo.setLocalPassiveMode(false);
        v_FTPInfo.setInitPath("/share/ftp");

        FTPHelp v_FTPHelp = new FTPHelp(v_FTPInfo);
        v_FTPHelp.connect();
        v_FTPHelp.setDataSafe(false);

        String v_FileText = v_FTPHelp.download("hosts");
        v_FTPHelp.close();

        System.out.println(v_FileText);
        System.out.println(StringHelp.hexToBytes(v_FileText));
        System.out.println(new String(StringHelp.hexToBytes(v_FileText)));

        String v_Text = "ab123我爱你456!@";
        System.out.println(new String(v_Text.getBytes()));
        System.out.println(new String(StringHelp.hexToBytes(StringHelp.bytesToHex(v_Text.getBytes()))));
    }

    // @Test
    public void testUpload()
    {
        FTPInfo v_FTPInfo = new FTPInfo();
        v_FTPInfo.setIp("133.64.32.46");
        v_FTPInfo.setPort(21);
        v_FTPInfo.setUser("ftp01");
        v_FTPInfo.setPassword("password01");
        v_FTPInfo.setLocalPassiveMode(false);
        v_FTPInfo.setInitPath("/ftp/file01");

        FTPHelp v_FTPHelp = new FTPHelp(v_FTPInfo);
        v_FTPHelp.connect();

        String v_Ret = v_FTPHelp.upload("O:\\WorkSpace_SearchDesktop\\SearchDesktop\\UltraEdit_3.tar.gz"
                                       ,"/ftp/file01/HY_20130308.txt");
        v_FTPHelp.close();

        System.out.println(v_Ret);
        assertTrue(v_Ret == null);
    }

    public static void main(String [] args)
    {
        FTPHelpTest v_Test = new FTPHelpTest();
        v_Test.testDownloadToString();
    }

}
STACK_EDU
Microsoft Word Typing — Hi, I am looking for new and experienced freelancers interested in working on my project, which includes typing 90 pages of PDF files into Word document format in 7 days with no mistakes in typing. Bid lower for a higher chance on my project. The requirements are a typing speed of at least 60 WPM and decent experience in this field. | Skills: Data Entry, Excel, Word, Copy Typing, Typing | Posted Apr 23, 2018 (4d 22h left)

Microsoft Azure Administration — I am looking for a System Engineer familiar with Microsoft Azure for the following 2 items: 1. Consolidate the resource groups from 2 different subscriptions into 1. 2. Install a FileZilla server or any other SFTP server in a VM in Azure. This includes the actual installation and configuring the network connection. | Skills: Cloud Computing, IIS, Azure, Windows Server | Posted Apr 17, 2018 (Ended)

Microsoft Power App developed to pull users from csv file — I need a Microsoft Power App developed to look up users with different types of Office 365 licenses and whether they are licensed or not. We will need details such as display name, first name, last name, location, title, email address, product licenses, alias, phone number. License classifications: Enterprise E4 with ProPlus = Knowledge; Enterprise E4 without ProPlus = Clinical; no license = Not ... | Skills: Microsoft, Visual Basic for Apps, Microsoft Office, App Developer | Posted Apr 12, 2018 (Ended)

Need to edit 2 files in Microsoft Azure using Cloudapp Azure — You need to update the contents of 2 files. Small changes which need to be done by an expert in Microsoft Azure. Note: These files are in JSP format, and only people who understand Microsoft Azure should apply for this job. 1) A Gandhi Study Center tab has been created under the link: [url removed, login to view]. Some of the contents have been added for testing but work has not ... | Skills: Sharepoint, Azure, Microsoft, ASP.NET, Microsoft SQL Server | Posted Apr 11, 2018 (Ended)

I need Microsoft Access developers — Start with login and security level. | Posted Apr 5, 2018 (Ended)

Help configure Microsoft Outlook — Need help in configuring Microsoft Outlook with multiple profiles. | Skills: Windows Desktop, Microsoft Exchange, Microsoft, Windows Server, Microsoft Outlook | Posted Apr 2, 2018 (Ended)

API for Microsoft Dynamics — We need a programmer with experience in Salesforce who can create statuses so that a user can trigger them, which will change it to a different status. I.e.: Status = Attempting Contact - 1, Attempting Contact - 2, Attempting Contact - 3, Attempting Contact - 4, Attempting Contact - 5, Contacted - 1, Contacted - 2, Contacted - 3, Contacted - 4, Contacted - 5. | Skills: PHP, Microsoft, Dynamics | Posted Mar 31, 2018 (Ended)

I would like to hire a Microsoft HoloLens Developer — I have a project which requires knowledge of HoloLens development and Leap Motion; the project is building an app for HoloLens using Leap Motion. | Posted Mar 24, 2018

Microsoft Visual Studio project which collects serial port data and sends it to MySQL — I need you to develop some software for me. I would like this software to be developed for Windows using .NET, to send data from the serial port to MySQL, and also to visualise the data in a GUI. | Skills: Visual Basic, .NET, Windows Desktop, Software Architecture, MySQL | Posted Mar 24, 2018 (Ended)

(Untitled) — First, please read the attached file for all information about the company. • An approximately two-page write-up. • Using the cases for background information, research the current situation for the company. Do not base your response solely on the data contained in the case. Based upon your research, what is currently the single most important issue (either a threat or an opportunity) fa... | Skills: Research, Technical Writing, Report Writing, Research Writing, Business Analysis | Posted Mar 12, 2018 (Ended)

Microsoft Access Developers — Developing a database for 17.00 thousand vendor data and linking it to their products and our own product database. | Posted Mar 6, 2018 (Ended)

Simple App with Backend on Microsoft Server - 26/02/2018 07:03 EST — I need a simple app that allows tradespeople to log agreements on construction sites in a chat-like layout. Then we have a backend on MS Server so an admin can view and change the data. Off-the-shelf tools are fine for that. We need security as usual, and the app needs to use the camera. If it is possible via a browser app, this is an option. The app has to work on iOS and Android smartphones. | Skills: PHP, Mobile App Development, iPhone, Android, MySQL | Posted Feb 26, 2018 (Ended)

Microsoft Azure Work — Azure Service Vendor Developer for security, secrets, storage, VM and cloud service management in Azure and DevOps. Azure DevOps with strong C#. 7+ years of commercial software development experience. Working experience using and deploying applications in Microsoft Azure. Working experience developing and deploying Azure Resource Manager (ARM) templates. Experience working with both Microsoft Wi... | Skills: Cloud Computing, Azure, Amazon Web Services, Windows Server, Network Administration | Posted Feb 21, 2018 (Ended)

I would like to hire a Microsoft Access Developer to develop an equipment hierarchy with full drag and drop functionality — Trying to create an Access database that can be utilised for development of an asset register; the end product should have the same functionality as the item on this page: [url removed, login to view] | Posted Feb 20, 2018 (Ended)

Microsoft Expression Web Expert Needed Urgently! — I have a site whose URL will be provided in a private message to all freelancers who would like to discuss the project. Please write 'Expression' in your bid so I can have the idea that you are applying with MS Expression Web knowledge. | Skills: .NET, Website Design, Microsoft Expression, HTML | Posted Feb 20, 2018 (Ended)

Project Scheduling Using Microsoft Project — I have a construction project that I would like to budget. I have a breakdown of each task along with duration. I'd like to do a schedule and resource/cost breakdown. There are approximately 50 tasks. There will be a lot of back and forth as we will change dependencies/predecessors and will work to maximize efficiency. | Skills: Project Management, Project Scheduling, Microsoft, Microsoft Office | Posted Feb 9, 2018 (Ended)

Payroll Database on Microsoft Access — Create a payroll database with records of employee salaries, PF/ESIC, [url removed, login to view] days, leave records, etc. | Skills: SQL, Microsoft Access, Database Administration, Database Programming, Database Development | Posted Jan 23, 2018 (Ended)

SAP BPC MICROSOFT VERSION — We have development in SAP BPC Microsoft version. | Posted Jan 22, 2018 (Ended)

Microsoft Dynamics CRM developer — Assist with the technical implementation of Dynamics Sales. | Posted Jan 9, 2018 (Ended)

Microsoft SCCM Skill - Export Data to a REST API — Create a service to read Microsoft SCCM data with PowerShell and sync the data with a REST API in the cloud. | Skills: Inventory Management, RESTful | Posted Jan 3, 2018 (Ended)
OPCFW_CODE
In SQL Server, how do you replace part of a string value with a blank value and print the columns in the correct order? I am writing a script using SQL Server code, but it's not in SQL Server; it is another data utility tool, and it does not provide any feedback on errors or why I'm getting them. I have a particular column that involves emails. A value from that column, for example, may be <EMAIL_ADDRESS>. I am trying to only return the "sam123" for that column. To do this, I tried using this code: Replace(c.email<EMAIL_ADDRESS>' ') as Email but it still comes back as <EMAIL_ADDRESS>. What am I doing wrong here? And it's not printing in the order I am selecting the columns. If I select the email column before the replace clause and use join contact c on c.personid = p.personid after join [Identity] i with(nolock) on i.identityID = p.currentIdentityID, then it will run. But if I try to select the email column after the replace statement, and keep the join in the same place, it won't run. I am trying to figure out where to add the join and what join to add to make the email column come last. 
I tried left join contact c on c.personid = p.personid after left join [Identity] it on it.identityID = tp.currentIdentityID and it doesn't run: select distinct i.lastname as LINC_DBTSIS_CE020_LST_NME, i.firstname as FRST_NME, sl.number as SCH_NMR, it.lastName as LINC_DBTSIS_SY030_LST_NME, tp.staffnumber as TCHR_NBR, p.studentnumber as ID_NBR, e.grade as GRDE, replace(replace(replace(l.householdPhone,'(',''),')',''),'-','') as F1_PHNE from Person p with(nolock) join [Identity] i with(nolock) on i.identityID = p.currentIdentityID INNER JOIN enrollment e with(nolock) ON e.personID = p.personID AND e.enrollmentID = (SELECT TOP 1 x.enrollmentID FROM enrollment x INNER JOIN schoolyear syx with(nolock) ON syx.endyear = x.endyear AND syx.active = 1 WHERE x.personID = p.personID AND x.endyear = e.endyear and x.active = 1 ORDER BY CASE WHEN x.enddate IS NULL THEN 0 ELSE 1 END,CASE WHEN x.serviceType = 'P' THEN 1 ELSE 2 END, x.startDate DESC) replace(c.email<EMAIL_ADDRESS>' ') as Email Join calendar cl with(nolock) on cl.calendarID = e.calendarID join school sl on sl.schoolID = cl.schoolID left join v_CensusContactSummary l on l.personID = p.personid left join person tp on tp.personID = dbo.fn_gethr_personID(e.enrollmentID) left join [Identity] it on it.identityID = tp.currentIdentityID left join contact c on c.personid = p.personid where l.relationship = 'self' Are you sure it still comes back as the full email? See http://sqlfiddle.com/#!9/9eecb/114817. Result is sam123. Why are you using NOLOCK everywhere, you know what it does, right? No, I tried looking it up but didn't understand. I'm new to this and just trying to modify some scripts. I didn't write it with nolock. Should I remove all the nolocks? This doesn't look like valid SQL. replace(c.email<EMAIL_ADDRESS>' ') as Email looks like you're trying to select it, but it's in the middle of your on clause. Ummm....as posted this query won't even run. You have the replace function in the middle of your joins. 
If you care at all about accuracy, then you should remove those NOLOCK hints. That is not a magic go-fast button; it has some very serious side effects, like randomly returning missing and/or duplicate rows. https://blogs.sentryone.com/aaronbertrand/bad-habits-nolock-everywhere/ That's why I'm trying to make it valid. When I select the email column before the phone column with all the replaces, I change the "left join contact c on c.personid = p.personid" to a normal join and put it after "join [Identity] i with(nolock) on i.identityID = p.currentIdentityID"; it runs fine that way, but I need the email column after the phone column. Then put the email in the list of columns after phone. Thanks Sean, I will try some rearranging. I tried selecting the email column after phone. Like I said, a vendor wrote this; it looks very sloppy and unreadable to me, but I'm also new to SQL. You're the GOAT, Sean :D You would need to rearrange some stuff. I also added a little bit of formatting so this is a lot easier to decipher.
select distinct i.lastname as LINC_DBTSIS_CE020_LST_NME,
    i.firstname as FRST_NME,
    sl.number as SCH_NMR,
    it.lastName as LINC_DBTSIS_SY030_LST_NME,
    tp.staffnumber as TCHR_NBR,
    p.studentnumber as ID_NBR,
    e.grade as GRDE,
    replace(replace(replace(l.householdPhone,'(',''),')',''),'-','') as F1_PHNE,
    replace(c.email<EMAIL_ADDRESS>' ') as Email
from Person p
join [Identity] i on i.identityID = p.currentIdentityID
INNER JOIN enrollment e ON e.personID = p.personID
    AND e.enrollmentID = (
        SELECT TOP 1 x.enrollmentID
        FROM enrollment x
        INNER JOIN schoolyear syx ON syx.endyear = x.endyear AND syx.active = 1
        WHERE x.personID = p.personID AND x.endyear = e.endyear and x.active = 1
        ORDER BY CASE WHEN x.enddate IS NULL THEN 0 ELSE 1 END
            , CASE WHEN x.serviceType = 'P' THEN 1 ELSE 2 END
            , x.startDate DESC
    )
Join calendar cl on cl.calendarID = e.calendarID
join school sl on sl.schoolID = cl.schoolID
left join v_CensusContactSummary l on l.personID = p.personid
left join person tp on tp.personID = dbo.fn_gethr_personID(e.enrollmentID)
left join [Identity] it on it.identityID = tp.currentIdentityID
left join contact c on c.personid = p.personid
where l.relationship = 'self'
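Two string details in the thread are easy to miss: the REPLACE call uses ' ' (a single space) as the third argument rather than '' (an empty string), and hard-coding the domain only works if every address shares it. The string logic can be sketched in Python (the domain 'example.com' below is a stand-in for the redacted address; in T-SQL the domain-independent form would be LEFT(email, CHARINDEX('@', email) - 1)):

```python
# Stand-in domain; the real address is redacted in the thread above.
DOMAIN = '@example.com'

def replace_domain(email: str) -> str:
    """Mimics REPLACE(email, '@example.com', '') -- note '' rather than ' ',
    otherwise a trailing space is left behind."""
    return email.replace(DOMAIN, '')

def local_part(email: str) -> str:
    """Domain-independent alternative: everything before the '@',
    like LEFT(email, CHARINDEX('@', email) - 1) in T-SQL."""
    at = email.find('@')
    return email if at == -1 else email[:at]

print(replace_domain('sam123@example.com'))  # sam123
print(local_part('sam123@example.com'))      # sam123
```

The second form is usually safer when addresses can have different domains.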
STACK_EXCHANGE
Fatal Exception: java.lang.UnsatisfiedLinkError: dlopen failed

Hello, I implemented version 6.0.0 of the library and I got the following crashes:

Fatal Exception: java.lang.UnsatisfiedLinkError: dlopen failed: "/data/user/0/com.gbox.android/_root/data/internal_app/com.company.android-gep493u9x7yrZ9yoy5Lqtw==/lib/arm/libpolarssl.so" is 32-bit instead of 64-bit
at java.lang.Runtime.loadLibrary0(Runtime.java:1087)
at java.lang.Runtime.loadLibrary0(Runtime.java:1008)
at java.lang.System.loadLibrary(System.java:1664)
at com.aheaditec.talsec.security.z1.<clinit>(SourceFile:1)
at com.aheaditec.talsec.security.y1.<init>(SourceFile:5)
at com.aheaditec.talsec.security.y1.a(SourceFile:4)
at com.aheaditec.talsec_security.security.api.Talsec.start(SourceFile:1)

Fatal Exception: java.lang.UnsatisfiedLinkError: dlopen failed: library "libpolarssl.so" not found

When I check my universal APK, libpolarssl.so is present for both x86 and x64. However, I distribute the app as a Bundle in the Google Play Console. I tried adding an exception handler around the call to com.aheaditec.talsec_security.security.api.Talsec.start, but the crash still exists. Does the library support x86 on Android? And do you have any insight into why the crash happens? Thank you.

Hello @fanjavaid, The library should support both x86 and x64. The library (aar) contains libpolarssl.so for all ABIs (x86, x86_64, armeabi-v7a, arm64-v8a). Can you check on which devices the crash occurs? Is it a general problem for all x64 devices, or does it only occur on some? You can also inspect your bundle file (.aab) to see whether the base/lib directory contains libpolarssl.so for all ABIs. We will try to look into this issue, but I'm not sure whether we will be able to reproduce it. Best regards, Talsec Team

Hello @msikyna, here are the details about the devices: The bundle file contains libraries for all ABIs: x86_64, x86, arm64-v8a, armeabi-v7a. I can't reproduce the issue either. Do you have any suggestion how to handle it?
I believe the app crashes when invoking Talsec.start(), but I can't catch the exception; the crash still exists. Thank you.

We have, I guess, a similar issue:

Fatal Exception: java.lang.UnsatisfiedLinkError: dalvik.system.PathClassLoader[DexPathList[[zip file "/data/app/com.signnow.android-9yrhDsO0RpoyrWxEyG4MXA==/base.apk"],nativeLibraryDirectories=[/data/app/com.signnow.android-9yrhDsO0RpoyrWxEyG4MXA==/lib/x86, /system/lib, /vendor/lib]]] couldn't find "libpolarssl.so"
at java.lang.Runtime.loadLibrary0(Runtime.java:1011)
at java.lang.System.loadLibrary(System.java:1657)
at com.aheaditec.talsec.security.b2.<clinit>(SourceFile:1)
at com.aheaditec.talsec.security.a2.<init>(SourceFile:6)
at com.aheaditec.talsec.security.a2.a(SourceFile:4)
at com.aheaditec.talsec_security.security.api.Talsec.start(SourceFile:1)
at com.signnow.app.app.SignNowApp.setTalsecLibrary(SignNowApp.kt:128)
at com.signnow.app.app.SignNowApp.onCreate(SignNowApp.kt:64)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1119)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:5740)
at android.app.ActivityThread.-wrap1()
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1656)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6494)
at java.lang.reflect.Method.invoke(Method.java)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807)

It is our first release with your library. So far it has occurred only on Nexus 5X devices. I researched SO a bit and found a similar issue, but I don't want to try that solution on my current users. Can you provide some information about this error? Or can I safely use that solution? Also, I should mention that I have this block in my build.gradle file.
Because I have some native libraries:

ndk {
    abiFilters.addAll(
        listOf(
            "arm64-v8a",
            "armeabi-v7a",
            "x86",
            "x86_64"
        )
    )
}

Hello @fanjavaid, @rpavliuk, we are looking at the issue. Thank you for the details. Kind regards, Talsec team

@msikyna Can we help you with framing this bug?

Hello @fanjavaid, We've managed to provide a partial solution. The native library libpolarssl.so is no longer required in the freeRASP workflow, so the bug mentioned above shouldn't occur anymore. Nevertheless, there are still other native libraries that are crucial for freeRASP (libsecurity.so and libclib.so) and can also be problematic. I found a lot of open issues regarding this problem (e.g. Xiaomi native library load, App bundle UnsatisfiedLinkError, App Bundle native crash, and many others). It looks like there are still some issues during native library loading. I also stumbled upon two proposed solutions, but I haven't had the chance to test them yet.

1. Disabling bundle ABI split in build.gradle as described here. This will slightly increase the size of a bundle.

bundle {
    abi {
        // This property is set to true by default.
        enableSplit = false
    }
}

2. In the stack trace of the issue above, the application is trying to load a native library from the app directory in the filesystem. That usually means the native libraries are compressed and extracted to the filesystem during installation. You can try modifying your extractNativeLibs configuration option, and if you're using App Bundle, also set android.bundle.enableUncompressedNativeLibs=true along with it.

There is also a GitHub project, ReLinker, that tries to solve issues with native library linking. If you manage to try one of the proposed solutions above, or if you find a new one, please share your findings with us. Best regards, Talsec Team

Thank you so much, we will try to update your library and discuss the suggested solutions.
I will update you after the release of our app. After monitoring this crash for some time, we can conclude that it happens mostly on emulators or already-rooted devices. So it does not really affect us, because we plan to block those devices anyway. But after the update we encountered a new crash; I have created a separate issue for it. Hello, after releasing the updated version with the workarounds above, the UnsatisfiedLinkError issue stopped happening. But, like @rpavliuk, we have a new issue: Fatal Exception: java.lang.RuntimeException: Package manager has died, in the latest Talsec version. Hello @rpavliuk, @fanjavaid, thank you for reporting the issue! We are looking into it. Kind regards, Talsec team
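The exception handler fanjavaid mentions is the usual last line of defense here: wrap the native-library load, catch the link error, and degrade gracefully. On Android that means catching UnsatisfiedLinkError around System.loadLibrary; the same fallback shape can be sketched in Python with ctypes (the library name below is deliberately fake):

```python
import ctypes

def try_load(candidates):
    """Try each candidate shared-library name in order and return the
    first that loads, or None if none does -- the same graceful-fallback
    shape as catching UnsatisfiedLinkError around System.loadLibrary."""
    for name in candidates:
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue  # missing or wrong-ABI library: try the next candidate
    return None

# A deliberately nonexistent name degrades to None instead of crashing:
print(try_load(["libdefinitely_not_present_xyz.so"]))  # None
```

This only illustrates the fallback shape; the real fixes discussed in the thread (disabling ABI split, the extractNativeLibs option, ReLinker) address why the library is missing in the first place.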
GITHUB_ARCHIVE
HIV continues to infect human populations worldwide, emphasizing the need for epidemiological tools that can accurately describe transmission patterns. Thus, the methods we will develop will have specific impact on HIV vaccines; evolution; epidemiological parameters such as the spread of infection in different groups; intervention; and, more generally, the fundamental science of infectious diseases. The overall goal is to understand the relationship between virus evolution and its epidemiological history, and to create epidemiological tools that can make reliable contact tracings and assess changes in epidemic dynamics. We have recently shown that the epidemic rate is inversely correlated with the virus evolutionary rate at the population level. Thus, the specific hypothesis behind the proposed research is that there is a relationship between the speed at which an epidemic moves through a human population and the rate at which the virus evolves in that population. We have observed that there are discrepancies between transmission histories and viral phylogenies, and because inferences about epidemics are based on phylogenetics, it becomes important to understand the limitations of such inferences. Based on this, the specific aims of this proposal are to: 1. Create a model that accurately describes the connection between transmission history and viral phylogeny. Preliminary results suggest that there are "hidden lineages" in viral phylogenies that are involved in transmission events, potentially misleading reconstruction of transmission events. We will especially investigate the effects of the effective population size in the donor, the bottleneck at transmission, and incomplete lineage sorting during transmission and sampling. We aim to estimate meaningful confidence levels on reconstructed person-to-person transmissions, enabling us to explore alternative hypotheses in a statistical framework specifically designed for epidemiological tracking. 2.
Identify the mechanism that correlates epidemic rate and virus evolutionary rate. We will decipher the connection between epidemic rate and viral evolutionary rate. Currently, we have four alternative explanations that may cause the observed correlation between epidemic and evolutionary rate (host immune selection, viral generation time effects, selection during transmission, and recombination effects). We will use different gene sequence data, codon positions, and amino acid signatures to discriminate between these hypothetical explanations. We will use large datasets to develop epidemiological models that include these four hypothetical explanations to investigate their effects at the population level, and also model social networks and epidemic and phylogeographic dynamics. The mathematical methods developed in this project aim to give better inferences of the spread of pathogens, here mainly HIV. At the contact-tracing level, we will estimate meaningful confidence levels on reconstructed person-to-person transmissions, enabling us to explore alternative hypotheses in a statistical framework specifically designed for epidemiological tracking. At the epidemic level, we will develop methods that can follow and signal when important changes in spread patterns occur, including the origin of the infection.

- Romero-Severson, Ethan O; Bulla, Ingo; Leitner, Thomas (2016) Phylogenetically resolving epidemiologic linkage. Proc Natl Acad Sci U S A 113:2690-5
- Yoon, Hyejin; Leitner, Thomas (2015) PrimerDesign-M: a multiple-alignment based multiple-primer design tool for walking across variable genomes. Bioinformatics 31:1472-4
- Romero-Severson, E O; Volz, E; Koopman, J S et al. (2015) Dynamic Variation in Sexual Contact Rates in a Cohort of HIV-Negative Gay Men. Am J Epidemiol 182:255-62
- Romero-Severson, Ethan Obie; Lee Petrie, Cody; Ionides, Edward et al. (2015) Trends of HIV-1 incidence with credible intervals in Sweden 2002-09 reconstructed using a dynamic model of within-patient IgG growth. Int J Epidemiol 44:998-1006
- Bulla, Ingo; Schultz, Anne-Kathrin; Chesneau, Christophe et al. (2014) A model-based information sharing protocol for profile Hidden Markov Models used for HIV-1 recombination detection. BMC Bioinformatics 15:205
- Romero-Severson, Ethan; Skar, Helena; Bulla, Ingo et al. (2014) Timing and order of transmission events is not directly reflected in a pathogen phylogeny. Mol Biol Evol 31:2472-82
- Sargsyan, Ori (2014) A framework including recombination for analyzing the dynamics of within-host HIV genetic diversity. PLoS One 9:e87655
- Immonen, Taina T; Leitner, Thomas (2014) Reduced evolutionary rates in HIV-1 reveal extensive latency periods among replicating lineages. Retrovirology 11:81
- Kilpeläinen, Athina; Axelsson Robertson, Rebecca; Leitner, Thomas et al. (2014) Short communication: HIV-1 Nef protein carries multiple epitopes suitable for induction of cellular immunity for an HIV vaccine in Africa. AIDS Res Hum Retroviruses 30:1065-71
- Romero-Severson, E O; Meadors, G D; Volz, E M (2014) A generating function approach to HIV transmission with dynamic contact rates. Math Model Nat Phenom 9:121-135

Showing the most recent 10 out of 24 publications
OPCFW_CODE
If you come across software called Skype for Business, you should know that it is the same old Lync. So what is the point of changing the software's name? Microsoft wants to combine the security of Lync with the popularity of Skype in a single package. Another way Skype has influenced this latest version of Lync is in its interface: certain elements used in the familiar Skype program have been carried over. Microsoft decided it was not worth designing brand-new buttons for calling, ending a call, and so on, so existing designs are reused. But there are completely new features too. One of them is called the call monitor: when you are working in another application, this feature keeps an active call visible in a small window. It is also important to mention that no features that were available in Lync have been removed. The Lync platform is still used in Skype for Business; it was not switched over to Skype's system. And it is good that the application still runs on the old platform, since that platform is known for its security. But is it really worth using Skype for Business? Let's look at what the application offers. No other program makes it faster to switch from instant messaging to document sharing, and the integration it provides is smooth. The program is also good for bandwidth management: you can limit users, split various streams, whether audio or video, and control data transfer that way. Another benefit is the lower price compared with other options. The application is not free if you want all of its features, but it is a lot cheaper than competing software. Still, not everyone uses Microsoft Windows. If you are among those who chose Linux, does that mean you cannot enjoy what it offers? There is no need to worry. As mentioned, Lync is now known as Skype for Business, and a Linux version is available. That version has all the features of the Windows version. As you can see, the Skype for Business Linux edition is just as viable as the regular edition. If you are looking for a good communication tool for your company, this is one to consider, even if you are using Linux. More information: http://tel.red
OPCFW_CODE
Merge sort

Comment on 976ab2bcd9ad823d5e77703d4a493d2f7743cf9a, file merge_sort/Java/MergeSort.java, line 10. Line contains the following spacing inconsistencies: tabs used instead of spaces. Origin: SpaceConsistencyBear, Section: all.pyjava. The issue can be fixed by applying the following patch:

--- a/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
+++ b/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
@@ -7,7 +7,7 @@
 //Recursively call mergesort, reducing the array size in every call
 void mergesort(int []arr,int l,int h){
 int mid = (l+h)/2; //Split array into two virtual parts
-if(h>l){ //If array contains more than two elements
+    if(h>l){ //If array contains more than two elements
 mergesort(arr,l,mid); //Sort first part recursively
 mergesort(arr,mid+1,h); //Sort second part recursively
 merge(arr,l,mid,h); //Merge sorted arrays into single sorted array

Comment on 976ab2bcd9ad823d5e77703d4a493d2f7743cf9a, file merge_sort/Java/MergeSort.java, line 9. Line contains the following spacing inconsistencies: tabs used instead of spaces. Origin: SpaceConsistencyBear, Section: all.pyjava. The issue can be fixed by applying the following patch:

--- a/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
+++ b/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
@@ -6,7 +6,7 @@
 //Recursively call mergesort, reducing the array size in every call
 void mergesort(int []arr,int l,int h){
-int mid = (l+h)/2; //Split array into two virtual parts
+    int mid = (l+h)/2; //Split array into two virtual parts
 if(h>l){ //If array contains more than two elements
 mergesort(arr,l,mid); //Sort first part recursively
 mergesort(arr,mid+1,h); //Sort second part recursively

Comment on 976ab2bcd9ad823d5e77703d4a493d2f7743cf9a, file merge_sort/Java/MergeSort.java, line 11. Line contains the following spacing inconsistencies: tabs used instead of spaces. Origin: SpaceConsistencyBear, Section: all.pyjava. The issue can be fixed by applying the following patch:

--- a/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
+++ b/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
@@ -8,7 +8,7 @@
 void mergesort(int []arr,int l,int h){
 int mid = (l+h)/2; //Split array into two virtual parts
 if(h>l){ //If array contains more than two elements
-mergesort(arr,l,mid); //Sort first part recursively
+    mergesort(arr,l,mid); //Sort first part recursively
 mergesort(arr,mid+1,h); //Sort second part recursively
 merge(arr,l,mid,h); //Merge sorted arrays into single sorted array
 }

Comment on 976ab2bcd9ad823d5e77703d4a493d2f7743cf9a, file merge_sort/Java/MergeSort.java, line 12. Line contains the following spacing inconsistencies: tabs used instead of spaces. Origin: SpaceConsistencyBear, Section: all.pyjava. The issue can be fixed by applying the following patch:

--- a/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
+++ b/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
@@ -9,7 +9,7 @@
 int mid = (l+h)/2; //Split array into two virtual parts
 if(h>l){ //If array contains more than two elements
 mergesort(arr,l,mid); //Sort first part recursively
-mergesort(arr,mid+1,h); //Sort second part recursively
+    mergesort(arr,mid+1,h); //Sort second part recursively
 merge(arr,l,mid,h); //Merge sorted arrays into single sorted array
 }
 }

Comment on 976ab2bcd9ad823d5e77703d4a493d2f7743cf9a, file merge_sort/Java/MergeSort.java, line 13. Line contains the following spacing inconsistencies: tabs used instead of spaces. Origin: SpaceConsistencyBear, Section: all.pyjava. The issue can be fixed by applying the following patch:

--- a/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
+++ b/tmp/tmpqragilip/merge_sort/Java/MergeSort.java
@@ -10,7 +10,7 @@
 if(h>l){ //If array contains more than two elements
 mergesort(arr,l,mid); //Sort first part recursively
 mergesort(arr,mid+1,h); //Sort second part recursively
-merge(arr,l,mid,h); //Merge sorted arrays into single sorted array
+    merge(arr,l,mid,h); //Merge sorted arrays into single sorted array
 }
 }
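For reference, the algorithm in the reviewed file can be sketched compactly. Below is a Python sketch of the same top-down merge sort over an inclusive index range, indented with spaces as SpaceConsistencyBear demands (note the original comment "more than two elements" really means "more than one element"):

```python
def merge_sort(arr, l, h):
    """Sort arr[l..h] (inclusive), mirroring the recursive structure
    of the reviewed MergeSort.java."""
    if h > l:                       # more than one element in the range
        mid = (l + h) // 2          # split into two virtual parts
        merge_sort(arr, l, mid)     # sort first part recursively
        merge_sort(arr, mid + 1, h) # sort second part recursively
        _merge(arr, l, mid, h)      # merge sorted halves in place

def _merge(arr, l, mid, h):
    """Merge the sorted runs arr[l..mid] and arr[mid+1..h]."""
    left, right = arr[l:mid + 1], arr[mid + 1:h + 1]
    i = j = 0
    k = l
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    while i < len(left):            # drain whichever run remains
        arr[k] = left[i]; i += 1; k += 1
    while j < len(right):
        arr[k] = right[j]; j += 1; k += 1

data = [5, 2, 9, 1, 5, 6]
merge_sort(data, 0, len(data) - 1)
print(data)  # [1, 2, 5, 5, 6, 9]
```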
GITHUB_ARCHIVE
Common HTTP error codes

When you type a URL into the browser's address bar, the resulting requests and responses have a predefined structure, and status codes follow a format of 3-digit numbers. The most common HTTP error codes are 403, 404, 500, 503, and 504.

HTTP 403 Forbidden means you don't have permission to see the page. One common reason for 403 errors is the server maintaining a whitelist of machines that are allowed access; another is that the user's cookie associated with the site is corrupt. Once a valid request, containing a valid password, is supplied, the user can get access to the protected site without any problem.

HTTP 404 Not Found is returned when the requested resource on a web server (usually a web page) doesn't exist. Typos are a common reason for 404 errors; the resource could also have been moved. A related code, 410 Gone, indicates the requested resource has been removed permanently; if the server does not know, or has no facility to determine, whether the condition is temporary or permanent, 404 should be used instead.

HTTP 500 Internal Server Error means the server encountered an unexpected condition which prevented it from fulfilling the request. The good news is that the cause can usually be found in the log files; a corrupt .htaccess file or a too-low memory limit are common culprits, so ensure the error is not being caused by your .htaccess settings.

HTTP 503 Service Unavailable means the server is overloaded and therefore unable to handle requests properly, or is just down for maintenance. If you don't handle scheduled maintenance in the correct way, your users will see this error.

HTTP 504 Gateway Timeout occurs when a proxy server needs to communicate with a secondary web server and does not receive a response in time. As with the other 5xx-level errors, simply retrying often works.

Using error pages properly reduces your bounce rate and improves your search engine ranking.

Directory Server error codes

A separate group of error codes comes from Directory Server:

- 4124: Unknown attribute attribute_name will be ignored. Cause: An attempt was made to set an unknown attribute in the configuration file.
- 4128: Could not open lockfile filename in write mode.
- 4129: Bad configuration file. Solution: Check the reported problems and then restart the server. Also verify that dse.ldif in the configuration directory can be read and was found.
- 4190: Internal search base="base" scope=scope filter=filter failed.
- 4197: MODRDN invalid new superior. Solution: Try again with a valid new RDN.
- 4781: SSL is misconfigured. Solution: Check the client configuration and retry.
- 4793: Failed to generate symmetric key.
- 4800: No key db password was specified.
- 5026: Cannot import. Solution: Delete the attribute and add a new set.
- 5511: Plugin plug-in tries to register an extension for an object type that does not exist.
- Lock creation failed: Directory Server could not create locks due to resource constraints. Solution: Make more resources available to the server and retry.

Some of these errors indicate resource or configuration problems: an export can fail due to a disk space problem, a mapping tree node may not be located, a backend plug-in may need to be upgraded to a newer version of the plug-in API (at least version 3), and the server will refuse to start if it is already running.

© Copyright 2018 computerklinika.com. All rights reserved.
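The practical advice scattered through this section (retry 5xx errors, fix the URL for 404, check permissions for 403) can be condensed into a tiny dispatcher. The category names below are illustrative, not from any standard:

```python
def classify(status: int) -> str:
    """Map the five common HTTP error codes discussed above to a
    rough handling strategy."""
    if status in (500, 503, 504):
        return "retry"       # server-side / often transient: retry with backoff
    if status == 404:
        return "fix-url"     # resource missing: check for typos or moved pages
    if status == 403:
        return "check-auth"  # forbidden: permissions, whitelist, or bad cookie
    return "ok" if 200 <= status < 400 else "give-up"

print(classify(503))  # retry
print(classify(404))  # fix-url
print(classify(200))  # ok
```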
OPCFW_CODE
WMI Error Constants

WMI returns error codes as HRESULT values that indicate the status of an operation. Each constant below is listed with its unsigned decimal value and the equivalent hex HRESULT. Errors in the range 0x80040xxx originate in DCOM; WMI may return this type of error because of an external failure, for example, a DCOM security failure. Some methods in WMI classes can return system and network error codes (64, for example); you can check the definition of these by using the net helpmsg command in the command prompt window. On Windows 2000, Windows NT 4.0, and Windows Me/98/95, use C:\Winnt\System32\wbem\wbemcomn.dll as the message module. Note that error messages returned by WMI may not indicate problems in the WMI service or in WMI providers: the WMI Diagnosis Utility produces a report that can usually isolate the source of the problem and provide instructions on how to fix it, and the report also aids Microsoft support services in assisting you. Several of the constants below are not available on Windows 2000 and Windows NT.

MOF compiler errors:

WBEMMOF_E_EXPECTED_OPEN_PAREN 2147762185 (0x80044009): Expected an open parenthesis.
WBEMMOF_E_UNRECOGNIZED_TOKEN 2147762186 (0x8004400A): Unexpected token in the file.
WBEMMOF_E_INVALID_NAMESPACE_SYNTAX 2147762195 (0x80044013): Invalid namespace syntax; references to other servers are not allowed.
WBEMMOF_E_EXPECTED_CLASS_NAME 2147762196 (0x80044014): Unexpected character in class name; must be an identifier.
WBEMMOF_E_TYPE_MISMATCH 2147762197 (0x80044015): The value specified cannot be made into the appropriate type.
WBEMMOF_E_EXPECTED_ALIAS_NAME 2147762198 (0x80044016): An alias in the form "$name" must follow the "as" keyword.
WBEMMOF_E_INVALID_CLASS_DECLARATION 2147762199 (0x80044017): Invalid class declaration.
WBEMMOF_E_INVALID_INSTANCE_DECLARATION 2147762200 (0x80044018): Invalid instance declaration; it must start with "instance of".
WBEMMOF_E_EXPECTED_DOLLAR 2147762201 (0x80044019): Expected dollar sign; an alias in the form "$name" must follow the "as" keyword.
WBEMMOF_E_CIMTYPE_QUALIFIER 2147762202 (0x8004401A): The "CIMTYPE" qualifier cannot be specified directly in a MOF file.
WBEMMOF_E_OUT_OF_RANGE 2147762205 (0x8004401D): Value out of range.
WBEMMOF_E_INVALID_FILE 2147762206 (0x8004401E): The file is not a valid text MOF file or binary MOF.

WMI service errors:

WBEM_E_PROVIDER_LOAD_FAILURE 2147749907 (0x80041013): COM cannot locate a provider referenced in the schema.
WBEM_E_INVALID_OPERATION 2147749910 (0x80041016): Requested operation is not valid.
WBEM_E_INVALID_QUERY 2147749911 (0x80041017): Query was not syntactically valid.
WBEM_E_PROPAGATED_PROPERTY 2147749916 (0x8004101C): User attempted to delete a property that was not owned; the property was inherited from a parent class.
WBEM_E_UNEXPECTED 2147749917 (0x8004101D): Client made an unexpected and illegal sequence of calls, such as calling EndEnumeration before calling BeginEnumeration.
WBEM_E_ILLEGAL_OPERATION 2147749918 (0x8004101E)
WBEM_E_ILLEGAL_NULL 2147749928 (0x80041028): A value of Nothing/NULL was specified for a property that must have a value, such as one that is marked by a Key, Indexed, or Not_Null qualifier.
WBEM_E_INVALID_CIM_TYPE 2147749933 (0x8004102D): CIM type specified is invalid.
WBEM_E_BUFFER_TOO_SMALL 2147749948 (0x8004103C): Supplied buffer was too small to hold all of the objects in the enumerator or to read a string property.
WBEM_E_INVALID_DUPLICATE_PARAMETER 2147749955 (0x80041043): Duplicate parameter was declared in a CIM method.
WBEM_E_INVALID_FLAVOR 2147749958 (0x80041046): Specified qualifier flavor was invalid.
WBEM_E_INVALID_ASSOCIATION 2147749994 (0x8004106A): Association is not valid.
WBEM_E_INVALID_HANDLE_REQUEST 2147750002 (0x80041072): Handle request was invalid.
WBEM_E_FATAL_TRANSPORT_ERROR 2147750022 (0x80041086): Fatal transport error occurred.

Requirements: Client: Windows Vista, Windows XP, Windows 2000 Professional, Windows NT Workstation 4.0 SP4 and later, Windows Me, or Windows 95.
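Since these constants appear in some references as unsigned decimals and in others as hex HRESULTs, it helps to confirm that both spellings name the same 32-bit value. A minimal Python sketch (the helper name is mine, not part of any WMI API):

```python
def hresult_forms(code):
    """Return the unsigned decimal and hex spellings of a 32-bit HRESULT."""
    unsigned = code & 0xFFFFFFFF  # force the unsigned 32-bit interpretation
    return unsigned, f"0x{unsigned:08X}"

# WBEMMOF_E_EXPECTED_CLASS_NAME appears as both 2147762196 and 0x80044014
print(hresult_forms(0x80044014))  # (2147762196, '0x80044014')
```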
Zooming in a GnuPlot plot created from Octave leads to a segmentation fault

Version: Microsoft Windows [Version 10.0.22000.613]
WSL Version: [X] WSL 2 [ ] WSL 1
Kernel Version: <IP_ADDRESS>
Distro Version: Ubuntu 20.04
Other Software: GNU Octave, version 5.2.0; gnuplot 5.2 patchlevel 8

Repro Steps: Run the following commands in the console on WSL2 Ubuntu 20.04:
1. Install octave: sudo apt install octave
2. Install gnuplot: sudo apt install gnuplot
3. Start octave: octave
4. Type the Octave commands: x = -10:0.1:10; y = sin(x); p = plot(x, y);
Now the GnuPlot window containing the plot should open. Select +z or -z, and click the plot.

Expected Behavior: The plot is zoomed in or out without crashing.
Actual Behavior: Octave crashes.

Diagnostic Logs:
octave:1> x = -10:0.1:10;
octave:2> y = sin(x);
octave:3> plot(x, y);
octave:4> fatal: caught signal Segmentation fault -- stopping myself...
Segmentation fault

Any resolution to this? At least I am not aware of any.

/logs

I'll try to check this at the weekend.

PS C:\WINDOWS\system32> .\collect-wsl-logs.ps1
Directory: C:\WINDOWS\system32
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 5/30/2022 8:05 PM WslLogs-2022-05-30_20-05-45
The operation completed successfully.
The operation completed successfully.
The operation completed successfully.
get-acl : Cannot find path 'C:\ProgramData\Microsoft\Windows\WindowsApps' because it does not exist.
At C:\WINDOWS\system32\collect-wsl-logs.ps1:35 char:1
+ get-acl "C:\ProgramData\Microsoft\Windows\WindowsApps" | Format-List ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (:) [Get-Acl], ItemNotFoundException
+ FullyQualifiedErrorId : GetAcl_PathNotFound_Exception,Microsoft.PowerShell.Commands.GetAclCommand
Log collection is running. Please reproduce the problem and press any key to save the logs.
Saving logs... Press Ctrl+C to cancel the stop operation.
100% [>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>]
The trace was successfully saved. Logs saved in: C:\WINDOWS\system32\WslLogs-2022-05-30_20-05-45.zip. Please attach that file to the GitHub issue.
WslLogs-2022-05-30_20-05-45.zip

Hmm, I'm not seeing anything in the logs. Can you share the output of strace -f [your command] when it crashes?

@OneBlue Do you still need the strace -f output? I have uploaded the strace from my system. Let us know if you need more information. octavestrace.txt.gz

I got the same problem. I followed the steps from https://ubuntu.com/tutorials/install-ubuntu-on-wsl2-on-windows-11-with-gui-support#5-install-and-use-a-gui-package and when I zoom in on the fractal image generated after running juliatest, I get signal 11 in WSL2.
// <copyright file="BlankProvider.cs" company="James Jackson-South">
// Copyright (c) James Jackson-South and contributors.
// Licensed under the Apache License, Version 2.0.
// </copyright>

namespace ImageSharp.Tests
{
    using System;
    using System.Collections.Generic;
    using System.Numerics;

    using ImageSharp.PixelFormats;

    using Xunit.Abstractions;

    public abstract partial class TestImageProvider<TPixel>
        where TPixel : struct, IPixel<TPixel>
    {
        /// <summary>
        /// A test image provider that produces test patterns.
        /// </summary>
        /// <typeparam name="TPixel"></typeparam>
        private class TestPatternProvider : BlankProvider
        {
            static Dictionary<string, Image<TPixel>> testImages = new Dictionary<string, Image<TPixel>>();

            public TestPatternProvider(int width, int height)
                : base(width, height)
            {
            }

            public TestPatternProvider()
                : base()
            {
            }

            public override string SourceFileOrDescription => $"TestPattern{this.Width}x{this.Height}";

            public override Image<TPixel> GetImage()
            {
                lock (testImages)
                {
                    if (!testImages.ContainsKey(this.SourceFileOrDescription))
                    {
                        Image<TPixel> image = new Image<TPixel>(this.Width, this.Height);
                        DrawTestPattern(image);
                        testImages.Add(this.SourceFileOrDescription, image);
                    }
                }

                return new Image<TPixel>(testImages[this.SourceFileOrDescription]);
            }

            /// <summary>
            /// Draws the test pattern on an image by drawing 4 other patterns in the four quadrants of the image.
            /// </summary>
            /// <param name="image"></param>
            private static void DrawTestPattern(Image<TPixel> image)
            {
                // first lets split the image into 4 quadrants
                using (PixelAccessor<TPixel> pixels = image.Lock())
                {
                    BlackWhiteChecker(pixels); // top left
                    VirticalBars(pixels); // top right
                    TransparentGradients(pixels); // bottom left
                    Rainbow(pixels); // bottom right
                }
            }

            /// <summary>
            /// Fills the top right quadrant with alternating solid vertical bars.
            /// </summary>
            /// <param name="pixels"></param>
            private static void VirticalBars(PixelAccessor<TPixel> pixels)
            {
                // top right quadrant
                int left = pixels.Width / 2;
                int right = pixels.Width;
                int top = 0;
                int bottom = pixels.Height / 2;

                int stride = pixels.Width / 12;
                TPixel[] c = { NamedColors<TPixel>.HotPink, NamedColors<TPixel>.Blue };

                int p = 0;
                for (int y = top; y < bottom; y++)
                {
                    for (int x = left; x < right; x++)
                    {
                        if (x % stride == 0)
                        {
                            p++;
                            p = p % c.Length;
                        }

                        pixels[x, y] = c[p];
                    }
                }
            }

            /// <summary>
            /// Fills the top left quadrant with a black and white checker board.
            /// </summary>
            /// <param name="pixels"></param>
            private static void BlackWhiteChecker(PixelAccessor<TPixel> pixels)
            {
                // top left quadrant
                int left = 0;
                int right = pixels.Width / 2;
                int top = 0;
                int bottom = pixels.Height / 2;

                int stride = pixels.Width / 6;
                TPixel[] c = { NamedColors<TPixel>.Black, NamedColors<TPixel>.White };

                int p = 0;
                for (int y = top; y < bottom; y++)
                {
                    if (y % stride == 0)
                    {
                        p++;
                        p = p % c.Length;
                    }

                    int pstart = p;
                    for (int x = left; x < right; x++)
                    {
                        if (x % stride == 0)
                        {
                            p++;
                            p = p % c.Length;
                        }

                        pixels[x, y] = c[p];
                    }

                    p = pstart;
                }
            }

            /// <summary>
            /// Fills the bottom left quadrant with 3 horizontal bars in Red, Green and Blue with an alpha gradient from left (transparent) to right (solid).
            /// </summary>
            /// <param name="pixels"></param>
            private static void TransparentGradients(PixelAccessor<TPixel> pixels)
            {
                // bottom left quadrant
                int left = 0;
                int right = pixels.Width / 2;
                int top = pixels.Height / 2;
                int bottom = pixels.Height;
                int height = (int)Math.Ceiling(pixels.Height / 6f);

                Vector4 red = Rgba32.Red.ToVector4(); // use a real color so we can see how it translates in the test pattern
                Vector4 green = Rgba32.Green.ToVector4();
                Vector4 blue = Rgba32.Blue.ToVector4();

                TPixel c = default(TPixel);
                for (int x = left; x < right; x++)
                {
                    blue.W = red.W = green.W = (float)x / (float)right;

                    c.PackFromVector4(red);
                    int topBand = top;
                    for (int y = topBand; y < top + height; y++)
                    {
                        pixels[x, y] = c;
                    }

                    topBand = topBand + height;
                    c.PackFromVector4(green);
                    for (int y = topBand; y < topBand + height; y++)
                    {
                        pixels[x, y] = c;
                    }

                    topBand = topBand + height;
                    c.PackFromVector4(blue);
                    for (int y = topBand; y < bottom; y++)
                    {
                        pixels[x, y] = c;
                    }
                }
            }

            /// <summary>
            /// Fills the bottom right quadrant with all the colors producible by iterating over a uint and unpacking it.
            /// A better algorithm could be used but it works.
            /// </summary>
            /// <param name="pixels"></param>
            private static void Rainbow(PixelAccessor<TPixel> pixels)
            {
                int left = pixels.Width / 2;
                int right = pixels.Width;
                int top = pixels.Height / 2;
                int bottom = pixels.Height;

                int pixelCount = left * top;
                uint stepsPerPixel = (uint)(uint.MaxValue / pixelCount);
                TPixel c = default(TPixel);
                Rgba32 t = new Rgba32(0);

                for (int x = left; x < right; x++)
                {
                    for (int y = top; y < bottom; y++)
                    {
                        t.PackedValue += stepsPerPixel;
                        Vector4 v = t.ToVector4();
                        ////v.W = (x - left) / (float)left;
                        c.PackFromVector4(v);
                        pixels[x, y] = c;
                    }
                }
            }
        }
    }
}
Is this a memory leak? Why does java.lang.ref.Finalizer eat so much memory? I ran a heap dump on my program. When I opened it in the Memory Analyzer tool, I found that the java.lang.ref.Finalizer for org.logicalcobwebs.proxool.ProxyStatement was taking up a lot of memory. Why is this so?

The "images" link goes to what appears to be your twitter profile. @R.MartinhoFernandes It goes through to an image he has hosted with twitter, I think.

Note that as of Java 18, finalize is marked as deprecated for future removal; see JEP 421 and https://bugs.openjdk.org/browse/JDK-8274609. Either way, one approach to solving this problem is to go through your codebase and replace all uses of finalizers with try-with-resources or cleaners. (And "encourage" the maintainers of all of your dependencies to do the same thing!)

Some classes implement the Object.finalize() method. Objects which override this method need to have their finalize() method called by a background finalizer thread, and they can't be cleaned up until this happens. If these tasks are short and you don't discard many of these objects, it all works well. However, if you are creating lots of these objects and/or their finalizers take a long time, the queue of objects to be finalized builds up. It is possible for this queue to use up all the memory. The solution is:
1. don't use finalize()d objects if you can;
2. (if you are writing the class for the object) make finalize very short;
3. (if you have to use it) don't discard such objects every time (try to re-use them).
The last option is likely to be best for you as you are using an existing library.

Option #4 - avoid using libraries that (over-)use finalizers. A variation of option #1 ;)

Maybe the problem is caused by the Finalizer thread: one class overrides the finalize method and causes the Finalizer thread to deadlock. If you have an object which deadlocks in finalize, there is nothing you can do except fix the bug or use another library. It's not something you can fix externally.

@Peter Lawrey you are right; Object.finalize() caused the memory leak.

I noticed this happening to me on Android when using lots of regexp Pattern/Matcher instances (which are released right after being used). When I run out of memory, I see 50% of my heap being occupied by these FinalizerReferences that point to either my Pattern or Matcher instances (and no other references exist to those objects in the heap map).

From what I can make out, Proxool is a connection pool for JDBC connections. This suggests to me that the problem is that your application is misusing the connection pool. Instead of calling close on the statement objects, your code is probably dropping them and/or their parent connections. Proxool is relying on finalizers to close the underlying driver-implemented objects ... but this requires those Finalizer instances. It could also mean that you are causing the pool to open/close (real) database connections more frequently than is necessary, and that would be bad for performance. So I suggest that you check your code for leaked ResultSet, Statement and/or Connection objects, and make sure that you close them in finally blocks.

Looking at the memory dump, I expect you are concerned where the 898,527,228 bytes are going. The vast majority are retained by the Finalizer object whose id is 2aab07855e38. If you still have the dump file, take a look at what that Finalizer is referring to. It looks more problematic than the Proxool objects.

It may be late, but I had a similar issue and figured out that we needed to tune the garbage collector. The serial and parallel GCs didn't help, and G1 GC was also not working properly, but when using the ConcurrentMarkSweep GC we were able to stop this queue from building up too large.
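The "replace finalizers with cleaners" suggestion above can be sketched with java.lang.ref.Cleaner (Java 9+). The Resource/State names below are illustrative only, not from Proxool or any other library:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of replacing finalize() with a Cleaner; Resource and State
// are made-up names for illustration.
public class Resource implements AutoCloseable {
    private static final Cleaner CLEANER = Cleaner.create();
    public static final AtomicBoolean RELEASED = new AtomicBoolean(false);

    // The cleanup action must not capture `this`, otherwise the object can
    // never become unreachable; a static nested class holds only what's needed.
    private static final class State implements Runnable {
        @Override
        public void run() {
            // release the native handle / JDBC statement here
            RELEASED.set(true);
        }
    }

    private final Cleaner.Cleanable cleanable;

    public Resource() {
        this.cleanable = CLEANER.register(this, new State());
    }

    @Override
    public void close() {
        // deterministic release; the Cleaner only acts as a safety net
        cleanable.clean();
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            System.out.println("using resource");
        }
        System.out.println("released: " + RELEASED.get());
    }
}
```

Unlike a finalizer, the cleanup here normally runs deterministically via close(), so no queue of finalizable objects builds up.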
Add support for GCS provider

Summary
We need to support access management of GCS.

Proposed solution
[ ] Provider configuration for gcs
[ ] GCS client
[ ] GCS resource & access management (TODO: figure out what resources need to be granted & revoked)
[ ] Documentation

Proposed Provider Config:

type: gcs/gcloud_storage
urn: my-google-cloud-storage
allowed_account_types:
  - user
  - serviceAccount
  - group
  - domain
credentials:
  service_account_key: <base64 encoded Service Account Key>
  resource_name: projects/gcs-project-id
resources:
  - type: bucket
    policy:
      id: my_bucket_policy
      version: 1
    roles:
      - id: viewer
        name: viewer
        description: ...
        permissions:
          - roles/storage.objectViewer
      - id: owner
        name: OWNER
        description: ...
        permissions:
          - roles/storage.objectCreator
      - id: admin
        name: ADMIN
        description: ...
        permissions:
          - roles/storage.objectAdmin
  - type: object
    policy:
      id: my_object_policy
      version: 1
    roles:
      - id: viewer
        name: View
        description: ...
        permissions:
          - reader
      - id: owner
        name: OWNER
        description: ...
        permissions:
          - owner

Resource Config for Bucket: { "id": 1, "provider_type": "gcs", "provider_urn": "my-gcs", "type": "bucket", "urn": "my-bucket-name", "name": "my-bucket-name", "details": { "foo": "bar" } }

Resource Config for Object: { "id": 1, "provider_type": "gcs", "provider_urn": "my-gcs", "type": "object", "urn": "folder/sub-folder/file.txt", "name": "file.txt", "details": { "foo": "bar" } }

Can we proceed with the plan that every user is assigned a unique bucket?

@Chief-Rishab Can you elaborate on this unique part here? Do you mean a new bucket for each user? I'm not too sure about object-level access given your comment. Even with the bulk fetch, there is a high possibility that there are thousands of objects/files, and this will increase the table size dramatically with fewer benefits, considering the operating issues. Even if we are able to store them, the resource list could get too long to deliver meaningful value for the end user. Ensuring that only those buckets which have a limited (not too many) number of objects are onboarded onto Guardian is not maintainable. How about fetching only the top-level objects/folders for object-level access, instead of granular objects?

@Chief-Rishab IMO we should keep access control at bucket level only for now.

@bsushmith Yes, I meant a new bucket for each user. Agree on that part that there might be an issue fetching too many objects/files even in bulk fetch. Currently we have enabled both bucket and object level access according to the requirements discussed with other teams, but how should we proceed further? Shall we then drop the idea of object-level access altogether via Guardian?

@ravisuhag sure, will update the changes to keep bucket level only.

Can you elaborate on this unique part here? Do you mean a new bucket for each user?

@bsushmith @Chief-Rishab user behaviour in using GCS is actually out of Guardian's scope; we can't really push users to only use 1 bucket per user to help the access management.

@bsushmith: How about fetching only the top-level objects/folders for object-level access, instead of granular objects?
@ravisuhag: IMO we should keep access control at bucket level only for now.

For objects, what we can do is let the user provide a level/path as a config, e.g. `1,2,3` or `/*`. This will allow us to capture only objects up to that level. Adding a level/path or whitelisting using a prefix still can't make sure the number of objects fetched would be smaller, though. Checked on the GCS list objects API: they have a prefix filter (https://cloud.google.com/storage/docs/json_api/v1/objects/list) which can help us reduce the fetch time when listing objects.

@Chief-Rishab Are we fetching the real object or just its metadata?

@mabdh we just need the metadata to fetch resources.
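The prefix filter mentioned above pairs with the list API's delimiter parameter, which is what makes "top-level folders only" listings possible. A rough local sketch of that collapsing behaviour (the helper name is made up; the real work is done server-side by the objects.list API):

```python
def top_level_entries(object_names, delimiter="/"):
    """Mimic the GCS objects.list `delimiter` behaviour: everything below the
    first delimiter collapses into a single 'folder' prefix; names without a
    delimiter are returned as top-level objects."""
    prefixes, files = set(), []
    for name in object_names:
        head, sep, _ = name.partition(delimiter)
        if sep:  # name contains the delimiter -> belongs under a "folder"
            prefixes.add(head + delimiter)
        else:
            files.append(name)
    return sorted(prefixes), files

names = ["folder/sub-folder/file.txt", "folder/other.txt", "top.txt"]
print(top_level_entries(names))  # (['folder/'], ['top.txt'])
```

This keeps the resource table bounded by the number of top-level entries rather than the total object count, which is the concern raised in the thread.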
What is the structure of the crew documentation used in flight for a commercial aircraft? Common documentation easily found online (using the A320 family as an example): Flight crew operating manual (FCOM, multiple volumes) Flight crew training manual (FCTM) Standard operating procedures (SOP) Quick reference handbook (QRH) (Source) What are they used for? What are the other major documents necessary to fly? Do they differ with manufacturers or airlines? Are they available on paper or display? FCOM: This document has everything the flight crew might need to know about the aircraft during flight. It must be available on board, and pilots refer to it for anything they don't want to, or shouldn't, rely on their memory for, mainly performance calculations and troubleshooting. It is provided by the aircraft manufacturer, but customized according to the equipment of each particular airframe and the SOP of the particular operator. FCTM: This document describes what the flight crew must regularly train for, usually in the simulator. It does not have to be on board. SOP: This document describes the operational side of things. There may be additional restrictions on acceptable clearances, rules for using derated take-off thrust, communication protocols towards the cabin crew, etc. This is created by the operator. The procedural side of things is usually uniform across the fleet, but there are also restrictions pertaining to particular aircraft. Should be available on board. QRH: Basically an excerpt from the above documents containing checklists and tables, so they are easy to find when they are needed, usually when things go wrong. Of course, should be on board. The documents legally required on board differ by jurisdiction. I can only speak to the case here in the US, which falls under the FAA. If the flight is between two jurisdictions, there may be local regulations to comply with at your destination as well.
For example, if leaving the US, all on board must have their passports as well as any pertinent customs forms. What are they used for? Generally these documents are all used for reference. Here in the US it's generally forbidden to read personal material in the cockpit; however, the pilots are free to read documentation relating to the aircraft and operations. They may choose to use time in flight to read up on things. These documents also include emergency procedures that the crew may need to use in the event of a failure. What are the other major documents necessary to fly? Aside from what the plane needs, the crew must also have current medical certificates and pilot's licenses on their persons. If we take a look at FAR 121.135 (a) Except as provided in paragraph (c) of this section, no certificate holder may operate an aircraft unless that aircraft— (1) Is registered as a civil aircraft of the United States and carries an appropriate current airworthiness certificate issued under this chapter; and (2) Is in an airworthy condition and meets the applicable airworthiness requirements of this chapter, including those relating to identification and equipment. (b) A certificate holder may use an approved weight and balance control system based on average, assumed, or estimated weight to comply with applicable airworthiness requirements and operating limitations. (c) A certificate holder may operate in common carriage, and for the carriage of mail, a civil aircraft which is leased or chartered to it without crew and is registered in a country which is a party to the Convention on International Civil Aviation if— (1) The aircraft carries an appropriate airworthiness certificate issued by the country of registration and meets the registration and identification requirements of that country; (2) The aircraft is of a type design which is approved under a U.S.
type certificate and complies with all of the requirements of this chapter (14 CFR Chapter 1) that would be applicable to that aircraft were it registered in the United States, including the requirements which must be met for issuance of a U.S. standard airworthiness certificate (including type design conformity, condition for safe operation, and the noise, fuel venting, and engine emission requirements of this chapter), except that a U.S. registration certificate and a U.S. standard airworthiness certificate will not be issued for the aircraft; (3) The aircraft is operated by U.S.-certificated airmen employed by the certificate holder; and (4) The certificate holder files a copy of the aircraft lease or charter agreement with the FAA Aircraft Registry, Department of Transportation, 6400 South MacArthur Boulevard, Oklahoma City, OK (Mailing address: P.O. Box 25504, Oklahoma City, OK 73125). Do they differ with manufacturers or airlines? Airlines may publish excess documentation and checklists that meet or exceed the manufacturer's specifications; thus, some airlines may have slightly varying procedures. Aircraft makers must publish documentation for the aircraft that includes its operational limitations; you can find a good chunk of the regulations in the FARs here. Specifically, §23.1581 General. (a) Furnishing information. An Airplane Flight Manual must be furnished with each airplane, and it must contain the following: (1) Information required by §§23.1583 through 23.1589. (2) Other information that is necessary for safe operation because of design, operating, or handling characteristics. (3) Further information necessary to comply with the relevant operating rules. (b) Approved information.
(1) Except as provided in paragraph (b)(2) of this section, each part of the Airplane Flight Manual containing information prescribed in §§23.1583 through 23.1589 must be approved, segregated, identified and clearly distinguished from each unapproved part of that Airplane Flight Manual. (2) The requirements of paragraph (b)(1) of this section do not apply to reciprocating engine-powered airplanes of 6,000 pounds or less maximum weight, if the following is met: (i) Each part of the Airplane Flight Manual containing information prescribed in §23.1583 must be limited to such information, and must be approved, identified, and clearly distinguished from each other part of the Airplane Flight Manual. (ii) The information prescribed in §§23.1585 through 23.1589 must be determined in accordance with the applicable requirements of this part and presented in its entirety in a manner acceptable to the Administrator. (3) Each page of the Airplane Flight Manual containing information prescribed in this section must be of a type that is not easily erased, disfigured, or misplaced, and is capable of being inserted in a manual provided by the applicant, or in a folder, or in any other permanent binder. (c) The units used in the Airplane Flight Manual must be the same as those marked on the appropriate instruments and placards. (d) All Airplane Flight Manual operational airspeeds, unless otherwise specified, must be presented as indicated airspeeds. (e) Provision must be made for stowing the Airplane Flight Manual in a suitable fixed container which is readily accessible to the pilot. (f) Revisions and amendments. Each Airplane Flight Manual (AFM) must contain a means for recording the incorporation of revisions and amendments. Are they available on paper or display? Depends on the carrier; a great deal of movement to iPads and the like has seen the digitization of many things. Where do you mention any of the documents specifically listed in the question?
FCOM is apparently the same thing as AFM, but what about the rest? I will do some more research on the ones listed; this was an answer to the "other documents and those legally required" aspect of the question.
Welcome to the world of Python!! Your first step towards the most promising language starts here. - Pimpri Chinchwad - at your home - By webcam. I am a professional Python developer with 2 years of professional working experience in a reputed company. The following will be the contents of my course: 1. Python installation 2. Python syntax 3. Strings and console output 4. Functions in Python 5. Lists and dictionaries in Python 6. Conditional statements 7. Classes in Python 8. Introduction to modules and packages in Python 9. Projects after course completion: a. Tortoise racing game b. Rock Paper Scissors game. Programming is an art which cannot be learned without actual practice on the machine. The feeling we have when we build a logic and it runs perfectly fine is simply amazing. That feeling alone motivates us to keep going and create awesome applications. You are going to experience all of this in my class. Higher Secondary School. I have been teaching Python both online and in classrooms for the past 1 year, and till now I have taught more than 40 students from various schools and colleges. All of them have got more than 75% marks in their Python exams, and some of them even got internships in reputed companies. In my class, 70% will be practical work and 30% will be theory. I will personally check your code, and if you have any doubts, please mail me or just let me know when you want to clear your doubts via mail. All my handwritten code will be provided free of cost to the students. I always try to build a solid base on any topic/subject, so that it won't be difficult for a student to understand higher... Remember, knowledge is power. So I will try my best to give you as much knowledge as I can. My teaching method is very...
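One of the listed course projects, the Rock Paper Scissors game, can be sketched in a few lines of Python; this is my own illustrative version, not the course material:

```python
def rps_winner(p1, p2):
    """Decide one round of Rock Paper Scissors between two players."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    if p1 == p2:
        return "draw"
    # player 1 wins when their choice beats player 2's choice
    return "player 1" if beats[p1] == p2 else "player 2"

print(rps_winner("rock", "scissors"))  # player 1
print(rps_winner("paper", "paper"))    # draw
```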
SwiftUI has been loved ❤️ by iOS developers for its simple and declarative syntax. Except that not many people use it in production 😞. The minimum iOS 13 requirement and the potential for disruptive syntax changes in the near future scared developers away. We all know the best way to learn is to get your hands dirty; and I believe getting your hands dirty in the face of the market is even better. So I took the challenge to build a heart rate monitor app with SwiftUI, and published it in the App Store. Hard things have been learnt, and I'll share them in this series of articles. This article (Part 1) focuses on the fun part: building the UI for the app. When blood flows from your heart to your finger, more light is absorbed by the blood; when blood flows away from your finger, less light is absorbed. This enables us to use the iPhone camera to measure heart rate. A typical measurement process goes through these screens in the UI: 1. The app starts in a ready-to-measure state. 2. When the user taps the Start button, the app turns on the flashlight and waits for the user to put their finger on the camera. 3. When the user's finger is on the camera, the measuring process starts, and a ring shows the progress and current measurement. 4. After the measurement finishes, we present a UI to summarize the result. 5. Some errors are rendered differently. The above-mentioned major screens are represented by a MeasurementState enum. With the enum defined, the states used to render the UI are enclosed in a MeasurementService. This class coordinates camera readings and heart rate calculations, and updates these published states; the UI relies on these states for rendering. We use a ZStack to render these screens. Why not use modals?
Because it's very hard to get fullscreen modals rendered in SwiftUI (pitfall #1): there's nothing like UIModalPresentationStyle.fullScreen in SwiftUI, and using a ZStack gives you full control over how you want to present and render screens. As you can see below, we use measurementService.state to determine whether to display MeasurementView and SuccessView. What is this NavigationConfigurator thing doing? Customizing the navigation bar is already hard in UIKit (if you want translucency, or to get rid of a default gradient, shadow, or separator); customizing the nav bar in SwiftUI is even harder (pitfall #2). The following code shows what's inside NavigationConfigurator: it exposes a chance for the call site to configure the nav bar through the UIKit API. The Fun Part: BpmView The most fun part of SwiftUI is building cool UI and animations in no time. For example, the BpmView renders the current bpm reading, and has a smooth ring progress animation. To build this UI, we place a RingView at the bottom of the ZStack, and place all other elements on top, with the help of a VStack. Note that you can't have more than 10 children in a SwiftUI VStack (pitfall #3), so be mindful of these small limitations here and there. RingView consists of two RingShapes, the second being overlaid on top of the first one. The RingShape is an animatable Shape struct. Now we have all the components in place, let's preview different variants of this BpmView. Not bad! Part 1 Summary In this part, we went through how to create a SwiftUI app in production, which is multi-screen with animatable components. We also went through several tricky parts in the current state of SwiftUI: #1. Render fullscreen UI without modal presentation. #2. Customize the nav bar by hooking into the UIKit API. #3. Don't put more than 10 children in a VStack/HStack/ZStack.
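The RingShape code itself isn't reproduced above, so here is a sketch of what an animatable ring Shape typically looks like in SwiftUI. The `progress` property name and the arc angles are my assumptions for illustration, not the author's exact implementation:

```swift
import SwiftUI

// A sketch of an animatable ring: SwiftUI animates any Shape whose
// animatableData participates in the animation transaction.
struct RingShape: Shape {
    var progress: Double  // 0.0 ... 1.0

    // Exposing progress as animatableData lets withAnimation interpolate it,
    // which is what produces the smooth ring progress animation.
    var animatableData: Double {
        get { progress }
        set { progress = newValue }
    }

    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.addArc(center: CGPoint(x: rect.midX, y: rect.midY),
                    radius: min(rect.width, rect.height) / 2,
                    startAngle: .degrees(-90),                    // start at 12 o'clock
                    endAngle: .degrees(-90 + 360 * progress),
                    clockwise: false)
        return path
    }
}
```

Overlaying an animated RingShape on top of a second one with a fixed progress of 1.0 (drawn in a dimmer color) gives the track-plus-progress look the article describes.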
In the next part, I'll talk about how to have MeasurementService connect with an Obj-C library, actually take heart rate measurements, and feed the results into the UI.
OPCFW_CODE
Using a Java Spark DataFrame to access Oracle over JDBC I find the existing Spark implementations for accessing a traditional database very restrictive and limited. Particularly: use of bind variables is not possible, and passing partitioning parameters to your generated SQL is very restricted. Most bothersome is that I am not able to customize how partitioning of my query takes place; all it allows is to identify a partitioning column and upper/lower boundaries, but only a numeric column and numeric values are allowed. I understand I can provide the query to my database like you do a subquery, and map my partitioning column to a numeric value, but that will cause very inefficient execution plans on my database, where partition pruning (true Oracle table partitions) and/or use of indexes is not efficient. Is there any way for me to get around those restrictions? Can I customize my query better, or build my own partition logic? Ideally I want to wrap my own custom JDBC code in an Iterator that can be executed lazily and does not cause the entire result set to be loaded in memory (like the JdbcRDD works). Oh, and I prefer to do all this using Java, not Scala. Take a look at the JdbcRDD source code. There's not much to it. You can get the flexibility you're looking for by writing a custom RDD type based on this code, or even by subclassing it and overriding getPartitions() and compute(). I was already looking into customizing JdbcRDD, but initially ran aground trying to use Java. However, I gained a little better understanding of Scala (the implementation language for most of Spark, including JdbcRDD), and got the basics of converting between the two languages' types. Now I was able to extend the base RDD into my own, supporting both very granular partitioning control and bind variables! So neat that I feel it should be part of the standard package. Right :) I suggested looking at JdbcRDD because it's (relatively) easy to read and customize.
It's what I've chosen in the past to get something working quickly. However, the future of JDBC in Spark is presumably the DataFrame-oriented implementation in JDBCRDD. If you're looking to improve the "standard package", I would suggest focusing your efforts there :) Yes, thanks. It was actually your answer that made me more persistent in still trying to get it to work. It is surprisingly simple, and it helped me gain a better understanding of how Spark works. Also, I think your recent comment may have answered the next few questions I had in mind. I was wondering how I can gather schema information and send it back to the driver and/or the next tasks efficiently. I think the DataFrame-friendly JDBCRDD might hold the answer? I studied both JdbcRDD and the new Spark SQL data source API. Neither of them supports your requirements. Most likely this will be your own implementation. I recommend writing against the new Data Sources API instead of subclassing JdbcRDD, which became obsolete in Spark 1.3.
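As a rough illustration of the kind of control being asked for, the partition-boundary logic can live outside Spark entirely: generate one parameterized query plus a bind pair per partition, and hand each pair to a task. This sketch is my own; the class and method names are not part of any Spark API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper, not Spark API: derive per-partition bind ranges from
// arbitrary caller-chosen boundaries. Every partition runs the same
// '?'-parameterized SQL with its own {low, high} binds, so Oracle parses the
// statement once and can still prune table partitions or use indexes.
class PartitionQueries {

    // The SQL text shared by every partition.
    static String parameterize(String baseSql, String column) {
        return baseSql + " WHERE " + column + " >= ? AND " + column + " < ?";
    }

    // One {low, high} bind pair per partition, built from n+1 boundaries.
    static List<long[]> bindRanges(long[] boundaries) {
        List<long[]> ranges = new ArrayList<>();
        for (int i = 0; i + 1 < boundaries.length; i++) {
            ranges.add(new long[] { boundaries[i], boundaries[i + 1] });
        }
        return ranges;
    }
}
```

Inside a custom RDD's compute(), each partition would then prepare the statement once, set its two binds, and iterate the ResultSet lazily instead of materializing it.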
STACK_EXCHANGE
What is the connection between GFLOPs value and GPU RAM value for Mask RCNN? Hi, when I run the python tools/get_flops.py configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py --shape 800 command, I get the result below. ============================== Input shape: (3, 800, 800) Flops: 181.09 GFLOPs Params: 44.17 M ============================== When I change the shape size, the Params value does not change but the Flops value increases. But when I set the shape value to 2000, I get Out of Memory (GPU RAM 8 GB). In my mind, I could not establish the connection between the Flops value and GPU RAM. So what is the RAM equivalent of the Flops value for the mask_rcnn_r50_fpn_1x_coco config? Do I need to divide RAM by Flops directly? Is there anyone who can explain this to me? Thank you (sorry for my bad English). @zerok01 flops are related to image resolution; model forward will be called when you run tools/get_flops.py. Thank you for your answer. But what I wanted to ask is the KiloByte or MegaByte equivalent of the Flops value. So for example, can we say 1 GFLOPs equals 1 GigaByte? Is there a formula for this? Actually, this is what I want to learn. @zerok01 FLOPs is Floating-point Operations, MFLOPs is Mega FLOPs, GFLOPs is Giga FLOPs.
The formula you can reference: Pruning Convolutional Neural Networks for Resource Efficient Inference. The code you can reference: the flops_to_string function. The connection between FLOPs and GPU memory cost is not very clear, actually. It depends on the implementation of the model and the PyTorch libraries. Usually, under similar conditions (model input etc.), models with small FLOPs should have small GPU memory cost.
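To make the distinction concrete, here is a rough back-of-envelope sketch (my own illustration, not mmdetection code): for a single stride-1 convolution layer, both FLOPs and activation memory grow with the square of the input side length, but through different formulas, so there is no fixed bytes-per-FLOP conversion factor.

```python
def conv_stats(h, w, c_in, c_out, k=3, bytes_per_elem=4):
    """Rough per-layer numbers for a stride-1, padded k x k convolution.

    FLOPs count multiply-adds; activation memory is just the output
    tensor; parameter memory is just the weights. FLOPs and activation
    bytes both scale with h * w, but by different constants, so one
    cannot be converted to the other with a single factor.
    """
    flops = h * w * c_out * c_in * k * k * 2          # 2 = multiply + add
    act_bytes = h * w * c_out * bytes_per_elem        # output feature map
    param_bytes = c_out * c_in * k * k * bytes_per_elem  # resolution-independent
    return flops, act_bytes, param_bytes

# Doubling the input side quadruples FLOPs and activation memory,
# but leaves parameter memory unchanged (matching the observation above
# that Params stays fixed while Flops grows with --shape).
f1, a1, p1 = conv_stats(800, 800, 3, 64)
f2, a2, p2 = conv_stats(1600, 1600, 3, 64)
assert f2 == 4 * f1 and a2 == 4 * a1 and p2 == p1
```

The out-of-memory at shape 2000 comes from the activation term growing quadratically, not from the FLOPs number itself; FLOPs measure compute, not bytes.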
GITHUB_ARCHIVE
Models, code, and papers for "Shlomo Dubnov": In this paper we explore techniques for generating new music using a Variational Autoencoder (VAE) neural network that was trained on a corpus of a specific style. Instead of randomly sampling the latent states of the network to produce free improvisation, we generate new music by querying the network with musical input in a style different from the training corpus. This allows us to produce new musical output with longer-term structure that blends aspects of the query into the style of the network. In order to control the level of this blending we add a noisy channel between the VAE encoder and decoder, using a bit-allocation algorithm from communication rate-distortion theory. Our experiments provide new insight into relations between the representational and structural information of latent states and the query signal, suggesting their possible use for composition purposes. Automatic music generation is an interdisciplinary research topic that combines computational creativity and semantic analysis of music to create automatic machine improvisations. An important property of such a system is allowing the user to specify conditions and desired properties of the generated music. In this paper we designed a model for composing melodies given a user-specified symbolic scenario combined with a previous music context. We add manually labeled vectors denoting external music quality in terms of chord function, which provides a low-dimensional representation of harmonic tension and resolution. Our model is capable of generating long melodies by regarding 8-beat note sequences as basic units, and shares a consistent rhythm pattern structure with another specific song.
The model contains two stages and requires separate training where the first stage adopts a Conditional Variational Autoencoder (C-VAE) to build a bijection between note sequences and their latent representations, and the second stage adopts long short-term memory networks (LSTM) with structural conditions to continue writing future melodies. We further exploit the disentanglement technique via C-VAE to allow melody generation based on pitch contour information separately from conditioning on rhythm patterns. Finally, we evaluate the proposed model using quantitative analysis of rhythm and the subjective listening study. Results show that the music generated by our model tends to have salient repetition structures, rich motives, and stable rhythm patterns. The ability to generate longer and more structural phrases from disentangled representations combined with semantic scenario specification conditions shows a broad application of our model. We present a model for capturing musical features and creating novel sequences of music, called the Convolutional Variational Recurrent Neural Network. To generate sequential data, the model uses an encoder-decoder architecture with latent probabilistic connections to capture the hidden structure of music. Using the sequence-to-sequence model, our generative model can exploit samples from a prior distribution and generate a longer sequence of music. We compare the performance of our proposed model with other types of Neural Networks using the criteria of Information Rate that is implemented by Variable Markov Oracle, a method that allows statistical characterization of musical information dynamics and detection of motifs in a song. Our results suggest that the proposed model has a better statistical resemblance to the musical structure of the training data, which improves the creation of new sequences of music in the style of the originals. 
With recent breakthroughs in artificial neural networks, deep generative models have become one of the leading techniques for computational creativity. Despite very promising progress on image and short sequence generation, symbolic music generation remains a challenging problem since the structure of compositions is usually complicated. In this study, we attempt to solve the melody generation problem constrained by a given chord progression. This music meta-creation problem can also be incorporated into a plan recognition system with user inputs and predictive structural outputs. In particular, we explore the effect of explicit architectural encoding of musical structure by comparing two sequential generative models: LSTM (a type of RNN) and WaveNet (dilated temporal-CNN). As far as we know, this is the first study of applying WaveNet to symbolic music generation, as well as the first systematic comparison between temporal-CNN and RNN for music generation. We conducted a survey to evaluate our generations and used the Variable Markov Oracle for music pattern discovery. Experimental results show that encoding structure more explicitly using a stack of dilated convolution layers improves performance significantly, and that a global encoding of the underlying chord progression into the generation procedure gains even more. Adversarial Reprogramming has demonstrated success in utilizing pre-trained neural network classifiers for alternative classification tasks without modification to the original network. An adversary in such an attack scenario trains an additive contribution to the inputs to repurpose the neural network for the new classification task. While this reprogramming approach works for neural networks with a continuous input space such as that of images, it is not directly applicable to neural networks trained for tasks such as text classification, where the input space is discrete.
Repurposing such classification networks would require the attacker to learn an adversarial program that maps inputs from one discrete space to the other. In this work, we introduce a context-based vocabulary remapping model to reprogram neural networks trained on a specific sequence classification task, for a new sequence classification task desired by the adversary. We propose training procedures for this adversarial program in both white-box and black-box settings. We demonstrate the application of our model by adversarially repurposing various text-classification models including LSTM, bi-directional LSTM and CNN for alternate classification tasks. Recent approaches in text-to-speech (TTS) synthesis employ neural network strategies to vocode perceptually-informed spectrogram representations directly into listenable waveforms. Such vocoding procedures create a computational bottleneck in modern TTS pipelines. We propose an alternative approach which utilizes generative adversarial networks (GANs) to learn mappings from perceptually-informed spectrograms to simple magnitude spectrograms which can be heuristically vocoded. Through a user study, we show that our approach significantly outperforms naïve vocoding strategies while being hundreds of times faster than neural network vocoders used in state-of-the-art TTS systems. We also show that our method can be used to achieve state-of-the-art results in unsupervised synthesis of individual words of speech. In this work, we demonstrate the existence of universal adversarial audio perturbations that cause mis-transcription of audio signals by automatic speech recognition (ASR) systems. We propose an algorithm to find a single quasi-imperceptible perturbation, which when added to any arbitrary speech signal, will most likely fool the victim speech recognition model.
Our experiments demonstrate the application of our proposed technique by crafting audio-agnostic universal perturbations for the state-of-the-art ASR system -- Mozilla DeepSpeech. Additionally, we show that such perturbations generalize to a significant extent across models that are not available during training, by performing a transferability test on a WaveNet based ASR system.
OPCFW_CODE
Using a function in another function before that first function is defined. When writing an API, I tend to like putting the functions in a top-down order, with the most exposed functions at the top and the helper functions at the bottom. However, when defining functions with var rather than the magic function declaration, a function cannot be used before it's defined. So what about if we have an object called $company and we're defining its methods? Can I safely order my JS in this fashion? var $company = {}; $company.foo = function(x) { $company.bar(x*x); // used in definition, but not called directly - ok? }; // $company.bar(6) // this would produce an error $company.bar = function(x) { alert(x); }; It seems to work in my current version of Firefox, but I'd like to know if it's defined behavior. Are there any versions of IE where this breaks? JS will only look up your function when you call it, so when you are declaring foo, bar may not exist, but it will exist by the time foo is called. Yes you can. Functions are only defined, not executed. The JS engine executes each line of your file: var $company = {}; $company.foo = ...; $company.bar = ...; And later, at $company.foo's execution, $company.bar is defined! This makes sense. I can't find it explicitly written anywhere, though. I did find this other SO answer which talks about it. As the author of the referenced answer and a long-time javascript hacker, I can tell you a bit about how the javascript/ECMAScript spec is written. First there was Netscape's implementation (only code, no spec); then Microsoft copied it as closely as possible (including bugs); then people decided that there ought to be a standard (mainly due to fears that Microsoft would evolve their js engine to be 100% incompatible with everyone else). This process goes on to this day, with features Apple invented becoming a "standard" called HTML5, syntax Microsoft invented becoming part of SVG, etc.
Perhaps due to this, the javascript spec is actually an attempt at documenting, as closely as possible, current implementations of javascript (except the parts the standards committee cannot agree on). Therefore, a lot of the parts where browsers just happen to have exactly compatible behavior are overlooked as obvious and remain undocumented. In the old days we had to write test scripts to figure out how js worked (especially with IE, since we didn't have access to the source code). If all the above sounds a bit scary (writing code in a language you cannot really completely know), it was. Very, very scary. Hah, I was just reading some of your other answers. :P The answer of yours I linked was really well written and I think I get it now. Thanks for the comments! Yes, this works, since no browser (or no JavaScript engine) makes assumptions about what is to the right of a . until it has to evaluate the expression to the left. But many people don't like this kind of "look ahead" and use callback functions instead: $company.foo = function(x, callback) { callback(x*x); } This code is more obvious and more flexible, since it can call almost anything, you can curry it, etc. Thanks. I'm not sure it depends on the . explicitly though, since the following also produces no error: var a = function() { b(); }; var b = function() { alert(1); }; Well, there are two effects here: JavaScript will parse the function body (to check for syntax errors) but it won't make many assumptions about what the individual bits mean. So in a sense, you have the same situation with function bodies as with .: as long as the syntax is OK, JavaScript doesn't care (yet) whether the body/right-hand side makes sense.
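The behavior both answers describe can be verified with a tiny script (my own example, not from the thread): the reference to $company.bar inside foo is only resolved when foo actually runs, so assignment order is irrelevant as long as every assignment happens before the first call.

```javascript
var $company = {};

$company.foo = function (x) {
  // $company.bar is looked up at call time, not at definition time,
  // so it's fine that bar hasn't been assigned yet on this line.
  return $company.bar(x * x);
};

$company.bar = function (x) {
  return x + 1;
};

// Both assignments have run by now, so the forward reference resolves:
console.log($company.foo(3)); // logs 10 (bar(9) === 10)
```

Calling $company.foo before the $company.bar assignment line executes would still throw a TypeError, which is exactly the error the question's commented-out line demonstrates.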
STACK_EXCHANGE
ENVISAT Product Reader API for C. The ENVISAT Product Reader API for C (epr-c-api) is a library of data structures and functions for simple access to MERIS, AATSR and ASAR products, as well as ATSR-2 products stored in the ENVISAT format. You can use them in your programs to retrieve directly the geophysically coded values, e.g. the chlorophyll concentration in mg/m³, in a data matrix. epr-c-api is generic, i.e. it has no instrument-specific functions but utilises the generic ENVISAT product format. However, it is not necessary for a user of the epr-c-api to know the ENVISAT product format - the epr-c-api knows it, and that's sufficient. All a user of the epr-c-api has to know are a few functions and the name of the geophysical variable he wants to get. In fact, they do more: all information stored in ENVISAT products can be retrieved in a unique way, just by requesting it by its name, without worrying too much about where and how it is stored in the world of the ENVISAT product format. Now, here's the tricky bit: there are many different products, and the number of items stored in each product is huge. Therefore, ENVISAT Data Products documentation is included in the BEAM software homepage, which describes the internal structure of each supported product. In order to access a certain item such as a dataset, record, band or flag, you have to consult the data product documentation for the correct name or ID of the dataset, record, band or flag you want to access. epr-c-api is written in pure ANSI C and should compile with every ANSI-conformant C compiler: copy the *.c files contained in the distribution's src folder to your own source folder and include them in your development environment (makefile or IDE), then include epr-api.h in your C source files. Given here are some of the basic concepts of the API. Memory held by a structure of type X is released by a corresponding function void epr_free_X(X). For the type EPR_SRecord*, access functions such as uint epr_get_num_fields(const EPR_SRecord* record) are provided.
By using epr_get_pixel_as_float to access a pixel value of a raster, your code is less dependent on API changes than if you had directly accessed a structure member. epr_get_dataset_id(product_id, "Radiance_1") - but how do you know what name you should use for the data you want to retrieve? There are several possibilities: with epr-c-api itself, you can scan all datasets in a loop and print the names of the datasets and records via the access functions. The same can be done with all fields of a record. The API provides two access types for ENVISAT data. The difference between the two is how they treat the measurement data: Access type (1) returns the data in its native data product structure: as datasets, records and fields. This gives direct read access to the raw data as it is stored in the data product, without any interpretation of the content. For example, the MERIS L1B measurement data set MDS 15 stores chlorophyll, vegetation index or cloud top height in the same byte. The content has to be interpreted depending on the surface type, which itself is coded in MDS 20. To get the true geophysical value one needs to retrieve the proper scaling factors from a certain GADS. Access type (2) decodes all this and provides, for example, a data matrix with the chlorophyll concentration in a float variable in its geophysical units, i.e. mg/m³. The data of the measurement datasets and the tie-point values, i.e. geometry, geo-location and meteorological data, are available by this method. The tie-point data are interpolated to the corresponding image grid. The levels of the hierarchy correspond to the ENVISAT product as follows:
- Product: the ENVISAT product file.
- Dataset: a dataset, e.g. the Main Product Header MPH, or a Measurement Data Set MDS. A dataset contains records; for example, the MERIS L1b MDS1 contains many records with radiance values.
- Record: a single record within a dataset. A record contains many fields. For example, a record within the MERIS L1b MDS1 contains a time-stamp field, a quality flag field and then a field with as many radiance values as contained in the width of the image.
- Field: a field within a record. Fields can be a scalar or a vector. In the example, the time stamp and quality flag are scalars while the radiance field is a vector of the length of an image line.
- Element: optional; if a field is a vector, these are the elements of that vector. In the example each element of the radiance field is the radiance of a certain pixel.
On each level of the hierarchy, functions exist to point to a certain instance of it, e.g. a certain product, dataset or record and so on. Such a function generally requires the name of the item to be retrieved. All possible names can be found in the DDDB, and the names of the most important items are listed here. The function returns an identifier of that instance. The identifier is not the instance but a pointer to it. It is used to get the values of the instance by using access functions. my_product_id = epr_open_product("my_MERIS_product"); returns the identifier which points to the product contained in the file "my_MERIS_product". rad1_id = epr_get_dataset_id(my_product_id, "Radiance_1"); my_product_id is used to get the identifier which points to the dataset. Now, one can call num_rec = epr_get_num_records(rad1_id); to get the number of records which this dataset contains. Finally one can loop over all records and get their fields and elements. See the examples for a complete code example. Working with geophysical data is easier than the basic access. No such deep hierarchy exists. Instead of accessing datasets, so-called bands are accessed. A band directly includes one single geophysical variable. my_product_id = epr_open_product(my_product_file_path); my_chl_id = epr_create_band_id(my_product_id, "algal_1"); returns the identifier to a band containing the chlorophyll product.
Now, the actual data are read into a raster. chl_raster = epr_create_compatible_raster(my_chl_id, ...); status = epr_read_band_raster(chl_raster, ...); chl_raster now contains the data as a two-dimensional matrix of pixels. To get the value of a pixel at a certain index (i,j), one should use the access function: chl_pixel = epr_get_pixel_as_float(chl_raster, i, j); (See also the example ndvi.c for complete example code.) The concept of the raster allows spatial subsets and undersampling: a certain portion of the ENVISAT product is read into the raster. This is called the source. The complete ENVISAT product can be much greater than the source. One can move the raster over the complete ENVISAT product and read in turn different parts - always of the size of the source - of it into the raster. A typical example is processing in blocks. Let's say a block has 64x32 pixels. Then the source has a width of 64 pixels and a height of 32 pixels. Another example is processing of complete image lines. Then the source has the width of the complete product (for example 1121 for a MERIS RR product) and a height of 1. One can loop over all blocks or image lines, read into the raster and process it. It is, of course, also possible to define a raster of the size of the complete product. In addition, it is possible to define a subsampling step for a raster. This means that the source is not read 1:1 into the raster, but that only every 2nd or 3rd pixel is read. This step can be set differently for the across-track and along-track directions. MERIS and AATSR provide many so-called flags, which are binary information indicating a certain state of a pixel. For example, this can be a quality indicator which, if set, indicates that the value of the pixel is invalid. Other examples are a cloud flag, indicating that this pixel is a measurement above a cloud, or a coastline flag.
The flags are stored in a packed format inside the ENVISAT products, but the epr-c-api provides a function to easily access the flags. It returns a bit-mask, which is a byte array that matches the corresponding image raster and is 1 where the flag is set and 0 elsewhere. Even more, this function permits formulating a bit-mask expression to combine any number of flags in a logical expression, and returns the resulting combined bit-mask: bm_expr = "flags.LAND OR flags.CLOUD"; status = epr_read_bitmask_raster(product_id, bm_expr, ..., bm_raster); This is an example to get a bit-mask which masks out all land and cloud pixels. The names of the flags are found in the DDDB. The epr_read_bitmask_raster function reads the flags from the product identified by product_id and stores the resulting bit-mask in the raster bm_raster. See the examples for the complete code. epr-c-api provides the following groups of functions:
- (1) Initialisation: functions for setting up the environment of the API and releasing memory when the API is closed.
- (2) Logging: functions to manage logging information.
- (3) Error handling: functions for determining the behaviour of the API in case of errors, and for getting information about runtime errors.
- (4) Input/Output: opening and closing ENVISAT products and writing data to a file or stdout.
- (5) Basic data access: functions to retrieve raw data as stored in ENVISAT products.
- (6) Geophysical data access: functions to retrieve geophysically interpreted data in a raster matrix.
- (7) Bit masks: functions for generating bit masks from the flags included in ENVISAT products.
Generated on Mon Aug 2 15:24:00 2010. ENVISAT Product Reader C API. Written by Brockmann Consult, © 2002.
OPCFW_CODE
There’s a big question in my mind about K-12 education. The question is which of the following hypotheses about helping disadvantaged children is the best bet: - Disadvantaged children are so far behind by age 5 that there’s nothing substantial to be done for them in the K-12 system. - Disadvantaged children are so far behind by age 5 that they need special schools, with a special approach, if they’re to have any hope of catching up. - Disadvantaged children generally attend such poor schools that just getting them into “average” schools (for example, parochial schools without the severe behavior and resource problems of bottom-level public schools) would be a huge help. My view on KIPP vs. the Children’s Scholarship Fund, for example, hinges mostly on my view of #2 vs. #3. Of course, believing #1 would make me want to avoid this cause entirely in the future. We’ve been examining academic and government literature to get better informed on this question, but we’ve noticed a serious disconnect between what we most often want to know and what researchers most often study. To answer our question, you’d study how students do when they change schools, focusing on school qualities such as class size, available funding, disciplinary records, academic records, and demographics. However, most academic and government studies of voucher/charter programs focus instead on whether a school is designated as “public,” “private,” or “charter.” Three prominent examples: - The New York City Voucher Experiment intended to examine the impact of increased choice (via vouchers) on student achievement; the papers on it (Krueger and Zhu 2003; Mayer et al. 2002; Peterson and Howell 2003) conduct a heated debate over who benefited, and how much (if at all), from getting their choice of school, but do not examine or discuss any of the ways in which the schools chosen differed from the ones students would have attended otherwise.
- “Test-Score Effects of School Vouchers in Dayton, Ohio, New York City, and Washington, D. C.: Evidence from Randomized Field Trials” (Howell et al. 2000), a review of several voucher experiments, also discusses the impact of vouchers without reference to school qualities. - “Apples to Apples: An Evaluation of Charter Schools Serving General Student Populations” (Greene et al. 2003) performs similar analysis with charter schools, looking broadly at whether charter schools outperform traditional public schools without examining how, aside from structurally, the two differ. To be sure, there are exceptions, such as recent studies of charter schools in New York including Hoxby and Murarka 2007. But in trying to examine the three hypotheses above, I’ve been struck by how often researchers pass over the question of “good schools vs. bad schools” (i.e., the 3 hypotheses outlined above) in favor of the question of “private vs. public vs. hybrid schools.” When a debate is focused on government policy, it makes sense for it to focus on political questions, such as whether the “free market” is better than the “government.” But when you take the perspective of a donor rather than a politician, this question suddenly seems irrelevant. Some public schools are better than others; some private schools are better than others; and I, for one, would expect any huge differences to be driven more by the people, practices and resources of a school than by the structure of its funding (i.e., whether donors, taxes, parents or a mix are paying). That’s why we’d like to see more studies targeted at donors, rather than politicians. But for it to happen, donors have to demand it.
Emoji One for Chrome released The team at Emoji One have had a busy time since the launch of the 2016 Collection exactly two months ago. In January, a new version of the open source emoji set was released with seven new designs and hundreds of minor tweaks. More on that in a moment. Today, Emoji One released an extension for Chrome which allows users to replace native OS X or Windows emojis with those from Emoji One. Above: Emojipedia viewed in Chrome with the Emoji One Extension installed. Emoji One for Chrome has two main features: an emoji picker interface within the Chrome browser, and replacement of platform-native emojis with Emoji One emojis. Here's the emoji picker interface provided when Emoji One for Chrome is installed: Above: Emoji One for Chrome. Clicking the Emoji One button in the toolbar provides a categorised list of emojis (reminiscent of Apple's emoji picker for OS X), in addition to a search field and a toggle for various skin tone options. Above: A search field and modifier options. While OS X already includes a very decent emoji picker (Cmd-Ctrl-Space is the shortcut), Windows lags behind. The built-in emoji keyboard on Windows is slow to bring up (no shortcut), and doesn't even include all the supported emojis. Given these factors, I can see the emoji picker interface being of most benefit to those on Windows. Those who use Safari for Mac may not realize that Chrome for Mac does not support: - 🚫 Emoji modifiers (for skin tone) - 🚫 ZWJ sequences (for various family and couple combinations) Above: Diverse emojis aren't supported in Chrome out of the box. Even further behind is Chrome for Windows, which doesn't support color emojis. In 2016. Above: GetEmoji.com on Chrome for Windows 10. The emoji replacement feature of Emoji One for Chrome is one way to work around these issues, on both OS X and Windows PCs. This feature is optional, and does not apply to websites which provide their own emoji artwork, such as Twitter, Facebook, or Gmail.
Emoji One for Chrome is available now from the Chrome Web Store. Emoji One 2.1 Back to the emoji update released recently, on January 29, 2016. Following the complete redesign of every emoji in Emoji One 2.0, this most recent upgrade to Emoji One 2.1 is moderate by comparison. Changes include: 👯 Woman With Bunny Ears now sides with Apple in showing two girls, instead of one. 🐉 Dragon turns green, and faces left to be more consistent with other platforms. 🐲 Dragon Face also loses the purple and, in a dramatic rethink, turns green and has fire coming from the mouth instead of the nose. 🐢 Turtle is no longer a sea turtle and now lives on the land. ✈️ Airplane has changed orientation, now facing north-east and shown from above (also now more consistent with the iOS airplane). 🐎 Horse loses the cute forward-facing appearance and joins every other platform with a left-facing direction. The default hair color for all human-looking emojis has changed from black to yellow. 📿 Prayer Beads now include a tassel instead of a cross; this makes sense as these are used in more religions than Christianity alone. 📆 Tear-Off Calendar joins the growing list of platforms with a July 17 date; this previously displayed September 11 in Emoji One 2.0. A range of other minor updates and tweaks were included in Emoji One 2.1. A detailed set of release notes has been published here. Modifiers are supported by Windows, but inaccessible from the touch keyboard. Windows 10 added support for new emojis such as the slightly smiling face and middle finger, and these haven't yet been added to the keyboard either. ↩︎ Take a look at how much this particular emoji has changed from version 1.0 to 2.0 and now 2.1. ↩︎ Set first by Apple as a homage to iCal's announcement date, and since adopted by yours truly as the date for World Emoji Day. ↩︎
Top 10 DevOps Tools
Jenkins is the most widely used DevOps tool for project leaders who want to add continuous integration (CI) to their IT infrastructure. First released as an open-source automation server in 2011, it has grown from strength to strength and, in its own words, "can be used as a simple CI server or turned into the continuous delivery hub for any project". The recent Pipeline plugin lets users implement a project's entire build/test/deploy pipeline in a Jenkinsfile and store it alongside the code. This support for continuous delivery as code is Jenkins's key feature, and it speeds up deployment.
GitHub almost goes without saying: the world's largest repository of source code can accelerate DevOps work with its effective search, navigation, and collaborative structure. The platform wraps the Git version-control system and gives software developers a place to host their code online for free. It is also a social site, where questions can be asked and code exchanged. GitHub maintains a handy list of DevOps tools with one-line descriptions to help developers zero in on relevant software; helpfully, every suggestion is colour-coded to indicate which language it is written in, saving search time.
Sooner or later, developers need a way to isolate and package software, and Docker is a solid tool for the job. Launched in 2013, Docker is credited with popularising containers. Running on Windows or Linux and compatible with any programming language, the toolkit sections off software, wrapping it in a filesystem that facilitates automated deployments; this makes it easier to host applications in adaptable environments. Docker automates operating-system-level virtualisation, and it is integration-friendly, cooperating with IBM Cloud, AWS, Oracle Cloud, Azure and more.
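The filesystem wrapping that Docker performs is driven by a Dockerfile. As a rough sketch (the base image, file names and start command here are illustrative, not prescribed by Docker), one might look like:

```dockerfile
# Build a small image around a single Python application.
FROM python:3.11-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the source and define the container's start command.
COPY . .
CMD ["python", "app.py"]
```

Building with `docker build -t myapp .` then produces an image that runs identically on any host with a Docker engine.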
Splunk is the best-known DevOps tool for examining, monitoring and visualising the machine-generated data and logs gathered from the websites, applications, sensors and devices that make up your IT infrastructure and business. It ingests data in multiple formats, generates informational objects for operational intelligence, and monitors business metrics to get log insights.
Nagios is a powerful system that permits organisations to identify and resolve IT infrastructure problems before they affect critical business processes. Nagios monitors and troubleshoots server performance problems, lets you plan infrastructure upgrades before outdated systems cause failures, and automatically fixes problems once they are detected.
Snort is a DevOps tool for security. Snort is capable of real-time traffic analysis and packet logging. It performs protocol analysis and content searching and matching, and detects buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts, and other attacks and probes.
A caching proxy for the web, Squid is a DevOps tool that optimises web delivery and supports HTTP, HTTPS, FTP, and more. By reducing bandwidth and improving response times via caching and reusing frequently requested web pages, Squid also operates as a server accelerator. It offers extensive access controls, runs on most available operating systems including Windows, optimises data flow between client and server to boost performance, and caches frequently used content to save bandwidth.
Monit is a DevOps tool for system monitoring and error recovery. Monit provides simple, proactive monitoring of processes, programs, files, directories, filesystems, and more.
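As an illustration of the kind of proactive process monitoring Monit offers, a minimal control-file entry might look like the following (the service name and paths are assumptions for the example, not defaults):

```text
# monitrc sketch: watch the nginx process and restart it
# if its HTTP port stops responding.
check process nginx with pidfile /var/run/nginx.pid
  start program = "/usr/sbin/service nginx start"
  stop program  = "/usr/sbin/service nginx stop"
  if failed port 80 protocol http then restart
```

Monit re-evaluates such rules on every polling cycle, which is what makes its recovery "proactive" rather than alert-only.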
This freely available DevOps tool manages and systematically reviews UNIX systems. It conducts automatic maintenance and repair and executes meaningful corrective actions in error situations.
Consul is a DevOps tool used for discovering and configuring services in infrastructure, and is a perfect fit for modern, elastic infrastructures. It provides service discovery for services such as an API or MySQL; health checks associated either with a given service or with the local node; a hierarchical key/value store for dynamic configuration, feature flagging and more; and support for multiple data centers out of the box.
The leading software development tool used by agile teams, JIRA Software is used by DevOps teams for issue and project tracking. For teams that need to ship early and often, JIRA Software is the ideal tool because it is the single tool every team member needs to plan, track, and release a great product. Create user stories and issues, plan sprints, and distribute tasks across DevOps teams; it improves team performance based on real-time, visual data.
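To make Consul's service registration and health checks concrete, here is a sketch of a service definition file (the service name, port and check interval are example values):

```json
{
  "service": {
    "name": "mysql",
    "port": 3306,
    "check": {
      "tcp": "localhost:3306",
      "interval": "10s"
    }
  }
}
```

Loaded by a Consul agent, this registers a `mysql` service and attaches a TCP health check, so other services can discover it and avoid unhealthy instances.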
Communities …Posted: September 16, 2008 | | Here is an interesting post about communities … about how to run communities … Rob Howard has written a rather interesting piece about this. There are a few points here which I wanted to write about … Firstly, there is the point about generating value. This is an interesting part. Actually, this is the chicken-and-egg situation which I have written about before. With communities, people won't adopt till they find value, and communities won't generate value till people adopt. And it is this cycle which needs to be addressed by organizational intervention. Of course, different ways would be used in different scenarios, but one way could be to identify community evangelists, or managers, if you will, who can spread the word … generate awareness about communities, and the value these communities can generate for people who join in. Of course, this would need to be supplemented by some sort of rewards program which the organization would need to bring in. Of course, this idea of value also brings up the point that when people join a community, they are, more often than not, looking at getting, rather than giving … and hence, the organization may need to invest expertise into building some content, some expertise sharing, to attract people to sort of follow the experts. This could be one way of getting out of the cycle. Of course, this still doesn't address the basic problem. If the only reason people join the community is to read the comments of these experts, the community would stagnate over a period of time … how long can one or two experts sustain a community? Not long enough, one would think. Which means that, over a period of time, there would need to be some means of inviting more and more people to write, to share, and to give, rather than remain passive receivers. Some form of value for contributors to the community must be developed.
Here again, different things work for different people, which means that a rewards mechanism which reaches out to the maximum number of folks would be helpful. Recognition, perhaps? Or, maybe, brownie points? Or maybe some mechanism for advertising the contributions of people? The most important point Rob raises is about the value of the community. Since the community is going to operate in a particular context, it is a little easier to identify where the community should have reached, or what the community should have delivered after a period of time, and this should be more than simply the number of posts, number of replies, etc. (which, by the way, is the way a lot of organizations I have interacted with measure it …). Having said this, there must be some form of balance between the achievement of the community and the contribution of individuals. The temptation to hide individuals beneath the umbrella of the community is high, but it must not be given in to. Otherwise, over a period of time, you end up driving away people from the community.
Unity, Xcode 5.0.2 and iOS 7 app submission problems I created an app in Unity, made the .apk for Android and submitted it to Google Play. Everything worked fine and without issues. Now I want to do the same for iOS and I only have problems! After breaking my head over certificates and profiles I finally managed to run my app on my iPad and on the iOS Simulator, and finally tried to submit it to the App Store, just to realize that Apple only allows iOS 7 apps and Xcode 5 since February 2014. Okay, so I downloaded Xcode 5 and the iOS 7 SDK and submitted the app to the App Store. But now I have the next problem and I don't know how to solve it: http://i1371.photobucket.com/albums/ag296/marauderkr/Bildschirmfoto2014-03-07um11047PM_zpsd9742709.png I read a tutorial on how to add the missing app icons. It said I should add the PNG files to my folder and add lines to the Info.plist with the image names. So I did this: (adding lines to the plist after adding PNGs to the "library" folder) http://i1371.photobucket.com/albums/ag296/marauderkr/Bildschirmfoto2014-03-07um11155PM_zps2e86dcb6.png But I still get the same error. I don't know what to do, as I am not into Xcode (only into C#/Unity). I hope someone can help. Maybe it's just some stupid thing I am doing wrong? (Sorry for the links, but Stack Overflow says I can't post pictures before 10+ reputation and this is my first post...) EDIT: problem solved! I found old Xcode 3 versions on my disk and it seems like something interfered with the new iOS 7 SDK OR the new Xcode 5 stuff. Anyway, it works now! So if anyone else encounters this problem - be absolutely sure to have uninstalled all old Xcode versions. It helped to delete the old stuff and the old SDK. I now run Xcode 5.1 and the iOS 7 SDK and the latest Unity update 4.3.4. Everything works!
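For reference, the Info.plist entries that icon tutorials of that era suggest adding look roughly like this (the icon file names here are assumptions; they must match the PNGs actually bundled with the app):

```xml
<!-- Sketch of the iOS 7-style icon declaration in Info.plist -->
<key>CFBundleIcons</key>
<dict>
  <key>CFBundlePrimaryIcon</key>
  <dict>
    <key>CFBundleIconFiles</key>
    <array>
      <string>Icon-60</string>
      <string>Icon-76</string>
    </array>
  </dict>
</dict>
```

Note that the files must also be added to the Xcode target (not just the folder on disk), or the keys resolve to nothing at validation time.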
Though important, the lack of app icons should not prevent submission; the error "This bundle is not valid...must be built with public" is more likely the cause of invalidation. Double-check your provisioning profiles, and see if you can post a screenshot of your build settings so we can narrow the problem down. Thanks for your answer. I checked the profiles and even created a new distribution profile to archive + validate it, but I still get the error. Posting all the build settings would take many screenshots, but here is the first screen to start off: http://i1371.photobucket.com/albums/ag296/marauderkr/Bildschirmfoto2014-03-11um104217AM_zps92151cbc.png .... Maybe you already see an error there. As I said, I'm not very much into Xcode; maybe it's just some flag I missed or something! Never mind, it seems like the problem is solved! I found old Xcode 3 versions on my disk and it seems like something interfered with the new iOS 7 SDK OR the new Xcode 5 stuff. Anyway, it works now! Very cool, that is nice to hear!
They say first impressions are your only impressions. People fret a lot about making good first impressions, which makes sense: a first impression almost permanently decides what people think of you. First impressions apply to the internet, as well. Making this first post is a huge part of showing people who you are. WordPress suggests writing your first post along the lines of “who I am and why I’m here”, which sounds pretty good to me. WHO I AM I’m Liz: sporadic storyteller, aspiring coder, and (hopefully) cool nerd. I love creating stuff, whether it’s drawings or stories or various little code projects. My time away from school pretty much revolves around coding, writing, and designing. My fascination with code probably started when I was little. Really little. For my 8th birthday, my parents downloaded game-making software onto the computer for me to use. I wanted to make full-fledged 3D games with music and everything, but little me quickly learned that that was going to be hard. I stayed away from the code stuff, but I used the interface to make some pretty neat games for an eight-year-old. Sadly, it’s all gone forever, but it really wasn’t that impressive anyway. I remember a few years ago in a computer class at school some kid next to me showed me how to change stuff on a webpage using Chrome’s devtools. I was incredibly impressed. Fast forward a year or so, and I was trying to figure out CSS by deleting portions of it and seeing what happened. I had no clue what I was doing, really, but I figured out some stuff. Eventually, I kind of gave up on learning to code because it was just too hard. Another year or so later, I was back taking online courses in HTML and CSS and actually learning stuff. I wanted to learn everything I could about building websites and all the cool things you can do with them. I’ve been interested in storytelling for a long time, as well. I have stories on my hard drive that I started in 4th grade, and probably earlier.
They’re pretty bad though, so I tend to stay away from them. I participated in (and won) NaNoWriMo 2014 and this April’s camp. In April I discovered that I really like screenwriting, and I may start to focus my creative efforts into telling stories that way. I became super interested in graphic design when I stumbled across a bunch of edits that people had made of celebrities. Little me thought that they were super cool, and I started making my own. I thought they were amazing, but you have to remember I was using a bunch of free apps on an iPod touch to just stack images on top of each other. Eventually, I discovered that my dad owned an old version of Photoshop, and of course I installed it and learned how to use it. I’m not as interested in design as I am in coding now, but I still love it. WHY I’M HERE I’ve wanted to have my own website for a while now, and now I have one. I want to have a space to show off all of the things that I create. What’s the point of creating cool stuff if you never get to show it to anybody? This blog is also kind of a place to collect all of my interests in one place and form my thoughts into something tangible. I’ll probably ramble on about events in my life, as well as post about projects that I’m working on or movies that I’ve seen. Pretty much whatever comes to mind. If I keep this blog up for a year, I hope to have a collection of finished projects that I can show to people, like games and stuff. I plan to learn a lot about what I want to do with my life, and discover new things. Really, I just hope to have fun and make a space that’s my own. So there we go! That’s a little intro on who I am and what I plan to do here. I hope that you enjoyed these words from a nerd, and I’ll (probably) see you next time!
Ubuntu Thailand mirrors
Ubuntu is distributed (mirrored) on hundreds of servers on the Internet. A primary mirror site has good bandwidth, is available 24 hours a day, and syncs directly from the Ubuntu archive. Official archive mirrors are listed per country; Thailand has 8 mirrors with a combined 27 Gbps of bandwidth, including university mirrors such as Khon Kaen University (1 Gbps) and Songkla University. In Thailand there is also a Thai National Mirror, which serves as an open-source software repository that includes Ubuntu. You can help too, by creating a mirror of your own to serve people near you. Finally, apt now supports a 'mirror' method that will automatically select a good mirror based on your location.
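One commonly cited way to use apt's 'mirror' method is via `mirror://` entries in /etc/apt/sources.list; a sketch for Ubuntu 18.04 "bionic" (the release name is an assumption for the example) might look like:

```text
# /etc/apt/sources.list — let apt pick a nearby mirror automatically
deb mirror://mirrors.ubuntu.com/mirrors.txt bionic main restricted universe multiverse
deb mirror://mirrors.ubuntu.com/mirrors.txt bionic-updates main restricted universe multiverse
```

mirrors.ubuntu.com returns a list of mirrors chosen by the requester's location, so users in Thailand would typically be directed to the Thai mirrors listed above.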
Eclipse MAT OQL list of classes in a certain package using Eclipse MAT 1.9.1 OQL I want to list all the classes in the heap dump in a certain package. I am trying the query: SELECT c.getName() as name, c.getName().indexOf("com.mycompany") as idx FROM java.lang.Class c WHERE idx > 0 getting: java.lang.NullPointerException: idx at org.eclipse.mat.parser.internal.oql.compiler.Operation$GreaterThan.evalNull(Operation.java:232) at org.eclipse.mat.parser.internal.oql.compiler.Operation$RelationalOperation.compute(Operation.java:92) at org.eclipse.mat.parser.internal.oql.OQLQueryImpl.accept(OQLQueryImpl.java:1161) at org.eclipse.mat.parser.internal.oql.OQLQueryImpl.accept(OQLQueryImpl.java:1151) at org.eclipse.mat.parser.internal.oql.OQLQueryImpl.filterClasses(OQLQueryImpl.java:1133) at org.eclipse.mat.parser.internal.oql.OQLQueryImpl.doFromItem(OQLQueryImpl.java:921) at org.eclipse.mat.parser.internal.oql.OQLQueryImpl.internalExecute(OQLQueryImpl.java:690) at org.eclipse.mat.parser.internal.oql.OQLQueryImpl.execute(OQLQueryImpl.java:667) at org.eclipse.mat.inspections.OQLQuery.execute(OQLQuery.java:52) at org.eclipse.mat.inspections.OQLQuery.execute(OQLQuery.java:1) at org.eclipse.mat.query.registry.ArgumentSet.execute(ArgumentSet.java:132) at org.eclipse.mat.ui.snapshot.panes.OQLPane$OQLJob.doRun(OQLPane.java:468) at org.eclipse.mat.ui.editor.AbstractPaneJob.run(AbstractPaneJob.java:34) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:60) Please advise. There are several things wrong with that query. idx in SELECT c.getName() as name, c.getName().indexOf("com.mycompany") as idx FROM java.lang.Class c WHERE idx > 0 is a column name, so is not visible to the WHERE clause. OQL evaluates the FROM clause first selecting a list of objects, then the c variable is visible to the WHERE clause where each of the objects is examined to see if it will be passed to the SELECT clauses. 
So try: SELECT c.getName() as name, c.getName().indexOf("com.mycompany") as idx FROM java.lang.Class c WHERE c.getName().indexOf("com.mycompany") > 0 but that fails for the reason: Problem reported: Method getName() not found in object java.lang.Class [id=0x6c027f2c8] of type org.eclipse.mat.parser.model.InstanceImpl because some java.lang.Class objects in the heap dump are in fact not ordinary classes which could have instances, but plain object instances of type java.lang.Class. That sounds odd, but those objects are byte, short, int, long, float, double, char, boolean, void. They are just used to describe classes and methods for reflection - but instances of those can't exist. Inside MAT, objects in the heap dump are represented as org.eclipse.mat.snapshot.model.IObject and some are also of a subtype org.eclipse.mat.snapshot.model.IClass. We need to exclude those 9 special objects above. I've changed your query to look for com.sun as my dumps won't have com.mycompany objects. SELECT c.getName() AS name, c.getName().indexOf("com.sun") AS idx FROM java.lang.Class c WHERE ((c implements org.eclipse.mat.snapshot.model.IClass) and (c.getName().indexOf("com.sun") > 0)) This still doesn't work, because if the class name starts with 'com.sun' the index will be 0 and will fail the test. Change the test operator: SELECT c.getName() AS name, c.getName().indexOf("com.sun") AS idx FROM java.lang.Class c WHERE ((c implements org.eclipse.mat.snapshot.model.IClass) and (c.getName().indexOf("com.sun") >= 0)) This now works and finds some classes. We can simplify the query a little by using attribute notation, where @val is a bean introspection which is the equivalent of getVal(). SELECT c.@name AS name, c.@name.indexOf("com.sun") AS idx FROM java.lang.Class c WHERE ((c implements org.eclipse.mat.snapshot.model.IClass) and (c.@name.indexOf("com.sun") >= 0))
having been participled? Is anything wrong in this sentence? The enemy, beaten at every point, fled from the field. According to my book it should instead be: The enemy, having been beaten at every point, fled from the field. Why? There is only one subject in this sentence, so there should only be one verb; that is, fled. How can we use having been + the past participle? What’s the difference between the two sentence structures? Why do you think a subject can have only one corresponding verb? The past participle beaten is here used like an adjective to describe the condition of the enemy. The words "beaten at every point" form a subordinate clause. The sentence would stand grammatically without them - the main verb being "fled". Using the verb composite "having been beaten..." would be perfectly alright - but it is almost entirely synonymous with "beaten...". It's a reduced form, according to one analysis, after be-deletion ('having been' is omitted). Or if you consider the expanded form to exist but to be 'The enemy, who had been beaten at every point, fled from the field,' after whiz-deletion. Both covered before, though the passive forms are harder to locate. Any book that tells you something a native speaker says is wrong is wrong. There is almost never only one correct way to say something (outside the artificial rules of a classroom). More likely there are fifty. Be-deletion (here 'having been') is unremarkable. 'The ammunition used up, the opposing sides charged each other with bayonets' [South American Travels_Henry Stephens] What is the book? principal/main clause- The enemy fled from the field. subordinate/dependent clause- who had been beaten. (passive) The enemy, who had been beaten at every point, fled from the field We can leave out 'who had been'. The enemy, beaten at every point, fled from the field. We can use perfect participle. The enemy, having been beaten at every point, fled from the field. 
Mohammad, please select your required text and hit Ctrl+B if you need to embolden it, or use the markdown option (B) provided in the text editor. For more details, see this Help page. Thank you so much. This seems to at least reflect what I've already said in a comment. I didn't give an 'answer' as I couldn't find supporting references showing these equivalences. In the sentence "The enemy, beaten at every point, fled from the field", ...beaten at every point...is similar to "....having been beaten at every point..." It is the passive form of 'having + 3rd form', like "Having completed the work, he proceeded on leave", changing into "The work having been completed (by him), he proceeded on leave." Please cite a source in support. @Kris: Please refer to this link: https://ell.stackexchange.com/questions/98099/perfect-participle You better include that reference, if appropriate, in the body of the answer, to make it a proper answer. Good Luck.
Launcher Stories: Owen Snyder Owen Snyder felt bored and unaccomplished in his career. He’d been in sales for four years, and during that time he had switched from business development to a more customer-focused role. Neither position brought him the professional fulfillment he was craving. Though Owen had no coding experience, he had worked with software engineers in his role as a customer success manager, collaborating to solve customers’ technical issues. He quickly realized that creative problem-solving and technical work interested him far more than contracts and negotiations. While working remotely during the beginning of the pandemic, Owen began researching coding bootcamps. Launch Academy was his number one choice; he had a friend who had attended Launch and had a great experience. Owen spent several months learning coding basics, then decided to take the plunge and change careers. Owen’s Epiphany: The Decision to Attend Coding Bootcamp “Knowing someone who was similar to me who took the same risk made me think, ‘Okay, this person did it. . . I think I could, too,’” Owen said. He was living in Boston at the time and was encouraged by the number of success stories he had heard from Launch Academy graduates, including his close friend, who got a software development job quickly after graduation. Owen also appreciated that Launch Academy’s coding bootcamp would give him a chance to spend some time on campus. “Other courses were fully remote, but with Launch, I could go in and meet some of the students and teachers there,” he said. Still, he had some hesitations. “This is something I’ve never done before, a skillset I’ve never learned. . . I definitely considered that I could keep my job and keep doing what I was doing, or I could take that risk,” he said. Ultimately, Owen decided that learning a technical skill like coding would be incredibly valuable. 
When he saw the number of open roles on LinkedIn and other job boards, he knew he would have plenty of opportunities available. Owen’s Launch Academy Experience “My initial reaction [to the program] was that it didn’t seem as daunting on the first day as I thought it might have been. They were very good at walking you through the process and having resources available,” he said. Owen was impressed by how much support he got from his teachers and classmates. Between a Slack channel where students could interact with their cohort members and the right amount of “hand holding” from the teachers, he felt well-equipped to become a full-stack web developer. But even with the immense support Launch Academy offers students, the program still had its challenges. Owen struggled with imposter syndrome at different points in the curriculum and almost felt too discouraged to reach out for help. He wanted to prove to himself that he could learn independently. Knowing when to ask for that support and when to do his own research wasn’t easy. Luckily, Launch Academy anticipates that students will face these kinds of challenges. “They tell you that everyone gets imposter syndrome, even professional software developers. So it’s a normal feeling to have,” he said. As he got further into his coding bootcamp experience, Owen solidified his skills through Launch Academy’s group projects and pair programming. “The group project was a great way to learn from other students who might have a little more experience or a different style of coding than you,” he said. He and his cohort could blend their knowledge base when they worked in groups, making each of them stronger software developers. Preparation for the Software Engineering Job Market Following the group and capstone projects Owen completed at Launch, he felt confident in his front-end and back-end coding skills. Now, it was time to find a job—a task that Launch Academy made sure he was prepared to take on. 
Owen applied for an open role at America’s Test Kitchen, even though it required five years of experience. He worked with Launch Academy’s staff to fine-tune his resume, set up his GitHub, and practice for his technical interview. He got the job thanks to his dedicated interview prep—and because one of the projects he completed at Launch was a basic version of an app that America’s Test Kitchen uses. The hiring manager was impressed that he already had some experience with the application. Today, Owen is thrilled to be working in a field that suits him much better than sales. If his story resonates with you, know that you, too, can successfully switch to a career in software engineering. Download Launch Academy’s syllabus today to learn more.
OPCFW_CODE
PPX-less API for creating DOM elements and props. As discussed in https://github.com/ml-in-barcelona/jsoo-react/discussions/113. This adds a PPX-less API using plain functions to create DOM elements and props. I didn't manage to actually get started using jsoo-react this week, so none of this is actually tested in the real world. And there are a few essential parts missing still, see the TODO list below. This is still a good point to critique the general approach though.

TODO:
- [x] Flesh out Event.Pointer
- [x] Implement ref prop
- [x] Implement style prop
- [x] Implement dangerouslySetInnerHTML prop
- [x] Add SVG elements and props
- [x] Add proper interface
- [x] tests
- [x] Context API?
- [x] forwardRef?

Closes #105

A couple of issues have surfaced from rewriting the tests:

1. Namespace pollution causing naming collisions. Importing all these names into the namespace is bound to cause collisions. There are a couple of ways we can address this:
   a. Import the names in a larger scope. This pollutes even more, of course, but at least allows local bindings to shadow imports. And if done at the top of the file it will always allow shadowed names to be accessed by fully qualifying them.
   b. Selective imports. Not very idiomatic OCaml, and quite verbose, but this, along with point a above, is how it's done in Elm etc.
   c. Qualify each invocation, perhaps using an aliased short name, e.g. module H = Html, then H.div [|H.className "foo"|] [].
   d. Avoid binding all these names by using an API like h "div" [|p "className" "foo"|] [] for example.
   e. Bow down to the deity that is PPX.
2. Conditional props. There's currently no direct support for conditionally passing props. With the current API this is done using optional labeled arguments, and in JSX is done by borrowing the same semantics, but this isn't possible when creating an array directly. A few options to address this:
   a.
Offer something like Prop.none as an alias to any "" "", effectively offering a no-op that can be used with if-expressions. b. Offer some primitives for conveniently concatenating arrays, e.g. Prop.(concat [ [|className "foo"|]; if bar then [|href "..."|] else [||] ]). c. Offer some kind of builder-style API to build a props array. d. Bow down to the deity that is PPX.

Thanks for the work on this and the detailed explanation of the issues :)

> Namespace pollution causing naming collisions

I feel inclined to follow either your suggestion in a. (import the names at the top of the module where the component implementation is being defined), or the approach you already applied in existing code (just rename the colliding prop by appending an underscore, or some similar renaming solution). Alternatively, selective imports sound great as well, but as you say it can be quite cumbersome in this case, as there are tens of elements and props that are used in a single component tree.

> Conditional props

Hm, this is an inconvenient issue indeed. Maybe an alternative option besides the ones you mention (kind of building on option b.) is to add a helper to produce either an empty array or an array with some prop, then combine this with Array.concat, e.g. for the example in the PR:

(* In Dom_html.Prop *)
let option f value = match value with
  | Some v -> [|f v|]
  | None -> [||]

(* In tests *)
let%component make ~href:href_ = React.Dom.Html.(a (Prop.option href href_) [])

This can then be used in combination with Array.concat when other props are involved:

let%component make ~href:href_ =
  React.Dom.Html.(
    a
      (Array.concat
         [ Prop.option href href_
         ; [|className "value"; id "some-large-id"; hidden true; lang "es"|]
         ])
      [])

> Bow down to the deity that is PPX.

I hope with the work you're showcasing in this PR we can get to a point where PPX-less is decent enough 💪 😄 Then we can build PPX on top of it, for those that like adventure and want more thrill in their lives.

Re.
namespace pollution, I agree with your inclination and definitely don't think this is a deal-breaker, just a bit awkward. The error messages currently emitted when this happens are also not very nice, but should improve when the prop type is abstracted.

> (* In Dom_html.Prop *)
> let option f value = match value with | Some v -> [|f v|] | None -> [||]

Good idea! And inspires me to make a further refinement! What if instead we define option to be a prop modifier that hides the no-op:

(* In Dom_html.Prop *)
let option prop = function
  | Some value -> prop value
  | None -> any "" ""
(* val option : ('a -> prop) -> 'a option -> prop *)

(* In tests *)
let%component make ~href:href_ = React.Dom.Html.(a [|option href href_|] [])

> And inspires me to make a further refinement! What if instead we define option to be a prop modifier that hides the no-op

Looks great! And it avoids the need to create empty arrays just for this purpose ✨ What do you think about option vs maybe for the name?

let%component make ~href:href_ = React.Dom.Html.(a [|maybe href href_|] [])

I think maybe reads better, and I often use it to name optional variables too, since maybe_value reads better than value_opt or something. Then again, option might be more suggestive of this modifying the prop to take an option value. Or optional...

I like maybe! 🙂 option is really overloaded already...

Made some further refinements, including a breaking change to the Context API to align it with rescript-react and allow both Reason and OCaml-style components to be created. The last thing puzzling me is the forwardRef API, which I can't even understand the purpose of. Apart from that, I think the only thing that remains is a proper interface. Interfaces done.

> Make ppx changes so that instead of translating lowercase components to calls to createElement we just leave calls to div, p and such (user should put them in scope)

What's the reason for this? To aid migration?
> Migrate types somehow from Html module in ppx to new Dom_html

What are these really used for? Outside of the PPX itself, I mean.

> What's the reason for this? To aid migration?

The main reason is that, if we do the above (make the Reason ppx use the new DSL), and fix locations so they make sense to merlin (this is a complex task, but as soon as I can find some time I'd like to tackle it because it's crucial), then we get all the docs and editor tooling upsides for free. You can hover over <div ..> and merlin will show the docs of the div function in the DSL, etc. I am not sure yet we can make everything work (props etc.), but I am pretty sure it's a much clearer path than the one we were going down before (using ppx to do this stuff).

> What are these really used for? Outside of the PPX itself I mean.

They are used for the ppx transformation only, yes. But that plays a fundamental role at the moment in the lowercase component type checking. The ppx will generate type annotations for each prop, so you get the compiler telling you about props with wrong types. But again, we get all this "for free" when we generate calls to DSL functions 🎉

> ...then we get all docs and editor tooling upsides for free

I see. Yeah, I think that makes sense. But we'd also get that even if we fully qualify the functions, and that would hopefully make it a seamless transition.

> But again, we get all this "for free" when we generate calls to DSL functions :tada:

:tada: Thanks for the work on this PR @glennsl. As there were conflicts with main I solved them and merged manually.

> But we'd also get that even if we fully qualify the functions, won't we, and that would hopefully make it a seamless transition?

I am not sure I get this part, could you elaborate?

Re: the tests (keeping both Reason and OCaml), I am not sure it makes a lot of sense once we move to using the DSL for everything. Reason syntax should already be covered by the ppx tests, so there's not much advantage in keeping 2 files for the "end to end" tests.
> I am not sure I get this part, could you elaborate?

To quote what you said originally:

> Make ppx changes so that instead of translating lowercase components to calls to createElement we just leave calls to div, p and such (user should put them in scope)

You've explained well the reason for using the new DSL instead of createElement directly, but not why the user should put the DSL in scope. It seemed to me that fully qualifying all the DSL calls (e.g. React.Dom.Dsl.Html.div) would make the transition seamless. But now that I've thought it through a bit more, I realize this will only work for known HTML elements. It wouldn't work for SVG or anything else, unless we have some convention for "escaping" the HTML scope. I also realize another problem with using the DSL instead of createElement directly: it would no longer support custom elements and custom props. Although JSX doesn't support actual custom elements anyway, since they should contain a dash (-), which isn't a valid character in an identifier. But not supporting custom props is a bit of a bummer. Another question on your suggestions for next steps:

> Add docs and make sure they work with ppx locations 😬

What kind of docs, and level of detail, do you imagine this should be?
GITHUB_ARCHIVE
You'll find everything you need here. What are the key benefits of ForePaaS? Here are three benefits to name but a few: What pricing model does ForePaaS use? Whereas many SaaS solutions are based on per-user licensing, ForePaaS uses a pay-as-you-go model. Several types of subscription plan are available according to the amount of resources required, but without any limitation on the number of users. We firmly believe that restricting the number of users is contrary to the very purpose of data processing and analytics applications and drives down their value. IaaS, PaaS, SaaS - How does ForePaaS actually create value? IaaS services (Infrastructure as a Service) provide the raw resources (storage, computing power, etc.) that organizations can use to deploy their own platforms and applications. SaaS services (Software as a Service) give users access to ready-to-use functional or business solutions that offer limited customization options. PaaS services (Platform as a Service) provide a ready-made environment for developing and implementing applications. Users no longer have to worry about the platform (and its maintenance), meaning that they can focus on their applications and data. How is ForePaaS specialized in processing and analyzing data? ForePaaS is not a generic platform for developing any type of application. It is specifically designed for developing data processing and analytics applications. What it actually does is combine all the technical services that organizations need to connect to data sources, as well as store, exploit, analyze and report data. Those dozen or so services themselves use different services, libraries, databases, and so on. ForePaaS looks after the whole of the underlying technical infrastructure, so that users can concentrate on their data and applications. How does ForePaaS differ from Microsoft Azure and Amazon Web Services?
In practice, Microsoft Azure and Amazon Web Services are more similar to infrastructure as a service than to platform as a service. As such, their services require strong technical expertise and involve long implementation times. ForePaaS offers an environment specifically for developing and using data processing and analytics applications, which explains why implementation times are faster than on other platforms, since all the services are operational and only need to be personalized to suit the user's given needs. How does ForePaaS differ from Business Intelligence SaaS applications? Most of these SaaS applications address specific needs, such as providing visual representations of the organization's data or analyzing sales force performance. ForePaaS is designed to drive all types of data processing and analytics applications. This prevents users from fragmenting or even siloing their data and the associated processes between a heterogeneous range of solutions. Is ForePaaS like a data management platform (DMP)? Everything is a matter of definition. A DMP commonly refers to a platform that is used to manage data relating to the organization's prospects and target audience segments. DMPs are therefore specialized platforms for marketing purposes. This does not apply to ForePaaS, since our solution is a general-purpose platform capable of powering all types of data management and processing applications. ForePaaS makes no distinction between marketing and other sectors. In that broader sense, ForePaaS is literally a data management platform. How does ForePaaS help shorten time-to-market? The platform makes extensive use of automated functionality to abstract users from a large number of tasks that are both highly technical and time-consuming. Furthermore, the platform instantly provides a complete environment and saves you from having to use additional solutions. Examples: What do we mean when we say that ForePaaS is cloud-agnostic?
It simply means that when you activate the solution, you are the one who chooses the cloud service provider. Four options are currently available: hosting with OVH, Microsoft Azure or Amazon Web Services, or in our own data center, which is managed with the Cisco HyperFlex system. ForePaaS is designed to deliver superior flexibility by taking full advantage of the latest technological breakthroughs in architecture and infrastructure (APIs, microservices, containers, software-defined infrastructure, etc.). Is ForePaaS compatible with a hybrid cloud approach? Yes, because the platform's modular architecture provides the required flexibility. ForePaaS can be used as part of a hybrid cloud approach if dictated by the sensitivity of the organization's data and its applicable governance rules.
OPCFW_CODE
How can I tell if SFTP is working? To check the status of an SFTP service on a Linux machine, you can use the systemctl command. SFTP runs as a subsystem of the SSH daemon, so you check the SSH service. For example:

systemctl status ssh

This will give you output similar to the following:

● ssh.service – OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-03-01 12:34:56 EST; 3h 50min ago

How do I start the SFTP service in Linux? In order to start the SFTP service in Linux, you will need to install the OpenSSH server. Once installed, you can then start the service by running the following command:

sudo /etc/init.d/ssh start

How do I stop and start the SFTP service in Linux? Because SFTP is provided by the SSH daemon, you stop and start it through the SSH service:
1. To stop: sudo service ssh stop
2. To start: sudo service ssh start

How do I stop the SFTP service in Linux? To disable only the SFTP subsystem (while leaving SSH itself running), modify the configuration file for the SSH daemon. This file is typically located at /etc/ssh/sshd_config. Once you have opened this file, find the line that says "Subsystem sftp /usr/libexec/openssh/sftp-server" and comment it out by adding a "#" at the start of the line, then save and close the file. Finally, restart the SSH daemon to apply your changes.

How can I tell if SFTP is successful in Unix? There are a few ways to check if SFTP was successful in Unix. One way is to check the system logs for any SFTP activity. Another way is to use the command line tool "sftp" to connect to the SFTP server and check the status of the connection.

How do I test SFTP connectivity locally? There are a few ways to test SFTP connectivity locally. One way is to use the command line tool "sftp". To do this, open a terminal window and type "sftp [username]@[hostname]". This will attempt to connect to the specified host using the SFTP protocol.
If the connection is successful, you will be prompted for a password. If you enter the correct password, you will be logged in and will be able to run commands on the remote server. Another way to test SFTP connectivity is to use a GUI-based SFTP client such as FileZilla or WinSCP. These clients provide a graphical interface that makes it easy to connect to and navigate SFTP servers.

What is the SFTP service in Linux? SFTP is a secure file transfer protocol for transferring files between computers. It uses SSH to provide security and can be used to transfer files between two computers over the internet or between a local computer and a remote server.

What is an SFTP server in Linux? An SFTP server in Linux is a service that allows users to securely transfer files over a network. The SFTP server uses the SSH protocol to provide this service, and all data is encrypted to ensure security. There are many different SFTP servers available for Linux, and they can be configured to allow access from anywhere in the world.

How do I turn off an SFTP connection? To turn off an SFTP connection, you will need to go into your hosting account's control panel and find the area where you can manage your FTP accounts. From there, you can simply disable or delete the account that is set up for SFTP.

What is the SFTP command in Linux? The sftp command is OpenSSH's client for the SSH File Transfer Protocol (SFTP), which you can use to transfer files between a local and a remote system.

How do I troubleshoot an SFTP connection? There are a few things you can do to troubleshoot an SFTP connection:
1. Check the server logs to see if there are any error messages.
2. Make sure that the SFTP server is running and that you are using the correct port number.
3. Try connecting from a different machine to rule out any networking issues.
4. If you are using a firewall, make sure that it is not blocking the SFTP traffic.

How do I ping SFTP from the command prompt?
To ping an SFTP server from the command prompt, you will need to use the ping command followed by the IP address or domain name of the SFTP server. For example, to ping the SFTP server example.com, you would type:

ping example.com

Where is my SFTP user in Linux? There are a few ways to find your SFTP user on Linux. One way is to use the "getent" command. This will show you all of the users on your system, including any SFTP users that have been set up. Another way to find your SFTP user is to look in the "/etc/passwd" file. This file contains information about all of the users on your system. You can use the "grep" command to search for a specific user, or you can just scroll through the file until you find the entry for your SFTP user. If you're not sure which SFTP user is associated with your account, you can always contact your hosting provider or server administrator for help.

How do I run SFTP from the command prompt? There are a few different ways to run SFTP from the command prompt. One way is to use the "sftp" command, followed by the name of the remote host. For example:

sftp [hostname]

Another way is to use the "ssh" command with the "-s" option for SFTP. For example:

ssh -s firstname.lastname@example.org sftp
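Complementing the tips above, a quick programmatic check can rule out basic network problems before you debug SFTP credentials or configuration. The sketch below is my own illustration (not a standard tool): it verifies that a TCP port (22 by default for SSH/SFTP) is reachable, and demonstrates itself against a local listener so it runs without a real server.

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we start ourselves, so the example works anywhere.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # port 0: the OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))   # True: something is listening there
listener.close()
```

In practice you would call `port_open("your.sftp.host", 22)`; if that returns False, the problem is networking or firewalling, not SFTP itself.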
OPCFW_CODE
What is the memory size for convolutional neural networks? I am looking at http://cs231n.github.io/convolutional-networks/ and I don't understand why the memory size of layer 2 (CONV3-64: [224x224x64]) is 224x224x64. I understand that there are 64 filters of size 3x3, but why is the input size multiplied by 64? And why is the number of weights in layer CONV3-128 (3x3x64)x128 and not (3x3x64x64)x128 (the weights from the previous layer times the new 128 filters)?

INPUT: [224x224x3] memory: 224*224*3=150K weights: 0
CONV3-64: [224x224x64] memory: 224*224*64=3.2M weights: (3*3*3)*64 = 1,728
CONV3-64: [224x224x64] memory: 224*224*64=3.2M weights: (3*3*64)*64 = 36,864
POOL2: [112x112x64] memory: 112*112*64=800K weights: 0
CONV3-128: [112x112x128] memory: 112*112*128=1.6M weights: (3*3*64)*128 = 73,728
CONV3-128: [112x112x128] memory: 112*112*128=1.6M weights: (3*3*128)*128 = 147,456
POOL2: [56x56x128] memory: 56*56*128=400K weights: 0
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*128)*256 = 294,912
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*256)*256 = 589,824
CONV3-256: [56x56x256] memory: 56*56*256=800K weights: (3*3*256)*256 = 589,824
POOL2: [28x28x256] memory: 28*28*256=200K weights: 0
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*256)*512 = 1,179,648
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [28x28x512] memory: 28*28*512=400K weights: (3*3*512)*512 = 2,359,296
POOL2: [14x14x512] memory: 14*14*512=100K weights: 0
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
CONV3-512: [14x14x512] memory: 14*14*512=100K weights: (3*3*512)*512 = 2,359,296
POOL2: [7x7x512] memory: 7*7*512=25K weights: 0
FC: [1x1x4096] memory: 4096 weights: 7*7*512*4096 = 102,760,448
FC: [1x1x4096] memory: 4096 weights: 4096*4096 = 16,777,216
FC: [1x1x1000] memory: 1000 weights: 4096*1000 = 4,096,000
TOTAL memory: 24M * 4 bytes ~= 93MB / image (only forward! ~*2 for bwd)
TOTAL params: 138M parameters

Your first question is referring to the memory stored from a forward pass. The 64 in 224x224x64 belonging to the CONV3-64 layer is there because when you pass through a single 224x224x3 image, it goes through 64 3x3x3 filters, and so 64 new feature maps must be stored in memory to propagate the effect of these 64 filters through the network via the forward pass. Your second question is about the weight parameters in the network. In the CONV3-128 layer, the input is 112x112x64; that means that if you want to apply a single 3x3 filter, you are actually applying a different one to each of the 64 input channels. You can think of the input as a 112x112x64 volume that is being filtered by 64 different 3x3 filters, which could just be thought of as a single 3x3x64 volumetric filter, which will output a single 112x112 image. The output of this layer is set to be 128 channels, so you have to do this 128 times, hence 128*64*3*3 weights in this layer.
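A quick way to internalize both answers is to recompute the listing. The sketch below is my own illustration (not from the original post); it walks the VGG-16 configuration quoted above, tallying activation sizes (one stored value per output cell, the "memory" column) and weight counts (each filter spans all input channels, so a "3x3" filter really has 3*3*c_in weights).

```python
def conv_weights(k, c_in, c_out):
    # each of the c_out filters covers all c_in input channels: k*k*c_in weights each
    return k * k * c_in * c_out

# VGG-16 conv configuration from the listing above; 'M' marks a 2x2 max-pool
cfg = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M',
       512, 512, 512, 'M', 512, 512, 512, 'M']

h = w = 224              # input spatial size
c = 3                    # input channels
total_params = 0
activations = [h * w * c]            # forward-pass storage per layer
for v in cfg:
    if v == 'M':
        h //= 2
        w //= 2                      # pooling halves the spatial size
    else:
        total_params += conv_weights(3, c, v)
        c = v                        # output channels become next input channels
    activations.append(h * w * c)

# fully connected layers: 7x7x512 -> 4096 -> 4096 -> 1000
for n_in, n_out in [(7 * 7 * 512, 4096), (4096, 4096), (4096, 1000)]:
    total_params += n_in * n_out
    activations.append(n_out)

print(activations[1])   # 3211264 = 224*224*64, the "3.2M" CONV3-64 entry
print(total_params)     # 138344128, the "138M parameters" total (biases excluded)
```

Note how the two questions show up in the code: the activation size after CONV3-64 is the spatial size times 64 output maps (not anything to do with the filter size), while the weight count for CONV3-128 is conv_weights(3, 64, 128) = (3*3*64)*128, exactly as in the listing.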
STACK_EXCHANGE
Can a point source be located more accurately out-of-focus or in-focus? Let's say I am taking a picture, and I know a priori that the image is of a single ideal point source of light at infinity. With a perfect imaging system in focus, the image shows an Airy disk. I already knew to expect an Airy disk, so I can do some mathematical fitting to guess the center of the Airy disk. That's my best guess for the source location. Alternatively, the imaging system might be somewhat out of focus. The image would show a much bigger blurry circle. Again, I can do some mathematical fitting to guess the center of the blurry circle. Again, that's my best guess for the source location. My question is: Which approach would give me more accurate information about the location of the point source? I can't see an obvious answer. In the out-of-focus image, you are drawing useful information from a much greater number of pixels, which is usually helpful in mathematical fitting. (Particularly if a large pixel size limits the resolution.) On the other hand, we normally say that defocusing an image simply erases its small-scale information, and I've never heard of anyone defocusing an imaging system on purpose like that! If the Airy disk is smaller than a pixel (rather common), then you want to defocus. Star trackers on satellites do this in order to get sub-pixel pointing accuracy. If the Airy disk is much larger than a pixel, then you probably don't want to defocus. In the latter case the situation is complicated by aberrations and the problem of modeling the shape of the spot on the focal plane, which in general is no longer circular. That modeling problem might be more accurate for the in-focus case. I suppose as a practical matter one might do the best one can in the design stage, but then actually measure point spread functions for various angles. That is, calibrate the actual device. (That's speculation.) 
But if you calculate the size of the Airy disk for typical devices you will find that it's generally smaller than a pixel, so defocusing usually wins. To add to that: if you can shift focus to both sides of best focus, you can probably calculate the best-focus position for a sub-pixel target. In the absence of noise, both approaches can work equally well (assuming you know the exact amount of defocusing, and you over-sample the Airy disk). In the presence of realistic noise, you are better off focusing the object, due to the details of the noise. For intensity images, you are (likely) dealing with Rician-distributed data, and you are better off using a smaller number of samples with larger real intensity than a large number of samples all with low intensity.
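A toy numerical experiment makes the sub-pixel argument concrete. The 1-D sketch below is my own illustration (a Gaussian spot stands in for the true PSF, and noise is ignored): it integrates a spot onto unit-pitch pixels and measures the centroid error for a spot much smaller than a pixel versus one spread over many pixels.

```python
import math
import numpy as np

def centroid_error(true_center, sigma, n_pixels=21):
    """Integrate a Gaussian spot onto unit pixels and return the
    absolute error of the center-of-mass (centroid) estimate."""
    edges = np.arange(n_pixels + 1.0)        # pixel boundaries at integers
    # flux collected by each pixel = difference of the Gaussian CDF at its edges
    cdf = np.array([0.5 * (1 + math.erf((e - true_center) / (sigma * math.sqrt(2))))
                    for e in edges])
    flux = np.diff(cdf)
    centers = edges[:-1] + 0.5               # pixel center coordinates
    estimate = float((flux * centers).sum() / flux.sum())
    return abs(estimate - true_center)

sharp = centroid_error(10.3, 0.05)   # spot far smaller than a pixel
blurred = centroid_error(10.3, 2.0)  # "defocused" spot spanning many pixels
print(sharp)    # ~0.2 px: all flux lands in one pixel, centroid snaps to its center
print(blurred)  # far below a pixel: many pixels constrain the center
```

This reproduces the star-tracker logic: when the spot is smaller than a pixel, the centroid is quantized to the pixel center, while spreading the light recovers sub-pixel accuracy. With realistic noise the comparison shifts back toward focusing, as the answer above notes.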
STACK_EXCHANGE
# frozen_string_literal: true

require 'update_repo/version'
require 'update_repo/helpers'
require 'confoog'
require 'optimist'
require 'yaml'

# This class will encapsulate the command line functionality and processing
# as well as reading from the config file. It will also merge the 2 sources
# into the final definitive option hash.
module UpdateRepo
  # This class takes care of reading and parsing the command line and conf file
  class CmdConfig
    include Helpers

    # This constant holds the path to the config file, default to home dir.
    CONFIG_PATH = '~/'
    # This constant holds the filename of the config file.
    CONFIG_FILE = '.updaterepo'

    def initialize
      # read the options from Optimist and store in temp variable.
      # we do it this way around otherwise if configuration file is missing it
      # gives the error messages even on '--help' and '--version'
      temp_opt = set_options
      @conf = Confoog::Settings.new(filename: CONFIG_FILE,
                                    location: CONFIG_PATH,
                                    prefix: 'update_repo',
                                    autoload: true, autosave: false)
      config_error unless @conf.status[:errors] == Status::INFO_FILE_LOADED
      @conf['cmd'] = temp_opt
      check_params
    end

    # return the configuration hash variable
    # @param [none]
    # @return [Class] Returns the base 'confoog' class to the caller.
    # @example
    #   @config = @cmd.getconfig
    def getconfig
      @conf
    end

    # return the specified TRUE configuration variable, using '[]' notation
    # @param key [string] Which cmd variable to return
    # @return [various] The value of the specified cmd variable
    # @example
    #   quiet = @cmd[quiet]
    def [](key)
      true_cmd(key.to_sym)
    end

    # This will return the 'true' version of a command, taking into account
    # both command line (given preference) and the configuration file.
    # @param command [symbol] The symbol of the defined command
    # @return [various] Returns the true value of the command symbol
    # ignore the :reek:NilCheck for this function
    def true_cmd(command)
      cmd_given = @conf['cmd']["#{command}_given".to_sym]
      cmd_line = @conf['cmd'][command.to_sym]
      # if we specify something on the cmd line or there is no setting in the
      # config, that takes precedence
      return cmd_line if cmd_given || @conf[command.to_s].nil?

      # otherwise return the config variable
      @conf[command.to_s]
    end

    # make sure the parameter combinations are valid, terminating otherwise.
    # @param [none]
    # @return [void]
    def check_params
      return unless true_cmd(:dump)

      Optimist.die 'You cannot use --dump AND --import'.red if true_cmd(:import)
      Optimist.die 'You cannot use --dump AND --dump-remote'.red if true_cmd(:dump_remote)
    end

    private

    # terminate if we cannot load the configuration file for any reason.
    # @param [none]
    # @return [integer] exit code 1
    def config_error
      if @conf.status[:errors] == Status::ERR_CANT_LOAD
        print 'Note that the default configuration file was '.red,
              "changed to ~/#{CONFIG_FILE} from v0.4.0 onwards\n".red
      end
      exit 1
    end

    # Set up the Optimist options and banner
    # @param [none]
    # @return [void]
    # rubocop:disable Metrics/MethodLength
    # rubocop:disable Layout/LineLength
    # rubocop:disable Metrics/AbcSize
    def set_options
      Optimist.options do
        version "update_repo version #{VERSION} (C)2022 G. Ramsay\n"
        banner <<-OPTION_TEXT
  Keep multiple local Git-Cloned Repositories up to date with one command.

  Usage:
        update_repo [options]

  Options are not required. If none are specified then the program will read
  from the standard configuration file (~/#{CONFIG_FILE}) and automatically
  update the specified Repositories.

  Options:
        OPTION_TEXT
        opt :color, 'Use colored output', default: true
        opt :dump, 'Dump a list of Directories and Git URL\'s to STDOUT in CSV format', default: false
        opt :prune, "Number of directory levels to remove from the --dump output.\nOnly valid when --dump or -d specified", default: 0
        # opt :import, "Import a previous dump of directories and Git repository URL's,\n(created using --dump) then proceed to clone them locally.", default: false
        opt :log, "Create a logfile of all program output to './update_repo.log'. Any older logs will be overwritten", default: false
        opt :timestamp, 'Timestamp the logfile instead of overwriting. Does nothing unless the --log option is also specified', default: false
        opt :log_local, 'Create the logfile in the current directory instead of in the users home directory', default: false, short: 'g'
        opt :dump_remote, 'Create a dump to screen or log, listing all the git remote URLS found in the specified directories', default: false, short: 'r'
        opt :dump_tree, 'Create a dump to screen or log, listing all subdirectories found below the specified locations in tree format', default: false, short: 'u'
        opt :verbose, 'Display each repository and the git output to screen', default: false, short: 'V'
        opt :verbose_errors, 'List all the error output from a failing command in the summary, not just the first line', default: false, short: 'E'
        opt :brief, 'Do not print the header, footer or summary', default: false, short: 'b'
        opt :quiet, 'Run completely silent, with no output to the terminal (except fatal errors)', default: false
        opt :save_errors, 'Save any Git error messages from the last run for future display', default: false, short: 's'
        opt :show_errors, 'Show any Git error messages from the last run of the script', default: false, short: 'S'
        opt :noinetchk, 'Do not check for a working Internet connection before running the script', default: false, short: 'n'
      end
    end
    # rubocop:enable Metrics/MethodLength
    # rubocop:enable Layout/LineLength
    # rubocop:enable Metrics/AbcSize
  end
end
STACK_EDU
ARIMA requires stationarity, but it generates trends - paradox? If a data set is stationary, does it mean it has no trend? Can we use ARIMA or AR models if there is no trend in the data? If there is an AR term, it means that our current value depends on previous data, and hence there will be some trend, as future values are dependent on previous ones. So in that scenario, we should have a trend in our data if we want to use ARIMA or AR models. Please clarify. ARIMA and AR models can apply to both stationary processes and non-stationary processes. Note that the definition of a stationary process discusses the joint probability distribution. So they can't have a 'trend' in the traditional sense (trending up or trending down), but they do have a tendency of returning to the mean value. (If they didn't, it wouldn't be possible to satisfy the stationarity relationship.) While they can't have global structure, they can have local structure, because what is preserved is the joint probability distribution. Imagine an 'oscillating' series where the next observation can be predicted to be positive or negative with high probability; that can still be stationary so long as the oscillation relationship doesn't change over time and the oscillation is damped. Why is it then that we always make the data stationary, when ARIMA can be applied to both types of data? The whole premise of the I in ARIMA is to treat non-stationary data. What's the need then? What needs stationarity is ARMA. The I is in there to reduce a non-stationary series to a stationary ARMA process. The base condition to use AR, MA or ARMA models is that your data should be stationary. If your data is not stationary, we need to difference the data to make it stationary; that's what the 'I' stands for in the AR-I-MA model. To know the trend component in your data, you need to decompose the time-series data. Please run the below R code to understand better.
library(forecast)
births <- scan("http://robjhyndman.com/tsdldata/data/nybirths.dat")
# Converting data into time-series data
birthstimeseries <- ts(births, frequency = 12, start = c(1946, 1))
plot.ts(birthstimeseries)
# Decomposing the time-series data into seasonal, trend, observed and random components
plot(decompose(birthstimeseries))
# Building an ARIMA model
ar.model1 <- auto.arima(birthstimeseries)
ar.forecast1 <- forecast.Arima(ar.model1, h = 12)
plot.forecast(ar.forecast1)

Sources: https://www.youtube.com/watch?v=Aw77aMLj9uM&index=8&list=PLUgZaFoyJafhB73-1JUTRT0y5u_5fjFCR http://a-little-book-of-r-for-timeseries.readthedocs.io/en/latest/src/timeseries.html

I don't think this answer actually addresses the question's two key concerns at all: is it true that stationary data has no trend? Does an AR time series have a trend? My question is different: I wanted to understand whether stationary data means having no trend. Because if you have a trend, then your mean sales will change over time and your variance could also change. If my definition of stationarity is right and stationary data means no trend, then is it not contradictory that we cannot use ARIMA on stationary data, but we need to make non-stationary data stationary to use ARIMA?
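To illustrate the point about the "I" step with something minimal (my own sketch, in Python with only numpy rather than R): differencing removes a deterministic trend, leaving a series whose mean no longer depends on time, which is exactly what the AR/MA part needs.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
trend = 0.5 * t                        # deterministic upward trend
noise = rng.normal(0.0, 1.0, size=200)
y = trend + noise                      # non-stationary: the mean grows with t

d = np.diff(y)                         # first difference, the d=1 in ARIMA(p,1,q)

# The mean of y shifts massively between the first and second half (the trend),
# while the differenced series has a time-stable mean near the trend slope 0.5.
print(y[:100].mean(), y[100:].mean())
print(d[:100].mean(), d[100:].mean())
```

This also shows why a stationary series can't have a trend in the "trending up" sense: if the mean were drifting, the halves of the sample would disagree, as they do for y but not for d.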
STACK_EXCHANGE
SystemC Language Working Group (LWG)

This group is responsible for the definition and development of the SystemC core language, the foundation on which all other SystemC libraries and functionality are built.

Chair: Laurent Maillet-Contoz, ST Microelectronics
Vice-Chair: Andrew Goodrich, Cadence Design Systems

Together with the release of IEEE 1666-2011 "Standard SystemC Language Reference Manual," the SystemC Language Working Group (LWG) released version 2.3.0 of the open source proof-of-concept library at no charge to the worldwide electronic design community. This implementation is fully compatible with IEEE 1666-2011 and includes, in a single library, support for transaction-level modeling (TLM), a critical approach to enable higher-level and more efficient design of complex ICs and SoCs. Since then, three maintenance releases (versions 2.3.1, 2.3.2 and 2.3.3) of the SystemC proof-of-concept implementation have been published, providing a number of new features beyond the current IEEE Std. 1666-2011. This enables the SystemC community to gain experience from practical use looking towards the next revision of IEEE 1666. In addition to bug fixes and addressing errata, the most notable new features beyond IEEE Std.
1666-2011 include:
- Initial C++11/14 support (2.3.2)
- Extended simulation phase callbacks (2.3.1)
- Initialization of signals at construction time (2.3.1)
- Extended hierarchical name registry (2.3.2)
- Improved synchronization mechanism with external processes (2.3.2)
- Improved sc_time conversions and operations (2.3.1, extended in 2.3.2)
- Improved error and exception handling (2.3.1, extended in 2.3.2)
- Improved VCD tracing support, including hierarchical scopes, events and time values (2.3.2)
- Improved support for dynamically linked TLM-2.0 models, including Windows DLL support (2.3.2)
- New template-independent interfaces for TLM-2.0 sockets (2.3.2)
- New optionally bound TLM-2.0 convenience sockets (2.3.2)
- New macro SC_NAMED to simplify naming of SystemC objects/events (2.3.3)

The open source proof-of-concept releases were re-licensed under the Apache 2.0 License, enabling easier adoption of parts of the implementation in derived products. Documentation has been reorganized for clarity, and there is a new document highlighting the features added for compatibility with the latest version of the SystemC standard. The library, installation notes and readme files have been updated to support installation on the latest operating systems and compilers, including support for C++11 and C++14. To ensure a high-quality release, the library has been reviewed and tested by members of the SystemC Language Working Group, and feedback from the public review has been incorporated in the release. Current activities in the LWG are centered on new features enabled by modern C++ language standards (C++11, C++14), improved datatype implementations within a dedicated SystemC Datatypes sub-working group, and extensions towards better integration of the control, configuration and inspection features developed by the SystemC CCI Working Group.
Join this Working Group If you are an employee of an Accellera member company and wish to participate in this working group, please log in or create an account in the Accellera Workspace. Once you are logged in to the Workspace, select "View Workgroups", select SystemC Language Working Group, and click the Join button.
OPCFW_CODE
Workers spend about 20 percent of their time searching for content or tracking down others who can help answer their questions and solve their problems. Intranets-in-a-box and Digital Workplace tools try to solve this by customizing search services, building intranets, “organizing” their information architecture, social apps and communication tools… however, this normally results in a huge investment, long delivery times and a higher dependency on technically skilled resources. OpenSky’s Intelligent Workplace services put the end-user experience at the core to deliver an integrated experience across the broad Office 365 toolset by deploying intelligent intranets and teamwork spaces that embrace the continuity between collaboration and communication, automate data classification and eliminate the need for switching between applications. Our services are ahead of Microsoft when it comes to delivering an integrated experience, and make every relevant business object’s information (e.g. customers, personnel, projects, cases) and its related documents available in SharePoint through an automated and standardised process based on your CRM/ERP data. Our workspaces integrate directly into the user interfaces of Outlook, Microsoft Office, and Microsoft Dynamics, making SharePoint document management features easily accessible in the right context. Our intelligent workspaces leverage Cognitive Services, Machine Learning and best-of-breed workflow engines for task automation and a “natural” and “human” way of looking for information. OpenSky’s Intelligent Workplace services leverage tools and services that align with the SharePoint and Office 365 roadmap; for example, we can integrate SharePoint modern sites into our modern workspaces. Our extensibility framework also means it is possible for you to perform some customisations on your intranet without impacting your ability to upgrade in the future.
SharePoint content also remains in SharePoint, and so it utilizes SharePoint security mechanisms should you wish to choose a different product in the future. Our solutions ensure that governance is at the core of every product and service we use, guaranteeing it is supported by the following levels of governance:

Technical: supporting excellent performance, security, compatibility, etc.
Content: ensuring your content meets standards, is up to date and is findable
Collaboration: ensuring collaboration runs smoothly and optimally, following organizational recommended practices and rules
Strategic: supporting the strategic direction and decision-making on your intranet

While the traditional analytical tools that comprise basic business intelligence (BI) examine historical data, our advanced analytics leverage Machine Learning for predictive and prescriptive analysis, putting the focus on forecasting future events and behaviours and enabling Government Agencies & Regulators to conduct what-if analyses to predict the effects of potential changes in business strategies and make better-informed decisions. Machine Learning is very powerful: applied well, it provides models with the ability to automatically learn and improve from experience without being explicitly programmed. However, it can take months of work setting up different pipelines and selecting different algorithms and parameters to find the combination that delivers the best results. Our Data Scientists use automated machine learning as a tool to greatly reduce the hard, repetitive work, producing simpler solutions, faster creation of those solutions, and models that often outperform those designed by hand. At OpenSky we believe that data is the foundation of AI, and we recognise that there is a need for insight before intelligence.
OpenSky’s Business Intelligence services leverage Microsoft’s business application platform, including Power BI, for analysis that mixes horizontal data, covering common business concepts, with vertical industry-specific data, allowing Government Agencies & Regulators to explore their data looking for insights that can be used to build new machine-learning (ML) models for inclusion in existing and/or new applications. In conclusion, an analytics platform represents an important but only small part of the overall solution for a comprehensive information intelligence platform. You need first to use modern toolkits, ensure your core processes are digital, automate your data classification and, finally, leverage a powerful analytics platform coupled with advanced machine learning. Not an easy task for sure; however, working with experts like OpenSky will allow you to approach it in stages, with the end goal always in sight: an improved and more effective operating model with fast decision-making.
OPCFW_CODE
Have now bought a couple of licences. But I still can't use my two main applications, Word and MindManager, to patch into the Save As dialogue box and predefined folders as target destinations. In Word's Save As I have History, My Documents, Desktop, Favorites and My Network Places. If I click on Favorites, all I see are links to Desktop and Local Disk. Can anyone help and tell me a) where do I create my folders (or links to them) so I can save the document, and b) when I am in my application and invoke the Save As dialogue, where should I see the favorites? This is driving me crazy. I tried adding favorites from the DOpus favorites menu. They show up there but not in the Save As dialogue box.

I'm not really following what you're trying to do for sure, nor am I familiar with MindManager. However, depending upon your version of Windows, and perhaps Word, you may be able to configure your Save As dialog quick locations by adding them to "My Places". Doing that is easy, and has nothing to do with Opus. In the Word Save As dialog box look in the upper right hand corner for "Tools". Click that and you can add or delete locations on the left side of the Save As dialog box. If you do not see the "Tools" button in the Save As dialog box, you can still customize the locations, however it involves some registry tweaking. Plus the Windows PowerToy TweakUI also offers a way to customize your favorite locations.

If you've installed Pascal Hurni's DOpus Favorites shell extension then you should see a DOpus Favorites item below the Desktop in the Save-As dialogs. Within that you'll find sub-folders for the different things which the shell extension can expose (Favorites, Smart Favorites, Recents and Aliases), as well as a flat view of whichever items you've chosen to put in the root DOpus Favorites folder.
You can change what's put in the root level, as well as icons and some other stuff, via the config dialog you get by right-clicking the DOpus Favorites icon on the Desktop and selecting Properties.

Thank you v. much John and Nudel. I think I am getting the hang of it now. Haven't been able to try yet because I am backing up, so I am not 100% sure yet. I like the overall programme for recognising files and folders, especially file collections. Was hoping they could be colour-coded, but haven't been able to find a way yet. Some small steps to progress.

Is this any use for colour coding? [Color Folder Icons]

I am just wondering how this is installed, as it looks as if it could be very useful for me. Have been thru the tutorials, very helpful!

How what's installed? (Or do you mean you're not wondering anymore?) The colour folder icons I linked don't need anything special installed, but if you want to change all icons by default, instead of a few specific folders, then that's different.

I was referring to the installation of the icon package for customising folder colours. So I am still wondering, or more specifically I don't know how to "install" the package to change folder colours.

There's nothing to install. Icon customizing (for individual folders) is built into Windows and Opus supports the same system. You just need to follow the steps listed on the download page below the screenshot of the folder icons. Not sure which version of Windows added the Customize tab in Properties. It might be a Windows XP only thing but maybe it's been there for a while, I don't know for sure. You only get the Customize tab in the Properties dialog for a folder if it's a normal folder (not Desktop or anything special like that) and you have write-access to it (you can't customize a folder that you cannot write to).

I'll give higher priority to the other problems I am experiencing; this colouring functionality is a luxury now.
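For reference, the per-folder icon customization discussed above is stored by Windows as a desktop.ini file inside the folder, which Windows only honours if the folder itself has its read-only or system attribute set (e.g. `attrib +s FolderName`). A hand-written sketch of the relevant entries, with a hypothetical icon path:

```
[.ShellClassInfo]
IconFile=C:\Icons\BlueFolder.ico
IconIndex=0
```

This is the same mechanism the Customize tab writes for you, which is why no separate installation is needed for the icon package.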
OPCFW_CODE
A Brief Overview of Optimal Robust Control Strategies for a Benchmark Power System with Different Cyberphysical Attacks. Complexity 2021:1-10 (2021)

Abstract: Security against different attacks is a core topic of cyberphysical systems. In this paper, optimal control theory, reinforcement learning (RL), and neural networks (NNs) are integrated to provide a brief overview of optimal robust control strategies for a benchmark power system. First, the benchmark power system models with actuator and sensor attacks are considered. Second, we investigate the optimal control issue for the nominal system and review the state-of-the-art RL methods along with the NN implementation. Third, we propose several robust control strategies for different types of cyberphysical attacks via the optimal control design, and stability proofs are derived through Lyapunov theory. Furthermore, the stability analysis with the NN approximation error, which is rarely discussed in previous works, is studied in this paper. Finally, two different simulation examples demonstrate the effectiveness of our proposed methods.
OPCFW_CODE
Estimated Frequency of Severe Thermal Stress in the 2050s under an IPCC business-as-usual emissions scenario: The grid reflects the estimated frequency of severe thermal stress (NOAA Bleaching Alert Level 2) for the decade of the 2050s. Values are a percent (as an integer) of the decade in which the grid cell would experience severe thermal stress under an IPCC "business-as-usual" emissions scenario. The specific indicator used in the model was the frequency (number of years in the decade) that the bleaching threshold is reached at least once. Frequencies were adjusted to account for historical sea surface temperature variability. Values range from 0 to 100. See the Reefs at Risk Revisited report and technical notes for more information.

Developed by the Oak Ridge National Laboratory (ORNL), the LandCast 2050 High-Resolution Population Projection models future national-level human population densities. The models estimate the probability of a population being at a particular location, which measures where people will likely be in the future, not necessarily their places of residence. The LandCast 2050 data set is an empirically informed spatial distribution of the projected population of the contiguous U.S. for 2050, compiled on a 30” x 30” latitude/longitude grid. Population projections of county-level numbers were developed using a modified version of the U.S. Census’ projection methodology, with the U.S. Census’ official projection as the benchmark...

This layer represents the direction of housing density change in 2050 as compared to 2010. It was created by subtracting bhc2050 from bhc2010. Negative values represent an increase in housing density (e.g. a shift from '2' in year 2010 to '3' in year 2050). 0 indicates no change. Positive values indicate a shift to a lower (less dense) category (e.g. moving from '3' to '2'). ICLUS v1.3 Housing Density for the Conterminous USA.
The data are classified into descriptive categories for general analytic and cartographic purposes:
99 = Commercial/Industrial
4 = <0.25 acres/unit = "urban"
3 = 0.25 to 2 acres/unit = "suburban"
2 = 2 to 40 acres/unit = "exurban"
1 = >40 acres/unit = "rural"
Climate and land-use change... This data layer was created by subtracting is2050rclss (representing projections for year 2050 under A2) from is2010reclss (representing projections for year 2010) to create a difference-in-percent-impervious-surface layer. Negative values represent an increase in percentage by 1, 2, or 3 levels, 0 indicates no change, and positive values represent a decrease in impervious levels by 1 or 2 levels. Levels refer to the values, 1-7, of is2050rclss and is2010reclss, created by reclassifying the source rasters such that 0%=1, 0.01-9.5%=2, 9.5-19%=3, 19-29%=4, 29-38%=5, 38-48%=6, 48-58%=7. Description from original file: ICLUS v1.3 Estimated Percent Impervious Surface for the Conterminous USA. Pixel values are projected...
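The reclassification and differencing described above can be sketched in a few lines of Python (an illustration only, not the original GIS workflow; the function name and sample values are mine, and capping values above 58% at level 7 is an assumption the source text does not state):

```python
# Map percent impervious surface onto the 7 levels described above:
# 0%=1, 0.01-9.5%=2, 9.5-19%=3, 19-29%=4, 29-38%=5, 38-48%=6, 48-58%=7.
def reclassify(pct):
    if pct == 0:
        return 1
    for level, upper in enumerate([9.5, 19, 29, 38, 48, 58], start=2):
        if pct <= upper:
            return level
    return 7  # assumption: anything above 58% stays in the top level

# Per-cell differencing: 2010 level minus 2050 level, so a negative
# value means impervious surface increased by that many levels.
diff = reclassify(5.0) - reclassify(25.0)  # 2 - 4 = -2: an increase
```

Applied raster-wide (cell by cell), this yields the same negative/zero/positive encoding of change the description gives.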
OPCFW_CODE
Those who read my posts know that I have been a Windows Phone user since it launched in late-2010. I like the platform a lot, and do believe it is more efficient for the way I use a smartphone. Before I switched to Windows Phone, I used an iPhone 3GS. Since then, my exposure to iOS has been through my iPad (1 and 2) and my iPod Touch. However, those iOS devices are at most used for an hour a day, so it is not fair to use that to compare against the Windows Phone platform. So, when I recently got an opportunity to get an AT&T iPhone 4S, I jumped on it. I decided to give it my full attention, use it as my primary(-ish) phone for some time, and compare and contrast iOS with Windows Phone after actually using it. I figured, rather than compare specs on paper, which anybody can, it would be better to compare usage. With that in mind, I present this new series, where I will talk about various aspects of using Nokia Lumia 800 Windows Phone vs. using an iPhone 4S. My intention is to look at the common tasks one performs with a smartphone and how they differ across these two platforms. This is not so much of a “competition” to determine who “wins”, it is more of a comparison to identify the tasks where one platform may excel and the other may not. I plan to break the series into the following: - Out of the box experience, and set up - Screen: size and quality - Common tasks - Email, Calendar - Music (specifically, podcasts, which I use a lot) - Phone calling - Notification Center - Battery life - App browsing, discovery and purchasing What I do not want to do is: - Look at hard specs like cores, PPI, version of bluetooth supported, etc. If any of these happen to make it more difficult for me to do normal things, I will point them out. - I am going in with the assumption that we are going to live in a heterogeneous world where I may have a Windows PC and related apps along with my iPhone or iPad. 
As a result, I will try to stay away from stuff that is clearly going to remain “Apple-only”. For example, iMessage, or certain aspects of iCloud which do not carry over to, say, a Windows Phone, like contacts and calendar sync. There are other platform-specific tie-ins with Windows Phone like Xbox LIVE Achievements, which again, I won’t go into. I am genuinely excited both to try the iPhone 4S (it’s been about 2 years since I used an iPhone) and to compare that experience to how I do things on my Lumia. Is there anything specific you would like me to look at in this experiment? Let me know!
OPCFW_CODE
Linux: amd64 AppImage fails to launch Describe the bug amd64 AppImage fails to launch: I can see a black window for a fraction of a second, and then it just dies. From the console the following log is produced: ./zmk-studio_0.1.0_amd64.AppImage Could not create surfaceless EGL display: EGL_BAD_ALLOC. Aborting... zsh: IOT instruction (core dumped) ./zmk-studio_0.1.0_amd64.AppImage To Reproduce Try to launch the AppImage Expected behavior I would expect the AppImage to launch. Screenshots If applicable, add screenshots to help explain your problem. Environment: Manjaro with KDE (Plasma 6.1.5)/Wayland, kernel 6.11.2-4-MANJARO App amd64 AppImage release 0.2.0 Also fails in an X11 session: ./zmk-studio_0.1.0_amd64.AppImage Gtk-Message: 04:53:33.012: Failed to load module "xapp-gtk3-module" Could not create surfaceless EGL display: EGL_BAD_ALLOC. Aborting... zsh: IOT instruction (core dumped) ./zmk-studio_0.1.0_amd64.AppImage I am using an AMD 7900XT with the open amdgpu driver. I have a similar issue with the AppImage. The app starts but the window is blank. Gtk-Message: 10:37:46.524: Failed to load module "xapp-gtk3-module" Could not create default EGL display: EGL_BAD_PARAMETER. Aborting... ** (app:8069): WARNING **: 10:38:13.563: atk-bridge: get_device_events_reply: unknown signature Environment: CachyOS (Arch) Kernel: 6.11.6-2-cachyos X11/i3 GPU: AMD 7900XT App-Version: 0.2.2 I'm also having an issue with the AppImage window being blank. When run from the terminal I see similar output, only difference is I'm missing that first line: Could not create default EGL display: EGL_BAD_PARAMETER. Aborting... ** (app:33041): WARNING **: 17:52:39.447: atk-bridge: get_device_events_reply: unknown signature Environment Fedora Linux 41 Workstation Edition Kernel: 6.11.7-300.fc41.x86_64 GNOME DE (Wayland session) App version: 0.2.3 Hardware: Framework Laptop 13 AMD CPU: AMD Ryzen™ 7 7840U GPU: AMD Radeon™ 780M Can folks give the environment variables mentioned here a try? 
https://github.com/getyaak/app/issues/75#issuecomment-2344749813 Just gave that a try, still got the same errors in terminal and same viewport behavior With the variables, the behavior changes - I get new errors: WEBKIT_DISABLE_COMPOSITING_MODE=1 WEBKIT_DISABLE_DMABUF_RENDERER=1 ./zmk-studio_0.1.0_amd64.AppImage Gtk-Message: 14:44:11.568: Failed to load module "xapp-gtk3-module" Gtk-Message: 14:44:11.799: Failed to load module "xapp-gtk3-module" Could not create surfaceless EGL display: EGL_BAD_ALLOC. Aborting... ** (app:130420): WARNING **: 14:44:36.852: atk-bridge: get_device_events_reply: unknown signature And a white screen that's just open forever (as opposed to a black window that blinked on and disappeared before): Please try the downloaded artifact from #92 and let me know if it fixes things for you as well. If so, I will get it merged and released. Thanks! The window contents now render correctly! Thank you for getting that working. I haven't had time to use it extensively yet but I was able to get to the studio unlock step, so that's a good sign. Doesn't seem to be working for me yet, unfortunately. Will report back once I can do a bit more testing. #from actions for PR 92 ./zmk-studio_0.2.3_amd64.AppImage Cannot get default EGL display: EGL_SUCCESS Cannot create EGL context: invalid display (last error: EGL_SUCCESS) zsh: segmentation fault (core dumped) ./zmk-studio_0.2.3_amd64.AppImage #0.1.0 release ./zmk-studio_0.1.0_amd64.AppImage Gtk-Message: 23:41:03.998: Failed to load module "xapp-gtk3-module" Could not create surfaceless EGL display: EGL_BAD_ALLOC. Aborting... zsh: IOT instruction (core dumped) ./zmk-studio_0.1.0_amd64.AppImage However, WEBKIT_DISABLE_COMPOSITING_MODE=1 WEBKIT_DISABLE_DMABUF_RENDERER=1 ./zmk-studio_0.2.3_amd64.AppImage actually works now. Is there a way to use those variables as defaults? 
#from actions for PR 92 ./zmk-studio_0.2.3_amd64.AppImage Cannot get default EGL display: EGL_SUCCESS Cannot create EGL context: invalid display (last error: EGL_SUCCESS) zsh: segmentation fault (core dumped) ./zmk-studio_0.2.3_amd64.AppImage #0.1.0 release ./zmk-studio_0.1.0_amd64.AppImage Gtk-Message: 23:41:03.998: Failed to load module "xapp-gtk3-module" Could not create surfaceless EGL display: EGL_BAD_ALLOC. Aborting... zsh: IOT instruction (core dumped) ./zmk-studio_0.1.0_amd64.AppImage However, WEBKIT_DISABLE_COMPOSITING_MODE=1 WEBKIT_DISABLE_DMABUF_RENDERER=1 ./zmk-studio_0.2.3_amd64.AppImage actually works now. Is there a way to use those variables as defaults? Let me research the impact before we try forcing the values universally. Thanks for testing! If it matters, I launched the AppImage as root, not sure if that'd affect things for you but it's worth a shot It really shouldn't require that. Can you please test running as a normal user? Yeah, it does launch but it gives me issues with "permission denied" errors when selecting a device. Forgot to mention that, my bad, that was just a separate issue I worked around but need to look into. Yeah, it does launch but it gives me issues with "permission denied" errors when selecting a device. Forgot to mention that, my bad, that was just a separate issue I worked around but need to look into. Does your user have correct permissions to the /dev/ttyACM# device? That was the issue, good catch! Had to add my user to the dialout group and restart, it now launches as expected and remaps keys successfully. #from actions for PR 92 ./zmk-studio_0.2.3_amd64.AppImage Cannot get default EGL display: EGL_SUCCESS Cannot create EGL context: invalid display (last error: EGL_SUCCESS) zsh: segmentation fault (core dumped) ./zmk-studio_0.2.3_amd64.AppImage #0.1.0 release ./zmk-studio_0.1.0_amd64.AppImage Gtk-Message: 23:41:03.998: Failed to load module "xapp-gtk3-module" Could not create surfaceless EGL display: EGL_BAD_ALLOC. 
Aborting... zsh: IOT instruction (core dumped) ./zmk-studio_0.1.0_amd64.AppImage However, WEBKIT_DISABLE_COMPOSITING_MODE=1 WEBKIT_DISABLE_DMABUF_RENDERER=1 ./zmk-studio_0.2.3_amd64.AppImage actually works now. Is there a way to use those variables as defaults? I've merged the build fix. I'm hesitant to try to enable those by default until: We've tested the impact more broadly. We understand scope of conditions that make this required. For now, I'd rather add this to our (planned) Studio troubleshooting documentation.
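Until a project-level default lands, one user-side way to make those variables "defaults" (a sketch only, not an official recommendation from the maintainers; the AppImage path is an example) is a small wrapper script, which a .desktop launcher can then point at:

```shell
#!/bin/sh
# WebKitGTK workarounds discussed above, exported before launching the app.
export WEBKIT_DISABLE_COMPOSITING_MODE=1
export WEBKIT_DISABLE_DMABUF_RENDERER=1
# Adjust this path to wherever you keep the AppImage.
exec "$HOME/Apps/zmk-studio_0.2.3_amd64.AppImage" "$@"
```

This keeps the workaround local to one machine, which matches the maintainer's caution about not forcing the values universally.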
GITHUB_ARCHIVE
Discussion in 'other anti-malware software' started by sm1, Mar 4, 2011.

o ok, thanks

Actually, I found out a button was created recently (after my post). It offers quick access to disabling, testing, and information on it, but is useless for me. Removing it by dragging it into the customize toolbar window didn't work; it reappears on every new window. Therefore, I dragged it onto an empty toolbar and hid that. Here it is.

I don't know about the rest of you guys, but for me it slowed down my browsing... Uninstalled.

Not much difference in browsing speed for me.

Someone I know installed it and now wants to uninstall it. Is the uninstaller in the Windows Add/Remove or do you just use the Firefox Add-ons' uninstaller? Thanks in advance.

It's in Add or Remove.

It's visible on all 3 browsers I have, IE, Chrome and FF. Oh well, I think I can live without G Data cloud. Yeah, I used Revo.

When will this support Firefox 4.0? Does it support Chrome? According to the website it supports only IE and Firefox.

I can't tell, for the obvious reasons - I'm not part of their team - but, according to their FAQ, they're planning on supporting it. I guess you already knew that, though.

You guessed right. I may switch over to something else if it doesn't support Firefox 4 soon.

Removed, because it was causing connection and page rendering issues, as well as false positives.

Slow browsing and did nothing... G Data keeps creating heavy things ^^

I tried G Data CloudSecurity for about 5 minutes and it caused Firefox to lock up, so I uninstalled it and switched to avast, trusting in their web shield, network shield, behavior shield, and auto-sandbox.

By combining Bitdefender with the old, slow, resource-hungry avast v4 engine, they managed to create one of the heaviest things around. Why they are keeping this tradition with an in-the-cloud product totally evades me.
To sum up the experience, the browsing felt like the old Adblock (classic, not Plus) with about every possible filter list out there loaded, on a congested connection with huge latency. Uninstalled.

I have never encountered any browser extension/plugin which provides better protection than just some common sense; furthermore, they add an annoying button or even a toolbar. In other words: in my eyes all those security/privacy extensions/plugins are useless. The only security/privacy extension which is worth the try is Adblock Plus for Google Chrome; on Mozilla Firefox 4, Adblock Plus also ain't that nice anymore. Internet Explorer 9 doesn't need any security/privacy add-on since it has the SmartScreen filter and Tracking Protection Lists.

IE9's anti-tracking tool flawed – Microsoft should try harder

So, that brings us to the next one: Privacy protection and IE9: who can you trust? Well, let's see how the guys describe themselves: Wonderful. Exactly what we needed.

I have already read them both a while ago; it doesn't concern me in any single way, to be honest. Just use the EasyList only, and you don't have any problems.

It does concern $average_joe; that is really what matters. For most people who are at least savvy enough to discover the feature, it's natural to think adding everything is the best. By doing that, they will effectively nullify any other block lists. Seriously, the TRUSTe junk is completely misplaced there and needs to be removed. More privacy is NOT about mass whitelisting advertising crap but rather the exact opposite. Did they pay MS for including them? Looks pretty likely. If you add allow and deny rules into Windows Firewall, the deny naturally wins. Needs to be exactly the same here. Anyway -> off-topic here.

I do agree with you that the TRUSTe list is useless crap, and should be removed from that page. I will agree with you that it is a fat download (primarily due to the Bitdefender signature file).
However, as you will read in other threads here on Wilders, this baby sheds its weight very quickly once it is installed. Now, with regard to the engines running in the background, we are ALWAYS reviewing the performance of other scanners. We will move to switch if it will improve our software's performance. That's one reason we don't openly discuss which engines we use (though you all know!). For the typical end user, if we decide to make the switch, it will be totally transparent to them. Not sure what you are doing, but it doesn't slow your browsing experience down.

Well, unfortunately it does. And not just for me.

It's compatible with FF4 now ...
|24/10/2008 05:19 Xene:| |"Cannot load module 'SQLite' because required module 'pdo' is not loaded"|

I haven't been able to figure out why I can't get PHP to work properly. I've tried a manual install as well as the MSI (Apache 2.2.x), then pointing smallsrv to php-cgi.exe after editing the configuration:

display_startup_errors = On
arg_separator.input = "&"
variables_order = "GPECS"
register_globals = On
magic_quotes_runtime = On
enable_dl = On
cgi.force_redirect = 0
force_redirect = 0

The line ;pfpro.defaultport = 443 must be commented out, as I was told, but it still returns this error, or if the error isn't returned the page loads scrambled. I also cannot find "force_redirect = 0" or the line ";pfpro.defaultport = 443" in the php.ini file (PHP 5.x.x latest). I've tried commenting/uncommenting both sqlite and pdo, but to no avail. Does anyone have a sure-fire way of getting PHP to work with smallsrv? (The same PHP works fine with Apache, but Apache just doesn't have the features smallsrv does.) www.xene.isa-geek.com/ is the page I'm trying to get it working on, for a small forum for some friends; any help would be greatly appreciated.

|24/10/2008 11:39 Xene:| |The solution found was to use php-cgi.exe and edit the settings above (if some are missing from your ini, that's alright), copy the php.ini to the Windows folder, and copy all non-php-x files inside C:\php\ to C:\windows\system32|

I used www.peterguy.com/php/install_IIS6.html#phpInstall to install PHP initially, then changed minor things. The .dll's currently uncommented are php_mysql.dll, php_mysqli.dll, and php_openssl.dll, and it now appears to work with smallsrv.

|25/10/2008 14:32 Max:| |Copying files to SYSTEM32 is effective, but not a very good solution. Adding the PHP path to the system PATH environment variable is probably enough.
(Right-click "My Computer", then Properties, then "Environment" on one of the tabs.)|

If you would like to use php5isapi.dll instead of php-cgi.exe: in some versions of PHP, php5isapi.dll must be in the same directory as php5ts.dll. Also you may try to add registry key:

|21/03/2009 00:41 AnrDaemon:| |Cannot load module 'SQLite' because required module 'pdo' is not loaded|

This has it all right in front of you; I don't know why you can't see it. You must have the PDO module loaded before you try to load the SQLite module. Check the loading order in the INI file.
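AnrDaemon's point about loading order comes down to the sequence of the `extension=` lines in php.ini: the PDO core must appear before any extension that depends on it. A sketch for PHP 5.x on Windows (the exact .dll file names vary between PHP 5.x releases, so treat them as assumptions and match them against your own `ext` directory):

```ini
; php.ini -- extension load order matters:
; the PDO core must be loaded before any extension that depends on it.
extension_dir = "C:\php\ext"
extension=php_pdo.dll         ; PDO core first
extension=php_pdo_sqlite.dll  ; SQLite PDO driver second
extension=php_sqlite.dll      ; legacy SQLite extension, also needs PDO loaded
```

If php_sqlite.dll is uncommented but php_pdo.dll is commented out or listed after it, PHP reports exactly the "required module 'pdo' is not loaded" error from the thread.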