| url (string, 13–4.35k) | tag (1 class) | text (string, 109–628k) | file_path (string, 109–155) | dump (96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
http://www.emis.de/journals/SADIO/vol.1.1/art1.html
|
code
|
Structural Testing of Active DataBases
Departamento de Computación, FCEyN, Universidad de Buenos Aires,
Active databases (ADBs) are databases that include active components or agents that can execute actions.
The rise of active databases in software development has had a great impact on software systems and on the discipline of software engineering.
However, we still lack the foundations that are needed to adequately support this new tool.
These foundations are needed in order to properly apply known software engineering techniques to ADBs and systems that use them.
Among the methods and techniques used to improve quality, we count systematic testing.
In this work, we generalize structural testing techniques to ADBs.
We introduce a model of active databases, called the dbgraph, suitable for this purpose.
We show that dbgraphs can be used to generalize structural testing techniques for ADBs.
Moreover, we introduce several new structural criteria aimed at finding errors in a set of rules for an ADB.
We also compare the strength of the coverage criteria presented in this work.
Supported in part by UBACyT grant EX186.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948519776.34/warc/CC-MAIN-20171212212152-20171212232152-00265.warc.gz
|
CC-MAIN-2017-51
| 1,120
| 13
|
http://www.pcmag.com/article2/0,2817,1821851,00.asp
|
code
|
Supported Platforms: Microsoft Windows 2000 & XP
The first thing you'll want to do when you run XpanDesk is to set it up for how you want to work. Click the Options button at the bottom of the utility's window (also known as the deskview). Here you can specify how many desktops you'd like to have; whether you'd like the desktops to be accessible from the system tray or the desktop context menu, or both; if you'd like the deskview to dock and, if so, to which location (top, bottom, left, or right); and whether the deskview itself should have its own background image, which you can specify.
Once you've set up the global options, you'll then want to configure each desktop. From the deskview, select Configure next to the button for the corresponding desktop. The first option you can modify is the desktop name. You can give each desktop a unique name, which will be displayed on the button in the deskview and the desktop context menu. For example, you could name one desktop, "Web Tools" and another, "Music".
You also have the option to assign a hotkey to each desk, such as Ctrl+Alt+1. We've found this to be the easiest and most productive way to move between desktops: faster than opening the deskview and selecting the corresponding desk, clicking the system tray icon (which rotates sequentially through the desktops), or using the desktop context menu.
Each desktop can have its own specific wallpaper and color. Simply click Use custom desktop appearance and you can specify the wallpaper you'd like to use and whether you'd like it centered, tiled, or stretched. If you center the wallpaper, the border will be the selected color from the Desktop color dropdown. Or you can choose no wallpaper and just use a custom color.
Low resolutions can be great for word processing and e-mail programs, but they're an absolute drag with, for instance, Photoshop. On large graphics, you have to scroll the image within the window in order to work on individual parts. Why should you have to choose just one resolution? This new version of XpanDesk supports multiple resolutions, so you can have 800x600 for one desktop and 1024x768 in another (or whatever resolutions your monitor can support).
XpanDesk can also save your icon locations for a particular desktop. This is especially important if your desktops have varying resolutions. If desk 1 is 1024x768 and you switch to desk 2, which is 800x600, any icons that extend past the border of the 800x600 screen will automatically be moved by Windows to fit in the screen area. If you then switch back to desk 1, these icons would remain in the same place Windows moved them to on desk 2. So in order to keep your icons locked in the positions where they originally were (on the 1024x768 desk), you need to select Save and restore icon locations for this desktop.
Inside this Utility
PCMag's Utility Library
- Block Spyware Before It Starts
- Manage Every File on Your Hard Drive
- And lots more!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123048.37/warc/CC-MAIN-20170423031203-00141-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 3,023
| 12
|
http://longmeadowspitchandputt.com/dovetail-furniture-wqd/f30b5d-install-openbox-on-raspberry-pi
|
code
|
A thorough instructional on how to build a locked-down web kiosk from a Debian Linux installation on a Raspberry Pi. We're going to install OpenCV on the Raspberry Pi for Python projects. The Ubuntu Server image is much smaller; you can install flavours of the Ubuntu Desktop on top of it, and it gives you access to the Ubuntu CLI and, by extension, all of the latest open source. By editing /etc/xdg/openbox/menu.xml and clearing out all the items between the tags, you can empty the default menu. Basically, what I want to do is start an Openbox session on boot and have Openbox autostart a GUI application. However, because of its size it only works on the Raspberry Pi 4 models with 4GB or 8GB of RAM. Note that this script takes around 3 times longer on a Raspberry Pi 2 compared to a Raspberry Pi 3. The Raspberry Pi is a credit-card-sized computer that runs Linux and can be plugged into a PC monitor or TV. If you are buying a Raspberry Pi, buy a Raspberry Pi 3 or a later model. If you have updated the firmware recently, you can skip this guide and jump straight to the flashing procedure. Unfortunately, the steps detailed here are out of date as of Feb 2020. When Openbox launches at startup it will run two scripts in the /etc/xdg/openbox folder. This article tells you how to install the current snapshot version of OpenBSD on the Raspberry Pi 3. First, I downloaded the Raspberry Pi Imager tool for Windows and used it to install Pi OS onto the SD card. Installing Etcher is a very simple task. After updating, the first thing you're going to have to do is download the zip file. It must be installed on a Raspberry Pi 4, Raspberry Pi 3 Model B+, Raspberry Pi 3 Model B or Raspberry Pi 2 Model B computer. We completed this tutorial on installing OpenMediaVault on a Raspberry Pi 4 running a clean version of Raspbian Buster.
Installing OpenMediaVault to a Raspberry Pi: the Pi-mote control board is a low-cost and simple wireless controller, dedicated to the Raspberry Pi computer and Energenie ENER002 RF-controlled mains sockets. The Pi 1 was a sensation. Note: there are two ways to install this: pip install (30 seconds) or compile from source (hours). We are covering the pip install here because it's fast and easy; it will work for most OpenCV projects, and it's an easy solution. I used Raspberry Pi Imager v1.3 to put Raspberry Pi OS Lite (32-bit) (2020-05-27 release) onto an ... sudo apt-get install --no-install-recommends xserver-xorg x11-xserver-utils xinit openbox; install the Chromium browser with sudo apt-get install --no-install-recommends chromium-browser. I'm working on a tutorial for the Raspberry Pi 4B to run Mesa Ethernet cards, but to my surprise the latency is much better than I... - Page 3, LinuxCNC Forum. openHAB is the leading open-source home automation hub; I struggled with openHAB's installation instructions. From this point on, I followed the steps listed in the user guide to install ESXi on the Raspberry Pi. This page ranks quite highly in search results for people looking for networking help on Raspberry Pi. It does what the native lxpanel does but lists programs missing from lxpanel. I am using a Raspberry Pi 4 4GB with Raspberry Pi OS Buster, two screens, and a USB speakerphone. We benefit hugely from resources on the web, so we decided we should try to give back some of our knowledge and resources to the community by opening up many of our company's internal notes and libraries through mini sites like this. First of all, make sure that your Raspberry Pi meets the following requirements. The Openbox window manager will be used to launch the Chromium browser.
Even if it's probably the heaviest desktop environment available for Raspbian, KDE works pretty well on the latest Raspberry Pi 4; if you want a modern interface or are a fan of KDE on a desktop PC, it can be a good choice. Manjaro ARM is officially supported by the Manjaro ARM team and comes with the Plasma desktop. Because of its low price, its small form factor and its low energy consumption, the Raspberry Pi is a quite popular platform for openHAB. The first Raspberry Pi computer came out on February 29, 2012.
Before installing anything, make sure your system is fully up to date with sudo apt-get update and sudo apt-get upgrade. A minimal Openbox desktop can be installed with sudo apt-get install rox-filer feh openbox tint2; Openbox provides the "boxes" in which other programs are run, and tint2 is a lightweight panel that does what the native lxpanel does while offering several different styles of menu for launching applications. When Openbox launches at startup it runs the two scripts in the /etc/xdg/openbox folder, which you can use to set environment variables and autostart programs; typing "startx" then brings the system up as before.
You will need a microSD card with a capacity of 8GB or greater: remove it from your computer and insert it into the Raspberry Pi before booting, and use a monitor and keyboard for the initial device configuration. If that is not possible, you can set up a headless install with the Raspbian Lite image and SSH in. Connect your ethernet cable to the Pi if you are on a wired network (the Pi 3 and 4 have WiFi inbuilt; version 1 and 2 do not). Compiling from source allows optimizations that pip and apt-get don't offer. Note that the Zoom application is not supported by its developers on the Raspberry Pi, due to its low-power hardware, and there are no plans to develop it for this platform in the near future; one reported issue is that keyboards and mice that the Raspbian system recognized well were not recognized under other images.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00339.warc.gz
|
CC-MAIN-2021-25
| 14,241
| 2
|
https://feedback.telerik.com/teststudio/1377918-data-drive-json-and-xml-in-load-tests
|
code
|
Currently, load tests do not decode JSON and XML from the HTTP requests and responses captured for load testing. Decode these in the captured traffic and offer them as dynamic target options in the load test configuration.
Posted on: 13 Jun 2018 13:34
Hi, this is implemented in Test Studio R2 2018, released in June. Here you can take a look at the feature's documentation - https://docs.telerik.com/teststudio/features/testing-types/load-testing/dynamic-targets#custom-dynamic-targets
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107883636.39/warc/CC-MAIN-20201024135444-20201024165444-00220.warc.gz
|
CC-MAIN-2020-45
| 485
| 3
|
https://mc.ai/intersection-over-union%E2%80%8A-%E2%80%8Aobject-detection-evaluation-technique/
|
code
|
Original article was published on Deep Learning on Medium
Intersection over Union — Object Detection Evaluation Technique
This article will describe the concept of IoU in any Object Detection Problem. It will also walk you through the application of the same through python code.
What is IoU?
As we know, any object detection algorithm, be it RCNN, Faster RCNN or Mask RCNN, will always draw a bounding rectangular box around the object which we want to detect in an image.
IoU actually stands for Intersection over Union. It is basically an evaluation metric. Any algorithm that provides predicted bounding boxes as output can be evaluated using IoU.
In order to apply Intersection over Union to evaluate an object detector we need:
- The ground-truth bounding boxes (i.e., the hand labeled bounding boxes from the validation set that specify where in the image our object is). Technically Ground-Truth are the actual coordinates of the object that we get from it’s corresponding image’s annotation file (xml or csv).
- The predicted bounding boxes from our model.
Now, how do we draw this Ground-Truth Box manually ?
Although there are many open-source annotation tools available on the Internet, I'll be using the most commonly used one here for demonstration purposes: LabelImg.
Steps to be followed are as below:
- Open Command Line Terminal in your local PC and clone the git repository as shown below.
git clone https://github.com/tzutalin/labelImg.git
Alternatively, you can download the labelImg-master zip file directly from the github link above.
2. I am using a Windows Machine with Anaconda preinstalled. So, I just executed the commands below one by one.
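The commands themselves are not reproduced in the article; as a reference sketch, the labelImg repository's README lists roughly these steps for a Windows machine with Anaconda (run from the cloned labelImg directory):

```shell
# Install the GUI toolkit and XML parser into the Anaconda environment
conda install pyqt=5
conda install -c anaconda lxml

# Compile the Qt resource file, then launch the application
pyrcc5 -o libs/resources.py resources.qrc
python labelImg.py
```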
3. It will open the LabelImg Application.
Click on Open Dir and browse to the folder which has all the training images.
4. Select an image from the file list and click on Create Rect Box.
5. Drag the mouse pointer to draw a rectangular box around the object and enter the class name. For my case, it is kangaroo. After that, click on the Save icon to save this annotated xml file.
6. Likewise, we have to create annotated xml files by labelling the objects for all the images.
For cases, where there are two kangaroos in an image, we’ll draw two boxes and name the classes as kangaroo for both the boxes.
For cases, where there is one kangaroo and one horse in an image, we’ll draw one box for the kangaroo and one for the horse and name the classes as kangaroo and horse respectively.
7. Now, if we open any of the xml files, we’ll be able to see the actual coordinates of the objects annotated.
In the above annotation xml file, we have two kangaroos present in one of the images.
Now let’s come back to the concept of IoU.
Formula for IoU
If the IoU is greater than a threshold, say 0.5, then the object is a kangaroo; otherwise, it is not an object of our interest. So, for the above case, the object within the predicted box is not an object of our interest.
Now for the intersection area, we have to get the coordinates of the intersection rectangle in the python program.
Let us define the get_iou function.
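The code itself is not reproduced in the text; a minimal sketch of such a function, assuming each box is given as a [x_min, y_min, x_max, y_max] list, might look like:

```python
def get_iou(bb1, bb2):
    """Compute Intersection over Union for two boxes in [x_min, y_min, x_max, y_max] form."""
    # Coordinates of the intersection rectangle
    x_left = max(bb1[0], bb2[0])
    y_top = max(bb1[1], bb2[1])
    x_right = min(bb1[2], bb2[2])
    y_bottom = min(bb1[3], bb2[3])

    if x_right < x_left or y_bottom < y_top:
        return 0.0  # the boxes do not overlap at all

    intersection = (x_right - x_left) * (y_bottom - y_top)
    area1 = (bb1[2] - bb1[0]) * (bb1[3] - bb1[1])
    area2 = (bb2[2] - bb2[0]) * (bb2[3] - bb2[1])

    # Union = sum of the two areas minus the doubly counted intersection
    return intersection / float(area1 + area2 - intersection)
```

Identical boxes give an IoU of 1.0, disjoint boxes give 0.0, and partial overlaps fall somewhere in between.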
Now as our get_iou function is ready, let us see how we can fetch the coordinates of the ground truth or actual boxes from the annotation file.
First, we need to convert the XML file to CSV
If there were two kangaroos in an image, then we would have got two rows in the converted CSV file, just like below.
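The conversion script is not shown in the text; a minimal sketch, assuming Pascal VOC-style XML as produced by LabelImg (elements named filename, object, name, and bndbox), could be:

```python
import xml.etree.ElementTree as ET

def xml_to_rows(xml_path):
    """Parse one LabelImg (Pascal VOC) annotation file into a list of
    [filename, class, xmin, ymin, xmax, ymax] rows, one per annotated object."""
    root = ET.parse(xml_path).getroot()
    filename = root.findtext("filename")
    rows = []
    for obj in root.findall("object"):  # two kangaroos -> two rows
        box = obj.find("bndbox")
        rows.append([
            filename,
            obj.findtext("name"),
            int(box.findtext("xmin")), int(box.findtext("ymin")),
            int(box.findtext("xmax")), int(box.findtext("ymax")),
        ])
    return rows
```

Writing these rows out with csv.writer then yields one CSV line per annotated object, which is what the per-row loop in the next step iterates over.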
Now, let’s loop through the rows (in our case, number of rows =1) and fetch the x and y coordinates for both the objects. After fetching, we will append the values to an array, named as bb1.
Just for the demonstration of IoU in this article, I am not going to actually predict the values for bb2 (predicted rectangular box coordinates) through any algorithm. Rather, I would be passing hard-coded values manually.
Now, let’s call the IoU function to check for the IoU values.
For images with two objects or kangaroos, we would have got two IoUs as shown below.
Let us now do the same process while simultaneously plotting the boxes for ground-truth (actual) and predicted coordinates, on the original image.
The calculated IoU is coming as 0.0149. Let us change the values of predicted rectangular box coordinates (bb2) and check again.
The new IoU value of the detected object is now 0.6997.
In real-time object detection problems, we set a threshold for the IoU value (0.5), as mentioned earlier, so that the predicted rectangular box appears only when the IoU value is above the threshold.
That's it for this article. I hope by now you have clearly understood the concept of IoU in object detection problems. In my subsequent articles, we will discuss how to predict the values for the predicted rectangular box through various algorithms such as RCNN, Faster RCNN, or Mask RCNN, instead of passing hard-coded values.
Thank you !!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130531.89/warc/CC-MAIN-20200930235415-20201001025415-00568.warc.gz
|
CC-MAIN-2020-40
| 4,874
| 43
|
http://www.virtualteen.org/forums/showpost.php?p=119459
|
code
|
I might cut.
Well, I live in Australia and we get an end-of-year report card showing how well we did in every subject, for both homework and classwork. Classwork also covers exams and so on. Anyway, you get a rating out of 4.0. 4.0 is the worst possible point average, whereas 1.0 is the best. Each year I get around a 2.2, which isn't too bad. But this year, if I do much worse and my dad punishes me in any way, I might try cutting.
I already feel depressed all the time, mainly because I get in trouble a lot at school and so on. My friend tried cutting, and he said it was really good.
So what do you guys think I should do?
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541839.36/warc/CC-MAIN-20161202170901-00483-ip-10-31-129-80.ec2.internal.warc.gz
|
CC-MAIN-2016-50
| 623
| 4
|
https://ro.player.fm/series/frontend-first/server-side-rendering-vs-server-components
|
code
|
Sam and Ryan explore different ways to think about the RSC architecture, including what problems RSC solve, why RSC are valuable even in a world without server-side rendering, and how React’s reconciliation phase enables RSC to make partial updates to the UI as a result of server-side events.
- 0:00 - Intro
- 5:45 - What if RSC were introduced before SSR?
- 10:54 - What does it mean to render RSC?
- 25:41 - Why SSR does not apply to Server Components
- 35:31 - Server-driven UI updates
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099892.46/warc/CC-MAIN-20231128151412-20231128181412-00479.warc.gz
|
CC-MAIN-2023-50
| 531
| 7
|
https://www.daniweb.com/hardware-and-software/microsoft-windows/threads/438806/system-clock-playing-up
|
code
|
Hi, I am experiencing a problem with the system clock. Basically, the laptop in question, an Asus, had its battery removed a while ago. A few months ago, I noticed something was wrong with the clock. Every time I unplug the adapter while the laptop is off, the system clock defaults to 00:00 and the date to 2009. This seems to have implications for negotiating the connection and browsing websites (I get a certificate error quite often). Anyway, I can easily reset the clock in the BIOS and it works fine until, again, I disconnect the adapter. On booting, there is a message coming up, something about a file missing.
What do you guys reckon this is due to? I'm not entirely sure how to fix it. It looks like when the power source is cut off the clock goes crazy, so I am thinking of putting the battery back in, even if it doesn't hold any charge, and trying with that. Failing that, what do you reckon I should do?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863407.58/warc/CC-MAIN-20180620011502-20180620031502-00114.warc.gz
|
CC-MAIN-2018-26
| 906
| 2
|
https://www.cmcrossroads.com/article/function-point-analysis-and-agile-methodology
|
code
|
Dan Horvath explains how function point analysis (FPA), in combination with other metrics, provides reliable and accurate measures that may be invaluable to an agile development organization.
Software development and maintenance can be a challenging endeavor. Whether and how you measure productivity and quality, as well as how you go about doing project estimation, are among the major concerns. Function Point Analysis (FPA), in combination with other metrics, provides reliable and accurate measures that may be invaluable to the organization. Together, these methods can reduce risk and ensure project success by providing an accurate account of the effort required to complete the project. Function points measure the amount of business functionality an information system provides to a user. Such measurement may take place before (as an estimate), during, or after the project development, as required.
Although FPA is recognized by the International Organization for Standardization (ISO) as an industry-standard measurement method, the use of new software tools, methods, and technologies raises questions about whether FPA is applicable as a measurement method. Many organizations use function points as part of their waterfall software or systems lifecycle, and FPA can work just as well with agile. This article, the first of a three-part series on how to use FPA to improve your software development process, will demonstrate that FPA is a valid measurement of agile software development.
History and Definitions
Software metrics and particularly FPA are closely linked with project management; measurement takes place before, during, and after various project phases. Project management, in turn, is intimately concerned with the software development methodology being used. Consequently, in order to produce accurate results when considering estimating techniques and productivity measurement, software metrics are often tied to the software development methodology.
A project is a temporary endeavor that has a defined beginning and end and is undertaken to meet unique, specific goals and objectives, generally for the purpose of causing required changes or adding value. A project's output is a product of some kind. Traditional waterfall software development methodologies fit well with this definition of a project. The concepts of a beginning and an end are essential both to the methodologies and to the management of the project. Once a project is defined, software metrics may be gathered and applied to the overall measurement information. Project metrics are meaningless without well-defined projects.
Rapid Application Development
Software development methodologies were originally derived from some form of waterfall process, where one phase followed another. They began to evolve further in the 1990s, when Rapid Application Development (RAD) was developed by Dan Gielan and James Martin and gained popularity. RAD methodology uses minimal planning in favor of rapid prototyping. The "planning" of software developed using RAD is interleaved with writing the software itself. This generally enables RAD projects to be implemented in a shorter period of time.
There are several types of RAD methodologies, some of which are listed below. Although most of the methodologies foster the reuse of software, distributed development, and small team structure, many RAD practitioners recognize that there is no single "rapid" methodology that provides an order-of-magnitude improvement over any other development methodology.
Application development using RAD generally favors smaller projects that can be developed in sizeable pieces. This has a considerable influence on the concept of a project, but agile, because of shorter release cycles and more rigorous definition, has an even greater impact.
Some of the “flavors” of RAD methodology include:
- Agile Software Development: Agile software development features extremely small release cycles by breaking up software projects into mini-increments. Real-time, face-to-face communication is ideal for agile software development. Documentation is developed when it needs to be developed, which may be at the completion of a project.
- Extreme Programming (XP): XP is a set of development practices, first defined by Kent Beck in the late 1990s, that features short iterations, close teamwork, and customer satisfaction. Although this terminology predates agile, it is now considered a form of agile.
- Joint Application Design/Development (JAD): JAD is closely related to RAD, the primary difference being that JAD emphasizes the crucial role of the customer, and customers are actively involved in design activities.
- Scrum: Like XP, Scrum terminology and definitions predate those of agile. Scrum is now considered one of the leading agile software development practices. Scrum development is organized into a series of short iterations, or sprints, carried out by self-organizing teams.
This article focuses on applying FPA to agile software development, but the same principles can be applied to all of the above methodologies.
Minim OSD Quick Installation Guide — Copter documentation

Minim OSD Quick Installation Guide: MinimOSD "On-Screen Display" is a small circuit board that pulls telemetry data from your APM or Pixhawk flight controller and overlays it on your First Person View monitor. This article provides brief instructions for how to connect the board. For more detailed instructions please refer to the MinimOSD Project wiki.

NAZA-M LITE (PDF, dldn): NAZA-M LITE for multi-rotors is an autopilot system designed for serious multi-rotor enthusiasts, providing excellent self-leveling and altitude holding, which completely takes the stress out of flying RC multi-rotors for both professional and hobby applications. NAZA-M LITE can be installed in a variety of models from quad-rotor to hexa-rotor.
☛ Subscribe To Eros Now: http://bit.ly/SubscribeToErosNow
Jamun Full Movie Streaming On Eros Now: http://bit.ly/JamunFullMovie
Jamun is a Hindi family drama about a father who tries to find a suitable groom for his daughter, who has a squint.
Cast: Raghubir Yadav, Shweta Basu Prasad, Sunny Hinduja, Saurabh Goyal, Krishna Singh Bisht, Bijou Thaangjam, Susheel Parashar.
Producer: Vineet Rane, Ali Unwala, Kshitij Ravi Prasad.
#Jamun #RaghubirYadav #ShwetaBasuPrasad #ErosNow
👉🏻 Subscribe To Eros Now: https://erosnow.com/purchase?pl=250001
To watch more log on to https://erosnow.com
For all the updates on our movies and more:
libheif + Xcode
A wrapper for libheif + an Xcode project. This enables Carthage support to build libheif as a framework for Apple's platforms.
This repo also includes the CocoaPods spec file to use libheif.
- iOS 8
- macOS 10.9
- tvOS 9.0
- watchOS 2.0
libheif is (via this repo) available through Carthage.
libheif is available through CocoaPods.
Swift Package Manager (Xcode 11+)
libheif is available through Swift Package Manager.
let package = Package(
    dependencies: [
        .package(url: "https://github.com/SDWebImage/libheif-Xcode.git", from: "1.6.1")
    ]
)
Since most people use this library for HEIF decoding, and x265 is under the GPLv2 license, we only integrate libheif with the Carthage dependency libde265-Xcode. To use x265 for HEIF encoding, build it on your own.
Use libheif as you normally would; this is just a repo that adds an Xcode project.
libheif is available under the terms of the GNU Lesser General Public License. See the LICENSE file for more info.
“These paparazzi are thugs with cameras!”
Harry’s hatred for paparazzi is understandable but his claims of a car chase are probably exaggerated, former spokesperson for Mohammed Al-Fayed Michael Cole tells #TimesRadio.
📻 Listen to Times Radio – https://www.thetimes.co.uk/radio
📍 Subscribe to our channel – http://www.youtube.com/channel/UCTjDhFuGXlhx9Us0gq0VK2w?sub_confirmation=1
🗞 Subscribe to The Times Times.Radio/Subscribe
📲 Get the free Times Radio app https://www.thetimes.co.uk/radio/how-to-listen-to-times-radio/app
In our previous post, we introduced you to the foundational concepts of Stack-Based and Tabbed Navigation, which are central to iOS app navigation. As we dive further in, our attention turns to two critical elements: data flow and data sharing. This post will explore the inner workings of how data seamlessly traverses through your app's navigation framework and how it's shared among different components. In a nutshell, by the end of this post we will know how to pass data between screens as well as how to share data between them using SwiftUI.
If you haven’t already, I recommend you check out our previous post introducing the basics of iOS navigation using SwiftUI.
Data Flow vs Data Sharing
Data flow and data sharing are related concepts in app development, but they serve different purposes. Before we explore them in more detail, it helps to get a clearer understanding of each.
Data flow refers to the movement of data within an application, often in a unidirectional manner. It focuses on how data is passed between different parts or components of an app, typically in a structured and organized way.
- Passing data from a parent view to a child view.
- Managing the flow of user input through an app's logic.
Data sharing involves making data accessible to multiple parts of an application, often for collaboration or synchronization. Data sharing is particularly useful when multiple views or components need access to a common data source or when you want to maintain consistency across different parts of the app.
- Sharing a user's authentication status or profile data across various views.
- Allowing different tabs of a tabbed interface to access a common dataset.
In summary, data flow primarily deals with how data is passed and updated within an app, focusing on the flow of data from one point to another. Data sharing, on the other hand, deals with making data accessible and consistent across various parts of the app, allowing multiple components to work with the same dataset. While they are related, they serve different aspects of data management and communication within an application.
Data flow in Stack-Based Navigation
Continuing from the previous section, data flow thrives on structure and organization. One way to bring structure into our data flow is by making it unidirectional, where information moves in a single, clear direction. This approach enhances the scalability and maintainability of our applications. With unidirectional data flow, it becomes more straightforward to reason about data interactions and ensure the long-term health of our app.
In our previous introductory post, we learned how the hierarchical nature of Stack-Based navigation inherently provides a sense of structure. Consequently, when working with SwiftUI's NavigationStack, data flow often takes center stage. To shed further light on this concept, let's explore two common data flow scenarios encountered when using a navigation stack:
1. Passing data from a parent view to a child view
2. Passing data from an ancestor view to a descendant view deep in the navigation stack
Passing data from parent view to child view
The first case is generally the most common scenario encountered when using stack-based navigation. We have a parent screen that presents a child screen using a NavigationStack, however, the child screen needs some data from the parent screen in order to display the correct information. We can typically pass data directly from the first screen to the second through the initializer as shown in the code below:
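The code listing referenced here appears to have been lost during extraction. A minimal sketch of the pattern being described, in which everything except the names `Article` and `ArticleDetailView` (taken from the surrounding text) is a hypothetical reconstruction:

```swift
import SwiftUI

// Hypothetical model; only the names Article and ArticleDetailView
// come from the surrounding text.
struct Article: Hashable {
    let title: String
    let body: String
}

struct ArticleListView: View {
    let articles: [Article]

    var body: some View {
        NavigationStack {
            List(articles, id: \.self) { article in
                // The child screen receives the data it needs
                // directly through its initializer.
                NavigationLink(article.title) {
                    ArticleDetailView(article: article)
                }
            }
        }
    }
}

struct ArticleDetailView: View {
    let article: Article   // supplied by the parent at init time

    var body: some View {
        Text(article.body)
            .navigationTitle(article.title)
    }
}
```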
In our example code above we pass the Article data needed by the ArticleDetailView directly through the initializer. This approach helps maintain a clear contract between the parent and child views, making your code more robust and easier to understand. However, in more complex navigation structures, you may encounter scenarios where data needs to be passed from an ancestor view down to a descendant view.
Passing data from ancestor view to descendant view
This is the second case which you may encounter when you have several screens added to your navigation stack. This scenario often arises in apps with complex hierarchies or when information must be passed across multiple screens deep within your application. There are two practical options in SwiftUI:
1. Pass data down through the initializers of each screen (as we did above)
2. Use SwiftUI's @Environment property wrapper
Although the initializers of views offer a straightforward means of passing data, this approach can quickly become unwieldy, particularly as your navigation stack grows. Moreover, it becomes less practical when dealing with multiple data types or when not all descendant views rely on the same data. To address these challenges and simplify the data flow process, SwiftUI provides us with the powerful @Environment decorator. With @Environment, we can ensure that the right data is accessible to the right views without the need for explicit initializer-based passing.
Imagine we're building a note-taking app that allows users to organize their notes into various categories such as “Work”, “Personal”, and “Hobbies”. Each category has its own unique color theme to provide a visual distinction. All views within a category should make use of the same color corresponding to their category. Additionally, when users navigate into the details of individual notes, we want to display the current category name.
Here's where @Environment proves invaluable. By utilizing @Environment, we can effortlessly propagate the selected category's name and color theme throughout the view hierarchy, ensuring a cohesive and user-friendly experience across all notes within the same category.
First, we have to define the @Environment values we need. We start by defining types that conform to the EnvironmentKey protocol, giving us keys by which to access the values in our environment. Then we extend the EnvironmentValues struct with a new property for each new value. Following is the code used to define the categoryColor environment value we will use later to propagate our category colors (categoryName follows exactly the same process):
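The key-definition listing appears to have been dropped during extraction. A sketch of what it likely contained, with default values of our own choosing (only the value names `categoryColor` and `categoryName` come from the text):

```swift
import SwiftUI

// Hypothetical key types; only the value names categoryColor and
// categoryName come from the text.
private struct CategoryColorKey: EnvironmentKey {
    static let defaultValue: Color = .accentColor
}

private struct CategoryNameKey: EnvironmentKey {
    static let defaultValue: String = ""
}

extension EnvironmentValues {
    var categoryColor: Color {
        get { self[CategoryColorKey.self] }
        set { self[CategoryColorKey.self] = newValue }
    }
    var categoryName: String {
        get { self[CategoryNameKey.self] }
        set { self[CategoryNameKey.self] = newValue }
    }
}
```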
Once we have our environment keys and properties defined, we can utilize them to pass data down from our root screen (ancestor view) as shown below:
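The root-screen listing is missing here as well. A hedged reconstruction, assuming the categoryColor/categoryName environment properties described in the text have been defined (the view names `CategoryRootView` and `NotesListView`, and the color mapping, are invented for illustration):

```swift
import SwiftUI

// Placeholder for the notes list pushed onto the stack.
struct NotesListView: View {
    var body: some View { Text("Notes") }
}

struct CategoryRootView: View {
    let selectedCategory: String   // e.g. "Work", "Personal", "Hobbies"

    // Hypothetical mapping from category to its theme color.
    private var themeColor: Color {
        switch selectedCategory {
        case "Work":     return .blue
        case "Personal": return .green
        default:         return .orange
        }
    }

    var body: some View {
        NavigationStack {
            NotesListView()
        }
        // Steps 1 and 2: place the values in the stack's environment
        // so every screen pushed later can read them.
        .environment(\.categoryColor, themeColor)
        .environment(\.categoryName, selectedCategory)
    }
}
```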
1. Set the color and name values corresponding to the selected category.
2. Set the environment values for category color and name in our NavigationStack using the environment(_:_:) view modifier.
At this point, we have successfully placed the categoryColor and categoryName values within our NavigationStack's environment, ensuring that any subsequently added screens can readily access and utilize them.
Accessing the required data from child and descendant screens becomes trivial:
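The listing is again absent from the extracted text. A sketch of a descendant screen reading the two values, assuming the categoryColor and categoryName environment properties defined earlier in the post (`NoteDetailView` is a hypothetical name):

```swift
import SwiftUI

// Hypothetical detail screen deep in the stack.
struct NoteDetailView: View {
    // Read the values the root screen placed in the environment;
    // no initializer plumbing was needed on the way down.
    @Environment(\.categoryColor) private var categoryColor
    @Environment(\.categoryName) private var categoryName

    var body: some View {
        VStack(spacing: 8) {
            Text(categoryName)
                .font(.headline)
            Divider()
            Text("Note contents…")
        }
        .foregroundStyle(categoryColor)
    }
}
```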
As we can see, we were able to pass two data types down our navigation stack effortlessly without needing to pollute the initializers of any of our views. Our code remains clean and more easily maintainable, yet we are able to support a more complex data flow scenario.
Now that we’ve covered data flow within the context of a navigation stack, it is now time to explore a common challenge involving tabbed navigation.
Data Sharing in Tabbed Navigation
Within the domain of tabbed navigation, the need to share data seamlessly across diverse sections of our app quickly becomes apparent. This need often stems from the importance of keeping data synchronized across tabs, as is evident with user profile settings: when users adjust their settings, they expect the changes to be reflected immediately across all facets of the app. Another example is e-commerce applications, where the shopping cart must be accessible and editable across various tabs for a fluid and intuitive shopping experience.
To help understand how we can tackle the task of data sharing when using a TabView in SwiftUI, let’s imagine we are developing a health app. In this app we have a workouts tab, nutrition tab, and settings tab. In the workouts tab users can track exercises like running, cycling, etc. The nutrition tab is used to input foods and beverages users consume. Both tabs use units of measurement like miles for running and cups for water consumed. Now, we want to allow users to set their preferred measurement system like imperial vs metric through the settings tab. This setting should be reflected by the units used in the workouts and nutrition tabs.
As in the previous section, we could use initializers to pass the shared data to each of our app's tabs. But as we discussed, this method can quickly become unwieldy as the amount and complexity of the data we need to share increases. Luckily, we can use the cousin of @Environment: @EnvironmentObject.
@EnvironmentObject is a property wrapper similar to @Environment except that it holds objects that conform to ObservableObject. This is important for two reasons. First, it holds a reference type, meaning changes made in one tab can be seen by any other tab holding a reference to that object, and vice versa. Second, any view with a reference to the ObservableObject will be invalidated when it changes. This means any view with a reference to the environment object can make changes to it and be certain that those updates will be reflected immediately wherever it is used. This leads to a consistent and responsive user experience across all sections of the app, making it easier for users to track their fitness goals.
The code below demonstrates how the root screen would set the environment object for UserProfileSettings in our TabView.
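The listing is missing from the extracted text. A sketch under the assumption that `UserProfileSettings` is an ObservableObject with a published measurement-system property (the tab views and all member names are invented):

```swift
import SwiftUI

// Hypothetical shared model; only the name UserProfileSettings and the
// measurement-system idea come from the text.
final class UserProfileSettings: ObservableObject {
    enum MeasurementSystem { case imperial, metric }
    @Published var measurementSystem: MeasurementSystem = .metric
}

// Minimal tab placeholders so the sketch stands alone.
struct WorkoutsTab: View { var body: some View { Text("Workouts") } }
struct NutritionTab: View { var body: some View { Text("Nutrition") } }
struct SettingsTab: View { var body: some View { Text("Settings") } }

struct RootView: View {
    // Step 1: create the shared object once, at the root.
    @StateObject private var settings = UserProfileSettings()

    var body: some View {
        TabView {
            WorkoutsTab()
                .tabItem { Label("Workouts", systemImage: "figure.walk") }
            NutritionTab()
                .tabItem { Label("Nutrition", systemImage: "fork.knife") }
            SettingsTab()
                .tabItem { Label("Settings", systemImage: "gear") }
        }
        // Step 2: make the object visible to every tab in the hierarchy.
        .environmentObject(settings)
    }
}
```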
1. Initialize the UserProfileSettings object.
2. Supply the UserProfileSettings object to the view hierarchy using the environmentObject(_:) view modifier.
As with @Environment, the code above has now made the UserProfileSettings observable object available to all the views in the TabView via the @EnvironmentObject property wrapper.
The app's tabs now have easy access to the UserProfileSettings object and can read or edit it.
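The final listing is also missing. A sketch of two tabs reading and editing the shared object (the class is repeated here so the snippet stands alone; all member names are assumptions):

```swift
import SwiftUI

// Repeated here so the snippet stands alone; see the root-screen sketch.
final class UserProfileSettings: ObservableObject {
    enum MeasurementSystem { case imperial, metric }
    @Published var measurementSystem: MeasurementSystem = .metric
}

struct SettingsTab: View {
    @EnvironmentObject private var settings: UserProfileSettings

    var body: some View {
        // Editing the shared object here is immediately visible
        // to every other tab holding a reference to it.
        Picker("Units", selection: $settings.measurementSystem) {
            Text("Metric").tag(UserProfileSettings.MeasurementSystem.metric)
            Text("Imperial").tag(UserProfileSettings.MeasurementSystem.imperial)
        }
        .pickerStyle(.segmented)
    }
}

struct WorkoutsTab: View {
    @EnvironmentObject private var settings: UserProfileSettings

    var body: some View {
        // Any view reading the object re-renders when it changes.
        Text(settings.measurementSystem == .metric
             ? "Distance: 5 km"
             : "Distance: 3.1 mi")
    }
}
```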
Remember, any view reading the UserProfileSettings object will be updated whenever the measurement system is changed from the settings tab. As soon as users update the measurement system, the workouts and nutrition tabs will reflect the change.
As we conclude the second leg of our journey through iOS navigation using SwiftUI: we covered the basics of using a navigation stack and a tabbed interface in our first post, and in this post we learned how to manage data flow and data sharing throughout our navigation framework in a clean and maintainable fashion. Still, there is more to explore in the realm of iOS navigation with SwiftUI, as we will see in the subsequent posts of this series. If you would like to be among the first to know when the next blog post is available, please subscribe to our mailing list. Thank you for joining this exciting journey, and hope to see you in the next one!
Qt signal slot Visual Studio
The widgets supported by default in Qt are ones you have commonly seen and used, so it should not be hard to tell what each is for. Basic widget tools: widgets are grouped in the left panel of Qt Designer by position, and to place a widget on a form, you drag the corresponding widget onto it.
Qt Creator uses the cdb.exe command line debugger provided with the Debugging Tools for Windows package as part of the Windows SDK. As of Microsoft Visual Studio 2012, the Windows Kit 8 is installed along with Visual Studio, but cdb.exe is not included unless you check the Debugging Tools for Windows component in the installer. See the full list on doc.qt.io. Qt Visual Studio Tools seems to have been released directly by the Qt Company (2017.2), so why are there no reviews of it? Since everyone here seems to have downloaded QtPackage, I used that as well. In the Qt options, define the Qt installation path (you must go down to the folder that contains the bin folder). Integration with Visual Studio is available as part of the Qt Commercial Edition. Step 1: Install the License File (commercial editions only). If you have the commercial edition of Qt, copy the license file from your account on dist.trolltech.com into your home directory (this may be known by the userprofile environment variable) and rename it. In this tutorial we create, build and debug a Qt application on Linux with Visual Studio. Before you begin, make sure that VisualGDB 3.1 or later is installed. Start Visual Studio. Go to 'File->New Project'. Select 'VisualGDB->Linux Project Wizard'. Choose a name and location for the project and press 'OK'.
This repository contains information about the basic commands used on GNU/Linux systems, along with short examples. - qarmin/PodstawowePoleceniaLinux
The Raspberry Pi is a tiny and affordable computer that you can use to learn programming through fun, practical projects. Join the global Raspberry Pi community. QT Libraries for Visual Studio and the QT Add-in VisualGDB 4.1 or later We will create a basic QT application using the QT wizard, modify the main window, port the application to Linux and demonstrate the use of the cross-platform API provided by QT by listing files in the current directory.
As seen above, I first downloaded the Qt package for Visual Studio in a Cygwin Bash shell. A side note: the Qt library packaged within Cygwin is not useful for me because I need to use the Visual Studio C++ compiler. First I set the correct permissions on the file: $ chmod 755 qt-opensource-windows-x86-msvc2012_64_opengl-5.2.1.exe. Connecting in Qt 5: there are several ways to connect a signal in Qt 5. Old syntax: Qt 5 continues to support the old string-based syntax for connecting signals and slots defined in a QObject or any class that inherits from QObject (including QWidget). Hi! I mainly want to program in C++ and I want to use Qt with Visual Studio. I am currently revising C++ and later on I am going to start reading about Qt. I have a background in embedded firmware design (C and Assembly). The Visual Studio version I have is 2013 Ultimate. How do I use Qt with Visual Studio? Thanks. Alexandros. So you have to start from scratch, installing Qt for MSVS 200X (the one you have) and using that toolchain. Since Qt Creator can compile, it should offer a debugger solution on Windows. When the app crashes at run time, I usually use WinDbg attached to the crashed process and load symbol files. It is very easy to resolve some difficult bugs this way.
Oct 16, 2011 ·
- Visual Studio 2010 (not Express Edition)
- Qt 4.7.4 (howto integrate in vs2010)
- CUDA capable GPU (see nvidia)
- CUDA Toolkit 4.0 (32bit or 64bit): Developer Drivers, CUDA Toolkit, GPU Computing SDK, Parallel Nsight 2.0 (integrates into vs2010)
Howto: set up CUDA syntax highlighting: copy usertype.dat to Visual Studio as follows (Win7):
Visual Studio Tools: The Qt VS Tools allow programmers to create, build, debug and run Qt applications from within non-Express versions of Microsoft Visual Studio 2013 and later. The add-in contains project wizards, Qt project import/export support, an integrated Qt resource manager and automated build setup for the Qt Meta-Object Compiler and User Interface Compiler.
linux - Create a virtual serial port connection over TCP. I am developing an application that should be able to write to a virtual serial port and receive data from the same port from remote clients on the network. The application runs on a Linux server. I am new to serial ports and have a few questions about them. The client can establish … The UIs can then also easily be extended using Qt Quick or Qt 3D. Qt 3D Studio is not meant to replace any of our existing UI technologies (Qt Widgets, Qt Quick or Qt 3D), but will nicely complement them. There is still a lot of work left to closely integrate the code base, and especially the runtime, with the rest of Qt. Qt 5 on Windows can be configured to use either OpenGL drivers or DirectX drivers through the ANGLE library. What you want depends on your use case. The Qt project offers binary installers for both variants. OpenGL (Open Graphics Library) is a widespread industry standard for rendering 2D and 3D computer graphics; it is the de facto … Dfttest, FFT3DFilter and MVTools2 need the FFTW3 library (Windows builds). On a 64-bit Windows OS, extract the 32-bit libfftw3f-3.dll. Make a copy of it and rename it "FFTW3.dll". Place the files "libfftw3f-3.dll" and "FFTW3.dll" in the SysWOW64 folder. If you want to use the 64-bit libfftw3f-3.dll versions, then extract the 64-bit libfftw3f-3.dll.
If an installation prefix was given, type jom install, nmake install or mingw32-make install. Note: If you later need to reconfigure and rebuild Qt from the same location, ensure that all traces of the previous configuration are removed by entering the build directory and typing nmake distclean before running configure again. Parallel Builds: jom is a replacement for nmake which makes use of multiple CPU cores.
Qt in Visual Studio: connecting slots and signals doesn't work. I have installed Qt and the Qt for VS plugin. Everything works fine, UI applications compile and run, but connecting signals and slots doesn't work. I … In Visual Studio, select Project > Add Qt Class > Installed > Visual C++ > Qt > Qt GUI Class. In the Name field, enter AddDialog, and then select Add. To acknowledge the Welcome dialog, select Next. In the Base class field, enter QDialog as the base class type. I want to be able to use the Visual Studio unit testing framework to test Qt libraries created with the Qt Visual Studio add-in. Currently there is no out-of-the-box way to create a native C++ unit test with Qt Meta-Object Compiler support. The need for this arises if you want to unit-test classes that use the Q_OBJECT macro.
VS Code Emoji
Add color emoji support to VS Code on Windows 7 (and lower).
Without the extension (before)
With the extension (after)
Emojis are Unicode characters that are generally included in a font on your operating system. However, on Windows 7 and lower, they are only nominally supported: you need to have the appropriate font installed (Segoe UI Emoji), and it renders not colored emojis but system glyphs, which are black and white.
In order to enjoy emojis in VS Code, this extension was written. It does two things:
Cannot generate any coordinat on gmt project
I am new to GMT and want to make a projection using gmt project. I have coordinates with x = 784 to 801 and y = 9199 to 9200.
I want to make a projection with increment 1 using command:
gmt project -C790/9199 -E790/9200 -G1
however the result was weird:
-110 -19 0
-110 -20 1
how can I make a result like:
790 9199 0
790 9200 1
it seems like GMT automatically converts the values into latitude/longitude?
can somebody help me? I am using GMT version 5.2.1 working on ubuntu 16.04
I have escalated this to a New Issue and will have a look. I can confirm the problem. It worked fine in GMT4...
As we'll likely get an initial influx of new people now that Graduation is coming, I suggest we sort out some community guidelines in addition to the [faq] for us to be aware of before it happens.
Questions that are even slightly broad, off-topic, or open-ended: we've historically been lenient for a while, giving people time to fix them. I propose that under the new guidelines we automatically close or vote to close, and then add a comment saying that if they fix the question, they can flag for reopening. Stricter means less time wasted by people answering questions that are going to get closed, and the user may still be online when it happens. Thoughts?
We've had a few new users who sign in, add an answer which is a link to their business (sometimes subtly, sometimes very much NOT). I propose if they're just links, even if relevant, we delete. If they've added a sentence as to why it's useful, I propose we allow it, but add a comment that they should state that affiliation. If they've done both, we allow it. Thoughts?
Any immigration questions - please vote to close, and always include a link to the expatriates proposal to help. The sooner that gets off the ground, the better for us.
Chatty comments - aside from the infamous Palestine etc debates, we've generally allowed some chatty comments. Should we continue like we are, or should we always lock posts that are getting chatty comments and refer them to the chat room?
The [FAQ]. If you have a chance, please re-read it, or even just pick one section. If there's an ambiguity or somewhere where you feel the community opinion has changed over time as we've matured this site, suggest some changes.
Tag wikis / tags - any changes we want to do about these going forward? Any thoughts on country vs city etc? (like the infamous georgia-country tag?)
Above all, if anyone sees answers, questions, comments that shouldn't be there, please flag them for moderators. Those with high rep can also click review at the top and review new users' additions, I believe, although I'm not sure on the threshold for that.
Any more comments, ideas or suggestions?
Keep all your POP3 e-mail boxes SPAM free. Do not waste time and money downloading spam messages; simply delete them on the server.

Are you concerned because your email box has 20 megabytes of messages? No problem. KillSpam downloads only the email headers, allowing you to read the subject, date and several message lines. This lets you delete spam messages and download only the emails you want to read. KillSpam does not execute scripts, which means you are protected from script viruses.

Interceptor Grid Filter: with this feature KillSpam automatically detects and deletes messages that you have declared spam or virus. You indicate the undesirable messages by writing their subjects in the interceptor grid. KillSpam scans the message subjects, and those that match the interceptor grid are deleted. The match criteria can be exact, partial, case sensitive or case insensitive. You can also filter by address or by the whole spammer domain.

KillSpam can filter messages by sender address. It scans the addresses of incoming emails and compares them to unwanted addresses defined in the Filter Address Grid. If the filter and the addresses match, the messages are deleted.

The KillSpam Watcher is a program that works in tandem with KillSpam, in the background, automatically checking email accounts at regular (customizable) intervals and alerting you if you have messages. Watcher does not download any messages, but only indicates the number of emails, their size and to which account they've been sent. The Watcher is connected to KillSpam, so you can run KillSpam from the Watcher.

The interceptor grid can be saved to a file or loaded from a file.

You can download the shareware version FREE and evaluate it for 30 days.
This program is no longer available for download from our website. Please contact the author of Kill Spam at for any additional information.
C&ESAR follows a three-step submission process (abstract, proposal, final version). Evaluation and selection are done at the proposal and final-version steps. During the proposal step, a selective evaluation with a low selection rate is done on a detailed outline of the proposed article (or directly on the final version, if a final version is submitted as the proposal). During the final-version step, an evaluation with a high selection rate is done on the final versions of the accepted proposals.
- First step (abstract): title, authors and abstract of the proposals have to be registered no later than Wednesday, May 10, 2023 on EasyChair: https://easychair.org/conferences/?conf=cesar2023.
- Second step (proposal): proposals (3 to 16 pages for all types of papers) have to be submitted as a PDF file no later than Wednesday, May 17, 2023 via EasyChair. Authors will be notified of their proposal preselection by Wednesday, June 28, 2023 (a final selection will be made on the final version).
- Regular paper: If desired, authors can already submit a complete paper of up to 16 pages. However, reviewers will not be required to invest more effort at this stage than they would for a 6-page proposal.
- Short paper: If desired, authors can already submit a complete paper of up to 8 pages. However, reviewers will not be required to invest more effort at this stage than they would for a 6-page proposal.
- Extended abstract proposals must: be explicitly identified as such by the mention "(extended abstract)" in their title; explicitly identify and cite the original publication; and contain an appendix with the (anonymized) comments made by the reviewers of the original publication. If desired, at this stage, authors can submit the PDF of the original article instead of the PDF of a summary. However, reviewers will not be required to invest more effort at this stage than they would for a 6-page summary.
- Third step (final version): authors of preselected papers have to upload the final version of their paper on EasyChair by Wednesday, August 30, 2023. Authors of preselected papers commit to address reviewers’ comments in this final version. A final selection with a really high selection rate is performed at this stage.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511000.99/warc/CC-MAIN-20231002132844-20231002162844-00291.warc.gz
|
CC-MAIN-2023-40
| 2,250
| 7
|
https://www.my-mooc.com/en/mooc/fsi/
|
code
|
About the content
What is fluid-solid interaction? It is what happens when the motions of a fluid and of a solid are somehow coupled. This happens all the time: around you when leaves flutter in the wind, inside you when your heart beats, above you when the wings of a plane vibrate, under the sea... The idea behind this MOOC is to give you the basic tools to be able to predict and eventually mitigate things called flutter, galloping, sloshing, vortex-induced vibrations and added mass, to cite a few. We are going to consider all possible domains of application, such as civil engineering, aerospace engineering, nuclear engineering, ocean engineering, biomechanics and even food processing! This is why we called the course “Fundamentals of Fluid-Solid Interactions”. There are so many phenomena and so many models that we need to work together on the basic mechanisms. If you want to see how fluid-solid interactions work, and be able to use that knowledge, join us! A first session of the course ran in early 2016, with learners from over 100 countries. It is now available with subtitles in English and Chinese. See the video at http://goo.gl/YKSMnD
- Week 1 - Fundamentals
- Week 2 - A solid with a still fluid
- Week 3 - Viscosity and gravity effects
- Week 4 - Coupling with a fast flow
- Week 5 - Coupling with a slow flow
- Week 6 - Coupling with any flow
Emmanuel de Langre
IMSIA, ENSTA Paristech
École Polytechnique combines research, teaching and innovation at the highest scientific and technological level in the world to meet the challenges of the 21st century. The leading French engineering school for over 200 years, its training promotes a culture of multidisciplinary scientific excellence, open to a strong humanist tradition.
It was founded in 1794 by the French National Convention under the name École centrale des travaux publics, and militarised in 1804 by Napoleon I. Originally located in Paris, the school has been in Palaiseau (Essonne) since 1976, at the heart of the Paris-Saclay technology cluster. It has the status of a public scientific, cultural and professional establishment (EPSCP-GE), is a military grande école whose engineering course is supervised by the Ministry of the Armed Forces and is a founding member of the Paris Polytechnic Institute.
Coursera is a digital company offering massive open online courses, founded by Stanford University computer science professors Andrew Ng and Daphne Koller and located in Mountain View, California.
Coursera works with top universities and organizations to make some of their courses available online, and offers courses in many subjects, including: physics, engineering, humanities, medicine, biology, social sciences, mathematics, business, computer science, digital marketing, data science, and other subjects.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511000.99/warc/CC-MAIN-20231002132844-20231002162844-00026.warc.gz
|
CC-MAIN-2023-40
| 2,808
| 14
|
https://superuser.com/questions/848964/bluetooth-headset-is-paired-but-cant-stream-music
|
code
|
I want to play music over Bluetooth with my headset.
- My headset works on my Android devices.
- There are no driver errors on Windows.
- The headset appears to be connected.
The operation "List To Music" is disabled for some reason.
I have searched a lot on google and could not find the fix for my problem.
In the image you can see my device info:
My system is Windows 7 SP1 64-bit.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998708.41/warc/CC-MAIN-20190618083336-20190618105336-00318.warc.gz
|
CC-MAIN-2019-26
| 375
| 5
|
https://www.excelroot.com/post/delete-and-re-install-the-nginx-in-the-odoo-server
|
code
|
Tek Siong, Hock
Delete and Re-install the Nginx in the Odoo server
Sometimes, due to unforeseen circumstances, there can be issues with the HTTPS certificate on an Odoo installation server. The errors vary, and a quick fix is simply to remove nginx and reinstall it cleanly.
sudo apt-get remove nginx nginx-common
sudo apt-get purge nginx nginx-common
After that, just re-install nginx and set up HTTPS again (e.g., with Let's Encrypt).
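Putting the removal and reinstall together, a cautious sketch might look like this. The `run` wrapper, the `certbot` re-issue step, and the `example.com` domain are assumptions, not part of the original post; `DRY_RUN=1` (the default) only prints each command so the sequence can be reviewed before anything is actually removed.

```shell
# DRY_RUN=1 (default) prints commands instead of executing them.
DRY_RUN="${DRY_RUN:-1}"
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}
run sudo apt-get remove -y nginx nginx-common
run sudo apt-get purge -y nginx nginx-common
run sudo apt-get autoremove -y               # drop now-orphaned dependencies
run sudo apt-get update
run sudo apt-get install -y nginx
run sudo systemctl enable --now nginx
# Re-issue the certificate; certbot's nginx plugin and the domain are placeholders.
run sudo certbot --nginx -d example.com
```

Set `DRY_RUN=0` only once the printed sequence looks right for your server.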
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00127.warc.gz
|
CC-MAIN-2023-14
| 455
| 5
|
https://frameboxxindore.com/android/what-is-crontab-ubuntu.html
|
code
|
A crontab file is a simple text file containing a list of commands meant to be run at specified times. … The commands in the crontab file (and their run times) are checked by the cron daemon, which executes them in the system background. Each user (including root) has a crontab file.
What is the use of crontab?
The crontab is a list of commands that you want to run on a regular schedule, and also the name of the command used to manage that list. Crontab stands for “cron table,” because it uses the job scheduler cron to execute tasks; cron itself is named after “chronos,” the Greek word for time.
How does crontab work in Ubuntu?
The following steps to be followed to set up a cron job in Ubuntu:
- Connect to server and update the system: …
- Check if cron package is installed: …
- If cron is not installed, install the cron package on Ubuntu: …
- Verify if cron service is running: …
- Configure cron job on ubuntu:
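On a systemd-based Ubuntu host, the checklist above might translate to commands like these. They are collected into a variable and printed rather than executed, so they can be reviewed first; the package and service names are assumed to be the Ubuntu defaults.

```shell
# Sketch of the setup checklist; review before running any of it.
STEPS='
sudo apt-get update && sudo apt-get -y upgrade   # 1. update the system
dpkg -s cron                                     # 2. check if cron is installed
sudo apt-get install -y cron                     # 3. install it if missing
systemctl status cron --no-pager                 # 4. verify the service is running
crontab -e                                       # 5. add your job in the editor
'
printf '%s' "$STEPS"
```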
Why is crontab bad?
The problem is that they were using the wrong tool. Cron is good for simple tasks that run rarely. … Some warning signs that a cron job will overrun itself: If it has any dependencies on other machines, chances are one of them will be down or slow and the job will take an unexpectedly long time to run.
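One common guard against a job overrunning itself is `flock(1)` from util-linux (assuming it is installed, as it is on most Linux systems): the lock guarantees only one copy runs at a time, and `-n` makes a late-starting copy exit immediately instead of queueing up.

```shell
# Wrap the job in flock so overlapping runs are impossible.
# The lock path and job command are placeholders.
LOCKFILE=/tmp/demo-job.lock
flock -n "$LOCKFILE" sh -c 'echo "job ran"' || echo "skipped: previous run still active"
# In a crontab, the same idea looks like:
#   */5 * * * * flock -n /tmp/myjob.lock /usr/local/bin/myjob.sh
```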
What is a crontab file and what is it used for?
crontab files (cron tables) tell cron what to run and when to run it; user crontabs are stored in /var/spool/cron, with the crontab name matching the username. The administrators’ files are kept in /etc/crontab, and there is an /etc/cron.d directory that programs can use to store their own schedule files.
How do I see crontab list?
To verify that a crontab file exists for a user, use the ls -l command in the /var/spool/cron/crontabs directory. For example, the following display shows that crontab files exist for users smith and jones. Verify the contents of user’s crontab file by using crontab -l as described in “How to Display a crontab File”.
How do I know if crontab is working?
To verify whether this job was executed successfully, check the /var/log/cron file, which contains information about all the cron jobs executed on your system. An entry for john’s cron job there confirms that it ran successfully.
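Note that the log location varies by distribution: RHEL-family systems use /var/log/cron, while Debian/Ubuntu send the same lines to /var/log/syslog or the journal. A successful run produces a line shaped like the sample below (the host name, PID, and script path are made up for illustration).

```shell
# RHEL/CentOS log to /var/log/cron; Debian/Ubuntu to /var/log/syslog.
SAMPLE='Jun 18 09:05:01 host CRON[1234]: (john) CMD (/home/john/backup.sh)'
echo "$SAMPLE" | grep -Eo '\(john\) CMD \(.*\)'
# On a real system:
#   grep CRON /var/log/syslog | tail    # Debian/Ubuntu
#   grep john /var/log/cron   | tail    # RHEL/CentOS
```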
How do I start cron daemon?
Commands for RHEL/Fedora/CentOS/Scientific Linux user
- Start cron service. To start the cron service, use: /etc/init.d/crond start. …
- Stop cron service. To stop the cron service, use: /etc/init.d/crond stop. …
- Restart cron service. To restart the cron service, use: /etc/init.d/crond restart.
How do I use crontab?
How to Create or Edit a crontab File
- Create a new crontab file, or edit an existing file. # crontab -e [ username ] …
- Add command lines to the crontab file. Follow the syntax described in Syntax of crontab File Entries. …
- Verify your crontab file changes. # crontab -l [ username ]
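Each non-comment line in the file follows the same shape: five time fields, then the command. A sketch (the schedule, script path, and log path are placeholders):

```shell
# A crontab entry is five time fields followed by the command to run:
#   minute  hour  day-of-month  month  day-of-week  command
ENTRY='30 2 * * 1-5 /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1'
echo "$ENTRY"    # 02:30 on weekdays, output appended to a log file
# Sanity-check the shape: five space-separated time fields, then a command.
echo "$ENTRY" | grep -Eq '^([^ ]+ ){5}.+' && echo "entry shape looks valid"
```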
How do I know if a cron job is successful in Ubuntu?
4 Answers. If you want to know if it’s running you can do something like sudo systemctl status cron or ps aux | grep cron.
Is crontab expensive?
2 Answers. Are cron jobs heavy and expensive processes that consume a lot of resources? Not unless you make them like that. The cron process itself is very lightweight.
Is running a cron job every minute bad?
“Cron” will run your job every 1 minute (maximum). This carries some overhead of starting a new process, loading data files etc. However, starting a new process will avoid memory leaks (because when the old process exits, it releases any leaked resources). So there is a performance / robustness trade-off.
Is cron job safe?
2 Answers. In essence it’s secure, but also it is another way for an attacker to, once compromised the system, make some backdoor persistent and/or auto-open it anytime you close it. You can use the files /etc/cron.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00067.warc.gz
|
CC-MAIN-2022-49
| 3,878
| 36
|
https://feedback.dokmee.com/home/idea/27/ocr-and-regular-expressions
|
code
|
OCR and Regular Expressions
Most OCR data is not useful and might slow down full-text search. Consider adding options for extracting specific values from OCR data using regular expressions. These values can be used to index the document, and the rest of the OCR data can be ignored, which will minimize the size of the Dokmee database.
For example, in my case I know that a customer account number can only be 9 digits and always starts with "10", so when scanning a subscription form, the account number can be identified easily with regular expression.
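The account-number rule described above is a one-line regular expression. A minimal sketch using `grep -E` (the sample text is made up, and `\b` word boundaries are a GNU grep extension assumed to be available; Dokmee itself is not involved here):

```shell
# Extract a 9-digit account number that starts with "10" from raw OCR text.
# \b anchors keep us from matching inside a longer digit run.
OCR_TEXT='Subscription form ... Acct No: 102345678 Date: 2022-08-19'
echo "$OCR_TEXT" | grep -oE '\b10[0-9]{7}\b'
```

The `{7}` quantifier plus the literal `10` prefix gives exactly 9 digits, and the word boundaries reject a 10-digit run like `1023456789`.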
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573849.97/warc/CC-MAIN-20220819222115-20220820012115-00185.warc.gz
|
CC-MAIN-2022-33
| 552
| 3
|
https://www.datasciencelearner.com/data-science-trend/cloud-platform-machine-learning-as-a-service/
|
code
|
Most of the apps you use in day-to-day life are machine-learning enabled. It is really difficult to deploy and manage machine learning applications in your own environment; ML-based applications in particular are quite resource-consuming in terms of hardware requirements, and in many AI companies the majority of people work to support the infrastructure behind such applications. To overcome this problem we have good news: there are cloud-based solutions on the market where you can run and deploy your ML-based application without any headache of managing servers, and you often even get a largely pre-prepared code base. This article on the top 5 cloud platforms offering Machine Learning as a Service is full of relevant information on this interesting topic.
Top 5 Cloud Platforms for Machine Learning as a Service
Cloud platforms that provide ML as a service offer the hardware to run your code together with its dependencies. Suppose you build an intelligent app on top of TensorFlow and some other Python dependencies: you can run it directly on one of these cloud platforms. These platforms are scalable as well; you pay according to your usage, and when you need more capacity they handle that too. Before starting, I would like to ask you not to take the order as a ranking; it is really difficult to rank them because all five are excellent.
Amazon keeps delivering simpler technical solutions for complex things, and machine learning is no exception. It is really easy to create a machine learning model using Amazon Machine Learning: they provide a very simple API to train models and predict on your data. The pricing model for the AWS machine learning services is also quite interesting, since you only pay based on usage. AWS additionally provides other infrastructure solutions, like S3 and DynamoDB, which make development quite easy.
As I said from the very beginning, it is no lesser than AWS's ML services or anyone else's; putting it second does not mean Microsoft Azure Machine Learning Studio has fewer features than Amazon's services for ML. I found its drag-and-drop interface the most user-friendly among Machine Learning as a Service providers, since drag and drop reduces the need for coding expertise in data science.
When it comes to cloud and AI, we should not underestimate Google, which already provides many AI-based solutions. Google's Cloud Machine Learning Engine is an amazing Machine Learning as a Service offering in terms of cost and performance. Many of the open-source libraries in data science, like TensorFlow, are provided and backed by Google itself, and you can leverage them in Cloud Machine Learning Engine.
IBM Watson is one of the most popular cloud solution providers in the corporate world. I especially like their text analytics and NLP solutions, although that is only my preference; I found chatbot development with IBM Watson very accurate and easy.
BigML is not a big name among cloud or platform-as-a-service providers, but it is really effective and specific to machine learning. I think you should invest some time in reading about its features and functionality: it is cost-effective and performance-oriented, and it supports multiple cross-functional data sources as well.
We are living in a time where every five years in technology brings a new era. AI is capturing most knowledge-based jobs, and Machine Learning as a Service is another initiative that will accelerate this process.
The intent of this article is to give you an overview of Machine Learning as a Service and the available options. I have tried to describe them in very few words, which is why I did not mention pricing and other details here; I recommend you visit the providers' websites to mine more detailed information. I hope you like this article. In case you have other suggestions for Machine Learning as a Service options, please comment.
Data Science Learner Team
Join our list
Subscribe to our mailing list and get interesting stuff and updates to your email inbox.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816820.63/warc/CC-MAIN-20240413144933-20240413174933-00686.warc.gz
|
CC-MAIN-2024-18
| 4,157
| 13
|
https://windowsphone.stackexchange.com/tags/autocorrect/hot
|
code
|
If you want to prevent auto-correct just once, tap the text box just after the word you're writing and that should cancel auto-correct.
If you want to undo an auto-correct you just allowed by mistake, tap on the word that changed and the first suggestion should be your original word.
Go to settings > Keyboard > typing settings > tap language press to change > uncheck correct misspelt words.
Now auto-correct will be turned off entirely. There doesn't seem to be a way to turn off auto-correct for just one word unless you add it to the dictionary.
Unfortunately it seems that there isn't any "standard" way to reset the dictionary in Windows 10 Mobile. Hopefully this will be added back in soon. I don't want to try this for fear it will actually work, but you could try adding a different keyboard, removing your default one, then reinstalling it.
UPDATE: A post recently published by Microsoft states that ...
Go to Settings > Time & Language > Keyboard and tap on your keyboard in the list of keyboards. There you will find all autocorrect options. I believe "Correct misspelled words" is the setting you want to turn off.
Here's a related question.
It's not the same on Windows 10 Mobile but here's what I found:
Selecting the word after it was auto-corrected suggests my spelling. After choosing my spelling, the force-correct for that word appears to be disabled for the current email. I hope it's the same for other document types.
Tapping right after the word, before it's ...
Not possible in this release, sorry.
If you want to send feedback to Microsoft about this, I've listed a number of feedback routes for wp on this page - http://dfwiki.devfish.net/technology.Microsoft-feedback-routes.ashx .
Windows 10 Mobile sometimes deviates from the settings of WP8.1 and it is very common for Bluetooth as well. What you can do is:
Update your phone to the latest build of OS.
Make sure that keyboard stuff for your selected language is downloaded. If not go to language settings and add that keyboard.
Long tap the lower right toggle key of keyboard then go to ...
You can change the word's case by selecting it and then press the SHIFT key on your keyboard. So let your device write "LinkedIn", select the word and change the case with SHIFT until you get "linkedin". I often use it on my Lumia 1020 because it always suggests "Smartphone" instead of "smartphone".
I have the same problem; it's so annoying. I don't want to turn off autocorrect, I just want it to stop autocorrecting correct words to stupid words, e.g. long to Lon, if to I.
Just typing normally, not WordFlow.
Language is English (New Zealand)
Keyboard is English (UK)
I've just reset suggestions under keyboard\advanced, and retyped the same message (in ...
One workaround is to use accented letters where there should be any. For example, replacing the u in F*ck with ū results in a word which can be added to the dictionary and will happily be suggested by the keyboard.
No, currently it is not possible to change the SIP (Software Input Panel, or on screen keyboard) in any way on Windows Phone 7. This means: no Swype, unless Microsoft adds it (or a way to customize SIPs) to any next version.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540529516.84/warc/CC-MAIN-20191210233444-20191211021444-00311.warc.gz
|
CC-MAIN-2019-51
| 3,177
| 25
|
http://web2.sys-con.com/node/2448212
|
code
|
By PR Newswire
November 15, 2012 08:01 AM EST
REDWOOD CITY, Calif., Nov. 15, 2012 /PRNewswire/ -- Many organizations developing Big Data applications first focus on trying to control, manage and secure the burgeoning volumes of data being generated. However, that step just scratches the surface of unlocking the data's ability to power competitive advantage. It is only when these multi-structured data stores become accessible to search and discovery that the information is transformed into corporate assets from which business insights can be derived and true value can be delivered to the organization. To develop these insights, robust search capability must be built into Big Data applications from the beginning, not added on as an after-thought.
LucidWorks, the trusted name in Search, Discovery and Analytics, today announced the general availability of LucidWorks Big Data™, an application development platform that integrates search capabilities into the foundational layer of Big Data implementations. Built on a foundation of key Apache open source projects, LucidWorks Big Data enables organizations to quickly uncover, access and evaluate large volumes of previously dark data in order to make more informed, better business decisions. Using LucidWorks Big Data, organizations have been able to attain insights that were previously locked away in their vast data stores, honing their competitive edge and slashing the time it takes to meet their organizations' goals.
LucidWorks Big Data Makes an Impact
The release of LucidWorks Big Data follows a comprehensive and highly collaborative beta program through which the product's integrations, scalability, usability and APIs were rigorously tested.
"Computing for Disasters is an initiative that has the potential to revolutionize our nation's preparedness and resilience in the face of future disasters by adopting a computational perspective to fundamental scientific, engineering, and social barriers to disaster management and related research. To collect, manage, and analyze a diverse range of data sources takes a comprehensive big data architecture that offers a powerful search engine at its core. We made the decision to take advantage of the LucidWorks Big Data platform because it offers key capabilities in a tightly integrated solution."
- Dr. Edward Fox, Professor, Virginia Tech Department of Computer Science
"Bright Planet is a pioneer in Deep Web Intelligence, offering our customers the ability to perform deep harvesting of public information that lies beneath the surface of the web. Our customers span across both public and private sectors. We made the decision to work with the LucidWorks Big Data platform because of its ability to seamlessly and quickly gather large amounts of information from a variety of different sources (in addition to the web) – and then offer it to our patented deep harvesting search technology. We believe that the combined capability will offer valued solutions to organizations across both private and public sectors."
- Steve Pederson, CEO and Chairman, BrightPlanet Corporation
"At OpenSource Connections, we spend a lot of time building infrastructure around Solr in order to continually enhance enterprise search capabilities. LucidWorks has all of that right out-of-the-box. One of the killer features for me is the ability to use any of the LucidWorks Search connectors to ingest data into a cluster, perform analytics on it, and use those results to improve search and data discovery. I've spent months building that sort of thing from scratch, and in LucidWorks Big Data I can do it in about five service calls."
- Scott Stults, Founder and Solutions Architect, OpenSource Connections
With the general availability of LucidWorks Big Data, organizations can now utilize a single platform for their Big Data search, discovery and analytics needs. Designed to be ready out-of-the-box, LucidWorks Big Data is the industry's only solution that combines the power of multiple Apache open source projects, including Hadoop, Mahout, Hive and Lucene/Solr, to provide search, machine learning, recommendation engines and analytics for structured and unstructured content in one complete solution available in the cloud, on premise or as a hybrid solution.
The LucidWorks Big Data platform includes all of the necessary open source components, pre-integrated and certified, as indicated in this diagram. LucidWorks equips technologists and business users with the ability to initially pilot Big Data projects on premise or in the cloud. This means that organizations can avoid the staggering overhead costs and long lead times associated with infrastructure and application development lifecycles while assessing product fit.
LucidWorks Big Data is the only complete development platform that includes:
- A unified development platform for developing Big Data applications
- A certified and tightly integrated open source stack: Hadoop, Lucene/Solr, Mahout, NLP, Hive
- Single uniform REST API
- Out-of-the-box provisioning – cloud or on premise
- Pre-tuned software by open source industry experts
"Working closely with our beta customers, we've witnessed the significant business value that they've achieved through their LucidWorks Big Data projects," said Paul Doscher, president and CEO of LucidWorks. "LucidWorks Big Data helps companies leap forward by uncovering trends and insights they never would have been able to leverage previously. Whether it's growing revenue, expanding into new markets or increasing customer satisfaction, LucidWorks Big Data helps companies achieve their business goals by extracting, analyzing and quickly acting on critical operational information from their ever-compounding collection of data."
LucidWorks Big Data will be available for download by mid-December. To sign up for notification, visit http://www.lucidworks.com/products/lucidworks-big-data. To learn more about LucidWorks Big Data, please visit www.lucidworks.com, email [email protected] or call (650) 353-4057.
- Watch the future of search unfold on the LucidWorks blog
- Follow LucidWorks on Twitter @LucidImagineer and Facebook
- Learn how leading companies are benefiting from LucidWorks in these Lucene Revolution videos and presentations
- Read more in this whitepaper on Computing for Disasters
About LucidWorks (Formerly Lucid Imagination)
LucidWorks is the only company that delivers enterprise-grade search development platforms built on the power of Apache Lucene/Solr open source search. Out of the 37 Core Committers to the Apache Lucene/Solr project, eight individuals work for LucidWorks, making the company the largest supporter of open source search in the industry. Customers include AT&T, Sears, Ford, Verizon, Cisco, Zappos, Raytheon, The Guardian, The Smithsonian Institution, Salesforce.com, The Motley Fool, Qualcomm, Taser, eHarmony and many other household names around the world. LucidWorks' investors include Shasta Ventures, Granite Ventures, Walden International and In-Q-Tel. Learn more about the company at www.lucidworks.com.
Bit6 today issued a challenge to the technology community implementing Web Real Time Communication (WebRTC). To leap beyond WebRTC’s significant limitations and fully leverage its underlying value to accelerate innovation, application developers need to consider the entire communications ecosystem.
Nov. 24, 2014 12:00 PM EST Reads: 1,324
The definition of IoT is not new, in fact it’s been around for over a decade. What has changed is the public's awareness that the technology we use on a daily basis has caught up on the vision of an always on, always connected world. If you look into the details of what comprises the IoT, you’ll see that it includes everything from cloud computing, Big Data analytics, “Things,” Web communication, applications, network, storage, etc. It is essentially including everything connected online from hardware to software, or as we like to say, it’s an Internet of many different things. The difference ...
Nov. 24, 2014 11:00 AM EST Reads: 1,429
Cloud Expo 2014 TV commercials will feature @ThingsExpo, which was launched in June, 2014 at New York City's Javits Center as the largest 'Internet of Things' event in the world.
Nov. 24, 2014 09:00 AM EST Reads: 1,484
SYS-CON Events announced today that Windstream, a leading provider of advanced network and cloud communications, has been named “Silver Sponsor” of SYS-CON's 16th International Cloud Expo®, which will take place on June 9–11, 2015, at the Javits Center in New York, NY. Windstream (Nasdaq: WIN), a FORTUNE 500 and S&P 500 company, is a leading provider of advanced network communications, including cloud computing and managed services, to businesses nationwide. The company also offers broadband, phone and digital TV services to consumers primarily in rural areas.
Nov. 23, 2014 07:30 PM EST Reads: 1,716
"There is a natural synchronization between the business models, the IoT is there to support ,” explained Brendan O'Brien, Co-founder and Chief Architect of Aria Systems, in this SYS-CON.tv interview at the 15th International Cloud Expo®, held Nov 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Nov. 23, 2014 12:00 PM EST Reads: 1,646
The major cloud platforms defy a simple, side-by-side analysis. Each of the major IaaS public-cloud platforms offers their own unique strengths and functionality. Options for on-site private cloud are diverse as well, and must be designed and deployed while taking existing legacy architecture and infrastructure into account. Then the reality is that most enterprises are embarking on a hybrid cloud strategy and programs. In this Power Panel at 15th Cloud Expo (http://www.CloudComputingExpo.com), moderated by Ashar Baig, Research Director, Cloud, at Gigaom Research, Nate Gordon, Director of T...
Nov. 23, 2014 07:45 AM EST Reads: 1,488
An entirely new security model is needed for the Internet of Things, or is it? Can we save some old and tested controls for this new and different environment? In his session at @ThingsExpo, New York's at the Javits Center, Davi Ottenheimer, EMC Senior Director of Trust, reviewed hands-on lessons with IoT devices and reveal a new risk balance you might not expect. Davi Ottenheimer, EMC Senior Director of Trust, has more than nineteen years' experience managing global security operations and assessments, including a decade of leading incident response and digital forensics. He is co-author of t...
Nov. 22, 2014 05:30 PM EST Reads: 1,334
ARMONK, N.Y., Nov. 20, 2014 /PRNewswire/ -- IBM (NYSE: IBM) today announced that it is bringing a greater level of control, security and flexibility to cloud-based application development and delivery with a single-tenant version of Bluemix, IBM's platform-as-a-service. The new platform enables developers to build ap...
Nov. 22, 2014 05:30 PM EST Reads: 1,479
Explosive growth in connected devices. Enormous amounts of data for collection and analysis. Critical use of data for split-second decision making and actionable information. All three are factors in making the Internet of Things a reality. Yet, any one factor would have an IT organization pondering its infrastructure strategy. How should your organization enhance its IT framework to enable an Internet of Things implementation? In his session at Internet of @ThingsExpo, James Kirkland, Chief Architect for the Internet of Things and Intelligent Systems at Red Hat, described how to revolutioniz...
Nov. 21, 2014 09:15 PM EST Reads: 1,402
The security devil is always in the details of the attack: the ones you've endured, the ones you prepare yourself to fend off, and the ones that, you fear, will catch you completely unaware and defenseless. The Internet of Things (IoT) is nothing if not an endless proliferation of details. It's the vision of a world in which continuous Internet connectivity and addressability is embedded into a growing range of human artifacts, into the natural world, and even into our smartphones, appliances, and physical persons. In the IoT vision, every new "thing" - sensor, actuator, data source, data con...
Nov. 21, 2014 08:00 PM EST Reads: 1,405
Technology is enabling a new approach to collecting and using data. This approach, commonly referred to as the "Internet of Things" (IoT), enables businesses to use real-time data from all sorts of things including machines, devices and sensors to make better decisions, improve customer service, and lower the risk in the creation of new revenue opportunities. In his General Session at Internet of @ThingsExpo, Dave Wagstaff, Vice President and Chief Architect at BSQUARE Corporation, discuss the real benefits to focus on, how to understand the requirements of a successful solution, the flow of ...
Nov. 21, 2014 08:00 PM EST Reads: 1,460
"BSQUARE is in the business of selling software solutions for smart connected devices. It's obvious that IoT has moved from being a technology to being a fundamental part of business, and in the last 18 months people have said let's figure out how to do it and let's put some focus on it, " explained Dave Wagstaff, VP & Chief Architect, at BSQUARE Corporation, in this SYS-CON.tv interview at @ThingsExpo, held Nov 4-6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Nov. 21, 2014 07:00 PM EST Reads: 1,311
Focused on this fast-growing market’s needs, Vitesse Semiconductor Corporation (Nasdaq: VTSS), a leading provider of IC solutions to advance "Ethernet Everywhere" in Carrier, Enterprise and Internet of Things (IoT) networks, introduced its IStaX™ software (VSC6815SDK), a robust protocol stack to simplify deployment and management of Industrial-IoT network applications such as Industrial Ethernet switching, surveillance, video distribution, LCD signage, intelligent sensors, and metering equipment. Leveraging technologies proven in the Carrier and Enterprise markets, IStaX is designed to work ac...
Nov. 20, 2014 09:15 PM EST Reads: 1,389
C-Labs LLC, a leading provider of remote and mobile access for the Internet of Things (IoT), announced the appointment of John Traynor to the position of chief operating officer. Previously a strategic advisor to the firm, Mr. Traynor will now oversee sales, marketing, finance, and operations. Mr. Traynor is based out of the C-Labs office in Redmond, Washington. He reports to Chris Muench, Chief Executive Officer. Mr. Traynor brings valuable business leadership and technology industry expertise to C-Labs. With over 30 years' experience in the high-tech sector, John Traynor has held numerous...
Nov. 20, 2014 06:00 PM EST Reads: 1,349
The 3rd International @ThingsExpo, co-located with the 16th International Cloud Expo - to be held June 9-11, 2015, at the Javits Center in New York City, NY - announces that it is now accepting Keynote Proposals. The Internet of Things (IoT) is the most profound change in personal and enterprise IT since the creation of the Worldwide Web more than 20 years ago. All major researchers estimate there will be tens of billions devices - computers, smartphones, tablets, and sensors - connected to the Internet by 2020. This number will continue to grow at a rapid pace for the next several decades.
Nov. 20, 2014 01:00 PM EST Reads: 1,593
The Internet of Things is not new. Historically, smart businesses have used its basic concept of leveraging data to drive better decision making and have capitalized on those insights to realize additional revenue opportunities. So, what has changed to make the Internet of Things one of the hottest topics in tech? In his session at @ThingsExpo, Chris Gray, Director, Embedded and Internet of Things, discussed the underlying factors that are driving the economics of intelligent systems. Discover how hardware commoditization, the ubiquitous nature of connectivity, and the emergence of Big Data a...
Nov. 20, 2014 12:30 PM EST Reads: 1,805
Almost everyone sees the potential of the Internet of Things, but how can businesses truly unlock that potential? The key will be in the ability to discover business insight in the midst of an ocean of Big Data generated from billions of embedded devices via Systems of Discovery. Businesses will also need to ensure that they can sustain that insight by leveraging the cloud for global reach, scale and elasticity.
Nov. 18, 2014 09:00 PM EST Reads: 2,025
SYS-CON Events announced today that IDenticard will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. IDenticard™ is the security division of Brady Corp (NYSE: BRC), a $1.5 billion manufacturer of identification products. We have small-company values with the strength and stability of a major corporation. IDenticard offers local sales, support and service to our customers across the United States and Canada. Our partner network encompasses some 300 of the world's leading systems integrators and security s...
Nov. 18, 2014 08:15 PM EST Reads: 1,584
IoT is still a vague buzzword for many people. In his session at @ThingsExpo, Mike Kavis, Vice President & Principal Cloud Architect at Cloud Technology Partners, discussed the business value of IoT that goes far beyond the general public's perception that IoT is all about wearables and home consumer services. He also discussed how IoT is perceived by investors and how venture capitalists access this space. Other topics discussed were barriers to success, what is new, what is old, and what the future may hold. Mike Kavis is Vice President & Principal Cloud Architect at Cloud Technology Pa...
Nov. 18, 2014 01:30 PM EST Reads: 2,016
Cloud Expo 2014 TV commercials will feature @ThingsExpo, which was launched in June, 2014 at New York City's Javits Center as the largest 'Internet of Things' event in the world. The next @ThingsExpo will take place November 4-6, 2014, at the Santa Clara Convention Center, in Santa Clara, California. Since its launch in 2008, Cloud Expo TV commercials have aired on CNBC, Fox News Network, and Bloomberg TV. Please enjoy our 2014 commercial.
Nov. 13, 2014 05:00 AM EST Reads: 3,551
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380948.74/warc/CC-MAIN-20141119123300-00101-ip-10-235-23-156.ec2.internal.warc.gz
|
CC-MAIN-2014-49
| 18,175
| 68
|
http://tsompanidis.com/utorrent-bitcoin-malware-7110.php
|
code
|
Cryptokittieshacerse rico con ethereum33 comments
Cgminer config 7950 litecoin
You will receive an email shortly. Here at Walmart.com, we are committed to protecting your privacy. Your email address will never be sold or distributed to a third party for any reason. Please take a moment to review our Privacy Policy.
Due to the high volume of feedback, we are unable to respond to individual comments..
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541318556.99/warc/CC-MAIN-20191216065654-20191216093654-00148.warc.gz
|
CC-MAIN-2019-51
| 519
| 4
|
https://digitalmediaphile.com/index.php/category/microsoft/
|
code
|
If you’re a hard core Windows Insider, you’ll want to be one of the first to know when new Insider Builds are available for download and corresponding blog posts go up. You can always watch @donasarkars Twitter stream (and check the hints that builds are coming in images she posts), but if you have Hue or LIFX connected bulbs, you can use IFTTT to set up an Applet (used to be called a recipe) to get a visual alert.
While Microsoft’s response to the SIMPLO issue has been restricted to “working on a possible software fix”, customers are starting to report that even when plugged in, their SP3 tablets shut down. In essence, they can’t use their Surface Pro 3’s at all. This seems to happen when the amount of usable battery fully charged falls below a certain point, and as the days continue without a fix, more and more customers will have unusable devices. I don’t see that they will be able to keep their devices running long enough to even apply a software fix, should one actually become available. These customers are trapped. Microsoft won’t swap them out, and some are paying the usurious $450 out of warranty exchange fee. Note that Microsoft committed to a $200 battery replacement program on a Reddit AMA https://www.reddit.com/r/IAmA/comments/26m9cu/we_are_panos_panay_and_the_surface_team_at/chsei5u but has refused to honor this or even comment on it. (And as an aside, Apple charges $129 to replace an out of warranty battery.)
Microsoft told customers in the same Reddit AMA https://www.reddit.com/r/IAmA/comments/26m9cu/we_are_panos_panay_and_the_surface_team_at/chse7pn that “the battery can get charged daily (5 days a week) for over 4.5 years and still maintain 80% capacity”. Again, customers are responding in the thread that they can’t even use their devices while connected or docked.
I checked my sent email and note that as a Community Forum Moderator that I brought it to the attention of Microsoft on March 3, 2016. And a couple of times thereafter. I saw the trending that early.
And as of Saturday, March 6, afflicted Microsoft customers have not had a single update on the situation since the initial “we think we can fix in software and are working on a fix”.
Microsoft is "suspending" emails (because of the Canadian SPAM law effective 7/1). I found these REALLY useful. I’m sure others did as well.
Notice to IT professionals:
As of July 1, 2014, due to changing governmental policies concerning
the issuance of automated electronic messaging, Microsoft is
suspending the use of email notifications that announce the following:
* Security bulletin advance notifications
* Security bulletin summaries
* New security advisories and bulletins
* Major and minor revisions to security advisories and bulletins
In lieu of email notifications, you can subscribe to one or more of
the RSS feeds described on the Security TechCenter website.
For more information, or to sign up for an RSS feed, visit the
Microsoft Technical Security Notifications webpage at
So then I looked at the page referenced above. My "quick and dirty" very basic ‘Security Notifications from Microsoft Feed Reader’ app is now available in the Windows Store. http://apps.microsoft.com/windows/app/security-notifications-from/f5459c09-6233-4100-bfe1-d198111fc30b
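The switch from email to RSS means a reader app only has to fetch the feed and walk its `<item>` entries. A minimal sketch in Python using only the standard library; the feed content below is made up for illustration, and the real feed URLs are the ones listed on the Microsoft Technical Security Notifications page:

```python
import xml.etree.ElementTree as ET

# Hypothetical feed content for illustration only.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Security Advisories</title>
    <item><title>MS14-001</title><link>https://example.com/ms14-001</link></item>
    <item><title>MS14-002</title><link>https://example.com/ms14-002</link></item>
  </channel>
</rss>"""

def advisory_titles(rss_xml):
    """Return the title of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(advisory_titles(SAMPLE_FEED))  # ['MS14-001', 'MS14-002']
```

A real reader would fetch the feed URL on a schedule and diff the item list against what it has already shown.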
I hope that Microsoft reinstates the emails after they figure out how to exclude Canadian customers who don’t want to receive this important information.
Things always blow up on a holiday weekend. Just when I thought the replacement HP Envy was the machine I’d be using for the next (hopefully) 5 years or so, trouble came knocking at my door. Yesterday morning, I had breakfast, hibernated the machine. Took a shower, got dressed. Maybe 45 minutes elapsed time. When I turned on the machine again I saw this ugly blue UEFI you’re totally screwed screen.
Burnt a Ubuntu DVD to run Ubuntu FROM DVD on my server. It seemed to indicate a hibernate issue (?).
I sent the blue screen image to my new contact (the Director, Signature Experience Labs) from Microsoft Store within minutes of seeing it appear. He called me. Yup, on a Sunday, a holiday weekend, 7am Redmond time, within 15 minutes of my emailing the sad news. While we were talking, I booted to advanced recovery and tried to run some powercfg cmds to delete the hibernate file, but they are apparently no longer supported. However, when I exited, the machine surprised me and BOOTED into Windows. I immediately started copying my data off (and spent the balance of the day yesterday getting my data copied off the HP onto two separate external devices to have redundancy).
I had met with him personally (the Microsoft Director, Signature Experience Labs), just last Wednesday when he came to collect the two machines that were “rejects” (one was the original bad hardware HP Envy 17 and the second was a mis-shipped 15-inch laptop from the same family). During lunch last week, we were discussing various laptops and I had mentioned that I had also been looking at the ASUS ROG 17 incher with the same specs (except for the touch screen), but since there weren’t any in the stores (mail order only, I guess), I wasn’t all that keen on getting something sight unseen. Even Best Buy didn’t have any from that family to check out (just to look, feel, etc., not to buy from them). We discussed whether there is an issue with this HP ENVY TouchSmart 17-j141nr model machine (they don’t have enough data on returns) which is now my feeling. He is replacing this HP machine with the ASUS ROG-G750JS-DS71. (We agreed that two lemons are more than enough and trying to see if a third unit would be disaster-free wasn’t a good game plan).
The reviews of the ASUS ROG G750JS-DS71 everywhere are pretty awesome. I had looked at this computer before purchasing the HP but couldn’t justify the additional $$ for the ASUS. I’m going to have to go through all the setup stuff again, upgrade to 8.1 update 1, update to the Windows Media Center Pro feature pack, install my apps, copy in my data. Will probably take two days plus to set up the ASUS. Which I am hoping will arrive this week. I’ll hopefully have shipping details tomorrow.
Subtitle: A Signature PC should not behave this way or require this amount of manipulation. Not to mention that an average person would never be able to get through this.
A very old (in computer years) computer from the XP years that I’d been updating (hard drive, RAM, video card, network card, and of course Windows Operating System) (it was running 8.1 updated completely) finally gave up the ghost. It was an old clunker by today’s standards, and I had been using it for development and testing work. I’d been expecting this and had been doing some research for a few weeks. I’d already decided on a 17 inch laptop as a desktop replacement and I was loathe to order something I couldn’t test first or that came with bloatware, crapware, and things of that ilk. I’ve been to our almost-local Microsoft Store enough times to know that the Signature PCs there were probably my best choice, and decided on an HP Envy 17 that appeared to be “fully loaded”.
The nice man that waited on me showed me a similar (less powerful) model that was on display. I decided that if testing that model proved acceptable, that the higher end model would be as good or better. On the negative side, the sales person was unsure of the specs of the higher end machine. I asked if it had a 5400 or 7200 speed hard drive (he had to look it up). I also asked if the higher end model had Windows 8.1 or Windows 8.1 Pro (the down level demo I was testing did not have Pro) and he replied affirmatively.
The demo seemed to be quite responsive, even on the store’s not so wonderful wireless internet connection. I wasn’t absolutely crazy about the gesture enabled trackpad and mouse functions, but knew I’d be purchasing/using a separate Arc Touch Mouse most of the time and that I could turn off and adjust trackpad functionality. Backlit keyboard was a plus. The keyboard itself is so-so but usable (I still think that no one has equaled the ThinkPad keyboards). After about twenty minutes I decided to pull the trigger and purchase.
The box came out from the back of the store. NOT Windows 8.1 Pro, only Core. I complained. I asked for a discount to offset the need to buy an upgrade key. The assistant store manager said yes to that in seconds (the store manager, who I know, was on vacation) and did some magic with prices to effectively reduce the cost by the amount needed to offset the Pro Pack Upgrade. Home I went, laptop in hand.
Arriving home, I plugged in and spent a couple of hours first applying the Pro with Media Center upgrade and then hitting Windows Update about 8 times until I was fully updated to 8.1, Update 1 (or whatever it is officially called).
I walked away to have lunch.
Problem #1: When I came back, the laptop was displaying an ominous black screen showing Boot Device Not Found Please install an Operating System. Held down the power button, did a few more things, started downloading my apps from the Windows Store. Went back to the desktop. Had two more instances of Boot Device Not Found Please install an Operating System. A web search for “hp envy operating system not found” turns up an ungodly number of hits. Most of which are folks who can’t get their computers to boot at all. That was not my problem. After some thought, I went hunting for a BIOS upgrade. Which I found and applied.
Problem #2: Under heavy network load (over 802.1ac wireless), I had some complete lockups. I went hunting for a driver update on the HP site. Found one for this specific model and applied it.
Problem #3: After performing the Windows 8.1, Update 1 update, Bluetooth went missing. It was completely gone from Device Manager as well. While I couldn’t find anything specific to the model laptop I had, I did find http://h30434.www3.hp.com/t5/Notebook-Operating-Systems-and-Software/Bluetooth-missing-HP-ENVY-15-j026tx-Windows-8-1/td-p/3753454. After some more searching, I figured out that I should try https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=23761. Yes, that worked. Or at least I see Bluetooth in devman and have a functional BT task tray icon.
Problem #4: The HP Assistant Program finds newer drivers than those listed on the product page for this model laptop. It found an even newer Intel Wireless driver for the AC NIC. This suggests that it is a waste of time to use drivers from the product page. HP should be ashamed. If you have drivers, HP folks, keep your site updated.
Problem #5: I installed all my modern apps from synced profile. I then got all the Windows Store Updates for these apps. Skype and Bing News won’t install and error with 0x80073cf9. I can install other apps and update other apps. I uninstalled the “old” versions. Now I have none. I ran the PowerShell script to remove windows store apps and these two do NOT appear on the list. It’s not corrupted files, I’ve sfc /scannow’d and tried the standard wsreset, license sync, and it appears I am not alone. I do have the desktop app and there are other News Apps, so it is not the end of the world.
Problem #6: Locks up when accessing an SDXC card in the media reader slot. I’ve got an external card reader that works, but gee whiz…
I’ve got a thirty-day return window and have resolved most of my issues. I’ve spent the past day and a half installing my apps and so on. I want this to be a keeper. I’m hoping I’m over the hump. HP should be ashamed for sure. And I’m hoping that the Microsoft Store folks learn something from this.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937016.16/warc/CC-MAIN-20180419184909-20180419204909-00017.warc.gz
|
CC-MAIN-2018-17
| 11,668
| 39
|
http://www.netbuilders.org/business/steps-filling-dmca-notice-11834-print.html
|
code
|
Steps in filing a DMCA notice
You might be thinking, when will I ever use this? Well I'm in the process right now so I'll be walking through it myself.
First things first, if you're able to grab ahold of any IPs related to the rip, ban them. (In my case caught the person on live chat).
Secondly, if you know the URL (again in my case the live chat picked up where they uploaded it to), copy it and immediately do a whois search to find out who owns the domain.
Next, copy and paste it into a notepad for later use.
Ping the IP copy the IP address (using mark or whatever) and go back to google find an IP lookup service and look up who owns the IP (most will say the service provider)
Paste the IP into the notepad and of course paste any other info such as any e-mails attached (ex firstname.lastname@example.org)
Now grab a DMCA template, I used one from here which is very well written:
DMCA Notification Template - Copyright Law and SEO - McAnerin International Inc.
Grab the ISP one as it is most appropriate.
Fill it in replacing the example text with your own stuff. Change the dates!
Login to your webmail (use email@example.com or something similar to resemble that you are indeed the owner)
Copy the contents of the document, paste it into the e-mail.
Attach the document to the e-mail and send it to the firstname.lastname@example.org address (whatever it is).
Wait until you get a response, and of course keep checking each day (proxies are a good idea to keep your IP from being blacklisted and should help keep you anonymous when you go for the kill).
If content is removed, keep checking in like 3 days later or so, if content reappears, send second notice just like the first but put in there that it is the second notice and that if it isn't removed you will pursue legal action.
Removed? Good, check weekly still just in case. Not removed? Ask an attorney for legal advice. If the entire thing is international (ex Malaysia or Sweden) then you're pretty much screwed at the moment. Worth a shot though I guess. And of course you can submit one through the mail to Google if you include the search terms that infringing results come up on; haven't tried it yet so no idea.
Example that I used:
hostingworld.co.uk (infringing site)
Web Hosting, VPS, and Dedicated Servers by Host Mist (my site)
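The template-filling step above can be sketched in Python with `string.Template`. The notice wording and every address below are illustrative placeholders (not the McAnerin template itself, and not real hosts):

```python
from string import Template

# Illustrative notice wording only; substitute a properly drafted DMCA template.
NOTICE = Template("""To: $provider_abuse
Subject: DMCA Takedown Notice ($date)

I am the copyright owner of the work located at $original_url.
That work has been copied without authorization to $infringing_url.
I request that the infringing material be removed or disabled promptly.
""")

def fill_notice(**fields):
    """Substitute the details collected from the whois / IP lookup steps."""
    return NOTICE.substitute(fields)

letter = fill_notice(
    provider_abuse="abuse@example-host.com",          # from the whois lookup
    date="2015-11-24",                                # change the dates!
    original_url="http://original-site.example/",
    infringing_url="http://infringing-site.example/stolen-page",
)
print(letter)
```

`Template.substitute` raises `KeyError` if a field is left unfilled, which is a useful guard against sending a notice with a placeholder still in it.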
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445033.85/warc/CC-MAIN-20151124205405-00233-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 2,340
| 20
|
https://vimeo.com/littlegalaxies/videos/sort:date/format:detail
|
code
|
Little Galaxies -Tonight (Official Video)
from Little Galaxies
Added 3 years ago
Debut Album "Patterns" Available Now on iTunes
& All Other Digital…
Here are all of the videos that Little Galaxies has uploaded to Vimeo. Appearances are videos that Little Galaxies has been credited in by others.
More stuff from Little Galaxies
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718285.69/warc/CC-MAIN-20161020183838-00065-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 329
| 7
|
http://www.geekrescue.com/blog/2013/11/19/microsoft-silverlight-vulnerability-leads-to-malware-infections/
|
code
|
Microsoft Silverlight Vulnerability Leads To Malware Infections
Do you have Silverlight installed on your computer? The Microsoft product, similar to Adobe Flash, is used for running internet applications, most notably the streaming video client on Netflix. Subscribers alone account for 40-million Silverlight users worldwide. As Zeljka Zorz of HelpNet Security reports, all of these users are at risk of becoming a victim of a malware attack that exploits a critical vulnerability in Silverlight.
The malware, which could allow remote code execution, finds its way onto your machine when you visit an infected website. This website could be specifically set up by hackers to infect unsuspecting users, it could be a compromised site that’s infecting users without the owner’s knowledge, or a site that allows user-submitted content.
When you land on one of these websites, an Angler exploit kit, which is a tool used by hackers, determines what version of Silverlight you have installed. It determines whether you are vulnerable to an attack and, if so, the malware is downloaded to your computer.
The reason the Angler is needed is because Microsoft has already released a patch that fixes the security flaw being exploited. However, a number of users fail to update and are still using out-dated versions of Silverlight. If you’re using Silverlight, be sure you update to close vulnerabilities that could otherwise lead to a malware infection. If you don’t use Silverlight, but have it installed, you can remove it completely to protect yourself from this attack.
This is another example of why it’s important to keep all applications up to date and install each patch when it’s released. Enabling automatic updates for trusted applications makes this job easier.
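The update check that separates patched installations from vulnerable ones boils down to a numeric version comparison, the same determination the exploit kit makes from the other side. A minimal Python sketch; the build numbers are assumptions for illustration, not the actual patched Silverlight release:

```python
def parse_version(version):
    """'5.1.20125.0' -> (5, 1, 20125, 0), so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed, patched_minimum):
    """True when the installed build predates the first patched build."""
    return parse_version(installed) < parse_version(patched_minimum)

# Hypothetical build numbers for illustration.
print(is_vulnerable("5.1.20125.0", "5.1.20913.0"))  # True: update needed
print(is_vulnerable("5.1.20913.0", "5.1.20913.0"))  # False: up to date
```

Comparing tuples rather than raw strings matters: as strings, "5.1.9" would sort after "5.1.10".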
If you have experienced a malware attack and your computer’s performance is suffering, bring your machine to Geek Rescue or call us at 918-369-4335.
November 19th, 2013
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202510.47/warc/CC-MAIN-20190321092320-20190321114320-00454.warc.gz
|
CC-MAIN-2019-13
| 1,945
| 7
|
https://developers.arcgis.com/android/install.html
|
code
|
Previous releases of this SDK have been in the form of an Eclipse plugin exclusively. Starting with version 10.2, the SDK has been expanded to include the API libraries and developer tools required to build apps.
Developing ArcGIS applications for Android is simplified by a group of Eclipse plugins provided with the SDK, ArcGIS for Android Core and ArcGIS for Android Doc and Samples. These plugins provide a rich set of tools, documentation, and samples to help developers create applications using the ArcGIS Runtime SDK for Android. The instructions below can be used for Eclipse with ADT plugin or the ADT Bundle.
The repository can be installed either by downloading it in Eclipse via a public update site or with the local plugin that comes with the SDK. The workflows for both are offered below.
Download the ArcGIS Android Eclipse plugin
Follow these steps after your Eclipse repository is set, to complete installation:
The ArcGIS Runtime SDK for Android allows easier integration into other developer environments without the support of a developer IDE plugin.
IntelliJ IDEA supports Android development much in the same way the ADT extends Eclipse for Android development. It offers excellent code assistance, integrated logcat, layout preview, and lint. Refer to the IntelliJ IDEA Web Help system to enable Android support. Once you have Android support enabled you can create a New Android Project.
Creating a global library allows you to add ArcGIS module dependency to any valid Android project.
Open your Android manifest file by double-clicking on the AndroidManifest.xml file in your project directory. Add the following elements to your project's manifest below the uses-sdk element and above the application element.
<uses-feature android:glEsVersion="0x00020000" android:required="true" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
Now you are ready to start programming with the ArcGIS Runtime SDK for Android. A great place to start is the hello-world like tutorial, Add a map to your app.
The ArcGIS Runtime SDK for Android contains everything you need to develop ArcGIS Android apps. The contents of the SDK are provided below:
Version 10.2.2 is ready and waiting. Get it now: Download
Enhancements, known limitations, and migration information: At 10.2.2
Get to know the SDK by adding a map to your app: Get started
We post regularly to the ArcGIS blog. Tune in to the android tag or enjoy the full RSS feed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00150-ip-10-147-4-33.ec2.internal.warc.gz
|
CC-MAIN-2014-15
| 2,588
| 15
|
http://www.techrepublic.com/resource-library/whitepapers/develop-your-creative-reputation/
|
code
|
Develop Your Creative Reputation
We know that there is a risk associated with creativity and innovation. It reminds the author of this statement from some unknown guru: "Behold the turtle! He makes progress only when he sticks his neck out." At the same time, the future of your organization will, in large measure, be determined by your ability to innovate and change. The key here is that the creative process needs to be properly employed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447906.82/warc/CC-MAIN-20151124205407-00171-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 442
| 2
|
https://ux.meta.stackexchange.com/users/30068/jessica-yang
|
code
|
Happily employed but open to opportunities to grow in product management and HCI/UX
PM by day, illustrator and other stuff by night. I majored in computer science with a strong side interest in cognitive science. I was a member of the Yale Social Robotics lab where I worked on the migration of intelligent agents between embodiments. I've touched the Voynich Manuscript with my bare hands.
Academic interests include...
- embodied computing/robotics
- the interface layer of human-computer/robot interaction
- computing/robotic contexts + social behavior, social cognition, mental health
- therapeutic and educational uses of computing, interesting interfaces for these purposes
- the impact of culture and context in behavior and cognition
Top network posts
- 86 Why is the "Record" icon always round and usually red?
- 28 Should "Yes, delete it" be red, or green?
- 22 What happens when gamification is poorly done?
- 15 Should there be a space after the copyright symbol ©?
- 5 NLTK regexp tokenizer not playing nice with decimal point in regex
- 5 Is it better to mirror a real-world system for specialized users?
- View more network posts →
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540530857.12/warc/CC-MAIN-20191211103140-20191211131140-00482.warc.gz
|
CC-MAIN-2019-51
| 1,149
| 16
|
https://101apps.co.za/index.php/tag/Android/Page-6.html
|
code
|
App Widgets are mini apps that give users immediate access to important functions of an app.
We mostly use App Widgets to display important information about an app on a device’s Homescreen.
You can create your own app widgets. We’ll show you how.
Keep up with the fashion: Using Styles and Themes in your apps
Styles specify the look and feel of individual views, like text views and buttons.
Styles help you to:
Themes are simply Styles applied to an app or to individual activities.
Progress bars let the user know that the device is busy, for example, downloading a file. It can also show roughly how much longer the download will take.
Progress bars don’t always look the same on all devices.
We’ll show you how to customise your progress bars so that they look the same on all devices
Android touchscreen devices can sense when the user touches the screen. They can also record the movement of the user’s finger across the screen. These movements or strokes are also known as gestures.
You can store a collection of gestures in a file and then match the user’s gesture with those in the file.
Our tutorial app lets the user write their name on the screen using their finger. We then compare this gesture to our saved gestures and display the result in a text view.
Spinners display a list of selectable items.
When first displayed, the Spinner only shows the currently selected item. Touching the Spinner displays the full list of items that the user can choose from.
Our tutorial will show you how to use a Spinner to display a list of images that the user can choose from.
Our tutorial app creates a table containing text views, text fields and a button.
There is no XML layout file as we do everything programmatically.
A Toast message displays the input after the user has filled in the text fields and pressed the button.
Our tutorial app shows you how to use intents and intent filters in your apps.
Here’s a quick overview of what we’ll cover. We’ll show you how to:
Intents are messages that you can pass around between your app components. You can also send them to components in other apps. This enables you to create powerful applications where you can use other app’s components to perform tasks for you, such as playing music, sending email, and taking pictures.
Here are some of the things that you can do with intents:
The Android System also uses intents to signal changes such as low battery, incoming sms messages and phone calls. You can listen for these intent messages in your apps. You can also use intents to pass data.
You can use alarms to trigger events at any time even if your app is not running.
Device sleeping on the job? Here’s how you can manage the device awake state
You can schedule work to be done. Problem is, nothing will happen if the CPU is sleeping!
The solution is to use a wake lock to prevent the CPU from sleeping while it’s doing your work.
You may also want to keep the screen from turning off. Read on and we’ll show you how…
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987779528.82/warc/CC-MAIN-20191021143945-20191021171445-00505.warc.gz
|
CC-MAIN-2019-43
| 3,008
| 29
|
http://blog.opquast.com/post/2012/04/17/Open-data-good-practices
|
code
|
We’re proud to announce that the list is now available for consultation and download. The content has been translated from French to English by Pascal Romain. English version has been reviewed by Tim Davies and Steven Flower.
Now the team has to write, translate and publish a complete sheet (goals, means, and control process) about each criterion. This will be done shortly.
These ‘good practices’ are published under the Creative Commons BY-SA license. You're free to use them as you see fit, even for commercial use. On the other hand, you must preserve attribution: please cite their origin, or even better, add a link to this checklist on Opquast’s website.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607046.17/warc/CC-MAIN-20170522190443-20170522210443-00136.warc.gz
|
CC-MAIN-2017-22
| 676
| 3
|
https://dr.lib.iastate.edu/entities/publication/91193eef-bf0e-40ea-802c-8ef1c782deec
|
code
|
Addressing fluorogenic real-time qPCR inhibition using the novel custom Excel file system ‘FocusField2-6GallupqPCRSet-upTool-001’ to attain consistently high fidelity qPCR reactions
The purpose of this manuscript is to discuss fluorogenic real-time quantitative polymerase chain reaction (qPCR) inhibition and to introduce/define a novel Microsoft Excel-based file system which provides a way to detect and avoid inhibition, and enables investigators to consistently design dynamically sound, truly LOG-linear qPCR reactions very quickly. The qPCR problems this invention solves are universal to all qPCR reactions, and it performs all necessary qPCR set-up calculations in about 52 seconds (using a Pentium 4 processor) for up to seven qPCR targets and seventy-two samples at a time - calculations that commonly take capable investigators days to finish. We have named this custom Excel-based file system “FocusField2-6GallupqPCRSet-upTool-001” (FF2-6-001 qPCR set-up tool), and are in the process of transforming it into professional qPCR set-up software to be made available in 2007. The current prototype is already fully functional.
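The "LOG-linear" property the abstract refers to is conventionally checked via the slope of a standard curve (Ct plotted against log10 of template quantity), from which amplification efficiency follows. A sketch of that standard calculation in Python (this is the textbook formula, not the FF2-6-001 tool's internal logic, which is not described here):

```python
def amplification_efficiency(slope):
    """Efficiency E from the slope of a qPCR standard curve.

    The standard curve plots Ct against log10 of template quantity;
    for a LOG-linear reaction, E = 10**(-1/slope) - 1. A perfectly
    efficient reaction doubles the product each cycle (E = 1.0),
    which corresponds to a slope of roughly -3.32.
    """
    return 10 ** (-1.0 / slope) - 1.0

print(round(amplification_efficiency(-3.32), 3))  # ~1.001, i.e. ~100% efficient
```

Slopes shallower than about -3.32 (e.g. -3.6) indicate sub-100% efficiency, one common symptom of the inhibition the manuscript addresses.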
This article is from Biological Procedures Online 8, no. 1 ( December 2006): 87–153, doi:10.1251/bpo122.
https://www.educowebdesign.com/blog/tags/hiring-web-agency?page=4
Whether building a completely new website or redesigning a previous site, there are countless items to consider. The research firm Clutch recently interviewed our own Marty Vernon on the various choices that accompany the build of a website. Marty discussed several topics regarding what businesses should think about before designing and developing a website.
It's Dangerous To Go Alone. Take This! And by "this" we mean some knowledge that will help you better understand what a website might cost.
https://link.springer.com/article/10.1007/s10551-022-05049-6
Perspectives on the Ethicality of AI-Enabled Recruiting and Selection
We start by reviewing the different perspectives from which AI-enabled recruiting and selection practices are investigated and ethical considerations are articulated.
The first group of papers assessed AI-powered recruiting practices from an ethics theory perspective. We identified three articles that applied a theoretical framework to AI recruiting and thereby provide a theoretical foundation for discussion: First, Simbeck (2019) referred to ethical frameworks from other disciplines, such as medicine, robotics, and AI, and applied them to the HR context. She proposed the transfer of key ethical concepts from the other fields that should be implemented when applying new AI technologies in HR analytics. She identified five key ethical principles: privacy and confidentiality, opportunity to opt out, institutional review, transparency, and respect for the dynamic nature of personal development.
Second, Yarger et al. (2020) referred to feminist thinking and methods, arguing that these should guide the design of AI hiring systems. Feminist approaches shed light on the extent to which algorithms may perpetuate disadvantage for historically marginalized groups when equity is not considered in their design. The authors presented a feminist design justice framework, which includes prompts that commit the architects of AI systems to engage with the design process in ways that support an ethic of equity.
Third, Rąb-Kettler and Lehnervp (2019) assessed AI recruiting from a humanistic perspective, in which people were placed at the center. The authors presented humanistic recruiting as an answer to the current technological developments. They argued that technology and automation can be implemented in a way that improves the experience for both the recruiters and candidates in the process. They concluded that both humanistic insight and sophisticated technology are important to adjust to today’s dynamic reality. Reviewing these three theoretical papers reveals that a detailed assessment of AI recruiting from the standpoint of one of the traditional ethics theories, such as utilitarianism or deontology, and a discussion of potential implications for the hiring practice has not been done yet.
The second and largest category of papers assumed a practice-oriented perspective and focused on implications that are most relevant for managers and corporations. Most of the identified papers fall into this group, the common aim of which was to raise practitioners’ awareness of the strengths and limitations of AI technologies implemented in the recruiting process. From an experience-based perspective, some papers (Florentine, 2016; Polli et al., 2019) underlined the problematic nature of traditional candidate assessment methods and presented the use of AI as a promising alternative; others (Bogen, 2019; Dattner et al., 2019) rather warned of AI-powered hiring practices by raising many yet-unanswered questions about their accuracy, as well as the ethical, legal and privacy implications that they introduce. Furthermore, some papers (Bîgu & Cernea, 2019; Chamorro-Premuzic et al., 2019; Giang, 2018; Mann & O’Neil, 2016) provided practical recommendations for managers on how to ethically implement AI for recruiting, aiming to guide organizations to take the right steps and make the right investments.
The third group of papers looked at AI recruiting from a legal viewpoint. The importance of employment decisions to individuals, as well as to broader society, has led to the design of an extensive legal framework to guide these decisions. For example, in the US, Title VII of the Civil Rights Act protects people from discrimination in any employment decision that would result in disparate treatment or disparate impact. It also assigns liability and legal responsibility to employers to ensure that the tools used do not create such results. However, the identified literature (Bornstein, 2017; Kim, 2017; Kim & Scott, 2018) has claimed that, so far, the law of Title VII lags behind current scientific knowledge and modern business practices: Kim and Scott (2018) discussed that targeted advertising may result in unfair exclusions that are not covered by current law, Bornstein (2017) argued that current regulation does not go far enough and argued for liability when an employer acts with reckless disregard for the consequences of implicit bias in employment decisions, and Kim (2017) claimed that Title VII should be broadened, requiring employers to prove that the data created by their algorithms are accurate and do not discriminate, instead of requiring victims of discrimination to prove its occurrence. We further identified two qualitative analyses that embraced both a legal and a technical perspective, while investigating how bias mitigation methods are used in practice. While Raghavan et al. (2020) evaluated the efforts of AI software vendors to mitigate bias, focusing on the employment laws in the US, Sánchez-Monedero et al. (2020) analyzed three recruiting software vendors from the perspective of UK law, addressing concerns over both discrimination and data protection.
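In US practice, disparate impact is commonly operationalized through the EEOC's "four-fifths rule": a selection rate for a protected group below 80% of the highest group's rate is taken as prima facie evidence of adverse impact. As a minimal sketch with hypothetical numbers (not taken from any cited study), the check reduces to a few lines:

```python
# Adverse-impact ("four-fifths") ratio, as used to operationalize
# disparate impact under the EEOC Uniform Guidelines.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def adverse_impact_ratio(protected_rate, reference_rate):
    """Ratio of the protected group's selection rate to the most-selected
    group's rate. Values below 0.8 flag potential disparate impact."""
    return protected_rate / reference_rate

# Hypothetical screening outcomes: 50 of 200 women vs. 80 of 200 men selected.
rate_women = selection_rate(50, 200)   # 0.25
rate_men = selection_rate(80, 200)     # 0.40
ratio = adverse_impact_ratio(rate_women, rate_men)
flagged = ratio < 0.8                  # potential disparate impact
```

Note that the four-fifths rule is a screening heuristic, not a full legal test; litigation typically also involves statistical-significance analyses.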
Moreover, we identified a group of articles that established ethical considerations on AI recruiting, while taking a technical perspective. Some papers (Chwastek, 2017; Köchling et al., 2020; Lin et al., 2020; Mujtaba & Mahapatra, 2019; Persson, 2016; Williams et al., 2018) explained emerging ethical problems by looking at the mechanisms of algorithms used. Others (Fernández-Martínez & Fernández, 2020; Pena et al., 2020; Vasconcelos et al., 2018) presented technical solutions to implement ethical principles into algorithmic code or design. For instance, Fernández-Martínez and Fernández (2020) found that there is a lack of regulation and a need for external and neutral auditing of the used AI technologies, and consequently, they presented a multi-agent software architecture to support auditing the recruiting processes. Furthermore, Vasconcelos et al. (2018) proposed a computational framework to mitigate discrimination and unfairness caused by bias in AI systems, inspired by epistemological principles. Lastly, one paper (Schumann et al., 2020) outlined several technical challenges for future research in algorithmic hiring that must be overcome to make it fairer and more intelligible.
Covering the field of descriptive ethics, the last category comprises several experimental studies (e.g., Langer et al., 2018; Lee, 2018; van Esch & Black, 2019), as well as a case study (van den Broek et al., 2019) that assessed people’s reactions to AI-powered recruiting practices. A couple of studies compared applicants’ fairness perceptions of AI-enabled interviews vs. traditional interviews with a human recruiter, revealing contrasting findings. Whereas a group of papers (Acikgoz et al., 2020; Lee, 2018; Newman et al., 2020) found that people perceived algorithm-driven decisions as less fair than human-made decisions, another group of papers (Langer et al., 2019a, 2019b, 2020; Suen et al., 2019) found no difference in fairness perception between decisions made by an AI or a human. Other studies (Gelles et al., 2018; Kaibel et al., 2019; Langer et al., 2018; van Esch & Black, 2019) examined different contextual and procedural factors, such as the level of information given to applicants regarding the used AI or the level of computer experience of applicants, and how they affect applicant reactions to the use of AI in hiring.
In summary, this overview attests to the overall heterogeneous perspectives applied to ethical considerations of AI-based recruiting and selection. It also reveals that only a few theoretical articles exist, and that extant literature is rather practitioner oriented.
Underlying Research Topic: AI Applications in the Recruiting and Selection Process
In the following, we provide an overview of AI applications used in the recruiting and selection process and addressed in the identified literature. An understanding of where AI-powered tools and practices are applied can assist in understanding where ethical opportunities and risks may arise. Our review shows that AI-enabled practices are relevant in each stage of the recruiting process and can include different types of AI and algorithms. Table 4 gives an overview of the different AI applications across the recruiting and selection stages: outreach, screening, assessment, and facilitation, which we further expand on below.
Several articles deal with AI technologies applied in the outreach stage, in which businesses try to detect talent and attract applicants. By leveraging algorithms for targeted communication across online platforms and social media or for the automated notification of job seekers, companies can expand their outreach to potential candidates (Bogen, 2019). Furthermore, AI bots are used to identify the pool of active and passive candidates (e.g., via LinkedIn) or to (re-)discover top talents in the pool of former candidates via their internal automated tracking system (ATS) (van Esch & Black, 2019). Sometimes, the challenge is not just finding the right candidates but persuading them to apply via appealing job descriptions. AI software vendors, such as Textio, use AI in the form of text-mining techniques to predict the attractiveness of a job listing based on the hiring outcomes of several millions of job posts. The software thereby scans the job ad for key phrases that will statistically impact its performance. Additionally, a tone meter can determine whether the overall tone of the writing is likely to attract more men or more women and make suggestions on how to improve the inclusiveness of the language used (Lewis, 2018; Yarger et al., 2020). This is how AI can help businesses de-bias the wording of job ads, making them gender neutral to attract a diverse pool of applicants, or customize them for a specific target group (Rąb-Kettler & Lehnervp, 2019).
Notably, most articles that deal with the ethicality of AI recruiting focus on the application of AI technology in an initial resume screening. AI systems are used to filter applicants to derive a shortlist and a ranking of the most promising candidates (Bornstein, 2017; Fernández-Martínez & Fernández, 2020; Vasconcelos et al., 2018). For many years, companies have used traditional algorithms to scan resumes for preselected key words or phrases; however, today’s AI technology goes beyond that. Now, chatbots and resume-parsing tools look for semantic matches and related terms determining a candidate’s qualification. Other tools go even further and use ML to make predictions about a candidate’s future job performance based on signals related to tenure or productivity, or the absence of signals related to tardiness or disciplinary action (Bogen, 2019). Based on the initial screening, algorithms can also suggest the best matching job opening for a given candidate (Rąb-Kettler & Lehnervp, 2019). These screening tools are considered highly efficient to streamline the process, especially for top employers who receive huge numbers of applications for each open position; however, concerns have been raised that highly qualified applicants may be overlooked (Persson, 2016).
Although screening algorithms are not new in practice, there has been a recent trend toward video-interview analysis in recruiting. In such structured video interviews, AI technology replaces a human interviewer and asks the candidate a short set of predetermined questions (Chamorro-Premuzic et al., 2016; Fernández-Martínez & Fernández, 2020). Moreover, the AI technology can not only evaluate the actual responses, but also make use of audio and facial recognition software to analyze additional factors such as the tone of voice, microfacial movements, and emotions to provide insights on certain personality traits and competencies (Köchling et al., 2020; Tambe et al., 2019; van Esch & Black, 2019).
Besides interviews, AI-powered skill tests, simulations, and neuroscience video games are used to assess further qualities, for example, applicants’ risk attitude, planning abilities, persistence or motivation. Thereby, target variables need not be predefined by the company (Giang, 2018; Polli et al., 2019; Raghavan et al., 2020), but ML algorithms can analyze the data of a company’s current top performers and derive which applicant characteristics and skills have been associated with better job performance (Tambe et al., 2019). In this way, data-driven assessment tools have changed talent signals and the criteria by which candidates are evaluated (Chamorro-Premuzic et al., 2016). For example, the software vendor Pymetrics uses ML and psychometric training data based on current top performers to predict an applicant’s fit for a specific role. To this end, first, the top-performing incumbent employees in that role play a series of online games, which are gamified assessments that measure numerous cognitive and social traits. The data collected from these games are then used to establish a “success profile” for the job at hand. Second, the candidates applying to the job play the same games, and the ML model predicts their likelihood of success in the role (Polli et al., 2019).
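The train-on-top-performers logic described above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (a nearest-profile similarity score in pure Python), not the actual model of Pymetrics or any other vendor:

```python
import math

def success_profile(top_performer_scores):
    """Average each trait over current top performers to form a 'success profile'."""
    n = len(top_performer_scores)
    dims = len(top_performer_scores[0])
    return [sum(p[d] for p in top_performer_scores) / n for d in range(dims)]

def fit_score(candidate, profile):
    """Similarity of a candidate's trait vector to the profile:
    inverse Euclidean distance, squashed into (0, 1]."""
    dist = math.sqrt(sum((c - p) ** 2 for c, p in zip(candidate, profile)))
    return 1.0 / (1.0 + dist)

# Hypothetical trait vectors (e.g., risk attitude, planning, persistence),
# each scaled to the 0-1 range.
top_performers = [[0.8, 0.9, 0.7], [0.7, 0.8, 0.9], [0.9, 0.85, 0.8]]
profile = success_profile(top_performers)

score_close = fit_score([0.8, 0.85, 0.8], profile)  # resembles top performers
score_far = fit_score([0.1, 0.2, 0.3], profile)     # very different profile
```

The sketch also makes the ethical concern discussed below concrete: whatever biases are encoded in the choice of "top performers" flow directly into the profile that all candidates are scored against.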
Other software vendors offer AI technologies that analyze a person’s digital records such as social media posts to construct a psychological profile of a candidate. Based on linguistic analyses of candidates’ Web activities, new technologies infer talent, personality, and other important individual differences and compare them against the culture of the hiring company (e.g., Chamorro-Premuzic et al., 2016, 2017; Vasconcelos et al., 2018).
Finally, AI is used to facilitate the recruiting process, taking over administrative tasks. For instance, AI tools address the problem of long online questionnaires for applicants via natural language processing (NLP) techniques. These are used to parse unstructured documents, such as candidates’ CVs, and extract relevant information to automatically complete a company’s application form (Chwastek, 2017). Furthermore, AI-powered assistants can be used to interact and communicate with candidates: They can guide candidates through the different steps of the recruitment process, from answering company and process-related questions to scheduling interviews (Rąb-Kettler & Lehnervp, 2019; van Esch & Black, 2019). Today, many companies also use programs to create offers automatically and have them signed electronically (Sánchez-Monedero et al., 2020).
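As a toy illustration of the parsing step, contact fields can be pulled from an unstructured CV with simple regular expressions to prefill an application form. Production systems use full NLP pipelines; the patterns and the sample CV below are invented for the example:

```python
import re

# Simplified extraction patterns for illustration only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def parse_cv(text):
    """Extract basic contact fields from free-form CV text."""
    email = EMAIL_RE.search(text)
    phone = PHONE_RE.search(text)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

cv = "Jane Doe\nSenior Analyst\njane.doe@example.com\n+1 (555) 010-0199"
fields = parse_cv(cv)
```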
Mapping of Ethical Considerations
This rise of new AI recruiting practices comes with new ethical quandaries for organizations and society. In what follows, we examine extant research literature and map the ethical considerations established. This mapping of ethical considerations can be understood as a summary of areas in which society may have ethical concerns about the use of AI, which is derived from extant literature. In mapping the ethical considerations, we distinguish between aspects that are, on the one hand, clearly characterized as morally good and thus as ethical opportunities, and on the other hand, aspects that are clearly characterized as morally bad and thus ethical risks. In addition, we outline issues that are controversially discussed in the literature and thus reflect ethical ambiguities that require deeper exploration. Table 5 provides a structured overview of this ethical evaluation.
Human and Algorithmic Bias
The most-discussed topic in extant literature on AI-enabled recruiting is the occurrence of bias. Although there is broad agreement that the practices currently in place are far from effective and unbiased (e.g., Chamorro-Premuzic & Akhtar, 2019; Persson, 2016; Polli, 2019), there are two differing ways, in which AI-powered tools may effect the scope of bias.
On the one hand, the use of AI may reduce human bias in different stages of the recruiting process and should therefore be considered a huge ethical opportunity (e.g., Chamorro-Premuzic & Akhtar, 2019; Savage & Bales, 2017). In the outreach stage, AI can address bias in the form of gendered language in job descriptions that dissuades certain candidates from applying for a role by creating inclusive job descriptions (Mann & O’Neil, 2016; Recruitment & Employment Confederation, 2020). In the screening procedure, subjectivity can be reduced by using algorithms that screen all applicants against the same criteria. AI is thereby able to assess the entire pipeline of candidates rather than forcing time-constrained humans to shrink the pool from the start, based on a biased process. Instead, AI can shrink the initial pipeline so a recruiter with a constrained capacity can manually handle it (Polli, 2019). Especially in the assessment stage, the use of AI technology can remove human bias from the process – or at least reduce it substantially. Human intuition can be very good and accurate, but it is nevertheless based on subjective value assessment (Persson, 2016). In contrast, via a digital interview or a video game assessment, AI automatically captures many data points of the applicants’ behavior, such as what they say, their language use or their body language, for an objective, data-driven assessment of personality (Jayaratne & Jayatilleke, 2020). Moreover, human bias (e.g., related to applicants’ physical appearance or other attributes) can be reduced, as AI can be taught to ignore people’s personal attributes and focus only on specified skills and behaviors (e.g., Bîgu & Cernea, 2019; Chamorro-Premuzic & Akhtar, 2019; Fernández-Martínez & Fernández, 2020). Lastly, human bias can be removed from the process, as the required skills and qualities for successful candidates are not determined by bias-prone intuitions from recruiters, but based on analyzing the characteristics of the company’s top performers (Lin et al., 2020).
On the other hand, AI-enabled recruiting also bears the risk of introducing different types of algorithmic bias (e.g., Bogen, 2019; Yarger et al., 2020). Yarger et al. (2020) cited three factors that may lead to biased decisions: bias in the model design principles, bias in the feature selection, and bias in the training data. A biased design, for example, may be manifested in online job platforms that make superficial predictions, not focusing on who will be successful in the role, but on who is most likely to click on the job ad. This can lead to a reinforcement of gender and racial stereotypes. A study found that targeted ads on Facebook for supermarket cashier positions were shown to an audience of 85% women, indicating that adverse impact can also occur in sourcing algorithms (Bogen, 2019). Moreover, critics are concerned that algorithms derived from information about current employees will unintentionally discriminate against underrepresented groups if existing employees are not proportionately representative of the broader application pool; this would constitute a case of biased training data (Kim, 2017). A known example from practice is the Amazon case, in which a hiring algorithm (in test mode) discriminated against women, assigning lower scores to resumes of women when ranking candidates. The algorithm was trained on data of current top performers, of which the majority were male. Thus, the algorithm penalized female attributes (e.g., Mujtaba & Mahapatra, 2019). In all these cases, algorithms can introduce bias and even magnify discrimination, affecting entire classes of individuals (Bogen, 2019; Tambe et al., 2019). The occurring discrimination may thereby be direct or indirect via proxy attributes. In the latter case, a protected group (e.g., a specific race) is discriminated against but based on legitimate grounds (e.g., a zip code) (Bîgu & Cernea, 2019; Fernández-Martínez & Fernández, 2020).
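Proxy discrimination of the kind described above is easy to reproduce in miniature: a screening rule that never sees the protected attribute can still produce divergent group-level selection rates when it relies on a correlated feature such as a zip code. The data and rule below are fabricated for illustration:

```python
# Each applicant: (group, zip_code, qualified). The screening rule never
# sees `group`, but zip code is correlated with group membership, so
# group-level selection rates still diverge: indirect discrimination
# via a proxy attribute.
applicants = [
    ("A", "10001", True), ("A", "10001", True),
    ("A", "10001", False), ("A", "10001", True),
    ("B", "20002", True), ("B", "20002", True),
    ("B", "20002", False), ("B", "20002", True),
]

FAVORED_ZIPS = {"10001"}  # e.g., learned from historically biased hiring data

def screen(zip_code, qualified):
    """Group-blind rule that nevertheless encodes a zip-code preference."""
    return qualified and zip_code in FAVORED_ZIPS

def group_selection_rate(group):
    members = [a for a in applicants if a[0] == group]
    hired = [a for a in members if screen(a[1], a[2])]
    return len(hired) / len(members)

rate_a = group_selection_rate("A")  # 0.75
rate_b = group_selection_rate("B")  # 0.0
```

Despite identical qualification rates in both groups, the zip-code feature reproduces the historical disparity, which is exactly why simply dropping protected attributes does not guarantee non-discriminatory outcomes.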
Proponents of AI recruiting tools admit that adverse impact can occur; however, they state that, compared with human biases, algorithmic biases are much easier to detect and remove (Florentine, 2016; Polli, 2019). Often, the fear of biased AI ignores the fact that the original source of algorithmic bias is the human behavior it is simulating (e.g., the biased data set used to train the algorithm). Thus, if people criticize what the AI is doing, they should criticize human behavior even more because AI is purely learning from humans (Polli, 2019).
Although there is an ongoing debate on the potential occurrence of algorithmic bias in AI recruiting, there is no ambiguity on the topic itself but general agreement that all kinds of bias and discrimination should be prevented. Therefore, AI recruiting can be classified as ethically preferable, as long as it seeks to reduce interpersonal bias in the process. However, current research suggests that the usage of AI can reduce bias but is never completely free of bias and carries the risk of algorithmic discrimination, even without bad intentions on the part of the programmers, which should be morally denounced. Thus, technical due diligence regarding algorithmic design and implementation is crucial to keep this risk low (see Sect. 3.4).
Effect on Workforce Diversity
A topic closely related to the occurrence of bias in the selection process is its impact on diversity: On the one hand, a reduction in human bias could lead to diversification of a company’s workforce (Chamorro-Premuzic & Akhtar, 2019; Recruitment & Employment Confederation, 2020). For example, the use of bias-neutral job posts created through AI may result in a more diverse pool of applicants (Lewis, 2018). Furthermore, the data-driven assessment leads to hiring of “nontraditional” candidates who might typically not make it through a hiring process (e.g., from a non-elite college, but with other strong skills). In this way, AI-enhanced recruiting tools can provide people from a wider range of socioeconomic backgrounds access to better jobs, expanding diversity, and socioeconomic inclusion (e.g., Florentine, 2016; Hipps, 2019). Moreover, case studies have shown that, for example, the aforementioned AI-powered video games by Pymetrics have a clear positive impact on companies’ gender diversity (Polli et al., 2019).
On the other hand, a systematic bias through AI could result in more homogeneity in organizations (Chamorro-Premuzic et al., 2019; Vasconcelos et al., 2018; Yarger et al., 2020). As a single decision-making algorithm, which selects candidates based on certain profiles and traits, replaces several human decision makers with potentially differing views, this may also imply a loss in diversity (Vasconcelos et al., 2018; van den Broek et al., 2019; Bîgu & Cernea, 2019). Further, Fernández-Martínez and Fernández (2020) warned that the use of AI leads to increased racial bias: Given that emotional recognition software may not consider different intonations in different languages or that emotions are differently expressed in different cultures, it may systematically disadvantage specific races or ethnic groups, which could lead to a decrease in workforce diversity.
This research question about the influence of AI on diversity has also been discussed in general diversity scholarship. Ozkazanc-Pan (2019) outlined how advanced technological shifts impact diversity scholarship, underlining the importance of bias, ethical considerations and digital inequalities in this context. She also thereby referred to the recruiting context and, for example, pointed out how the creation of employee profiles that are based on behavioral preferences, when not implemented carefully, can lead to HR managers hiring the same groups over and over again, which can hinder a company’s diversity efforts.
Overall, there is no clear understanding of what impact the use of AI has on the diversity of corporate workforces, but the topic is controversially discussed in extant literature. Therefore, relevant empirical studies would be desirable in future. It must be noted that diversity is related to, but different from, non-discrimination, and more textured efforts are needed to explore the balance between diversity and non-discrimination (Schumann et al., 2020). An interesting question in this context may be whether it is ethical to promote diversity even if it discriminates against historically advantaged groups.
Privacy and Informed Consent
Another ethical consideration raised is the concept of privacy and informed consent. In this context, businesses must account for government regulations, which differ across countries. The European General Data Protection Regulation (GDPR), which came into effect in May 2018, is one of the strictest. It aims to protect EU citizens’ rights by regulating how to collect, store, and process personal data and requires informed consent for any personal data processing (i.e., applicants must have the opportunity to agree or not agree to the use of their data). However, the informed consent requirement is not yet well implemented in the big-data and AI-regulation context, rendering the protection of personal privacy an ethical challenge (Oswald et al., 2020). An ethical dilemma emerges at this point, as applicants in the job market generally hold less power than employers. Even if applicants are informed enough to consent, they may not be able to opt out without being disadvantaged in the process, which makes genuinely free and explicit consent difficult in the hiring context (Sánchez-Monedero et al., 2020).
Moreover, there is active debate about the extent to which it is ethically appropriate to use social media information for personnel selection purposes (Chamorro-Premuzic et al., 2016; Oswald et al., 2020). Legally, social media content is public data, but it is questionable whether it is ethical to mine social media data for hiring purposes when users generally use those platforms for other purposes and may not have provided their consent for data analysis (Dattner et al., 2019; Tambe et al., 2019). Also, the extent to which social media posts are a valid and reliable indicator of personality or job performance is doubtful (Vasconcelos et al., 2018; Yarger et al., 2020). Chamorro-Premuzic et al. (2016) argued that it is naive to expect online profiles to be more authentic than resumes, but they can offer a wider set of behavioral samples. Prior empirical findings on the validity of social media data have been mixed (Ryan & Derous, 2019). Whereas some studies found connections to job performance (e.g., Kluemper et al., 2012), van Iddekinge et al. (2016) showed that recruiter ratings of applicants’ Facebook information were unrelated to their subsequent job performance and lead to subgroup differences, by favoring female and Caucasian applicants. This discussion on the use of social media information in the hiring context is not new and only connected to the use of AI. A study in Sweden showed that at least half of the interviewed recruiters had scanned applicant social media profiles themselves at some point before hiring (Persson, 2016). However, the new AI techniques make the analysis of social media profiles easier and even more tempting.
There are further AI-enabled ways to discern applicants’ private information indirectly. For example, image and voice recognition techniques can predict applicants’ sexual orientation, race, and age, as well as their physical attractiveness (Chamorro-Premuzic et al., 2016; Dattner et al., 2019). Other prediction algorithms may forecast who is more likely to become pregnant (Oswald et al., 2020; Simbeck, 2019). This greater access to candidates’ personal attributes can not only increase the risk of misuse and intentional discrimination (Fernández-Martínez & Fernández, 2020), but also might further an information and power asymmetry between candidates and potential employers, leaving applicants with less room to negotiate (Sánchez-Monedero et al., 2020).
Overall, extant research has agreed that AI recruiting practices constitute a potential privacy loss for applicants attended by a greater power imbalance between applicants and employers; this poses an ethical risk. In addition, the use of more personal data, which may lead to more accurate predictions, is controversial (see also the next section). Thus, it is currently an unresolved normative question the extent to which a company may legally and ethically collect, store, and use personal data from applicants, such as the information available on social media platforms (Lin et al., 2020).
Consistency, Accuracy and Validity
There is broad agreement in extant literature that AI enables companies to make decisions more consistently across candidates and time (van den Broek et al., 2019). Whereas traditional assessment techniques such as analogue interviews are difficult to standardize, AI-based practices allow firms to put all applicants through exactly the same experience, resulting in an increase in the consistency of candidate assessment (Chamorro-Premuzic, 2019).
However, the accuracy and validity of the new AI assessment methods are controversially discussed. Today, employers do not necessarily know exactly which characteristics make an applicant a good fit for a given role. Studies have shown a very small correlation between a person’s academic grades and their professional performance; still, many companies make above average grades a requirement for application. In contrast, some articles (e.g., Chamorro-Premuzic et al., 2019; Polli et al., 2019) argued that the new AI technologies have the potential to make the selection process more accurate as hiring algorithms predict a candidate’s work-related behavior and performance potential based on the data of current top performers. AI may thereby outperform human inferences of personality in accuracy because it can process a much larger range of behavioral signals (Chamorro-Premuzic & Akhtar, 2019; Chamorro-Premuzic et al., 2016; Polli et al., 2019). In this way, the use of AI improves both the possibilities of “what” and “how” skills and abilities are measured (Ryan & Derous, 2019).
One article pointed to the accuracy–fairness trade-off in recruiting decisions and stated that AI technologies constitute the opportunity to overcome it (Chamorro-Premuzic et al., 2019). Historically, research has shown that traditional cognitive ability tests have led to discrimination of underrepresented groups, such as candidates with a lower socioeconomic status. Thus, to increase diversity and create an inclusive culture, companies have often de-emphasized cognitive tests in hiring (Chamorro-Premuzic et al., 2019). However, AI may overcome this fairness–accuracy trade-off by deploying more dynamic and personalized scoring algorithms that can optimize for both (Chamorro-Premuzic et al., 2019; Raghavan et al., 2020).
Nevertheless, critics have raised concerns about the technical robustness and validity of AI-powered assessment methods. First, many of the newly offered AI tools have emerged as technological innovations, rather than from scientifically derived methods or research programs. Although there has been broad psychological research on the validity of traditional methods for candidate assessment, such as job interviews, assessment centers, or cognitive ability tests, the newly emerging AI tools have not been sufficiently scientifically validated, with regard to the underlying criteria for the prediction of job performance (Chamorro-Premuzic et al., 2016; Dattner et al., 2019; Raghavan et al., 2020). This means that firms may reject candidates based on unexplained correlations and make decisions based on factors with no clear causal connection to job performance (Cappelli, 2019; Kim, 2017). When AI links the tone of voice to differences in job performance, it raises the additional ethical question of whether it is appropriate to screen out people based on physically determined and rather unchangeable attributes (Dattner et al., 2019). Moreover, the indirect measurement of personality itself is still an open and discussed topic (De Cuyper et al., 2017).
Second, technical implementation bears some risks. For example, Tambe et al. (2019) argued that good employees are hard to measure, as it is difficult to disentangle individual from group performance. Further, introducing technological context, such as video games or avatar interviewers, to the recruiting process may add noisy variance to applicants’ performance and, thus, measurement error (Ryan & Derous, 2019). Therefore, a constant re-validation and control of the algorithmic tools is crucial. However, AI software vendors often do not publicly communicate whether or how they conduct validation studies on their models (Raghavan et al., 2020).
Third, Fernández-Martínez and Fernández (2020) brought up the risk that AI might not work equally for many people, undermining its accuracy. Along these lines, several studies (Buolamwini & Gebru, 2018; Raji & Buolamwini, 2019; Rhue, 2018) have shown that facial recognition software performs rather poorly, suffering from disparities in error rates across gender and race. Finally, Tambe et al. (2019) reported that AI recruiting faces the challenge of making trade-off decisions between accuracy and other ethical principles. For example, the authors stated that more “complicated” algorithms are more accurate, but they are also harder to explain, resulting in a trade-off between accuracy and explainability (we discuss the latter in the next paragraph).
These concerns about potential lack of validity and accuracy result in the question of whether it is ethical to use these new AI tools compared with more longstanding psychometric assessments that have been scientifically derived and validated. Which features are predictive, which are not, and which are protected? In particular, the selection of the features that define a good candidate is an ethically laden decision, about which current literature is ambivalent and further scientific validation is necessary (Schumann et al., 2020).
Transparency and Explainability
Another ethical opportunity mentioned in extant literature is the ability to establish transparency by providing applicants with updates and feedback throughout the process and in a timely fashion (e.g., via chatbots and AI technology), which can be considered one element of fair treatment (van Esch & Black, 2019). Often, firms fall short of providing relevant information in a timely manner, or they provide no information other than confirmation that a candidate’s application has been received. This can be very frustrating for candidates. However, next to progress updates, AI further enables firms to generate detailed feedback and give millions of job applicants data-driven insights on their strengths and development needs (Dattner et al., 2019).
However, the use of AI can also lead to a lack of transparency toward applicants, when the use of AI and automated systems is not proactively communicated to candidates (Sánchez-Monedero et al., 2020). Moreover, the predictive and decision-making processes of algorithms are often opaque, even for the programmers themselves. When algorithms take millions of data points for the assessment of a candidate, it becomes difficult to provide a qualitative explanation of which attributes are driving the decisions (Raghavan et al., 2020; Simbeck, 2019). This is ethically critical in the personnel selection context, due to its high relevance for people’s lives, and because this kind of black-box system may remain unchallenged, thereby obscuring discrimination (e.g., Tambe et al., 2019; Vasconcelos et al., 2018). Therefore, the GDPR also warrants a “right to explanation,” by which people can ask for explanations about (algorithmic) decisions made about them (Pena et al., 2020).
Overall, the ethicality of AI recruiting depends highly on the mode in which it is implemented and used. On the one hand, it offers a huge ethical opportunity in the form of timely feedback for applicants; on the other hand, it bears the ethical risk of omitting transparency and explainability. Extant literature agrees that companies and recruiters should not rely on information produced by a black-box algorithm they do not fully understand. This is an open technical challenge to solve: building algorithms and AI applications that lead to explainable results (Schumann et al., 2020).
Closely related to the issue of explainability is the topic of accountability in the hiring decision-making context. When automated AI technologies are used for decision-making, a question arises of whose job it is to adhere to ethical norms and labor laws, and who can be held responsible and accountable for the decisions made: the data scientists, the hiring managers or the company as a whole? This question becomes even more difficult when firms are not developing the AI themselves, but instead buying the technology from third-party vendors who want to protect their intellectual property and may not be willing to grant full transparency into the algorithms used (Sánchez-Monedero et al., 2020; Tambe et al., 2019). Lin et al. (2020) outlined that in the recruiting process, agents with different roles in the collective decision-making process can have a collective responsibility (i.e., each agent fulfills his or her role and shares a collective responsibility). Thus, when a recruiter makes a morally wrong decision based on a problematic recommendation by an AI, which in turn results from the negligence of a software engineer, both the recruiter and the engineer are collectively responsible and accountable for the wrong decision. Building on this discussion, Bornstein (2017) and Kim (2017) claimed that current regulation should be broadened, making companies that apply AI recruiting practices fully liable for any occurrence of discrimination or implicit bias in employment decisions.
It is clear that the AI itself cannot be held accountable; it should be a human agent who is ultimately responsible for the decision made when selecting an employee (Lin et al., 2020). However, the use of AI results in an obfuscation of responsibilities and accountabilities, which represents an ethical risk and must be clarified.
Human Oversight and Autonomy
The extent to which AI is integrated into the decision-making process varies across businesses. Some papers (Fernández-Martínez & Fernández, 2020; Yarger et al., 2020) have reported that increasingly more tasks are taken over by algorithms, though firms still rely on human recruiters to make the final decision. However, other papers (e.g., Lee, 2018; Vasconcelos et al., 2018) have stated that AI has already taken over the automated decision-making process, forwarding or rejecting candidates. This raises the question of whether it is ethical to base hiring decisions solely on algorithms and without human intervention. Sánchez-Monedero et al. (2020) even raised the point of whether, due to the new GDPR regulation, it is in fact illegal to use a solely automated hiring system in the EU, because the GDPR grants people the right to a “human in the loop.” Overall, extant literature agrees that the loss of human oversight should be avoided, but human involvement in the training, validation, and deployment process should be maintained.
Additionally, Lin et al. (2020) raised the question of whether the usage of AI affects human autonomy. When AI applications and analyses shape human decisions by interfering with deliberation processes, the violation of human autonomy can become a serious ethical concern. The authors called this “AI paternalism” (p. 16). However, this topic is not further discussed in the identified literature. Thus, questions regarding how AI impacts the autonomy and dignity of candidates remain open.
Efficiency Gains and Effects on Internal Organization
In the first place, AI-advanced selection tools are attractive for organizations, as they make hiring more cost- and time-efficient (e.g., Lee, 2018; van Esch & Black, 2019). With the help of AI, employers have a greater ability to quickly shortlist candidates with high potential and streamline the selection process (Hipps, 2019; Persson, 2016; Savage & Bales, 2017). For example, AI technology provides firms with the ability to initially screen and process hundreds of applications in a short time frame (Persson, 2016). Moreover, AI-powered video interviews increase efficiency by reducing selection process time as well as candidate time and travel distances (Fernández-Martínez & Fernández, 2020).
However, the use of AI has further effects on the internal organization. The enhancement of recruiters’ jobs is thereby considered an ethical opportunity of AI-enabled recruiting practices (Rąb-Kettler & Lehnervp, 2019; van Esch & Black, 2019). Daily, recruiters are confronted with numerous repetitive tasks, such as screening resumes, scheduling interviews and conducting similar conversations. When these tasks are taken over by AI, it results in a more meaningful job, as recruiters can undertake activities of higher value for the company. For instance, they can adapt better engagement techniques to ensure that a leading candidate accepts a job offer (Hipps, 2019; van Esch & Black, 2019) and can better focus on the individual candidates, stepping from a pure head hunter role into a career guide role (Rąb-Kettler & Lehnervp, 2019). Although the identified articles evaluated the effects of AI recruiting on the internal organizational members very positively, they must be studied in greater detail. For example, it needs to be tested whether a greater volume of candidates may prevent any gains in work time for recruiters (Ryan & Derous, 2019). Further, potential job losses of recruiters are not yet part of the discussion.
Although the research on applicant reactions to technology-powered recruiting processes has increased in recent years (see Woods et al., 2020 for a review on applicant reactions to digital selection procedures), there is limited understanding of how people perceive AI recruiting and contrasting findings exist. Several studies of applicant reactions to AI interviews provide some cause for concern as they reveal that applicants perceived AI interviews as less fair and less favorable than face-to-face interviews with humans (Acikgoz et al., 2020; Lee, 2018; Newman et al., 2020). For example, Lee (2018) found that participants believe that AI lacks certain human skills that are required in the recruiting context: It lacks human intuition, makes judgments based on keywords, ignores qualities that are hard to quantify and is not able to make exceptions. Furthermore, some participants felt that using algorithms and machines to assess humans is demeaning and dehumanizing (Lee, 2018). In contrast to those findings, another group of papers (Langer et al., 2019a, 2019b, 2020; Suen et al., 2019) found no differences in perceived fairness between interviews with an AI and interviews with a human among job applicants, although most of them exhibited lower favorability to AI interviews.
Other studies (Gelles et al., 2018; Kaibel et al., 2019; Langer et al., 2018; van Esch & Black, 2019) examined the effect of different contextual factors on applicant reactions to the use of AI in hiring. For instance, Langer et al. (2018) found that applicants with a computer science background did not perceive AI recruiting differently from non-computer science applicants. Another study by Kaibel et al. (2019) examined the moderating effect of applicants’ discrimination experience and uniqueness. They found that applicants who have experienced discrimination before perceive selection processes as fairer when an algorithm instead of a human makes the decision, whereas the negative effect of AI-based selection decisions on organizational attractiveness was stronger for individuals with a high sense of personal uniqueness. Underlining the relevance of perceived fairness, a study (van Esch & Black, 2019) found that the more job candidates perceive the AI-enabled recruiting system as providing fair treatment, the likelier they are to engage in and complete the recruiting process.
In a case study, van den Broek et al. (2019) found that different stakeholder groups may hold different and clashing notions of fairness, which may even be reconsidered during the implementation of AI recruiting in practice. For example, although AI tools are introduced to make the process fairer and decisions consistent across the company, it was observed that some recruiters did not use the algorithmic results consistently, but made exceptions, which they perceived as fairer.
Overall, there is no clear answer to the question of how AI recruiting is perceived. What is perceived as fair in one context may be judged differently in another. Although we found several studies examining the fairness perceptions of applicants, the perspective of current employees and HR managers on AI recruiting tends to be neglected. This leaves open the question of the extent to which HR managers trust and accept AI recruiting.
Approaches to Mitigate Ethical Risks
As shown in the previous section, the new AI technologies pose new challenges to regulation and governments, especially as they are being applied in recruiting. Some approaches to mitigating the emerging ethical risks in the AI recruiting context are discussed in extant literature.
In the identified literature, it has been broadly claimed that more governmental regulation is needed to respond to the new developments in hiring: Whereas Kim (2017) argued for a legal response to what she called classification bias, Fernández-Martínez and Fernández (2020) called for governments to track selection processes and check for any infringement of fundamental employment laws or human rights. In their recent analysis, Raghavan et al. (2020) found that currently, vendors’ practices in bias mitigation are heterogeneous. This suggests that evolving industry norms are sensitive to bias concerns but lack clear guidance on how to respond. However, as current regulation leaves room for unethical behavior of firms, today, employers need to think beyond governmental law when developing and using predictive hiring tools (Bogen, 2019).
Extant literature refers to various organizational standards that firms may and should implement to ensure ethical use of AI in recruiting. First, it is suggested that companies applying AI tools in the personnel selection process comply with privacy laws just as they would in traditional hiring. On the one hand, this means that organizations should fully protect and keep safe all sensitive data. On the other hand, recruiters should not use or predict any private or sensitive candidate information in the recruiting process. In addition, firms should proactively and fully brief candidates that their data will be analyzed by AI systems and obtain their consent (e.g., Chamorro-Premuzic & Akhtar, 2019; Simbeck, 2019). Second, firms should proactively and explicitly provide meaningful information on the hiring decision-making process, including information about the algorithmic techniques and data sets used, to ensure transparency and craft effective policy (Köchling et al., 2020; Raghavan et al., 2020; Sánchez-Monedero et al., 2020). Additionally, it should be always transparent to applicants whether they are communicating with another human or with AI (Simbeck, 2019). Third, several papers (e.g., Chamorro-Premuzic & Akhtar, 2019; Köchling et al., 2020) also suggested human oversight on AI as a standard for organizations. The authors encouraged a human review, in which experienced recruiters oversee the selection and evaluation made by AI. They argued that decisions should be made by an algorithm-informed human, rather than by an algorithm alone. Fourth, to further ensure and audit the implementation of these ethical standards, various authors have referred to compliance instruments companies should establish, such as an AI ethics board with an oversight function, consisting of representatives of relevant stakeholders who debate the data and ethical dimensions of AI algorithms and agree on boundaries for AI technology in the company (Simbeck, 2019; Tambe et al., 2019). 
In addition, Tambe et al. (2019) recommended specifying a code of ethics for AI-related initiatives within the company. Lastly, authors have encouraged diverse data scientist teams in organizations to foster inclusion and equity in AI (Giang, 2018; Yarger et al., 2020). In particular, in the ML algorithm development process, diverse voices across gender and race must be present to raise questions and check implicit assumptions.
Technical Due Diligence
Next to approaches on the governmental and organizational level, the identified literature also discusses technical methods to ensure ethical application of AI tools in recruiting. First, authors mentioned the data literacy of programmers, as well as the knowledge of hiring managers on how to use the AI solutions as a first prerequisite. Given that any data concerns can have a life-changing impact on applicants, companies need to have adequate levels of data and statistical skills to assure the accuracy and validity of the developed algorithms (Fernández-Martínez & Fernández, 2020; Lewis, 2018; Simbeck, 2019). Second, if companies do not develop the algorithms in-house, but buy more innovative skill tests or games from external vendors, practitioners are strongly encouraged to refer to professional test standards and obtain critical information about the tools: for example, evidence that informs psychometric reliability, criterion-related validity and bias implications (Oswald et al., 2020).
Third, the ethicality of the AI tool design, which should include bias mitigation techniques, plays a crucial role. For instance, some AI software vendors remove any wording or phrases that can unconsciously predict the gender of a candidate from CVs to circumvent unconscious bias and improve equity (e.g., Lin et al., 2020; Yarger et al., 2020). A different approach suggested by Williams et al. (2018) is to proactively gather and use social category data to illuminate and combat discriminatory practices. The authors argued that only when data are labeled with social categories can data scientists detect, understand, or remediate patterns of discrimination. Furthermore, open-source tools and technical frameworks for data scientists (e.g., IBM’s “AI Fairness 360”) can facilitate systematic bias checks and assist developers in embedding fairness in their algorithms (see Mujtaba & Mahapatra, 2019 for an overview of open-source toolkits). However, Sánchez-Monedero et al. (2020) pointed to the computational limitations of bias mitigation techniques and further argued that most bias mitigation systems aim at meeting the constraints of US law, which makes them not directly applicable in EU markets. In the context of ethical AI, Polli (2019) further referred to the movement among AI practitioners to develop a set of design principles for making AI ethical and fair (i.e., beneficial to everyone). She thereby emphasized the key principle according to which AI should be designed so that it can be easily audited. Rather than just assuming that algorithms yield accurate results, employers must regularly check the technology used for discrimination, as well as data errors and biases (e.g., Fernández-Martínez & Fernández, 2020; Hipps, 2019; Polli, 2019). Efforts must be made to constantly improve the robustness of any AI tool and, thus, proactive auditing methods should be implemented (Köchling et al., 2020). 
For example, outside professionals can be hired to build an internal auditing team to look at the AI decisions and audit key algorithms (Giang, 2018; Mann & O’Neil, 2016). They can carry out random spot checks on algorithmic recommendations, investigating in detail which candidates the algorithm has been selecting and why. To this end, Fernández-Martínez and Fernández (2020) developed an automated multi-agent software architecture to support auditing the recruiting process.
Lastly, companies need to be able to explain why a candidate has been selected and the causality regarding which specific attributes can be associated with their success in a role (Chamorro-Premuzic et al., 2019; Lewis, 2018). Thus, employers should not rely on black-box models, but develop AI applications that are interpretable (Lin et al., 2020). Transparency on algorithmic assumptions and models (e.g., in the form of explainability reports) is key in the mitigation of bias and when addressing trade-off decisions data scientists have to make (e.g., Mujtaba & Mahapatra, 2019; Tambe et al., 2019).
Awareness Among Employees
AI plays a critical role in technology to attack the diversity problem. It is therefore crucial that companies invest not only in AI technology, but also in people who are aware of both the opportunities and the risks that attend AI-powered recruiting practices (Chamorro-Premuzic et al., 2019). The awareness and sensibility of recruiters and data scientists about the potential bias and shortcomings of their algorithms is key to address the accompanying ethical challenges (Simbeck, 2019). When regulation is not enough to guide human behavior, ethical thinking and awareness of conscious use of predictive AI tools must be further promoted beyond regulation (Persson, 2016).
https://sourceforge.net/p/cpufreqd/feature-requests/5/
I've thought about an interesting feature (at least for me :) ):
cpufreqd could take idle time into account to choose a profile.
When I use my laptop normally, it runs at ~800 MHz.
When I run a big application, it runs faster.
I'd like it to run slower when I've been running nothing for 5 minutes... to save battery when I am running on battery, or to cool the CPU a little bit more...
I've not written any code for it... but if you think it is interesting, I will try to implement it...
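A rough prototype of the requested behavior, sketched in Python rather than cpufreqd's C: sample the system-wide idle fraction from /proc/stat and switch to a slower profile once the machine has been idle past a grace period. All names here (pick_profile, the profile strings, the thresholds) are illustrative, not part of cpufreqd:

```python
import time

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait jiffies
    return idle, sum(fields)

def idle_fraction(interval=1.0):
    """Fraction of CPU time spent idle over `interval` seconds."""
    idle0, total0 = cpu_times()
    time.sleep(interval)
    idle1, total1 = cpu_times()
    dtotal = total1 - total0
    return (idle1 - idle0) / dtotal if dtotal else 1.0

def pick_profile(idle_frac, idle_since, now, threshold=0.95, grace=300):
    """Hypothetical policy: drop to a slow profile after `grace` seconds of idleness."""
    if idle_frac >= threshold and now - idle_since >= grace:
        return "powersave"
    return "ondemand"

# Example (on Linux):
#   frac = idle_fraction()
#   profile = pick_profile(frac, idle_since=last_busy_time, now=time.time())
```

A real implementation inside cpufreqd would presumably expose the threshold and grace period as config options of an idle-time plugin rather than hard-coding them.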
http://stackoverflow.com/questions/6539160/regex-for-a-string-of-n-characters
I want to use .NET's Regex.IsMatch function to determine if a string is 8 characters long and matches the expression. AK-9442F would match but not AK-9442. How would I construct the expression?
Using the static Regex.IsMatch, you can do:
Should work for your purpose. Broken down, it's:
Use this pattern:
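The code fragments from the original answers were lost in extraction, so the exact patterns can't be recovered. For the literal requirement, a string of exactly eight characters, a pattern along these lines works (shown here in Python's re; the same pattern strings are also valid for .NET's Regex.IsMatch):

```python
import re

# Anchored pattern: exactly 8 of any character (no newlines).
# ^      start of string
# .{8}   any character, exactly eight times
# $      end of string
pattern = r"^.{8}$"

print(bool(re.match(pattern, "AK-9442F")))  # True  (8 characters)
print(bool(re.match(pattern, "AK-9442")))   # False (only 7)

# A stricter, assumed format (two letters, hyphen, four digits, one letter):
strict = r"^[A-Z]{2}-\d{4}[A-Z]$"
print(bool(re.match(strict, "AK-9442F")))   # True
```

The stricter variant is only a guess at the intended format, since the example strings in the question allow several interpretations.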
https://www.aidlda.it/speaker/luigi-di-stefano/
Luigi Di Stefano
Professor at the Department of Computer Science and Engineering (DISI) of the University of Bologna
Luigi Di Stefano is Professor at the Department of Computer Science and Engineering (DISI) of the University of Bologna, where he founded and leads the Computer Vision Laboratory (CVLab). His research interests are focused on computer vision, machine learning and deep learning.
In these fields, he has coordinated many academic research projects funded by public national and European grants as well as by private companies and he is author of more than 150 papers in renowned international journals and conferences and several patents.
He has given invited lectures in workshops and PhD schools, has been called as a member of the Thesis Defense Committee of several PhD candidates both in Italy and abroad, serves regularly as a reviewer for the main international journals and conferences.
He has been member of the Board of Directors of Datalogic SpA as an Independent Director and scientific consultant for Pirelli Tyres in the area of computer vision.
In 2011-2012 he was Scientific Supervisor of VIALAB (Vision for Industrial Applications Laboratory), a research and technology transfer laboratory focused on computer vision located in Bologna. In January 2020 he has co-founded the start-up eyecan.ai (https://www.eyecan.ai/).
Deep Scene Perception without Labeled Data
This talk will present the recent research work carried out at CVLab, University of Bologna, in the field of deep learning for scene perception. The leitmotif behind our work concerns avoiding reliance on supervision from labeled data, which, indeed, I am led to posit does not even exist when it comes to training models aimed at key perception tasks like depth prediction.
Firstly, I will address depth estimation from both stereo as well as monocular views. Here, the main contribution of our research concerns deploying self-supervision to pursue domain adaptation of deep CNNs pre-trained on computer-generated imagery. This, indeed, led us to the development of the first-ever on-line adaptive stereo network, i.e. a CNN that deploys an efficient continual-learning paradigm to keep up with domain changes in real-time.
As for depth-from-mono, based on the intuition that effective monocular depth cues arise from semantic knowledge, I will show how joint learning of depth and per-pixel class labels can ameliorate depth prediction significantly.
I will dwell further into cross-task learning by presenting our novel AT/DT framework, which allows for transferring learned representations across different tasks and domains, so as to, e.g., enable predicting depths in a target domain by leveraging semantic labels only (or vice-versa).
I will then present our latest results dealing with the first CNN architecture for comprehensive scene perception from monocular videos: ΩNet (CVPR 2020) can predict depth, semantic labels, optical flow, per-pixel motion probabilities and motion mask based on a novel training protocol relying on self-supervision and knowledge distillation.
Finally, I will address perception from point clouds and present our rotation-equivariant local 3D descriptor based on Spherical CNNs and learned end-to-end from raw data without any explicit supervision. Peculiarly, this proposal is conducive to extraction of a canonical orientation from the learned rotation-equivariant representation so as to allow for rotation-invariant descriptor matching.
To conclude the talk, I will briefly show some unpublished results from an on-going project carried out in cooperation with a major company.
https://pypi.org/project/Firenado/
Firenado is a Python web framework based on the Tornado web framework/server.
Firenado is a Python web framework that encapsulates and extends Tornado, organizing the application in components and adding a server-side session layer, YAML-based configuration files, and other common features that help developers build web applications and services.
pip install firenado
Creating and running a new application:
> firenado project init helloworld
> cd helloworld
> firenado app run
By default an application will be created with a redis-based session and a redis data source defined and linked to the session.
Firenado doesn't install redis-py, so it is necessary to either install it or make the session file-based. You can disable the session engine too.
To change the session type to file go to helloworld/conf/firenado.yml and change the session definition to:
# Session types could be:
# file or redis.
session:
  type: file
  enabled: true
  # Redis session handler configuration
  #data:
  #  source: session
  # File session handler related configuration
  path: /tmp
If your helloworld project isn't on the python path, just go to helloworld/conf/firenado.yml and configure the application settings:

app:
  component: helloworld
  data:
    sources:
      # Set here references from sources defined on data.sources
      - session
  pythonpath: ..
  port: 8888
Download: Firenado-0.1.7.8.tar.gz (47.4 kB), source distribution.
https://www.coursehero.com/file/5783975/Figure-43-graphs-C-T-for-the-two-signals-from-Figure-41-we/
…for a set of parameters that cause the inputs to model the outputs. Function learning attempts to find the best parameter vector v; alignment attempts to find the best transformation T. As we did for function learning, we can draw an analogy with sample entropy. The log likelihood of T is proportional to the conditional entropy of the image given the model,

log ℓ(T) = −N h(v(T(X)) | u(X); T, ψ, q, F)    (4.14)

This is not the EMMA estimate of entropy, but the conditional entropy of v under the
assumption that v is conditionally Gaussian. For the problems described in Section 3.1 it
was possible to show that entropy optimization led to maximum mutual information solutions.
For this problem however, we cannot claim that maximizing log likelihood is equivalent to
maximizing the mutual information between v and u. The mutual information

I(v(T(X)); u(X)) = h(v(T(X))) − h(v(T(X)) | u(X); T, ψ, q, F)    (4.15)

includes both a conditioned and unconditioned entropy. For some types of transformations h(v(T(X))) may change as T is varied. In these cases minimizing conditional entropy is not
equivalent to maximizing mutual information. One must also maximize the unconditioned
entropy. In our simple example, where only translation is varied and the signals are periodic,
unconditioned entropy does not change as T is varied.
Returning to the first synthetic example, we can plot C(T) from (4.12) versus translation. Figure 4.3 graphs C(T) for the two signals from Figure 4.1 (we have assumed periodic boundary conditions on the signals). We can see that the cost has a very strong minimum at the true translation of 0 pixels.
Paul A. Viola, CHAPTER 4. MATCHING AND ALIGNMENT

[Figure 4.3: On the left is a plot of image and model that are identical except for noise. On the right is a plot of C(T) versus translation. There is a significant minimum at the correct aligning translation of 0 pixels.]
[Figure 4.4: On the left is a plot of image and model that are related non-linearly. On the right is a plot of C(T) versus translation. There is no minimum at the aligning translation of 0 pixels. In fact minima exist at incorrect translations.]
Correlation works very well at matching together u and v when the imaging model and
exogenous parameters are known. In many cases however we may be faced with a situation
where F and q are unknown. In some cases alignment problems can still be solved by
assuming that the imaging function is the identity function. This assumption is not effective when aligning the non-monotonically related signals shown in Figure 4.2. Figure 4.4 graphs C(T) versus translation for these two signals. Notice that each of the actual minima are at incorrect translations. In general C(T) cannot be used to align signals related by unknown non-linear transformations. C(T) can however be generalized to work with signals that have been transformed linearly. Rather than minimize the squared difference between signals, we can instead minimize the squared difference between signals that have been normalized. A normalized signal
4.1. ALIGNMENT, AI-TR 1548

[Figure 4.5: Graph of u(x) and v(x) = 3u(x) − 2 versus x.]
is one with a mean of zero and a standard deviation of one and can be computed as

û(x) = (u(x) − E_X[u(X)]) / σ_u    (4.16)

The normalized version of a signal is invariant to multiplicative and additive changes to the original. The sum of the squared differences between the normalized signals, NC(T), can be computed directly as one minus the normalized correlation between the signals u and v. Normalized cost is defined as:

NC(T) = 1 − (E_a[u(X) v(T(X))] − E_a[u(X)] E_a[v(T(X))]) / (σ_a(u(X)) σ_a(v(T(X))))
As a shorthand we have abbreviated sums over the coordinates x as expectations and variances.
Normalized cost can be used on signals like the ones shown in Figure 4.5, where F(u) = 3u - 2. A plot of NC(T) versus translation is identical to Figure 4.3. In some cases, normalized cost can be applied to signals transformed by non-linear monotonic functions. Note however that the two signals shown in Figure 4.2 are related by a non-monotonic function and cannot be accommodated in this way. In these examples translation does not affect the mean or the standard deviation of the signals. As a result, normalized cost will not produce a convincing minimum where cost alone does not.
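The invariance claimed here can be checked directly. A pure-Python sketch of the normalized cost at a fixed pose follows; the sample signal and the helper names are mine, while the formulas follow Eq. 4.16 and the normalized-correlation definition:

```python
import math

def normalize(s):
    """Zero-mean, unit-standard-deviation version of a signal (Eq. 4.16)."""
    n = len(s)
    mean = sum(s) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in s) / n)
    return [(x - mean) / sd for x in s]

def nc_cost(u, v):
    """NC(T) at a fixed pose: one minus the normalized correlation of u and v."""
    uh, vh = normalize(u), normalize(v)
    return 1.0 - sum(a * b for a, b in zip(uh, vh)) / len(u)

u = [math.sin(2 * math.pi * x / 64) for x in range(64)]
v = [3 * ux - 2 for ux in u]  # linearly transformed copy, as in Figure 4.5

# Normalization removes the multiplicative and additive changes,
# so the normalized cost is (numerically) zero.
print(abs(nc_cost(u, v)) < 1e-9)  # -> True
```

The same check against a non-monotonically transformed copy would not return zero, which is exactly the limitation described in the text.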
We may still wish to align models and images that are related by non-monotonic functions. In this case alignment can be performed by jointly searching over the space of possible imaging functions, exogenous parameters and transformations. Probability can be used to m...
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867977.85/warc/CC-MAIN-20180527004958-20180527024958-00280.warc.gz
|
CC-MAIN-2018-22
| 4,933
| 66
|
http://kfemate.sourceforge.net/about.html
|
code
|
About Kfemate.
kfemate is developed by Ivan Rubio Albarran
Project page http://sourceforge.net/projects/kfemate
Why to write kfemate?
Because in Mexico there is no software of this kind to choose from, and the searches I did on the web left much to be desired, I decided to write kfemate. Enjoy it!
Why Open Source?
Because I believe there are many people who want something fast that can be modified to their needs. I did not write kfemate with every option for the whole world, only for me; that's why it is Open Source.
This page was developed on Mandriva using NVU and The GIMP.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257651780.99/warc/CC-MAIN-20180325025050-20180325045050-00688.warc.gz
|
CC-MAIN-2018-13
| 609
| 8
|
https://zh.coursera.org/learn/python-data/reviews?authMode=login&page=293
|
code
|
Jun 18, 2020
Great course for Python. Loved this course and enjoyed it. Thanks to Dr. Chuck. If anyone wants to take a course which is well explained and fun for learning Python, then Hey!!! this is your course.
Jul 23, 2020
Excellent explanation. Professor Charles kept the course from being monotonous. Learnt in depth about reading from file, sorting dictionaries and appending lists. Looking forward to learn more courses
By Pietro B•
Feb 19, 2017
The instructure is amazing. Super course :)
By Pedro H•
Jan 24, 2017
Great course, I learned a lot from Dr. Chuck
By Gayatri V•
Jan 2, 2017
Great course, useful stress-free exercises!
By Harshal A•
Nov 7, 2016
Excellent Resources and teaching techniques
Sep 30, 2016
Wonderful courses, I appreciate it so much!
Sep 12, 2016
I have learned a lot from this course. Thanks
By Eddi P•
Aug 8, 2016
super great teacher, lesson are really fun!
By Dante L C C•
Feb 23, 2016
Great course. I have learned a lot. Thanks.
By Thomas H•
Jan 2, 2016
Good introduction - fun to watch and learn.
Jan 1, 2016
Very good course. Loved the weekly exercise
By Min T•
Dec 16, 2015
Thank you for offering this awesome course.
By Pavel F•
Nov 22, 2015
extremely good course. highly recommendable
By Sinyakov M•
Nov 12, 2015
Nice course, but some assignment very easy.
By Usama N•
Jun 9, 2022
Was the best thing that ever happen to me.
By XinYi L•
May 14, 2022
excellent lectures with valuable practices
By Ahmad A M•
Mar 30, 2022
Firstly: Thank u.. Secondly: it’s worth it
By Shivang S•
Jan 12, 2022
Not words can truly appreciate the content
By Karthika R•
Sep 13, 2021
wonderful assignments to learn new tricks!
By Anwar K•
Jul 2, 2021
Thanks alot but it was a very easy one :)
By Huamei Z•
May 27, 2021
The course is so helpful. I love Dr Chuck!
By NEETHUKRISHNA P•
May 23, 2021
the python programming was well understood
By jose b•
Apr 2, 2021
Great, I learned a lot, thank you so much.
By Nacho B•
Mar 10, 2021
Awesome, makes python a fun thing to learn
By Sumit B•
Feb 14, 2021
Great course for beginners in programming!
By Philipp O•
Jan 24, 2021
Amazing teaching! thanks a lot, Dr. Chuck!
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037089.4/warc/CC-MAIN-20220626040948-20220626070948-00299.warc.gz
|
CC-MAIN-2022-27
| 2,322
| 76
|
https://www.ieeta.pt/index.php/new/seminar-machine-learning-pitfalls-in-healthcare/
|
code
|
Speaker: Augusto Marques Ferreira Silva is an Associate Professor in the Department of Electronics, Telecommunications, and Informatics and a member of the Institute of Electronics and Informatics Engineering of Aveiro – IEETA. He is also the Director of the Master’s program in Medical Image Technologies. His preferred research area is Medical Image Processing with a focus on Machine Learning methods.
Abstract: In an era where data-driven decision-making has become central to healthcare, machine learning (ML) holds great promise. This presentation aims to promote discussion on the often overlooked but critical aspect of ML in healthcare: the pitfalls and challenges.
Key themes will include data quality and bias, interpretability, generalization, and ethical considerations. We’ll discuss how the unique characteristics of healthcare data, such as class imbalance and noisy labels, can skew model performance and decision-making.
Furthermore, we will emphasize the importance of interpretability and transparency in healthcare ML models. The ability to explain and understand model predictions is paramount when lives are at stake.
Generalization, or the ability of models to perform well on unseen data, is another critical challenge in healthcare. We will discuss strategies to mitigate overfitting and ensure that ML models perform reliably across diverse patient populations.
The role of latest trends such as generative ML will be addressed with some practical examples.
Ethical considerations, such as patient privacy, consent, and fairness, will be central to our discussion. We will address the ethical dilemmas that arise when deploying ML in healthcare and offer guidance on responsible AI implementation.
Location and date: IEETA auditorium, 15th November 2023, 15:30
Zoom link: https://videoconf-colibri.zoom.us/j/96216735526
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00687.warc.gz
|
CC-MAIN-2023-50
| 1,852
| 9
|
http://mdportal.net/cups-windows/cups-windows-drivers.html
|
code
|
Cups Windows Drivers
Can anyone give me a hand? You must convert every single letter in their filename to lowercase. Adobe Driver: To use the Adobe driver simply run the installer, select "Network Printer", enter the URL of your printer queue (should be of the form http://hostname:631/printers/PrinterName), and then select "yes". For example: Close both the content and policy properties by clicking OK. Close the Group Policy Management Editor.
To add accounts, set up a regular GNU/Linux account and then set up a Samba password on the server. To use the driver files for uploading them to a Samba print server, start the installer, copy the files from the temporary folder to a new location, and cancel the installation. After making any modifications, restart CUPS.
Add Cups Printer To Windows 10
I'm part of a primarily linux shop that has to deal with Windows for administrative machines and this blog was a life-saver. We focus on creating quality articles that help you become more efficient. If you've chosen another username, edit the configuration above accordingly. Modify CUPS Configuration If you don't wish to create a raw printer queue then you can instead make the following changes to your CUPS mime.types and mime.convs configuration files.
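The stock comments in those two files describe the change; the commonly cited edit (file paths may differ by distribution, e.g. under /etc/cups/) is to uncomment the raw type and its pass-through filter so CUPS accepts pre-rendered printer data:

```
# /etc/cups/mime.types — allow raw data to pass through unfiltered
application/octet-stream

# /etc/cups/mime.convs — send raw data straight to the backend
application/octet-stream   application/vnd.cups-raw   0   -
```

After uncommenting both lines, restart CUPS so the new MIME rules take effect.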
This environment is no longer available to me for testing, so I'm not able to vouch that each of these steps is absolutely required. In October 2016, Microsoft published an update to mitigate these problems in an Active Directory (AD). The IP address of the print server is 10.0.0.1322) I assumed that the port was 631 (common printer port), so I added that to an HTTP string that I was constructing Cupsaddsmb No Windows Printer Drivers Are Installed I hope this helps others. 1 Pimiento OP alangauld Dec 26, 2016 at 4:34 UTC 1st Post This worked perfectly.
Cups Windows Drivers Download
What you need to install is the lpd (cups-lpd) service. Under Fedora, and possibly other systems, CUPS needs to be configured to accept printer data that is already in its native form. https://wiki.archlinux.org/index.php/CUPS/Printer_sharing
Supported Windows Printer Drivers
Samba only supports the printer driver model version 3 that is supported in Windows 2000 to 10 and Windows Server 2000 to 2016.
Cups Binary Package
When prompted for a driver select a Manufacturer of "Generic" and the Printer "MS Publisher Imagesetter". This share name is hard-coded in Windows and cannot be changed. Select the printer driver for this printer as you would for a locally connected printer.
Cups Windows Drivers Download
Navigate to the Print Servers entry, double-click your print server, and select the Printers entry. The goal that I've been finally able to achieve is a smart configuration setup in order to export Windows printers drivers through CUPS service using SAMBA to share the printers onto Add Cups Printer To Windows 10 Right-click to the printer and select Properties. Cups Svn July 2011, 13:50 Uff, I finally did it!
On Fedora some additional configuration of CUPS is required; on other Linux/Unix systems it may work out of the box. Doing this is the final step; after that you should be able to print from Windows directly to a CUPS printer using IPP! Then test the print setup by printing a test page. Samba only supports non-packaged-aware printer drivers. 32-bit and 64-bit Drivers: printer drivers for the 64-bit Windows architecture you can only upload from a Windows 64-bit operating system; 32-bit drivers you
Cupsaddsmb
The goal here is to never leave your snug underground lair.
Cups Windows 64 Bit Drivers
I made adjustments to the config file of the printer but I don't think they were necessary. 4) On the Windows laptop, I opened the printers panel in the control panel and
Use version 3 drivers instead.
Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.5.8] tree connect failed: NT_STATUS_BAD_NETWORK_NAME Unable to copy Windows 2000 printer driver files (1)!
Postscript Printing
To use a printer queue as a Postscript printer requires a Windows XP Postscript printer driver, such as the built-in MS Publisher Imagesetter or this freely available one from To edit it, use the following command in your terminal prompt: gksudo gedit /etc/samba/smb.conf
[global]
log file = /var/log/samba/log.%m
public = yes
dns proxy = no
workgroup = workgroup
os
Robert Spencer: Thank you so much for this great post.
Someone without programming knowledge can do it in 30-100 minutes. Now go to a Windows client (it's important that it's running Windows 2000 or newer).
lightmaster: The log file tells me:
[2011/06/25 11:20:30.309357, 0] smbd/service.c:988(make_connection_snum) canonicalize_connect_path failed for service print$, path /etc/samba/drivers
[2011/06/25 11:20:30.334322, 0] smbd/service.c:988(make_connection_snum) canonicalize_connect_path failed for service print$, path /etc/samba/drivers
[2011/06/25 11:20:30.358875,
Hostname lookup: Another common step is to ensure that the hostname broadcast by CUPS is accessible from the Windows XP machine.
CUPS Configuration This is a nice simple CUPS configuration for sharing printers. Obviously, there are a lot of tweaks and customizations that can be done with setting up a Samba print server, so it is advised to look at the Samba and CUPS Samsung and HP both have a lot of printers with Linux, Mac OS X, Windows, and networking support. You still need to mount that printer onto your computer then share the printer via lpd.
Content is available under GNU Free Documentation License 1.3 or later unless otherwise noted. Otherwise, Windows does not display the driver in the list displayed in the printer's properties when assigning the driver. Understand with lpd running on your linux box, there is no processing that happens on the linux box. You'll have to select a printer driver to complete the set up of the new printer on Windows. 6) Right click on the newly made printer thumbnail and select Printer Properties.
All this is licensed under the GPL/LGPL. For more information, see www.cups.org. What's the problem with CUPS?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864572.13/warc/CC-MAIN-20180521235548-20180522015548-00024.warc.gz
|
CC-MAIN-2018-22
| 6,660
| 19
|
https://upge.wn.com/?from=pipelinedirectory.com&pagenum=5&language_id=1&template=cheetah-photo-search%2Findex.txt&query=pipeline_directory
|
code
|
- published: 18 Jun 2017
- views: 118361
This video covers how to create a Jenkins Pipeline with an Example. Github Repo for Jenkinsfile: https://github.com/TechPrimers/jenkins-example Twitter: https://twitter.com/TechPrimers Facebook: http://fb.me/TechPrimers GitHub: https://github.com/TechPrimers or https://techprimers.github.io/ Video Editing: iMovie Intro Music: A Way for me (www.hooksounds.com) #JenkinsPipeline #TechPrimers #CICD
Visit www.plumber3d.com to learn more about the Plumber Pipeline for maya. Video reference about the directory structure used by the Plumber Personal Production Pipeline for Maya. Plumber is perfect for students, independent Animated Short filmmakers, and small production teams. It manages your assets, and keeps your production organized from modeling through lighting. It is a turn-key system, but is wide open for customization.
Windows Powershell for Beginners : The Pipeline This course will cover the technical and theoretical reasoning’s for using PowerShell while also teaching scripting best practices. The project would be split into three sections which would start off with basics and introductory material then move onto fundamentals and finish with a slightly more advanced stage. The course is presented in PowerShell 5.0 (latest) and features information for those running older versions and the differences between the versioning. If you would like to view the entire course, visit www.ine.com to sign up for an All Access Pass! http://streaming.ine.com/c/windows-powershell-beginners
Mark Martin, of the Russell Martin Home Selling Team in Austin, TX, shares what he does in order get 3-5 transactions per month from calling into neighborhoods. He says, "When we call around our Just Listed/Sold, and cold call into neighborhoods, we have little to no competition, and the sellers are very loyal to us because they know we go the extra mile to do what other agents refuse to do in this business- cold call."
Tutorial for the pipe() system call. In this video, we illustrate the basics of pipe() and how you can use it to allow multiple processes or programs to communicate with each other. Pipe() takes in an int array of size two, and the indices of this array will act as each end of our pipe. It is now possible to disconnect stdin and stdout from the terminal and reconnect (stdin and stdout) with the ends of the pipe to allow communications between different processes.
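The mechanics described there can be sketched with Python's os module, which exposes the same pipe() and fork() primitives the tutorial covers (a Unix-only sketch; the message text is illustrative):

```python
import os

r, w = os.pipe()  # r and w are the two ends of the pipe (read fd, write fd)

pid = os.fork()
if pid == 0:
    # Child process: close the unused read end, then write into the pipe.
    os.close(r)
    os.write(w, b"hello from the child\n")
    os.close(w)
    os._exit(0)

# Parent process: close the unused write end, then read what the child sent.
os.close(w)
data = os.read(r, 1024)
os.close(r)
os.waitpid(pid, 0)
print(data.decode(), end="")  # -> hello from the child
```

Closing the unused ends in each process matters: a reader only sees end-of-file once every copy of the write descriptor has been closed.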
Copyright Broad Institute, 2013. All rights reserved. This introductory tutorial to constructing pipelines in CellProfiler demonstrates some of the basics of using CellProfiler, such as starting the software, opening images and changing the input and output folders. You will also learn how to create, load and save pipelines, plus pointers for additional help. For more information about the Broad's Imaging Platform and other software tools available, please visit: -Imaging Platform (https://www.broadinstitute.org/node/144) -Software (https://www.broadinstitute.org/scientific-community/software) Transcript: In this short movie, I will demonstrate the basics of using our software, from starting CellProfiler, opening images, changing directories, loading, creating, and saving pipelines, and w...
http://www.salesmasterymag.com | Connect To Be Your Best What if you could accurately predict the sales from your sales pipeline? Today's expert, Dave Kurlan is known around the world for helping sales teams gets the 'lumps of coal' out of their sales pipelines. Sales Mastery connects ambitious sales pros with all they need to be their best. Read the monthly magazine, find help in the directory, watch the interviews, engage in the webinars, and above-all, CONNECT. Connect with the SALES MASTERY MAG: Website: http://www.salesmasterymag.com Linked-In: https://www.linkedin.com/company/sales-mastery-summit Facebook: https://www.facebook.com/SalesMasterySummit Twitter: https://twitter.com/sales_mastery Download the Free App on the iTunes Newsstand or Google Play Store. Special fre...
1- First, you will install TORTOISE SVN, download from here Tortoise SVN Download: https://osdn.net/frs/redir.php?m=xtom_us&f=%2Fstorage%2Fg%2Ft%2Fto%2Ftortoisesvn%2F1.11.0%2FApplication%2FTortoiseSVN-220.127.116.11416-x64-svn-1.11.0.msi 2 - you will create a local folder where all files will be downloaded to: ( name it whatever you want ) 3 - excellent.. now you need to map the directory to L: ( remember to set permissions properly ) 4 - Through TortoiseSVN 'repo-browse' option, connect L: to the svn: subversion URL server: locally: svn://192.168.88.4/repo_lil remotelly: svn://fs3dar.synology.me/repo_lil Provide User and Password 5 - Now you can checkout the project. (this will take several minutes ) 6 - After syncronization is done... you will need to install Python27. Python27 Down...
In this video I will show you how you can read csv file in python and with csv data create multiple users in active directory server. you can download source codes from my github repository https://github.com/vfxpipeline/AutomateActiveDirectory Thanks for watching. Do not forget to leave a comment below. your feedback is very important for me. Please like and share share this video with your friends to spread the knowledge with others. Subscribe VFX Pipeline on YouTube https://www.youtube.com/vfxpipeline Like VFX Pipeline on Facebook https://www.facebook.com/vfxpipeline Download Free Source Codes from GitHub https://www.github.com/vfxpipeline
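The CSV-reading half of that workflow uses only the standard library. A hedged sketch follows; the column names and the printed stand-in for account creation are placeholders of mine, not the video's actual code, which calls into Active Directory instead:

```python
import csv
import io

def load_users(fileobj):
    """Parse rows of user data from a CSV file object into dicts keyed by header."""
    return list(csv.DictReader(fileobj))

# Stand-in for a users.csv file on disk (hypothetical columns).
sample = io.StringIO(
    "username,first_name,last_name\n"
    "jdoe,John,Doe\n"
    "asmith,Ann,Smith\n"
)

users = load_users(sample)
for user in users:
    # A real script would create the account in Active Directory here.
    print(f"would create account: {user['username']}")
```

With a real file you would pass `open("users.csv", newline="")` instead of the StringIO stand-in.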
How to use TFRecords with the Dataset and Estimator APIs in TensorFlow. https://github.com/Hvass-Labs/TensorFlow-Tutorials
Chapter 8 of 16 These videos were developed for PowerShell version 1, but the material is still valid. All training video labs can be found here: https://www.sapien.com/books_training/Self-Paced-Training Learn how to use Windows PowerShell -- whether you are administering Windows, Exchange Server, or any other PowerShell-enabled product. PowerShell MVP and guru Don Jones, one of the world's most experienced PowerShell instructors, guides you through PowerShell's basics -- assuming no prior experience. You will learn about cmdlets, the pipeline, WMI and ADSI within Windows PowerShell, and much more. Don's conversational style, conceptual animations, extensive demonstrations, and real hands-on exercises help you learn quickly and effectively.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583722261.60/warc/CC-MAIN-20190120143527-20190120165527-00310.warc.gz
|
CC-MAIN-2019-04
| 6,352
| 13
|
https://bugs.dolphin-emu.org/issues/7098
|
code
|
Emulator Issues #7098
GBA Connectivity Master Issue
For its time, the GameCube had a very unique feature: using link cables to connect to its handheld counterpart, the Game Boy Advance, for additional game features. This feature was used mostly for extra functionality, transferring data and additional gameplay modes, and most of all, for selling GBAs and link cables.
Usage and information:
For example, Pokemon Colosseum and XD both allowed you to battle in full 3D between two players that had GBAs. The screens kept all your battle options secret from your opponent, likely because they were too lazy to do what Pokemon Stadium 1 and 2 already perfected. Those two games and Pokemon Box also allowed you to transfer Pokemon into your Ruby, Sapphire and Emerald Pokemon games.
Wind Waker and others has the Tingle Tuner, connecting a GBA gets you extra options that enhance the main gameplay mode.
A few games used the GBA connectivity as their main hook. Four Swords Adventures and Final Fantasy: Crystal Chronicles are well known multiplayer games that use the GBAs as controllers and their extra screens for menus, notifications and even gameplay. Pacman VS and the aforementioned Pokemon Box go even further, using GBA connectivity as the sole way to play the games.
Due to latency concerns, it's likely impossible to connect a real GBA to Dolphin. Even if we could, it would require custom hardware for connecting and cause all kinds of headaches. The solution to that, of course, is to use a GBA emulator. As of this post, VBA-M can connect to Dolphin, a couple of the games that use these features work relatively well, and a few more at least connect.
Unfortunately, there are far more games with problems and even more that don't seem to work at all. To complicate matters further, because two emulators are involved, problems could be on both sides of the connection, VBA-M could have problems, Dolphin could have problems... or both!
Speed issues, in order for this issue to be closed, both emulators must run at full speed (or at least theoretically if on a fast enough computer.) Some games as of this posting run < 1 fps when set to default frame limiters. Messing around may improve, but never totally fix. Another example is if it requires one emulator to run far above full speed for the other to reach full speed.
Disconnection issues. It is not acceptable for the GBA Emulator to lose connection to dolphin.
Failure to connect issues. The GBA emulator or Dolphin lock up upon connecting to one another.
Further Information and Data
A total list of games that use the GBA Link Cable can be found here: https://wiki.dolphin-emu.org/index.php?title=Category:Game_Boy_Advance_%28Input_supported%29
A guide to connecting VBA-M to Dolphin (Edited from the Working Guide on the Wiki)
Game Boy Advance connection support is provided via joybus emulation. This requires VBA-M (r947 or newer; the latest should work) and a dump of the GBA BIOS.
1: Launch Dolphin and start the game you intend to play. Get to the point at which it prompts you to connect the GBA; or if the game expects it to be connected all along, just keep the game running.
2: In Dolphin, go to Config => GameCube and change the controller ports to GBA;
Launch VBA-M, go to Options => Emulator and uncheck "Pause When Inactive". Then, go to Options => Link => Joybus options, mark "Enable Joybus Connection" and use default settings (127.0.0.1)
3: After enabling joybus in VBA-M, Dolphin will freeze (don't panic, it's OK!). Now load the GBA BIOS in VBA-M; after the splash screen Dolphin will recognize the joybus link and the game will detect that a GBA was connected. Depending on the game, this can take upwards of five minutes. Enabling turbo mode in VBA-M and Dolphin (no frame limiter) can speed this up.
To connect other GBAs, just open another instance of VBA-M and repeat the last instruction. Remember to unblock Dolphin and VBA-M in your firewall; some firewalls may block the joybus link, leaving Dolphin stuck at the connection screen. You can also refer to this video for more details.
#4 Updated by hayhurstpk almost 7 years ago
What if you used the GC Controller Adapter by Mayflash with the GBA to GC adapter cord to connect the GBA to the PC? In the config for GC controller there is an option for GBA. Would dolphin check the other Mayflash port for the GBA? If not, could it be possible to add in that functionality?
#7 Updated by skidau about 6 years ago
I have made available a test build and am looking for test results. Please test this version out and post the results to the thread below. More information is available in the thread. This WIP build is a Windows build.
#9 Updated by Specs almost 5 years ago
I'm not sure whether this works with specific builds of both emulator but I can't get it to work in 4 player on 4swords with new or old builds. I can connect one virtual GBA with VBAM but after that it doesn't seem to detect the other VBAM windows and they all try to connect to controller port 1.
#14 Updated by Bangaio65 about 1 month ago
Four Swords Adventures+ - in Tetra's Trackers, after finishing a level, you're asked if you want to play another. The GBA reboots into the BIOS but then it seems to never re-establish the connection, so you must close and re-open Dolphin and the GBA emulator to play another level. Happens both with VBA-M and mGBA. [dolphin-master-5.0-13963-x64]
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00403.warc.gz
|
CC-MAIN-2021-21
| 5,363
| 29
|
https://remotedom.com/remote-jobs/staff-engineer-at-testifysec-167
|
code
|
The Elevator Pitch
In a world where software supply chain security is becoming paramount, TestifySec stands at the forefront, safeguarding digital assets. As a burgeoning startup, our mission is to redefine the way organizations approach and implement software supply chain security. As a Staff Engineer in the Platform Development department, you will play a vital role in innovating new and exciting features for our flagship supply chain security platform, contributing to our open source community, and ensuring the architecture and security of our platform.
Our Core Values
- Trust: We believe in the power of trust as the cornerstone of all our engagements, defined by Competency, Consistency, Caring, and Communication.
- Innovation: Our commitment is to solve novel problems for our customers, balancing our focus on value-driven innovation.
- Customer Centric: Our goal is to solve pressing issues for our customers, ensuring that our solutions are tailored to their unique needs.
- Collaboration: We champion internal and external collaboration, valuing the shared ideas that drive our solutions and our engagement with the open source community.
- Empathy: We prioritize the human element in all our interactions, understanding and respecting each individual’s unique perspective and emotional landscape.
- Adaptability: Our foundation is built on responsiveness and the ability to swiftly adapt to the evolving needs of our customers and the industry.
- Architect and contribute to the development of our supply chain security platform using a robust stack including Go, Node, Kubernetes, and Shell.
- Innovate new features and enhancements for our platform, catering to the unique needs of our customers.
- Contribute to our open source community by sharing code and collaborating with other developers.
- Collaborate closely with product teams to ensure the architecture and security of epics and features.
- Participate in iteration planning, stand-ups, code reviews, and user acceptance testing.
Success in the Role:
Performance goals over the first 6-12 months:
In the first 30 days
- Familiarize yourself with TestifySec’s ethos, products, and offerings.
- Engage with your team and understand the product’s trajectory.
- Evaluate the current documentation and training.
- Pair with teammates and practice our software development life cycle (SDLC).
- Contribute to platform fixes or open source software.
In the first 60 days
- Take part in Iteration Planning and team stand-ups.
- Participate in code reviews and contribute features and chores to the platform.
- Pair with teammates to enhance collaboration and knowledge sharing.
In the first 90 days
- Contribute ideas to our Product Planning process.
- Lean into regular duties such as architecture, security testing, and diagramming.
- Share thoughts, ideas, and opinions in company culture circles.
- Write blog posts to accompany completed features and epics.
- Rotate in a customer tour to empathize with our users and gain insights.
- Drive the architecture and security of our platform and features.
- Continuously participate in code reviews and maintain high coding standards.
- Collaborate with product teams to provide high-level requirements and enable innovative solutions.
- Contribute to open source projects and engage with the developer community.
- Practice Test Driven Development and continuously improve skills.
- Write blog posts to share knowledge and contribute to the industry.
- Embrace a culture of learning, adaptability, and passion for the craft of software development.
Team Leadership and Culture
This role reports to Kris Coleman, the Director of Platform Engineering. Kris has over a decade of experience leading high-performing teams in consultancies and healthtech, recently leading the Digital Front Door team at Corewell Health before joining TestifySec. Kris embodies a leadership style inspired by the principles outlined in "Turn the Ship Around," which emphasizes a leader-leader culture. In this approach, we value each team member as a leader in their own right, fostering relationships built on trust and clarity that empower individuals to make informed decisions and unleash their creativity. As a key contributor, you’ll play a pivotal role in cultivating a culture of innovation, collaboration, and agility that aligns with our core values.
Benefits of Working at TestifySec
- Comprehensive health, vision, and dental coverage.
- Remote-first workplace.
- Pioneering role in software supply chain security.
- Dynamic and innovative startup environment.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100489.16/warc/CC-MAIN-20231203062445-20231203092445-00755.warc.gz
|
CC-MAIN-2023-50
| 4,581
| 46
|
https://www.reddit.com/r/linux4noobs/comments/16p5ty/trying_to_find_windows_7_files_on_ubuntu_dual_boot/
|
code
|
Hey reddit, so I recently thought I'd try out my first linux with an ubuntu dual boot on my windows 7 laptop.
I want to access my files (mainly my music) in ubuntu but I have run into some problems.
After googling it, I was lead to try to install Samba and smbfs, but for whatever reason it is not installing.
After aimlessly looking through forum posts and stuff I still havent been able to get samba or smbfs to install, and remain unable to find my windows 7 files.
Is there an easier way? If its not apparent, I have little to no idea what I am doing and could use some advice.
(I am in highschool and posting while in class (heh...) so I wont be able to respond immediately. Thanks for the help!)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865411.56/warc/CC-MAIN-20180523024534-20180523044534-00367.warc.gz
|
CC-MAIN-2018-22
| 701
| 6
|
https://lists.debian.org/debian-user/2020/06/msg00200.html
|
code
|
Re: KDE run Dolphin as root?
On 7/6/20 12:56 pm, Default User wrote:
So I guess I just got spoiled, using the nemo file manager in Cinnamon.
Just right click the Cinnamon desktop, select "Open as root", then use
nemo with temporarily elevated privileges. Then close nemo, and I am
back to the desktop as a regular user again. Easy.
Have you tried nemo (cinnamon's file mgr) in KDE? It'll likely bring in
a bit of gtk stuff, but that shouldn't hurt anything.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446706291.88/warc/CC-MAIN-20221126112341-20221126142341-00027.warc.gz
|
CC-MAIN-2022-49
| 457
| 8
|
https://www.thesequitur.com/best-wave-grease/
|
code
|
Are you looking for the best Wave Grease but don’t know where to start? We have done much research and analysis to present the best Wave Grease available. There are various Wave Grease options on the market, and you can get surprising advantages from these products. They vary in price, quality, size, and feature.
This article will explore some of the top Wave Grease products out there. It comes after going through numerous customer reviews, product reviews, and research into the products' specifications. By the end of this review, you should be able to make an educated buying decision for one or more Wave Grease products.
Best Wave Grease: Top 10 Compared
Top 10 Wave Grease Reviews:
- MOISTURE and HOLD POMADE: This pomade with natural sheen and hold promotes waves and restores moisture, going on light and easy and rinsing out clean; It's formulated for short and brushable hairstyles.
- MAXIMUM WAVEMAKING: With its maximum hold strength, this pomade delivers maximum wavemaking, the perfect styling product for short and brushable hair; It goes on light and rinses out clean!
- STYLING PRODUCTS: We make an array of products for all types of curly hair, including moisture-sealing gels and leave-in sprays, rejuvenating oils, pomades, creams, butters and protective edge savers.
- UNIQUE LOOKS: We help people of color celebrate unique looks and styles with an array of treatments, colors and styling products for all hair types: curly, wavy, natural, relaxed, transitioning and more.
- SOFTSHEEN-CARSON: For over 110 years we have provided beauty to all consumers of African descent with our innovative, tailor-made, superior products and services specially designed for their needs.
- GET THAT PERFECT WAVE PATTERN - The impressive and popular 360 wave hairstyle looks great and always has since the beginning of time. The challenge is that it’s difficult to attain those perfect waves. Allow us to help you rock your favorite pattern.
- GREAT FOR ALL STAGES - Whether you’re a beginner or an elite, just starting or continuing a long period of wolfing, you’re going to be impressed with our product’s performance.
- NO MORE BUILD-UP - If you’re struggling with a pesky residue left behind when you wash, you don’t need to worry anymore. Our water- based wave pomade is easy to apply and easy to wash off.
- SERIOUS HOLD - If your wave pomade isn’t holding your hair down, it’s not your fault. Get a product that is designed for superior hold on all hair types. Tame that crown and hold it down!
- GREATE SMELL - Get the amazing smell of the barbershop, whether you’re at school, home, working out, running a business, or wolfing. If you want professional performance, smell, and results, get Ocean View Deep Waves Pomade.
- Enriched with olive oil, shea butter, jojoba oil and vitamin e
- Perfect for creating deep waves
- Moisturizes without making your hair look greasy or matted
- 【PACKAGE INFORMATION】Our wave pomade kit includes a 4 Oz wave pomade, 2 wave brushes and a silky durag, the perfect combination for your convenience and as a travel kit.
- 【GET THAT PERFECT WAVE PATTERN】The impressive and popular 360 wave hairstyle looks great and always has since the beginning of time. The challenge is that it’s difficult to attain those perfect waves. Allow us to help you rock your favorite pattern.
- 【GREAT FOR ALL STAGES】Easy to spread by beginning and experienced wavers alike,Goiple waves pomade is a must-have addition to your 360 wave essentials. Tame your crown while adding amazing definition and sheen.
- 【NO MORE BUILD-UP】No greasy built-up as with conventional hair grease for black hair. Our wave-pomade for men is 100% water-based, providing extraordinary hold and versatility without the unattractive residue.
- 【NATURAL FORMULA】Our wave creams contains kinds of natural ingredients, such as shea butter, which is great for nourishing the scalp and making it more comfortable.
- Maximum wave defining formula for healthy hair and scalp
- Incorporates the newest and best wave building ingredients
- The more you train your hair the more waves you get
- Rub small amount between hands and massage throughout hair and scalp
- Brush every time after applying for best results
- CREATE EPIC WAVES with Butter Love Pomade for black hair. Our Shea wave-butter enhances 360 waves, offering superior hold and conditioning them so they are softer, shinier, and resistant to breakage.
- ALL-NATURAL FORMULA wave-cream for black men hair includes Shea butter, beeswax, olive oil and hempseed oil. This wave builder texture pomade makes hair more trainable into the ultimate 360 ripples.
- NO GREASY BUILD-UP as with conventional hair grease for black hair. Our Deep Wave-Pomade for men is 100% water-based, providing extraordinary hold and versatility without the unattractive residue.
- EASY TO SPREAD by beginning and experienced wavers alike, Ocean View Deep Waves Pomade is a must-have addition to your 360 wave essentials. Tame your crown while adding amazing definition and sheen.
- MADE IN THE USA using cruelty-free methods, Ocean View is committed to providing the best all-natural-wave-products. Veteran-owned and operated, we are influencing this generation’s outlook on elite waves.
- Thick hair dress, with superior hold
- Use for waving, sculpting and styling
- Best used on short hair
- Brush in Waves is a hair healthy water based “activator” that helps gently make hair more “waveable.”
- Sealed in moisture makes hair easier to wave and this product is essential to top wavers all over the world.
- It can be used as often as desired without buildup and is an excellent brushing aid.
- Never greasy and leaves a nice lasting shine.
- This high quality, safe, effective WAVEBUILDER product is made by us with only the best available ingredients.
- 【 Premium Quality 360 Wave Starter Kit for Men Hair 】 The 360 wave has always been a popular hairstyle in the African American community, and our 360 wave kit allows you to get a satisfying wave look on your own, packages includes 1* wave pomade +2*pocket palm combs +3* silky Du-rag +1* wave cap +1* curved wave brush.
- 【Durable Curved Wave Brush】The brush is made of pure wood, with half curved boar bristles and half synthetic bristles. The curved handle design is more manipulative and will not cause damage to your hair.
- 【 Premium Du-rag Long Tail Wide Straps 】Our release is made of breathable and chunky fabric silk that can be double-tied. Strong elasticity, can fit most people's head size. The outer seam design won't leave a mark on your hair.
- 【 Wave Caps Has Good Compression 】Made from a soft material, this silk wavy cap is breathable, light and elastic. Suitable for most human head sizes, This wave cap fits snugly around the head and over the hair,keeping the waves in place, and can be worn while sleeping to make your wave perfect.
- 【Natural Wave Pomade for Men Strong Hold 】Suitable for beginners and experienced people can be, easy to operate.Our Deep Wave-Pomade for men is 100% water-based, providing extraordinary hold and versatility without the unattractive residue.
How To Choose The Wave Grease
How do you choose the Wave Grease? You must consider many things, such as the brand name, price, and product quality. In addition, you should also consider whether it is suitable for your needs or not.
So how do you choose the right Wave Grease? Here are some tips that you can use to help you find a good product:
1. You first need to consider the product’s brand name. A good brand will always produce quality products, so a product with an established name should be good enough for your needs.
2. You need to consider the product’s price next. A high-quality product does not always mean that it will cost more, but if it costs too much, there must be something wrong, or nobody will buy it.
3. The final thing you need to look at is how well suited this item is for your needs and requirements and how well suited it is for others with similar requirements.
What you Should Keep in Mind When Buying Wave Grease
When shopping for Wave Grease, there are several things to consider. You need to think about the quality of the product, the price, and even how much it will benefit your life. However, you also need to keep these factors in mind:
Purchase from a reputable Brand
If you have your heart set on a particular brand, that brand's product is likely the best fit for you. For instance, if you wish to buy a Samsung S9 phone, shop for it online or at an official Samsung store.
Read reviews from others who have bought the product before. You should check what other customers have said about a product before buying it online, as this will help you determine whether it is worth buying. If there are many positive reviews and no negative ones, most people are happy with their purchase and would recommend it to others.
Seal of approval
Look for the seal of approval. For example, look for the Energy Star seal if you’re shopping for a new printer. It indicates that the printer uses less energy than other models in its class.
It’s important to do your research before buying any new product. For example, check the minimum requirements listed on each model’s product page if you’re looking at laptops and want one with a larger hard drive. If they’re not listed, ask customer service or call the manufacturer directly before making your purchase.
Make sure the website offers free shipping if you're ordering something online, so that you don't have to pay anything extra once you make your purchase. If the website doesn't offer free shipping, consider buying from a different site that does.
Many retailers offer extended warranties covering malfunctions, materials, or quality defects. If a product has this kind of coverage, it’s worth paying extra money upfront so that you don’t have to pay again later if something goes wrong with your purchase.
Ultimately, our Wave Grease reviews are designed to help you make a more informed purchasing decision. It's much easier to decide when you know exactly what to look for and what your options are. We hope that this review article has helped. If you're considering purchasing Wave Grease, we strongly recommend you take a closer look at our top 10 Wave Grease reviews. Based on our research, these excellent products are well worth the money and should be able to meet your needs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710734.75/warc/CC-MAIN-20221130092453-20221130122453-00183.warc.gz
|
CC-MAIN-2022-49
| 10,600
| 62
|
http://www.turinglab.co.uk/
|
code
|
We run coding classes and provide learning resources to thousands of children across the UK. Our programmes balance educational attainment with practical and engaging coding projects - giving children a skill-set that they can use throughout their lives and careers.
Our creative computing classes run at local universities in London (Imperial Codelab), Manchester and Leeds. The termly courses are designed to introduce children to core concepts that allow them to programme their own games, animations and web applications.
Children learn using our learning platform, developing their problem-solving skills, creativity and computational thinking. Its resources have been designed in collaboration with leading computing educators and Ada, the National College for Digital Skills. Further support is offered by talented students from leading universities.
Impact in education is greatest when it can reach everyone. For that reason, we have been developing our online learning platform to better suit the needs of the classroom.
Interactive explanations and quizzes introduce curriculum mapped concepts. Step by step tutorials then help children apply these to mini projects, before being challenged to apply what they have learned by programming a game, animation or web application.
All of our projects have graphical outputs, which helps to make the materials more accessible to children who wouldn't typically have engaged in programming. Many surprise us with their creativity and enthusiasm.
Teach children to code (and get paid for it) at our classes in London, Manchester or Leeds.
We work with experts in the tech and education sectors to provide the best learning experience.
Work with us to tackle the digital skills gap and improve diversity in the tech sector.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693866.86/warc/CC-MAIN-20170925235144-20170926015144-00593.warc.gz
|
CC-MAIN-2017-39
| 1,784
| 9
|
https://community.grafana.com/t/strange-scaling-of-y-axis-when-having-absolute-numbers/33613
|
code
|
I am a user here and have no access to the Grafana panel's configuration, but I think this behaviour is strange, so I want to report it here and, if someone agrees, maybe file it as a bug.
In one specific graph, I am presenting a number of records. It is an absolute (integer) number with no decimal places. The lowest value is 9860, the highest is 9872.
However, Grafana displays this with a "K" suffix. Moreover, it adds more decimal places than there are actual digits and invents a ".5" that is not really there, so the axis labels read "9.86000 K" to "9.87250 K", which is questionable at least.
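The quoted labels can be reproduced with a minimal Python sketch (an illustration of the arithmetic only, not Grafana's actual code): a formatter that scales values into the "kilo" range with a fixed number of decimals, combined with a "nice" fractional tick step such as 2.5, yields exactly the reported "9.86000 K" and "9.87250 K" for integer data in [9860, 9872].

```python
# Hypothetical sketch (NOT Grafana source): how a fixed-decimal
# SI-prefix formatter plus a fractional "nice" tick step can show
# ".5" values and five decimals for purely integer data.

def si_kilo_label(value, decimals=5):
    """Scale into the kilo range and print a fixed number of decimals."""
    return f"{value / 1000:.{decimals}f} K"

def ticks(lo, hi, step):
    """Generate axis ticks; a step like 2.5 invents values such as 9862.5."""
    out, t = [], lo
    while t <= hi:
        out.append(t)
        t += step
    return out

if __name__ == "__main__":
    for t in ticks(9860.0, 9872.5, 2.5):
        print(si_kilo_label(t))
```

With these assumed settings the first and last labels match the report: the integer range is stretched to fractional tick positions, and the fixed five-decimal format pads the rest with zeros.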
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00131.warc.gz
|
CC-MAIN-2021-43
| 609
| 3
|
https://pythonawesome.com/10-best-metal-laser-engraver/
|
code
|
When it comes to buying a Metal Laser Engraver, you will find many brands selling similar products. This can be confusing, because you will not know which of these products will meet your needs and desires.
After a lot of extensive research, we’ve put together a collection of the best Metal Laser Engraver that are currently available on the market.
Best Metal Laser Engraver
Top 10 Best Metal Laser Engraver Reviews
- Fast & Accurate - 36000mm/min max speed, frame/graphic preview speed up to 225000mm/min; dual-lens galvanometer technology makes this 60W machine power, 5W laser power gadget run faster than those with the same power; the 0.05mm compressed spot creates more detail, and 3 resolution options (1k/1.3k/2k) provide the perfect detail level. Laser class 4.
- 360° Rotary/Mobile Engraving - With the rotary attachment (the 3rd axis), LaserPecker 2 can do 360° rotary engraving on cylindrical surfaces such as tumblers and pencils; mobile mode engraves a max size of 100*2000mm. Its small size and handle make it portable for handheld engraving, easy to take on-the-go.
- Easy To Use - Plug and play, no assembly required; connect with a phone or PC, set up in seconds, and start your handcraft projects. Keep an 11cm (4.33in) laser distance and set the correct power/depth for different materials. Supported file formats: jpg, svg, png, bmp, G-code, CAD, AI, CDR, dwg and more.
- Functional & Widely Used - LaserPecker 2 CAN engrave: paper, cardboard, wood, leather, acrylic, oxidized/painted aluminum, steel, etc.; CAN'T engrave: gold, silver, copper, brass. Cutting: wood, paper, leather within 5mm thickness. Reflective materials: light-colored or transparent material will reflect the laser; paint/spray it black with a marker and wipe it off when the job is done.
- Safety & Warranty - LaserPecker 2 automatically stops on vibration, tilting and over-temperature. A blue-light filter cover protects your eyes from discomfort or dryness. A preset password and emergency stop button keep everything under control. This machine comes with a 12-month warranty.
- Portable Laser Engraver: Atomstack P7 laser engraving machine design for more easier to carry, but still retains up to 5-5.5W laser power can cut thick wood 1/3″, acrylic and engrave stainless steel and ceramic etc materials.
- Ultra-Fine Compression: The upgraded high-performance fixed-focus lens allows engraving without adjusting the focal length; the laser focus area is about 50% smaller than other lasers', making your laser engraving artworks more refined.
- 5 Min Install: All-aluminum alloy anodized structure design makes the machine more durable, the portable all-metal structure and 85% pre-assembly is very friendly for beginners, which can be complete the installation within a few minutes.
- Software Compatibility: Supports software such as LaserGRBL (free) on Win XP / 7 / 8 / 10 / 11 and LightBurn (paid) on the Mac OS system; engraving file formats supported include NC, BMP, JPG, PNG, DXF, etc.
- Excellent and Portable: The P7 laser machine upgrade with integrated screw rod and stepping motor, helps movement of the laser more precise and excellent engraving capabilities, also more convenient than other traditional machines.
- First Choice for Creative: Atomstack laser engraving machine can engrave various materials like metal, wood, bamboo, plastic, leather PCB board, aluminum oxide, lacquered metal, etc. You can create your own engraved work for your favorites, realize your imagination.
- Excellent Laser Engraving Technology: A5 Pro laser cutter with 5~5.5w output power with 0.19mm ultra-fine compression laser focus area, the engraving accuracy reaches 0.01mm, which makes more refined engraving process, it makes easy cut on wood and acrylic.
- 180° Panoramic Visible:UV filter-retardant acrylic on laser module can filter 97% of the ultraviolet light, protecting your eyes and people around you,and special design with 180° panoramic viewing area, can be enjoy to watching your artworks producing.
- High-Performance Structure Design: Atomstack A5 40W machine design with integrated screw rod which can make the fastest running speed reach 11000/min, It is more stable and agile when engraving complex patterns, making it easier for you to achieve better results.
- Extensive Compatibility: The machine can be compatible with various mature engraving software, such as LaserGRBL(free), LightBurn(paid), LaserGRBL support Windows XP/7/8/10/11, and LightBurn support Mac OS and windows, engraving file format support NC, BMP, JPG, PNG, DXF etc.
- ★【A5 Pro 40W laser engraver, New eye protection design】The laser protective cover plays a very good role in protecting your eyes, filtering 97% of the ultraviolet light, and you and the people around you do not need to wear goggles. You can also watch laser engraving. Reduce the cost of your goggles and the inconvenience of wearing goggles. Once you engrave something, you are so hooked.
- ★【Sturdy and easy-to-install structure design】The all-metal structure design makes the machine more robust and durable, while improving the accuracy of engraving. The whole structure is designed for quick installation, and the installation can be completed in 10-20 minutes.Precise scale lines axis, The large area is perfect for engraving any pattern, Laser engraving machine that beginners can use.
- ★【Ultra-Fine Compression, Upgraded high-performance fixed-focus laser】Engraving can be done without adjusting the focal length; laser power is 5W-5.5W, ultra-fine laser focal area is reduced to 0.23mm^2, high-density laser can easily cut 12mm thick wood, black acrylic; can directly engrave smooth stainless steel metal, ceramics.
- 【Extensive compatibility and widely Use:】The machine can be compatible with all kinds of mature engraving software, such as LaserGRBL(Free), LightBurn, also support MAc system (use LightBurn). It can be carved on wood, bamboo, cardboard, plastic, leather, PCB board, aluminum oxide, Metal(non-reflective plating and lacquer) ceramic, pebbles, and acrylic. Carve your own logo for the items you like, use your imagination to DIY, cut and assemble.
- ★【1 year Warranty and technical customer service support】We are ATOMSTACK manufacturer. We provide customers with one year of free parts replacement and permanent professional technical customer service. If you have any problems during use, please contact us to help you. Please rest assured to buy.
- Fast & Accurate –Laser runs Max speed 36000mm/min, 0.05mm laser spot create more details,plus 3 resolution options 1k/1.3k/ 2k provide perfect details level. Frame/Graphic preview speed up to 225000mm/min. Class 4 laser engraving machine.
- 360° rotary/mobile engraving – With the 3rd Axis(Roller),LaserPecker 2 can do 360° rotary engraving on cylindrical, curved surfaces such as tumbler/ pencil, mobile engraving with powerbank conveniently,max 100x2000mm engraving size. With handle and 960g light weight,ideal for handheld engraving, convenient to take on-the-go.
- Easy to use – Plug and play,sets up in seconds and works with smartphone/PC to transfer images, keep 11cm Laser Distance and correct power/depth, you can begin engraving with a few clicks.
- Functional & Widely Used – LaserPecker 2 CAN engrave: paper, cardboard, wood, leather acrylic, Oxidized/painted aluminum, steel etc,. CAN’T engrave: gold,silver,copper,brass or reflective materials. Cutting: Wood, paper, acrylic, leather thickness within 5mm.
- Safety&Warranty – Protective shield,goggles,over-heating protection, password lock, motion detection, laser indicator and overheat shut down make it super safe to use. This machine with a 12-month warranty.
- 【Huge Upgrade】: TTS-25 is improved with an ultra-fine 0.172*0.038mm spot, 20W electric power, 10000mm/min engraving speed, 2.5W laser output power and 0.01mm engraving accuracy for precise and fast engraving. It can precision-engrave smooth stainless steel, paper, leather, bamboo, sponge paper, acrylic, glass, etc.
- 【Extensive Compatibility & Uses】- Compatible various mature engraving software, such as LaserGRBL(Windows), LightBurn, Benbox, GrblController, LiteFire, supports MAC system, supports Windows XP/7/8/10 and file format like NC, BMP, JPG, PNG, DXF,GCODE etc. Suitable for wood, bamboo, acrylic, paper, leather, plastic, metal, ceramic, etc.
- 【Safety Protection Design】 – The lightweight filter hood features a modular magnet-attached design, easily removable, with superb suction for enhanced robustness and greatly simplified design and installation complexity. Laser filter cover can filter 98% of UV light to protect your eyes, no need to wear goggles to watch laser engraving.
- 【More Easier and Safety Control 】 – The High Speed ESP32 Smart Module integrate WiFi and Bluetooth functionality, makes it more convenient to use by supporting App connection. Equipped with a power switch to easily turn on, or turn off the machine in case of emergency.
- 【Precise Scale Lines & Expandable Engraving Area】 – The X-axis and Y-axis have precision scales for quick measurement of engraved objects, 300x300mm engraving area can meet your various engraving needs. In addition, by extending the frame axis, the engraving area can be extended to 430x400mm. This meets the needs of larger area engraving.
- Thicker Cuts with Higher Speed: The xTool D1 laser engraver has 60W machine power and 10W output power. Adopting the world’s first dual laser head technology, the xTool D1 laser engraving machine is capable of cutting a 10mm wood board and a 5mm black acrylic board in ONE PASS, at a speed up to 10,000mm/min.
- Finer Details, Ultra-fine Compression Spot: The compressed spot of the xTool D1 laser engraver for wood and metal is as tiny as 0.08*0.08mm, but no less powerful, allowing for finer laser engraving and cutting lines down to 0.06mm with fewer burn marks and impeccable details. The xTool D1 jewelry making engraver machine can also directly craft smooth stainless steel/metal.
- Robust Structure Design Meets Higher Accuracy: The all-steel wheel and shaft and enclosed synchronous belt allow for motion accuracy of up to 0.01 mm and repeatable positioning accuracy of less than 0.02 mm. The robust structure design makes the xTool D1 high power laser engraver more stable, with less noise and a longer service life. The upgraded high-performance fixed-focus laser is easy to adjust, just by pulling the lifting lever up and down.
- Supports Cylindrical Object Engraving & Open-Sized Working Area: The xTool D1 laser cutter and engraver machine can engrave glass, tumblers and other cylindrical objects with diameters from 3mm to 198mm when using the rotary attachment. By adding risers to the D1, the already large 17*16 inch working area opens up more possibilities; a skateboard, baseball bat, or large-sized painting can be processed through a raised D1.
- Wide Compatibility and 3-way Connection: The xTool D1 portable jewelry making laser engraver is compatible with LightBurn and our beginner-friendly software Laserbox Basic, which is also available on mobile devices. You can transfer data via Wi-Fi, USB cable, and TF card. In addition, we have a professional US technical team and support website to solve all your problems within 24 hours.
- Laser Power: 50W+F-Theta Lens(Marking Area): 300*300mm+80mm Rotary Axis+DM542S Driver
- Laser Source: JPT LP-50; Repetition Rate Frequency: 1-600 kHZ (adjustable), Pulse Duration: 200ns (fixed and not adjustable), JPT LP Series supports 1~3 colors( Affected by environment and material), if you require more, please refer to JPT MOPA M7 source machine ASIN: B09VD9K6XQ
- Galvanometer: Cloudray M102-Galvo; Lens: Cloudray F-theta lens; Software: EZCAD2.0 LITE, Supports Win7/8/10/11 and PLT, BMP, DXF, JPG, TIF, AI, PNG, JPG, etc formats. High-Rate of Electrical-Optical Conversion: up to 70%; Cooling Method:Air cooled;
- Applicable Materials: Platinum, Tungsten, Titanium,Carbide Nickel, Carbon Steel, Aluminum, Stainless Steel, Brass, Copper, Gold, Silver, metals,etc. Also some of nonmetal such as nylon, light button, ABS, PVC, PES,etc.
- 2 Years Warranty; Easy Operation, User Manual and Operation Video sending together with the machine. Remote Assistance and backup software is available. Amazon prime shipping ,if stock run out,will ship by DHL/UPS around 5 to 7 days delivery.
- 【90W Effect Laser Beam Shaping Technology】SCULPFUN S9 laser engraver uses the latest 5.5W laser beam shaping technology diode laser, it has an ultra-fine 0.06*0.06mm sharp laser focus, with super cutting penetration and precision, he can cut up to 15mm thick wood, 10mm acrylic, engraved ceramics and stainless steel.
- 【Solid & Easy-assembly Structure Design】The Laser engraving machine full-metal structure design makes the machine extremely solid, and it improves the accuracy of engraving. The whole structure is designed for easy-assembly, and the assembly can be completed in 10-20 minutes. The structural frame is very durable and remains open to upgrade. During use, you can keep the frame and replace with other new lasers. Or replace longer metal beams to expand the engraving area.
- 【Safety Protection Design】The laser cutter and engraving machine is equipped with a very convenient power switch, which is not available in others. The laser filter cover filters 98% of the uv light to the eyes, you can watch laser engraving without wearing goggles and prevent animals from catching the laser spot. Comes with a steel pad can protect the table from laser damage, isolate fire hazards.
- 【Wide Compatibility】SCULPFUN S9 laser cutting machine compatible with various mature engraving software, such as LaserGRBL, LightBurn, Benbox, GrblController, LiteFire, supports PWM mode engraving, supports Windows system, Apple system (LightBurn), and engraving file format Support JPG, PNG, DXF, SVG, G-code, NC, BMP, etc. (You can view the software operation tutorial in the video)
- 【Fast Focusing & Quality Assurance】The S9 laser engraving combines a fixed focus lens and a sliding design. You only need to slide the laser and tighten the screws to complete the focusing. This makes it very easy to use the laser. If you have any problem with our product, please freely contact us. We provide 12 months of product warranty and 24×7 friendly customer service.
- 【Quadruple Lens Double Compression Spot】The A5 M50 laser cutter and engraver machine has quadruple lens compression Technology. 5-5.5W Optical Power. Fixed Focus Compressed Spot, spot smaller and the energy more concentrated, and more refined engraving products. It can not only engrave but also cut 0.8″ of the wood and 0.6“ acrylic.
- 【Create Your Own Logo】The A5 M50 laser engraver for home can meet your daily needs. It can engrave on wood, bamboo, paper, plastic, leather, PCB board, aluminum oxide, non-reflective plating and lacquered metal, ceramics. You can engrave your own logo for your favorite items.
- 【Perfectly compatible system】The laser engraving machine can be compatible with LaserGRBL (free, support Windows) or LightBurn (paid, support Windows, Mac OS and Linux). Engraving file format support: SVG, NC, DXF, BMP, JPG, PNG, etc.
- 【Assembly is quite easy】Everything is neatly packaged and the smaller pieces are pre-separated into steps that match the instructions. Each assembly step has its own separate package of the fittings used in that step. All parts are marked with step numbers, so there is no confusion. The entire setup took about 30 minutes.
- 【Quality Assurance and After – Sale Service】We have a professional customer service team. We provide the customers a year warranty , parts replacement and permanent technical support. Your concerns are answered within 24 hours. We are so confident that you will love this laser engraver.
In today’s market, where the same type of product is available from almost every brand, finding the right Metal Laser Engraver is a challenge. Every purchase requires research. Before you buy anything, you need to answer the following questions:
- What are the features of the best Metal Laser Engraver?
- How to find the right Metal Laser Engraver within your budget?
- What is the average price of a good Metal Laser Engraver?
Our data analysis platform helps you answer these questions using a state-of-the-art algorithm. We analyze thousands of reviews from real users to generate a usability score for each brand. This usability score is unbiased and powered by people’s experience with the product. We then provide you with an unbiased list of the 10 best affordable Metal Laser Engravers to buy. Our goal is to make your decision-making easy and your shopping experience fun.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00433.warc.gz
|
CC-MAIN-2022-40
| 17,184
| 59
|
https://community.oracle.com/customerconnect/discussion/31600/taleo-learn-integration-with-oracle-fusion-query
|
code
|
Taleo Learn Integration With Oracle Fusion - Query
Summary: Our Fusion cloud system is integrated with Oracle Learn based on the advanced batch integration process. It has been working fine so far.
Recently we changed the SFTP server password in Fusion, and it broke the integration between the two systems. We would like to know the process for updating the new SFTP password on the Learn side.
Please do the needful.
Thanks in Advance.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648858.14/warc/CC-MAIN-20230602204755-20230602234755-00351.warc.gz
|
CC-MAIN-2023-23
| 487
| 5
|
https://www.cnclabs.com/maps/generals/zerohour-missions.aspx?tags=6
|
code
|
!=!=!=!=!=!=! Please watch the mission briefing before playing: https://youtu.be/DpGtOdVuM7A !=!=!=!=!=!=!
=== Operation Kihill Beach V2 ===
-Mission map for Zero Hour
For optimal gaming experience it is recommended that you wear headphones, install the gentools addon and use a 16:9 or 16:10 resolution.
Feel free to edit and share my maps with whoever you'd like, but please, if you do, write your changes to a changelog.txt file.
Make sure to play the Normal difficulty first, Hard and Brutal have a tendency to crash mid-game.
If you are going to record the gameplay of this mission, please include the mission briefing video in your video at the start!
Also if you record, send me the video link, I'd love to watch. (contact info below)
------ KNOWN BUGS ------
Counters might not update to 0 even when the actual count is 0.
Game crashes may occur for unknown reasons in part 2 so make sure to save the game often.
Loading part 2 saved games can give errors/crashes if the save has been made near the end of the mission.
Invisible laser infantry.
Enemies will sometimes get stuck in a path and cause massive traffic.
Using 4:3 resolution may cause terrain rendering issues.
------ Thanks ------
Beng and Acidbrain for creating some of the code used in map.ini.
The whole cnclabs community for doing their best at helping others.
JCD Gameanater for helping out with the story, landscape, terrain, some map.ini code, playtesting and feedback.
Unknown Editor for playtesting, giving feedback, fixing the barracks icons.
PyroMusical, patrikb42, ReVeNGe, Lerosnn, Hijynks for playtesting and feedback.
M.P from SWR Productions for helping out with map.ini and modding.
Special thanks to John Megacycle and cncHD for being two awesome dudes!
(let me know if I forgot to add someone here as it is quite hard to keep track of so many people)
------ Contact me ------
Need help? Have a suggestion to make or feedback to give? Found a bug?
Or message me directly on Discord SkyMix_RMT#2570
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657167808.91/warc/CC-MAIN-20200715101742-20200715131742-00584.warc.gz
|
CC-MAIN-2020-29
| 1,976
| 27
|
https://feedback.telerik.com/aspnet-ajax?typeId=2&listMode=Recent&statusId=4&categoryId=594
|
code
|
RadSpell uses a RadWindow for its dialog which, in turn, uses RadFormDecorator. Thus, RadSpell should expose its RenderMode property in order to allow the developer to make the popups consistent on the page (i.e. avoid mixing modes which is not supported) and to avoid styling issues with the dialog.
Currently the following web.config setting can be used to change the rendering mode of all RadWindows in the application, including the ones used by RadSpell:
<add key="Telerik.Web.UI.Window.RenderMode" value="Lightweight" />
and this one for the RadFormDecorators:
<add key="Telerik.Web.UI.FormDecorator.RenderMode" value="lightweight" />
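Putting the two settings together, and assuming (as is conventional for these keys) that they belong in the appSettings section, the combined fragment would look like:

```xml
<configuration>
  <appSettings>
    <!-- keys as quoted above; placement in appSettings is an assumption
         based on the usual convention for Telerik render-mode keys -->
    <add key="Telerik.Web.UI.Window.RenderMode" value="Lightweight" />
    <add key="Telerik.Web.UI.FormDecorator.RenderMode" value="Lightweight" />
  </appSettings>
</configuration>
```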
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194171.48/warc/CC-MAIN-20201127191451-20201127221451-00447.warc.gz
|
CC-MAIN-2020-50
| 640
| 5
|
https://www.nelsonmedicalsupplies.com/donation-program
|
code
|
With every purchase you make, we're proud to donate 50 cents.
YOU SHOP, WE GIVE
You Shop, We Give.
As a company with social responsibility, we have the freedom to support causes we believe in and to impact the world in a positive way. Giving back is important to us and we want to share that passion with you, our customers. Through our Better to Give program, we've been able to build a better business by connecting our customers with non-profit organizations around the globe. With every purchase you make, we're proud to donate 50 cents to a Better to Give partner of your choice.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487616657.20/warc/CC-MAIN-20210615022806-20210615052806-00267.warc.gz
|
CC-MAIN-2021-25
| 583
| 4
|
http://www.carolinahuddle.com/boards/profile/5147-jackson113/?tab=reputation&app_tab=gallery&type=given
|
code
|
jackson113 replied to Jangler's topic in The Lounge
Was listening to this episode, he was being vague as hell with his answers. Then he would say stuff like in between October 4 and November13 would be a good time. Trying to hide the fact he was being vague.
Where I work, some of the delivery drivers get paid more than the guys in middle management. I can't look down on someone who works. For that matter I can't look down on someone trying to feed their family.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274985.2/warc/CC-MAIN-20160524002114-00202-ip-10-185-217-139.ec2.internal.warc.gz
|
CC-MAIN-2016-22
| 464
| 3
|
http://www.neowin.net/forum/topic/1121306-pcie-card-or-dvd-rw-drive-wont-install/page__pid__595339580
|
code
|
The DVD-RW drive has been installed before. I formatted the HDDs & just didn't connect it up again as i have a blu-ray drive in there too which i was using.
Bought a PCIe card as i needed another SATA slot. This one to be exact: http://www.ebuyer.co...r-card-pexsat32
Anyway, i've connected everything up & the DVD-RW drive opens & closes but it's not recognized in My Computer at all.
In addition to this, the PCIe card installation manual says in Windows 7, it'll be listed under the "IDE ATA/ATAPI controller" category as "Standard AHCI device". As you can see, it doesn't mention Standard AHCI device.
How to get this working?
Edit: I forgot to add - when i booted into Windows, it just said it needed to be restarted, but this didn't fix the problem.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698090094/warc/CC-MAIN-20130516095450-00062-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 751
| 6
|
http://www.dzone.com/links/cookies_may_disappear_but_privacy_isnt_coming_back.html
|
code
|
In part one of our series on password security we looked at why we should be using bcrypt. In... more »
JAVA EE: Data Source Architectural Design Patterns - Playlist
We've just released Android Studio 1.0 Release Candidate 2 to the canary channel.... more »
Nginx provides http rewrite module that allows you to do a lot of stuff with incoming request... more »
MAKE SURE YOU ARE NOT MISSING THIS BLACK BOX TESTING FRIDAY FROM BUGHUNTRESS!
New version of our HTML5 PDF Editor has been released with some very useful and attractive... more »
Discover best practices and the most useful tools for building the ideal integration architecture.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931011456.52/warc/CC-MAIN-20141125155651-00046-ip-10-235-23-156.ec2.internal.warc.gz
|
CC-MAIN-2014-49
| 643
| 7
|
https://planningbrandon.wordpress.com/2013/02/07/feb-2-wildlife-workshop-new-photos/
|
code
|
February 7, 2013 by Jeff
Many thanks to Jaime Lee at the Rutland Regional Planning Commission for attending our wildlife-corridor workshop last weekend and documenting the event. The first image below is a compilation of the maps we worked on to orient everyone to the corridor.
The dots represent places important to attendees. A generalized overlay of the Wildlife Corridor itself is faintly visible at the bottom.
Paul Marangelo of The Nature Conservancy explains the importance of Brandon as a critical link in the wildlife corridor.
A few participants gather around a map, making friends and getting oriented to the wildlife corridor.
Kate McCarthy of VNRC explains regulatory and non-regulatory tools for conserving the corridor.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648431.63/warc/CC-MAIN-20180323180932-20180323200932-00651.warc.gz
|
CC-MAIN-2018-13
| 735
| 6
|
https://readingbetweenthedunes.blog/tag/editing/
|
code
|
Make your setting a character.
Writing is Rewriting
So how do you edit your book? Each author has their own method of madness, but many would agree that these steps are critical:
Beta readers are meant to edit your work. They can focus on different things, depending on their skill level and your needs/desires. But one thing's for sure: a good beta reader is worth their weight in gold.
Revision is re-envisioning
There's nothing akin to the agony of editing your book. This suffering goes beyond the whole "kill your darlings" because, at least for me, I'll gladly kill my darlings if it means saving my book.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00273.warc.gz
|
CC-MAIN-2023-14
| 611
| 6
|
https://phpconference.com/agile-devops/a-practical-introduction-to-kubernetes/
|
code
|
More talks in the program:
11:45 - 12:30
Kubernetes is an open source system for automating deployment, operations, and scaling of containerized applications. It’s one of the promising options you have for deploying your container-based applications to the Internet. In this session we’ll take a look at the concepts of Kubernetes and then go through all the steps necessary to launch and maintain a real-world PHP application in your own Kubernetes cluster.
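A minimal sketch of what such a deployment might look like (the image name, labels, and port are placeholders, not taken from the talk):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app            # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: php-app
  template:
    metadata:
      labels:
        app: php-app
    spec:
      containers:
        - name: php-app
          image: example/php-app:1.0   # placeholder image
          ports:
            - containerPort: 80
```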
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592650.53/warc/CC-MAIN-20180721164755-20180721184755-00285.warc.gz
|
CC-MAIN-2018-30
| 457
| 3
|
https://techrise.nl/job/java-application-engineer/
|
code
|
The Atradius Group provides trade credit insurance, surety and collections services worldwide, and has a presence through 160 offices in 52 countries. The products offered by Atradius protect companies around the world against
the default risks associated with selling goods and services on credit. At Atradius, we believe in personal development and the Growth Mindset. Our Culture is based on teamwork, reliable accountability, constantly improving and unrivalled service.
As an Application Engineer, you will provide the technical skills to create and validate effective solutions in the context of the Atradius application ecosystem. Application Engineers design, code and automate the test of our technical solutions, choosing the most effective ways of working and implementation in accordance with our Architecture. The Application Engineers work in Asset teams in a DevOps way, closely co-operating with Product Owners, Asset
Owners, Analysts and internal and external Engineers to manage and implement the lifecycle of the Asset as well as running and operating it.
In this position your key responsibilities will be:
What do we offer?
Equal opportunities for all
The success of our organisation stands with the quality of our people and the ideas they have. Insights and innovative solutions for our customers are the result of an interplay of cultures, knowledge and experience. That is
why diversity is extremely important to Atradius. To ensure that all colleagues within Atradius can develop their qualities, we promote an inclusive culture in which everyone feels involved and valued. We encourage and welcome
everyone to apply to our positions.
I am Atradius! – Do you want to know who we are?
Get to know Atradius colleagues in this video: https://www.youtube.com/watch?v=NnsgT04OpTU&t=4s
Interested? Hit the APPLY-button and we will get back to you shortly.
Acture activates. Will you activate along with them? As a Software Developer in Utrecht you are responsible for maintaining and further...Apply For This Job
Our client is a prominent algorithmic trading firm with a focus on the revolutionary digital asset markets. Their mission is...Apply For This Job
Our client is looking for an experienced Rust Developer, you will be responsible for converting their trading engine to Rust,...Apply For This Job
As a back-end developer at our client, you will be involved in the development of software systems for both internal...Apply For This Job
We are a team of highly skilled software engineers and computer scientists with a passion for artificial intelligence, speech-to-text, and...Apply For This Job
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00024.warc.gz
|
CC-MAIN-2024-10
| 2,624
| 18
|
https://www.telerik.com/maui-ui/scatter-area-chart
|
code
|
UI for .NET MAUI
The Cartesian Series for .NET MAUI visualizes the ScatterArea Series as the area enclosed by the coordinate axes and straight line segments that connect the series data points. The ScatterArea Series inherits from the ScatterPointSeries class and also requires both Chart axes to be Numerical Axes.
The Numerical axis chart type is an indispensable part of the Cartesian coordinate system. The chart type calculates the coordinate of each data point on its actual numerical value this point provides for the axis.
See the .NET MAUI Charts documentation: Numerical axis
The Telerik UI for .NET MAUI Charts Legend feature displays a set of items, which correspond to the chart content making it easy for you to provide descriptions for the series visualized within the control.
You can use annotations in your Telerik UI for .NET MAUI Charts whenever you need to highlight certain areas or points on the plot. You can easily define annotations on any point of the plot area and customize their appearance.
See the .NET MAUI Charts documentation: Annotations
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100912.91/warc/CC-MAIN-20231209134916-20231209164916-00336.warc.gz
|
CC-MAIN-2023-50
| 1,076
| 7
|
http://forums.macrumors.com/archive/index.php/t-82199.html
|
code
|
View Full Version : Extending Wireless Networks?
Jul 30, 2004, 05:13 PM
I don't understand the signal that a wireless base station sends, so I don't understand how the extending capabilities of an Airport Express function. When I sit at my desk upstairs, I get one to two bars from the Airport Extreme base station downstairs. It seems to me that if I plug my new Airport Express base station in up here, it'll only be picking up one or two bars and it is only extending a degraded network. I'm sure I'm not understanding this correctly, because that seems nearly useless. Can someone explain to me in layman's terms what's going on?
Jul 30, 2004, 10:56 PM
FWIW, I just noticed that when enabling the wireless bridging function it warns that it may degrade the overall quality of the wireless network. I guess that limits how far you can bridge...
I still don't understand how the whole bridging thing can work though. A crappy signal is a crappy signal no matter what picks it up or how loud it repeats it.
Jul 30, 2004, 11:35 PM
I'm sure I'm not understanding this correctly, because that seems nearly useless. Can someone explain to me in layman's terms what's going on?
The idea is that if your computer and your base station are too far apart, you would install the Express somewhere about half-way between the two. That way, it's in adequate proximity to talk to each, even if they're not close enough to talk to each other.
In light of this, installing the 'bridging' device right next to where you're going to use your computer would not be very helpful (they're both seeing the same signal anyway, as you point out).
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500822407.51/warc/CC-MAIN-20140820021342-00243-ip-10-180-136-8.ec2.internal.warc.gz
|
CC-MAIN-2014-35
| 1,633
| 10
|
https://github.com/SpaceMonkeyInc/htc2015/blob/master/utah/README.md
|
code
|
Utah is an unbelievable state. Not only does Utah contain 5 wonderful national parks, enable fantastic skiing and mountain biking, host world-renowned movie festivals, and offer a world-leading standard of living and livability, it has a pretty interesting shape. Let's make a game out of the Utah-shaped pentomino.
The game board is an n×n×n cube. Two players take turns filling in one of the cells in this cube, like some 3D Tic-Tac-Toe. But instead of getting n cells in a row, each player is trying to be the first to make a Utah-shaped pentomino out of their filled cells. Any orientation, rotation, or reflection is allowed, with the exception of diagonal (slanted) moves. A diagonal Utah-pentomino is not an allowed shape.
Here are some examples of layouts a player might try to play for.
X__ ___ ___ XX_ ___ ___ XX_ ___ ___
Above is a cube. The left 3x3 square is the bottom of the cube, the middle 3x3 square is the middle layer of the cube, and the right 3x3 square is the top layer of the cube. On the bottom layer, the Xs form a Utah-shaped pentomino.
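The layer encoding above can be read into coordinates with a few lines of Python. The helper names and the single-orientation check below are illustrative, not part of the contest materials; a full solution would also test every rotation, reflection, and plane:

```python
UTAH = {(0, 0), (1, 0), (1, 1), (2, 0), (2, 1)}  # one canonical orientation

def parse_layers(rows, n=3):
    """Parse n display rows like 'X__ ___ ___' into per-player cells.

    Each row holds one n-character group per layer; the leftmost group is
    the bottom layer, matching the display convention described above.
    Returns {'X': {(layer, row, col), ...}, 'O': {...}}.
    """
    cells = {"X": set(), "O": set()}
    for r, line in enumerate(rows):
        for layer, group in enumerate(line.split()):
            for c, ch in enumerate(group):
                if ch in cells:
                    cells[ch].add((layer, r, c))
    return cells

def matches_utah(points):
    """True if the 2D points are a translate of the canonical Utah shape.

    Rotations, reflections, and other cube planes are deliberately
    omitted in this sketch.
    """
    if len(points) != len(UTAH):
        return False
    r0 = min(r for r, _ in points)
    c0 = min(c for _, c in points)
    return {(r - r0, c - c0) for r, c in points} == UTAH
```

Applied to the first example board, the bottom layer's X cells form exactly the canonical shape.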
Here is the cube in the sample image on this page:
___ _O_ ___ ___ OO_ ___ XX_ XXO X__
In this cube, the Utah-shaped pentomino is along the front side of the cube.
___ ___ ___ __X _XX _XX ___ ___ ___
This cube has an upside-down Utah-shaped pentomino down the center right of the cube.
An example of a diagonal shape that does not contribute to winning:
X__ _X_ __X X__ _X_ ___ ___ ___ ___
Clear as mud?
To try and help, we've constructed a visualization program. When you clone the project repo, you'll find visualizer.py in addition to the normal things. (If for some reason this didn't work for you, you can download it here.) You'll need VPython (http://vpython.org/) to be able to click around.
This visualizer is what generated the sample image on this page. Left-click
marks a box, right-click and drag rotates the view, and middle-click and drag
We'll be giving you game states over stdin. Each game state will be exactly one move away from either you or your opponent completing a Utah pentomino. You will be next to play and need to make the optimum move. Your task is to write a program that takes an arbitrary amount of these final-stage game states and outputs the next game state after making your move. Your program plays for the player placing Xs (the other player is Os). Input game states will be NxNxN cubes, newline separated, and stdin will be closed when no more game states need to be sent.
X__ ___ ___ XX_ _O_ ___ OXO ___ ___ ___ OOO ___ ___ _OX ___ ___ _X_ _X_
For output, just output the next game board state with your new move placed.
XX_ ___ ___ XX_ _O_ ___ OXO ___ ___ ___ OOO ___ ___ XOX ___ ___ _X_ _X_
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178378043.81/warc/CC-MAIN-20210307170119-20210307200119-00000.warc.gz
|
CC-MAIN-2021-10
| 2,666
| 42
|
https://play.google.com/store/apps/details?id=com.vaibhav.noteshub.noteshubxapp&pcampaignid=MKT-Other-global-all-co-prtnr-py-PartBadge-Mar2515-1
|
code
|
NotesHub is a platform to get all the academic material for students/faculty enrolled in various colleges of GGSIPU, New Delhi. The platform provides FREE notes (handwritten, printed, notes from lectures and faculty notes), previous year solved question papers, eBooks and practical files for all the branches.
WHY USE NotesHub:
NO FEES: NotesHub is developed as part of a community that the students are trying to build for sharing of academic material and other resources. Therefore, no subscription is required to use NotesHub.
QUALITY: The team is driven to provide the best quality material available so that you don’t have to sweat to collect them. We want you to focus on studying while we do the running.
QUANTITY: The content team of NotesHub spreads across 5 colleges and is driven by the purpose of helping others. Therefore, new content is uploaded REGULARLY and old content is updated.
RESPONSE TIME: The team understands time sensitivity when it comes to exams and other college related issues. You can get the response within minutes for your query.
MATERIAL: The platform caters to all the branches (CSE, IT, ECE, EEE, MAE, CIVIL, etc.) and all the 8 semesters for B.Tech. The content for other courses is on its way.
SEGREGATION OF CONTENT: The application is specially tailored as per your needs and therefore, provides a personalized experience every time you open the app. The subjects are segregated based on the branch and semester.
We’re always excited to hear from you! If you have any feedback, concerns or questions, please email us at:
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347387155.10/warc/CC-MAIN-20200525001747-20200525031747-00008.warc.gz
|
CC-MAIN-2020-24
| 1,566
| 9
|
http://paldan.altervista.org/realtek-sd-reader-hp-envy-d001nl-ubuntu-15-10/
|
code
|
Recently I bought an HP Envy d001nl and I’m very happy with it, but yesterday I realized that the Realtek SD Reader was not working properly under Ubuntu 15.10 Wily. Not good, since I badly need the SD Card reader for copying photos from my Pentax K-30… Let’s see what can be done.
First of all, it could be useful to exclude a hardware problem: I still have Windows 10 in my PC, barely used, but this time it proves to be useful, since it shows that the card reader is properly working. Using device manager I can find a few more details on this piece of hardware.
It is a Realtek PCIE CardReader with device ID 522A.
Back to Ubuntu, I verify this info with:
[email protected]:~$ lspci
02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. Device 522a (rev 01)
A quick check on the Internet shows that the Realtek SD Reader is supported in the kernel since version 4.4, but unfortunately my Kubuntu version has kernel
[email protected]:~$ uname -r
The cleanest way to solve the problem is to build an updated kernel version, with the support for the card reader: there are a ton of pages that explain how to do that, so I won’t cover this kind of solution.
But suppose that you don’t want to change the kernel, there is still some chance to have the SD card reader properly working.
Disclaimer: I am not responsible of the damages you can do to your PC following this tutorial. Do it at your own risk.
That said, there should really be no problem if you take a bit of care.
The first step is downloading the kernel sources and the kernel headers:
[email protected]:~$ apt-get source linux
[email protected]:~$ sudo apt-get install linux-headers-generic
Four files are involved in the working of the Realtek SD reader:
include/linux/mfd/rtsx_pci.h drivers/mfd/rtsx_pcr.h drivers/mfd/rtsx_pcr.c drivers/mfd/rts5227.c
The right way to proceed would be to backport the changes related to the device identified by the id 522A (to avoid problems due to the different kernel versions), but luckily the driver seems not to be changed much between versions 4.2 and 4.4. This means that we can use the files of the 4.4 (or more recent) kernel version straight away.
For my PC I choose to use the driver source code files found in kernel 4.5; they can be downloaded from the github kernel repository:
Once downloaded, go to the root of the kernel sources
[email protected]:~$ cd linux-4.2.0
and copy respectively the above files in
rtsx_pci.h in include/linux/mfd/
rtsx_pcr.h in drivers/mfd/
rtsx_pcr.c in drivers/mfd/
rts5227.c in drivers/mfd/
Now it’s time to build the driver; copy your kernel configuration in the .config file to be used for kernel building
[email protected]:~/linux-4.2.0$ cp -vi /boot/config-`uname -r` .config
Copy the Module.symvers from the kernel headers package (more info related to the Module.symvers file here)
[email protected]:~/linux-4.2.0$ cp /lib/modules/$(uname -r)/build/Module.symvers ./
Build the modules:
[email protected]:~/linux-4.2.0$ make prepare
[email protected]:~/linux-4.2.0$ make modules_prepare
[email protected]:~/linux-4.2.0$ make SUBDIRS=drivers/mfd/
If all is fine, using the following command
[email protected]:~/linux-4.2.0$ ls drivers/mfd/*.ko | grep pci
you should see the line
Remove the old driver (if loaded)
[email protected]:~/linux-4.2.0$ sudo rmmod rtsx_pci
Make a backup copy of the old driver
[email protected]:~/linux-4.2.0$ sudo mv /lib/modules/$(uname -r)/kernel/drivers/mfd/rtsx_pci.ko /lib/modules/$(uname -r)/kernel/drivers/mfd/rtsx_pci.ko_ORIGINAL
Copy the new driver and regenerates the module dependencies
[email protected]:~/linux-4.2.0$ sudo cp drivers/mfd/rtsx_pci.ko /lib/modules/$(uname -r)/kernel/drivers/mfd/
[email protected]:~/linux-4.2.0$ sudo depmod -a
Finally, load the driver
[email protected]:~/linux-4.2.0$ sudo modprobe rtsx_pci
If the operations has been successful, once a SD card is inserted, you should see in the kernel log something like:
[ 4248.349559] rtsx_pci 0000:02:00.0: rtsx_pci_acquire_irq: pcr->msi_en = 1, pci->irq = 280
[ 4251.305275] mmc0: cannot verify signal voltage switch
[ 4251.433049] mmc0: new ultra high speed SDR50 SDHC card at address aaaa
[ 4251.447625] mmcblk0: mmc0:aaaa SU32G 29.7 GiB
[ 4251.466839] mmcblk0: p1
and if you are using a desktop manager you should see the device:
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589417.43/warc/CC-MAIN-20180716174032-20180716194032-00417.warc.gz
|
CC-MAIN-2018-30
| 4,314
| 46
|
https://dspace.tul.cz/browse/author?scope=8a2a8d99-e1d3-48b4-aaad-3fc3693db4a8&value=Je%C5%BEek,%20Bruno
|
code
|
Browsing Číslo 3 by Author "Ježek, Bruno"
Now showing 1 - 1 of 1
- Item: Load balancing location of emergency medical service stations (Technická Univerzita v Liberci)
- Authors: Jánošíková, Ľudmila; Gábrišová, Lýdia; Ježek, Bruno; Ekonomická fakulta
- Abstract: When we want to design a successful and efficient emergency medical system, the crucial task is to determine the number of ambulances operating in a given region and the deployment of stations where the ambulances are kept. In the Slovak Republic, the number and locations of stations are specified by the Ministry of Health for the whole state territory. In the Czech Republic, the network of stations is established by the local authority for each administrative region. Due to geographical and population diversity, there are significant differences in population served by individual ambulances. Assuming that the number of ambulances is given, we want to investigate whether a different location of the ambulances might result in a more even distribution of their workload and, consequently, shorter response time. The problem is modelled as a capacitated p-median problem and solved using mathematical programming. The capacitated p-median problem is known to be NP-complete. As a consequence, it cannot be solved to optimality even for moderate-sized problem instances. However, we face a large-scale problem instance consisting of almost 3,000 demand nodes. Therefore heuristic approaches need to be used to get a sufficiently good solution in an acceptable time. Two decomposition mathematical heuristics are described in the paper and a new heuristic method based on previously developed approaches is presented. A redeployment of existing EMS stations in the Slovak Republic is calculated using these methods. The results are compared mutually and with the current deployment. The benefits and limitations of the presented methodology are discussed.
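The paper's decomposition heuristics are not reproduced in the abstract. As a rough illustration of the problem class only, here is a hedged Python sketch of a greedy capacitated p-median heuristic; all names and the toy data are the editor's own, not the authors' method, and the greedy choice carries no optimality guarantee:

```python
def greedy_capacitated_p_median(dist, demand, capacity, p):
    """Greedy sketch: open p medians one at a time, each time picking the
    candidate that most reduces total (uncapacitated) assignment cost,
    then assign demand nodes to the nearest open median with spare
    capacity (falling back to the nearest median if none fits).

    dist[i][j] is the distance from node i to candidate median j.
    """
    n = len(dist)
    medians = []
    for _ in range(p):
        best_j, best_cost = None, float("inf")
        for j in range(n):
            if j in medians:
                continue
            trial = medians + [j]
            cost = sum(min(dist[i][m] for m in trial) for i in range(n))
            if cost < best_cost:
                best_j, best_cost = j, cost
        medians.append(best_j)
    # capacity-aware assignment, largest demands first
    load = {m: 0 for m in medians}
    assign = {}
    for i in sorted(range(n), key=lambda i: -demand[i]):
        feasible = [m for m in medians if load[m] + demand[i] <= capacity[m]]
        m = min(feasible or medians, key=lambda m: dist[i][m])
        assign[i] = m
        load[m] += demand[i]
    return medians, assign
```

On a tiny 4-node instance with two natural clusters, the heuristic opens one median per cluster and splits the demand between them.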
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474671.63/warc/CC-MAIN-20240227053544-20240227083544-00672.warc.gz
|
CC-MAIN-2024-10
| 1,930
| 4
|
https://cycling74.com/forums/sprintf-_nothing_/
|
code
|
I am using sprintf in the following format: [sprintf %s%s:%s%s],
which lets me have a nice clock in an LCD (or message box probably) in
a format something like 12:45, or 01:56
now when i want to see negative values ( how long to wait until
something happens...) i can easily have -10:21 (still fine) but i
would like to have also -4:39...
meaning i would like the first %s (leftmost) to output nothing......
for the moment i just output a dot (.) so my format is .-4:54... but
is there a way to input something which will be output as nothing???
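In printf-style formatting generally, the usual answer is to substitute an empty string for the slot you want blank. A small Python analogue of the clock format (illustrative only; the Max sprintf object may treat empty symbols differently):

```python
def format_clock(total_seconds):
    """Format a signed clock value: 12:45, 01:56, -4:39, -10:21.

    The sign slot is an empty string for non-negative values, which is
    the printf-style way to make a %s placeholder "output nothing".
    """
    sign = "-" if total_seconds < 0 else ""
    minutes, seconds = divmod(abs(total_seconds), 60)
    # zero-pad minutes only when there is no sign, so negatives read -4:39
    minute_str = "%02d" % minutes if not sign else "%d" % minutes
    return "%s%s:%02d" % (sign, minute_str, seconds)
```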
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824293.62/warc/CC-MAIN-20171020173404-20171020193404-00764.warc.gz
|
CC-MAIN-2017-43
| 545
| 9
|
https://www.freelancer.sg/projects/facebook-marketing/facebook-ads-expert-needed-for-21697579/
|
code
|
Job Not Found
Sorry we couldn't find the job you were looking for.
Find the most recent jobs here:
I need a coder who can fix a program; it's a checker, for cracking
Looking to enhance my listings on eBay for greater sales only. Looking for ex eBay specialists from the Philippines only.
Need an urgent Swedish Keyword researcher need who can start immediately. Don't apply without Swedish knowledge.
Already have an app created and need two features added to the app. #1 is a simple navigation change. When a user adds a record we don’t want to navigate back to the screen showing records; we want to navigate into that record so the user can continue working on adding details to it. #2 the app already stores a date and a months term value. What we want is a calculation of months remaini...
DATA ENTRY SAMP (3 days left): Hi there. I want to hire someone who just knows basic computer operation. It's a very simple job and a long-term project. Thanks
I need an app that lets users connect with other users to find and play sports in their area. I need a simple profile customisation, an explore tool that lets you find sports and activities in your area, and an activity creation function.
Hello, New update - Please see the explaining video in the attachment Thanks. Please see the explaining video Please see the explaining video ([login to view URL]) to get a clear idea that's all, thanks.
Manage social media posts and produce the required monthly advertising designs for an educational institution.
I have a project related to the ESP32 development board, using Bluetooth communication to turn a motor on and off from a mobile application
Install Profire 2626 FireWire drivers on macOS Catalina; adapt to Spanish
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896169.35/warc/CC-MAIN-20200708000016-20200708030016-00434.warc.gz
|
CC-MAIN-2020-29
| 1,748
| 13
|
https://www.pcgamebenchmark.com/ludochip-pc-games
|
code
|
PC Games by Ludochip
There is 1 PC game by Ludochip listed in the PCGameBenchmark system requirements database.
Viewing page 1 of 1
Cubetractor System Requirements
WINNER: IGF China award for BEST GAME! Cubetractor is a neo-retroesque action-strategy-puzzle hybrid where you defeat enemies through an unconventional cube-pulling, turret-building mechanic. The game carries elements of a reverse tower defence and a grounded ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00873.warc.gz
|
CC-MAIN-2022-49
| 430
| 5
|
https://mail.scipy.org/pipermail/ipython-user/2010-October/007197.html
|
code
|
[IPython-User] [Numpy-discussion] [ANN] IPython 0.10.1 is out.
Tue Oct 12 13:40:50 CDT 2010
On Tue, Oct 12, 2010 at 4:25 AM, Scott Sinclair
> 2010/10/12 Fernando Perez <firstname.lastname@example.org>:
>> we've just released IPython 0.10.1, full release notes are below.
> A buglet - http://github.com/ipython/ipython/issues/issue/168
> The long description on the PyPI listing still points to Launchpad
> "The latest development version is always available from IPython's
> Launchpad site."
> Requires a fix in IPython/core/release.py and re-registering the
> release on PyPI.
Yes, great catch! Thanks for reporting it, I've already fixed it
manually on PyPI, and edited the sources as well:
More information about the IPython-User mailing list
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890582.77/warc/CC-MAIN-20180121120038-20180121140038-00652.warc.gz
|
CC-MAIN-2018-05
| 732
| 14
|
http://what-when-how.com/Tutorial/topic-684cn3k/Java-Performance-The-Definitive-Guide-159.html
|
code
|
as well. Application servers, for example, typically specify a maximum permgen size of 128
MB, 192 MB, or more.
Contrary to its name, data stored in permgen is not permanent (metaspace, then, is a much
better name). In particular, classes can be eligible for GC just like anything else. This is a
very common occurrence in an application server, which creates new classloaders every time
an application is deployed (or redeployed). The old classloaders are then unreferenced and
eligible for GC, as are any classes that they defined. In a long development cycle in an application server, it is not unusual to see full GCs triggered during deployment: permgen or metaspace has filled up with the new class information, but the old class metadata can be discarded.
Heap dumps (see Chapter 7 ) can be used to diagnose what classloaders exist, which in turn
can help determine if a classloader leak is filling up permgen (or metaspace). Otherwise,
jmap can be used with the argument -permstat (in Java 7) or -clstats (in Java 8) to print
out information about the classloaders. That particular command isn't the most stable,
though, and it cannot be recommended.
1. The permanent generation or metaspace holds class metadata (not class data it-
self). It behaves like a separate heap.
2. For typical applications that do not load classes after startup, the initial size of this
region can be based on its usage after all classes have been loaded. That will
slightly speed up startup.
3. Application servers doing development (or any environment where classes are
frequently redefined) will see an occasional full GC caused when permgen/
metaspace fills up and old class metadata is discarded.
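The classloader lifecycle described above can be sketched in a few lines. This is a toy illustration (class and variable names are invented, not from the book): a fresh loader is created, a class is resolved through it, and once the loader is unreferenced it — and any classes only it defined — becomes eligible for collection like any other object.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // A fresh classloader, as an application server would create per deployment.
        URLClassLoader loader = new URLClassLoader(new URL[0]);
        // Classes resolved through this loader have their metadata in permgen/metaspace.
        Class<?> c = Class.forName("java.lang.String", true, loader);
        System.out.println(c.getName());
        loader.close();
        // Once unreferenced, the loader and any classes it alone defined
        // become eligible for GC; their metadata can then be discarded.
        loader = null;
    }
}
```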
All GC algorithms except the serial collector use multiple threads. The number of these
threads is controlled by the -XX:ParallelGCThreads= N flag. The value of this flag affects
the number of threads used for the following operations:
▪ Collection of the young generation when using -XX:+UseParallelGC
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689686.40/warc/CC-MAIN-20170923141947-20170923161947-00461.warc.gz
|
CC-MAIN-2017-39
| 1,984
| 26
|
https://rits.center/energy/
|
code
|
Optimization of the management of the scope of work related to the extraction and transport of natural gas. Building a mobile and system application based on devices from the Windows family.
Designing an application based on cloud solutions with a short implementation period. Focus on security aspects. Using Microsoft Azure, React Native and MongoDB to implement critical project assumptions.
A decentralized nine-person team of highly experienced programmers, run following the Scrum two-week iteration cycle.
A team producing solutions at the highest world level. Deep low-level knowledge of programming languages and popular frameworks allows us to optimize the consumption of system resources and scale solutions depending on the Client's wishes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00085.warc.gz
|
CC-MAIN-2022-21
| 747
| 4
|
https://coderanch.com/t/372173/java/NDS-parameter
|
code
|
Hi, I created a class that validate an user in NDS, but I need to get the value of one NDS parameter (called "Company") of the respective user. How can I do this? Please, I really need some help here. Thanks.
I hope this isn't for a homework assignment. Here's some code that might help you:
"myCtx" is an instance of "DirContext". "attrName" and "attrValue" are attribute name and value you want to search by. For example, it could be "cn" and "jbrown". The attribute value can contain wild cards, etc., just standard LDAP stuff. "searchScope" is one of teh valid "SearchControls.XXXX" values (ONELEVEL_SCOPE, OBJECT_SCOPE, SUBTREE_SCOPE, ...). Again pretty standard LDAP stuff. In the example here it will retrieve all of the attributes from the matching object and save the key/value pairs in a "TreeMap". It only handles String (text) attribute values nicely, and it doesn't have allowance for multi-value attributes. I'm sure you can add those if you need. It is fairly easy to modify this to handle multiple matches: this code only handles a single match, which is sufficient in many cases. Hope this helps ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00044.warc.gz
|
CC-MAIN-2021-17
| 1,248
| 5
|
https://www.agileconnection.com/article/continuous-integration-just-another-buzz-word?page=0%2C6
|
code
|
Most experienced change and configuration managers would gasp at the seeming low-tech inefficiency of index cards as means to track and report conformance and status. At the same time, it is hard to combat the efficacy of index cards to engage customers in a more participative and collaborative dialogue for eliciting requirements, and drawing simple diagrams - apparently, the physicality and tactile experience of the cards is simply more inviting and lets the customer do the writing instead of relying upon a single input-controlling scribe. (Some have suggested pair writing could be done between a developer and a customer in the same fashion that pair programming is done during implementation).
Even if index cards are used as the medium for initial capture of requests and requirements, many (if not most) change and configuration managers will want to subsequently transfer this information into a spreadsheet or a tracking system for fast and easy reporting, querying, searching, sorting, as well as for real-time dissemination across the project's organization and stakeholder-sites (not to mention affording more efficient and reliable storage, record retention, archival, and retrieval/recovery). Keeping stories in a tool also lets you apply a simple workflow to see how the work is progressing.
Several agilists would argue that such a tool runs afoul of the mandate of simplicity. To be certain, many have gone overboard with defining and enforcing process through a tool - we highly recommend against that, since it often results in drastically increased administration overhead, drastically decreased face-to-face communication, and hence very low agility. We believe these bad experiences, combined with misapplied CMM level-climbing attempts, are responsible for much of the agile community's backlash against the use of more sophisticated but useful tools and processes.
The relentless focus on keeping things as simple as possible, and on face-to-face interaction over face-to-machine interaction still provides sound guidelines and important reminders when adopting processes and tools. With the right amount of process using a simple and smart tool, agile projects will find increased productivity and better coordination. The bottom line is really to do what you know works for you, and keep it as simple as possible, applying the principles of lean development every step of the way.
Change management is concerned with controlling and tracking changes to project and product scope and ensuring conformance to customer expectations. Agile change management is concerned with increasing the ability of the project to be responsive to requests for change and to quickly implement accepted change requests. This requires minimizing: the cost of effective knowledge transfer, the amount of knowledge captured in intermediate artifacts, and the time between making a decision and learning the effects of its result. The key success factors of agile change management are the use of iterative and incremental development with short feedback cycles, and close collaboration with frequent face-to-face interaction between developers and customers.
Sometimes the customer base is diverse and/or dispersed and a product manager role is needed to facilitate agreement from, and make decisions on behalf of the customer base. Participatory decision-making tends to produce the most collaborative results, and normative voting and effort allocation approaches have proved effective in reaching customer consensus to prioritize and plan the requests to implement at the beginning of an iteration. The product manager should be empowered to make decisions about issues that arise during an iteration, but either the product manager or a small sampling of customer reps can elaborate the details of a particular request to be implemented.
Index cards can be an effective means of engaging the customer during requirements capture, and simple tools (and processes) are an effective means of tracking, coordinating, and reporting visible progress of requests and changes against expected functionality and content. Don't be fooled by the allure of sophisticated processes and tools; and don't overcompensate by discarding simple but effective tools and techniques. Look for a balance of utility and simplicity that is both effective and efficient in meeting your change management needs. And keep an eye out for opportunities to eliminate redundant or unused elements of your processes, tools, and artifacts after each iteration.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612069.19/warc/CC-MAIN-20170529091944-20170529111944-00403.warc.gz
|
CC-MAIN-2017-22
| 4,633
| 7
|
https://github.com/snabblab/snabblab-nixos
|
code
|
This documentation is aimed at infrastructure developers, for Snabb infrastructure usage see the Snabblab section in Snabb manual.
Overview of Snabblab CI infrastructure
Source code for managing the Snabb community infrastructure, providing developers with tools to build, test, and benchmark Snabb software.
Under the hood, the Nix language is used.
The source code serves two purposes:
Hydra is a [CI](https://en.wikipedia.org/wiki/Continuous_integration) used by Snabb developers to test and benchmark different kinds of applications in Snabb. Relevant folders are
Server deployments using NixOps and Hydra. Relevant folders are
Motivations for Snabblab infrastructure
The following separate topics are all covered in this repository.
Snabblab is a group of servers with attached networking cards on which Snabb can be developed and used. The cluster needs to be managed and deployed without too much hassle.
Snabb has unit and functional tests that require specific setup and environment to run successfully.
It's critical that Snabb doesn't regress in performance throughout development.
Different Snabb applications integrate into other software, requiring an interesting set of software combinations to be benchmarked.
- 10 different test cases.
- 5 versions of QEMU.
- 10 different guest VMs (Linux and DPDK).
- 16 combinations of Virtio-net options.
- 2 NUMA setups ("good" and "bad")
- 2 polling modes (engine "busy loop" and sleep/backoff)
- 2 error recovery modes (engine supervising apps vs process restart)
- 2 C libraries (glibc and musl)
- 3 CPUs (Sandy Bridge, Haswell, Skylake)
Be familiar with:
existence of basic Nix datatype manipulation functions
The very core of Hydra are jobsets. They define configuration how and when a specific Nix expression is executed.
Jobsets are grouped into projects for easier separation of concerns.
For example, snabb/master means the master jobset for the snabb project.
The jobsets/snabb.nix expression is evaluated using the highlighted function inputs that the jobset configures.
The jobset configuration page defines:
- `jobsets/snabb.nix` in an input named `snabblab` (which fetches https://github.com/snabblab/snabblab-nixos.git into the Nix store)
- a `snabbSrc` function input as https://github.com/snabbco/snabb.git imported into the Nix store
- `nixpkgs` available in the Nix search path, to be imported anywhere in the expression
Once evaluation is triggered (every 300 seconds in this case), inputs are fetched and the whole Nix expression is evaluated. For each Nix derivation the hash is calculated and if it changes, the derivation is rebuilt.
An example evaluation shows that all jobs still succeed. Under the "Inputs" tab one can observe what inputs were used in this specific evaluation and due to Nix design and property of referential transparency, one should always get the same derivations for those inputs.
Each job can also provide "build products" which define what files are inside the resulting derivations and ready for download. Clicking on the manual job it lists different files representing manual formats contained inside the Nix store path.
- Snabb binary
- Snabb manual
- Snabb tests (make test)
- Snabb, not using Nix expression but rather packages on specific distribution (CentOS, OpenSUSE, Debian, Ubuntu, Fedora)
Note: clicking on specific jobset, on "Configuration" tab one can see what inputs are used for the Nix expression: here is an example.
The jobset will build all specified Snabb branches (… pairs). Additionally, you specify which kernelVersions will be used. Using all these software versions, a big matrix of combinations of inputs is computed and used to execute selected benchmarks.
benchmarkNames is a list of benchmark names being executed on the matrix.
The numTimesRunBenchmark input specifies how many times each benchmark is run.
nixpkgs points to a specific commit, pinning all software used.
Once all benchmarks are executed, a big CSV file is generated based on results.
Last but not least, reports is a list of report names that consume the CSV and produce a nice report using R and markdown.
Under the hood of a specific benchmark (outputs)
The infrastructure behind a call to execute a benchmark consists of the jobset function outputs, spans over 700 lines in the jobsets/snabb-matrix.nix file and the supporting lib/ folder, and begins with building all software used in the matrix.
Using sets of different (Snabb/Qemu/Dpdk/kernel) versions and names of benchmarks, a huge list of benchmark derivations is generated.
- `name`: just the identifier of the benchmark
- `checkPhase`: bash executing the benchmark itself and writing output to stdout and a log file
- `toCSV`: takes the derivation result as input and extracts the benchmarking value out of it
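A purely illustrative sketch of that shape (every attribute default and command below is an assumption; the real definitions span jobsets/snabb-matrix.nix and lib/):

```nix
# Sketch only, not the actual snabb-matrix.nix code.
{ stdenv, snabb, numTimes ? 1 }:

stdenv.mkDerivation {
  name = "benchmark-basic1-${snabb.version}";
  doCheck = true;
  checkPhase = ''
    for i in $(seq 1 ${toString numTimes}); do
      ${snabb}/bin/snabb snabbmark basic1 | tee -a benchmark.log
    done
    mkdir -p $out && cp benchmark.log $out/
  '';
  # a toCSV-style consumer would later extract the measured value from benchmark.log
}
```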
It provides an environment in which all Snabb tests/benchmarks are executed. All software and environment settings are configured for checkPhase to execute correctly. For some benchmarks/tests, ~/.test_env inside the chrooted environment is populated using the mkTestNixEnv function, which builds two qemu images (one plain NixOS and one with dpdk l2fwd running) and the corresponding initrd kernel fixtures.
Using all executed benchmarks, [mkBenchmarkCSV](https://github.com/snabblab/snabblab-nixos/blob/master/lib/benchmarks.nix#L200-217) generates one big CSV consisting of the inputs specification and measured benchmarking values.
NixOps is used for provisioning the machines.
$ ssh firstname.lastname@example.org
$ cd snabblab-nixos
It uses an sqlite database (~/.nixops/deployments.nixops) to store state about the provisioning: for example, SSH keys, paths to nix files, and the current deployment state.
First, create a nixops deployment:
$ nixops create -d lab-production ./machines/lab.nix ./machines/lab-production.nix
The server needs a basic NixOS install running SSH with your public key configured.
Edit machines/lab-production.nix and add a new machine.
$ nixops deploy -d lab-production --include mymachine
Edit machines/lab-production.nix and add a new machine.
To bootstrap Hetzner machine we need to use https://robot.your-server.de/ account:
$ HETZNER_ROBOT_USER=<user> HETZNER_ROBOT_PASS=<pass> nixops deploy -d eiger -I nixpkgs=http://nixos.org/channels/nixos-16.09/nixexprs.tar.xz --include eiger
Copy generated Nix configuration into separate file:
$ nixops export -d lab-production | ./convert_export.py > ./machines/lab-export.nix
Note: this is very WIP and not all servers are deployed using this workflow yet.
A developer pushes a configuration change into Git, Hydra builds and tests it, servers are setup to automatically update themselves from Hydra. For each machine there is a separate channel that serves up that machine's software and configuration.
Testing Snabblab changes manually
Some changes in the repository may trigger massive rebuilds, for example some benchmarks can take more than a day to execute.
For this reason, such changes should go to the
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703528672.38/warc/CC-MAIN-20210121225305-20210122015305-00698.warc.gz
|
CC-MAIN-2021-04
| 6,811
| 92
|
http://myfiretower.net/install/
|
code
|
FireTower Server Installation
Please note that the PC to host FireTower Service requires a Windows x64 architecture with Microsoft PowerShell 3.0 and above. The preferred FireTower Service PC is a Windows 10 Pro x64 machine.
At your PC to host FireTower Service:
- Download FireTower Server and Cyber Console Installer Software
- Execute the downloaded installer software to install FireTower Server and Cyber Console service and its operating environment (XAMPP Communication Stack)
- After FireTower Service is successfully installed:
- log in to WinCyCon.exe (ID: admin, and default password: admin) to make sure FireTower service is installed and operating properly,
- change your default password, and
- create a client enrollment package to deploy and protect endpoint computers.
- Click the following URL to download FireTower-CyberConsole-XAMPP-Installer.exe Download Link for FireTower Server and Cyber Console Installer Software
- Use the following 30-day trial key to activate your FireTower Server license during the installation process when prompted: 070BB-C2A07-2FAF5-16D68-A6F966
Cyber Console for Windows (wincycon.exe) provides an interactive threat exploration interface with built-in analytics to hunt for indicators of compromise, to deliver comprehensive endpoint visibility and to enhance the detection and containment of malicious activities.
- WinCyCon.exe is located at FireTower Server system C:\xampp\cycon\bin\x64 for x64 machine or C:\xampp\cycon\bin\x86 for x86 machine
- Cyber Console for Windows can be run from anywhere as long as the FireTower Server system has a routable IP or reachable through DNS record.
- Cyber Console for Windows could be run from any Windows PC, make sure you have the right architecture version for your Windows system.
Without registration, your FireTower installation will only function for 30 days.
Please register your FireTower Security Solution installation with Sampan Security, Inc.; we will then convert your 30-day trial license to a one-year subscription license to protect up to 25 endpoint computers. Please note this is for personal use only.
- Please login to your Cyber Console and access Account Management Tab and copy the content of your installation “Maintenance ID” field
- Please paste your Maintenance ID to the comment field of Contact Us Form and submit the form.
To change the default admin password: “admin” to FireTower Cyber Console:
- Login to Cyber Console (Wincycon.exe)
- Click Account Management Tab
- Select Admin from the User Pane, and click down “User Action” and select “Edit User”
- Enter new password and click “OK”
Possible scenarios for installation failure:
Windows System requirements not met:
- x64 architecture only
- Microsoft PowerShell 3.0 and above: please follow Microsoft instructions to upgrade to PowerShell 3.0
Communication port requirements not met:
- Port 80 (http), 443 (https), 3306 (MySQL)
Please follow the URLs listed below to resolve your communication port issues:
Symptom: “Cannot connect to localnet” when you try to sign in to FireTower Cyber Console (WinCyCon.exe from desktop shortcut) from the FireTower Server PC:
- Apache (C:\xampp\apache\bin\httpd.exe) is not running or not listening on port 80
- Apache (C:\xampp\apache\bin\httpd.exe) is not running or not listening on port 443
- MySQL (C:\xampp\mysql\bin\mysqld.exe) is not running or not listening on port 3306
- Execute the included diagnostic tool at C:\xampp\cycon\bin\CyConDiagnostic.exe and send us the result, or
- Follow Section V, FireTower Server Troubleshooting at FireTowerTroubleshootingGuide to identify the culprit by
- Accessing Apache landing page through “http://localhost”
- Accessing Apache landing page through “https://localhost”
- Please verify using Control Panel\Administrator Tools\Services that Apache (httpd.exe) and MySQL (mysqld.exe) service are running
- If you are using a third-party firewall, please make sure you set up inbound firewall rules for the Apache and MySQL services.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347406365.40/warc/CC-MAIN-20200529183529-20200529213529-00054.warc.gz
|
CC-MAIN-2020-24
| 4,003
| 41
|
https://www.fi.freelancer.com/projects/data-entry/some-excel-work-8659761/
|
code
|
Hi, I am Bhakti here. I want to do this work because I have a good command of Excel and have worked with it many times. So please inform me; I will complete this on time with full perfection. I will never disa…
Please note that I have an experience of more than 19 years in Word, Excel as well as in all MS Office Packages.
Also, my typing speed is 70 WPM with no errors. Hence, if you award me this task, using …
Hi sir/madam,
If your task is small enough, I alone can do it very quickly;
but if it is too large, then I have a team that will divide the task, and you will get your work done faster compared to others by talkin…
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948617816.91/warc/CC-MAIN-20171218141805-20171218163805-00620.warc.gz
|
CC-MAIN-2017-51
| 656
| 6
|
https://www.lunarsoft.net/tag/bing-rewards
|
code
|
Microsoft recently announced that they would be making changes to the Bing Rewards program; the company is changing the name, and how it operates, to Microsoft Rewards and we are now learning new details about the service. There is a new account dashboard, that you can view here (if logged in) that shows you all the ways you can earn points for the program....
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687740.4/warc/CC-MAIN-20170921101029-20170921121029-00355.warc.gz
|
CC-MAIN-2017-39
| 700
| 6
|
https://krambica.com/2022how-to-recover-notepad-file-on-win-10-11-43/
|
code
|
“Microsoft makes Notepad a separate Store app starting with new Windows 10 20H1 test build”. It allows customizing headers, footers, and margins before printing. The date, file name, and other information can be placed in the headers and footers with various codes consisting of an ampersand (‘&’) followed by a letter.
If Notepad is not up to date, it may not be compatible with Windows 11 and may cause errors. Notepad is a popular text-editing program that comes with Windows, but it can be prone to various errors. It is an invaluable tool for many users and can be incredibly frustrating when it stops working. Fortunately, there are a few things you can do to fix the issue.
Therefore, when making this update users will be able to access all the features available in the classic app as they are supported by RichEdit. That said, Show Unicode control characters will now work in this new version of the application which also includes emojis. Microsoft also announced new keyboard shortcut keys to help you manage tabs. Getting back the classic Notepad in Windows 11 can be useful if you don’t like the modern app that replaced it.
When you create a user profile in PlantText, we only store your authentication information on our servers. All of your data is stored in your browser’s local storage, NOT our server. Please make sure you are pressing the right button below to accomplish your goal.
- However, for JSON files containing strings of texts or translations, you should use Localazy, suitable for managing multiple file formats.
- Although it is still not on par with other high-end text editors, it is surely a welcome update.
- The section names are filepath globs , similar to the format accepted by gitignore.
- This is located on the right-hand side of the main window.
Good luck, and do not forget to check the other technical tips on the mycodebit website. Notepad is the first built-in app to get a tabbed interface after Microsoft added tabs to File Explorer last year. Microsoft first started testing tabs across all Windows 10 apps nearly five years ago in a feature named Sets. This would have added support for tabs inside Notepad, File Explorer, and many other apps, but Microsoft eventually canceled the project and never shipped it to Windows 10 users. The obvious new feature in the redesigned Notepad is the dark mode. Learning how to do simple file management at the Command Prompt comes in handy when you’re learning to code.
How to Compare Two Files in Notepad++
The result is a list of the dictionary’s values. This ability to sort lists and other data containers by multiple positions is very helpful because you quite often need to sort data by multiple values. For example, if you have daily sale transactions data, you may need to sort the data first by day and then by transaction amount for each day. Or, if you have supplier data, you may need to sort the data first by supplier name and then by supply receipt dates for each supplier. The sorted and itemgetter functions provide this functionality. This example shows how to use the sorted function in combination with a key function to sort a collection of lists by the value in a specific index position in each list.
For JS/Angular/React I would suggest go with VSCode. I personally use it and prefer as its light weight and have good integration with chrome for frontend development. I couldn’t imagine using a development tool other than the IntelliJ IDEA Ultimate All Products Pack.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818105.48/warc/CC-MAIN-20240422082202-20240422112202-00351.warc.gz
|
CC-MAIN-2024-18
| 3,605
| 13
|
https://www.intel.com/content/www/us/en/docs/vtune-profiler/user-guide/2023-1/s0ix-states.html
|
code
|
S0ix-states represent the residency in the Intel® SoC idle standby power states. The S0ix states shut off part of the SoC when they are not in use. The S0ix states are triggered when specific conditions within the SoC have been achieved, for example: certain components are in low power states. The SoC consumes the least amount of power in the deepest (for example, S0i3) state.
On Linux*, Android*, and Chrome* OS, the ACPI S-state metric represents the system’s residency in the ACPI Suspend-To-RAM (S3) state. In the Suspend-To-RAM state, the Linux kernel powers down many of the system’s components while maintaining the system’s state in its main memory. The system consumes the least amount of power possible while in the Suspend-To-RAM state. Note that any wakelock will prevent the system from entering the Suspend-To-RAM state.
This metric is collected as part of energy analysis. Collecting energy analysis data with Intel® SoC Watch is available for target Android*, Windows*, or Linux* devices. Import and viewing of the Intel SoC Watch results is supported with any version of the VTune Profiler.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100448.65/warc/CC-MAIN-20231202172159-20231202202159-00272.warc.gz
|
CC-MAIN-2023-50
| 1,099
| 3
|
https://docs.gamebar.me/gmsDocs/source/_build/3_scripting/4_gml_reference/audio/audio_create_stream.html
|
code
|
With this function you can create a new sound index which can then be used in the regular audio functions to stream audio directly from an external OGG file source. The function requires the filename (which can be an included file, for example) and will return the new sound index for use. Note that after you no longer need the sound, you should call the function audio_destroy_stream() with the sound index to remove it from memory; otherwise you may get a memory leak which will slow down and eventually crash your game. NOTE: This functionality is not available for the HTML5 target platform. Important: this function is not available in Trial License products.
| filename | The file (OGG only) to stream the audio from. |
snd = audio_create_stream("Music/Track1.ogg");
audio_play_sound(snd, 0, true);
The above code creates a new sound index in the variable "snd" from the given file, then plays this sound.
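Following the note above, once the music is no longer needed the stream should be freed again. For this example that might be:

```gml
audio_stop_sound(snd);      // stop playback first
audio_destroy_stream(snd);  // free the stream to avoid a memory leak
```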
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00718.warc.gz
|
CC-MAIN-2023-06
| 907
| 5
|
https://artechyapi.com/why-consider-functional-programming/
|
code
|
Functional programming (FP) has been around for the past six decades, and at the moment, attempts to overcome the ubiquitous dominance of object-oriented programming (OOP) continue. With the growth of popularity for machine learning (ML) and big data analysis, FP has gained demand due to the ease with which pure functions can be implemented. The functional paradigm also makes it easier to track, test, and maintain code for complicated assets, like data analysis, setting the stage for active use in the near future.
So, where does the interest in functional programming come from, and why is it worth learning FP languages? Let’s figure it out.
Crest of the wave of IT trends and the use of FP
In the IT world, nothing happens just like that. One thing clings to another, and now all the hottest trends are interconnected.
If we recall the most sensational trends of the 2010s, these, of course, will be AI, IoT, Big Data, and Blockchain. They are on everyone’s lips, and everyone knows their potential and key features. And it is some of these trends that have catalyzed the rise in popularity of functional programming among developers.
Currently, the problem of parallel processing and working with large data streams is very acute, and one such example is work with Big Data. By parallelizing the processing of this data, you can get the desired result in a split second, which is critical in the real world. Plus, do not forget about decentralized (distributed) computing, blockchain, and others, which are, in essence, rather complex mechanisms. For such calculations, FP is most suitable due to the principles of functional programming (such as pure functions, for example). The use of FP techniques facilitates parallel code execution and maintenance. From this we can safely conclude that large IT companies tend to use functional programming more and more.
Front-end developers were left with two options:
- Use JS with all its disadvantages.
- Find a solution to change the situation.
And that is when TypeScript came on the stage.
What is TypeScript
The major advantage of introducing TypeScript is its strong typing. Typed variables help avoid bugs in code on the go, as the compiler monitors the correctness of all the implemented variable types, their consistency, and inheritance. Check this post if you need more details about TS benefits.
Functional programming languages
Clojure, Elixir, Erlang, Elm, F#, Idris, Nix, Agda, and Haskell are the languages most often mentioned as the functional programming family. And they will not lose their popularity for many years to come. Haskell is the most powerful language, my favourite one, but over time, later ones, such as Clojure, became intertwined with it, forming a general picture of the evolution of FP.
Let’s look at a few of the most popular and progressive functional programming languages.
Haskell is an unusual language from the point of view of those accustomed to Java, C ++, Python, or any other object-oriented language. The point is that Haskell is a functional language.
Haskell is too complicated for ordinary things, and you don’t need it for a simple website; that would be like reinventing the wheel. But it is great for building the server part, which will take over all the complex calculations, or decentralized layers for transactions covering hundreds of thousands of operations. Haskell is best at accurate math and logic, so the better you know math, the easier it will be for you to code in Haskell.
Almost everything in Haskell is done through functions. The task of the programmer is to find the mathematical function, which will be the solution for the task, and to describe the function to the compiler, mentioning:
- what parameters can come to the function,
- what to do with them,
- in what form the machine needs to give the result.
Haskell is a language that supports lazy evaluation. This means that it will calculate the required value in any function not when the programmer runs it, but when this value is really needed in the calculation.
For example, we have a function that returns some value after being called. Haskell will not evaluate it if the value is not needed right now or is not used in a function call. It will wait until the function value is required, and only then calculate it.
Lazy evaluation helps reduce the load on resources and makes programs faster and more efficient. If you write a calculator that does all the math but only uses addition, Haskell won’t even pay attention to the rest. It will know that you have code that can still multiply and divide, but it will not do anything with it yet.
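A minimal sketch of this behavior (names invented for illustration): the expensive product below is bound but never demanded, so it is never actually computed.

```haskell
-- 'expensive' would take real work to compute...
expensive :: Integer
expensive = product [1 .. 100000]

main :: IO ()
main = do
  -- ...but binding it does not evaluate it; nothing ever demands _x,
  -- so the product is never calculated.
  let _x = expensive
  putStrLn "finished without computing the product"
```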
What is Haskell good for?
- Text processing and parsing. In Haskell it is easy to encode the rules by which any language's constructions are built, and to teach it to analyze that language. For example, it can split sentences into words and prepositions, find connections between them, check whether everything is written without errors, or flag invalid constructions. This works both for natural languages (English or German) and for programming languages, even for the design of new ones.
- Since Haskell does everything strictly according to a set of rules, it is an excellent tool for writing compilers. The task of any compiler is to convert code written in a high-level programming language into code the machine can execute. Haskell is great at this: GHC (the Glasgow Haskell Compiler, the main compiler for the Haskell language) is itself written in Haskell.
- Financial instruments. The main advantages of using Haskell for financial instruments are speed, guaranteed accuracy, and the absence of bugs or discrepancies that could lead to data leakage. Software written in Haskell includes systems for banking transactions, stock trading, risk analysis, and financial monitoring tools.
- Industrial applications. Haskell is very flexible at defining complex rules and processing data according to them, which is exactly what enterprises need to build decision-support or internal-audit systems. This relieves the burden on people and lets algorithms find points for industrial growth more efficiently.
Elixir is a dynamic functional language designed for building scalable and maintainable applications. It is powered by the Erlang VM ecosystem and is used by Heroku, WhatsApp, Klarna, and other projects for distributed, fault-tolerant applications. Every element of an application is an expression, and Erlang functions can be called without a runtime penalty, since Elixir compiles to Erlang bytecode (and vice versa).
Elixir was released in 2012 by José Valim and was supported for several years only by its creator. At some point, its popularity grew so much that many companies began to use it in their projects seriously.
Elixir runs on top of the Erlang virtual machine, which is widely recognized for its unique capabilities for building fault-tolerant and distributed systems. It is used both in embedded devices, such as routers, and for real-time applications (games, instant messengers).
Almost everything said about Elixir is really the merit of the Erlang virtual machine. Elixir was conceived as a language that brings to the Erlang world what Erlang itself lacked. First of all, these are tools that raise the level of abstraction (Struct, Protocol), allow you to write more concise code (the pipe operator, the with construction), and make it convenient to manage a project and its dependencies (mix). There is also metaprogramming: a powerful macro system that lets you create DSLs. A prominent example of such a DSL is the Ecto database library.
Elixir is a functional language that is easy to learn and efficient to use. The Phoenix web framework, strongly reminiscent of a simplified Ruby on Rails, was created on top of it. It is an open-source language and is available on GitHub.
As you have probably understood by now, you should not be afraid of functional programming. A little diligence and curiosity, and you will have mastered FP. Given the prevalence of functional programming, you can be confident about your professional future (with due diligence), as you will surely be able to use your newly acquired skills.
In addition, I want to say that learning FP languages changes your perception of programming in general, helps you look at problems from a new angle, develops abstract thinking, and lets you find non-standard and effective solutions. If you are searching for the job of your life, functional programming may well be it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511386.54/warc/CC-MAIN-20231004152134-20231004182134-00489.warc.gz
|
CC-MAIN-2023-40
| 8,622
| 35
|
https://forum.arduino.cc/t/poisson-distribution-queues-on-arduino/584833
|
code
|
I am thinking of automating a supermarket queuing mechanism by implementing a Poisson queue distribution which will be synchronised with the other queues.
For instance, 5-10 persons per queue at one counter. At each counter, an LCD will display the time taken per customer for the corresponding queue.
I am looking for help writing the code.
Problem: I have a supermarket where clients need to search for and then select the shortest queue. Suppose there are 4 queues, each containing around 5-10 persons. I am thinking of automating this search by implementing a queuing mechanism (e.g. a Poisson distribution). The queuing mechanism can then be synchronized with the other queues at each counter. An LCD (as a display) can be used to inform clients which queue they should join.
So, I want to know if this is possible.
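As a feasibility sketch (plain Python rather than Arduino code; the arrival rate, service rate, and counter count are made-up assumptions), the shortest-queue rule with Poisson arrivals might look like this:

```python
import random

def shortest_queue(queues):
    """Index of the queue with the fewest waiting customers."""
    return min(range(len(queues)), key=lambda i: queues[i])

def simulate(minutes=60, counters=4, rate_per_minute=3.0, seed=42):
    """Toy simulation: Poisson arrivals each minute (generated via
    exponential inter-arrival times), each customer joins the shortest
    queue, and every counter serves one customer per minute."""
    rng = random.Random(seed)
    queues = [0] * counters
    for _ in range(minutes):
        t = rng.expovariate(rate_per_minute)
        while t <= 1.0:                      # arrivals within this minute
            queues[shortest_queue(queues)] += 1
            t += rng.expovariate(rate_per_minute)
        queues = [max(0, q - 1) for q in queues]  # one served per counter
    return queues

final_queues = simulate()
print(final_queues)
```

On a real Arduino the same loop logic would run in `loop()`, with the chosen queue number pushed to each counter's LCD.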
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00111.warc.gz
|
CC-MAIN-2022-21
| 848
| 5
|
https://gis.stackexchange.com/questions/255455/classify-a-slope-raster-in-image-analysis
|
code
|
I have a DEM in a Mosaic Dataset covering a large area (300 x 200 km). I want to create a classified slope raster showing slopes of say 0 - 10 degrees, 10 - 20 degrees, and then 20 - 90 degrees. Because the dataset is so large I want to avoid having to create a separate slope raster dataset, but instead use the Image Analysis so I can visually see where the slope classes are.
In the Image Analysis window I can create a Slope Function for my DEM and display it using the Stretched color ramp in the Symbology tab of the Layer Properties. I want to use the Classified renderer, but Arc requires the data to have a histogram. When I say Yes, it chugs away computing the statistics and histogram. For really large datasets this can take a very long time. I'm wondering where the histogram and statistics are stored for an Image Analysis layer, because this Compute Histogram dialog keeps re-appearing.
I thought there could be a way to create this with a Function chain. There are functions called Classify, Colormap, and Remap that sound promising, but I haven't gotten any of them to work.
The Classify function wants a .ECD or .ACT file as input. How do I create a ECD or ACT file or is there a better way to accomplish this?
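For what it's worth, outside ArcGIS the reclassification described here is just binning by break values; a small numpy sketch of the same Remap-style logic (the slope array is invented for illustration):

```python
import numpy as np

# Hypothetical slope values in degrees; in ArcGIS these would come from
# the Slope function applied to the DEM.
slope = np.array([[ 2.0,  8.5, 15.0],
                  [12.3, 25.0, 47.0],
                  [ 0.0, 19.9, 90.0]])

# Break values: 0-10 -> class 1, 10-20 -> class 2, 20-90 -> class 3.
breaks = [10, 20]
classes = np.digitize(slope, breaks) + 1

print(classes)
```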
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817463.60/warc/CC-MAIN-20240419234422-20240420024422-00088.warc.gz
|
CC-MAIN-2024-18
| 1,223
| 4
|
https://forums.centos.org/search.php?author_id=117695&sr=posts
|
code
|
Search found 1 match
Search found 1 match • Page 1 of 1
- 2018/06/07 08:18:50
- Forum: CentOS 6 - Software Support
- Topic: Can't start ssh service in centos v6.9
- Replies: 1
- Views: 741
We have a Linux server and cannot log in via ssh because its service is stopped. When we try to start it, we get the following error. Error: /etc/ssh/sshd_config line 23: directive 'protocol' is not allowed within a match block https://image.ibb.co/bFa2A8/ssh.png please help us recover this prob...
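For context, that sshd error usually means a global directive (here Protocol on line 23) appears after a Match block has begun: every line following a Match keyword belongs to that block until the next Match, and only a limited set of keywords is allowed inside one. A hedged sketch of the fix (the actual directives depend on the real file):

```
# /etc/ssh/sshd_config -- global directives must come BEFORE any Match block
Protocol 2            # move this above the first Match line
Port 22

# Match blocks go at the end of the file; only per-connection
# keywords (e.g. PasswordAuthentication) are allowed inside them.
Match User backup
    PasswordAuthentication no
```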
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145941.55/warc/CC-MAIN-20200224102135-20200224132135-00342.warc.gz
|
CC-MAIN-2020-10
| 494
| 8
|
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206788275--ANN-ResinIntegration-0-5-available-for-Aurora
|
code
|
Version 0.5 of the ResinIntegration plugin is available at
It provides support for both Resin 2.x and Resin 3.x.
This version is considered alpha, very little testing has been
done and there are some known problems.
It is a simple modification of the Tomcat integration sources
made to work with Aurora and Resin 2.x / Resin 3.x.
It does not support debugging of JSPs yet.
Sources will be available soon (I just need to clean it up first).
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401004.26/warc/CC-MAIN-20200528232803-20200529022803-00518.warc.gz
|
CC-MAIN-2020-24
| 439
| 8
|
https://www.marcusmth.com/app-store-connect-build-versions
|
code
|
There is some confusion around when to create a new app build version.
In App Store Connect, once an app build has been sent for review, no changes can be made to that build number. A new build version (e.g. 3.2.2) must be set for the next build created, whether it is a TestFlight or a release build. A way around this is to reject the current build; then it is possible to push another build to
This caused me some headache before I understood it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817128.7/warc/CC-MAIN-20240417013540-20240417043540-00614.warc.gz
|
CC-MAIN-2024-18
| 434
| 4
|
https://mettl.com/en/core-functions/it-tests/?lang=en
|
code
|
Simplify your technical hiring and training with automated online coding tests. Mettl's IT test library comprises a variety of coding tests, including a C developer test, a Java developer test, a C++ developer assessment, software testing, Python development, and more.
Mettl’s Online Assessments give you the flexibility to choose from a set of standardized tests or custom build your own test from our library of Programming Assessments. Assess candidates on various skills like Java Spring, ReactJS, C# and much more within real coding environments. We have a diversity of question types which includes MCQs, case study simulators, coding simulators, etc.
With data-driven evaluation methodology, intuitive user interface and customized reports, recruit the best developers via our online IT tests. Rate programmers on skills rather than experience.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629632.54/warc/CC-MAIN-20210617072023-20210617102023-00501.warc.gz
|
CC-MAIN-2021-25
| 855
| 3
|
https://onesmartclick.com/programming/top-10-websites-for-perl-programming.html
|
code
|
Introduction to Perl Programming
Perl is a powerful and dynamic programming language used for various applications like web development, system administration, data analysis, and many more. It has a rich set of libraries and modules that make programming easier and more efficient.
If you are new to Perl or just looking for resources to help you in your journey of learning Perl, this blog post is for you. Here we have compiled a list of the top 10 most useful websites for Perl programming. These websites offer tutorials, resources, and information on Perl programming that will be useful for both beginners and advanced users.
Top 10 Websites for Perl Programming
1. Perl.org: The official website of the Perl programming language, containing all the information and resources you need to get started with Perl.
2. PerlMonks: A community-driven website where Perl developers can share their knowledge, ask questions, and help each other.
3. Perl Tutorial: A comprehensive tutorial on Perl programming, including syntax, functions, data structures, and more.
4. Perl Maven: A blog that provides tutorials, tips, and tricks for Perl programming, as well as information on various Perl modules.
5. Perl Weekly: A weekly newsletter that features articles, tutorials, and resources on Perl programming.
6. CPAN: The Comprehensive Perl Archive Network, a repository of Perl modules, scripts, and documentation.
7. Perl How-To: A website that provides tutorials and articles on Perl programming for beginners and advanced users.
8. Learn Perl: A website that offers online courses and tutorials on Perl programming, including basic and advanced topics.
9. Perl Beginner: A website that provides comprehensive tutorials and resources for beginners who are starting out with Perl programming.
10. Perl News: A website that features news and updates on Perl programming, including new releases, modules, and more.
Tips for Perl Programming
- Start with the basics: Before diving into complex topics, make sure you have a good understanding of the basics of Perl programming.
- Practice, practice, practice: The more you practice, the better you will become at Perl programming. Try to work on small projects and gradually move on to larger ones.
- Use Perl modules: Perl has a rich set of libraries and modules that can make your life easier. Make use of them as much as possible.
- Read code: Reading other people’s code is a great way to learn and improve your skills. Look for code examples on websites like GitHub or CPAN.
- Get involved in the Perl community: Participate in forums, attend meetups, or join online communities. The Perl community is friendly and always willing to help.
In conclusion, these top 10 websites for Perl programming will be a valuable resource for anyone looking to learn or improve their skills in Perl programming. Don’t be afraid to explore and experiment with the language.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945333.53/warc/CC-MAIN-20230325130029-20230325160029-00286.warc.gz
|
CC-MAIN-2023-14
| 2,921
| 21
|
https://medium.com/wegaw/time-series-forecasting-in-python-from-zero-to-neural-networks-in-only-a-month-8e12d6a6e2f4?source=rss----265cbb7b127a---4
|
code
|
Time Series Forecasting in Python: From Zero to Neural Networks in (Only) a Month
At the best of times, data science can be complicated, opaque, and dense with jargon, especially for those just starting to learn about it. This, at least, was my experience entering the field as a student intern for Wegaw with relatively limited prior knowledge in the details of how exactly you’re supposed to do something with a dataset.
However, with some research, the very helpful guidance of someone more experienced, some understanding of high-school statistics, and a few very useful Python libraries, I was ultimately able to make a couple of reasonably good predictions. Described below is the complete process I went through to get these predictions, in the hope that someone in the position I was in about a month ago will find it useful.
The data of interest in this case is the snow depth in certain measuring sites in British Columbia (BC). Predictions of future snow depth through methods like those described below can be a piece of a larger puzzle of tracking snowfall and snowmelt patterns in the region. That being said, a similar process can be used for almost any kind of data.
Finding the Data
Thankfully, this was the easy part. Like most developed countries with any amount of snowfall, Canada has a government agency that keeps reasonably good records of snowfall over time. In this case, the provincial government of BC has publicly available snow data that can be downloaded here.
This is the part of the article where I have to stress the principle of “Garbage In, Garbage Out”. It’s absolutely vital when doing this kind of work that relatively clean and reliable data be chosen so that results are accurate. It might therefore be worth going through a couple of different datasets to find ones without significant gaps: problems with the data can be fixed, but you can’t make data appear from thin air.
For this project, since I was intending to create a multivariate model, I picked three datasets measuring snow depth, temperature, and snow water content on an hourly basis from a single station near Aiken Lake, BC. The Aiken Lake data was in csv format, which I dealt with using the pandas library in Python.
Cleaning the Data
This was by far the most arduous process in the entire project, not necessarily because it was the most technical, but because it was the least interesting.
The first thing I did was get rid of all NaN values from the dataset; even the best datasets will have a couple, and getting rid of them meant working with only numerical data going forward.
To do this, two options presented themselves: I could either delete the rows containing NaN values, or just replace all NaN values with the mean of the previous two values. I chose the latter solution based on the assumption that the weather wasn’t likely to vary significantly from one hour to the next. Since there weren’t too many continuous NaN values, this wasn’t a problem.
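That replacement rule can be sketched like this (a hypothetical re-implementation, since the article's own code isn't shown):

```python
import numpy as np
import pandas as pd

s = pd.Series([10.0, 12.0, np.nan, 13.0, np.nan, np.nan])

# Replace each NaN with the mean of the two values before it; filling as we
# go also handles short runs of consecutive NaNs.
vals = s.tolist()
for i in range(2, len(vals)):
    if pd.isna(vals[i]):
        vals[i] = (vals[i - 1] + vals[i - 2]) / 2
cleaned = pd.Series(vals)

print(cleaned.tolist())  # [10.0, 12.0, 11.0, 13.0, 12.0, 12.5]
```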
Having done this, we can finally look at our data for the very first time:
Not very pretty.
The first thing that stands out is the extreme variation, in both the positive and negative directions. By scrolling through the raw data in Excel, it became obvious that this stemmed from an error with the order of magnitude of the data: instead of 8,000 cm of snow, there was really only 80.
The first step to fixing this was to take the absolute value of all the data; getting negative values for snow depth seemed unlikely. The next step was trying to find the right power of 10 to multiply/divide any outliers by so that they fell in line with the rest of the data.
Finding the outliers themselves wasn’t the main issue: because I was dealing with an order of magnitude problem, I assumed (probably correctly) that if the depth of the snow increases or decreases by a factor of 10 within an hour, something’s gone wrong.
The slightly harder part was figuring out what the order of magnitude is supposed to be. This is a relatively easy task for a person looking through the file in Excel, but a somewhat harder one to program efficiently. This is because the snow data I used is precise to the millimetre, which means that it ranges from 0.1 cm all the way up to about 120 cm: four orders of magnitude.
The solution I ended up implementing is as follows: if a correction is in order, Python checks if the previous value was corrected. If it was, it corrects the current value by the same amount as it corrected the previous one. Otherwise, it assumes that the order of magnitude of the previous value is the same as the true order of magnitude of the current value. This solution breaks if consecutive values are wrong by different orders of magnitude, but this approach was the simplest one I tested which still worked consistently.
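A hypothetical sketch of that correction rule (the article's real code isn't shown, and the 10x jump threshold is an assumption):

```python
import math

def fix_magnitudes(values, jump_factor=10.0):
    """Correct order-of-magnitude glitches (e.g. 8000 cm that should be 80 cm).
    Assumes all values are positive, as in the cleaned snow data."""
    fixed = [values[0]]
    last_factor = 1.0
    for v in values[1:]:
        prev = fixed[-1]
        if prev > 0 and v > 0 and (v / prev >= jump_factor or prev / v >= jump_factor):
            if last_factor != 1.0:
                # The previous value was corrected: reuse its factor.
                v *= last_factor
            else:
                # Otherwise match the previous value's order of magnitude.
                shift = round(math.log10(prev) - math.log10(v))
                last_factor = 10.0 ** shift
                v *= last_factor
        else:
            last_factor = 1.0
        fixed.append(v)
    return fixed

corrected = fix_magnitudes([78.0, 80.0, 8000.0, 8100.0, 82.0])
print(corrected)
```

As described above, this breaks if consecutive values are wrong by different orders of magnitude, but it handles the common single-factor runs.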
The final correction I had to make was to remove any outliers (defined as values more than 3 standard deviations away from the mean) from the data that didn’t seem to be caused by any issues in the order of magnitude of the measurement; these were just random values that appeared in the data for no apparent reason. These were dealt with in the same way as NaN values: by replacing the outlier with the mean of the two previous values.
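The outlier rule can be sketched the same way (all numbers here are invented):

```python
import numpy as np

def replace_outliers(values, n_std=3.0):
    """Replace values more than n_std standard deviations from the mean with
    the mean of the two preceding values (the same rule used for NaNs)."""
    out = np.asarray(values, dtype=float)
    mu, sigma = out.mean(), out.std()
    for i in range(2, len(out)):
        if abs(out[i] - mu) > n_std * sigma:
            out[i] = (out[i - 1] + out[i - 2]) / 2
    return out

data = [50.0, 52.0, 51.0, 50.0, 53.0, 52.0, 51.0, 50.0, 52.0, 53.0,
        5000.0, 51.0, 50.0, 52.0, 53.0, 51.0, 50.0, 52.0, 51.0, 53.0]
filtered = replace_outliers(data)
print(filtered[10])  # the 5000.0 spike is replaced by (52 + 53) / 2
```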
Having done all this to each of the three datasets, the cleaned data looks like this:
(Note: the temperature here is in Kelvin in order to keep the values from all the datasets positive. This made cleaning them easier.)
Using the handy pandas.DataFrame.merge() function, I combined and downloaded these as one csv file.
Preparing the Data
Before running the models, however, there is one final matter I had to attend to. Almost all classical statistical models, including the ARMA and VAR models that I ended up using, require a stationary time series. Effectively, this means that the time series can't have any time-dependent patterns or trends (a more detailed explanation of the concept of seasonality can be found here).
Looking at the data, it's pretty obvious that a seasonal pattern is at play here; this is what we would expect from meteorological data like this. While it is possible to run seasonal ARMA models, they're pretty power intensive, and the seasonality must be corrected for the VAR model to work properly anyway. This is most easily done by subtracting from each value in the dataset the value it had a year prior (e.g. snow depth on the 1st of January 2021 at 00:00:00 is now equal to itself minus the snow depth on the 1st of January 2020 at 00:00:00). This is a process known as seasonal differencing.
Unfortunately, this means that the first year of our data is unusable, but there is still enough remaining for the models to train on.
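In pandas, seasonal differencing at an hourly resolution is a single shift; a sketch on a synthetic, purely seasonal series (the real data isn't reproduced here):

```python
import numpy as np
import pandas as pd

hours_per_year = 365 * 24
idx = pd.date_range("2019-01-01", periods=3 * hours_per_year, freq="h")
t = np.arange(len(idx))

# A fake snow depth with a clean one-year cycle.
depth = pd.Series(40 + 30 * np.sin(2 * np.pi * t / hours_per_year), index=idx)

# Subtract the value one year earlier; the first year becomes NaN and is dropped.
stationary = (depth - depth.shift(hours_per_year)).dropna()

print(len(stationary), float(stationary.abs().max()))
```

For this purely seasonal signal the differenced series is essentially zero everywhere, which is exactly what removing the seasonal component should do.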
The stationary data looks like this:
To make absolutely sure the data was stationary, I ran a unit root test on each dataset. I used the augmented Dickey-Fuller test provided by the statsmodels library (statsmodels.tsa.stattools.adfuller()), although other options are available and equivalent.
For each of the datasets, the Dickey-Fuller test rejected the hypothesis that they have a unit root, implying that the data is stationary. More information on how unit root tests work and how they can be implemented can be found here.
I was now ready to run the models!
Modelling the Data
I used three approaches to try to forecast snow depth: a classical univariate, a classical multivariate, and a neural network approach. A different model was used for each.
Classical Univariate: ARMA
An ARMA (Autoregressive Moving Average) model uses past values of the dataset to forecast future values of the same dataset by combining different models into one. Consequently, I only used snow depth data for this model, since that’s what I wanted to predict.
Fully understanding what each component of the ARMA model does is essential in picking the right hyperparameters. Autoregressive (AR) models are models which use previous values to predict future values. For our purposes, the important thing we have to keep track of is how many previous values are used, also called the “order” of the model. As an example, an order 3 autoregressive model would use the three previous values of the dataset to predict the next one. (For more details, here’s a website I found useful.)
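The autoregressive idea can be illustrated without statsmodels by simulating an AR(1) series and recovering its coefficient with ordinary least squares (a toy sketch, not the article's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate x_t = 0.8 * x_{t-1} + noise ...
n, phi = 5000, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# ... then regress x_t on x_{t-1} to recover the coefficient.
X, y = x[:-1], x[1:]
phi_hat = (X @ y) / (X @ X)
print(phi_hat)  # close to 0.8
```

Higher-order AR(p) models do the same thing with p lagged columns instead of one.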
In a similar way, a moving average (MA) model uses past errors in the model to predict future values. Once again, the “order” of the model refers to how many steps you take back in time in order to predict the next value. (For more details, the same website as above has a pretty good explanation of MA models as well.)
An ARMA model is a combination of an AR and MA model, and requires two hyperparameters to work: the order of the AR model (p) and the order of the MA model (q), written as ARMA(p, q). Although there are more technical and systematic ways to pick these two hyperparameters, the easiest is to just try out a bunch of values and see which ones work best. For this, however, I had to start coding.
As with most statistical models, I was able to find a library that did most of the heavy lifting for me: the statsmodels.tsa.arima.model.ARIMA() class, which takes as parameters a time series and an order in the form of a tuple (p, d, q).
While this (an ARIMA model) is not the same as an ARMA model, we can set d equal to zero, and keep p and q as the orders of the AR and MA models respectively to create an ARMA model. If the data I used wasn’t stationary, I would have had to consider d more carefully, but the seasonal differencing was enough in this case to ensure stationarity.
I then split the dataset into train and test data, choosing the arbitrary cut-off point of January 1st, 2021, and applied the model to my train data. Testing a couple different values of p and q, and using RMSE to measure the quality of each resulting model, I arrived on an ARMA(1, 2) model as my solution.
Once the model was fitted, all I had to do was undo the seasonal differencing by adding the value of the previous year to each data point. This gives the following results:
Classical Multivariate: VAR
A vector autoregression model is like an autoregression model but using several time series to predict each other. In this case, I used past values of snow depth, temperature, and snow water content to make predictions about future values of snow depth. Like an AR model, it has an “order”, referring to how many previous values of each time series it will consider.
The actual programming is not too different from the ARMA: I once again divided the datasets into train and test, the only difference being all three time series were used instead of only snow depth. These help predict snow depth, assuming that they are all correlated with each other (which, looking at their plots above, they seem to be).
Like with the ARMA model, I used the statsmodels.tsa.api.VAR() class to make predictions. Even more conveniently than the ARMA model, the statsmodels VAR class allows you to input a maximum order for the model, and simply checks every order up to that value, picking the right one using an information criterion that must be specified (I used the AIC, but others are available). These criteria effectively judge the quality of the model by balancing the quality of the predictions against the quantity of parameters (to prevent overfitting).
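Under the hood a VAR fit is also least squares, just with a coefficient matrix; a toy VAR(1) on simulated data makes the mechanism concrete (order selection and information criteria omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a 2-variable VAR(1): z_t = A @ z_{t-1} + noise.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
n = 4000
z = np.zeros((n, 2))
for t in range(1, n):
    z[t] = A @ z[t - 1] + rng.normal(size=2)

# Stack lagged observations and solve for the coefficient matrix.
lagged, current = z[:-1], z[1:]
A_hat, *_ = np.linalg.lstsq(lagged, current, rcond=None)
print(A_hat.T)  # close to A
```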
The final step was to correct the seasonal differencing, and I got the following predictions:
(Note: since snow depth was what I was primarily interested in, that's the only thing I plotted, but the VAR model makes predictions for each of the time series I inputted.)
The most striking thing about this model when compared to the ARMA model is its extreme degree of similarity. Because they have (slightly) different RMSEs, it’s clear that different models were run, but the extreme similarity does indicate that something may have gone wrong.
My guess is that the other time series added to the model (temperature, snow water content) were not sufficiently correlated with snow depth, so the model treated them as insignificant and basically just ran an autoregression model on the snow depth data.
In any case, however, a RMSE of about 13 cm is also pretty good in this case too.
Neural Network: LSTM
An LSTM (Long Short-Term Memory) model is a neural network, and as such was by far the most complicated model I tried to implement. In contrast with the other models, LSTM doesn't actually require any differencing, since stationarity isn't necessary.
I used a simple, “vanilla” LSTM; I was using a smaller amount of relatively uncomplicated data, so a more complicated model was unnecessary. In order to make it a multi-step model (with predictions more than one hour ahead), I used this vanilla LSTM as a vector output model: a kind of LSTM model that takes a vector as both an input and an output, with the output vector being the length of the prediction that we want (a length 30 vector corresponds to 30 hours ahead). For the model itself, I used the keras library for machine learning, and found an extremely helpful guide for how to write the code itself. Finally, I chose to make it a univariate model, and only used snow depth.
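The "vector in, vector out" framing comes down to slicing the series into overlapping windows before training; a sketch of that preprocessing step (the window lengths here are arbitrary examples):

```python
import numpy as np

def make_windows(series, n_in, n_out):
    """Split a univariate series into (input, output) pairs for a
    vector-output model: n_in past steps in, n_out future steps out."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i : i + n_in])
        y.append(series[i + n_in : i + n_in + n_out])
    return np.array(X), np.array(y)

daily_depth = np.arange(100, dtype=float)   # stand-in for daily snow depth
X_train, y_train = make_windows(daily_depth, n_in=14, n_out=30)
print(X_train.shape, y_train.shape)  # (57, 14) (57, 30)
```

Each output row is the 30-day-ahead prediction target for the corresponding 14-day input window.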
The main task when creating such a model was picking the right hyperparameters. There are three main ones: the number of epochs, the learning rate, and the number of units in the LSTM model. The only real way to pick any of these is to run a series of tests to find which one results in the lowest RMSE for the test data.
Once again, it is helpful to understand what each of these hyperparameters does so that they can be tuned correctly. The number of epochs is how many times the model passes over the training data to correct its parameters, usually on the order of 100-2,000. The units are the number of neurons in the LSTM layer, usually on the order of 10-200. The learning rate is a measure of how quickly the model changes to fit the problem, and has a value between 0 and 1.
As mentioned above, the only way to get specific values for hyperparameters is to test a bunch of them. LSTM models can take a while to work, especially with a large dataset, high number of epochs, or high number of units. Consequently, it is absolutely worth the time to automate the process: check each of the values for each of the hyperparameters automatically, and just get the value with the lowest RMSE for each one. As a final note, LSTM models contain a random element (the initial setting of all the parameters), so running it a couple of times and taking the average of the RMSE ensures that you’re not dismissing a perfectly good value for one of the hyperparameters by accident.
In my case, I had to turn my hourly data into daily data to get a prediction that goes any length into the future. Additionally, I could only try to predict a month in the future, since any more would take far too long to run. After testing in the way I described above, I came across values of 500 for the epochs, 75 for the number of units, and 0.01 for the learning rate. This gave me the following result, with an RMSE of 6.41(!):
By contrast, the ARMA model over the same period of time (using hourly data) looks like this and has an RMSE of 11.54:
It’s hard to know whether scaling up the problem for the LSTM (six or seven months, rather than just one) will result in the same quality of prediction, but doing so requires more time and computing power than I currently have on my hands. However, I think that, at least on this scale, this implementation of an LSTM can definitely be considered successful.
Before the end of the article, I just wanted to mention some generally useful tips for anyone who’s interested in starting.
- If you ever find yourself thinking “why hasn’t someone already written a method for this?” someone probably already has.
When I first started, I spent about three hours creating and testing a function that takes a string and turns it into a datetime.datetime() object, before discovering the pandas to_datetime() method. Don't make the same mistake I did.
- Understand the models you’re using
There are many, many websites and tutorials explaining the models described above with examples that you can very easily copy-paste into your IDE. This is fine, as long as you don’t then spend a few hours guessing different hyperparameters without even knowing what order of magnitude they’re supposed to be.
- Don’t be afraid to use other people’s work
This is probably the most important piece of advice I’d give to someone just starting: someone more experienced than you has already done a lot of the hard part for you. There are dozens of different libraries that do most of the technical computational part of the process behind the scenes. As long as you learn to use them properly, and have a vague understanding of what you need to do to make them work, you should implement them in whatever way you find useful.
The process of finding out more about data science and modelling through practical examples like these has been incredibly interesting and intellectually stimulating, and I would recommend to anyone interested in this field to start by getting some real data and working with it in a way similar to what I did.
Finally, I want to thank everyone at Wegaw for this opportunity, but especially Daria Ludtke for organising the internship, and Thomas James for his patience in taking me through some of the more complicated parts of the process.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648911.0/warc/CC-MAIN-20230603000901-20230603030901-00302.warc.gz
|
CC-MAIN-2023-23
| 17,500
| 74
|