| url (string, 13-4.35k chars) | tag (string, 1 class) | text (string, 109-628k chars) | file_path (string, 109-155 chars) | dump (string, 96 classes) | file_size_in_byte (int64, 112-630k) | line_count (int64, 1-3.76k) |
|---|---|---|---|---|---|---|
https://www.reddit.com/r/turning/
|
code
|
We take square stuff and make it round!
Welcome to /r/turning! The Reddit corner for all things woodturning. If you have questions, projects, updates, gripes, or any other spinny-wood-related thing, this is the place to post it.
We love to see your projects (both successes and failures)
Above image credit = Uglulyx
Header image credit (left to right) /u/MrFurrypants, /u/jclark58, /u/UndocumentedAmerican, /u/tigermaple, /u/Guardianoflives, /u/Fuck_Off_Cancer, /u/curiot,
Want To Start Turning?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00478.warc.gz
|
CC-MAIN-2021-04
| 495
| 6
|
http://thegirl-withthemost-cake.blogspot.com/2010/02/bad-start.html
|
code
|
I am very, very sick, so I am having to sleep constantly (at least 20 hours a day). Hopefully this is not the swine flu (H1N1).
Chicky is still asking me to watch the kids, even though I am at death's door.
I will be back soon! Don't forget me!
ciao for now!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592001.81/warc/CC-MAIN-20180720232914-20180721012914-00076.warc.gz
|
CC-MAIN-2018-30
| 255
| 4
|
http://core-cms.prod.aop.cambridge.org/core/search?filters%5BauthorTerms%5D=Leonhard%20Meyer&eventCode=SE-AU
|
code
|
We report on recent polarimetric observations of the 18 ± 3 min quasi-periodicity present in near-infrared flares from Sagittarius A*. Observations in the K-band allow us a detailed investigation of the flares and their interpretation within the hot spot model. The interplay of relativistic effects plays a major role. By simultaneously fitting the lightcurve fluctuations and the time-variable polarization angle, we constrain the parameters of the hot spot model, in particular the dimensionless spin parameter of the black hole and its inclination. We consider all general relativistic effects that influence the polarization lightcurves. The synchrotron mechanism is most likely responsible for the intrinsic polarization. We consider two different magnetic field configurations as approximations to the complex structure of the magnetic field in the accretion flow. Given the quality of the fit, we suggest that the spot model is a good description of the origin of the QPOs in NIR flares.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369420.71/warc/CC-MAIN-20210304143817-20210304173817-00102.warc.gz
|
CC-MAIN-2021-10
| 1,014
| 1
|
http://freecode.com/tags/arcade?page=1&sort=updated_at&with=89&without=
|
code
|
JGame Flash is an ActionScript 3 port of the JGame 3.5 API. It can be compiled with the free Flex toolkit. A Java-AS3 translator is included to make porting games easier. The goal of this project is to eventually enable JGame Java games to be converted (partially) automatically to ActionScript 3. JGame Flash works on Android Flash 10.1 and supports accelerometer input.
Star Defender 4 is a space shooter that still has all the best features of the Star Defender series. Face tons of new enemies with unique styles of behavior and new ways of attacking. Use new Star Defender 4 weapons (machine gun, saw, flame thrower, acid bomb, and cutter) as well as the best weapons from the previous game (parasitron, lasers, infector, ball lightning, missiles, homing laser, and barriers). Blast through more than 100 levels, 8 original missions, and of course huge impressive bosses at the end of every mission.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705310619/warc/CC-MAIN-20130516115510-00051-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 903
| 2
|
http://www.math.lsa.umich.edu/seminars/combin/fall03/sep05.html
|
code
|
The University of Michigan Combinatorics Seminar
Given a finite poset P, there is an associated distributive lattice J(P) consisting of the order ideals of P ordered by inclusion. In this talk I will consider a signed analogue of this association, where the notion of an order ideal is replaced by that of a "signed order ideal." The resulting poset of signed order ideals is Eulerian and EL-shellable. Its cd-index, which encodes basic chain-enumerative information, can be computed by summing entries of the flag h-vector of J(P). The proof of the latter result is based on earlier work of Billera, Ehrenborg, and Readdy establishing a similar relationship between the cd-index of an oriented matroid and the flag h-vector of the underlying matroid.
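To make the construction concrete, here is a hedged toy sketch (mine, not from the talk) that enumerates the order ideals of a small poset P by brute force; ordered by inclusion, these form the distributive lattice J(P). The poset and all names are illustrative.
from itertools import combinations
# Poset on {a, b, c} with a < c and b < c; down[x] = elements strictly below x.
down = {"a": set(), "b": set(), "c": {"a", "b"}}
def is_order_ideal(s):
    # An order ideal is downward closed: x in S implies everything below x is in S.
    return all(down[x] <= s for x in s)
elems = list(down)
ideals = [set(c) for r in range(len(elems) + 1)
          for c in combinations(elems, r) if is_order_ideal(set(c))]
print(sorted(sorted(i) for i in ideals))
# [[], ['a'], ['a', 'b'], ['a', 'b', 'c'], ['b']] -- the five elements of J(P)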
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826354.54/warc/CC-MAIN-20181214210553-20181214232553-00232.warc.gz
|
CC-MAIN-2018-51
| 753
| 2
|
https://www.coursehero.com/file/8968219/C-16-rollback-segments-D-Four-rollback-segments-E-Eight/
|
code
|
C. 16 rollback segments.
D. Four rollback segments.
E. Eight rollback segments.
Answer E is correct. You need to create 8 rollback segments to handle 32 transactions in the
OLTP system. There is a "Rule of Four" for planning rollback segment numbers in OLTP systems:
take the total number of transactions that will hit the database at any given time, and divide by 4
to decide how many rollback segments to create.
A: It's just a waste of resources to keep one rollback segment for each user.
B: It's not reasonable to create one rollback segment for each transaction.
C: Use the Rule of four to calculate the number of rollback segments.
D: Use the Rule of four to calculate the number of rollback segments.
Oracle 8, DBA Certification Exam Guide, Jason S. Couchman, p. 373
Chapter 8: Managing Database Objects I
A transaction fails and returns an ORA-01562 indicating that there is insufficient space in
the rollback segment. What are two possible causes? (Choose two)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886237.6/warc/CC-MAIN-20180116070444-20180116090444-00307.warc.gz
|
CC-MAIN-2018-05
| 1,101
| 17
|
http://eurotv.us/books/9888335022/luxe-barcelona-new-edition-including-free-mobile-app.html
|
code
|
Best Quality Writing and Content.
By Best Author
More books ready for you!
101 Coolest Things to Do in India: 101 Coolest Things to Do in India (Backpacking India, Goa, Rajasthan, New Delhi, Kerala, Mumbai, Kolkata)
Records of the Trials of the Spanish Inquisition in Ciudad Real, Volume Three: The Trials of 1512-1527 in Toledo (Fontes Ad Res Judaicas Spectantes) (English and Spanish Edition)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866938.68/warc/CC-MAIN-20180525024404-20180525044404-00530.warc.gz
|
CC-MAIN-2018-22
| 393
| 5
|
https://blog.seattlepi.com/microsoft/2010/10/21/amazon-announces-free-tier-of-web-services/
|
code
|
Seattle-based Amazon.com this morning announced a 12-month free tier for entry-level subscribers of its cloud platform, Amazon Web Services. There are a number of terms, including about a month's worth of running Elastic Compute Cloud (EC2) Linux Micro Instances. I, like ZDNet's Mary-Jo Foley, wondered what Microsoft might do in response. A spokesperson said there will be some Windows Azure news at Microsoft's Professional Developers Conference, which runs Oct. 28 and 29 in Redmond. Stay tuned next week for details.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999740.32/warc/CC-MAIN-20190624211359-20190624233359-00094.warc.gz
|
CC-MAIN-2019-26
| 528
| 1
|
https://www.ias.informatik.tu-darmstadt.de/Team/JuliaVinogradska
|
code
|
Machine Learning, Reinforcement Learning, Optimal Control
Learning control has become a viable approach in both the machine learning and control communities. Many successful applications impressively demonstrate its advantages: in contrast to classical control methods, learning control does not presuppose a detailed understanding of the underlying dynamics but instead infers the required information from data. Thus, relatively little expert knowledge about the dynamics is required, and fewer assumptions, such as a parametric form and parameter estimates, must be made.
As it is desirable to minimize system interaction time in real-world applications, model-based approaches are often preferred. However, one drawback of model-based approaches is that the model is inherently approximate, yet it is implicitly assumed to capture the system dynamics sufficiently well. These conflicting assumptions can derail learning, and solutions to the approximate control problem may fail at the real-world task, especially when predictions are highly uncertain. Gaussian processes (GPs) offer an elegant, fully Bayesian approach to modeling system dynamics that incorporates uncertainty. Given observed data, GPs infer a distribution over all plausible dynamics models and are thus a viable choice for model-based reinforcement learning.
Julia's research focuses on closed-loop control systems with GP forward dynamics models; there are several open questions in this field she hopes to address during her PhD. One major difficulty of GPs as forward dynamics models in closed-loop control is that predictions become intractable when the input to the GP is a distribution. There are some well-known approximation methods that offer rather rough estimates of the output state distribution and can be computed efficiently. However, several applications demand highly accurate multi-step-ahead predictions for which these estimates are not precise enough. One such field that requires high-precision approximate inference is stability analysis of closed-loop control systems with a GP forward dynamics model. This field deals with evaluating the system behaviour under a given control policy. For example, one may be interested in whether a policy succeeds, or from which starting states it succeeds. In particular, the goal is to derive guarantees that the system will exhibit a certain (desired) behaviour. While stability analysis in classical control dates back to the 19th century, there has been little research in this direction for GP dynamics models so far. Yet such guarantees are crucial for learning control in safety-critical applications. Julia works on several problems with GP dynamics models in this field: highly accurate approximations for multi-step-ahead predictions that enable stability analysis; stability of the closed-loop control structure (i) for finite time horizons, (ii) under the presence of disturbances, and (iii) asymptotically; and learning control based on GP forward dynamics for finite and infinite time horizons.
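To illustrate why multi-step-ahead prediction with a GP dynamics model is hard, here is a hedged toy sketch (not Julia's method): a GP is fit to one-dimensional dynamics, and since the GP input after the first step is a distribution rather than a point, the rollout is approximated by propagating Monte Carlo particles. The toy system and all names are illustrative; this particle approach is exactly the kind of rough approximation the work above aims to improve on.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (30, 1))                         # observed states x_t
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(30)  # noisy next states x_{t+1}
gp = GaussianProcessRegressor(kernel=RBF(), alpha=0.05 ** 2).fit(X, y)

# After one step the state is a distribution, so exact GP prediction is
# intractable; approximate the multi-step rollout with particles instead.
particles = np.full(500, 0.5)
for _ in range(10):
    mu, sd = gp.predict(particles[:, None], return_std=True)
    particles = mu + sd * rng.standard_normal(500)
print(particles.mean(), particles.std())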
Vinogradska, J.; Bischoff, B.; Koller, T.; Achterhold, J.; Peters, J. (in press). Numerical Quadrature for Probabilistic Policy Search, IEEE Transactions on Pattern Analysis and Machine Intelligence.
Vinogradska, J.; Bischoff, B.; Peters, J. (2018). Approximate Value Iteration based on Numerical Quadrature, Proceedings of the International Conference on Robotics and Automation, and IEEE Robotics and Automation Letters (RA-L), 3, pp. 1330-1337.
Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Peters, J. (2017). Stability of Controllers for Gaussian Process Forward Models, Journal of Machine Learning Research (JMLR), 18, 100, pp. 1-37.
Vinogradska, J. (2017). Gaussian Processes in Reinforcement Learning: Stability Analysis and Efficient Value Propagation, PhD Thesis.
Vinogradska, J.; Bischoff, B.; Nguyen-Tuong, D.; Romer, A.; Schmidt, H.; Peters, J. (2016). Stability of Controllers for Gaussian Process Forward Models, Proceedings of the International Conference on Machine Learning (ICML).
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530505.30/warc/CC-MAIN-20190421080255-20190421102255-00254.warc.gz
|
CC-MAIN-2019-18
| 4,439
| 10
|
https://docs.microsoft.com/en-us/windows/win32/api/uiribbon/nn-uiribbon-iuieventingmanager
|
code
|
IUIEventingManager interface (uiribbon.h)
The IUIEventingManager interface is implemented by the Ribbon framework and provides the notification functionality for applications that register for ribbon events.
The IUIEventingManager interface inherits from the IUnknown interface. IUIEventingManager also has these types of members:
The IUIEventingManager interface has these methods.
|IUIEventingManager::SetEventLogger|Sets the event logger for ribbon events.|
|Minimum supported client|Windows 8 [desktop apps only]|
|Minimum supported server|Windows Server 2012 [desktop apps only]|
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990584.33/warc/CC-MAIN-20210513080742-20210513110742-00432.warc.gz
|
CC-MAIN-2021-21
| 587
| 7
|
https://www.howtogeek.com/51873/how-to-setup-software-raid-for-a-simple-file-server-on-ubuntu/
|
code
|
Do you need a file server on the cheap that is easy to set up and "rock solid" reliable, with email alerting? This guide will show you how to use Ubuntu, software RAID, and SaMBa to accomplish just that.
Despite the recent buzz about moving everything to the "almighty" cloud, sometimes you may not want your information on someone else's server, or it may be unfeasible to download the volumes of data you require from the internet every time (for example, image deployment). So before you clear out a place in your budget for a storage solution, consider a configuration that is licensing-free with Linux.
With that said, going cheap/free does not mean "throwing caution to the wind", and to that end we will note points to be aware of and configurations that should be set in place, in addition to using software RAID, to achieve the maximum price-to-reliability ratio.
Image by Filomena Scalise
About software RAID
As the name implies, this is a RAID (Redundant Array of Inexpensive Disks) setup that is done completely in software instead of using a dedicated hardware card. The main advantage is cost, as a dedicated card is an added premium on top of the base configuration of the system. The main disadvantages are performance and some reliability, since such a card usually comes with its own RAM and CPU to perform the calculations required for the redundancy math, data caching for increased performance, and an optional backup battery that keeps unwritten operations in the cache until power has been restored after an outage.
With a software RAID setup you're sacrificing some of the system's CPU performance to reduce total system cost, but with today's CPUs the overhead is relatively negligible (especially if you're going to dedicate this server mainly to being a "file server"). As far as disk performance goes, there is a penalty; however, I have never encountered a bottleneck from the server's disk subsystem that would show how profound it is. The Tom's Hardware guide "Tom's goes RAID5" is an oldie-but-goodie exhaustive article on the subject, which I personally use as a reference; however, take the benchmarks with a grain of salt, as it covers Windows' implementation of software RAID (as with everything else, I'm sure Linux is much better :P).
- Patience, young one; this is a long read.
- It is assumed you know what RAID is and what it is used for.
- This guide was written using Ubuntu Server 9.10 x64, so it is assumed that you have a Debian-based system to work with as well.
- You will see me use VIM as the editor program; this is just because I'm used to it. You may use any other editor that you'd like.
- The Ubuntu system I used for writing this guide was installed on a disk-on-key. Doing so allowed me to use sda1 as part of the RAID array, so adjust accordingly for your setup.
- Depending on the type of RAID you want to create, you will need at least two disks on your system; in this guide we are using five drives for the array.
Choosing the disks that make the array
The first step in avoiding a trap is knowing of its existence (Thufir Hawat, from Dune).
Choosing the disks is a vital step that should not be taken lightly, and you would be wise to capitalize on yours truly's experience and heed this warning:
Do NOT use "consumer grade" drives to create your array; use "server grade" drives!
Now I know what you're thinking: didn't we say we were going to go cheap? Yes, we did. But this is exactly one of the places where doing so is reckless and should be avoided. Despite their attractive price, consumer-grade hard drives are not designed for 24/7 always-on use. Trust me, yours truly has tried this for you. At least four consumer-grade drives in the three servers I have set up like this (due to budget constraints) failed after about 1.5 to 1.8 years from the server's initial launch day. While there was no data loss, because the RAID did its job well and survived, moments like this shorten the life expectancy of the sysadmin, not to mention downtime for the company during server maintenance (something which may end up costing more than the higher-grade drives).
Some may say that there is no difference in failure rate between the two types. That may be true; however, despite these claims, server-grade drives still have stricter S.M.A.R.T. restrictions and more QA behind them (as can be observed by the fact that they are not released to the market as soon as the consumer drives are), so I still highly recommend that you fork out the extra $$$ for the upgrade.
Choosing the RAID level.
While I'm not going to go into all of the options available (this is very well documented in the RAID Wikipedia entry), it is noteworthy that you should always opt for at least RAID 6 or even higher (we will be using Linux RAID10). This is because when a disk fails, there is a higher chance of a neighboring disk failing too, and then you have a two-disk failure on your hands. Moreover, if you're going to use large drives, the chance of failure is higher, as larger disks have a higher data density on the platter's surface. IMHO, disks of 2T and beyond will always fall into this category, so be aware.
Let’s get cracking
While in Linux/GNU we could use the entire block device for storage, we will use partitions, because they make it easier to use disk-rescue tools in case the system goes bonkers. We are using the "fdisk" program here, but if you're going to use disks larger than 2T, you will need a partitioning program that supports GPT partitioning, like parted.
sudo fdisk /dev/sdb
Note: I have observed that it is possible to make the array without changing the partition type, but because this is the way described all over the net, I'm going to follow suit (again, when using the entire block device this is unnecessary).
Once in fdisk the keystrokes are:
n ; for a new partition
p ; for a primary partition
1 ; number of partition
enter ; accept the default
enter ; accept the default
t ; to change the type
fd ; sets the type to "Linux raid autodetect"
w ; write changes to disk and exit
Rinse and repeat for all the disks that will be part of the array.
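If you have many disks to prepare, the interactive keystrokes above can also be scripted. This is a hedged alternative the guide itself doesn't use, shown with sfdisk; the device names are assumptions, so adjust them to your setup:
for d in sdb sdc sdd sde; do
  # one primary partition spanning the whole disk, type fd (Linux raid autodetect)
  echo ',,fd' | sudo sfdisk /dev/$d
done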
Creating a Linux RAID10 array
The advantage of using "Linux raid10" is that it knows how to take advantage of a non-even number of disks to boost performance and resiliency even further than vanilla RAID10, in addition to the fact that the "10" array can be created in one single step.
Create the array from the disks we have prepared in the last step by issuing:
sudo mdadm --create /dev/md0 --chunk=256 --level=10 -p f2 --raid-devices=5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --verbose
Note: This is all just one line despite the fact that the representation breaks it into two.
Let’s break the parameters down:
- "--chunk=256" – The size (in KB) of the chunks the RAID stripes are broken into; this size is recommended for new/large disks (the 2T drives used to make this guide were without a doubt in that category).
- "--level=10" – Uses the Linux raid10 driver (if a traditional RAID 10 is required, for whatever reason, you would have to create two arrays and join them).
- "-p f2" – Uses the "far" rotation plan (see the note below for more info), and the "2" says that the array will keep two copies of the data.
Note: We use the "far" plan because it causes the physical data layout on the disks to NOT be the same. This helps to overcome the situation where the hardware of one of the drives fails due to a manufacturing fault (and don't think "this won't happen to me", like yours truly did). Because the two disks are of the same make and model, have been used in the same fashion, and traditionally have been keeping the data at the same physical location, the risk exists that the drive holding the copy of the data has failed too, or is close to failing, and will not provide the required resiliency until a replacement disk arrives. The "far" plan puts the copy at a completely different physical location on the copy drives, in addition to using disks that are not close to each other within the computer case. More information can be found here and in the links below.
Once the array has been created, it will start its synchronization process. While you may wish to wait for tradition's sake (as this may take a while), you can start using the array immediately.
The progress can be observed using:
watch -d cat /proc/mdstat
Create the mdadm.conf Configuration File
While it has been proven that Ubuntu simply knows to scan and activate the array automatically on startup, for completeness' sake, and as a courtesy to the next sysadmin, we will create the file. Your system doesn't create the file automatically, and trying to remember all the components/partitions of your RAID set is a waste of the system admin's sanity. This information can, and should, be kept in the mdadm.conf file. The formatting can be tricky, but fortunately the output of the mdadm --detail --scan --verbose command provides you with it.
Note: It has been said that "most distributions expect the mdadm.conf file in /etc/, not /etc/mdadm. I believe this is a 'ubuntu-ism' to have it as /etc/mdadm/mdadm.conf". Because we are using Ubuntu here, we will just go with it.
sudo mdadm --detail --scan --verbose > /etc/mdadm/mdadm.conf
IMPORTANT! You need to remove one "0" from the newly created file, because the syntax resulting from the command above isn't completely correct (GNU/Linux isn't an OS yet).
If you want to see the problem this wrong configuration causes, you can issue the "scan" command at this point, before making the adjustment:
mdadm --examine --scan
To overcome this, edit the file /etc/mdadm/mdadm.conf and change:
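A hedged example of the edit (the exact line isn't shown in the original; this assumes the common form of the quirk on this Ubuntu release, where the scan writes the superblock version with an extra zero in the ARRAY line):
metadata=00.90 (as emitted by mdadm --detail --scan) becomes metadata=0.90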
Running the mdadm --examine --scan command now should return without an error.
Filesystem setup on the array
I used ext4 for this example because it builds on the familiarity of the ext3 filesystem that came before it, while promising better performance and features.
I suggest taking the time to investigate what filesystem better suits your needs and a good start for that is our “Which Linux File System Should You Choose?” article.
sudo mkfs.ext4 /dev/md0
Note: In this case I didn't partition the resulting array simply because I didn't need to at the time, as the requesting party specifically asked for at least 3.5T of continuous space. With that said, had I wanted to create partitions, I would have had to use a GPT-capable partitioning utility like "parted".
Create the mount point:
sudo mkdir /media/raid10
Note: This can be any location; the above is only an example.
Because we are dealing with an "assembled device", we will not use the filesystem's UUID on the device for mounting (as recommended for other types of devices in our "what is the linux fstab and how does it work" guide), since the system may actually see part of the filesystem on an individual disk and incorrectly try to mount it directly. To overcome this, we want to explicitly wait for the device to be "assembled" before we try mounting it, so we will use the assembled array's name ("md") within fstab to accomplish this.
Edit the fstab file:
sudo vim /etc/fstab
And add to it this line:
/dev/md0 /media/raid10/ ext4 defaults 1 2
Note: If you change the mount location or filesystem from the example, you will have to adjust the above accordingly.
Use mount with the automatic parameter (-a) to simulate a system boot, so you know that the configuration is working correctly and that the RAID device will be automatically mounted when the system restarts:
sudo mount -a
You should now be able to see the array mounted with the “mount” command with no parameters.
Email Alerts for the RAID Array
Unlike with hardware RAID arrays, with a software array there is no controller that will start beeping to let you know when something has gone wrong. Email alerts are therefore our only way to know if something has happened to one or more disks in the array, which makes this the most important step.
Follow the “How To Setup Email Alerts on Linux Using Gmail or SMTP” guide and when done come back here to perform the RAID specific steps.
Confirm that mdadm can Email
The command below will tell mdadm to fire off just one email and close.
sudo mdadm --monitor --scan --test --oneshot
If successful, you should get an email detailing the array's condition.
Set the mdadm configuration to send an email on startup
While not an absolute must, it is nice to get an update from time to time letting us know that the email ability is still working and reporting on the array's condition. You're probably not going to be overwhelmed by emails, as this setting only affects startups (of which, on servers, there shouldn't be many).
Edit the mdadm configuration file:
sudo vim /etc/default/mdadm
Add the --test parameter to the DAEMON_OPTIONS section so that it looks like:
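The example line is missing from the page; assuming the stock Ubuntu default of DAEMON_OPTIONS="--syslog", the edited line would look like:
DAEMON_OPTIONS="--syslog --test"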
You may restart the machine just to make sure you're "in the loop", but it isn't a must.
Installing SaMBa on a Linux server enables it to act like a Windows file server. So, in order to make the data we are hosting on the Linux server available to Windows clients, we will install and configure SaMBa.
It's funny to note that the package name SaMBa is a pun on Microsoft's file-sharing protocol, SMB (Server Message Block).
In this guide the server is used for testing purposes, so we will enable access to its share without requiring a password, you may want to dig a bit more into how to setup permissions once setup is complete.
Also it is recommended that you create a non-privileged user to be the owner of the files. In this example we use the “geek” user we have created for this task. Explanations on how to create a user and manage ownership and permissions can be found in our “Create a New User on Ubuntu Server 9.10” and “The Beginner’s Guide to Managing Users and Groups in Linux” guides.
aptitude install samba
Edit the samba configuration file:
sudo vim /etc/samba/smb.conf
Add a share called “general” that will grant access to the mount point “/media/raid10/general” by appending the below to the file.
[general]
path = /media/raid10/general
force user = geek
force group = geek
read only = No
create mask = 0777
directory mask = 0777
guest only = Yes
guest ok = Yes
The settings above make the share addressable without a password to anyone and makes the default owner of the files the user “geek”.
For your reference, this smb.conf file was taken from a working server.
Restart the samba service for the settings to take effect:
sudo /etc/init.d/samba restart
Once done you can use the testparm command to see the settings applied to the samba server.
That's it! The server should now be accessible from any Windows box using:
\\<server name>\general
When you need to troubleshoot a problem or a disk has failed in an array, I suggest referring to the mdadm cheat sheet (that’s what I do…).
In general, you should remember that when a disk fails you need to "remove" it from the array, shut down the machine, replace the failing drive, and then "add" the new drive to the array, after creating the appropriate disk layout (partitions) on it if necessary.
Once that’s done you may want to make sure that the array is rebuilding and watch the progress with:
watch -d cat /proc/mdstat
Good luck! :)
Using software RAID won’t cost much… Just your VOICE ;-)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00215.warc.gz
|
CC-MAIN-2023-06
| 16,163
| 123
|
http://www.linuxhomenetworking.com/forums/showthread.php/10150-GNOME-Control-Panel-syncing-palm-with-Evolution
|
code
|
I'm trying to get my Palm Pilot to sync with Evolution... The computer recognized the Palm, detected the user name, etc., but when I hit the hot sync button, it syncs, but nothing happens.
When I went to Ximian's website for support, it talks about the GNOME Control Panel:
"1) If you are not already, enter the GNOME Control Center by clicking on System > Settings
2) Click on Palm Conduits on the left-hand side
3)Click on an entry that you would like to sync, such as EAddress and then click [Enable]
4) Choose the Sync Action; synchronize is the default.
5) Repeat this for each entry that you wish to sync with your PalmOS device
6) Click [OK] when done"
I don't have... or don't know where to find... the GNOME Control Center.
Any ideas how to get my seemingly properly working Palm to sync with Evolution?
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736678979.28/warc/CC-MAIN-20151001215758-00203-ip-10-137-6-227.ec2.internal.warc.gz
|
CC-MAIN-2015-40
| 811
| 10
|
https://forums.rpgmakerweb.com/index.php?threads/6th-annual-driftwood-gaming-game-jam.134089/
|
code
|
Make your own game jam!
Choose 2 or more from a list of ten super cool themes, and make your own game jam!
You'll have 30 days to create a 30 minute game based on the themes you pick. Please follow the itch.io link below for full rules and conditions.
A game jam from 2021-03-03 to 2021-05-03 hosted by Drifty & TeasJams. (****PLEASE REFER TO THIS PAGE AND THE FAQ POST IN THE PAGE COMMUNITY SECTION FOR THE MOST UP TO DATE RULES****) Rules: 1. You must pick more than one...
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00017.warc.gz
|
CC-MAIN-2021-21
| 541
| 8
|
http://slashdot.org/~brisk0
|
code
|
(Also if we weren't a commonwealth country, and not everyone else did it that way, probably)
That being said, Arch may not be the way to go, as its developers have a tendency to do stupid things and push things into the repos that shouldn't be pushed. But there's a mailing list that describes all the problems (which the devs insist you subscribe to and check every time you want to poke the repos with a stick), and there's always one guy willing to stand up to the abuse in the forums and get a proper answer from someone about how to fix it. And if you want to learn command-line Linux, by gum, you'll learn fast. The documentation on the ArchWiki also goes above and beyond in the way of how to do x. If you do go the Arch route, I highly recommend running another computer nearby to look things up on the ArchWiki etc. while you're setting up. And every time you update...
There's a reason that they're called theories.
Because they have been repeatedly confirmed through observation and experiment?
As far as I know your point is still valid, I'm just nitpicking.
So if we assume mediocrity (and assume I'm not just spouting bull), we exist in one of the simpler universes... the original must have been nuts.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986646.29/warc/CC-MAIN-20150728002306-00321-ip-10-236-191-2.ec2.internal.warc.gz
|
CC-MAIN-2015-32
| 1,211
| 6
|
http://www.freelancer.com/projects/PHP-Website-Design/Animated-Greeting-card-site-design.html
|
code
|
I want a site built, offering animated electronic greeting cards
Similar to: 123greetings.com / egreetings.com / jacquielawson.com
This will be a subscription site (initially paypal only) and similar preview etc to above sites. I may offer some free cards which would include an advertising message.
Need the ability to add a custom message to end of card (as sample sites)
I will provide the url and hosting info - also need a logo designed.
Site should be simple to navigate and intuitive.
Ability to later add language options.
Please let me know your ideas and whether you would use a CMS system, or suggestions.
The site will be fairly basic to start with, with the plan of adding other features and expanding in the future.
Ideally you would have worked on a similar type of site before.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705575935/warc/CC-MAIN-20130516115935-00090-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 834
| 11
|
https://client.style/for-website-editors
|
code
|
This section is for content editors and (non-technical) website users, and provides basic guidance on using WordPress.
This is still a work in progress.
Use your individual login URL and email address to log in. Please note that the default /wp-admin/ is usually changed to prevent ill-mannered robots from accessing the login.
Issues with a forgotten password? Sometimes email server (SendGrid) configurations can lead to delivery issues with native WordPress emails, especially if they have no other purpose than this. Or sometimes SendGrid adds lead tracking to email links, which can also break the links. If you have issues, please contact email@example.com and we'll reset your password.
Using the editor
Line breaks vs Paragraphs
Enter breaks the paragraph (adds space), while Shift+Enter adds a line break (no space; handy, for example, for contact details where you want to keep the items below each other).
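Under the hood, the editor turns these keys into different HTML; an illustrative sketch (not from this guide):
<p>First paragraph</p>
<p>Second paragraph (Enter starts a new paragraph)</p>
<p>Line one<br>Line two (Shift+Enter inserts a line break)</p>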
Most pages and posts use a featured image (aka hero image). It can be found in a separate box at the bottom of the right sidebar.
Publish, draft or schedule
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506028.36/warc/CC-MAIN-20230921141907-20230921171907-00683.warc.gz
|
CC-MAIN-2023-40
| 1,030
| 9
|
https://projects.sare.org/sare_project/gs20-226/
|
code
|
- Animal Production: feed/forage, pasture fertility
- Crop Production: nutrient management
- Soil Management: soil quality/health
Researchers and producers are challenged with developing agricultural systems that use limited resources effectively while improving productivity in the face of land degradation and increased climate variability. This is especially critical in semi-arid systems that are vulnerable to soil erosion, nutrient depletion, and extreme water scarcity. A sustainable and economically feasible management option is converting high-input row crop systems to grazed perennial grasslands, which are managed to provide high-quality forage for animal performance while improving soil health and conserving water and nutrient resources through minimized disturbance and continuous soil cover. Enteric methane (CH4) production from cattle is a potential sustainability tradeoff in these systems, but it is likely possible to offset this impact through integrated soil-plant-animal management. Preliminary data for our study indicate that adding legumes can increase soil microbial uptake of CH4 and improve forage quality for livestock in semi-arid pastures, which improves resource efficiency, sustainability, and productivity. Our proposed two-year study will leverage long-term forage manipulations in grazed semi-arid pastures to determine how management regulates soil greenhouse gas (GHG) fluxes. Within this established system, we will investigate how nutrient and forage management increases soil CH4 uptake while improving soil health in semi-arid pastures. The results of our study will ultimately help us create more efficient and resilient semi-arid agricultural systems across the globe.
Project objectives from proposal:
- Discover how different sources of soil nitrogen regulate the presence and activity of CH4 cycling soil microbes and GHG soil fluxes.
- Quantify how legume density influences GHG flux in established long-term pastures.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511000.99/warc/CC-MAIN-20231002132844-20231002162844-00768.warc.gz
|
CC-MAIN-2023-40
| 1,970
| 7
|
http://ovenordstrom.blogspot.com/2008/05/javaforum-presentation-javaone2008.html
|
code
|
Yesterday we held a presentation at the Swedish Java community conference JavaForum. We presented some of the most interesting topics from this year's JavaOne conference (I was talking about Java ME). Everything went quite well, I think. "We" are:
Robert Varttinen, Jonas Södergren, Mattias Holmqvist, Magnus Kastberg and myself.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590069.15/warc/CC-MAIN-20180718060927-20180718080927-00602.warc.gz
|
CC-MAIN-2018-30
| 329
| 2
|
https://english.stackexchange.com/questions/115022/spaces-for-ellipses?noredirect=1
|
code
|
I find some opinions about the rules for ellipses are conflicting. Here are some conflicting issues:
Q1: Are the spaces between the dots in an ellipsis necessary, i.e.
(Yes.) Grammar Girl's article
. . . for everyday purposes, it's fine to use regular spaces between the ellipsis points. Type period-space-period-space-period. Just make sure your dots don’t end up on two different lines.
Bringhurst writes that a full space between each dot is "another Victorian eccentricity. In most contexts, the Chicago ellipsis is much too wide"—he recommends using flush dots, or thin-spaced dots (up to one-fifth of an em), or the prefabricated ellipsis character.
(No.) My personal habit. I think typing dot-dot-dot is more convenient, though I find it looks better to use the dot-space-dot-space-dot style on this page :)
Q2: Normally an ellipsis should be spaced fore-and-aft to separate it from the text. So, when should the fore space or the aft space disappear?
Ellipses at the beginning and end of quotations
Aardvark said, “. . . Squiggly never caught a fish.”
Ellipses with question marks and exclamation points
“Where did he go? . . . Why did he go out again?” [Material is removed between the two sentences]
“Where did he go . . . ? Why did he go out again?” [Material is removed before the first question mark. Note the space between the last ellipsis point and the question mark.]
Ellipses with commas and semicolons
“Aardvark went home, . . . and Squiggly decided to meet him later.”
“Aardvark went home . . . ; Squiggly would meet him later.” [Note the space between the ellipsis and the semicolon.]
. . . when it combines with other punctuation, the leading space disappears and the other punctuation follows.
- i … j
i-(space)-(ellipsis)-(space)-j, the normal case.
- l…, l
- l, … l
Katherine Fry & Rowena Kirton's grammar book: Grammar for Grown-Ups
. . . The only time there isn't a final space is when the ellipsis comes before a closing quote mark -- then the quote mark comes directly after dot 3, 'like . . .' this, 'not . . . ' this.
How numerous the conflicting rules are! I'm totally confused.
EDIT To state my question more clearly -- I need to write some software manuals in plain ASCII text. Can I just type ellipses in whichever style I choose, since there is no strict rule about it?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100602.36/warc/CC-MAIN-20231206162528-20231206192528-00694.warc.gz
|
CC-MAIN-2023-50
| 2,335
| 27
|
https://www.bladelessfanstore.com/hogan-women-maxi-h222-white-shoes-outlet-store.html
|
code
|
UP TO 60% OFF! FREE SHIPPING!!!
Standard Shipping (6-14 days) - Free Shipping
Hogan Men Interactive³ Bordeaux Shoes Flash Sales
Hogan Women Interactive Blue Shoes Best Sale
Hogan Women Active One Black/Pink/White Shoes Online Hotsell
Hogan Women's Hxw2220t548hqk0002 Black Leather Sneakers Discount Sale
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400188049.8/warc/CC-MAIN-20200918155203-20200918185203-00577.warc.gz
|
CC-MAIN-2020-40
| 304
| 6
|
http://www.staffingtalk.com/people-stop-internet-explorer-forever
|
code
|
An email was passed around the office last night about Germany's temporary urging of its citizens to stop using Internet Explorer.
You read that right. Germany is asking its citizens to stop using Internet Explorer.
This isn't some crazy scheme devised by an obsessed Google fanboy. And Germany isn't looking to thwart IE forever (more on that in a minute) either. This is about security.
Apparently hackers have found a way to exploit Internet Explorer in a really bad way. Microsoft has responded by saying they will have a security patch ready in a few days. In the meantime they recommend you download/install/configure some other software that will protect you.
Or you could just use Chrome.
As a web designer myself, the Reuters headline brings tears of elation to my eyes. I can only dream of a future without IE. Why?
When you are building a website you have a few different browsers to design for: Chrome, Firefox, Internet Explorer, Safari, Opera and then you check them on the iPad too. There might be a couple more, but that's the basic list.
So you build your site and everything looks great. Most of the time I check my site in Chrome because if it works in Chrome, it will typically work in everything else. Except IE.
Web designers literally say to each other, "It looks great! Have you opened it in Hell yet?"
Hell meaning Internet Explorer. Usually I will roll my eyes and say, "Yeah, look..." with all of the excitement of Okay Guy.
Most of the time, something is broken/weird looking. Now I have to start building parts of the site specifically for Internet Explorer. Depending on the size of the site, this can take up to a couple weeks. What a colossal waste of time.
Imagine not spending weeks building a special site just for Internet Explorer. Especially IE8. It's an absolute joke.
When I was at SXSW in 2011, I attended a session that featured representatives from Microsoft, Google, Mozilla and Opera. They were there to discuss what measures they have taken to make sure their products follow standards. The audience would ask hard questions, and usually the table would end up looking at the guy from Microsoft, while he spouted off some scripted BS response. It was beyond frustrating because it was obvious Microsoft wasn't going to even try to follow standards.
While I can continue to dream of a day where Internet Explorer followed design standards -- I am not the least bit optimistic that day will ever come.
And even though I don't support illegal hacking, I still wish something, anything, would come along and abolish this nightmare of a browser once and for all.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863206.9/warc/CC-MAIN-20180619212507-20180619232507-00114.warc.gz
|
CC-MAIN-2018-26
| 2,603
| 15
|
http://mobile.osnews.com/story.php/27681/Ubuntu-14.04-LTS-released/
|
code
|
Ubuntu 14.04 LTS released
By Thom Holwerda on 2014-04-17 22:29:00
Ubuntu 14.04 LTS is the first long-term support release with support for the new "arm64" architecture for 64-bit ARM systems, as well as the "ppc64el" architecture for little-endian 64-bit POWER systems. This release also includes several subtle but welcome improvements to Unity, AppArmor, and a host of other great software.
Is it just me, or do releases of major Linux distributions simply not create much excitement anymore? I remember a time when these releases were hotly anticipated and much debated. These days, they go by and nobody really seems to care. Is this a reflection of shifting focus in the industry - towards mobile - or because the interest in desktop Linux in general has waned considerably?
- Malware found in the Ubuntu Snap store - 2018-05-13
- Ubuntu 18.04: Ubuntu has never been better - 2018-05-09
- Ubuntu 17.10: return of the GNOME - 2017-12-02
- My Ubuntu for mobile devices post mortem analysis - 2017-06-21
- More related articles
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515041.68/warc/CC-MAIN-20181022113301-20181022134801-00187.warc.gz
|
CC-MAIN-2018-43
| 1,033
| 9
|
https://mzansideeppodcast.com/session-175-lady-zeejay-lounge-soulful
|
code
|
Join our brand-new Facebook group to discover more mixes. https://www.facebook.com/groups/mzansideep
See our official Facebook Page https://www.facebook.com/souldeepmix
Certain Mzansi Deep recordings are also available on YouTube.
Pierre Johnson - Polar (Release Date = 29 Jan 2021)
MIX BY Lady Zeejay
Facebook : Lady Zeejay
For Mzansi Deep show enquiries or comments, please email email@example.com
See the official Mzansi Deep Facebook Page https://www.facebook.com/souldeepmix
You can help Mzansi Deep continue bringing you quality deep house content by becoming a MZANSI DEEP VIP member on www.patreon.com/mzansideep.
As a MZANSI DEEP VIP you will have access to clean mixes (no talking) for all shows.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00722.warc.gz
|
CC-MAIN-2022-27
| 708
| 10
|
http://www.linuxquestions.org/questions/linuxquestions-org-member-intro-24/hello-future-educator-present-student-4175450846/
|
code
|
LinuxQuestions.org Member Intro: New to LinuxQuestions.org? Been a long-time member but never made a post? Introduce yourself here.
My name is Gabriel. I am from Chile and a total newbie to Linux and computers in general. However, against all odds, I managed to successfully install Ubuntu 6.06 on my beloved old IBM T22 (god, how I love that case).
I know this is going to be slow (and painful at times). I know I will have to read more than I want to, but I refuse to send those other 3 old PCs (a P4 with 512 MB of RAM and 2 other similar AMDs) to the trash. I impose on myself the noble task of installing Linux on them and making them useful again, for educational purposes. I join this community with that goal, and I declare that I will never be computer illiterate again.
I'd love any early advice or suggestions for my intentions, as I struggle even to identify the hardware I actually have :P. That being said, I want to thank all the people here for the information I have already found in this forum; it is clear to me this is a very good site, full of nice people.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124299.47/warc/CC-MAIN-20170423031204-00290-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 2,510
| 13
|
https://www.thewichitacomputerguy.com/comment/54
|
code
|
I've had the opportunity to test out Windows 7 and I must say, I really like what I see! Not only is it quick, but the engineers at Microsoft have finally begun to realize that their software needs to make sense!
Enough of the kudos; I still personally believe that Microsoft has a long way to go before they ever get a clue... so, on to the topic at hand: getting the Intel 865 video drivers to work on Windows Vista and Windows 7.
On a first install of either OS, the standard VGA drivers provided by MS are used for the display. While this might work fine in some situations, it presents other issues with video responsiveness, screen resolution, etc. I decided to test out Windows 7 on an old HP 2.4GHz system I had lying around, which has the Intel 865 onboard video. I recently ran into a similar issue on a client's computer where they had upgraded an (even older) HP from XP to Vista, and the video drivers likewise really needed to be updated to get a smooth experience. Since I can't remember exactly where I found the "fix", I regret to say that I can't give props to the person who figured this out... but I do want to report that updating the Intel 865 video drivers on Windows 7 using the following method works like a charm!
Manually update using the Intel 865 XP Professional Drivers. (Trust me, they work fine...)
1. Download the XP Professional version of the Intel 865 graphics driver. When I originally wrote this, Intel kept lots of old drivers online. 15 years later, those drivers are no longer on Intel's website, so you'll have to get creative. You might see if you can find similar drivers from Dell; for example, a quick search turned up Intel 865 v.188.8.131.5296, A09. (Do a web search for that.) Point is, you'll have to dig to find a working driver, then follow the steps below.
2. After extracting the driver, go to "Device Manager" in Vista or Windows 7.
3. Click on the arrow to the left of "Display Adapters" and you should see the "Standard VGA Graphics Adapter" drop down.
4. Right click on the "Standard VGA Graphics Adapter" and choose "Update Driver Software"
5. Choose "Browse my computer for driver software"
6. Choose "Let me pick from a list of device drivers on my computer"
7. Find the "Have Disk" button (lower right corner) and click on it
8. Browse to the location where you extracted the Intel 865 XP driver zip file and go into the win2000 folder, then choose "Open"
9. You should see a driver in the main box now, that says it's for the Intel 865 Graphic Adapter. Choose "Next" and it will begin the driver installation.
10. Once the driver is installed, reboot and you are all done!
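As a hedged command-line alternative (not part of the original steps), Vista and Windows 7 also ship with pnputil, which can pre-stage the extracted driver package before you point Device Manager at it; the path below is hypothetical:
pnputil -a C:\drivers\intel865\win2000\*.inf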
Update 2009/09/27: Due to a comment posted below, I am introducing a disclaimer that this method is NOT SUPPORTED by Intel. That doesn't mean it WON'T WORK... If it breaks something, please insert $0.50 and try again...
Update 2010/03/01: I am not 100% sure, but I believe that this workaround will also work for the Intel 845 video drivers. Just make sure to download the driver for the 845 instead of the 865 (they might actually be the same, though...). Anyone who has been successful in getting this to work, feel free to post a comment and let us know :)
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816024.45/warc/CC-MAIN-20240412132154-20240412162154-00476.warc.gz
|
CC-MAIN-2024-18
| 3,323
| 17
|
https://www.macupdate.com/app/mac/20759/oompalocker
|
code
|
A simple AppleScript front-end to Terminal actions that change the permissions on the user InputManagers folder to root only. This should prevent Leap-A (a.k.a. "Oompa Loompa") malware and its derivatives from installing. Released free under the GPL. Source code available.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00491.warc.gz
|
CC-MAIN-2019-47
| 370
| 3
|
https://forums.developer.nvidia.com/t/run-each-one-from-list-of-videos/208676
|
code
|
I have a list of separate videos and want to run the pipeline on them consecutively (batch-size=1) without reloading the pipeline after the end of each video. How can I implement this in Python?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
Sorry for the missing information
• Hardware Platform: A100
• DeepStream Version: 6.0-triton
What kind of video format? Are all the video same format?
All videos are in normal H264 MP4 format. I just want to run the pipeline for each video. I've tried deepstream-python-apps/runtime_source_add_delete but could not make it work.
Please implement it at the application level. Thanks
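For what it's worth, a minimal application-level sketch of this pattern in plain GStreamer Python (not DeepStream-specific; element names and file paths are illustrative): reuse one pipeline, swap the file location between NULL and PLAYING, and wait for EOS per video.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Illustrative decode chain; a DeepStream pipeline would use nv* elements instead.
pipeline = Gst.parse_launch(
    "filesrc name=src ! qtdemux ! h264parse ! avdec_h264 ! fakesink")
src = pipeline.get_by_name("src")
bus = pipeline.get_bus()
for path in ["video1.mp4", "video2.mp4"]:  # hypothetical file list
    src.set_property("location", path)
    pipeline.set_state(Gst.State.PLAYING)
    # Block until this video ends (or errors), then reset for the next one.
    bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                           Gst.MessageType.EOS | Gst.MessageType.ERROR)
    pipeline.set_state(Gst.State.NULL)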
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00525.warc.gz
|
CC-MAIN-2022-40
| 812
| 12
|
https://hackernoon.com/how-to-setup-a-python-virtual-environment-on-windows-10-h61f34c6
|
code
|
Like other languages, Python has its own way to download, store, and resolve packages. Creating a Python virtual environment allows you to work on an isolated copy of Python for a specific project without interfering with or impacting other ongoing projects.
These Python environments can easily be hosted on a VPS or dedicated server to be available to multiple users, which is beneficial when different team members work on the same project.
Why do we need one? By default, all Python projects use the same directory to store and retrieve site-packages, and Python cannot differentiate between different versions in the same site-packages directory. To overcome this, you can create a separate Python virtual environment for every project, each with its own set of installed packages. There are no limits on how many virtual environments you can create within a system.
You can use Python's venv module to create a unique environment for each project and install the required project dependencies into it, keeping the project neatly organized. venv does not modify the default system installation of Python or its modules; it creates a folder containing the necessary executables for using the required Python packages.
If you want to leverage the functionality of venv on Windows 10, it is recommended to enable the Windows Subsystem for Linux (WSL), which allows you to run a Linux distro within Windows 10.
This is beneficial because most Python documentation assumes a Linux environment and most developers use Linux installation tools. With WSL, the development and production environments will be compatible.
There are several Linux distributions available for download online; prefer the latest version of your chosen distribution. Here we use Ubuntu 18.04 LTS, as it is up to date, has a vast support community, and is well documented even for beginners.
sudo apt update && sudo apt upgrade
If the command mentioned above does not work, you have to update and upgrade the new OS manually.
Now you can use PowerShell to install the distribution: navigate to the folder containing the newly downloaded Linux distribution (app_name.appx) and run the below command.
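The command itself is missing from the page; assuming the standard sideloading cmdlet is what was meant, it would be:
Add-AppxPackage .\app_name.appx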
We will use PowerShell to add the path setting.
$userenv = [System.Environment]::GetEnvironmentVariable("Path", "User")
[System.Environment]::SetEnvironmentVariable("PATH", $userenv + ";C:\Users\Admin\Ubuntu", "User")
Launching a distribution
After you initialize the newly installed distribution, you have to launch a new instance. You can launch it from the distro's executable file available in the Start menu; afterwards it is stored locally on your system and ready to use. Windows Server users should launch the Linux distribution's executable file stored under the distro's installation folder.
Setting up the process
To set up a Python virtual environment on Windows 10, follow the steps below.
It is preferable to install the latest Python version when setting up the environment, so we will use Python 3.8.0 to get started.
Usually, pip comes pre-installed with Python 3. If pip is missing, you can download it manually using the command below.
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
After downloading the get-pip.py file, save it to the desktop. Then, from an administrator command prompt, navigate to the desktop and run the file so that pip is installed system-wide.
cd Desktop
python3 get-pip.py
$ pip install virtualenv
$ virtualenv --version
$ virtualenv my_project
The directory will have the below file structure.
$ virtualenv -p /usr/bin/python3 my_project
$ source my_project/bin/activate
(my_project) $
The above prompt is prefixed with my_project, which indicates that the virtual environment is now active.
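To illustrate the isolation (a hedged example; requests is just a placeholder package), install something inside the active environment and confirm which interpreter is in use:
(my_project) $ pip install requests
(my_project) $ which python
The which command should point into my_project/bin rather than the system Python, and the installed package is visible only inside this environment.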
Virtual environments solve the package management problem to some extent, but managing many environments brings its own challenges, which the virtualenvwrapper tool addresses.
You can install the wrapper batch scripts with pip:
pip install virtualenvwrapper-win
Alternatively, clone the repository and install it from source:
git clone git://github.com/davidmarble/virtualenvwrapper-win.git
cd virtualenvwrapper-win
python setup.py install
On Linux, you can check where the equivalent wrapper script is installed with:
$ which virtualenvwrapper.sh
After this command, the Python virtual environment is ready to use.
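For illustration, a typical day-to-day workflow with the wrapper commands (the environment name is a placeholder) looks like:
mkvirtualenv my_project
workon my_project
deactivate
mkvirtualenv creates and activates the environment in one step, and workon switches back to it later from any directory.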
Deactivating the virtual environment
If you do not want to continue with the virtual environment, you can deactivate it using the below command.
(my_project) $ deactivate
This article has explained how you can install a Linux distribution on Windows 10. Even on Windows 10, it is worth having a Linux distribution installed to set up the Python virtual environment and take advantage of its functionality.
Linux commands are easy to grasp and apply, and Linux gives you a better environment and more security for installing whatever packages and modules are required. You now know how to install Python and pip, create and activate a virtual environment, and understand the resulting file structure.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367790.67/warc/CC-MAIN-20210303200206-20210303230206-00135.warc.gz
|
CC-MAIN-2021-10
| 5,399
| 43
|
https://neuvoo.be/view/?id=5c854a524760
|
code
|
Belixys is a young Belgian company developing electronic and embedded IOT solutions and operates in sectors including Security, Police, Defence, Construction and Events.
We have more than 10 years of experience in different sectors.
Ensuring production and its follow-up
Ensuring the quality of the products delivered to customers and of the supplied materials
Quickly analyzing suspect parts from production, according to customer requirements (such as reliability criteria), and documenting the analysis results
Defining and improving analysis methods and tools (e.g. improvement of existing evaluation tools, work instructions and defined processes)
A technical degree in electronics
Junior profile; relevant experience is considered an asset
Strong skills in problem solving and root cause analysis
A good understanding of the interaction of electronic components
A good knowledge of quality processes, quality tools and international standards for electronics analyses
Key skills : the ability to work autonomously, a team spirit, proactive attitude, highly-organized
Additional assets :
Coding skills : such as python, C, PHP, ...
Understanding of the Linux kernel, compilation and debugging techniques (gcc, gdb).
Experience in running SW or Linux on embedded processors.
Experience in device drivers, networking stacks and multimedia frameworks is also a plus.
Be essential to support our organisation in its growth
Be an active player of our dynamic and collaborative community
Be working in a small but fast-growing, strong, open and lively company
Be supported and respected in your career (training plan, personal and professional development....)
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178377821.94/warc/CC-MAIN-20210307135518-20210307165518-00633.warc.gz
|
CC-MAIN-2021-10
| 1,653
| 21
|
https://www.biostars.org/p/9520059/#9520084
|
code
|
Hello, I have some co-expression clusters identified, and I wanted to calculate the statistical significance of the correlated genes in a cluster as opposed to a random cluster of genes. To that end, I have calculated the mean Pearson correlation for a given co-expression cluster (every gene to every other gene), call it r_cluster1. Then I have taken a random cluster and calculated the mean, r_cluster_random. I have repeated this randomization 1000 times to get r_cluster_random_1k. Obviously, r_cluster1 is much higher than r_cluster_random_1k. How do I represent this as a p-value or a Z value?
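Not an authoritative answer, but a common way to express this is an empirical p-value (the fraction of randomized means at least as large as the observed one, with a +1 correction so p is never exactly 0) plus a Z-score against the null distribution. A minimal numpy sketch, where r_cluster1 and the 1000 randomized means are stand-in values:
import numpy as np

r_cluster1 = 0.62                               # observed mean correlation (example value)
r_random = np.random.normal(0.05, 0.02, 1000)   # placeholder for your 1000 randomized means

# Empirical p-value: how often a random cluster does at least as well.
p_empirical = (1 + np.sum(r_random >= r_cluster1)) / (1 + len(r_random))

# Z-score of the observed mean against the null distribution.
z = (r_cluster1 - r_random.mean()) / r_random.std(ddof=1)
print(p_empirical, z)
Note the Z-score is only meaningful if the randomized means are roughly normally distributed, which is worth checking with a histogram.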
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511386.54/warc/CC-MAIN-20231004152134-20231004182134-00684.warc.gz
|
CC-MAIN-2023-40
| 599
| 1
|
https://preview.npmjs.com/package/@humblebee/ui-react
|
code
|
Honeycomb UI React by Humblebee
Humblebee white-label UI components based on theme-ui
Humblebee is a digital product and service studio based in Göteborg, Sweden.
We believe in the values Open Source projects bring us on a daily basis, and those packages are our modest contribution.
We hope you will like them as much as we do :)
PS: we are always looking for talented and creative mindsets, if you are interested, reach out to us :)
Install the packages from your favorite package manager
npm i -S theme-ui@next @humblebee/ui-react # or yarn
Because this package depends on the theme-ui@next pre-release, some changes will probably be required from time to time until that release is final.
Follow the guidelines from the root CONTRIBUTIONS.md file from this repository
Run microbundle in watch mode
Build the library
Run the tests
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153816.3/warc/CC-MAIN-20210729043158-20210729073158-00327.warc.gz
|
CC-MAIN-2021-31
| 762
| 13
|
https://support.pega.com/question/pega-web-adapter-automation-displays-pop-ups-only-when-running
|
code
|
I am creating a Pega automation that uses the Pega web adapter (IE). For some reason, I get pop-ups (while debugging/stepping through) that do not appear when running the steps manually or during interrogation. Has anyone encountered this before and knows of a workaround or fix?
The current pop up says 'Are you sure you want to leave this Page?' and has two options 'Leave this page' and 'Stay on this page'.
This may mean that your data is not being fully detected by the page and/or that you are not properly submitting the data. During interrogation, are you using Test Methods to ensure that the adapter is supplying the data to the page? Do these pop-ups appear if you run the automation without breakpoints?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571086.77/warc/CC-MAIN-20220809185452-20220809215452-00108.warc.gz
|
CC-MAIN-2022-33
| 703
| 3
|
https://markmail.org/message/t545hqtavntlm3a7
|
code
|
|Subject:||Re: [xwiki-devs] [Proposal] XWiki Core - Second take|
|From:||Thomas Mortagne (thom...@xwiki.com)|
|Date:||Aug 7, 2015 8:06:58 am|
On Fri, Aug 7, 2015 at 4:52 PM, Eduard Moraru <enyg...@gmail.com> wrote:
I have re-read the original thread and scanned the remarks done by Denis and I have to say that I kind of agree with him on some aspects (or at least with what I understood from his message since I scanned it quite quickly).
Basically, I also don`t see much point/value in splitting the code into multiple repositories. IMO, we should only have the xwiki and the contrib organisations and move as much as possible from xwiki to contrib, i.e. move what you call "vertical" extensions to contrib, where everybody can easily contribute like they would to any other extension.
In terms or differentiating between quality, it should just be a matter of community feedback and what the community values to be of quality or not. In other words: ratings, votes, likes, whatever.
The community does not hit the code repositories first to look at where the code is located, but the other way around. A user first hits the XWiki Extensions repository (extensions.xwiki.org) or the Extension Manager UI where he is interested on searching for his needs and deciding based on ratings, community feedback, featured extensions, etc. which result is best for him.
IMO, raising the administrative complexity of the community will not help us work faster/better and will not simplify the contribution process for outsiders, but rather the opposite.
Additionally, there is nothing stopping us, or anybody else for that matter, from setting up additional extension repositories where only hand-picked extensions are published and where users can get certain levels of guarantees on quality, support, etc. But, like Denis was saying, this is about the artefacts, not about the sources.
If we are worried about people from contrib making bad commits on high-profile contrib extensions, we can easily revert and warn the misbehaving user. On 3 strikes he's out. Personally, I find this much simpler and in line with our wishes to simplify administrative tasks (and a bit in line with what we have done for jira where we are giving users more power in handling issues).
P.S.: A reminder to whoever will be doing the moving of code from one repo to another: please! reference the source repository and the source commit ID so that when we use blame we don`t reach a dead end. Especially if there is no jira issue to track the move, the history is lost to oblivion. (I know it is technically still there, but it's almost impossible to find)
Actually on that subject what I do is copying the history (using the great "git subtree" extension). See https://github.com/xwiki-contrib/xwiki-platform-cache-oscache that I moved recently for example.
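For illustration, that kind of history-preserving move can be sketched with the subtree extension (module path, branch and repository names here are hypothetical):
# In the source repository: extract the module's history into its own branch.
git subtree split --prefix=xwiki-platform-cache-oscache -b oscache-history
# Push that branch to the destination repository as its master branch.
git push git@github.com:xwiki-contrib/xwiki-platform-cache-oscache.git oscache-history:master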
On Fri, Aug 7, 2015 at 12:46 PM, Gabriela Smeria <gabr...@xwiki.com> wrote:
Here's my +1 for this proposal. I strongly agree with one change, because I also had it in mind for a while now. And that is: moving the "vertical" modules out of the xwiki github organization repos, since it would be easier for contributors to participate in improving and/or adding extensions and also, IMO, it will decrease the build time.
*Gabriela Smeria* *Web Developer* gabr...@xwiki.com skype: smeria.gabriela
I’d like to progress with this idea so let me summarize this thread’s discussion so far:
* +1 from Thomas, Guillaume, Caty and Marius
* No answer from Edy on whether he’s ok with the proposal or not. Edy? :)
* Denis seems negative about it but I agree with Thomas’s reply in that the points raised by Denis do not concern this discussion. Denis commented about publishing and installing Extensions, whereas this proposal was only about a location for storing some extensions. Extensions can be developed anywhere and don’t have to go into this new proposed location. Denis, could you please review this new proposal with this in mind?
* There were discussions about the name and devs express doubts about using xwiki-contrib-sandbox.
I’d like to progress so here’s my second proposal. It differs from the first proposal on the following points:
* All our code is contributed so I don’t think we need to emphasize this point and I don’t think we need to have “contrib” in the name of the github repos. This will lead to shorter names which is better.
* I propose to have 3 github org:
** xwiki-core (currently “xwiki” but we should probably rename it - Github will create redirects and the only downside is that we need to check it out for making repo changes)
** xwiki-extensions (new). For maintained and good quality level extensions, following the charter defined in the first proposal (we’ll tune it). Committers are added extension by extension and will be voted on the devs list for now, by the xwiki core devs (we’ll tune that later on)
** xwiki-incubator (currently “xwiki-contrib” but we should rename it). Extensions in xwiki-extensions that are no longer working with the latest LTS and that nobody is fixing will move back to xwiki-incubator too.
* I propose to change the goal of the contrib.xwiki.org wiki and to expand its goal. Right now it’s focused about the xwiki-contrib organization on GitHub. I propose to make it the wiki that explains how to make contributions to the XWiki ecosystem in general. We would move http://dev.xwiki.org/xwiki/bin/view/Community/Contributing + add pages for explaining how to contribute to xwiki-core, xwiki-extensions and xwiki-incubator.
* ATM we should continue to use the “org.xwiki.contrib" groupid for code in the xwiki-incubator and xwiki-extensions organizations. Ideally we should use org.xwiki.extension but it’s already used by the Extension module in xwiki-core. An option would have been to use org.xwiki.core for the core but that would break too much code so the only option is to keep having a special prefix for non-core code. Other ideas: “org.xwiki.module”, “org.xwiki.ext”, “org.xwiki.external”, “org.xwiki.addon”. The simplest is to keep “org.xwiki.contrib” I think, WDYT?
Once (and if) we agree on this, I’d like to quickly move some existing extensions from the xwiki-core organization into xwiki-extensions, starting with the FAQ Application, in order to start testing this new organization.
Hi committers (and devs in general),
I’m submitting to you this idea, to try to improve the xwiki open project and to give it a new dynamism. I believe the topics discussed below are made even more important since we’re soon going to develop the notion of flavors in XWiki.
Note that this proposal obsoletes the
Issues to solve
===============
* The scope of the code maintained by the XWiki Dev Team (== the xwiki github organization) is increasing but the team stays relatively small
* The more stuff we move into the repos of the xwiki github organization, the less easy it is for non-“XWiki Dev Team” committers to participate and we want more contributions
Proposed solution
=================
Executive summary:
* Reduce the scope of all the code located in the xwiki github organization by only keeping “core” modules
* A “core" module is defined by being a generic transversal module that can be used in lots of XWiki flavors, if not all. This is opposed to “vertical” modules which are modules specific to a usage of XWiki.
** Examples of “core" modules: logging module, configuration module, distribution wizard, statistics application, annotations, active installs, one base flavor (the “XWiki” flavor), etc
** Examples of “vertical” modules: meeting manager application, blog application, FAQ application, flavors (except the base flavor), etc
Some consequences:
* We need a new location for several modules that would go out of the xwiki github organization repos
* It would be good to separate sandbox extensions from 1st class extensions that are maintained and developed following best practices. We need some way to maintain the quality of important extensions
Detailed Implementation:
* The “xwiki” github organization’s description becomes “XWiki Core” (it’s too complex to rename the org to “xwiki-core” IMO)
* “XWiki Dev Team” becomes the “XWiki Core Team” (and committers in there are called “XWiki Core Committers”).
* “xwiki-contrib” is split into 2 github organizations (technically we rename it to “xwiki-contrib-sandbox”):
** “xwiki-contrib-sandbox” (or “xwiki-incubator”), where newly proposed extensions or abandoned extensions are located
** “xwiki-contrib-extensions”, where maintained extensions are located.
* These 2 organizations are commonly referred to as “XWiki Contrib"
* Same as now, anyone requesting a repo in xwiki-contrib-sandbox would be granted one and he/she’d be given write access to all repos in the xwiki-contrib-sandbox organization.
* We define some rules for graduating from xwiki-contrib-sandbox to xwiki-contrib-extensions. For example:
** The extension should have been in xwiki-contrib-sandbox at least 6 months (this gives time to see if the extension is maintained during that time and will survive the test of time - most extensions will die in the first months)
** The extension should have had more than 2 releases and be published on extensions.xwiki.org (http://extensions.xwiki.org) with documentation
** The extension should work with the latest LTS version of XWiki + the latest stable version of XWiki (right now that would be 5.4.5 + 6.3). Note that if the extension has to use new API it’s ok that it doesn’t work on the latest LTS.
** Generally follow the practices defined at http://dev.xwiki.org
* Each extension in xwiki-extensions has a leader/maintainer. He/she’s the one proposing to move the extension from xwiki-sandbox to xwiki-extensions. He/she’s responsible for ensuring that the extension gets regular releases and is maintained in general. He/she defines initially the list of committers in his email proposal for moving the extension.
* We create a PMC (Project Management Committee) for XWiki Contrib, generally in charge of both xwiki-contrib-sandbox and xwiki-contrib-extensions (voting new extensions in xwiki-contrib-extensions, vote new PMC members, etc). To bootstrap it, I would send a mail on devs@ asking who’s interested to be part of this committee. I expect some core committers + some contrib committers to stand up.
* Contrib extensions keep using the org.xwiki.contrib package name and groupid as currently defined at http://contrib.xwiki.org

Note: The idea is that xwiki core is developed as a team maintaining code in there, xwiki contrib is developed extension by extension (each extension is an island). This allows anyone to propose extensions in XWiki Contrib without the need for everyone to support them.
-- Thomas Mortagne
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862248.4/warc/CC-MAIN-20180619095641-20180619115641-00113.warc.gz
|
CC-MAIN-2018-26
| 11,951
| 75
|
https://westdao.org/media/waves-enterprise-team-tokens-and-plans-for-q4-2020
|
code
|
We publish an address of a smart contract with the Waves Enterprise team tokens, and share our plans for the fourth quarter of 2020.
A month ago we announced a solution for storing Waves Enterprise team tokens on a smart contract and promised to publish its address in October, and thus the time has come.
Smart contract with WEST tokens
A smart account with a script written in the Ride programming language is used to store the team tokens. The team decided to lock its entire pool of 160 million tokens (not 150 million as previously assumed) until January 1, 2022. This approach will simplify mutual understanding between the team and the community by resolving questions about the remaining 10 million and the date of their release into circulation. You can check the address where the tokens are stored here: https://client.wavesenterprise.com/explorer/transactions/user/3NcvvFiweQJESSRhHurFTLnzaLdEB9pLJ8J/info. Check the smart contract's address, with the script prohibiting transactions before the above-mentioned date, here: https://client.wavesenterprise.com/explorer/transactions/id/6sbvmn9oT75QurAR9M6H3xjundxokzf1mj95i97sPncC.
The possibility of freezing the tokens of early investors on a smart contract under similar conditions is still under discussion. At the moment it is hard to predict a specific date for reaching agreements on this. However, it is already clear that investors do not have short-term plans to bring their tokens into circulation.
Q4 2020 prospects
After clarifying the tokens topic, we want to let you know that we have an important quarter ahead from a business perspective, one that may prove no less significant than the previous quarter, when our technologies were used during federal elections in the Russian Federation.
Waves Enterprise clients and partnerships
By the end of this year we plan to tell you about several major projects based on Waves Enterprise technologies that we have been working on for a long time but haven’t been able to disclose details until the official release due to an agreement with our partners.
Business release of the voting service
A commercial version of the voting service will be released soon, which will work on the Waves Enterprise main network. The team plans to make substantial efforts to promote and popularize the service, which in turn should increase the volume of transactions in the main network and increase the profitability of maintaining network nodes. The commercial version is expected to be released in early November 2020.
The end of 2020 will be hot, so stay tuned!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511351.18/warc/CC-MAIN-20231004020329-20231004050329-00386.warc.gz
|
CC-MAIN-2023-40
| 2,571
| 12
|
https://stackabuse.com/bytes/how-to-kill-a-process-using-a-port-in-linux/
|
code
|
When running applications on Linux, you may find that something you're trying to launch can't because a process is already using the port that your new application needs. This is commonly the case with web applications. Another possibility is that a process has become stuck or unresponsive. This can happen for a variety of reasons, including software bugs or system crashes.
In these cases, it may be necessary to kill the process in order to free up the resources it's using. One way to do this is by using the process's port number.
Killing the Process
If we know that we want to free up port 8080, for example, it helps to find the process using that port. First we'll need to find the process ID (PID) associated with the port. You can do this using the lsof command, which lists all the open files on the system. To find the PID associated with a specific port, use the -i flag to specify the port number and the -t flag to only show the PIDs:
lsof -i :<port number> -t
For example, if the process is using port 8080, you would use the following command:
lsof -i :8080 -t
This will print the PID of the process using that port. Once you have the PID, you can then use the kill command to terminate the process. The kill command sends a signal to the process, which can either terminate it or allow it to clean up before exiting.
To kill the process immediately, use the following command:
kill -9 <pid>
Be sure to replace <pid> with the PID of the process you want to kill. For example, if the PID is 1234, you would use the following command:
kill -9 1234
This will immediately kill the process associated with that PID.
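As a convenience (a hedged one-liner, not from the article itself), the two steps can be combined with command substitution:
kill -9 $(lsof -i :8080 -t)
This looks up the PID(s) on port 8080 and kills them in one step.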
In some cases, the kill command may not be able to terminate the process. This can happen if the process is running with special privileges, such as those granted to system processes.
In these cases, you can try using the killall command, which sends a signal to all processes with a given name. To use killall, specify the name of the process you want to kill and the signal to send.
SIGKILL is a Unix signal that is used to immediately terminate a process. The signal cannot be caught or ignored, and it is not possible for the process to clean up any resources or perform any other actions before it exits.
Note: The signal is typically used as a last resort when a process is unresponsive or stuck, and other methods of terminating the process have failed.
For example, to kill a process named my_process using the SIGKILL signal, you would use the following command:
killall -s SIGKILL my_process
This will attempt to kill all processes with the name my_process, regardless of their PID or privileges.
Killing a process using a port in Linux involves finding the PID of the process with the lsof command, and then using the kill or killall command to terminate the process. This can be a useful technique for freeing up resources and troubleshooting unresponsive processes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100081.47/warc/CC-MAIN-20231129105306-20231129135306-00462.warc.gz
|
CC-MAIN-2023-50
| 3,223
| 39
|
https://iti.larsys.pt/member/ines-santos-silva/
|
code
|
Inês Santos Silva
Inês dos Santos Silva is a PhD student in Computer Science at Instituto Superior Técnico (IST), School of Engineering of the University of Lisbon, Portugal. After receiving her master's degree in Information Systems and Computer Engineering at Alameda in 2017, she joined the ARCADE project at INESC-ID as a junior researcher until 2020. She taught at the Escola Superior Náutica Infante D. Henrique in 2018 and, in the same year, became a teaching assistant in Human-Computer Interaction at Instituto Superior Técnico. She served on the organizing committees of major ACM and IEEE conferences, as student volunteer chair at ACM ISS 2020 and IEEE VR 2021, and as publicity chair at MUM 2022. She has authored a conference paper at CHI 2020 on her PhD topic.
She is a researcher at ITI – Institute for Interactive Technologies/LARSyS. Her research interests include human-computer interaction, technologies for emotional wellbeing, assistive technologies and user-centred design interfaces, especially applied to Stroke Survivors and their Caregivers and therapists.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510100.47/warc/CC-MAIN-20230925215547-20230926005547-00701.warc.gz
|
CC-MAIN-2023-40
| 1,077
| 3
|
https://doc.sitecore.com/xp/en/developers/sxa/17/sitecore-experience-accelerator/designing.html
|
code
|
This section describes how to work with page designs, partial designs, and themes.
To make structuring pages easier, SXA provides a mechanism for locking down designs and reusing them: Page Designs and Partial Designs. Read the topics in this section to learn more about page designs and partial designs.
Themes define the look and feel of a site and can be created separately from the site functionality and content. Learn how to work with themes by reading the following topics:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506421.14/warc/CC-MAIN-20230922170343-20230922200343-00596.warc.gz
|
CC-MAIN-2023-40
| 480
| 3
|
https://www.reddit.com/r/ccna/comments/167tf7/subnetting_question/
|
code
|
Forgive me if this has been asked before, I couldn't find an accurate answer via search.
I have been subnetting using the method posted in this thread a few months ago. It seems I can accurately and quickly subnet almost any problem thrown at me, the one type of question I can't seem to figure out how to subnet with that method is:
Q: You are assigned the IP address 172.30.0.0 and you need 1000 hosts on your network; what is your subnet mask?
Is there an easy way to do this using the method in the previous post? Any help would be greatly appreciated!
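For what it's worth, here is one worked sketch of this problem type (not from the linked thread): 1000 hosts need 10 host bits, since 2^10 - 2 = 1022 usable addresses, which leaves a /22 prefix, i.e. a mask of 255.255.252.0. Python's ipaddress module can sanity-check that:
import ipaddress

net = ipaddress.ip_network("172.30.0.0/22")
print(net.netmask)            # 255.255.252.0
print(net.num_addresses - 2)  # 1022 usable hosts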
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189092.35/warc/CC-MAIN-20170322212949-00490-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 558
| 4
|
http://alexandrugroza.ro/mptec/index.html
|
code
|
Welcome to the Microprogramming TECHNIQUES Internet site! Take some time to read about our history, our goal, and ultimately, our software products. Navigate your way through the software download galleries to get whatever suits your needs.
Our primary software research branches are:
Secondary software research branches are:
| System Utilities / OS Core design
| Operating System Utilities
| Electronics / Automation software
The primary software research branch is devoted to developing software applications mainly for current operating systems, while the secondary software research branches are devoted to researching new solutions for existing operating systems and for various industrial automation panels. Most of the finished software belonging to the secondary branches can be found in the Software and DOS sections of this site.
Microprogramming TECHNIQUES launched DisCleaner 2.5.1!
Lots of improvements, easier to use, and increased versatility.
Get it for free!
about Microprogramming TECHNIQUES
We started out with DOS programming in the year 1998, on the initiative of Mr. Alexandru Groza, who is also the lead programmer and software architect at Microprogramming TECHNIQUES. The first program ever developed was DiskInfo V.1.00, used to report information about the storage peripherals installed in any IBM PC or compatible machine. DiskInfo is still being improved and will remain available for download, as it is now a mature technology. It is still quite useful, although it has remained a pure DOS 16-bit application.
Later on we developed a wide range of applications for operating systems from DOS to Windows 7. We mainly produce applications designed to handle disk and file routines (disk cleaners, disk information retrievers, disk formatting, etc.), but StatUtils was the first software suite to break this rule: it consisted of four (4) utilities designed to analyze and work with geographical statistical data. Since 2005 we have also been involved in microcontroller programming.
We are a small non-profit company and we mainly produce software that may be freely distributed under certain terms, such as never altering the executable code or any other parts of the software package. This does not stop us from delivering software that is competitive by today's standards, so we continuously strive to improve our work for total customer satisfaction.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585201.94/warc/CC-MAIN-20211018093606-20211018123606-00683.warc.gz
|
CC-MAIN-2021-43
| 2,429
| 14
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/afsharm/ibm-call-for-code-2fmc
|
code
|
I am in search of open source projects to contribute to. Firstly, I hope it can increase my chances in the job hunt. Secondly, I can learn more technologies and better coding styles. Thirdly, I can grow my circle of connections.
While searching for a suitable project to contribute to, I encountered IBM Call for Code via a Twitter advertisement. As IBM was not as familiar to me as Microsoft, Google, or even Oracle, it interested me all the more. Consequently, I started learning about it.
I continued my research, watched videos, read the medium.com account, and participated in the Slack channel. The atmosphere felt like more of a student one, and the Slack channel's traffic does not seem very high. Finally, I reached the main GitHub page. Among the projects, I found Project Lantern in the Call for Code Project Catalog. Project Lantern, which is about long-range wireless apps, consists of several projects. One of them is lantern-serve, which seems to be a traditional back-end server built with express.js or similar technologies. Currently I intend to investigate this project further.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817819.93/warc/CC-MAIN-20240421194551-20240421224551-00213.warc.gz
|
CC-MAIN-2024-18
| 1,081
| 3
|
https://github.com/jenkinsci/extended-security-settings-plugin
|
code
|
Extended Security Settings for Jenkins
Jenkins plugin to configure Extended Security Settings: a set of additional security settings for Jenkins.
Disable Password Autocomplete
This feature is designed to allow overly paranoid security scanners to certify Jenkins.
This adds an autocomplete="off" attribute to password inputs on the signup and login pages.
Note that this feature is generally ignored by modern web browsers due to the inherent insecurity of attempting to prevent password managers from working which encourages weak passwords or bad password management practices (like using sticky notes).
See Choosing Secure Passwords for more details.
Enable X-XSS-Protection Header
This feature enables the HTTP header X-XSS-Protection: 1; mode=block to be sent on all requests, which some web browsers interpret as an instruction to automatically block suspected cross-site scripting attacks.
Several web browsers (e.g., Firefox, Edge, and Chrome) do not support this header.
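For a quick check (the URL below is a placeholder), you can verify with curl that a Jenkins instance is actually sending the header:
curl -sI https://jenkins.example.com/ | grep -i x-xss-protection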
Check out the wiki page for the changelog.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00200.warc.gz
|
CC-MAIN-2019-35
| 1,009
| 13
|
https://forum.kingsnake.com/snappingturtle/messages/1090.html
|
code
|
Posted by alanmturtle on February 18, 2003 at 18:02:01:
In Reply to: weight posted by turtleguy316 on February 18, 2003 at 15:41:01:
I have a 9-inch and he's a little over 6 pounds....
:how much should an 8-inch ally weigh?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648245.63/warc/CC-MAIN-20230602003804-20230602033804-00039.warc.gz
|
CC-MAIN-2023-23
| 287
| 7
|
https://www.technojobs.co.uk/job/2862384/senior-engineerdeveloper/
|
code
|
My client, a large global brand, is looking for an experienced API Engineer (server side) on an initial 14-month contract with the potential to extend. This role is inside IR35.
You will work on the mobile application service layer that will be used by tens of millions of customers around the world. We want someone with strong technical skills and creativity who enjoys solving tough problems and working with new technologies. You'll be working in a fast-paced environment with the stability of working for a Fortune 100 company. Your primary responsibility will be to work on a small team of engineers developing mobile products. You should be familiar with modern software development methodologies and be able to dive deep and rapidly iterate on ideas despite ambiguity.
- Contribute to the design, architecture, and development of server-side APIs that are elegant, efficient, secure, highly available, and maintainable
- Works closely with other developers (within the team and outside the team), and product owners to ensure technical compatibility and user satisfaction
- Contribute insights into ways to improve our processes and tools
- Be highly motivated and maintain a positive, "can-do" attitude in a fast moving environment
- Follow and help cultivate consistent development best practices
- Collaborates with project manager and other software developers to plan, design, develop, test, and maintain the Server side APIs
- Provides thought-leadership regarding implementation best practices
- Assists in estimation and assessment of feasibility of features
- Foster a collaborative spirit across multiple teams
Qualifications and Experience:
- Bachelor's degree in Computer Science, Computer Engineering, Information Systems Technology or related field.
- Knowledge of developing trends and emerging standards in mobile apps (RxJava, Kotlin, etc), mobile payments, and wearables
- Minimum of 3 years of experience in API/Web Service Development and 5 years of experience in Java/J2EE/Web Development
- Excellent interpersonal and communication skills
- Familiar with the complete software development life cycle (e.g. requirements, analysis, design, implementation, testing, and documentation) and execution models (e.g. Waterfall, Agile, etc.)
- Great knowledge of Java design principles, patterns, and best practices
- Excellent technical knowledge of Java, J2EE, Spring and RESTful API development
- Thorough understanding of JSON, XML, SOAP, HTTP, web services technologies, and data structure fundamentals, with experience in multi-threaded programming
- Experience with build (using Gradle, Maven, Ant, etc.) and deployments on application servers (like Websphere, Weblogic)
- Experience working with testing libraries (like Junit, Mockito)
- Familiar with Continuous Integration/Deployment (using Jenkins, Maven, JMeter, etc.)
- Knowledge of the open-source Java ecosystem and the libraries available for common tasks
- Understanding of code versioning using git, github and gitflow.
- Understanding of accessibility and security compliance
- Understanding of fundamental design principles behind a scalable enterprise application
- Experience in creation and review of software and architecture designs
- Experience with Kotlin is a plus
Contact Name: Experis UK
Job ID: 2862384
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107904287.88/warc/CC-MAIN-20201029124628-20201029154628-00620.warc.gz
|
CC-MAIN-2020-45
| 3,307
| 31
|
https://remotec.com.hk/what-is-nfc-technology/
|
code
|
Near Field Communication (NFC) is a contactless communication technology based on a radio frequency (RF) field with a base frequency of 13.56 MHz (the HF range of RFID). It is designed to exchange data between two devices over a distance of 10 cm or less. It is normally used for contactless payments via mobile wallets such as Apple Pay and Android Pay, as well as contactless cards. NFC is based on the RFID protocol with two-way communication and is treated as a subset of the RFID family. There are two main types of NFC devices: passive and active. Passive devices have no power source and can only connect to active devices. Active devices have a power source and can send and receive data by switching between the normal active (transmitting) state and a passive (receiving) state.
The NFC Forum Certification Mark is a global indicator that a device implementation has met the standards of the NFC Forum Compliance Program. Compliance applicants should join as NFC Forum members; use of the Certification Mark is granted once the NFC Forum certification process is completed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710421.14/warc/CC-MAIN-20221210074242-20221210104242-00873.warc.gz
|
CC-MAIN-2022-49
| 1,118
| 15
|
https://www.reddit.com/r/SubSimulatorGPT2/comments/bxuyde/what_do_you_call_two_people_with_the_same_name/?utm_campaign=NLP%20News&utm_medium=email&utm_source=Revue%20newsletter
|
code
|
Joe and John are named because there were only 2 people in the two couples,
The husband and wife.
The husband and wife move to another state, because the State of New York is very small.
The husband moves to the U.K. to work as a mailman.
the wife moves to Jamaica to work as a mailman.
Joe moves back to the U.K. to work as a mailman.
John moves back to Jamaica because he wants to go there to work as a mailman.
The wife moves to the U.K. to work as a mailman. Joe moves back to Jamaica because he wants to go there to work as a mailman. This is the first time that I've heard a new joke.
I was about to put up a new one: what do you call two people who have the same name but different last names.
I know it's probably awful, but:
Joe: My last name is actually Joe. John: O.K., and then you could call me Joe.
I'm sure you could make more good jokes, but this is what I think works best for this thread.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528037.92/warc/CC-MAIN-20190722133851-20190722155851-00298.warc.gz
|
CC-MAIN-2019-30
| 906
| 12
|
http://www.linuxquestions.org/questions/linux-newbie-8/mandrake-automatically-booting-dont-want-it-to-want-windows-please-help-330568/
|
code
|
You can also use WinXP's boot loader to load Mandriva. Here's how:
1. Boot into Linux.
2. (You must know on which partition the MBR is. It is usually /dev/hda, if you have only 1 hard disk) Get a formatted floppy, put it in the drive, and do:
mount /floppy; dd if=/dev/hda of=/floppy/bootsect.lnx bs=512 count=1; umount /floppy
3. 'Heal' WinXP as I said before. Don't worry, you'll be able to boot into linux. (ATTN: http://portal.suse.com/sdb/en/2002/1...l_grub_nt.html
this link tells more. If you created a boot floppy for Mandriva, GOOD. If not, do so. If all of this fails, you will still be able to boot Mandrake from that floppy)
4. Copy a:\bootsect.lnx to c:\
5. As Administrator, in XP, do: Start > My Computer (right click) Properties > Advanced > Startup And Recovery > Settings > Edit. This will edit your c:\boot.ini file, which is used to boot XP.
6. Add at the end: c:\bootsect.lnx="MandrakeLinux" (a full example boot.ini is shown after these steps)
7. Save/quit/get out. Reboot.
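For reference, a typical resulting c:\boot.ini might look like this (the timeout and partition values are examples; keep whatever your file already contains and only append the last line):
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect
c:\bootsect.lnx="MandrakeLinux"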
remember http://portal.suse.com/sdb/en/2002/1...l_grub_nt.html says more.
I use XP's boot loader to boot Debian myself!
Welcome to the wonderful world of Linux, BTW!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188914.50/warc/CC-MAIN-20170322212948-00663-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,108
| 13
|
https://jobsinkent.com/job/1210177
|
code
|
WHO ARE WE
Solirius Consulting, founded in 2007, delivers technical consultancy and application delivery.
Our focus is on having a sound understanding of technology, how to implement it, and critically, how to bring this knowledge to bear to solve real world problems for our clients.
We specialise in developing and advising on large systems which integrate with complex environments and business processes.
We have over 200 consultants and software developers, and are growing fast. We are based in Central London, and routinely take on projects across the UK and beyond. We operate as a flat organisation, and believe in trusting and supporting all of our team to operate independently, making the most of their expertise in their field. We believe in giving everyone an opportunity to continually learn and grow in the direction they choose, and we actively help and support people to shape the career that they wish to have.
WHO ARE YOU
The ideal candidate is conscientious, client-focused, reliable, self-motivated and able to work without close supervision. You are a technically proficient application developer with impressive analytical and communication skills; you are comfortable with talking to clients and enjoy the variety of working on different projects.
You enjoy being involved in the wider development community and will relish playing a pivotal role in helping to shape their culture and in embedding quality and best practices throughout the development process. You’ll be sharing your knowledge of tools and techniques within the team and leading discussion about how and where to use them. You’ll also have experience of mentoring junior developers, helping them adopt new approaches to problem-solving and encouraging areas for growth and improvement within the team.
Ideally you will have two years plus current experience in a similar role.
TECHNICAL SKILLS AND EXPERIENCE
Back end development (e.g. Java 8, Node.JS, JMS and ActiveMQ)
Modern software engineering practices
Comfortable working in an agile environment
Spring Boot experience is a plus
Testing tools and methodology (TDD and Capybara / Selenium, JUnit or TestNG)
Strong analytical, written communication and presentation skills are extremely important to this role, including the ability to communicate and engage with stakeholders and colleagues at all levels.
PACKAGE AND BENEFITS
Competitive salary, dependent on experience
25 Days holiday
Option to work from home one day a week
Annual Away days
Bi-weekly company socials
Ongoing training
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540508599.52/warc/CC-MAIN-20191208095535-20191208123535-00238.warc.gz
|
CC-MAIN-2019-51
| 2,538
| 23
|
https://www.biostars.org/p/9462700/
|
code
|
Hello, I used an RSEM gene-level count estimates matrix and just rounded the values using the round() function in R to feed them to DESeq2, but the results are very odd (a VERY low number of differentially expressed genes). Is this really a solid way to go about differential analysis, or should I start from the raw data again?
I read multiple threads that suggested that the rounding method, although not optimal, should work fine.
NOTE: The study I got the data from provided the data as an RSEM gene-level count matrix and an FPKM-normalized matrix. I had to use the RSEM counts since I know DESeq2 only takes non-normalized counts. The study used an independent t-test on the FPKM file to do the analysis, but I read somewhere that this is highly discouraged.
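Purely as an illustration of the rounding step (done here in Python; the file name and pandas usage are assumptions, not from the post):
import pandas as pd

# Load the RSEM gene-level estimated counts (genes x samples).
counts = pd.read_csv("rsem_gene_counts.tsv", sep="\t", index_col=0)

# DESeq2 expects integer counts, so round the estimates before export.
counts.round().astype(int).to_csv("rsem_counts_rounded.tsv", sep="\t")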
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573744.90/warc/CC-MAIN-20220819161440-20220819191440-00359.warc.gz
|
CC-MAIN-2022-33
| 749
| 3
|
https://applesecrets.com/microsoft-tablet-surface-to-start-at-499-salt-lake-tribune/
|
code
|
U.S. News & World Report (blog)
Microsoft tablet Surface to start at $ 499
Salt Lake Tribune
The price matches that of Apple Inc.'s iPad, the most popular tablet computer, but the base model of the Surface has twice as much storage memory, 32 gigabytes. The screen is also slightly larger. In the past few months, Google Inc. and Amazon.com Inc.
The Microsoft-Apple Tablet Wars Commence
Microsoft Prices Surface Starting at $ 499 to Rival IPad
Is Microsoft going soft or maturing gracefully?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00044.warc.gz
|
CC-MAIN-2019-47
| 741
| 8
|
https://zone.ni.com/reference/en-XX/help/370736U-01/dcpowerpropref/pnidcpower_overrangingenabled/
|
code
|
|NI-DCPower (English | Japanese)|
Short Name: Overranging Enabled
Property of niDCPower
Refer to the Ranges topic in the NI DC Power Supplies and SMUs Help for more information about overranging.
Default Value: Refer to Supported Properties by Device for the default value by device.
The following table lists the characteristics of this property.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00093.warc.gz
|
CC-MAIN-2021-43
| 346
| 6
|
https://www.txconsole.com/tags/devops
|
code
|
This blog shows simple steps to containerize Next.js app using cloud native buildpacks.
Simplified workflow to perform continuous delivery for apps with Tanzu Mission Control
This blog aims to build an LDAP server with docker and docker-compose. It is useful for deploying a temporary LDAP server for PoCs and for testing our microservices.
This blog is a comprehensive guide to building and deploying apps to Kubernetes with Skaffold, a tool that provides a simple workflow for local development of apps.
Sealed Secrets provide a secure way to encrypt the Kubernetes secrets.
A Hello app to demonstrate the workflow and the tool's features. I built this app in Java, using Spring Boot for the MVC framework.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510300.41/warc/CC-MAIN-20230927135227-20230927165227-00035.warc.gz
|
CC-MAIN-2023-40
| 703
| 6
|
https://fehuhreinn.com/en/valhalla/6-returns-exchanges
|
code
|
If you want to make a change of size or product, keep in mind the following:
1 - You have a period of 15 days to make a change in size or product.
2 - Send us an email to email@example.com with the subject "exchange + your order number" and tell us which item or size you want to exchange it for.
3 - The same delivery person who brings you the new product will pick up the old one; pack it the same way you would like to have received it.
4 - Here's the kicker, it's totally FREE.
If you want to return a product:
1 - You have a period of 15 days to make a return.
2 - You do not have to justify the reason for your return, but it always helps us to know it to improve.
3 - Send us an email to firstname.lastname@example.org with the subject "return + the number of your order" to inform us and we will respond with the steps to follow.
4 - We forgot, this is also FREE.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510942.97/warc/CC-MAIN-20231002001302-20231002031302-00367.warc.gz
|
CC-MAIN-2023-40
| 885
| 10
|
https://www.native-instruments.com/de/reaktor-community/reaktor-user-library/entry/show/13411/
|
code
|
Manual Morph Panel
MMP for new Snapshots
- FIX: bank style
- FIX: A and B value display
Paste the macro at the ensemble level. Go to View in the ensemble and set it to ON.
Choose a Bank and a Snapshot, then hit button A for the left selection; same usage with button B for the right selection.
Use the Swap button to switch positions between the left (red A) and right (green B) Snapshot.
Click on Pos (Morph Position); to get a fine result, use the arrow keys for up/down.
If the instruments or Blocks don't contain Snapshots, you can't morph. So prepare your patch!
Choose an ensSnap and CTRL+C its name. Move to your first instrument and add the ismSnap in the Snapshot bank, then CTRL+V so it has the same name as the ensSnap. Go to all other instruments or Blocks and create a Snapshot. Important: now go back to the ensSnap and store the Snapshot. The ensSnap then refers to all ismSnaps.
In MicMac, ensemble morphing is possible.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00439.warc.gz
|
CC-MAIN-2021-43
| 891
| 11
|
http://www.moddb.com/games/cyne
|
code
|
Cyne is a work-in-progress conceptual innovation in the gaming industry: a setting for real-time strategy role-playing games. The concept of RTSRPG is that each unit can be upgraded, leveled up, and equipped. The game is real because all of it takes place in virtual space, and you are creating your own CG avatar. The concept sparked from the conceptual singularity story, which could almost pass as a scientific theory: Man created AI and AI created man, thus creating the loop you see when you look into the cameras recording virtual space with the same camera.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164920565/warc/CC-MAIN-20131204134840-00083-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 704
| 3
|
https://techienews.co.uk/mozilla-epic-team-port-unreal-engine-4-web/
|
code
|
Days ahead of the Game Developers Conference in San Francisco, Mozilla and Epic Games have announced that they are bringing Unreal Engine 4 framework to the web.
Mozilla showed an early preview of the game engine working with Firefox. The demo had “Epic’s Soul” and “Swing Ninja” videos running within the Mozilla’s browser without the use of plugins.
At last year’s Game Developers Conference, Mozilla partnered with Epic, the creators of the most-used middleware in gaming, to port the Unreal Engine 3 to the Web that was demonstrated as a proof-of-concept.
“Any modern browser can run asm.js content, but specific optimizations currently present only in Firefox, ensure the most consistent and smooth experience,” Mozilla said.
“This technology has reached a point where games users can jump into via a Web link are now almost indistinguishable from ones they might have had to wait to download and install,” said Brendan Eich, CTO and SVP of Engineering at Mozilla.
“Using Emscripten to cross-compile C and C++ into asm.js, developers can run their games at near-native speeds, so they can approach the Web as they would any other platform.”
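As a minimal illustration of that workflow (the file name is a placeholder, not from the article), compiling a C source file to JavaScript with Emscripten looks like:
emcc hello.c -O2 -o hello.html
Opening the generated hello.html in a browser runs the compiled code; the -O2 flag enables the optimizations that asm.js-style output benefits from.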
Mozilla claims that the performance of web applications compiled to asm.js has improved from about 40 percent to 67 percent of native speed over the last 12 months, and that it will get even better. The companies see the porting of Unreal Engine 4 as a testament to the power of the Web.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474744.31/warc/CC-MAIN-20240228175828-20240228205828-00284.warc.gz
|
CC-MAIN-2024-10
| 1,454
| 7
|
https://alldus.com/blog/podcasts/aiinaction-thomas-thiele-deutsche-bahn/
|
code
|
Welcome to episode 67 of AI in Action, the podcast that breaks down the hype and explores the impact that Data Science, Machine Learning and Artificial Intelligence are making on our everyday lives.
Powered by Alldus International, our goal is to share with you the insights of technologists and data science enthusiasts to showcase the excellent work that is being done within AI in the United States and Europe.
Today's guest is Thomas Thiele, Program Manager of the House of AI at Deutsche Bahn. Deutsche Bahn is an international provider of mobility and logistics services. Active in over 130 countries, they design and operate the transport networks of the future.
With the integrated operation of transport and infrastructure and the intelligent linking of all modes of transport, they move people and goods on the rails and roads, by sea and by air. With some 300,000 employees, of which roughly 200,000 are based in Germany, Deutsche Bahn are one of Germany’s largest and most diverse employers.
In the show, Thomas will tell you about:
His work at Deutsche Bahn and in academia
Applying AI to impact mobility on commuter journeys
Identifying Anomalies and challenges to overcome
Collaboration with the House of AI and DB
Building relationships and trust around applying AI in industry
Upcoming projects in 2020.
What did you make of Thomas' podcast? Where do you see the future of Artificial Intelligence and Machine Learning heading in the next few years? We would love to hear your thoughts on this episode, so please leave a comment below.
If you would like to hear more from AI in Action then please subscribe and don’t forget to like and share with your friends on social media.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991693.14/warc/CC-MAIN-20210512004850-20210512034850-00305.warc.gz
|
CC-MAIN-2021-21
| 1,696
| 13
|
https://www.clipzui.com/video/j33355d364k5z3s4z526c4.html
|
code
|
How to Flush a Heater Core (Fast) - 06:30
Is your heat not working or is your heater not hot? You may have a clogged heater core so do a heater core flush and unclog it. Learn how to flush your heater core and get it working to full capacity again instead of spending hundreds on a replacement!
Here is the cheap flush kit I showed: http://amzn.to/2mXrqcx
Here is the Thermometer I used: http://amzn.to/2mR5azA
Hose adapter: http://amzn.to/2maFdhe
How to find why you have No Heat: https://www.youtube.com/watch?v=-XjXTVJhFLM
How to Test your Coolant to See if it is Bad: https://www.youtube.com/watch?v=mHTM3dvpD1M
How to Flush Your Coolant: https://www.youtube.com/watch?v=s--5ft5YiHg
How to Replace a Blower Motor Resistor: https://www.youtube.com/watch?v=Q-i0tytSYIM
How to Replace a Blower Motor: https://www.youtube.com/watch?v=7e60wS3_C9s
Here is the longer heater core flush video: https://www.youtube.com/watch?v=LD2LGkUycQg
Due to factors beyond the control of ChrisFix, I cannot guarantee against improper use or unauthorized modifications of this information. ChrisFix assumes no liability for property damage or injury incurred as a result of any of the information contained in this video. Use this information at your own risk. ChrisFix recommends safe practices when working on vehicles and or with tools seen or implied in this video. Due to factors beyond the control of ChrisFix, no information contained in this video shall create any expressed or implied warranty or guarantee of any particular result. Any injury, damage, or loss that may result from improper use of these tools, equipment, or from the information contained in this video is the sole responsibility of the user and not ChrisFix.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479101.30/warc/CC-MAIN-20190215183319-20190215205319-00534.warc.gz
|
CC-MAIN-2019-09
| 1,963
| 15
|
https://www.allinterview.com/company/255/belly.html
|
code
|
When will the testing start? a) Once the requirements are complete b) In the requirements phase
What is the first test in the software testing process? a) Monkey testing b) Unit testing c) Static analysis d) None of the above
Which of the following can we not include in a compiled module? a) check points b) analog statements c) reporting statements
What file contains definition of the field types and fields of documents?
Where are automatic correlation options set?
What do you mean by the term 'normalization'?
What do you understand by user class in cognos?
Difference between oracle's plus (+) notation and ansi join notation?
What are the tools used in your project, and how should this question be answered?
What is the use of ZooKeeper?
What leads assets to turn into a private equity?
How do you check if a value is not empty in PHP?
How can I change the background color?
What is dynamic array in java?
Which sql server table is used to hold the stored procedure scripts?
What to do if problem occurs in downloading your app in android?
What is the best way to declare and define global variables?
What are the types of tables?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00743.warc.gz
|
CC-MAIN-2022-33
| 1,121
| 18
|
https://www.veritas.com/support/en_US/article.HOWTO77525
|
code
|
Use the Veritas Operations Manager Enterprise Server console to create rules for assigning objects to instances of a business view.
Each rule defines the conditions for assigning one object type to one instance. Objects are assigned to the instances at the lowest level of the hierarchy (leaf instance). You create one rule per leaf instance for each type of object. You select from predefined attributes and operators when defining rule conditions.
To create a rule for assigning objects to a business view
In the Veritas Operations Manager Enterprise Server console, click Manage > Business Views > Assignments.
From the Business Views drop-down list, select the business view to which you want to assign objects.
In the Instances pane, select the instance to which you want to assign objects.
On the Manage Rules tab, under Rules for Instances, select the object type and click the Add Rule icon.
In the Add Rule dialog box, define the rule. Assign a name and description and specify the rule conditions.
Optionally click Test Rule to verify the rule syntax. The Test Rule Results dialog box displays a list of all objects to be assigned by the rule. After reviewing the results, click OK.
Click Save. The rule is added to the rules table for that object type.
If the Automatic assignment check box is selected for the business view, all rules for the business view are run automatically whenever objects are rolled up, except for any rules that are set as disabled. To disable or enable rules after saving, use the Disable rules or Enable rules icons above the table of rules.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292151.8/warc/CC-MAIN-20160823195812-00109-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 1,580
| 11
|
https://www.v3.co.uk/v3-uk/news/1980813/ibm-steal-limelight-sco-conference
|
code
|
The SCO Group kicks off its customer and partner conference next week, but its legal case against IBM is likely to overshadow its product announcements.
SCO public relations director Blake Stowell told vnunet.com that the two main areas of focus at the event in Las Vegas will be on new product releases taking place during the next six months, "and the protection of our intellectual property, especially our case involving IBM".
The company is currently embroiled in legal action against Big Blue, which SCO claims has contributed AIX source code to Linux without authorisation.
"The main theme at this year's SCO Forum event will be around the 25th anniversary of Unix on the Intel/AMD platform," said Stowell.
"Obviously, SCO has been squarely focused on that environment since its beginning and we will be taking a look at the past, present and future of Unix computing on Intel/AMD."
Stowell said that the company will be focusing on product news around the next release of SCO OpenServer (code-named Legend), the release of its email and collaboration product SCOoffice Server, and "a unique concept for embracing the Unix developer community around the SCO Unix platforms".
Other highlights at the event, aimed at reseller partners, developers, and enterprise customers, include a keynote by SCO president and chief executive Darl McBride discussing "SCO: past, present, and future" and briefings on SCO's product roadmap.
There will also be a keynote entitled Protecting SCO's Intellectual Property by Chris Sontag, senior vice president and general manager of the SCOsource Division, which aims to persuade Linux users to buy a licence.
Because SCO claims that Linux is an unauthorised derivative work of the Unix operating system which SCO owns, it offers an SCO Intellectual Property Licence for Linux for commercial use of Linux 2.4 and later versions.
SCO's website states: "The licence ensures that Linux end users can continue to run their business uninterrupted without misusing SCO's intellectual property."
But in June it was revealed that this part of the business managed to rake in just $11,000 during its last financial quarter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794872114.89/warc/CC-MAIN-20180528072218-20180528092218-00468.warc.gz
|
CC-MAIN-2018-22
| 2,519
| 15
|
http://www.cinematography.com/index.php?showtopic=57192
|
code
|
I'd like to have my dad's Super-8mm films digitized but since I won't be able to do any pre-edit, I am planing to have every single roll digitized then edit and compile the files afterward depending on content.
I am a newbie in film editing. I have only played around with .mov with QuickTime Pro to put together some sequences shot with my digital camera.
I understand that, as with photography, it's better to scan film into a raw video format that can be edited later (avoiding compression loss on the original master file).
So, my questions:
What would be:
1) the best master file format and best application to use for a newbie to be able to edit (mostly joining, possibly a bit of color correction if not done correctly at the digitization stage? (I don't want to invest in super-expensive pro-applications nor do I have time to learn complex ones).
2) the best frame rate to digitize into (master file)? same as Super-8mm?
3) the best frame rate to save into after editing?
4) the resolution to use at the digitization stage? Is 1080px vertical resolution overkill for Super-8mm?
5) the best technique: frame-by-frame scan or an alternative (telecine)?
A lot of suppliers are offering DVD format as the final product, but is that the best format to edit from?
Hope my questions make sense, my knowledge in this field is microscopic.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512693.40/warc/CC-MAIN-20181020101001-20181020122501-00445.warc.gz
|
CC-MAIN-2018-43
| 1,393
| 14
|
http://conkershomeland.lajnus.com/crashdebugger.html
|
code
|
The game rarely crashes, but it can, either through glitches or through the use of GameShark codes. If the Debug Mode cheat XFYHIJERPWAL_IELWZS was entered in the cheat bar at the Options screen before the crash (or use the code 810EA132 FFFF, which enables the debug cheat along with access to all chapters in chapter mode), the game will start displaying numbers and letters of what at first seems to be random nonsense. Wait a moment, and if “PRESS BUTTON” appears on screen, then it worked. Pressing A or B advances to the “MAIN MENU” screen. Once there, tilt the control stick up or down to select an option, press A to advance to the selected debug screen, and B to go back to the Main Menu. Emulators won’t redraw the screen once the game has crashed, but if you save a state while the game still appears frozen and then reload it, the debugger will appear (everything on the screen will be black, except the letters and numbers). Alternatively, you can use 81000002 1001: turn the cheat on at any point in the game and it’ll freeze up, making it easy enough to enter the debugger.
No need to explain this screen in extreme detail. The text on the bottom shows the date the final build was compiled on, which reads “Jan 31 2001 13:29:42″, and the version number is 19. In the NTSC version's crash debugger, the compilation date reads "Dec 19 2000 09:57:42", and the version number is 163. Hmm... The PAL game seems not to have received much testing, nor any major changes in content. Did Rare get lazy with the PAL conversion??
Self-explanatory; it shows the type of cause that made the game crash, as well as the Register threads. Pressing C-right/left will display two more screens. What is filled in there depends on where in the game the player was at the time of the crash.
Shows the stack. Up and Down on the control stick scroll through the stack, showing the newest value being pushed up on top, with the older values being pushed down, freeing them from memory. The C buttons up and down do the same thing, but only one line at a time for each press.
Depending on the type of crash, the debugger will either say “HOSTDEBUG” or “RETRYCODE”. RETRYCODE only returns you to the debugger without doing anything else, as the game has crashed. HOSTDEBUG, I think, once served an important role during development: it was used for debugging purposes by the developers at Rare to test out various things without having to go through the whole game. According to recent videos on YouTube, there is more to the debugger than initially thought. There were a lot more options, like Memory, Memmap, Cameras, and many more, which by now have been reduced to only three in the retail version, namely Registers, Stack, and Host Debug/Retry Code.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891543.65/warc/CC-MAIN-20180122213051-20180122233051-00446.warc.gz
|
CC-MAIN-2018-05
| 2,782
| 5
|
https://blog.solori.net/2010/01/07/quick-take-esxi-patch-released/
|
code
|
Quick-Take: ESXi Patch Released (January 7, 2010)
Thanks to a tweet from Duncan Epping at Yellow Bricks, we’ve installed the latest ESXi patches to combat the unexpected vCenter problems reported with Update 1 for vSphere. While we’ve not experienced the vCenter problem in the lab, enough users out there have caught it for VMware to issue a “not recommended” warning for ESXi users.
VMware ESX & vCenter Server Alerts
ESX 4.0: If you plan to upgrade ESX 4.0 to 4.0 Update 1 (fixed in 1a), it is critical to read KB article 1016070 before proceeding with the upgrade (affecting HP Proliant Systems w/Insight Agents).
vCenter Server: If you have ESXi hosts connected to vCenter Server 4.0, please do not upgrade vCenter Server to Update 1 before installing the patch (ESXi400-200912001) referenced in KB article 1016262.
– VMware Support
SOLORI’s Take: this reinforces the value of regression-testing patches in a lab or non-critical cluster. Work with your VMware professional(s) to manage VMware/vSphere patches and updates whenever possible.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120101.11/warc/CC-MAIN-20170423031200-00464-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,059
| 7
|
https://ec.europa.eu/digital-single-market/search/site/?f%5B0%5D=im_field_tags%3A547&f%5B1%5D=im_field_tags%3A72
|
code
|
DSM blog post (Monday, 30 September 2013): Open and smart cities for the common future. Promoting open innovation and becoming ecosystem managers instead of service providers are two of the key elements cities need to embrace if they are to innovate and tran ...
The H2020 Future Internet Forum (FIF): The FIF is a registered group which aims to exchange views on H2020 topics relating to "Future Networks" (5G, Cloud, Next-Generation Internet and IoT). The members of this group have been appointed by the re ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320570.72/warc/CC-MAIN-20170625184914-20170625204914-00554.warc.gz
|
CC-MAIN-2017-26
| 567
| 4
|
https://zavod-miks.ru/steam-keeps-validating-my-games-6382.html
|
code
|
Steam keeps validating my games
I also noticed that when I pause the download, and have started Steam from a terminal, I get loads of these errors before the download goes into paused mode: I have seen quite a few of these googling the internet, but no real fix. Result below:
PID  | PRIO | USER    | DISK READ | DISK WRITE | SWAPIN | IO      | COMMAND
4859 | be/4 | ******* | 62.38 M/s | 62.44 M/s  | 0.00 % | 82.08 % | caja -n [pool]
Downloaded and set up the Steam Windows client with Wine. Got the same download speeds and the same symptom of it getting stuck in "Writing to disk", while iotop shows that not particularly much is happening on the disk.
Maybe months down the line it asks me again because the cookie or whatever is gone, but that isn't bad at all.
"If you did not attempt this action, please change your password immediately. To complete this process, enter the following special access code into the authorization dialog before trying to log in again:"
Nope, I actually don't get an email at all. I just see a green box on top of the Steam app asking me to verify my email address (as if it's new, but it isn't). Until I verify I can't make purchases. I have Steam Guard turned off as I got sick of having to authorise every time I used a new browser/PC. The prompt in the app doesn't state a reason, it just says I need to verify.
It's probably telling you that to get you to turn Steam Guard on, because otherwise people get their accounts hijacked and then Steam support spends a month back and forth in email trying to get it back. If it's coming from the Steam account itself, and you want to use Steam to buy stuff, I suggest you verify your email, turn on Steam Guard and leave it turned on.
Nothing else is running on that drive and the only files are the steamapps from those clients. Edit: Initially didn't have much luck through Google, but found something very interesting after a retry: Steam Skins - 2010 UI - Updated 17 August 2014.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00152.warc.gz
|
CC-MAIN-2021-17
| 2,033
| 4
|
https://community.canvaslms.com/t5/Canvas-Question-Forum/Creating-an-answer-sheet/td-p/237154
|
code
|
I uploaded a file/PDF document in Canvas for my students. It includes 100 multiple choice questions in addition to a bunch of fill-ins and labeling. I'd like to create an answer key on Canvas for the multiple choice questions instead of giving them a scantron, or having them upload the entire document back to me for grading. Is there an easy way to create a 100-question answer sheet without typing Answer: A, Answer: B, Answer: C, and Answer: D 100 times? Thank you - Kim
I'm not sure if this would meet your needs or not, but would you be open to using Canvas Quizzes instead of uploading a document with your questions/answer key? I don't know exactly what your use case is, but it would seem to me that Quizzes might be a better fit for your needs? Here are a couple short tutorials on what Quizzes and New Quizzes are like:
What are your thoughts? I'm not sure that the "labeling" questions you have would work in Canvas (maybe they would depending on how you asked the questions), but the other question types certainly would be possible in Canvas Quizzes.
Hope to hear back from you soon, Kim. Be well...stay safe.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337428.0/warc/CC-MAIN-20221003164901-20221003194901-00378.warc.gz
|
CC-MAIN-2022-40
| 1,123
| 4
|
https://mail.python.org/pipermail/ipython-dev/2006-November/001353.html
|
code
|
[IPython-dev] Suggestions for implementing parallel algorithms?
fullung at gmail.com
Mon Nov 13 19:23:48 EST 2006
On Thu, 09 Nov 2006, Brian Granger wrote:
> This sounds like a nice application of IPython1. Fernando and I have
> had a number of talks about this exact issue lately. One things I
> should say before all else. In spite of there being years of research
> in parallel computing and algorithms, there is basically no research
> on *interactive* parallel computing. I am not sure you need your
> application to be interactive, but if you do, at some level, you are
> in new territory.
My application probably doesn't fall into the interactive category at
present. But some speaker verification and identification systems could
definitely be used interactively.
> With that said, there are some things to keep in mind.
> First, it is important to realize that all the previous work done on
> parallel algorithm/application development still applies even though
> the result can be used interactively. For example, if you need to
> move data around between the ipython engines, you should still use MPI
> - and all the guidelines for using MPI still apply.
Okay. This makes sense.
> The new and interesting question is really "how and where do you want
> to interact with the parallel application as a human user?" There are
> many models of how you would want your application abstracted for
> interactive usage. And at some level, you may want to have the
> interactive API very different from the underlying computational model
> and parallel algorithm. You may want to hide the parallelism or you
> may find it better to show it explicitely in the API.
Interesting. I see what you're saying. For speaker verification
systems, you could have an interactive parallel application in the case
where you provide a system for a bunch of operators to investigate
phone calls with. The operators might be wearing trenchcoats. ;-)
> In my own work, I have tended to factor my application into units that
> can perform the basic computational tasks for both a serial and
> parallel versions of the code. I then use these as building blocks to
> build the parallel and serial version. If the low level components
> are factored well, the high level algorithm is typically very short
> and I don't mind maintaining both a serial and a parallel version.
What is still unclear to me is how you call these components? I'm
trying to avoid putting too much code in strings that get passed to
executeAll and the like. If you have something foo that does N things
on each node, how do set things up so that you don't have to write N
lines in N strings to N executeAll calls? Presumably one would want
to just write executeAll('foo(a,b,c)') where you wrote foo as a normal
function. Something like this:
In : f = lambda x: x
In : rc.pushAll(f=f)
In : rc.executeAll('f(10)')
However, lambda functions don't pickle:
Object cannot be serialized: f Can't pickle <type 'function'>:
attribute lookup __builtin__.function failed
or something like this:
In : rc.getIDs()
Out: [0, 1]
In : def f(x): return 10*x
In : rc.pushAll(f=f)
However, this nukes the engines:
exceptions.AttributeError: 'module' object has no attribute 'f'
If I distribute a module containing the functions I want to execute to
all the nodes beforehand, it might be able to unpickle the function on
the engines. Still have to try this.
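To make that concrete, something like this is what I have in mind (a
sketch only: the module name and the pullAll call are my guesses, the
rest follows the IPython1 calls used above):

# engine_funcs.py -- hypothetical module installed on every engine machine
def f(x):
    return 10*x

# On the client, import the shared module by name on each engine and call
# into it; nothing needs to be pickled since the code already lives there.
rc.executeAll('import engine_funcs')
rc.executeAll('result = engine_funcs.f(10)')
print rc.pullAll('result')  # e.g. [10, 10] with two engines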
I wonder if it would work with functions declared inside instance
methods. Where would the engines find these functions? My thinking here
is that you could write parallel algorithms like this:
def parallelop(self, data, controller):
Make sense? Any suggestions for doing this kind of thing?
> For many things I do like the scatterAll/executeAll/gatherAll style of
> computation - it is extremely lightweight and easy to implement. The
> one thing to be careful of though is to not use this approach when MPI
> is more appropriate. Testing the scaling of your application will
> quickly reveal if there are problems like this.
For my application, getting the data to the engines probably needs some
thought. Prior to training the world model with K-means and GMM EM I
need to spread out tens to hundreds of hours of speech between the engines.
Correct me if I'm wrong, but since the client and controller aren't
part of the MPI universe, I have to use something like scatterAll here?
On my client I would typically have a directory with a few hundred
files that I want to distribute to the engines. A quick 'n dirty
implementation could read all these files into memory and scatter them
to the engines.
This approach will run into problems when my dataset becomes bigger
than the memory on my client machine (reasonably likely) and is
probably going to be reasonably (very?) slow anyway.
Next idea: scatter filenames (or some other kind of data ID) and let
the engines query a central server for the data, maybe via HTTP or FTP
(or maybe do something that exposes a PyTables database to the
network). Alternatively, keep all the data on each engine machine. My
data is probably too big for this approach.
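Concretely, I imagine the filename-scattering variant looking roughly
like this (a sketch under heavy assumptions: the HTTP data server URL
is made up, and process() is assumed to already be defined on the
engines):

# On the client: scatter only the filenames, not the data itself.
rc.scatterAll('names', ['utt%03d.wav' % i for i in range(300)])
# On each engine: fetch the assigned files from the central server and
# process them locally.
rc.executeAll("import urllib\n"
              "data = [urllib.urlopen('http://dataserver/files/' + n).read() for n in names]\n"
              "results = [process(d) for d in data]")
out = rc.gatherAll('results')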
Next idea after that: since I have bunch of machines with disk space to
spare, I could spread out multiple copies of my dataset across these
machines. Then I could still scatter names and let the engines do some
kind of lookup to find machines that have the data it is looking for
and have it then get it from one of these at random. Basically the
previous idea + load balancing.
Next idea after that: in a separate process on each engine machine, run
something akin to a BitTorrent client that downloads files from a
torrent as the local engine needs them. When this starts up, the client
machine could seed the data until there is at least one copy of each
file distributed across the engine machines.
Next idea after that: figure out a way to prefer scattering of data to
kernels that already have the data available. I think the BOINC folks
call this concept locality scheduling:
Other idea: instead of all this network I/O, keep a subset of the data
on each engine machine and use locality scheduling to ensure that only
machines that have certain data get work related to that data scattered
to them. At this point, scatter probably isn't the right word anymore.
How fast this stuff is going to be... we'll see. :-)
Do you guys have large datasets to deal with? Any thoughts on doing this
kind of thing?
> >I'm hoping I can avoid this duplication. My first idea is to make something
> >like a LocalController that implements IPython1's IController interface in
> >a way that makes sense for single-node operation. This way, I can implement
> >the algorithm once in terms of IController operations, test it easily, and
> >by simply setting a controller property on an instance of the class
> >implementing the algorithm, decide whether it runs on a single node or in
> I had not thought of that before, but it does make sense. It is sort
> of similar to building objects that hide whether the object is being
> used in a parallel/serial context. It is surely worth trying this
> approach, but I am not sure how it would turn out in your case.
I'd like to explore this idea further if I can figure out the "right"
way for algorithms to tell the engines how to do a piece of work from
inside a method.
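For what it's worth, a toy version of what I mean (the real IController
interface is surely larger; this mimics only the calls used in this
thread):

class LocalController:
    """Hypothetical single-node stand-in for the remote controller."""
    def __init__(self):
        self.ns = {}  # one shared namespace plays the role of one engine
    def pushAll(self, **kwargs):
        self.ns.update(kwargs)
    def executeAll(self, code):
        exec code in self.ns  # Python 2 exec statement
    def pullAll(self, name):
        return [self.ns[name]]
    def scatterAll(self, name, seq):
        self.ns[name] = seq  # everything lands on the single "node"
    def gatherAll(self, name):
        return self.ns[name]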
> I don't know if this helps, but I would love to see what you end up
> trying and what you find most useful - I am curious about all these
> things myself.
Thanks for your inputs. Much appreciated.
More information about the IPython-dev
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00121.warc.gz
|
CC-MAIN-2022-33
| 7,512
| 127
|
https://cloudacademy.com/blog/azure-data-lake-store/
|
code
|
The Azure Data Lake Store service provides a platform for organizations to park – and process and analyse – vast volumes of data in any format. Find out how.
With increasing volumes of data to manage, enterprises are looking for appropriate infrastructure models to help them apply analytics to their big data, or simply to store them for undetermined future use. In this post, we’re going to discuss Microsoft’s entry into the data lake market, Azure Data Lake, and in particular, Azure Data Lake store.
What is a data lake?
In simple terms, a data lake is a repository for large quantities and varieties of both structured and unstructured data in their native formats. The term data lake was coined by James Dixon, CTO of Pentaho, to contrast what he called “data marts”, which handled the data reporting and analysis by identifying “the most interesting attributes, and to aggregate” them. The problems with this approach are that “only a subset of the attributes is examined, so only pre-determined questions can be answered,” and that “data is aggregated, so visibility into the lowest levels is lost.”
A data lake, on the other hand, maintains data in their native formats and handles the three Vs of big data (Volume, Velocity and Variety) while providing tools for analysis, querying, and processing. A data lake eliminates all the restrictions of a typical data warehouse system by providing unlimited space, unrestricted file size, schema on read, and various ways to access data (including programming, SQL-like queries, and REST calls).
With the emergence of Hadoop (including HDFS and YARN), the benefits of data lake – previously available only to the most resource-rich companies like Google, Yahoo, and Facebook – became a practical reality for just about anyone. Now, organizations who had been generating and gathering data on a large scale but had struggled to store and process them in a meaningful way, have more options.
Azure Data Lake
Azure Data Lake is the new kid on the data lake block from Microsoft Azure. Here is some of what it offers:
- The ability to store and analyse data of any kind and size.
- Multiple access methods including U-SQL, Spark, Hive, HBase, and Storm.
- Built on YARN and HDFS.
- Dynamic scaling to match your business priorities.
- Enterprise-grade security with Azure Active Directory.
- Managed and supported with an enterprise-grade SLA.
Azure Data Lake can, broadly, be divided into three parts:
- Azure Data Lake store – The Data Lake store provides a single repository where organizations upload data of just about infinite volume. The store is designed for high-performance processing and analytics from HDFS applications and tools, including support for low latency workloads. In the store, data can be shared for collaboration with enterprise-grade security.
- Azure Data Lake analytics – Data Lake analytics is a distributed analytics service built on Apache YARN that complements the Data Lake store. The analytics service can handle jobs of any scale instantly with on-demand processing power and a pay-as-you-go model that’s very cost effective for short-term or on-demand jobs. It includes a scalable distributed runtime called U-SQL, a language that unifies the benefits of SQL with the expressive power of user code (a tiny U-SQL example follows this list).
- Azure HDInsight – Azure HDInsight is a full stack Hadoop Platform as a Service from Azure. Built on top of Hortonworks Data Platform (HDP), it provides Apache Hadoop, Spark, HBase, and Storm clusters.
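To give a flavour of U-SQL, here is the canonical extract-and-output script (the paths and schema below are illustrative, not taken from a real job):

@searchlog =
    EXTRACT UserId int,
            Query  string
    FROM "/input/SearchLog.tsv"
    USING Extractors.Tsv();

OUTPUT @searchlog
    TO "/output/SearchLog_out.csv"
    USING Outputters.Csv();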
We’ve already been introduced to HDInsight in this series. Now we will discuss Azure Data Lake Store…which is still in Preview Mode.
Azure Data Lake Store
According to Microsoft, Azure Data Lake store is a hyper-scale repository for big data analytics workloads and a Hadoop Distributed File System (HDFS) for the cloud. It…
- Imposes no fixed limits on file size.
- Imposes no fixed limits on account size.
- Allows unstructured and structured data in their native formats.
- Allows massive throughput to increase analytic performance.
- Offers high durability, availability, and reliability.
- Is integrated with Azure Active Directory access control.
Some have compared Azure Data Lake store with Amazon S3 but, beyond the fact that both provide unlimited storage space, the two really don’t share all that much in common. If you want to compare S3 to an Azure service, you’ll get better mileage with the Azure Storage Service. Azure Data Lake store, on the other hand, provides an integrated analytics service and places no limits on file size. Here’s a nice illustration:
(Image Courtesy: Microsoft)
Azure Data Lake store can handle any data in their native format, as is, without requiring prior transformations. Data Lake store does not require a schema to be defined before the data is uploaded, leaving it up to the individual analytic framework to interpret the data and define a schema at the time of the analysis. Being able to store files of arbitrary size and formats makes it possible for Data Lake store to handle structured, semi-structured, and even unstructured data.
Azure Data Lake store file system (adl://)
Azure Data Lake Store can be accessed from Hadoop (available with an HDInsight cluster) using the WebHDFS-compatible REST APIs. However, Azure Data Lake store introduced a new file system called AzureDataLakeFilesystem (adl://). adl:// is optimized for performance and available in HDInsight. Data is accessed in the Data Lake store using a URI of the form adl://<data_lake_store_name>.azuredatalakestore.net/<path>.
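For instance, from an HDInsight cluster you could list a folder with the standard HDFS client (the store name below is made up):

hdfs dfs -ls adl://mydatalakestore.azuredatalakestore.net/clusters/data/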
Azure Data Lake store security:
Azure Data Lake store uses Azure Active Directory (AAD) for authentication and Access Control Lists (ACLs) to manage access to your data. Azure Data Lake benefits from all AAD features including Multi-Factor Authentication, conditional access, role-based access control, application usage monitoring, security monitoring and alerting. Azure Data Lake store supports the OAuth 2.0 protocol for authentication within the REST interface. Similarly, Data Lake store provides access control by supporting POSIX-style permissions exposed by the WebHDFS protocol.
Azure Data Lake store pricing
Data Lake Store is currently available in US-2 region and offers preview pricing rates (excluding Outbound Data transfer):
Azure Data Lake is an important new part of Microsoft’s ambitious cloud offering. With Data Lake, Microsoft provides service to store and analyze data of any size at an affordable cost. In related posts, we will learn more about Data Lake Store, Data Lake Analytics, and HDInsight.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671239.99/warc/CC-MAIN-20191122042047-20191122070047-00015.warc.gz
|
CC-MAIN-2019-47
| 10,634
| 61
|
https://gis.stackexchange.com/questions/142153/formatting-tags-break-stacked-feature-linked-annotations-with-maplex
|
code
|
I have a stacked feature-linked annotation, with one of the fields using a formatting tag. The annotation is fine when it is created; however, all the lines collapse onto one line when a change is made to the linked feature. A related bug is logged on Esri's website (http://support.esri.com/en/bugs/nimbus/TklNMDc4OTQ4), but in my case the stacked label collapses with any formatting tag. Here is an example of the annotation expression:
def FindLabel ( [field1], [field2], [field3], [field4] ):
    return [field1] + " " + "<FNT size='6'>" + [field2] + "</FNT>" + " " + [field3] + " " + [field4]
edit: Esri support has recognized this as a bug.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987769323.92/warc/CC-MAIN-20191021093533-20191021121033-00259.warc.gz
|
CC-MAIN-2019-43
| 631
| 3
|
https://management.curiouscatblog.net/2011/01/24/supporting-free-and-open-source-software/
|
code
|
Gabriel Weinberg (founder of the great Duck Duck Go search engine) proposed starting a FOSS Tithing movement. Many benefit greatly from free and open source software like: Ruby on Rails, Linux (my favorite version Ubuntu), WordPress, Apache, Ruby, Perl, Nginx, Phusion Passenger. As well as other related efforts Electronic Frontier Foundation, creative commons, PLoS.
If we can get people to contribute to this idea that would be great. I have had curiouscat.com give some money to continue the development of the open source software we use, and the related efforts.
The contribution of time is often even more important (and, for some people, easier). Those individuals and organizations that give back in this way are key to the community's benefits. Open source software is a great example of systems thinking and taking a broader view of how to succeed. And for managers interested only in their own organization, allowing programmers to contribute to open source projects can be very beneficial: it builds their intrinsic motivation by letting them contribute to something they care about, and they learn through such participation.
My goal is to give back more. But so far that goal has been held back by my failure to achieve another goal: increasing revenue at curiouscat.com. I am going to make a new effort to have curiouscat.com give back more going forward.
I get so much from great open source software like Ruby, Rails, Ubuntu, Apache, MySQL along with lots of less well known software, that it is important to me to contribute to sustaining the environment that will continue to produce such great software.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817002.2/warc/CC-MAIN-20240415142720-20240415172720-00617.warc.gz
|
CC-MAIN-2024-18
| 1,623
| 5
|
https://practical.li/engineering-playbook/os/shell/
|
code
|
Operating System Shell
A shell is a computer program that exposes an operating system's services to a human user or other programs. In general, operating system shells use either a command-line interface (CLI) or graphical user interface (GUI), depending on a computer's role and particular operation. It is named a shell because it is the outermost layer around the operating system.
Command-line shells require the user to be familiar with commands and their calling syntax, and to understand concepts about the shell-specific scripting language (for example, bash), while graphical shells place a low burden on beginning computer users and are characterized as being easy to use, yet most GUI-enabled operating systems also provide CLI shells, normally for performing advanced tasks.
Command Line Shell
Define aliases to optimise commands and create useful default flags when calling commands.
Create a shell-aliases file to define aliases to be used with any command-line shell.
# Shell aliases shared across all shells (zsh, bash)
# Neovim Aliases for multiple configurations
alias astro="NVIM_APPNAME=astronvim nvim"
# Neovide alias with AstroNvim configuration
alias neovide="NVIM_APPNAME=astronvim neovide"
# Shell history
# edit entire history
alias edit-shell-history="fc -W; astro \"$HISTFILE\"; fc -R"
# edit previous command in history
alias edit-last-command="fc -e astro -1"
Source the shell aliases from the shell configuration files
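For example (a minimal sketch, assuming the aliases live in ~/.config/shell-aliases; adjust the path to wherever the file actually lives):

# in ~/.bashrc or ~/.zshrc
[ -f "$HOME/.config/shell-aliases" ] && source "$HOME/.config/shell-aliases"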
GNOME, KDE, and Regolith are examples of graphical desktop shells.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297295329.99/warc/CC-MAIN-20240425130216-20240425160216-00743.warc.gz
|
CC-MAIN-2024-18
| 1,502
| 18
|
https://cwiki.apache.org/confluence/display/OLTU/OAuth+2.0+Resource+Server
|
code
|
Oltu Resource Server
In some cases the OAuth Authorization Server and Resource Server are the same application. The OAuth 2.0 specification logically separates these two entities, and Oltu does too.
The Oltu RS module helps you handle client requests to access an OAuth-protected resource.
Usually, it is a good idea to perform the Oltu RS module logic in a Java filter or a JAX-RS interceptor.
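For illustration, a minimal servlet filter could look like the sketch below (OAuthAccessResourceRequest and ParameterStyle come from the Oltu RS module; the TokenStore lookup is a placeholder for your own token validation):

import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.oltu.oauth2.common.exception.OAuthProblemException;
import org.apache.oltu.oauth2.common.exception.OAuthSystemException;
import org.apache.oltu.oauth2.common.message.types.ParameterStyle;
import org.apache.oltu.oauth2.rs.request.OAuthAccessResourceRequest;

public class OAuthProtectionFilter implements Filter {
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            // Parse the request and pull the bearer token from the Authorization header.
            OAuthAccessResourceRequest oauthRequest = new OAuthAccessResourceRequest(
                    (HttpServletRequest) req, ParameterStyle.HEADER);
            String accessToken = oauthRequest.getAccessToken();
            // Hypothetical check against your own token store.
            if (!TokenStore.isValid(accessToken)) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
                return;
            }
            chain.doFilter(req, res);
        } catch (OAuthSystemException | OAuthProblemException e) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }
    public void init(FilterConfig cfg) {}
    public void destroy() {}
}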
We are currently working on a more thorough solution. Keep updated!
If you need more advanced examples, the integration-tests module shows you all the possibilities provided by the Oltu API.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813803.28/warc/CC-MAIN-20180221202619-20180221222619-00104.warc.gz
|
CC-MAIN-2018-09
| 548
| 6
|
https://bot-jobs.com/job/665137-ai-chatbot-engineer-dice
|
code
|
AI Chatbot Engineer
- Experience building AI-powered chatbots from scratch using AI, ML, and NLP technologies. Must have knowledge of RASA, cognitive services, and other chatbot development platforms like Google Dialogflow (an added advantage).
- Experience with Nodejs development (MERN stack preferred)
- Experience in algorithms and conversational AI
- Clean coding skills and best practices
- Fundamental knowledge of and experience with MySQL or other databases
- Familiar with the product/software development lifecycle, agile development processes and DevOps approaches
- Strong multi-language background and experience working with programming languages such as Python, Java, Ruby, etc.
- Strong communication skills (verbal and written)
- Good understanding of LSTM and Transformer networks (a plus)
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00589.warc.gz
|
CC-MAIN-2022-40
| 858
| 8
|
https://roboleary.net/web/2017/07/30/animation-flipbook.html
|
code
|
A useful technique to have at one’s disposal would be a quick way to create a simple, responsive animation that looks really good at different resolutions. So, I will not discuss animated GIFs!
As a starting point, we could emulate the effect of a flipbook. When I was a kid, I used a block of post-it notes to create a series of scenes, starting from the top, and then you could flip through them to "run" the animation! So, it's a series of overlaid images with different poses; cycle through the images to create the animation. What is the best way to do this on the web?
The quality of an SVG will not degrade on resizing, so it is a good choice.
We can use opacity to fade one image in, and fade the other out. This can be achieved easily with CSS animation.
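A minimal sketch of the cross-fade (class names and timing are illustrative; two absolutely positioned images share one @keyframes block run in opposite directions):

.frame  { position: absolute; top: 0; left: 0; width: 100%; }
.pose-a { animation: fade 2s infinite alternate; }
.pose-b { animation: fade 2s infinite alternate-reverse; }
@keyframes fade {
  0%, 45%   { opacity: 1; }
  55%, 100% { opacity: 0; }
}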
Here is a simple example showing Donald Trump yawping!
This technique is not very suitable for more than a few poses (frames): each frame requires its own @keyframes timing, and these need to be coordinated with each other to make a smooth transition for the animation. An animation library should be used for anything more complex than that!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145897.19/warc/CC-MAIN-20200224040929-20200224070929-00107.warc.gz
|
CC-MAIN-2020-10
| 1,115
| 6
|
https://www.beamng.com/threads/beamng-paradise.48197/page-13#post-785849
|
code
|
Discussion in 'Terrains, Levels, Maps' started by falefee, Nov 5, 2017.
oh boy, I don't often get hyped for a mod, but this one
After this, would it be possible to port the maps from Burnout 3? Is it even possible?
I want the awesome rating back too, got handfuls of those.
I'm planning on making a BIG pack with (most hopefully) a bunch of Burnout cars (proof is on this very own thread), skins for BeamNG cars, the map of course, Scenarios and maybe even a Campaign? (If that evolves), Super Jumps and Mega Jumps, and all of this brought together by the community. I can make exceptions on how its distributed, but you may have to ask personally. The map currently uses about 2GB, estimate of around 4GB on the final product. Cars won't be the best of detail (as the game is 10 years old...), but should be awesome to drive in.
I have gotten requests, believe it or not. Nobody I know of is interested in researching Burnout 3's graphics format, but I'll let you know if its a possibility! (something BIGGER is coming after this, from a Criterion game )
4GB is less than I expected, I guess I should be able to run this
Running normal textures should get you fine. I'm gonna end up recreating most because 512x512 for a next gen game isn't very nice..
GIF (yes, the start is black so CLICK CLICK CLICK)
At this rate now, with my new GPU, I'm waiting for this to come out, great work!
Mingo, I'm curious about something. Are you planning on leaving the skybox/ambient color like that? Paradise had a sort of faintly green-brown filter over it whereas Beam has a bright blue ambient filter as default:
IDK, I just feel like it was an important part of the experience. The ambient "grimy" sort of color really added to the feeling of a gritty industrial setting filled with polluted air. Sort of like how GTA 4 did it with the "sickly green" ambience:
Of course, this is just a suggestion.
Thank you! Love your work too!
Yes, I am planning to recreate it with ReShade. BeamNG offers very little standard effects and ReShade is very easy to use and tweak.
idk i made a site like @DevilisticJosh xd http://beamngparadise.tk/
God this was the first game I got with my ps3 on my 10th birthday along with NFSS and that video with the overlay and the drift score just took me back. Just stumbled upon this thread so if you need any additional help like texturing or modeling I'm always happy to lend a hand. Just picked up blender not too long ago but I've used photoshop for years so I know my way around it pretty well.
Awesome! We'll need all the help we can get
Better just mention that "UI Concept" video was all edited and matched to real game footage, not actually UI apps yet.
Edit: I'm waiting for people to record...
This is pretty amateur (clearly not a 3D Artist), but I thought someone would like this.
Any idea on when a release is coming?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00426.warc.gz
|
CC-MAIN-2022-27
| 2,918
| 24
|
https://community.ortussolutions.com/t/event-handler-helper/1393
|
code
|
Hi coldbox people,
I have some methods/functions that several event handlers share, and I
wondered what the best place to put them is.
The functions I am talking about get a set of queries that populate
drop-down boxes; some views use the same drop-down boxes, and so I
want different event handlers to be able to call these methods.
I do not think these methods should be in my model, they seem to
belong in the controller part.
So I was thinking of making some CFCs in my handlers directory that do
this but are not event handlers. I was thinking about making a
LookupHelper.cfc in the root of my handlers directory.
Is this a good idea, or does coldbox have something special for this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662527626.15/warc/CC-MAIN-20220519105247-20220519135247-00720.warc.gz
|
CC-MAIN-2022-21
| 689
| 12
|
https://www.itweb.services/tutorials/linux-guides/installing-htmly-on-ubuntu-16-04-with-lamp/
|
code
|
Table of Contents
HTMLy is a blogging engine that works without a database. It allows you to create and manage content with flat files. In this guide, we will go over the steps you need to take to install HTMLy on your IT Web Services VPS.
- IT Web Services VPS with LAMP stack on Ubuntu 16.04
In most LAMP stack configurations, website files are stored in the directory
/var/www/html/. Let’s navigate to that folder and remove the placeholder files. Log in with SSH and then run the following commands.
cd /var/www/html/
rm -rf index.php logo.png
Next, check for the latest version of HTMLy. At the time of writing, it is v2.7.4. In the following URL, replace
VNUMBER with the version number you would like to use:
https://github.com/danpros/htmly/releases/download/VNUMBER/installer.php
Let’s download the HTMLy installer file and give our server user the correct permissions with the following commands:
wget https://github.com/danpros/htmly/releases/download/v2.7.4/installer.php
chown www-data:www-data -R .
If you open up your web browser now and go to
http://YOUR_IP_ADDRESS/installer.php, you will see a warning. Let’s fix that first before going further with the installation.
Edit the Apache2 configuration file that contains the <Directory /var/www/html/> block (commonly /etc/apache2/apache2.conf) to change the way Apache handles URLs.
Find the following part. It should be near the beginning of the file.
<Directory /var/www/html/>
    Options -Indexes
    Require all granted
</Directory>
Then, replace that part with the following text:
<Directory /var/www/html>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>
At last, enable mod_rewrite:
sudo a2enmod rewrite
Then, restart the Apache service:
service apache2 restart
When you visit
http://YOUR_IP_ADDRESS/installer.php again, you will see that the error message is gone and you can follow the instructions for installing HTMLy. Once completed, you will be logged in and ready to create your first blog post.
Do you need help setting this up on your own service?
Please contact us and we’ll provide you the best possible quote!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102637.84/warc/CC-MAIN-20231210190744-20231210220744-00566.warc.gz
|
CC-MAIN-2023-50
| 1,977
| 25
|
https://sonnati.wordpress.com/category/video/
|
code
|
Those who follow my blog know that I specialize in the optimization of video streaming. It's creative and challenging work, because efficiency in streaming is not only a matter of choosing the best codecs and protocols (you will probably have a very limited set to choose from); it's more important to have an open mind, a genuine passion for research and a devotion to quality.
In short: the tool you use matters less than the optimized methods, expertise and vision that guide it.
With a bit of research and original approaches it is possible to achieve greater benefits from optimizing existing codecs than from adopting new ones, especially when you optimize codecs, streaming protocols and playback strategies synergistically.
This is even more true today because we are experiencing a kind of "stall" in the adoption of HEVC, the current state-of-the-art codec, because of uncertainties linked to the ambiguity of multiple patent pools and to licensing costs. If HEVC continues to be hampered, the alternative will be to use AV1, but it is still under development and will require many years to spread across a fragmented market. I'm sure this scenario will change, sooner or later, but that's not an alibi to wait and continue to deliver unoptimized H.264 video (or VP9), especially since the benchmark of the market, Netflix, is anything but motionless!
Another benefit of this kind of synergic optimization is that it can largely be applied to different codecs, so when a new, efficient codec becomes available, the same techniques can be adapted to the newcomer, and perhaps new specific strategies will be invented.
I've already spoken about "adaptive encoding" logic in this blog post. Here I only want to summarize some trending techniques for optimizing video streaming services, techniques that I've applied in recent years and that I continuously try to improve in my consultancies.
It has many names. Netflix calls it "Per-Title Encoding"; others call it "Adaptive Encoding" or "Content-Aware Encoding". I've discussed it extensively here. This approach to optimization has gained traction after Netflix's article about Per-Title Encoding, but it's a technique yet to be fully exploited.
Streaming services are quite different from one another, and there are various ways to set up a Complexity-Aware encoding pipeline. There are simple ways to estimate complexity, but also more complex metrics that take into account multiple variables and models of the HVS (Human Vision System). With such a refined approach it is possible to control the level of quality delivered by the encoder and therefore optimize the encoding for a specific purpose.
Such an approach can be implemented inside a codec (in-loop optimization), or applied externally by performing accurate analysis ahead of the final encoding (usable only in VOD encoding).
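As a rough illustration of the external (VOD) variant, a complexity probe could look something like this (only a sketch: the fixed-CRF probe, the thresholds and the ladder values are all assumptions; ffmpeg/ffprobe are the standard CLI tools):

import subprocess

def probe_complexity(src):
    # Fast, fixed-CRF encode of the first 60 seconds; the resulting bitrate
    # is a crude complexity proxy (high bitrate at fixed CRF = complex title).
    subprocess.run(["ffmpeg", "-y", "-t", "60", "-i", src,
                    "-c:v", "libx264", "-preset", "veryfast", "-crf", "23",
                    "-an", "probe.mp4"], check=True)
    out = subprocess.run(["ffprobe", "-v", "error",
                          "-show_entries", "format=bit_rate",
                          "-of", "default=nw=1:nk=1", "probe.mp4"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout)

def pick_ladder(probe_bitrate):
    # Illustrative thresholds: simpler titles get a cheaper bitrate ladder.
    if probe_bitrate < 1500000:
        return [(1080, 2200), (720, 1200), (480, 600)]  # (height, kbps)
    return [(1080, 4500), (720, 2400), (480, 900)]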
The more complex implementation is probably the one that uses Machine Learning to predict optimal parameterizations to achieve the desired result (I’ve worked on this for a client in the last 6 months. More on this in the coming posts…)
This is a variation of the former. The logic is essentially the same, but it is applied not to the encoding (a "standard" encoder can be used) but at the streaming-protocol level. The manifests can be manipulated to obtain specific performances according to an analysis of the "cross" qualities you have in the entire ABR set.
For example, if a segment in an HLS set has too high a quality (measurable with traditional objective metrics or with ML-guided metrics), it is possible (but not exactly easy) to manipulate the manifest so as to alter how the player navigates across the renditions.
This approach requires accurate tuning of the encoder to produce the desired range of qualities, and the delivery is optimized at the protocol/manifest level. Not as straightforward as the former, but still interesting.
I'm not sure, but maybe I've forged a neologism! Perception-Aware Streaming. This refined optimization technique is something I've played with for a while. The "streaming" in the name indicates that the technique involves both encoding and delivery. "Perception-Aware" indicates that the encoding and the delivery are performed taking into account perceptual phenomena, and in particular the angular resolution of the HVS.
Again, I've already introduced the concept in a previous post. Essentially, in this technique we create a superset of renditions: one subset is calibrated for big screens, another for tablets/laptops, and a final subset for small-screen devices.
Leveraging a simplified model of the HVS, angular resolution and a known minimum distance from the screen, it is possible to conceal artefacts and provide a higher sense of detail while at the same time reducing the average bitrate, especially on smaller screens. We can mix that with a variation of Complexity-Aware encoding to obtain a highly optimized encoding pipeline.
It's an interesting topic and I'd like to write more about it if I find some time. I may well write whitepapers on it for a couple of my clients in the coming months, since I'm going to apply this technique again, in a more evolved form.
Optimizing encoding with the aforementioned techniques very often leads to VBR encoding (capped, controlled in more sophisticated ways, or sometimes even unconstrained). Such files require dedicated heuristics to properly execute Adaptive Bitrate Streaming.
In recent times, heuristics more optimized and efficient than the traditional bandwidth-based ones have emerged. Buffer-based or hybrid heuristics allow much better exploitation of bandwidth and more resilience to bandwidth fluctuations, and they can cope easily with VBR renditions. Stay tuned to know more about it.
Taken alone, each of those optimization schemes provides interesting benefits, but the real gain comes when you optimize them synergistically. These strategies are strictly correlated and can strengthen each other and enable new levels of efficiency.
For example, complexity-aware encoding and perception-driven streaming are more efficient when you can encode in VBR, but VBR encoding requires a player with a custom, optimized heuristic; in return, a custom heuristic can not only cope with VBR but can also implement more efficient ABR handling, contributing greatly to the maximization of QoE.
In the previous post of this two-part series, I analyzed the technical features of the VP9 codec and concluded that, technically speaking, VP9 has the basis to compete with HEVC in terms of encoding efficiency.
But, you know, theory is a different thing from reality, and in video encoding a big part of the final efficiency lies in the encoder implementation more than in the codec specification. In this regard VP9 is no exception, and what I see from my tests is that vpxenc (the open source, command-line encoder provided by Google) is not yet fully mature and optimized for every scenario. I'll come back to this distinction below.
The VP9 specification has many features that can be used to enhance perceptual-aware encoding (like "segmentation", which modulates quantization and filters inside frames according to the perception of different areas of each frame). But those features are not yet used in vpxenc, and this is clearly visible in the results.
At the beginning of 2015 I evaluated the performance of several H265 encoders for my clients and published a quick summary of the advantages and problems I found in the HEVC encoders of that time compared to optimized H264. The main problem that emerged in that evaluation was the inefficiency of "Adaptive Quantization" and other psychovisual techniques implemented in the encoders under test. The situation has partially changed for HEVC encoders during the last year (thanks to better psychovisual encoding, especially for x265), but grain and noise retention, especially in dark areas, is always a challenge for codecs exploiting big "transformations" like H265 and, indeed, VP9.
Vp9 today shows the same inefficiencies of HEVC 1 years and half ago. It is quite good in handling motion related complexity, thanks to advanced motion estimation and compensation and reconstructs with high fidelity low and medium spatial frequencies, but has difficulties in retaining very high frequencies. Fine film grain disappears even at medium bitrates and the “banding” artifact is very visible in flat areas, gradients and dark areas even at high bitrates. In this regard H264 is still much better, at least at medium-high bitrates. Those kinds of artifact are quite common on Youtube because they are using now VP9 everytime they can, so try by yourself a 1080p or 2160p video on Chrome and take a look at gradients and shadows.
The sad thing is that common quality metrics like PSNR, SSIM (but also the more sofisticated VQM) are more happy with a flat encoding than with a psyco-visually pleasant, but not exact – encoding, and at the end, VP9 may be superior in PSNR or SSIM to H264/H265 even in a comparison like that of Picture 2 below where is very evident the banding or “posterization” effect.
VP9 profile 2 – 10bit per component
Until now I’ve spoken about traditional 8-bit-per-component encoding in H.264, H.265 and VP9. But vpxenc also supports 10-bit-per-component encoding, known as VP9 profile 2.
Even if your content is 8-bit and everything remains BT.709 compliant, several studies have demonstrated that 10-bit encoding is always capable of better quality/bitrate ratios thanks to higher internal accuracy. In particular, the benefits are well visible in the accuracy of gradients and dark areas. See this example of VP9 8-bit vs 10-bit:
In the picture above we can see the better rendering of soft gradients when encoding at 10 bits, even if the source is 8-bit. Grain (a high-frequency, low-power signal) is still not retained compared to the source, but banding is greatly reduced. Note also that with VP9 profile 0 we need to increase the bitrate well above 3Mbps to get a good encoding of gradients (for 1080p), while at only 1Mbps the result is already sufficient when using profile 2.
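As a purely illustrative sketch (not a tested recipe: flags and 10-bit support depend on how your FFmpeg/libvpx build was compiled), a profile 2 encode can be requested simply by asking the libvpx-vp9 encoder for a 10-bit pixel format:

ffmpeg -i input.mp4 -c:v libvpx-vp9 -pix_fmt yuv420p10le -b:v 1M -an output_10bit.webm

Here the 10-bit 4:2:0 pixel format implies profile 2, and the 1Mbps target is just the example value discussed above.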
The superiority of 10-bit encoding has always been valid for H.264 too (High10 profile), so why did 10 bits start to gain momentum only with HDR and not before?
The answer is the lack of players on consumer devices. Let’s remember that H.264 became the standard for internet video relatively early only because Adobe decided to embed (at its own expense) a decoder inside Flash Player 9 (2007). This enabled a billion desktops to play back the Baseline, Main and High AVC profiles. Few know that it was originally supposed to support High10 as well, but a bug ruined the opportunity to actually use this capability.
Apart from this missed opportunity, the H.264 decoders on modern browsers, mobile devices, TVs and STBs cannot decode the H.264 High10 profile, and the same is true for VP9.
Where is VP9 available now?
Today VP9 is supported in the latest Chrome, Firefox and Opera browsers (and Edge in preview) on desktop (PC and Mac), and in Android from version 4.4 on (software or hardware decoding depending on the device). It is also available on an increasing number of connected TVs, but all the current (significant) decoders support only VP9 profile 0, so 8-bit.
The same problem holds for H.265: on the mobile devices that support it, you can only deliver 8-bit H.265, but in this case it is also true that the large majority of 4K TVs support the HEVC Main10 profile as well.
So, when is it convenient to use VP9?
The “banding artefact” problem is directly proportional to the size of the display. It is irrelevant on small displays like those of smartphones and tablets; on laptops it starts to become visible, and it is pretty bad on big TVs.
So, concluding, I think that today VP9 is an interesting option for everyone who wants:
– The maximum quality/bitrate ratio on desktop, even with some compromises in terms of quality. HEVC decoding will probably not appear on desktop for a long time, so VP9 is the only viable improvement over H.264. The live streaming use case can better accommodate the compromises.
– High efficiency on Android with a wide support base (Android >4.4). On an old, 100$ Android phone I have, VP9 decoding works while HEVC does not. This is an interesting option for developing markets where bandwidth is scarce and Android has a bigger base than iOS.
If the current situation doesn’t change, I doubt that players like Netflix will deliver high-quality content on desktop or TV using VP9 profile 0, especially for 4K. In fact, David Ronca of Netflix has said that they are evaluating VP9 mainly to lower the access threshold for mobile devices (they already use HEVC for HDR-10).
But fortunately the scenario is probably about to change quickly, if it’s true that Youtube is planning to deliver HDR (= 10 bits) with VP9 during the summer. This means that TVs with VP9 profile 2 decoding capabilities are becoming a reality, and this should open the way for profile 2 on desktop browsers as well. In this case (and I’m optimistic), VP9 has a really good chance of definitively becoming the successor of H.264, at least for internet video on desktop and Android.
It remains to be seen what Apple will decide to do. In the meantime I’m starting to push VP9 in my strategies because, frankly, I think their choice is irrelevant here: if we want to optimize a video delivery service, it is increasingly clear that we will have to optimize for all three codecs.
A technical primer
VP9 is a modern video codec developed by Google as the successor of VP8. While VP8 was aimed at offering an open alternative to AVC (aka H.264), VP9 challenges the newer HEVC (aka H.265). With VP9, Google follows the same “open” codec model used for VP8 (whether it is really open and free from patent-related threats is still a subject of debate), and this theoretically makes VP9 an interesting alternative to HEVC, which is burdened by unclear and unsettled claims from multiple patent holders and patent pools.
The VP9 specification was frozen in June 2013, but only recently has it started to attract the attention of players who want to optimize video distribution (Youtube has been the only big adopter over the last year, but now Netflix is evaluating it too). This is because the VP9 and HEVC ecosystems have finally reached a minimum level of maturity, and it is now possible to do evaluations and comparisons with a sufficient level of confidence.
In this short series of blog posts I analyze VP9 and try to understand if it really deserves attention and why. In this first part we will take a look at the technical specifications compared to HEVC (analyzed in this previous post), and in the second part I’ll analyze the actual performance, the limits and the contexts in which it is possible to use VP9 as a valid alternative to AVC or HEVC.
VP9 subdivides the picture into “super blocks”. Similarly to HEVC, in VP9 super blocks can be recursively divided into smaller blocks down to 4×4. Unlike HEVC, which can subdivide only into square sub-partitions (32×32, 16×16, 8×8), VP9 can also use non-square partitions like 32×16, 8×16 and so on (the use of a rectangular partition stops subdivision in that quad-tree branch). Most decisions are taken at the 8×8 level (“skip” signaling, for example), and 4×4 is a special case of 8×8. Prediction mode, reference frame, MVs and transform types are specified at block level.
Like VP8, VP9 uses an 8-bit arithmetic coding engine known as the bool-coder. It uses a static per-frame statistical model, as opposed to an adaptive statistical model like the CABAC used in AVC/HEVC. For each frame, the most convenient statistical model is chosen from a pool of four.
Similarly to H.265, VP9 uses 4 transform sizes: 32×32, 16×16, 8×8 and 4×4. Transforms are integer approximations of the DCT (Discrete Cosine Transform) or DST (Discrete Sine Transform); a mix of the two is used depending on the frame type and transform size. Coefficients are scanned with particular patterns (different from the zig-zag patterns of the H.26x codecs, but with the same logic).
VP9 uses 4 scaling factors: a pair for luma DC and AC coefficients, and a pair for chroma DC/AC. The set of quantizers is fixed at frame level, so there is no block-level QP adjustment, contrary to AVC/HEVC (but the optional “segmentation” feature can achieve the same effect as adaptive quantization).
VP9 also supports a special lossless mode that uses only a Walsh transform on 4×4 blocks.
Intra prediction is a bit less complex than what is offered by HEVC. Intra prediction acts on transform blocks, and there are 8 directional prediction modes and 2 non-directional ones, compared to the 35 modes of HEVC.
VP9 uses 1/8th-pel motion compensation (double the precision of AVC). A novel feature is the possibility to use a normal, smooth or sharp 1/8th-pel interpolation filter (plus bilinear). The version of the filter can be changed at block level.
Because of patents, VP9 doesn’t use bidirectional motion estimation and compensation, so each block normally has only a single forward motion vector. However, VP9 uses “compound prediction”, where there are two motion vectors and the two predictions are averaged together. To avoid patents, compound prediction is enabled only on non-visible frames (commonly referred to as “AltRef”). AltRef frames can be “constructed” during decoding; they are not displayed but can be used later as references. Since it’s possible to anticipate a future frame in an AltRef and use it as a reference in compound mode, VP9 officially has no B-frames, but in fact it has something completely equivalent.
Motion vectors in a frame can point to one of three possible reference frames, usually named Last, Golden and AltRef. The reference frame to be used is signaled at 8×8 granularity. The decoder holds a list of 8 reference frames (slots) from which the Last, Golden and AltRef references are chosen at frame level. After decoding, the current frame can (optionally) substitute one of the 8 slots in the pool. An interesting feature of VP9 is the possibility to scale frames down during encoding (not on I-frames); inter predictors and reference frames are scaled accordingly.
Motion vector prediction is similar in complexity to HEVC’s. A 2-entry list of predictors is built during encoding and decoding: the first predictor is based on surrounding blocks, the second on the previous frame. If the list is empty, the vector [0,0] is used. So for each block the bitstream can signal to use:
– the first predictor plus a delta
– the first predictor as is
– the second predictor as is
– simply the motion vector [0,0]
There are 3 possible loop filters of different strengths. VP9 performs a flatness test at block boundaries and, if the result is higher than a threshold, one of the filters is applied to conceal blockiness.
Segmentation groups together blocks with similar characteristics. It is possible to change some encoding tools at group level. This feature is meant to enable encoding optimizations (including psychovisual optimizations) and requires active support in the encoder.
Standard VP9 (profile 0) supports only an 8-bit 4:2:0 color mode, while the (optional for hardware) profile 1 also supports 4:2:2 / 4:4:4 and optional alpha. In August 2015 Google released a new version of the reference encoder supporting the new profile 2 (10/12-bit 4:2:0) and profile 3 (10/12-bit 4:2:2 / 4:4:4 + alpha). Profile 2 is aimed at supporting HDR video on Youtube (expected for summer 2016).
VP9 compared to HEVC
From a technical point of view, VP9 appears to be very near to HEVC in potential efficiency. The actual performance depends on the efficiency of the real encoders, but VP9 has the potential to reach (almost, see below) the same performance as HEVC.
VP9 is a bit sub-par in terms of intra-frame prediction (fewer modes) and entropy coding (static tables vs adaptive). HEVC also appears to have a higher number of modes and small strategies to reduce the cost of syntax and signaling as well as residuals; on the other hand, VP9 has some interesting potential for psychovisual optimizations and rate control thanks to segmentation and adaptive frame resolution.
We will see in the next post the level of efficiency now reached by the VP9 encoder compared to AVC and HEVC, and the level of maturity of the respective ecosystems.
Online Video: infancy, youth and maturity
Over the last decade the consumption of online video has undergone exponential growth, but online video is as old as the Internet itself. Recently Dan Rayburn published a blog post about the early history of the streaming media industry, an “era” (1995-2005) in which pioneers started experimenting with codecs, products and models for the distribution of video over the Internet.
But it’s only with the launch of Youtube (2005-2006) that online video started a really tumultuous growth to become the preeminent portion of global IP traffic. The rise of online video has been so intense that today the traffic generated by video is more than 70% of total Internet traffic, orders of magnitude higher than 10 years ago (and still growing…).
We can say that nowadays online video has entered a phase of maturity. It is a multi-billion business run not only by giant tech companies like Youtube, Netflix, Facebook, Amazon, Hulu, Apple and Vevo, but also by a multitude of traditional broadcasters (BBC, HBO, Sky, just to name a few) with their regional OTT services.
The pressure of competition is now really high, and this will bring many benefits to end users on many fronts, including QoE optimization.
Why optimize video streaming?
In fact, until very recently, no one really cared about video optimization. As in any business in its early stages, it was more important to place the right product on the market (and then find a viable business model before running out of money) than anything else, including QoE optimization. Simply put: if it worked, it was enough.
But now things have changed. It cannot simply “work”: user expectations are constantly growing and it’s increasingly hard to engage users (see graph below). In this scenario, streaming optimization is becoming a key technological factor to differentiate a service from its competitors, increase satisfaction/retention and reduce costs.
Source: Conviva CSR 2015 – How Consumers Judge Their Viewing Experience
How to optimize?
If the reasons to invest in streaming optimization are clear, on the other hand it’s not so easy to find the right way(s) to accomplish it. Users push the play button and just want to watch their favorite video flawlessly. But we know that behind the scenes there’s a lot of work to do to maximize that user experience. It’s a tangle of codecs, streaming protocols, multiple DRMs and CDNs, advertising, interaction flows, personalized experiences and so on.
At the end of the story, users want the maximum possible quality throughout the video, a fast start and zero rebuffering on every screen. It’s up to us to untangle the skein and fulfill those expectations.
The points to be optimized are many but, in my opinion, the three most important are:
1. Video encoding optimizations (Quality)
2. ABR streaming optimizations (Robustness of distribution)
3. Playback optimizations (Reliability of streaming, start time, other aspect of QoE)
I have touched on those points many times over the last 8 years, in several projects (optimization of encoding pipelines and/or codecs, optimization of streaming protocols and servers, optimization of players) and during conferences (see Adobe Max 2009 / 2010 / 2011), and I’ve made “online video optimization” one of my distinctive competencies.
In general, the matter is complex, the variables are many and there are also many boundary conditions, so there’s no single recipe. Maximizing QoE requires the coordination of “optimization campaigns” in each of the aforementioned areas.
This requires flexible instead of static approaches, open-mindedness instead of dogma, and a desire for excellence (in both consultant and customer; paradoxically, not so common to find in the latter), but also a mix of scientific method and inspiration, always remembering that success is in the details.
Creating coordinated optimization strategies across encoding, the delivery chain and players is very complex, so in this article I want to talk mainly about encoding optimization. This topic has become hot recently because of this post on the Netflix blog. They call it “Per-Title Encode”; I call it “(Content) Adaptive Encoding”.
I have worked on this topic for many companies, for example NTT Data, Sky Italy, Intel Media (acquired by Verizon), EvntLive (acquired by Yahoo!) and lately Vevo. I recently co-authored this article on Vevo’s tech blog about how we optimized the encoding of 200,000+ videos at Vevo during 2015. I suggest reading that article for a high-level introduction to the next topic: Content-Adaptive Encoding.
“All fixed set patterns are incapable of adaptability or pliability. The truth is outside of all fixed patterns” Bruce Lee
Encoding video is a very complex process. There’s often the temptation to over-simplify complex things, and encoding is no exception. So usually everyone encodes video with a predefined set of parameters that satisfies some requirements (usually quality and/or target bitrate). But why should we use a single set of parameters (resolution, bitrate, encoding profiles) when we have very different kinds of video and/or playback conditions?
Static solutions to complex problems are rarely capable of producing the best results. If we have changing conditions and changing data, we need to adapt to them if we want to get closer to the optimal solution.
To exemplify the concept, let’s draw a parallel with the problem of function approximation. If we need to approximate an arbitrary function (see picture below), how can we hope to get a useful solution using a single 0-order approximation (red line on the left)? It is too coarse, and the error we get using it is very high (at least in some situations, i.e. for x -> 0). It’s clear that a first-order approximation would be better (green line on the left), but still sub-optimal. As in many other situations, it’s even more useful to partition the problem into smaller (simpler) ones: in this case even a set of simple 0-order approximations (red lines on the right) would be considerably better at estimating the function than the original, ultra-simplified approach, not to mention a set of first-order approximations (green lines on the right).
Partitioning the problem’s domain helps to avoid over-simplification
Drawing a parallel between this problem and encoding, approximating with a 0-order estimator is similar to encoding everything with the same resolution-bitrate “mix” (a.k.a. ABR ladder).
The one-size-fits-all solution is simple, but far from optimal. We must be “adaptive” in the sense of elaborating dynamic strategies to optimize the system.
There are many ways to optimize encoding, but my favorite is, as said above, to partition this multi-dimensional problem into sub-domains or clusters. We don’t necessarily have to apply rigorous math; it’s often more a matter of common sense. If we have a complex problem, let’s try to break it down into simpler pieces that are easier to solve.
For example, when encoding for ABR, we commonly have videos of different complexities (a first variable to analyze) and we watch video on different devices (a second variable to take into account). A static ladder (for ABR streaming) is usually designed for the worst case and, like a 0-order approximation, provides sub-optimal performance.
We know that low-complexity videos (like talking heads or fixed-camera videos) are much easier to encode than complex videos (like sports or action movies). This is inherently related to the way modern codecs compress video data: they exploit temporal and spatial redundancies. Simple motion can be predicted from past frames, and high spatial frequencies are stripped away by quantization.
Low-complexity content can be compressed much more than complex content, with approximately the same perceptual quality.
This is a first partition we can apply to the problem: let’s classify the content according to its complexity and apply specific encoding setups to optimize the overall performance toward the desired goals.
Do you want to save bandwidth globally? Why not encode content at different bitrates according to its complexity? You will have consistent perceptual quality but, globally, savings in bandwidth consumption.
Do you want higher average quality? In this case, let’s encode simpler content at higher resolutions than the resolution we would use with a single, static setup, which is usually calibrated on the worst case (high complexity). A minimal sketch of this idea follows below.
Medium Complexity (click to enlarge): […] (left) vs 720p @2.0Mbps (right)
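A minimal sketch of the classification idea (the threshold and bitrates below are illustrative assumptions, not recommended values): run a fast constant-quality probe and let its output size, a rough proxy of complexity, drive the bitrate choice.

# fast constant-quality probe: output size is a rough complexity proxy
ffmpeg -y -i input.mp4 -c:v libx264 -preset ultrafast -crf 23 -an probe.mp4
size=$(stat -c%s probe.mp4)
# hypothetical threshold: below ~50MB treat as low complexity
if [ "$size" -lt 50000000 ]; then br=1500k; else br=2500k; fi
ffmpeg -y -i input.mp4 -c:v libx264 -b:v $br output.mp4

A real pipeline would use more robust metrics (per-scene measurements, SSIM/VQM) and more than two classes, but the structure is the same.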
Finding the right recipe is not easy, because things may get more complex as we go deeper into this process. For example, complexity is not a scalar property of a video but a local attribute (complexity can change frame by frame, or at least scene by scene). If we add the fact that we may have constraints set by other elements of the pipeline, the logic with which we try to approximate the optimal solution may become complex.
Just to make an example, in ABR streaming we are usually forced to encode video in capped VBR (if not CBR) because of players’ heuristics (this is why I said before that the “final” optimization would be to set coordinated optimization strategies for encoding, distribution and playback: you usually need an optimized player to handle VBR encodings).
So, to improve the level of optimization, we may need to consider not only the average complexity but also the maximum complexity throughout the video, and apply dynamic parameterizations accordingly. Furthermore, complexity may be spatial (high frequencies in the image due to a sharp picture or noise) or temporal (a high level of motion, more difficult to encode for traditional codecs based on motion estimation and compensation). Different complexities deserve different weights inside our “optimization function” and specific parameterizations.
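For reference, a typical capped-VBR recipe for H.264 looks like the following (the cap values are example assumptions): constant quality via -crf, limited by -maxrate/-bufsize so that bitrate peaks stay within what the player’s heuristic can tolerate.

ffmpeg -i input.mp4 -c:v libx264 -crf 21 -maxrate 3000k -bufsize 6000k output.mp4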
Viewing Context-aware encoding
Another variable is represented by the viewing conditions. Why apply the same resolution and parameterization at the same bandwidth level when the video is watched on quite different screens? The human eye has a specific angular resolution, so small defects in picture quality are not visible at high DPI (like that of a smartphone), while the same is not true for low-DPI screens like that of a TV. Mix that with the variable viewing distance and we have another set of variables we can optimize encoding for.
Example of the eye’s different sensitivity: the pictures above simulate the playback of the same video at different screen sizes, approximately a smartphone screen in the upper image and a tablet (double the diagonal) in the lower, cropped image. The picture is the same, simply enlarged. Note that the encoding artefacts are very visible in the lower image, but much less so in the upper one.
Considering the eye’s different sensitivity to artifacts at different DPI, we can optimize the ABR ladder with resolutions, bitrates and parameterizations specifically chosen to conceal artifacts under specific viewing conditions.
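As a hedged example of such a ladder (resolutions and bitrates are illustrative assumptions, to be calibrated per service and per device class), one rendition subset per viewing context might look like this:

ffmpeg -i input.mp4 -c:v libx264 -vf scale=-2:360 -b:v 600k phone.mp4
ffmpeg -i input.mp4 -c:v libx264 -vf scale=-2:720 -b:v 1800k tablet.mp4
ffmpeg -i input.mp4 -c:v libx264 -vf scale=-2:1080 -b:v 4500k tv.mp4

The point is not the specific numbers but that each subset is tuned for the DPI and viewing distance of its device class.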
There are other interesting aspects that enter into the mix of strategies you can use.
I have no time to analyze them here, but they are worth a mention:
– Multi-codec encoding: leverage the best codec available on each platform, i.e. VP9 on Android / Chrome / Firefox, HEVC on 4K TVs and H.264 everywhere else.
– VBR vs CBR: use VBR whenever possible. This requires a custom player, so today it is feasible in DASH for Android and browsers but not for HLS on iOS. It will require multiple encodes but may be worth the effort.
– Another interesting topic is the distance between, and the number of, renditions inside an ABR ladder. Different network conditions (i.e. mobile vs broadband) may require different setups.
– Special renditions: sometimes I have defined special renditions for special cases with specific goals and characteristics (i.e. special renditions to speed up initial buffering).
Concluding, if we mix the various strategies, the improvement in QoE and bandwidth consumption may be considerable. Consider that optimizing the quality/bitrate ratio always generates an increase in QoE, both directly and indirectly. In fact, with giants like Netflix monopolizing bandwidth (40+% of US Internet traffic at peak times), services that are not optimized will start to suffer (or probably are already suffering). ABR streaming can no longer be used as an “alibi” for un-optimized encoding; it’s no longer sufficient to be in the market. You have to master the technology, smooth the edges and give your maximum to be competitive. It’s time to optimize.
I must admit, I’m feeling very guilty: this is the only new post in more than a year. 2013 was wonderful from a professional point of view and I had very few moments, if any, to dedicate to the blog. But in 2014 there are too many interesting trends that I can’t neglect any longer, so I want to return to writing about video encoding, streaming and OTT technologies.
In fact, you know that there are three magic “words” outlining the future of video: 4K, HEVC and DASH.
So, as a 2014 new year’s resolution, I’m planning to write about ideas and optimizations related to this “magic trio”.
4K or not 4K?
The first trend is rapidly gaining momentum. “4K” is on every insider’s lips, and the efforts of Youtube, Netflix and others to quickly offer 4K content are also opening new opportunities for selling 4K TVs and monitors.
I’m focusing part of my research on finding specific optimizations for H.264 encoding of 4K content. In fact I think that, marketing buzz apart, 4K will be served first using the well-known H.264.
There are several optimizations to explore for 4K: for example, custom quantization matrices, a bias toward the use of the 8×8 transform, and changes to psychovisual optimizations, to name a few. 4K also pushes the limits of H.264 for motion estimation and compensation (too-long MVs), creating several efficiency problems. But if it is useful to optimize an HD or Full HD stream, it is much more crucial to super-optimize a 4K stream, because the bitrates we are talking about are difficult to obtain on the Internet, or at least to obtain consistently.
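Purely as an illustrative sketch (the values are assumptions, not a tested 4K recipe), these ideas map onto x264 options like the following, which enable the 8×8 transform, select the JVT quantization matrix and soften adaptive quantization:

ffmpeg -i input_4k.mp4 -c:v libx264 -preset slow -crf 20 -x264opts 8x8dct=1:cqm=jvt:aq-mode=2:aq-strength=0.8 output_4k.mp4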
ABR streaming can help here, but not as usual. Who can accept watching a 2.5Mbit/s 720p rendition on an 80” 4K display because of low bandwidth at peak times? (It is the same experience as watching a 360p video on a 40” screen from 1.5 m away: try it and tell me.) Whoever buys 4K wants 4K, no compromise. Furthermore, as Dan Rayburn underlined, there are few economic reasons to offer 4K, because 4K delivery costs 3-4 times Full HD. This is why I think that optimization is now more important than ever.
HEVC has finally been ratified. As in 2003, when H.264 was ratified, the encoders are now very raw and inefficient and a lot of work remains to be done, but the potential is all there. Theoretically HEVC is said to be 30 to 50% more efficient than H.264 (with higher efficiency at higher resolutions), so it is no mystery that 4K and H.265 are seen as the winning couple. But the increase in pixels to be processed (8x, passing from 1080p25/30 to 2160p50/60) and the complexity of the new codec (approx. 10x during encoding compared to H.264) do not draw a simple scenario, with increases in required processing power of up to a factor of 80x. But hey… we are now like in 2003: we have maybe 10 years ahead to squeeze the max out of H.265, and this is very exciting. In the meanwhile, H.264 still has some room for improvement and will continue to be the king of the hill for at least a couple of years.
I have started to play with HEVC, and the amount of time I dedicate to experiments will probably increase steadily during 2014. So far I have collected interesting results. The bigger block transforms (not only 4×4 and 8×8 as in H.264, but also 16×16 and 32×32), plus advanced deblocking and adaptive filtering, are able to produce a much “smoother degradation” of quality when decreasing the bitrate, especially for high-complexity scenes. On the other hand, the different handling of fine details currently produces less detail retention than H.264, and new approaches to psychovisual optimization have yet to be invented.
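For the curious, the kind of quick test encode used in these experiments can be reproduced with the x265 wrapper available in recent FFmpeg builds (the settings are just an assumed starting point):

ffmpeg -i input.mp4 -c:v libx265 -preset medium -crf 23 output.mp4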
And VP9? Interesting technology, good potential. Will it be successful? Hard to tell; until then, I will keep it under observation.
Last but not least, there’s the new MPEG standard for ABR streaming, MPEG-DASH (Dynamic Adaptive Streaming over HTTP). HLS is spreading across various devices, but at the same time its implementations are frequently buggy and offer no control. DASH, on the other hand, provides plenty of control and makes it possible to change the heuristic. This is very important to achieve the highest possible QoE (or QoS), a key factor in a future where CDNs’ cost per GB is flattening while viewer numbers and stream size/quality are increasing.
So stay tuned.
PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)
The fabulous world of FFmpeg filtering
Transcoding is not a “static” matter; it is dynamic, because you may have in input a very wide range of content types and you may have to set encoding parameters accordingly (this is particularly true for user-generated content).
Not only that: the elaborations you need to perform in a video project may go beyond simple transcoding and involve a deeper capacity for analysis, handling and “filtering” of video files.
Let’s consider some examples:
1. You have input files of several resolutions and aspect ratios and you have to encode them to two target output formats (one for 16:9 and one for 4:3). In this case you need to analyze the input file and decide which profile to apply depending on the input aspect ratio.
2. Now let’s suppose you also want to encode video at the target resolution only if the input has an equal or higher resolution, and keep the original otherwise. Again you’d need some external logic to read the metadata of the input and set up a dedicated encoding profile.
3. Sometimes video needs to be filtered, scaled and filtered again: for instance, deinterlacing, watermarking and denoising. You need to be able to specify a sequence of filtering and/or manipulation tasks.
4. Everybody needs thumbnail generation, but it’s difficult to find a shot that is really representative of the video content. Grabbing shots only on scene changes may be far more efficient.
FFmpeg can satisfy these kinds of complex analysis, handling and filtering tasks even without external logic, using the embedded filtering engine (-vf). For very complex workflows an external controller is still necessary, but filters come in handy when you need to do the job straight and simple.
FFmpeg filtering is a wide topic, because there are hundreds of filters and thousands of combinations. So, using the same “recipe” style as the previous articles in this series, I’ll try to solve some common problems with specific command-line samples focused on filtering. Note that, to simplify the command lines, I’ll omit the parameters dedicated to H.264 and AAC encoding; take a look at the previous articles for that information.
1. Adaptive Resize
In FFmpeg you can use the -s switch to set the resolution of the output, but this is not a flexible solution. Far more control is provided by the “scale” filter. The following command line scales the input to the desired resolution, the same way as -s:
ffmpeg -i input.mp4 -vf "scale=640:360" output.mp4
But scale also provides a way to specify only the vertical or horizontal resolution and calculate the other so as to keep the same aspect ratio as the input:
ffmpeg -i input.mp4 -vf "scale=640:-1" output.mp4
With -1 as the vertical resolution, you delegate to FFmpeg the calculation of the right value to keep the same aspect ratio as the input (the default) or to obtain the aspect ratio specified with the -aspect switch (if present). Unfortunately, depending on the input resolution, this may end up with an odd value, or an even value which is not divisible by 2 as required by H.264. To enforce a “divisible by x” rule, you can simply use the embedded expression evaluation engine:
ffmpeg -i input.mp4 -vf "scale=640:trunc(ow/a/2)*2" output.mp4
The expression trunc(ow/a/2)*2 as the vertical resolution means: use as output height the output width (ow, in this case 640) divided by the input aspect ratio and rounded down to the nearest multiple of 2 (I’m sure most of you are familiar with this kind of calculation).
2. Conditional resize
Let’s go further and find a solution to problem 2 mentioned above: how do we skip resizing if the input resolution is lower than the target?
ffmpeg -i input.mp4 -vf "scale=min(640,iw):trunc(ow/a/2)*2" output.mp4
This command line uses as width the minimum between 640 and the input width (iw), and then scales the height to maintain the original aspect ratio. Notice that “,” may need to be escaped as “\,” in some shells.
With this kind of filtering you can easily set up a command line for massive batch transcoding that smartly adapts the output resolution to the target. Why keep the original resolution when it is lower than the target? Well, if you encode with -crf this may help you save a lot of bandwidth!
3. Deinterlacing
SD content is always interlaced and Full HD is very often interlaced. If you encode for the web you need to deinterlace and produce progressive video, which is also easier to compress. FFmpeg has a good deinterlacing filter named yadif (yet another deinterlacing filter), which is more efficient than the standard -deinterlace switch.
ffmpeg -i input.mp4 -vf "yadif=0:-1:0, scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4
This command deinterlaces the source and then scales it down to half the horizontal and vertical resolution. In this case the sequence is mandatory: always deinterlace prior to scaling!
4. Interlacing aware scaling
Sometimes, especially if you work on IPTV projects, you may need to encode interlaced output (because legacy STBs require interlaced content and also because interlaced content may have higher temporal resolution). This is simple: just add -tff or -bff (top field first or bottom field first) to the x264 parameters. But there’s a problem: when you start from 1080i and want to go down to an interlaced SD output (576i or 480i), you need interlacing-aware scaling, because standard scaling would break the interlacing. No fear: FFmpeg has recently introduced this option in the scale filter:
ffmpeg -i input.mp4 -vf "scale=720:576:-1" output.mp4
The third optional flag of the filter is dedicated to interlaced scaling: -1 means automatic detection; use 1 instead to force interlaced scaling.
5. Denoising
When seeking a high compression ratio, it is very useful to reduce the video noise of the input. There are several possibilities; my favorite is the hqdn3d filter (high quality de-noising 3d filter):
ffmpeg -i input.mp4 -vf "yadif,hqdn3d=1.5:1.5:6:6,scale=640:360" output.mp4
The filter can denoise video using a spatial function (the first two parameters set its strength) and a temporal function (the last two parameters). Depending on the type of source (level of motion), the spatial or the temporal function can be more useful. Pay attention also to the order of the filters: deinterlace -> denoise -> scale is usually the best.
6. Select only specific frames from input
Sometimes you need to control which frames are passed to the encoding stage, or more simply to change the fps. Here are some useful usages of the select filter:
ffmpeg -i input.mp4 -vf "select=eq(pict_type,I)" output.mp4
This sample command filters out every frame that is not an I-frame. This is useful when you know the GOP structure of the original and want to create a fast preview of the video in output. Specifying a frame rate for the output with -r accelerates the playback, while using -vsync 0 will copy the PTS from the input and keep the playback real-time.
Note: the previous command is similar to the input switch -skip_frame nokey (-skip_frame bidir drops B-frames instead during decoding, useful to speed up the decoding of big files in special cases).
ffmpeg -i input.mp4 -vf "select=not(mod(n,3))" output.mp4
This command selects one frame every 3, so it is possible to decimate the original framerate by an integer factor N, which is useful for mobile low-bitrate encoding.
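As a side note (an alternative not covered by the original recipe): when you simply need a fixed output framerate rather than an exact 1-in-N decimation, the dedicated fps filter is often more convenient:

ffmpeg -i input.mp4 -vf "fps=10" output.mp4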
7. Speed-up or slow-down the video
It is also fun to play with the PTS (presentation time stamps):
ffmpeg -i input.mp4 -vf "setpts=0.5*PTS" output.mp4
Use this to speed up your video by a factor of 2 (frames are dropped accordingly), or the one below to slow it down:
ffmpeg -i input.mp4 -vf "setpts=2.0*PTS" output.mp4
8. Generate thumbnails on scene changes
The thumbnail filter tries to find the most representative frames in the video; good for generating thumbnails.
ffmpeg -i input.mp4 -vf "thumbnail,scale=640:360" -frames:v 1 thumb.png
A different way to achieve this is to use the select filter again. The following command selects only frames that differ more than 40% from the previous one (and so are probably scene changes) and generates a sequence of 5 PNGs.
ffmpeg -i input.mp4 -vf "select=gt(scene,0.4),scale=640:360" -frames:v 5 thumb%03d.png
The world of FFmpeg filtering is very wide, and this is only a quick and “filtered” view of it. Let me know in the comments or on Twitter (@sonnati) if you need more complex filters or have problems adventuring in this fabulous world 😉
PART I – Introduction (revised 02-jul-2012)
PART II – Parameters and recipes (revised 02-jul-2012)
PART III – Encoding in H.264 (revised 02-jul-2012)
PART IV – FFmpeg for streaming (revised 02-jul-2012)
PART V – Advanced usage (revised, 19-oct-2012)
PART VI – Filtering (new, 19-oct-2012)
During June, Netflix reached the record level of 1 billion hours streamed in a month. It is an incredibly huge amount of bandwidth, an impetuous and growing stream of bits that makes Netflix one of the TOP 10 Internet bandwidth “consumers”. But how much does this huge stream cost Netflix?
I remember an article from a couple of years ago by Dan Rayburn in which he estimated an average cost of 3c$ per GByte, a low rate usually applied by CDNs to very large clients. In a 2011 article, Dan corrected the estimate, discussing a more complex pricing model for such big players (a mix of per-GB and per-Gbit/s pricing). The new estimate can, however, be approximated as 1.5c$/GB.
This level of pricing may seem very low and negligible in the overall Netflix business, but I think that the growing consumption, driven by the relatively high average amount of content streamed per user per month, may become a problem for Netflix if not brought under control.
Let’s dig deeper into the numbers.
Let’s suppose that the average bitrate streamed to users is 2.4 Mbit/s (see this post on the Netflix blog); this means that every hour of content requires on average 1080 MB (~1 GB).
If you multiply this by 1 billion hours, you get roughly 1 billion GBs at 1.5c$/GB, in the order of 10M$ per month, 120M$ per year.
Compared to the CDN costs of 2011, 2012 is around double. This is caused by an increase in the number of clients, but most of all by an increase in the average amount of data streamed per client: a whopping 90 minutes per day per user. I think this may be considered near the maximum possible, but a further increase to 120 minutes may be realistic in a worst-case simulation. This would mean 160M$ per year.
You know that I’m very sensitive to encoding optimization. I have always stated that for this kind of business, encoding optimization is of fundamental importance. I have already demonstrated in the past that H.264 can be optimized much more than what players like Youtube, Netflix, Hulu and the BBC are doing today; here I specifically addressed Youtube and Netflix.
Netflix could benefit from a 30% to 50% reduction in average bitrate consumption with a strong optimization of the entire encoding pipeline (plus, eventually, of the Silverlight player). This could mean savings of 60-80M$ per year and, at the same time, an improvement in the average quality delivered to clients, a key feature in the increasingly competitive OTT video market.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203536.73/warc/CC-MAIN-20170322213003-00507-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 49,129
| 205
|
https://www.reddit.com/r/encryption
|
code
|
I am currently working on digitally signing PDF files using Adobe AATL, and it seems that my code for it is not working. The AATL provider gave me a certificate chain and a certificate. When I try (using Java code) to digitally sign a PDF, the certificate chain becomes visible on my machine and on other colleagues' machines, but on other machines it seems not to work.
Would appreciate some guidance.
As I continue my search, I thought I would ask here.
As the title suggests, I am looking for a comprehensive, reasonably maintained list of self-encrypting SSDs.
I have had some success finding vendors that sell software for SEDs, but I don't think their lists are really up to date.
Any help would be appreciated.
I am working on an app where the goal is to upload user data, store it, do some processing on that data and present it to the user when asked. We want to follow an end-to-end encryption model where the system has no way to decrypt the data a user uploaded. I haven't dealt with security this way before, and in the proposed architecture I have a few questions/concerns which I want to share with you all.
My concerns are:
I have a file which I need to decrypt, but I only have the private key and not the passphrase. Is there any way to do this?
I'm working on Windows.
I just got a new laptop, a Thinkpad X260, and I'm running Manjaro Linux on it. Since it's a laptop and there is a possibility it could one day be stolen, I feel it might be a good idea to encrypt the hard drive. I've never done this; I've heard of VeraCrypt and LUKS, but I don't know which would be better for me to use, or whether I would run into problems using an encrypted system. It seems that some people who use Linux think it's a pain and some like it, so what makes it a pain? Does it make it difficult to access files or cause any problems? Also, I figure I should start backing up my system regularly; does anybody have any advice for that? Thanks.
I'm trying to send a message, and I press the lock button when the chat is open to enable OTR, but when I do, it just shows 3 loading dots for a little while and then stops trying, and it doesn't work. I'm not sure why this is, because I know I have OTR enabled under plugins. Can someone offer a suggestion?
While trying to undo an accidental spam report I had made on WhatsApp, I ended up on the WhatsApp FAQ, where two things caught my eye. They give me the impression that the end-to-end encryption guaranteed by the app has loopholes and that the app can access conversations between users (under certain conditions). In the FAQ, under "Staying safe on WhatsApp", paragraph "Report" (https://bit.ly/2piJ9vD), the following can be read: "As always, this (spam, red.) report sends the most recent messages in the chat to WhatsApp." However, WhatsApp has claimed that it does not possess the key needed to unlock the encryption, nor does it keep the messages on a server once they are delivered. How can a message get sent to WhatsApp for review if WhatsApp doesn't have access to any of them? Furthermore, the FAQ and the app's terms and conditions always use specific numbers when describing an amount (e.g. "undelivered messages are deleted from our servers after 30 days"); however, the use of "most recent messageS" seems very vague. I am now wondering how many messages from the chat are being sent for review. I also wonder why the chat history is deleted when I report somebody as spam. I could not find an answer to any of these questions on the WhatsApp website. I hope one of you can appease my curiosity.
TL;DR: How is WhatsApp supposedly able to review messages when a sender is reported as spam, if the messages are supposed to be encrypted end-to-end?
I've been doing some research on post-quantum cryptography and I've also come across ECC. Is ECC public-key cryptography quantum resistant? If not, are there any quantum-resistant alternatives to, say, RSA that can be used on the current internet infrastructure?
Working on a small encrypted messaging system, and I'm a novice when it comes to encryption. A few things are obvious: if I encrypt the same string, I get the same encrypted result. Sooo my novice perspective says, why not add like 4 random bytes to the beginning of the string, and then 4 random bytes to the end of the string, plus a comma or something to parse out when I decrypt... So my question is, does this do anything for the actual encryption? I assume the answer is basically no, but further along this vein, does doing this give someone trying to decrypt it more "usable" data or less, or is it irrelevant?
Thanks, and please forgive my ignorance here. I've tried to google this one like crazy but didn't come up with much.
So I'm in the market for a new phone. I currently have the 6s+ and have been looking to upgrade.
Given the state of things politically, and really just because, I thought I'd come here to ask some expert opinions on which phone is preferred these days. What are your thoughts on iPhones, the Pixel, the OnePlus, etc.? What are you using for the best hardware encryption and/or available apps/cloud encryption?
I've done a bit of research and know probably just slightly more than a layman about the topic, but I am looking for more in-depth reasoning for one choice over another. Any help is greatly appreciated, guys/gals.
It would be great if I had a place to write/create that was encrypted. A wiki would be the ideal framework to house the data and provide all the basic organization and formatting. Does anyone know if there are cloud-hosted wiki sites that allow the customer to encrypt the content/data with their own keys? I'm open to other suggestions too if nothing like what I'm seeking exists.
I tried searching online, but the keywords are so basic that it's been difficult to find anything amidst all the websites trying to explain to me what encryption is.
The last time I posted https://freecrypt.org, some users had concerns (and rightly so) about files needing to be uploaded in order to be encrypted, and/or being intercepted in transit. But at the time there was no technology to handle files larger than 1MB that also encrypts ALL file types (every site I have come across on the internet only does text files). Now I present the improved http://secure.freecrypt.org, which does all you need without any software installation. Feel free to use it, and I welcome your comments.
Hi! We have released new ConnectyCube JS (Web) chat code samples with end-to-end encryption implemented! Check them here: https://github.com/ConnectyCube/connectycube-js-samples/tree/master/end-to-end-encryption You might also find our documentation useful: https://developers.connectycube.com/guides/end-to-end-encryption-otr
I am choosing Google Drive to store backups of my users' data.
I am planning to use symmetric AES encryption.
I am planning to generate an AES encryption key with a Firebase Cloud Functions Node.js backend when the user creates an account.
I am thinking of storing the encryption key in the Firebase Realtime Database. The client app reads the user's encryption key when he is logged in with his Google account. The encryption key is used to encrypt data that is going to be saved to Google Drive, and to decrypt the data when it is downloaded.
Is it safe to use the same encryption key every time I update the data?
I am using Google Drive to avoid having access to the data myself. In addition, the user is able to use his data on multiple devices. Also, if someone happens to get access to the user's Google Drive files, he couldn't read the data.
What do you think about the approach? Could there be a better option with free cloud services? I might prefer saving the encryption key to a non-Google service, but I do not have any budget, and there should be an extensive free-tier option without a time limit.
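(For illustration only: a minimal sketch of the symmetric flow described above, using the openssl CLI as a stand-in for the app's crypto layer; key handling is deliberately simplified and this is not production advice.)

key=$(openssl rand -hex 32)   # 256-bit per-user key, stored server-side
iv=$(openssl rand -hex 16)    # fresh random IV for every upload, stored next to the ciphertext
openssl enc -aes-256-cbc -K "$key" -iv "$iv" -in data.json -out data.enc
openssl enc -d -aes-256-cbc -K "$key" -iv "$iv" -in data.enc -out data.json

Reusing the same key across updates is generally fine for confidentiality as long as the IV is freshly random for every encryption.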
We have a client that wants us to store some relatively sensitive data (not SSNs or card numbers, but data they consider sensitive and would like encrypted). With that said, I have always had a conundrum with key storage! If you don't use an HSM, how can you really store your encryption keys so your application can use them to encrypt/decrypt data? For example, if you store the key in clear text on the same server as the encrypted data... well, this obviously wouldn't be very secure.
With that said, I am open to any and all ideas, but I have outlined a proposed data encryption architecture below that I think solves this issue. However, for some reason my gut tells me there are some serious flaws, and I am just too close to it to see them!
Any feedback would be greatly appreciated. Thank you!
(FYI, Our software company is referred to as COMPANY below, and we provide web based software).
Client Data Encryption / Key Storage Architecture
This document has been assembled for the purpose of outlining and evaluating the general methodology that is / will be used to encrypt the sensitive data of one of COMPANY's clients.
Individual Key Management
The benefits of this approach are:
Encrypting / Decrypting (viewing / editing) Secure Data
EDIT: Updated formatting. Initial post (from my phone) was hard to read.
Hi people of Reddit,
My partner and I are doing a school project about currently existing encryption methods and how they will be affected by quantum computing. So we were wondering if anyone on this subreddit could help us find sources for our questions or be able to answer some of them. If you have any feedback to give, please do so! Thank you in advance for answering!
Main question: Will currently existing encryption methods survive the rise of quantum computing?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157070.28/warc/CC-MAIN-20180921112207-20180921132607-00359.warc.gz
|
CC-MAIN-2018-39
| 9,629
| 44
|
http://www.bleepingcomputer.com/forums/t/209664/samsung-psa60e-laptop-how-to-turn-wireless-on/
|
code
|
Samsung PSA60E laptop - How to turn wireless on?
Posted 09 March 2009 - 12:14 PM
Is there a way of turning it on, or is it because I don't have wireless hardware available? There's a wireless light on the front of the laptop, so I assumed it had wireless functionality.
Thanks for your time.
Posted 09 March 2009 - 02:21 PM
Just because there is a light doesn't mean there is a wireless card installed.
Can you confirm from the manual that your model has a wireless card in it? The other option is to have a look for yourself by opening it up.
Post the model name & number for help with this if needed.
Also, go to Start > Connect To, and see if it gives you an option to connect to a wireless network.
"Emu, You Moo, We All Moo for Emu!" <-- Thanks to Animal
"If at first you don't succeed; call it version 1.0"
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398459333.27/warc/CC-MAIN-20151124205419-00215-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 909
| 14
|
http://forum.knittinghelp.com/t/new-member-asking-for-help-with-a-veil-pattern/29640
|
code
|
So happy to be here; the videos are fantastic. I'm hoping to get to know you all soon.
My main question is that I've been trawling the net for a veil pattern. I'm getting married in 2 years (so hopefully I have enough time to make it!) and would like to find a pattern for a very fine, open veil, either knitted or crocheted, I don't mind at all; I just want it to be as fine and airy as possible while still being beautiful.
I can't find much, just wraps and shawls, unfortunately. I'm not experienced enough at lace knitting to risk altering a pattern without some advice! (But I pick things up quickly, so hopefully I would be able to if someone could tell me how.)
I'm very excited about the wedding and am hoping to make a few things for it, but the veil I would like to be an heirloom if possible.
Thanks for your time, looking forward to my time here
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609817.29/warc/CC-MAIN-20170528120617-20170528140617-00374.warc.gz
|
CC-MAIN-2017-22
| 852
| 5
|
https://forums.rancher.com/t/rancher-cluster-imported/40526
|
code
|
We need to import a cluster into our Rancher (v2.6.9). This cluster was created in the past via Rancher; that's important.
We see that some things are not working well now:
- We cannot take a snapshot or recover from one, etc.
- In the past we had to remove the rancher-system-agent.service from the VMs that compose the cluster, so now that service, also imported, doesn't appear. So when we turn off one of the controllers, the cluster disappears from Rancher with the message "Cluster agent is not connected".
Is there any way to solve these problems and leave the cluster as it was some time ago in Rancher?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653631.71/warc/CC-MAIN-20230607074914-20230607104914-00019.warc.gz
|
CC-MAIN-2023-23
| 600
| 5
|
http://anabolicminds.com/forum/weight-loss/38604-dumb-question.html
|
code
|
Liquid Clen and Liquid T3. Are they taken orally, or are they to be injected?
Both are wrong. They are to be taken as a suppository. Hope you're fond of sticking things in your...
Nate, you're always trying to get someone else to make the same mistakes that you have. Don't be mad because you actually liked it and continue to do it. It's cool, just don't shake my hand. Originally Posted by natedogg
I heard it was up to 30% more effective if administered in that manner. Not really though. It was just an excuse to stick things in my... I mean in my lab rat's ass. And by lab rat I mean Beelzebub. Originally Posted by revodrew
I've heard they are the meanest, ugliest, most vicious rats around. Why would you pick one of those! Originally Posted by natedogg
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164583265/warc/CC-MAIN-20131204134303-00098-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 740
| 5
|
https://forum.mendix.com/link/space/microflows/questions/95897
|
code
|
passing single object to microflow
I have a page with a list of shopping items and an "Add to Cart" button. When I click the button, all the items on the page are moved to the cart. How can I send a single item to the cart separately?
Change the input parameter of the microflow from list to single object.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099892.46/warc/CC-MAIN-20231128151412-20231128181412-00382.warc.gz
|
CC-MAIN-2023-50
| 284
| 3
|
https://cdn.wowinterface.com/forums/printthread.php?s=9f19ed31133b1f7d4021fbed60c5e159&t=58103
|
code
|
Names for different frames in WOW
I saw this post
and it helped me do some of the stuff I want, but an issue is I don't know all the names of the things I want to turn off while out of combat, like the minimap or different bars.
Is there a list of the names that WoW uses, so I can add those and run those scripts?
Also, is there a tutorial post with this script info, so I could play around with it?
Yes: /framestack, or /fstack for short.
Notice that pressing ALT toggles through all frames under your mouse cursor.
Pressing CTRL opens the "Frame Attributes" window for the currently highlighted frame.
Pressing SHIFT toggles additional texture info.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703561996.72/warc/CC-MAIN-20210124235054-20210125025054-00077.warc.gz
|
CC-MAIN-2021-04
| 760
| 12
|
http://8iapps.com/app/1077011353/ojo
|
code
|
Using this application, the user can upload images from the gallery or camera; these can be any informative images containing text, such as visiting cards. We made exploring images as simple as adding any number of images with text information and then performing a search on those uploaded images. This app brings the text on visiting cards to life by making it searchable: it allows the user to easily look up a specific image by its text, so it appears at the top of the results. The results appear after entering a keyword in the search box, on which the lookup over the uploaded images is performed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824601.32/warc/CC-MAIN-20181213080138-20181213101638-00123.warc.gz
|
CC-MAIN-2018-51
| 537
| 1
|
https://www.swflug.org/2004/07/ann-microolap-database-designer-1-1/
|
code
|
We're proud to announce the release of the microOLAP Database Designer for MySQL, a visual development system intended for database design, modeling, creation, modification, reverse engineering, and import/export of data from/to various data sources.
Direct download link:
http://microolap.com/dba/mysql/designerm/mymdd.zip
What's new in the microOLAP Database Designer:
[+] Added the ability to predefine "garbage symbols" in MS Access table names and replace them with symbols allowed by MySQL.
[+] Added the ability to predefine which MS Access database objects (tables only, or tables and views) are used for reverse engineering from MS Access and ADO data sources.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998100.52/warc/CC-MAIN-20190616102719-20190616124719-00148.warc.gz
|
CC-MAIN-2019-26
| 657
| 5
|
http://appdownloadreview.com/tag/m810dluv75c/
|
code
|
A workstation is a specialized computer designed for technical or scientific work. The notion of storing both data and instructions in memory became known as the 'stored program concept', to distinguish it from earlier methods of instructing a computer. While the Altair 8800 was the first real personal computer, it was the release of the Apple II a few years later that signaled the start of the PC as a sought-after home appliance.
Full-time students can complete this program in four years. We group a huge selection of computers and Apple products, software, accessories, and computer parts for building your own PC: over 25,000 products. Registers are used for the most frequently needed data items, to avoid having to access main memory every time the data is required.
Access to this Nanodegree program runs for the length of time specified in your subscription plan. Sometimes programs are executed by a hybrid of the two methods. Students get hands-on experience writing code, testing programs, fixing errors, and doing many other tasks that they will perform on the job.
In some cases, a computer may store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. To succeed in this rigorous and accelerated program, students should have both a passion for learning computer science and the ability to dedicate significant time and effort to their studies.
In a widely circulated paper, mathematician John von Neumann outlined the architecture of a stored-program computer, including electronic storage of both program instructions and data, which eliminates the need for clumsier programming methods such as plugboards, punched cards, and paper tape.…
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00206.warc.gz
|
CC-MAIN-2024-18
| 1,936
| 5
|
https://learn.microsoft.com/en-us/answers/questions/1617420/how-to-fix-datetime-culture-info-issue
|
code
|
How to fix DateTime Culture Info Issue?
I'm developing an HR application using Angular 13 and a .Net 6 Web API. I'm facing a challenge related to handling different time zones for users.
The application is deployed in the AWS Asia/Mumbai region. When users from various time zones access the app, a culture information issue arises. Let's consider two examples:
- User X (USA, New York): Sets a reminder at 10:00 AM for a task. The application should send the reminder to User X at 10:00 AM in their local New York time zone.
- User Y (Russia, Moscow): Sets a reminder at 12:00 AM for a task. The application should send the reminder to User Y at 12:00 AM in their local Moscow time zone.
Currently, the application triggers reminders based on the server's time zone (Asia/Mumbai). This results in incorrect reminder times for users in different locations.
- Frontend: Angular 13
- Backend: .Net 6 Web API
- Scheduling: Hangfire
I've been unable to resolve this issue on my own. I'd appreciate any suggestions or ideas on how to ensure reminders are sent based on the user's specific time zone.
Hi Praveen, to make sure the reminders are sent based on the users' time zones, we need to make sure that the time a user sets is interpreted relative to their client machine, not the server. When we capture the time in Angular, make sure it's the client's local time; when we store it in the database or anywhere else, make sure it still represents the client's time, ideally together with the client's time zone.
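One concrete way to do that (a sketch with assumed names, not part of the accepted answer): have Angular send the user's IANA time zone, e.g. Intl.DateTimeFormat().resolvedOptions().timeZone, along with the reminder, and convert the local wall-clock time to UTC on the server before scheduling the Hangfire job:

    using System;

    public static class ReminderScheduling
    {
        // Hypothetical helper: localWallClock is the time the user picked
        // (e.g. 10:00 AM), ianaZoneId is e.g. "America/New_York".
        // .NET 6 resolves IANA zone ids directly via TimeZoneInfo.
        public static DateTime ToUtcTrigger(DateTime localWallClock, string ianaZoneId)
        {
            var zone = TimeZoneInfo.FindSystemTimeZoneById(ianaZoneId);
            var unspecified = DateTime.SpecifyKind(localWallClock, DateTimeKind.Unspecified);
            return TimeZoneInfo.ConvertTimeToUtc(unspecified, zone);
        }
    }

    // Usage with Hangfire (SendReminder is a placeholder for your own job method):
    // BackgroundJob.Schedule(() => SendReminder(reminderId),
    //     ReminderScheduling.ToUtcTrigger(new DateTime(2024, 5, 1, 10, 0, 0), "America/New_York"));

This way Hangfire only ever deals in UTC instants, and each user's 10:00 AM stays 10:00 AM in their own zone.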
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818072.58/warc/CC-MAIN-20240422020223-20240422050223-00615.warc.gz
|
CC-MAIN-2024-18
| 1,449
| 12
|
https://forum.opencart.com/viewtopic.php?f=198&t=221925
|
code
|
Hi! Our webshop checkout was rendered inoperable when Braintree updated its APIs. The Braintree payment processor should be updated to fix this; however, I have no idea how to do that, and I also have no clue whatsoever how the payment processor was installed. The OpenCart version is 220.127.116.11. The original employees that worked on this webshop left the company a long time ago, and they didn't leave any documentation behind... It does not show up in the admin panel as an extension, and I have no idea where to look next.
For updating the Braintree SDK that comes with OpenCart.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057830.70/warc/CC-MAIN-20210926053229-20210926083229-00081.warc.gz
|
CC-MAIN-2021-39
| 661
| 4
|
https://kobu.com/author/draw-en.html
|
code
|
The SVG chart generator is a Perl script for generating a very simple configuration-type chart, like the one below, with limited but simple instructions.
The generator allows you to draw a limited number of simple shapes (such as squares or circles) with a text description and connect them horizontally and vertically.
A block of drawing instructions in a source text is called a drawing block and is enclosed between a begin marker of !draw! and an end marker of !end!. For example:
!draw! paper "Source" -; ball "Draw.pm" -; paper "Output" !end!
This block will produce the following SVG drawing:
A drawing block forms a grid structure. It consists of rows delimited by new lines and columns delimited by semicolons. A column is called a cell and includes a drawing instruction for one figure.
!draw! cell; cell; ... cell; cell; ... ... !end!
A grid of two rows by three columns was produced by this block:
!draw! box "Box 1.1" -; box "Box 1.2" +; box "Box 1.3" ~; box "Box 2.2" !end!
The first row contains three figures. The second row only contains a figure in the middle column. A column containing nothing or only a tilde (~) produces a blank cell, with no figure in it. The number of cells in a row may vary line by line.
A cell contains an instruction to draw a figure (also called a shape) with description text and/or lines to neighboring figures (called hands).
Syntax of a cell:
|shape "text" [hand]||hand is optional|
|shape "" [hand]||no text|
A shape is one of the following:
box   - hardware such as a PC or server (rounded-corner rectangle)
ball  - software or process, shown as a rugby ball (ellipse)
paper - file or resource (rectangle/polygon with the upper right corner cut)
disk  - file system or folder (rectangle with double lines at top and bottom)
A hand is a line drawn from the current cell to a neighboring cell. The line can be drawn to the right-side figure and/or the figure immediately below. A single-character symbol denotes the type of hand:
-   line to the right
|   line going down
+   lines going to the right and down
~   no line drawn
|box "Box"||Just a box and a text|
|ball "Ball" -||A rugby ball with a line connecting to the right figure|
|paper "Paper" +||A paper with both the right and down lines|
|disk "Disk" |||A disk with a line connecting to the figure below|
How they look in horizontal order:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00662.warc.gz
|
CC-MAIN-2023-14
| 2,473
| 36
|
https://community.brave.com/t/it-wouldnt-hurt-to-have-bigger-buttons-like-microsoft-edge/399741
|
code
|
Microsoft Edge has bigger, thicker buttons. This works great for 2-in-1 laptops and just feels more relaxed anyway. Make this optional in settings in case some users don’t want it.
There are many posts raising issues with the small size of buttons, folder icons, tabs, menus, etc., and it would naturally be a very good idea to have a built-in setting in Brave to increase/decrease the size at the user's choice; but so far nothing new has come from the developers in response. To be fair to Brave, this issue also occurs in Chrome, Firefox, Opera, and other Windows apps such as MS Word and MS Excel, so it is primarily a Windows setting. I was having the same problem with my laptop because of the default Windows settings. Since I increased the size by 10%, it suits my eyesight needs very well (see the before and after images).
To increase the size of menus, icons, and text in the URL bar: open Windows Settings, go to Accessibility, and click on Text size to increase the size.
To increase the width of tabs: open brave://flags/, search for "tab scrolling" and enable it as in the image below. Result: tabs are wider and you can scroll left or right to the other tabs. (Be careful with flags!)
To increase the size of page content, CTRL+mousewheel does the job.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00634.warc.gz
|
CC-MAIN-2022-27
| 1,262
| 5
|
http://news.sys-con.com/node/2648336
|
code
|
By Andreas Grabner | May 10, 2013 01:45 PM EDT
Adding more memory to your JVMs (Java Virtual Machines) might be a temporary workaround for memory leaks in Java applications, but it certainly won't fix the root cause of the issue. Instead of crashing once per day, the application may just crash every other day. "Preventive" restarts are also just another desperate measure to minimize downtime, but, let's be frank: this is not how production issues should be solved.
One of our customers - a large online retail store - ran into such an issue. They run one of their online gift card self-service interfaces on two JVMs. During peak holiday seasons, when users are activating their gift cards or checking the balance, crashes due to OOM (Out Of Memory) were more frequent, which caused a bad user experience. The first "measure" they took was to double the JVM heap size. This didn't solve the problem, as the JVMs were still crashing, so they followed the memory diagnostics approach for production as explained in Java Memory Leaks to identify and fix the root cause of the problem.
Before we walk through the individual steps, let's look at the memory graph that shows the problems they had in December during the peak of the holiday season. The problem persisted even after increasing the memory. They could fix the problem after identifying the real root cause and applying specific configuration changes to a third-party software component.
Only after identifying the actual root cause and applying the necessary configuration changes did the memory leak issue go away. Increasing memory was not even a temporary solution that worked.
Step 1: Identify a Java Memory Leak
The first step is to monitor the JVM/CLR Memory Metrics such as Heap Space. This will tell us whether there is a potential memory leak. In this case we see memory usage constantly growing, resulting in an eventual runtime crash when the memory limit is reached.
Java Heap Size of both JVMs showed significant growth starting Dec 2nd and Dec 4th resulting in a crash on Dec 6th for both JVMs when the 512MB Max Heap Size was exceeded.
Step 2: Identify problematic Java Objects
The out-of-memory exception automatically triggers a full memory dump that allows for an analysis of which objects consumed the heap and are most likely to be the root cause of the out-of-memory crash. Looking at the objects that consumed most of the heap below indicates that they are related to a third-party logging API used by the application.
Sorting by GC (Garbage Collection) Size and focusing on custom classes (instead of system classes) shows that 80% of the heap is consumed by classes of a third-party logging framework
A closer look at an instance of VPReportEntry4 shows that it contains five strings, with one consuming 23KB (compared to several bytes for other string objects). This also explains the high GC size of the String class in the overall heap dump.
Individual very large String objects as part of the ReportEntry object
Following the referrer chain further up reveals the complete picture. The EventQueue keeps LogEvents in an Array, which keeps VPReportEntrys in an Array. All of these objects seem to be kept in memory as the objects are being added to these arrays but never removed and therefore not garbage collected:
Following the referrer tree reveals that global EventQueue objects hold on to the LogEvent and VPReportEntry objects in array lists which are never removed from these arrays
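To make the failure mode concrete, here is a minimal sketch (hypothetical names, not the vendor's code) of the pattern described above: producers keep adding events to an unbounded in-memory queue, and if the consumer that drains it stalls, nothing is ever removed and the heap grows until the OOM:

    import java.util.ArrayList;
    import java.util.List;

    class EventQueue {
        // Unbounded backing list: the root of the leak if drain() stops being called.
        private final List<String> events = new ArrayList<>();

        synchronized void add(String event) {
            events.add(event);
        }

        // Called by the background batch writer; removes everything it returns.
        synchronized List<String> drain() {
            List<String> batch = new ArrayList<>(events);
            events.clear();
            return batch;
        }
    }

    public class LeakDemo {
        public static void main(String[] args) {
            EventQueue queue = new EventQueue();
            // If the batch thread never runs, this loop grows the heap until OutOfMemoryError.
            for (long i = 0; ; i++) {
                queue.add("log entry " + i);
            }
        }
    }

A bounded queue (e.g. java.util.concurrent.ArrayBlockingQueue with an explicit capacity and a drop-or-block policy) would have turned this silent leak into visible back-pressure.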
Step 3: Who allocates these objects?
Analyzing object allocation allows us to figure out which part of the code is creating these objects and adding them to the queue. Creating what is called a "Selective Memory Dump" when the application reached 75% heap utilization showed the customer that the ReportWriter.report method allocated these entries and that they had been "living" on the heap for quite a while.
It is the report method that allocates the VPReportEntry objects that stay on the heap for quite a while
Step 4: Why are these objects not removed from the Heap?
The premise of the third-party logging framework is that log entries will be created by the application and written in batches at certain times by sending these log entries to a remote logging service using JMS. The memory behavior indicates that even though these log entries might be sent to the service, these objects are not always removed from the EventQueue leading to the out-of-memory exception.
Further analysis revealed that the background batch writer thread calls a logBatch method, which loops through the event queue (calling EventQueue.next) to send the current log events in the queue. The question is whether as many messages were taken out of the queue (using next) as were put into the queue (using add), and whether the batch job is really called frequently enough to keep up with the incoming event entries. The following chart shows the method executions of add, as well as the calls to logBatch, highlighting that logBatch is actually not called frequently enough and is therefore not calling next to remove messages from the queue:
The highlighted area shows that messages are put into the queue but not taken out because the background batch job is not executed. Once this leads to an OOM and the system restarts it goes back to normal operation but older log messages will be lost.
Step 5: Fixing the Java Memory Leak problem
After providing this information to the third-party provider and discussing with them the number of log entries and their system environment, the conclusion was that our customer used a special logging mode that was not supposed to be used in high-load production environments. It's like running with the DEBUG log level in a high-load or production environment. This overwhelmed the remote logging service, which is why the batch logging thread was stopped and log events remained in the EventQueue until the out-of-memory occurred.
After making the recommended changes the system could again run with the previous heap memory size without experiencing any out-of-memory exceptions.
The Memory Leak issue has been solved and the application now runs even with the initial 512MB Heap Space without any problem.
They still use the same dashboards they have built to troubleshoot this issue, and to monitor for any future excessive logging problems.
These dashboards allow them to verify that the logging framework can keep up with log messages after they applied the changes.
Adding additional memory to crashing JVMs is, at best, a temporary fix. If you have a real Java memory leak, it will just take longer until the Java runtime crashes. It will even incur more overhead due to garbage collection when using larger heaps. The real answer is the simple approach explained here. Look at the memory metrics to identify whether you have a leak or not. Then identify which objects are causing the issue and why they are not collected by the GC. Working with engineers or third-party providers (as in this case) will help you find a permanent solution that allows you to run the system without impacting end users and without additional resource requirements.
If you want to learn more about Java Memory Management or general Application Performance Best Practices check out our free online Java Enterprise Performance Book. Existing customers of our APM Solution may also want to check out additional best practices on our APM Community.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660966.51/warc/CC-MAIN-20160924173740-00208-ip-10-143-35-109.ec2.internal.warc.gz
|
CC-MAIN-2016-40
| 15,492
| 61
|
http://www.droidforums.net/forum/android-general-discussions/14761-android-streams.html
|
code
|
I am a Chicago Blackhawks/Bears fan who just relocated to North Carolina, and I had to stream all of my teams' games this year through sites such as Rajangan.net and Justin.tv.
Is there an app or SOMETHING that would allow me to "stream" or access such sites to watch my games live?
I heard Adobe Flash 10.1 is supposed to come out SOMETIME... we don't know when, but I need something NOW! My Blackhawks play TONIGHT!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010292910/warc/CC-MAIN-20140305090452-00025-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 417
| 3
|
http://www.sqlservercurry.com/2009/03/list-all-stored-procedures-of-database.html
|
code
|
The other way to script objects of your database in SQL Server 2005/2008 is to use the Microsoft SQL Server Database Publishing Wizard 1.1.
However if you need to do the same task programmatically using T-SQL, then here's the T-SQL that will help you do so:
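(The query itself did not survive in this copy of the post; a query matching its description, listing each procedure with its definition on SQL Server 2005/2008, would be:)

    SELECT p.name       AS ProcedureName,
           m.definition AS ProcedureDefinition
    FROM sys.procedures AS p
    JOIN sys.sql_modules AS m
        ON p.object_id = m.object_id;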
This query will list all the stored procedures and their definitions in your database. In order to save the results of this SELECT statement to a text file, check this post of mine:
Save SELECT query output to a text file
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607242.32/warc/CC-MAIN-20170522230356-20170523010356-00062.warc.gz
|
CC-MAIN-2017-22
| 483
| 4
|
https://nicevps.net/news?page=9
|
code
|
News - Service status
Beta testing & hPanel Updates
By sys-admin @ 2017-07-28 20:18:06
We are looking for beta testers and translators, who will receive the service for free while testing.
If you are interested please get in touch
hPanel v18.104.22.1687 :
- Few bug fixes
+ Statistics for VPS with Graphs
+ Affiliate system
+ Other small upgrades
Have a real nice day!
Welcome to NiceVPS.net!
By sys-admin @ 2017-07-23 00:00:00
We open the beta to the public today; for this reason we are giving out nice discount codes.
Register now and order your bargain VPS.
SPECIAL 11% DISCOUNT COUPON: NICE2MEETU
will be available till 2017-08-15.
Have a nice day!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610004.56/warc/CC-MAIN-20200123101110-20200123130110-00035.warc.gz
|
CC-MAIN-2020-05
| 629
| 16
|
https://social.technet.microsoft.com/Forums/en-US/605030e1-8a0f-48d2-84ee-d62506ff0b7d/shared-mailbox-send-as-in-outlook-2013-stuck-in-delegates-outbox-folder
|
code
|
I have Outlook 2013 connecting to Exchange 2013 in online mode, not cached mode. Using ECP, I have given a user full access rights and Send As rights to a shared mailbox. The user can see, delete, and read emails in the shared mailbox. When the user tries to send mail as the shared mailbox, the email gets stuck in the user's outbox folder and never sends, not even after manually clicking Send and Receive. I also do not have the registry key DelegateSentItemsStyle.
Any idea how I can fix this?
We have exactly the same problem here and I can be more specific about my case:
Exchange 2013 with CU1 installed.
1) with the SendAs function, the message is stuck in the outbox folder
2) with the SendOnBehalf function, the message is delivered but contains an incorrect SMTP address; in fact, the field is replaced by the content of the "legacyDN" attribute from AD (CN=xxxxxxx,OU=xxxxxxx,xxxxxxx)
In both cases this problem affects Outlook 2010 and Outlook 2013 as the sending client, but not the OWA client.
In both cases we see that activating cached mode for Outlook works around the problem, but it creates other problems, like allocating GBs of storage for .OST files (in particular if a user adds other people's accounts in the client).
In both cases we see that purging the auto-complete cache (in Outlook or with MFCMAPI) does not fix the problem.
We have to deploy Exchange 2013 next weekend for 120 users. We are researching another, better workaround. At the least, we will deploy a GPO to configure cached-mode properties if that is the only workaround.
I would be very happy to have a fix for this rapidly. We have now opened an SR (113040410339300) with Microsoft, but without any success so far.
Answer from Microsoft support:
I have tested the issue in my lab environment and I have the same issue as yours. There is a bug on this issue based on my research on our internal data. Please wait for some time and it will be fixed later. Thanks for your patience and time.
After asking when this will be fixed:
I have submitted the feedback about this issue to the Microsoft Exchange 2013 product team and if I get any reply from them, I will update this post. Thanks for your understanding and time.
For information, in the case of a "SendOnBehalf" message, we always get 3 error messages in the application event log, in sequence:
1) Event ID 4007 Rules
Transport engine failed to evaluate condition or apply action. Details: 'Organization: '' Message ID '<bf7dca790f774a76b14d96e697af239c@EXCHMB.dsa.dom>' Rule ID '361cd6ea-84a8-42f9-a201-3e4906295fe3' Predicate 'senderAttributeContains' Action ''. ArgumentOutOfRangeException Error: System.ArgumentOutOfRangeException: The address '/o=Devi/ou=Exchange Administrative Group /cn=Recipients/cn=ec91c813c8a34033bf384bb2174d33c0-yankee1' is not a valid SMTP address.
at Microsoft.Exchange.MessagingPolicies.Rules.TransportUtils.UserAttributeContainsWords(TransportRulesEvaluationContext context, String user, String attributeList, String tagName)
at System.Linq.Enumerable.Any[TSource](IEnumerable`1 source, Func`2 predicate)
at Microsoft.Exchange.MessagingPolicies.Rules.AndCondition.Evaluate ...
This suggests that the SMTP address has already been wrongly filled in with the legacyDN by the Outlook client (when the entry was taken from the GAL?).
2) Event ID 10003 Poison Message
3) Event ID 4999 General
For the "SendAs" case, no event log in Exchange as the message is stuck already in the Outlook client.
If this is a known issue, I don't think I should have to open a case. If Microsoft offers this product with an online mode, they have to support it and ship a fix without a case. It's not an isolated case; it affects everyone that buys Exchange. The online-mode option is offered in this product; if it doesn't work in online mode, don't sell it with this option, or just solve this problem and release the fix to everyone.
We still do not have a better solution from Microsoft for this issue. So we added:
- a 300GB HDU (no RAID) to store the users' caches in offline mode (.ost files)
- we have designated a group of users that, for now, absolutely need the SendOnBehalf and SendAs features
- for this group we have pushed by GPO the "force cached mode" setting, the location of the .OST files, and the fact that additional mailboxes loaded in Outlook are NOT cached (to limit the cache size). This works properly.
- we use terminal servers, so we decided that users do not need more than one session open but may use any of the available TSes, so we have set only one cache file location, independent of the user profile (otherwise you get an error loading Outlook if the cached .ost file is already opened by another session)
- we started production for 120 user mailboxes 2 weeks ago, with 40 users with the cache active, and it seems to work, so this stands until a better solution arrives
We also regret that this bug, which has been reproduced by Microsoft according to the person in charge of the service request we opened, is still not resolved. For a larger deployment it could be a major issue.
Have the exact same issue: a large number of users on Citrix that need to be able to send emails with different FROM addresses (more than one). My only little bit of success was adding all the FROM email addresses (departments) from the GAL to a user's Contacts. I then set the address book to first check the user's Contacts (by default it was the GAL). So I create a new email, select FROM via the contacts I just added, and it works. What doesn't work, though, is doing it again using the dropdown menu/history next to the FROM button. If I select a FROM address via the dropdown, I get "you do not have the permission to send on behalf of this user". If I press the FROM button, go to the user's Contacts, select a FROM email address and send the email, it works, again and again. If I select it via the FROM dropdown, it does not. Now, this, along with the omission of the very useful DelegateSentItemsStyle (the option to have outbound emails in the delegate's Sent Items as well as the shared mailbox's), is giving me and my company headaches. The SentItemsStyle issue I have managed to emulate via transport and mailbox rules; this FROM issue I can't. It was nice finding this post, though, as I was going crazy trying a lot of weird and dangerous things.
OK guys, I think I have a pretty good workaround after "refining" my previous one. I selected all the contacts from the GAL that I would like to be able to send email as and added them to the user's Contacts. I then opened the Address Book's properties and removed the GAL from "check address lists in this order", after selecting Custom of course. This way I can fire up a new email, click FROM > Other email address, type any part of the FROM addresses I have in my Contacts, press OK and then send. This way no GAL lookup is done (it seems). The only thing that doesn't work is using the history dropdown of the FROM button; this seems to have the same effect as selecting a contact from the GAL.
Hope this helps,
We have a similar problem, but our solution is:
don't create your Outlook profile via Outlook, but create it via Start/Configuration/E-mail. Add all e-mail accounts to the newly created Outlook profile. (Best is to delete the existing Outlook profile.)
We don't know what the difference is between creating it via Outlook and creating it via Start/Configuration/E-mail, but all our hanging-mail problems are solved.
Hope this helps;
I have tried your method, but when I try to select the address from my Contacts list I get "Cannot perform the requested operation. The command selected is not valid for this recipient."
I have also tried the method below (manual config) and then it just stays in the outbox.
It's getting really frustrating, as we need Send As but cannot load the terminal server with all these OSTs in cached mode.
You would expect something a little more efficient considering the thousands of dollars spent on this.
I have been using this method for days now on several users, testing Send As before we go live, so at least in my environment it works as described, 100%. I can see your error popping up around the internet, so something else must be wrong in your environment. I would suggest you create a brand new user/mailbox and:
- Switch Outlook to online mode
- Go to the GAL, select the FROM contacts you would like to send as, right click and select add to contacts.
- Then go to the Address Book (in the inbox), go to Tools > Options and, under "when sending email, check these address lists first", select Custom, remove the GAL, and leave only the user's Outlook Contacts in there. The last bit is important: I observed a case where a user had two accounts in Outlook (an Exchange one and a POP3 one) with two Contacts folders, both set to be used as an address book, and my trick was not working. I had to remove everything but the Exchange mailbox's Contacts and Suggested Contacts for it to work.
Again, I would suggest you try this with a "clean" user (mailbox, .nk2 cache file). Our goal is for Outlook to use the user's Contacts whenever it is trying to validate/look up the FROM address/display name.
If it works with a brand new user, then you can begin troubleshooting why it doesn't work with an old one.
Yes, working OK now with CU2 installed :)
My version is Exchange 2013 RTM. I'm delighted to hear that CU2 can fix it.
My client encountered the problem only with the shared mailbox, and it happened randomly.
I tried re-creating the client's Outlook profile and adding the shared mailbox manually rather than having it auto-added by Exchange 2013.
Both methods failed.
My workaround is to uncheck cached mode, and then no mail gets stuck in the outbox.
But that method causes heavy server load and is inconvenient for roaming users.
Is there any risk in upgrading to CU2? What should I pay attention to?
Yes, it works now in plain online mode, but with DelegateSentItemsStyle it's still confusing:
the mail will be sent, but it gets stuck in my outbox folder.
Indeed, it's not satisfying that there is no working way to get the DelegateSentItemsStyle behavior on the server side; the reg key just seems to be a dirty client-side hack.
We work with Exchange 2013 CU2 and Outlook 2013, and our customers work with Exchange 2013 CU3. And it doesn't work fine. My installation works "fine" in cached mode.
Is there any official solution from MS?
So many posts and so few solutions :-(
Hi MS team, why do you break good features again with every new version?
Removed Full Access permissions, then reassigned Full Access permissions with AutoMapping disabled
Disable Outlook Auto-Mapping with Full Access Mailboxes:
Issues that can occur when you add multiple Exchange accounts to the same Outlook 2010 or Outlook 2013 profile:
I have the SAME issue with Exchange 2013 SP1!!!
I have Exchange 2010 SP3 and Exchange 2013 SP1 (fresh, in a new VM) in coexistence. If I move a mailbox to 2010, everything works. If I move the mailbox back to 2013, I get the same bug. I have tried creating a new mailbox; the bug is still there.
1. Create a distribution group in Exchange Admin Center 2013.
2. Set Send As in delegation.
3. Set a mail address for this group.
4. Try to send mail from it.
5. The message gets stuck in the outbox.
6. Delete the message from the outbox.
7. Move the mailbox to Exchange 2010.
8. Try to send mail again - it works!!
9. But I want to use 2013, and with no cached mode :)
Same problem here:
Exchange 2013 SP1 & Outlook 2013 SP1. Our client uses Citrix and PCs. Using cached mode is NOT an option. Some profiles have 50GB mailboxes!!!!!!!! I do not know about you, but our C: drives do not have TBs of space available to accommodate cached mode.
What I find very weird is the fact that this problem has existed since 2003. EVERY time, MSFT makes a workaround after years of complaining from us. Why can't you just make it the default in a NEW version of your products?
This is just very annoying. A year ago you said it would be fixed. Now SP1 is already out and it is still not fixed. Just hurry up!
I had the same issue in our environment:
E2k13 CU15 on W2k12R2 (all patches).
The domain controller is also 2012R2.
We migrated from E2k7 SP1. We have a Citrix terminal server environment with 16 TSes. On the E2k7 version we deployed a GPO that set the registry key "DelegateSentItemsStyle" for Outlook clients, to move sent emails to the delegate's Sent Items folder. After the migration to E2k13, that GPO was left untouched.
In Outlook profiles on the new Exchange server, emails sent via "Send As" kept getting stuck in the sender's outbox, even though they were received by the recipients. I was going mad searching for the reason. The switch -MessageCopyForSentAsEnabled $true allows me to copy sent emails to the Sent Items folder of the shared mailbox and the sender's mailbox, but they still got stuck in the sender's outbox.
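For reference, a sketch of the full cmdlet behind that switch (the mailbox name is a placeholder), run from the Exchange Management Shell:

    Set-Mailbox -Identity "SharedMailbox" -MessageCopyForSentAsEnabled $true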
Because we use roaming profiles in this environment, cached mode was not an option.
Many hours and many more Google searches later, I got an idea: what happens if I just delete this registry key? So I did it for a test user and voilà: Send As emails finally leave the outbox and move to the Sent Items folder, as they should.
Reg-Key for Outlook 2013:
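To the best of my knowledge, the value lives here (delete it, or set it to 0):

    HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Outlook\Preferences
    DelegateSentItemsStyle (DWORD)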
Hope this can help somebody.
Greetings from Germany, and sorry for the bad English ;)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120878.96/warc/CC-MAIN-20170423031200-00596-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 13,515
| 99
|
https://news.slashdot.org/story/05/02/04/028220/first-program-executed-on-l4-port-of-gnuhurd
|
code
|
wikinerd writes "The GNU Project was working on a new OS kernel called HURD from 1990, using the GNU Mach microkernel. However, when HURD-Mach was able to run a GUI and a browser, the developers decided to start from scratch and port the project to the high-performance L4 microkernel. As a result development was slowed by years, but now HURD developer Marcus Brinkmann made a historic step and finished the process initialization code, which enabled him to execute the first software on HURD-L4. He says: 'We can now easily explore and develop the system in any way we want. The dinner is prepared!'"
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946199.72/warc/CC-MAIN-20180423203935-20180423223935-00061.warc.gz
|
CC-MAIN-2018-17
| 602
| 1
|
https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&oldid=218558
|
code
|
Source Code Analysis Tools
Source code analysis tools are designed to analyze source code and/or compiled version of code in order to help find security flaws. Ideally, such tools would automatically find security flaws with such a high degree of confidence that what's found is indeed a flaw. However, this is beyond the state of the art for many types of application security flaws. Thus, such tools frequently serve as aids for an analyst to help them zero in on security relevant portions of code so they can find flaws more efficiently, rather than a tool that just automatically finds flaws.
Some tools are starting to move into the IDE. For the types of problems that can be detected during the software development phase itself, this is a powerful phase within the development life cycle to employ such tools, as it provides immediate feedback to the developer on issues they might be introducing into the code during code development itself. This immediate feedback is very useful, especially when compared to finding vulnerabilities much later in the development cycle.
Strengths and weaknesses
Strengths:
- Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds)
- Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL injection flaws, and so forth (see the sketch after this list)
- Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected
Weaknesses:
- Many types of security vulnerabilities are very difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. Tools of this type are getting better, however.
- High numbers of false positives.
- Frequently can't find configuration issues, since they are not represented in the code.
- Difficult to 'prove' that an identified security issue is an actual vulnerability.
- Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.
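As a hedged illustration of that second strength (hypothetical code, not from any tool's documentation), here is the kind of Java flaw that analyzers such as FindBugs flag with high confidence, together with the parameterized fix they typically suggest:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    class UserDao {
        // Flagged: untrusted input concatenated into SQL (SQL injection).
        ResultSet findUserUnsafe(Connection conn, String name) throws SQLException {
            Statement st = conn.createStatement();
            return st.executeQuery("SELECT * FROM users WHERE name = '" + name + "'");
        }

        // The usual fix: a parameterized query, which the driver escapes safely.
        ResultSet findUserSafe(Connection conn, String name) throws SQLException {
            PreparedStatement ps = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
            ps.setString(1, name);
            return ps.executeQuery();
        }
    }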
Important selection criteria
- Requirement: Must support your language, but not usually a key factor once it does.
- Types of vulnerabilities it can detect (out of the OWASP Top Ten?) (plus more?)
- Does it require a fully buildable set of source?
- Can it run against binaries instead of source?
- Can it be integrated into the developer's IDE?
- License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)
OWASP Tools Of This Type
Disclaimer: The tools listed in the tables below are presented in alphabetical order. OWASP does not endorse any of the vendors or tools by listing them in the table below. We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think that this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct this information.
Open Source or Free Tools Of This Type
- Brakeman - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications
- Codesake Dawn - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino and Ruby on Rails applications. It can also work for non-web applications written in the Ruby programming language
- FindBugs - Find Bugs (including some security flaws) in Java Programs
- Flawfinder - Scans C and C++
- FxCop (Microsoft) - FxCop is an application that analyzes managed code assemblies (code that targets the .NET Framework common language runtime) and reports information about the assemblies, such as possible design, localization, performance, and security improvements.
- Google CodeSearchDiggity - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more. Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence, simultaneously.
- OWASP SWAAT Project - Simplistic Beta Tool - Languages: Java, JSP, ASP .Net, and PHP
- PMD - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)
- PreFast (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs
- SonarQube - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells
- VCG - Scans C/C++, Java, C# and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.
Commercial Tools Of This Type
- bugScout (Buguroo Offensive Security)
- bugScout is a latest-generation source code analysis tool; it detects source code vulnerabilities and, being easy to use, makes accurate management of the application life cycle possible.
- Contrast from Contrast Security
- Contrast is not a static analysis tool like these others. It instruments the running application and provides code level results, but doesn't actually perform static analysis. It monitors the code that is actually running.
- IBM Security AppScan Source Edition (formerly Ounce)
- Insight (KlocWork)
- Parasoft Test (Parasoft)
- Pitbull Source Code Control (Pitbull SCC)
- A software application designed to efficiently handle application source code control together with the corresponding compiled files, ensuring integrity before placing them into production. As added value, it allows analysis of the source code to identify malware that could affect the normal functioning of the application.
- Seeker (Quotium)
- Seeker performs code security analysis without actually doing static analysis. Seeker does Interactive Application Security Testing (IAST), correlating runtime code and data analysis with simulated attacks. It provides code-level results without actually relying on static analysis.
- Source Patrol (Pentest)
- Static Source Code Analysis with CodeSecure™ (Armorize Technologies)
- Kiuwan - SaaS Software Quality & Security Analysis (Optimyth)
- Static Code Analysis (Checkmarx)
- Security Advisor (Coverity)
- PVS-Studio (PVS-Studio)
- Source Code Analysis (HP/Fortify)
- Veracode (Veracode)
- Sentinel Source solution (Whitehat)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510994.61/warc/CC-MAIN-20231002100910-20231002130910-00221.warc.gz
|
CC-MAIN-2023-40
| 6,818
| 54
|
https://blogs.msdn.microsoft.com/oldnewthing/20140819-00/?p=203
|
code
|
Back in 1994 or so, my friend helped out his buddy who worked as the IT department for a local Seattle company known as Sub Pop Records. Here's what their Web site looked like back then. Oh, and in case you were wondering, when I said that his buddy worked as the IT department, I mean that the IT department consisted of one guy, namely him. And this wasn't even his real job. His main job was as their payroll guy; he just did their IT because he happened to know a little bit about computers. (If you asked him, he'd say that his main job was as a band member in Earth.)
The mission was to make it possible for fans to buy records online. Nobody else was doing this at the time, so they had to invent it all by themselves. The natural metaphor for them was the shopping cart. You wandered through the virtual record store putting records in your basket, and then you went to check out.
The trick here is how to keep track of the user as they wander through your store. This was 1994. Cookies hadn't been invented yet, or at least if they had been invented, support for them was very erratic, and you couldn't assume that every visitor to your site is using a browser that supported them.
The solution was to encode the shopping cart state in the URL by making every link on the page include the session ID in the URL. It was crude but it got the job done.
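Concretely (an invented illustration, not the site's actual markup), every link the server emitted carried the session ID as a query parameter, so whichever link the visitor followed, the cart came along:

    <!-- hypothetical page generated for session 4f2a9c -->
    <a href="/catalog/singles?session=4f2a9c">Browse singles</a>
    <a href="/cart/add?item=SP123&amp;session=4f2a9c">Add to cart</a>
    <a href="/checkout?session=4f2a9c">Check out</a>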
The site went online, and soon they were taking orders from excited fans around the world. The company loved it, because they probably got to charge full price for the records (rather than losing a cut to the distributor). And my friend told me the deep dark secret of his system: "We do okay if you ask for standard shipping, but the real money is when somebody is impatient and insists on overnight shipping. Overcharging for shipping is where the real money is."
(Note: Statements about business models for a primitive online shopping site from 1994 are not necessarily accurate today.)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860089.11/warc/CC-MAIN-20180618051104-20180618071104-00597.warc.gz
|
CC-MAIN-2018-26
| 1,948
| 6
|