| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
https://infinitysend.com/products/8mm-lava-stone-bracelet
|
code
|
Lava Stone is a grounding stone that strengthens one's connection to Mother Earth. It gives us strength and courage, allowing us stability through times of change. It provides guidance and understanding in situations where we may need to 'bounce back'. A calming stone that is very useful in dissipating anger.
- 8mm lava stone beads
- Beads are strung on an elastic cord
* Due to the nature of gemstones, no two will be alike. Please expect slight variations in color from the items shown *
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499829.29/warc/CC-MAIN-20230130201044-20230130231044-00761.warc.gz
|
CC-MAIN-2023-06
| 491
| 4
|
https://blogs.msdn.microsoft.com/gautamg/2010/12/11/faq-why-am-i-getting-access-denied-during-recording/
|
code
|
The last action was not recorded because access to the application was denied.
There could be two reasons for this:
- The application under test (the one you are trying to record) has higher privileges than the Visual Studio or Microsoft Test Manager process. In this case the behavior is deliberate (i.e., by design) to prevent security issues.
- You are running on a VM with a dynamic disk. This is a bug in the current release.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687484.46/warc/CC-MAIN-20170920213425-20170920233425-00534.warc.gz
|
CC-MAIN-2017-39
| 429
| 4
|
http://www.phpclasses.org/discuss/blog/PHP-Classes-blog/post/109/thread/9/
|
code
|
Is there any beta available showing how the voting process will look?
I'm looking at this page and beginning to worry a little:
I imagine we could have hundreds of contestants, and I can't see myself clicking "View" one by one to find the best entry. After 10-20 times, anyone would give up.
I'm thinking of using something like this: Website Thumbnail Generator
It could build a gallery of 100x100 thumbnails where most layouts can be judged easily. For example, replace the user avatar with a page screenshot; as a voter, a screenshot matters more to me than an avatar, because I'm focused on choosing a layout, not a nice avatar face ;)
It would be more voter-friendly - please think about it.
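The gallery idea above boils down to scaling each page screenshot to fit a fixed box. A minimal sketch of the aspect-ratio math behind such thumbnails (a hypothetical helper, not part of the site's code):

```python
def fit_within(width, height, box_w=100, box_h=100):
    """Scale (width, height) to fit inside box_w x box_h, keeping aspect ratio."""
    scale = min(box_w / width, box_h / height)
    # Never upscale small images; round down to whole pixels.
    scale = min(scale, 1.0)
    return max(1, int(width * scale)), max(1, int(height * scale))

# A 1024x768 screenshot becomes a 100x75 thumbnail.
print(fit_within(1024, 768))
```

An image library such as Pillow does the same computation internally when asked to thumbnail into a bounding box.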
|2009-12-13 00:02:15 - In reply to message 1 from Tomasz Malewski|
|Sorry, it is not yet finished.|
I thought of generating thumbnails, but there is no time to work on that now.
I think there will be about 20 contestants. That is not so many, so thumbnails may not be necessary.
I intend to have a sort of slide show to switch between previews of each theme using AJAX, so users do not have to fully load the page to switch to the next theme to vote.
|2009-12-16 18:22:07 - In reply to message 2 from Manuel Lemos|
|what operating system are you using?|
I could write a small, cross-platform Python program to generate these for you.
|2009-12-16 20:03:17 - In reply to message 3 from Daveed|
|Thanks for the offer but I will not be generating thumbnails in this contest. The design preview system lets you try the same design for different types of pages, different users, and different screen resolutions. It would take too many thumbnails to generate all possibilities.|
Instead, now it is possible to switch to the next and previous themes using navigation buttons.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400380638.30/warc/CC-MAIN-20141119123300-00173-ip-10-235-23-156.ec2.internal.warc.gz
|
CC-MAIN-2014-49
| 1,717
| 17
|
https://secure.dshield.org/diary/iOS+7+Adds+Multipath+TCP/16682
|
code
|
iOS 7 added a new feature that hasn't been widely advertised. This feature, Multipath TCP (MPTCP), is currently used by Siri, but could be used by other applications down the road. MPTCP is an extension to TCP that allows a TCP connection between hosts using multiple IP addresses. Its design is particularly interesting in that it is backward compatible with firewalls. As far as your firewall or other network devices are concerned, each multipath TCP connection is a valid TCP connection using its own sequence numbers and its own handshake to set it up and tear it down. All the "magic" of signaling happens via new TCP options.
MPTCP is not proprietary. It is a standard (RFC 6824) and has been implemented for Linux, for example, but so far it has not seen much use, so you may notice it for the first time when looking at traffic from iOS 7 devices.
Just as a quick refresher: a TCP connection is established by the client sending a SYN packet to the server. The server responds with a SYN-ACK, and the client completes the handshake with an ACK packet. During this handshake, the hosts exchange random initial sequence numbers. The sequence number increments by one for each byte transmitted and is essential for reassembling the data stream. Without sequence numbers, the data stream could lose its order.
Simplistically, one could set up two TCP connections and just distribute the data between them. But if the sequence number stream is not continuous, many firewalls will disrupt the connection. This is why each MPTCP stream has its own sequence numbers. That poses another problem: how do we know how the streams, or "subflows" as the RFC calls them, fit together?
Let's first talk about how an MPTCP connection is set up:
The TCP connection starts out like any TCP connection, with a SYN/SYN-ACK/ACK handshake. However, if MPTCP is available, the three handshake packets will include the "Multipath Capable (MP_CAPABLE)" option. Both ends need to support multipath, or it will not be used. The MP_CAPABLE option includes a key that will later be used to authenticate additional subflows.
A host may now add a new subflow, and this subflow is authenticated using a hash derived from the keys exchanged earlier and nonces that are unique to each new subflow. The MP_JOIN option is used to carry this data. Throughout the connection, hosts may inform each other of newly acquired addresses, and they may use them for new subflows. Since each subflow has its own set of sequence numbers, "Data Sequence Signals" are used to communicate how the sequence numbers in the subflow map to the combined data flow.
The protocol has a lot of little details that make it well suited for hosts connected to multiple wireless networks. For example, different subflows may have different priorities. One usage scenario is a cell phone connected to a WiFi network as well as a cellular network, and roaming between the two. For example, you start a TCP connection at home and continue using it as you leave the house and your phone switches to the cellular network. As long as both networks are available for a while, MPTCP may drop the WiFi connection and exclusively use the cellular data connection until you reach another WiFi network.
But enough about how the protocol works; here are some packets. A quick BPF to capture these packets (for example with tcpdump):
It is not perfect, but because the options involved are rather large, you can find MPTCP packets by looking for larger TCP header sizes. This filter looks for a header size of 56 bytes and above, with 60 being the maximum (you don't really need the bitmask for the filter). Wireshark and tshark deal rather well with MPTCP. For example, tshark displays the TCP options as:
Multipath TCP: Multipath Capable
Kind: Multipath TCP (30)
0000 .... = Multipath TCP subtype: Multipath Capable (0)
.... 0000 = Multipath TCP version: 0
Multipath TCP flags: 0x01
0... .... = Checksum required: 0
.... ...1 = Use HMAC-SHA1: 1
Multipath TCP Sender's Key: 8848941202347829228
tcpdump on the other hand has a much harder time:
16:44:15.681318 IP 22.214.171.124.57799 > 126.96.36.199.443: Flags [S], seq 847601216, win 65535, options [mss 1460,nop,wscale 3,Unknown Option 3000017acdc123cc42a7ec,nop,nop,TS val 102569696 ecr 0,sackOK,eol], length 0
it just displays the raw option as an "Unknown Option". Option "0x30" happens to be the "Multipath Capable" option.
0x32: DSS - Data Sequence Signal
0x33: ADD_ADDR - Add new address
0x34: REMOVE_ADDR - Remove address
0x35: MP_PRIO - Change subflow priority
0x36: MP_FAIL - Fallback (used to communicate checksum failures back to sender)
0x37: MP_FASTCLOSE - Fast Close (like TCP Reset, but only for subflow)
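The header-size heuristic and the option walk above can be illustrated with a small parser. This is a hypothetical sketch, not the diary's BPF filter: it reads the data offset from the high nibble of byte 12 of a TCP header (56+ bytes of header means a data offset of 14 words or more) and then iterates over the options list, where MPTCP shows up as option kind 30.

```python
def tcp_header_len(tcp):
    """TCP header length in bytes: high nibble of byte 12 is the data offset in 32-bit words."""
    return (tcp[12] >> 4) * 4

def tcp_options(tcp):
    """Yield (kind, data) for each TCP option; kinds 0 (EOL) and 1 (NOP) have no length byte."""
    off, end = 20, tcp_header_len(tcp)
    while off < end:
        kind = tcp[off]
        if kind == 0:          # end of option list
            break
        if kind == 1:          # NOP padding
            off += 1
            continue
        length = tcp[off + 1]
        yield kind, tcp[off + 2:off + length]
        off += length

# Build a fake 24-byte TCP header: data offset of 6 words, one MSS option (kind 2).
hdr = bytearray(24)
hdr[12] = 6 << 4
hdr[20:24] = bytes([2, 4, 0x05, 0xb4])   # MSS = 1460
```

Feeding real captured segments through `tcp_options` and filtering for kind 30 would single out the MPTCP packets the same way the header-size filter approximates.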
Johannes B. Ullrich, Ph.D.
SANS Technology Institute
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00008.warc.gz
|
CC-MAIN-2022-27
| 4,795
| 27
|
http://playerstage.sourceforge.net/doc/Player-3.0.2/player/group__util__playerjoy.html
|
code
|
Joystick control for a mobile robot. More...
Joystick control for a mobile robot.
playerjoy is a console-based client that provides planar, differential-drive teleoperation of position2d and position3d devices. In other words, playerjoy allows you to manually drive your (physical or simulated) robot around. playerjoy uses velocity control, and so will only work when the underlying driver supports velocity control (most drivers do).
playerjoy is installed alongside player in $prefix/bin, so if player is in your PATH, then playerjoy should also be. Command-line usage is:
$ playerjoy [options] <host:port> [<host:port>] ...
Where options can be:
- -v : verbose mode; print Player device state on stdout
- -3d : connect to position3d interface (instead of position)
- -c : continuously send commands, instead of sending commands only on change (useful with drivers with watchdog timers, like the segwayrmp)
- -n : don't send commands or enable motors (debugging)
- -k : use keyboard control
- -p : print out speeds on the console
- -a : send car-like commands (velocity and steering angle)
- -udp : use UDP instead of TCP (deprecated, currently disabled)
- -speed : maximum linear speed (m/sec)
- -turnspeed : maximum angular speed (deg/sec)
- -dev <dev> : Joystick device file (default /dev/js0)
- <host:port> : connect to a Player on this host and port
playerjoy supports both joystick and keyboard control, although joysticks are only supported in Linux. If supported, joystick control is used by default. Keyboard control will be used if: the -k option is given, or playerjoy fails to open /dev/js0 (i.e., there is no joystick).
Joystick control is as follows: forward/backward sets translational (x) velocity, left/right sets rotational (yaw) velocity.
Details of keyboard control are printed out on the console.
- Calibrate out initial offset; should be possible by parsing the JS_EVENT_INIT message.
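The joystick input playerjoy relies on arrives as `struct js_event` records read from /dev/js0. A rough sketch of decoding those events, including the JS_EVENT_INIT flag mentioned in the to-do item above, written here in Python as an illustration of the kernel interface rather than playerjoy's own C code:

```python
import struct

# Linux struct js_event: u32 time (ms), s16 value, u8 type, u8 number.
JS_EVENT_FORMAT = "<IhBB"
JS_EVENT_BUTTON, JS_EVENT_AXIS, JS_EVENT_INIT = 0x01, 0x02, 0x80

def decode_js_event(raw):
    """Decode one 8-byte joystick event into a small dict."""
    time_ms, value, etype, number = struct.unpack(JS_EVENT_FORMAT, raw)
    return {
        "time_ms": time_ms,
        "value": value,
        "axis_or_button": number,
        "is_axis": bool(etype & JS_EVENT_AXIS),
        # JS_EVENT_INIT marks the synthetic events sent right after open();
        # these carry the initial state one could calibrate out.
        "is_init": bool(etype & JS_EVENT_INIT),
    }

# A synthetic "initial axis 0 at rest" event, as the kernel would send on open.
event = decode_js_event(struct.pack(JS_EVENT_FORMAT, 0, 0, JS_EVENT_AXIS | JS_EVENT_INIT, 0))
```

Recording the values carried by the init events and subtracting them from later readings is one way to implement the offset calibration suggested above.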
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887746.35/warc/CC-MAIN-20180119045937-20180119065937-00750.warc.gz
|
CC-MAIN-2018-05
| 1,908
| 22
|
https://community.spiceworks.com/topic/1982004-odd-issue-accessing-exchange-owa-from-external-network-same-isp
|
code
|
I have an odd issue. Our main network, including the Exchange server, sits behind a SonicWall firewall, and mail goes through Mimecast etc. This fiber circuit is with a local ISP. We also have a spare VDSL connection in our server room (same ISP) which just has a simple VDSL modem and a laptop attached, nothing else. The issue is that I can't access ANY of the main network's IPs (including OWA on Exchange) from this separate DSL connection. I can from other locations, and over 4G on phones, etc.
What on earth is going on? If I go home I can access OWA etc., but not from this VDSL line in our server room. Could the ISP be doing some odd routing?
Thanks in advance.
A quick update: when I changed the separate (spare VDSL) connection's DNS to the ISP's DNS and not 22.214.171.124, it resolved perfectly for around one minute, and now it's no longer resolving https://owa.domain.com/exchange - this is getting odder and odder. Edited Apr 6, 2017 at 11:29 UTC
Are any of these devices resolving to a local network IP rather than the internet based IP?
Had this happen once when a subnet mask was incorrectly set on a router. We could do pretty much anything online, but the incorrect mask blocked a few subnets on the local ISP's network. It's a long shot, but it took entirely too long to resolve and I'll never forget it.
Could the laptop be connecting to the LAN WiFi and the VDSL ethernet at the same time? This could cause routing issues. Also, does the VDSL line run through the SonicWall? It doesn't sound like it, but it's worth asking. When you ping the owa address, what does it resolve to when it works on the laptop? What does it resolve to when it doesn't work on the laptop?
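A quick way to act on the suggestion above (checking whether the name resolves to a local network IP rather than the internet-facing one) is to classify the resolved address. A hypothetical helper, not from the thread, with illustrative addresses:

```python
import ipaddress

def looks_internal(ip_string):
    """True if the address is RFC 1918 / loopback / link-local, i.e. a LAN address."""
    ip = ipaddress.ip_address(ip_string)
    return ip.is_private or ip.is_loopback or ip.is_link_local

# Compare what the laptop resolves owa.domain.com to in each state:
for ip in ("192.168.1.10", "8.8.8.8"):
    print(ip, "internal" if looks_internal(ip) else "public")
```

If the OWA name comes back internal on the VDSL laptop, split-horizon DNS or a stale resolver cache is the likely culprit rather than routing.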
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823153.58/warc/CC-MAIN-20171018214541-20171018234541-00580.warc.gz
|
CC-MAIN-2017-43
| 1,688
| 7
|
https://larntech.net/android-bottom-sheet-dialog/
|
code
|
Android Studio Bottom Sheet Dialog Tutorial: we are going to implement a bottom sheet dialog, and within the dialog we will add an EditText with a button and an on-click listener.
An Android bottom sheet is a view that slides up from the bottom of the screen to show extra content.
This tutorial is in Java; I will do another one using Kotlin and leave the link below.
In this tutorial, our bottom sheet will display an EditText and a button.
When the button in the bottom sheet is clicked, we open a new activity and display the name provided in the EditText.
Steps in creating android bottom sheet dialog:
- Adding a button in our main activity that opens the bottom sheet dialog
- Adding a Java class that extends BottomSheetDialogFragment
- Listening for the bottom sheet button click to open a new activity
- Displaying the entered name in the new activity
Kindly check the video tutorial above, and if you need additional help, comment below.
For the source code, you can get it for $5 to support this work. Thanks.
GET SOURCE CODE FOR ONLY $5
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358591.95/warc/CC-MAIN-20211128194436-20211128224436-00094.warc.gz
|
CC-MAIN-2021-49
| 1,031
| 13
|
https://scifi.stackexchange.com/questions/125525/does-the-exterior-appearance-of-hogwarts-really-change-throughout-the-series
|
code
|
The exterior appearance of Hogwarts castle has a unique and easily recognisable look. This article is about the incredibly detailed model of the castle that has been used for every film in the series.
While most of it stays the same over the series, there are changes between the movies. Most obvious are the changes between the second and third movie (a bell tower, a bridge, and more are added). These changes can (maybe) be rationalised by the explanation that in the earlier parts no scene takes place near the areas added in later parts.
Is it theoretically possible that Hogwarts Castle never changed its exterior appearance during the movie series?
I'm not asking about behind-the-scenes material, as I'm sure they have made changes there. I'm wondering about the audience's point of view. Example: throughout the series we see Hogwarts from a lot of different angles. Let's say in part 4 a new tower appears. The absence of this tower in older movies can be explained by it being hidden behind a wall. In this case there was no "real" change. The tower could have existed before; we've just never had a chance to see it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703520883.15/warc/CC-MAIN-20210120120242-20210120150242-00568.warc.gz
|
CC-MAIN-2021-04
| 1,136
| 4
|
https://tug.org/pipermail/pdftex/2007-December/007442.html
|
code
|
[pdftex] RGB image in pdf intended for printing.
john at wexfordpress.com
Mon Dec 10 18:43:53 CET 2007
If I use a colored graphic in, e.g., PNG format and the color model is RGB, how does pdftex handle the color model question? Some printers require the CMYK model for everything.
I know about converting the color model via ImageMagick prior to inclusion, but what then is the preferred output format? TIFF is not an option, of course.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519883.54/warc/CC-MAIN-20210120023125-20210120053125-00121.warc.gz
|
CC-MAIN-2021-04
| 499
| 10
|
https://bugs.gentoo.org/show_bug.cgi?id=6463
|
code
|
The exim-4.04-r1 ebuild has use flags that enable the LOOKUP_LDAP &
LOOKUP_MYSQL modules. My virtual host exim config uses LOOKUP_DSEARCH.
It would be nice if there was a use flag for LOOKUP_DSEARCH (and the rest of
the LOOKUP_* modules, I suppose).
Added support for LOOKUP_DSEARCH to the ebuild. This will not be activated via
a use variable as it is a useful function to have regardless of system
I will check and see if there are any other relevant LOOKUP variables over the
next day or two. Upon finishing the LOOKUP changes I will release a -r2 and
close the bug report.
added LOOKUP_CDB ... more coming :)
Many changes have been made to the exim ebuild. Can you please look over the
ebuild and tell me if you notice anything missing?
exim-4.05 will be released into portage soon after the portage freeze is
I've verified that the 4.04-r2 ebuild fixes the bug okay. Thanks.
If you want me to test the 4.05 ebuild, then you'll have to add it to the
portage tree (perhaps masked), so that I can see it. I don't use CVS.
I have added exim-4.05 to portage. It is currently masked in
Let me know how it works for you!
I've verified that exim-4.05 works too.
Lookups have been changed from mta-* to mysql, ssl, and pgsql to match the system default USE variables. Thanks for testing the lookup support.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585441.99/warc/CC-MAIN-20211021195527-20211021225527-00189.warc.gz
|
CC-MAIN-2021-43
| 1,310
| 21
|
https://indiegamingpodcast.podbean.com/e/episode-132-interview-with-iron-gate-studios-creators-of-valheim/
|
code
|
On this week's episode I talk to the developer of one of the most talked-about games this year, Valheim. Valheim is a brutal exploration and survival game for 1-10 players, set in a procedurally generated purgatory inspired by Viking culture. Battle, build, and conquer your way to a saga worthy of Odin's patronage!
Donate to The Show: https://ko-fi.com/indiegamingpodcast
Get the Game: https://store.steampowered.com/app/892970/Valheim/
Valheim Twitter: https://twitter.com/Valheimgame
Follow/Contact me on the following:
E-Mail Me: firstname.lastname@example.org
We are really excited to be chosen among the top indie podcasts and top indie game development podcasts to listen to.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662573189.78/warc/CC-MAIN-20220524173011-20220524203011-00001.warc.gz
|
CC-MAIN-2022-21
| 679
| 7
|
https://lists.debian.org/debian-legal/2008/08/msg00078.html
|
code
|
Re: Is AGPLv3 DFSG-free?
* Arc Riley <firstname.lastname@example.org> [080823 14:31]:
> What was proposed was that every single user of the software would be
> required to host, on their own server and at their own expense, or even over
> the same net access through which remote access to the software is provided,
> a copy of the source code for every piece of AGPLv3 licensed software they
> wanted to use.
> What I am continually having to re-iterate in this thread is that this only
> applies to those who are running modified copies of code which is not
> already available online, that a free VCS solution is suitable, and it
> you're only required to share the source code with people you've already
> opted to allow remote access to your modified version.
So everything is fine until someone wants to modify the software.
But if they do, you say they are no longer allowed to run it without fulfilling some restrictions. I fail to see how anyone can consider that free.
Bernhard R. Link
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363327.64/warc/CC-MAIN-20211206224536-20211207014536-00152.warc.gz
|
CC-MAIN-2021-49
| 996
| 16
|
http://www.fact-index.com/n/n_/n_tuple.html
|
code
|
The n can be replaced by a specific number, thus one can for example say a quaternion can be represented as a 4-tuple. A 2-tuple is an ordered pair; a 3-tuple is a triple or triplet; further constructions are possible, such as octuple, but many mathematicians find it quicker to write "8-tuple", even if still pronouncing "octuple".
A general n-tuple satisfies: (a1, a2, ..., an) = (b1, b2, ..., bn) iff a1 = b1, a2 = b2, and so on.
(However, what happens when ai equals ai+1?)
Many computer programming languages support tuples as a data type, either for objects of fixed types, or as a collection of objects of any type.
The Lisp programming language originally used the ordered pair abstraction to create all of its n-tuple and list structures, similarly to the inductive definition above.
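Lisp's construction of lists from ordered pairs, mentioned above, can be mimicked with Python 2-tuples. A hypothetical illustration (Python's own tuples also satisfy the component-wise equality defined earlier):

```python
def cons(head, tail):
    """Build an ordered pair, Lisp-style."""
    return (head, tail)

def to_list(cell):
    """Flatten a chain of pairs terminated by None into a Python list."""
    out = []
    while cell is not None:
        head, cell = cell
        out.append(head)
    return out

# The list (1 2 3) built as nested pairs, like Lisp's (cons 1 (cons 2 (cons 3 nil))):
triple = cons(1, cons(2, cons(3, None)))
```

Each pair is itself a 2-tuple, so the inductive definition of an n-tuple reappears directly in the data structure.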
In the field of relational databases, a tuple is a row in a relation (table). An n-tuple is a row with n columns. It is important not to confuse an n-tuple with n tuples.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00604.warc.gz
|
CC-MAIN-2022-27
| 944
| 6
|
http://sourceforge.net/mailarchive/forum.php?thread_name=61957B071FF421419E567A28A45C7FE5029D2420%40mailbox.nameconnector.com&forum_name=webware-devel
|
code
|
Ian Bicking wrote:
> OK, I didn't actually figure out how to kill wedged threads, but I did
> figure out how to (more or less) cleanly exit the process when there's
> wedged threads. The code is in
> http://svn.colorstudy.com/home/ianb/thread_die.py -- mostly it calls
> os._exit, which exits unconditionally, and it has some guards so that
> it tries to exit as cleanly as possible.
> We still don't detect wedged threads, but at least this gives us
> something to do if we did detect those threads. It probably wouldn't
> be hard to add to ThreadedAppServer -- though there's some added
> infrastructure to figure out what to do when you do find a wedged
> thread. E.g., maybe it should just send an email, or it should exit
> (code 3), or do both.
Detecting wedged threads would definitely be a good feature. You'd have to
have a config setting that determined how long a thread must be unresponsive
before it is considered wedged.
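The config-driven detection described above can be sketched with a heartbeat table: each worker thread updates its timestamp, and a monitor flags any thread whose heartbeat is older than the configured threshold. A hypothetical sketch, not Webware's actual ThreadedAppServer code:

```python
import time

class WedgeDetector:
    """Flag threads as wedged when they stop reporting heartbeats."""

    def __init__(self, max_silence_seconds):
        self.max_silence = max_silence_seconds   # the config setting discussed above
        self.heartbeats = {}

    def beat(self, thread_name, now=None):
        self.heartbeats[thread_name] = time.time() if now is None else now

    def wedged(self, now=None):
        now = time.time() if now is None else now
        return [name for name, last in self.heartbeats.items()
                if now - last > self.max_silence]

# Two workers; one stops beating and is flagged once the threshold passes.
det = WedgeDetector(max_silence_seconds=30)
det.beat("worker-1", now=100)
det.beat("worker-2", now=125)
print(det.wedged(now=140))   # worker-1 is 40s silent, worker-2 only 15s
```

On detection, the server could then take any of the responses discussed: email, exit via os._exit, or abandon the thread and grow the pool.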
One possible response other than exiting the process is to simply "abandon"
the thread and increase the thread pool by one. If the thread is wedged
consuming zero CPU, that's not such a bad option. If it's busy-waiting,
consuming large amounts of CPU, that's not such a great choice, but neither
is exiting the process...
On Windows there's also the Win32 function TerminateThread(), but that is
considered very dangerous and only to be used in extreme circumstances.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163828351/warc/CC-MAIN-20131204133028-00088-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 1,402
| 23
|
https://emacs.stackexchange.com/questions/69579/how-detect-when-lsp-mode-has-loaded-a-files-references
|
code
|
Emacs doesn’t index the file; the LSP server does (in this case, clangd). The LSP protocol supports asynchronous notifications when results are available, so in principle clangd should be accepting your request and replying with a token that will identify the results when they become available.
lsp-mode would then process the results as normal once it receives a notification with that token.
In the comments, you reported seeing these log messages:
LSP :: Connected to [clangd:57835/starting].
Debugger entered--Lisp error: (error "The connected server(s) does not support method te...")
error("The connected server(s) does not support method %s..." "textDocument/definition")
lsp--send-request-async((:jsonrpc "2.0" :method "textDocument/definition" :params
This indicates that clangd has not advertised that it supports looking up definitions. Since it then starts working a few seconds later, this sounds like a bug in clangd. Or maybe clangd just doesn't support this at all, and you're falling back to some other xref backend, in which case the answer to your question has nothing to do with lsp-mode at all. You should check the value of
Edit: if the problem is simply that your batch process is running before the connection to the LSP server is fully established, then use the lsp-after-initialize-hook hook to start it instead.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571745.28/warc/CC-MAIN-20220812170436-20220812200436-00322.warc.gz
|
CC-MAIN-2022-33
| 1,349
| 11
|
https://sada.com/insights/blog/authmagic-now-for-google-apps/
|
code
|
The DLL filter captures the user's password and securely transmits it to the web service. Install the DLL filter on all domain controllers.
The web service synchronizes the user's password with Google Apps. You can install the web service on either a hosted or an on-premise Windows web server.
SADA’s AuthMagic for Google Apps will be both simple and lightweight given there are only two modules to deploy. The DLL Filter and web service will also come with full logging functionality as well as the convenience of hosting the web service anywhere. As is to be expected from SADA, security is always a top priority. AuthMagic for Google Apps will be secure by design and the passwords are never transmitted in clear text. Those who purchase this tool can control which users will get their passwords synchronized through filtering rules.
Lastly, in true SADA fashion this tool is made to serve all of our clients, hence the affordable price point. For more information, please visit our CloudToolKit Website.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00506.warc.gz
|
CC-MAIN-2023-50
| 997
| 4
|
http://blog.jetbrains.com/idea/author/oleg_s/
|
code
|
Want IntelliJ IDEA T-Shirt?
Visit our Apparel Store!
Author Archives: Oleg Shpynov
Hello guys, We know that many of you are keen on GitHub and use gists in your everyday work. We are excited to tell you that now you can share your code instantly from the IDE. Here is a small … Continue reading
Hello guys, We’ve already described the basic GitHub integration features in IntelliJ IDEA before. Here comes more advanced stuff. We’ve made lots of improvements since then, but the main new thing to talk about is GitHub rebase support. GitHub provides … Continue reading
Hello guys, We are pretty sure that many of you have heard about Github and some are probably using it for your projects. We are excited to reveal the coming GitHub support in IntelliJ IDEA 10 and in other IntelliJ … Continue reading
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450762.4/warc/CC-MAIN-20151124205410-00107-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 810
| 6
|
http://www.cloudynights.com/topic/423043-safe-remote-operation/
|
code
|
Safe Remote Operation
Posted 21 June 2013 - 01:08 AM
If the runaway is the primary fault, I am thinking of putting an automatic power switch in the system. The power supply current during tracking should be fairly small compared to a slew or runaway current. For this a simple current monitor with a high limit is all that is needed. A bypass would be required whenever the scope needs to slew, but the monitor could be disabled any time an operator is present. Does anyone know of other conditions I should monitor to protect the scope and drives?
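The monitor described above reduces to a simple comparator with a slew bypass and an operator override. A hypothetical sketch of that logic, with made-up current values:

```python
def should_cut_power(current_amps, tracking_limit_amps, slew_bypass=False, operator_present=False):
    """Trip the automatic power switch when current exceeds the tracking limit,
    unless a slew is expected (bypass) or an operator has disabled the monitor."""
    if operator_present or slew_bypass:
        return False
    return current_amps > tracking_limit_amps

# Tracking draw is small; a runaway pulls slew-level current and trips the switch.
print(should_cut_power(2.5, tracking_limit_amps=0.5))                     # runaway: True
print(should_cut_power(2.5, tracking_limit_amps=0.5, slew_bypass=True))   # expected slew
```

In hardware this would be a current-sense shunt feeding a comparator that drives a relay; adding a short time delay before tripping would avoid nuisance cuts on brief guiding corrections.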
Posted 21 June 2013 - 01:53 AM
All modern Astro-Physics mounts are optimized for remote operation. Additionally, the Astro-Physics 1100GTO and 1600GTO mounts can be had with absolute encoders but AP mounts are not cheap.
The Software Bisque Paramounts are also optimized for remote operation but are not cheap either.
The Celestron CGE has the needed switches and is about 1/3 the cost of an AP or SB mount. That might be an option.
Some people have added limit switches to LX200GPS scopes. I don't have any links to share at the moment. You could always work up your own.
If you were to set up the LX200GPS worm block play-limit-screws so the mount axis motor would stall instead of cause the worm teeth to skip over each other, the CPU would detect the servo motor stall and stop the motor and report an error. Also set the clutches tight enough to make the motor stall instead of slip the clutch.
This problem/challenge exists regardless of the motor type involved. The cure is having some kind of absolute axis position sensing ability. Either via limit switches or better-yet, by absolute encoders.
One of the best features of the modern Astro-Physics mounts (Mach1, 900GTO, 1100GTO, 1200GTO, 1600GTO and 3600GTO) is that they keep their sky-position awareness regardless of how they were powered down. When they are powered back up, they get time from an internal real-time-clock and they know exactly where the mount was pointing when it was shut down. They have a "brownout feature" where they detect the loss of the main 12VDC power and are able to save their current axis positions to NVRAM before the power drops to zero. Pretty-darn nice feature!
I hope this helps.
Posted 21 June 2013 - 09:21 AM
I do have video in the dome so that I can see what is going on, and I do have remote power management so that I can kill the power to the scope if it starts acting up, but I have never used it in the 6 years it has been set up.
Operating the scope remotely is a whole other deal. I use ASCOM as my LesveDome driver requires POTH to function properly. I have constant problems Parking and Unparking the scope in this configuration. Frequently I will simply use the "hand pad" in the Autostar software and then move the dome manually to where the scope is pointing.
I hope this gives you some help.
Posted 21 June 2013 - 01:50 PM
Are there any other failures other than a runaway that could cause damage?
You can shut down the system and forget to power down the mount. It will run until it bottoms out and keeps running. I did that once and got away with it. I have since reconfigured the system power so that the mount always powers down.
You have to plan for something going wrong eventually, no matter how many people tell you they have not had a problem.
I never had a runaway until I had a runaway. In my case I can see and hear what is going on in the observatory, and if all else fails I can power the entire setup down by simply unplugging two UPS units in my house.
Setting up a remote system is easy, setting up a reliable remote system isn't.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274994.48/warc/CC-MAIN-20160524002114-00051-ip-10-185-217-139.ec2.internal.warc.gz
|
CC-MAIN-2016-22
| 3,754
| 27
|
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jonasbn/release-0-11-0-of-spellcheck-github-action-4acj
|
code
|
Release 0.11.0 of GitHub Spellcheck Action has just been uploaded to the GitHub marketplace.
This release introduces experimental support for German, including Swiss and Austrian variants.
The reason for the experimental label is that the support might change in the future since the implementation is not completely satisfactory.
With release 0.11.0 of the action, English and German are supported. Aspell supports 51 additional languages/dictionaries, and enabling them all increases the size of the Docker image significantly - by 395 MB, to be exact:
jonasbn/github-action-spellcheck latest 1b331977ffa7 3 hours ago 559MB
jonasbn/github-action-spellcheck 0.9.0 1d2315be5e30 2 weeks ago 164MB
jonasbn/github-action-spellcheck 0.9.1 1d2315be5e30 2 weeks ago 164MB
The above numbers are based on an older release and are lifted from issue #35.
The first versions of the action built the Docker image dynamically on every run, which meant a long runtime. With release 0.5.0 the action was changed to use a pre-built Docker image served from DockerHub.
This decreased the run time, but introduced the issue of the image being immutable: dictionaries have to be present in the image in order to be used by the action.
The decision to include the German dictionary was based on the fact that it is the first addition of a language besides English; it is not assumed that requests for support of the additional 51 languages will be a realistic use case. Support for additional languages and dictionaries will be added if requested; meanwhile, a better implementation strategy is being sought.
So do not hesitate to request support for another language. Adding all 51 will probably be denied for now, but I would really love for this action to include all the languages if there is a need. A better implementation just has to be thought up and made.
Another issue is not directly related to this release, but its upload to DockerHub made me reflect.
When Docker images are becoming stale (no longer being downloaded), I tend to remove these from DockerHub.
Currently releases 0.7.0 and 0.8.0 have not been downloaded for a month, so I could remove them. I have previously removed 0.5.0 and 0.6.0 without any issues (that I know of). But there could be less active repositories on GitHub pointing to older versions of the action, and they will break when attempting to run.
Updating is recommended, but deleting the images still seems like bad practice.
I am going to freeze the deletion of older images from DockerHub for now and will investigate possible use on GitHub, since I know what to look for.
Any pointers, ideas etc. are most welcome.
- One option could be to keep images forever (but do I want to support them?)
- Implementation of a sunset policy for older images
Well, let me not keep you from enabling GitHub Spellcheck Action for your repositories and checking the spelling of your German and English documentation.
0.11.0 2021-02-19 feature release, update not required
- Added support for German spelling:
lang: de, including Swiss and Austrian dictionaries, addressing issue #35 via PR #36. This is experimental and will need further investigation. Aspell supports 53 different dictionaries, and supporting them all increases the Docker image size significantly, so dynamic loading of dictionaries has to be investigated further without increasing build time to a point where a pre-built Docker image is no longer feasible
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571097.39/warc/CC-MAIN-20220810010059-20220810040059-00627.warc.gz
|
CC-MAIN-2022-33
| 3,411
| 22
|
http://microwindows.org/ViewMLDesign.html
|
code
|
The ViewML Project
The Design and Evolution of the ViewML Internet Browser for Embedded Linux
By Gregory Haerr
CEO, Century Software Embedded Technologies
The embedded Linux market is rapidly expanding, but poses unique challenges: while being driven primarily by the open source software movement, embedded applications typically run in a resource-restricted environment. Most open source software development has occurred for the desktop, where large hard drives and hundreds of megabytes of RAM abound. Additionally, embedded systems aren't frequently rebooted or field-upgraded, so the maintenance and quality of the software are big concerns.
Leveraging the last 7 years of desktop software advances, while keeping RAM and ROM usage in check, are key issues in any embedded design.
The evolution of the desktop Internet browser has grown to be quite a family, with over 20 browsers available in some form or another. Why design another browser? After looking at almost all of the browsers available, with the intent of selecting one best suited for the needs of embedded environments, we found that there wasn't a single package that would work. Either the browser was huge, like Netscape's Mozilla, and would never run on most embedded systems, or too small, with very incomplete HTML parsing. So we decided to design a new browser, one that was specifically targeted at the needs of the embedded Linux community.
The initial design goals for the project were:
- Create the smallest browser possible, but still retain 100% standards compliance for HTML parsing. The browser would be used in many applications, from embedded-device documentation display to Internet appliances and set-top boxes. We had to make sure that the browser always displayed pages correctly.
- Use available open source code for the HTML parsing and display engine. We didn't want to get into the business of writing an HTML engine from scratch, which is the mistake made with most smaller browser implementations. It takes a lot of knowledge and experience to display all the HTML language quirks correctly, especially since so much HTML is still written by hand.
- Use the selected HTML widget code as-is. We didn't want to change any of the core HTML display engine code, even though it is open source. This bought us two major benefits: the ability to upgrade the HTML display capabilities as the original parsing engine is enhanced by HTML experts, and the assurance that no bugs would be introduced directly in the core display routines, keeping the quality high.
- Use the Fast Light Tool Kit (FLTK) applications framework for the user interface. FLTK is available at www.fltk.org, and provides a set of user-interface widgets ideally suited for small environments.
- Run on both Microwindows and the X Window System. In order to gain large acceptance, the browser would need to run on the standard X Window System as well as the ideally suited newer Microwindows graphical windowing environment (www.microwindows.org). In addition, we wanted to make sure that the selection of either windowing system was seamlessly integrated into the software design, and didn't adversely affect the architecture.
The first major decision was selecting the open source HTML parsing and display engine. We chose the KDE 1.0 HTML widget from the KDE desktop's kfm file manager. (KDE is available at www.kde.org.) This naturally raised many questions, such as:
Why not KDEís newer v2.0 Konqueror widget?
QT is HUGE! (and not available on Microwindows)
What about Mozilla and its Gecko engine?
The KDE 2 widget was, at the time coding started, far too feature-lean and unstable to use in a working project. It is much better now, but still far from proven. The KDE 1.0 widget has been out for over a year. The KDE 2 widget is approximately four times larger than its 1.x counterpart. We thought that for the first version the feature/size tradeoff was worth it, especially since the design allows the newer widget to be dropped in after its development and quality solidify.
The KDE 1.0 widget's use of the QT widget set was by far the largest issue when considering which widget to use. (QT is available from www.trolltech.com.) However, it presented itself as a logical choice for use in this project for several reasons:
QT, while not available on Microwindows, was coded in a style that permitted the easy replacement of classes with re-implemented versions running on top of another toolkit that was available on Microwindows and X. This reduced the overall size of the QT API (as we didn't need all the classes) and allowed its use.
A free version of QT was available as a reference code base (Harmony). While no code from this project was actually used, it was useful to examine another implementation of the API.
The only widget set that runs on both Microwindows and X is currently FLTK. This toolkit is also coded in C++ with some similar concepts present in its design. This permitted the relatively easy integration of the QT API with an FLTK backend.
Mozilla was considered briefly for use, simply because of the huge movement behind it. However, there were many objections that prevented it being a serious contender: Mozilla is huge. The GTK+ widget version of Mozilla (without mail, news, etc.) weighs in at a hefty 12 MB without loading a page. That is six times larger than the current ViewML browser. The GTK+ widget set is also large, at least 2MB compared to 100k for FLTK.
After finalizing the selection of the core display engine, we created a layered software architecture that strictly defined each of the browser's components, and exactly what they would do. The layered architecture was required in order to meet the design goal of leaving the display engine code untouched. We also had to define a number of new modules, with the idea that each could be replaced if a smaller module was created, or required changes as the result of the graphical windowing system being used.
Block Diagram of the ViewML Browser
Following is a brief description of each of the modules:
ViewML Browser Application Layer
This thin layer is written entirely in the C++ FLTK applications framework, and provides the basic graphical user interface layout. We tried to keep this layer thin so that applications engineers can easily modify the ViewML browser for custom suited embedded environments without having to require much knowledge of the whole browser. In some embedded environments, there might not be a user interface at all, but instead just a full-screen browser page displayed. This layer also deals with all network and local file access.
The World Wide Web Consortium's WWWLib library was chosen to perform all asynchronous network I/O and HTTP get functions, as it was easy to use. Ultimately, we feel this library is larger than is required and will probably be rewritten in the future. For now, however, it allowed us to get the initial browser version functional quickly, without having to concentrate on this specialized area.
These two modules comprise the original unmodified KDE 1.0 HTML Widget code. This unmodified source code is called from above by the user interface applications layer and thinks it's talking to a QT applications framework below. The KHTML Widget handles all the HTML parsing, drawing and basic layout. It does not directly handle scrolling or frames; it delegates those tasks to the KHTML View. The KHTML View is the most fully-featured widget in ViewML. This is a QT-based widget that contains the KHTML Widget. KHTML View manages one or more KHTML Widgets, and also implements scrolling and HTML frames.
QT Compatibility Layer
This layer provides the "glue" that interfaces the unmodifed HTML Widget with the FLTK applications framework, rather than the QT framework. The C++ QT classes were rewritten in this layer, keeping the same public interfaces. These classes include graphical widgets (edit controls, buttons, etc.), collection and string classes, and general functional classes that implement some particular QT feature (such as signals).
For all graphical classes, these were implemented using the functionality provided by FLTK. This allowed the relatively easy implementation of all standard controls and most drawing functions. However, the non-standard QT mechanism of signals, which are used for inter-widget communication, had to be coded from scratch.
For all the collection and string classes, these were implemented on top of the Standard C++ Library. These classes include stacks, lists, dictionaries (hash tables), and the always-present string class. These classes were fairly standard, with the exception of the novel auto-deletion mechanism QT uses in its collection classes.
IMLIB Image Library
For images, IMLIB from the GNOME project (www.gnome.org) was used for the X Window System. This allowed the implementation of the QT style of images, which includes the ability to auto-detect the image type, auto-scaling of the image, and displaying images on the screen. There are several disadvantages to this library, such as size, but the main objection is that it's unavailable for Microwindows. For the Microwindows environment, we chose to add graphics image support directly into Microwindows, which worked out well because we kept the size quite small and have allowed for additional image decoders to be easily added.
FLTK Applications Framework
Two different versions of the FLTK applications framework are used, depending on the windowing system used. Standard versions of FLTK include support for Win32 and X. Folks from the Microwindows project, as well as ourselves, ported FLTK to the Nano-X API available in Microwindows. This support allows client/server interaction with the Microwindows server, just like the Xlib model. FLTK was a great choice, since both FLTK and Microwindows support the X Window System. This allows the ViewML browser to be debugged or enhanced on the Linux desktop, using either the X Window System directly with FLTK, or running the Microwindows server on top of X. In this way, the exact characteristics of the target environment, whether running Microwindows or X, can be emulated. The Microwindows system allows the exact display characteristics of the target device to be emulated on the desktop, which allows designers to model a gray scale target on a color desktop, for instance. We also like the idea of being able to run almost the identical code paths on the desktop as the target device, which greatly improves quality control.
The ViewML Project has produced a high-quality web browser in a short amount of time, directly targeting the embedded Linux environment. By including core open source components we've been able to use a high-quality display engine while keeping the overall RAM and ROM requirements quite low. Currently, the ViewML browser runs in about 2MB of RAM while having a codefile size of around 800k. Combined with Microwindows, the entire environment can run in less than 2.5MB RAM, which allows its use on most 32-bit embedded Linux systems running graphical displays. By placing the entire project into open source, we feel that other contributors will get involved, and ultimately, the problem of having a high-quality web browser for the embedded Linux environment will be solved.
Century Software's web site is embedded.centurysoftware.com
The ViewML Project's web site is www.viewml.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652494.25/warc/CC-MAIN-20230606082037-20230606112037-00718.warc.gz
|
CC-MAIN-2023-23
| 11,415
| 37
|
https://forums.parallax.com/discussion/comment/1501122/
|
code
|
Hi all. I need my P1 to switch a load on and off, and I thought I'd try using a mosfet for a change. So I don't know from mosfets, but I'm using this as my guide (from learningaboutelectronics.com/Articles/N-channel-MOSFET-switch-circuit.php).
I figure I can replace the 3V gate input with a Prop pin and replace the 6V buzzer with my 5V device. Does that sound reasonable so far?
Now the aforementioned web page recommends the 2N7000 mosfet, which IIRTDC has a maximum drain current of 200mA. Unfortunately, the circuit I want to control draws ~250mA, so the 2N7000 is out. The related NDS7002A could work, but I'm only finding surface mount packages.
I'm hoping someone can help me find something like a 2N7000 in a TO-92 that can switch maybe 300mA to be on the safe side.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210616.36/warc/CC-MAIN-20200923081833-20200923111833-00129.warc.gz
|
CC-MAIN-2020-40
| 773
| 4
|
http://www.planetary.org/blogs/index.html?keywords=commercial-spaceflight
|
code
|
On February 6, 2018, I found myself shoulder to shoulder with two of my heroes: Bill Nye on the left, Buzz Aldrin on the right. Our eyes were fixed on the first vertical Falcon Heavy rocket. Figuring the world's most powerful rocket might send me flying backwards once the countdown hit zero, I gripped the railing so tightly I started to lose the feeling in my fingertips.
SpaceX's Falcon Heavy is not just for big payloads, it can also throw light things into space very fast. And that has significant implications for the exploration of distant destinations in our outer solar system—particularly the ocean moons of the giant planets.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578577686.60/warc/CC-MAIN-20190422175312-20190422201312-00493.warc.gz
|
CC-MAIN-2019-18
| 639
| 2
|
https://tbtech.info/15000/acpi-processor-aggregator-treiber-windows-21/
|
code
|
ACPI Processor Aggregator Driver
These actions can be categorized as either passive cooling or active cooling. For more information about thermal management, see the document titled "Thermal Management in Windows" on the Microsoft website.
ACPI thermal zones: A thermal zone is defined to include child objects that do the following: identify the devices that are contained in the thermal zone; specify thermal thresholds at which actions must be taken; describe the thermal zone's passive cooling behavior; report the thermal zone's temperature; and, optionally, receive notifications of additional temperature threshold crossings. C3 (often known as Sleep) is a state where the processor does not need to keep its cache coherent, but maintains other state.
Additional states are defined by manufacturers for some processors. For example, Intel's platform has states up to C10, where it distinguishes core states and package states.
These states are implementation-dependent, though P0 is always the highest-performance state, with P1 to Pn being successively lower-performance states, up to an implementation-specific limit of n. Function Fixed Hardware interfaces are platform-specific features, provided by platform manufacturers for the purposes of performance and failure recovery.
Intel-based PCs have a fixed function interface defined by Intel, which provides a set of core functionality that reduces an ACPI-compliant system's need for full driver stacks for providing basic functionality during boot time or in the case of major system failure. The Root System Description Pointer is located in a platform-dependent manner, and describes the rest of the tables.
The ACPI processor aggregator driver [Posted October 7, by corbet] Patches merged into the mainline carry a number of tags to indicate who wrote them, who reviewed them, and so on. A certain commit merged for 2. The story goes something like this.
ACPI provides a mechanism by which it can ask the system to make processors go idle in emergency situations; these can include power problems or an overheating system. The ACPI folks had originally proposed putting some hacks into the scheduler to implement this functionality. These changes, it seems, were little loved; that was the patch that Peter Zijlstra blocked outright.
So Shaohua Li went back and implemented this functionality as a driver instead. If the ACPI hardware starts sounding a red alert, this driver will create a top-priority realtime thread and bind it to the CPU that is to be idled.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00549.warc.gz
|
CC-MAIN-2021-21
| 3,262
| 13
|
http://www.metalguitarist.org/forum/guitar-instrument-discussion/44458-ngd.html
|
code
|
I have picked up a few cool guitars over the last few weeks in hopes of modifying them and teaching my self a bit more about guitar tech.
The first is this RG7420. I swapped the pups to DiMarzio D Activators and rewired the volume knob to be in place of the tone knob (so my fingers won't hit it when picking).
I'll post the other three guitars in a little bit
Swish! No-one needs that many cleaning wipes though
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00477-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 411
| 4
|
http://thread-of-fire.tumblr.com/
|
code
|
Turning Mobile Phones into 3D Scanners
Demonstration from the Computer Vision and Geometry Lab (part of ETH Zurich) of an unreleased app which can capture 3D data with a mobile phone camera without the aid of a depth sensor. Video embedded below:
There is no documentation currently available online, but it appears that data generation occurs both in-phone and with the assistance of data-connection, as well as being exportable.
those paintable models, like Warhammer 40,000 and stuff, are probably going to be way less expensive in the future.
perhaps many other toys and objects as well.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164611566/warc/CC-MAIN-20131204134331-00096-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 590
| 5
|
http://lists.puremagic.com/pipermail/digitalmars-d/2020-January/301018.html
|
code
|
Feature request: __traits(canInstantiate), like __traits(compiles) but without suppressing syntax errors
H. S. Teoh
hsteoh at quickfur.ath.cx
Fri Jan 17 18:44:37 UTC 2020
On Fri, Jan 17, 2020 at 11:01:29AM -0500, Steven Schveighoffer via Digitalmars-d wrote:
> Yes, concepts would be useful, but also would be limited for how D
> does duck typing. D's constraints are very much like concepts, but
> without having to declare everything up front for use in the function.
IMO, not declaring everything up front for use is an anti-pattern.
Basically, by accepting a template parameter T and performing operations
on it, you're assuming the T supports said operations. If T doesn't
support such operations, you shouldn't be receiving it in the first
place. By not specifying what operations you expect T to support in your
sig constraints, you're basically declaring that you accept *any* T,
including one that supports no operations, yet you proceed to operate on
it, which ought to be an error (not just a silent failure to
instantiate).
IOW, you're actually making assumptions on what T supports, yet you
never declared such assumptions. Conversely, if you declare no
assumptions on T, then neither should you be allowed to operate on it. A
parameter T received without any assumptions should be treated as an
opaque object that allows *no* operations.
> However, the real problem is things like typos that cause the function
> not to compile for reasons other than the intended failures.
> For example:
> T foo(T)(T x)
> {
>     return X + 5; // typo on parameter name
> }
> This will NEVER compile, because X will never exist. The intention was
> for it to compile with types that support addition to integers (i.e.
> it's EXPECTED to fail for strings), but instead it fails for all types.
See, this is why this kind of code is bad. You basically wish to accept
all T that support +, but by not declaring it as such, the compiler has
no way to know what you intended. So it assumes that you somehow expect
X to spring into existence given some magic value of T, and when that
never happens, the compiler always skips over foo. Had you declared
that you expect T to support +, then the compiler would know to
instantiate foo when T==int, then it would have caught the typo as a
compile error. As a bonus, your code would even accept custom types
that support +, without any further effort on your part.
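That rewrite, with the assumption declared up front, might look like this (a hedged sketch, not code from the thread; the constraint form is one of several possible):

```d
// Declare the assumption: T must support addition with an int.
T foo(T)(T x)
    if (is(typeof(T.init + 5) : T))
{
    return x + 5;   // a typo such as 'X' here now surfaces as a compile
                    // error the first time foo!int is instantiated
}

unittest
{
    assert(foo(10) == 15);                          // int satisfies the constraint
    static assert(!__traits(compiles, foo("abc"))); // strings are rejected cleanly
}
```

With the constraint in place, a non-matching type fails overload resolution with a clear message instead of being silently gagged.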
Another side effect of not declaring assumptions up-front is that the
compiler cannot produce better error messages. You're stuck with error
gagging that hides typos, because the compiler simply doesn't have
enough information to know what was intended. What if you intended for
foo never to compile? Without declaring any assumptions the compiler
couldn't know better.
> But it's hard for the compiler to deduce intention from such typos. It
> would be great if the compiler could figure out this type of error,
> but I don't think it can.
It can if there was a requirement that template arguments are not
allowed to be operated on unless their ability to support the operation
is either declared in a sig constraint, or else tested with an
appropriate static if condition.
> The only "fix" really is to judiciously unittest with the calls you
> expect to work. And that generally doesn't happen (we are a lazy
Exactly, that's why you have to force users to declare what assumptions
they're making on the template parameters. It's more "convenient" to
just take the easy way and allow everything without checking, but it
only leads to pain and more pain down the road.
> Perhaps, an "expected" clause or something to ensure it compiles for
> the expected types like: expected(T == int), which wouldn't unittest
> anything, just make sure it can compile for something that is expected.
Just do this instead:
// will generate compile error if instantiation fails:
alias A = myTemplate!int;
He who is everywhere is nowhere.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00048.warc.gz
|
CC-MAIN-2022-27
| 3,928
| 65
|
http://www.tomshardware.com/forum/362790-28-computer-restarts-playing-games
|
code
|
It all started one morning when I tried to turn on the computer and it just wouldn't turn on, no power at all inside. So I opened it up, disconnected the cable going into the motherboard from the PSU, and tried to start only the PSU by "hotwiring" the outputs according to a chart I found on the internet. The PSU didn't start, so I brought it in to the local computer shop and told them to do a hardware check and replace the PSU if it's faulty. A couple of hours later they called, said that it's the PSU, and asked if they should order a new one, so I said yes; a couple of days later, and 159€ out of my wallet, it was fixed.
This is when the real problems started, I plugged in the computer again and started it, everything started ok except two of the case fans where not spinning and the led lights where not on, other than that it seemed to be fine, so I tought it's probably nothing, they just forgot to plug them in, so I started up a game and then, about 5 mins into the game it restarted, no bluescreens, no errors on startup... I was totally confused and tried running a game again (BF3) and the same thing happened.
I called the shop, told them the symptoms and brought it there; after a few weeks(!) it was done, at no cost since I got it on their installation warranty. I got home, and now the fans and LEDs are working, but the same thing happened again: it restarted without a warning. I searched the web and flicked through countless forums, and one day it was leaning towards a heating problem, the next day a driver problem, then maybe an HDD problem. There were a few Kernel-Power ID 41 (or something) events in the Windows logs. And now I also got a few bluescreens when turning off the computer, stating that if I had recently installed new hardware I have to check it or something. I brought it back AGAIN and they reinstalled Windows completely and said it works, and it did... for a couple of days. Now the problem is back; it happened 30 mins ago.
What is going on? Could it be the PSU even though it's only a few weeks old? I don't think it's overheating: the GPU is around 70-78C on load and the CPU is around 57-63C on load... The hard drive is 30C.
Windows 7 Pro 64-bit
M4A77TD Pro ASUS motherboard
AMD Phenom II X4 955 Processor (3.2GHz)
NVidia GeForce GTX 260 Graphics card
Nexus RX8500 850W PSU
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444385.33/warc/CC-MAIN-20141017005724-00018-ip-10-16-133-185.ec2.internal.warc.gz
|
CC-MAIN-2014-42
| 2,317
| 10
|
http://www.accessforums.net/showthread.php?t=25943&s=a3beb5aed90c0533352c6c23118c2509
|
code
|
I've recently instigated a new timesheets DB for our company, many of whom work remotely, accessing files and working on an RDP. The Timesheets DB is accessed through the RDP. One of my colleagues accesses the RDP through his MacBook (the rest of us all use Windows), and he can use the system fine and all his records are posted OK. However, I am unable to see his entries when I search for them through my system. He can see other records entered through Windows-based systems.
It's almost as if there's a time lag?
I'm wondering if I'll be able to see his entries tomorrow?
Its really weird?
Any ideas please?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121216.64/warc/CC-MAIN-20170423031201-00391-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 616
| 5
|
https://coderanch.com/t/249350/certification/Abstract-classes-Interfaces
|
code
|
1) If an abstract class has more than one abstract method, when a subclass extends that class, does the subclass have to implement all the abstract methods? Can I extend an abstract class and not implement any abstract method?
2) If I implement an interface in my class, can I override a method in the interface? If I can, is there anyway I can get the original method implementation in my overriding class. Like the use of "super" in class methods to refer to the super class method?
I can help you out on the first question: you have to implement all the abstract methods from the abstract class if you are writing the first concrete class. If instead you have an abstract class extending another abstract class, the subclass need not implement anything, but the first concrete class below it must implement all the remaining abstract methods.
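Both rules, and an answer to the second question for Java 8 and later, can be sketched as follows (a hedged illustration; the class names are invented). An abstract subclass may stay abstract and implement nothing; the first concrete class must implement every remaining abstract method; and an implementing class can reach an interface's default implementation, the interface analogue of `super`, via `InterfaceName.super.method()`:

```java
abstract class Shape {
    abstract double area();
    abstract double perimeter();
}

// An abstract subclass may extend Shape without implementing anything.
abstract class Polygon extends Shape {
}

// The first concrete class must implement every remaining abstract method,
// or it will not compile.
class Rect extends Polygon {
    private final double w, h;
    Rect(double w, double h) { this.w = w; this.h = h; }
    @Override double area() { return w * h; }
    @Override double perimeter() { return 2 * (w + h); }
}

// Since Java 8, an interface method can carry a default body, and an
// overriding class can still call that original implementation.
interface Greeter {
    default String greet() { return "hello"; }
}

class LoudGreeter implements Greeter {
    @Override public String greet() {
        // Greeter.super.greet() retrieves the interface's default body.
        return Greeter.super.greet().toUpperCase();
    }
}
```

Note that in the original pre-Java-8 sense the answer to question 2 is "no": interface methods had no implementation, so there was nothing for `super` to reach.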
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100724.48/warc/CC-MAIN-20231208045320-20231208075320-00459.warc.gz
|
CC-MAIN-2023-50
| 790
| 3
|
https://outofcontrol.ca/p2
|
code
|
This tutorial will walk you through the steps to installing Subversion 1.8.14 on cPanel. You will end up with a command line version and the ability to access your SVN repositories via the browser securely.
I've just started using Oh My ZSH and found it to be really slow... but only when in a git repository directory, using a theme that displays git information (robbyrussell, for example). The fix is to…
I was pleasantly surprised at how easy it was to install Duply and Duplicity to enable S3 backups on my CentOS cPanel box. This is a bit rushed, so instructions are brief. If you notice anything wrong, please let me know in the comments and I fix it up.
I've been using Node, npm, bower and gulp in my local Vagrant VM under Ubuntu. However, as one of my current projects will be shuffled off to a cPanel CentOS server soon, I thought I would check how easy it is to install all this on the cPanel side. This mini-how-to assumes you are running the latest cPanel (WHM 11.42.0 (build 23) as of this writing) running on CentOS 6. Assuming you have this configuration, then you will already have git installed and running as required by npm in some instances.
Last night cPanel upgraded the version of MySQL on one of our servers to the latest MySQL 5.5. Unfortunately this results in over 4000 emails from that server, crying that various websites couldn't connect to the MySQL database. The error found in /var/log/mysqld.log was
InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
InnoDB: than specified in the .cnf file 0 26214400 bytes!
131218 23:06:21 [ERROR] Plugin 'InnoDB' init function returned error.
131218 23:06:21 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
131218 23:06:21 [ERROR] /usr/sbin/mysqld: unknown variable 'default-character-set=utf8'
131218 23:06:21 [ERROR] Aborting
If you're running a new CentOS 6 server, depending on how it was setup, you might be missing whois: yum install bind-utils jwhois
Solution to a discrepancy between free space as shown by `du` and `df`.
If you happen to be using the "User Access Manager" in your WordPress install, and you are using Permalinks, then you will need this fix. Without it, the redirect for logged out users won't work, and they'll end up on a 404 page.
No-frills tutorial on using the Majordomo mailing list app on a cPanel box while running Exim. I've used this on several boxes over the last five years without issue. The setup allows you to manage individual lists by domain. Each of your accounts can have its own Majordomo lists. There is no control panel for this setup, and all the editing takes place on the command line. Hope someone finds it useful.
This is a small tutorial or how-to on how to install node.js on a cpanel box. Note to all that setting up an EC2 instance on Amazon and getting Node.js up and running was WAY easier than doing this on a cPanel box.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267165261.94/warc/CC-MAIN-20180926140948-20180926161348-00365.warc.gz
|
CC-MAIN-2018-39
| 2,893
| 11
|
http://www.cedricmougne.com/microsoft-dynamics-crm/
|
code
|
As you know, Microsoft will deliver the new version of Dynamics CRM by the end of the year. The CRM2015 version is not yet available, but the release preview guide is already available to download. It is a good read and a good introduction to the new features of CRM2015.
Just a few days ago, I bought this excellent book by Marc Wolenik about CRM 2013. It was really a good read, and I discovered some tips about Dynamics CRM 2013 that I had missed before. I recommend this book if you want to discover the world of Dynamics CRM 2013 or just to update your knowledge of the latest version of Microsoft CRM.
In this video you will learn how to use the Microsoft Dynamics Marketing Connector and how to set it up for Microsoft Dynamics CRM Spring ’14.
List and details of the sessions dedicated to Microsoft Dynamics CRM 2011 at TechDays 2013, held 12-14 February in Paris.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103355949.26/warc/CC-MAIN-20220628050721-20220628080721-00396.warc.gz
|
CC-MAIN-2022-27
| 895
| 4
|
https://www.mrexcel.com/excel-tips/pivot-ranks-don-t-match-rank/
|
code
|
Pivot Ranks Don’t Match RANK()
January 24, 2023 - by Bill Jelen
Problem: I set up a pivot table and showed the values as a rank, using Rank Largest to Smallest. Why is the fourth product assigned a rank of #3?
Strategy: As if there is not enough controversy in the Excel ranking world, Excel came up with yet another way to handle ranking with pivot tables. The issue always centers around any ties and how the subsequent values are numbered.
Typically, if you have two values tied at #2, the next value would be assigned a rank of 4.
Starting in Excel 2010, the RANK.AVG function would assign the tied values a 2.5, and assign the next item a rank of 4.
Pivot tables do something different, assigning both of the tie values a 2, then going to #3 for the next item.
If you need one of the methods shown in E:G, plan on adding a calculation next to your pivot table instead of using the built-in rank.
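The three tie-handling schemes can be sketched in plain Python (an illustration only; the sample values are hypothetical, not the workbook from the article):

```python
def rank_desc(values, method):
    """Rank values largest-to-smallest with Excel-style tie handling.

    method: 'competition' (worksheet RANK), 'average' (RANK.AVG),
            or 'dense' (what pivot tables do)."""
    order = sorted(values, reverse=True)
    distinct = sorted(set(values), reverse=True)
    ranks = []
    for v in values:
        first = order.index(v) + 1        # rank of the first tied position
        count = order.count(v)            # how many values are tied
        if method == "competition":
            ranks.append(first)           # ties 2, 2 -> next item gets 4
        elif method == "average":
            ranks.append(first + (count - 1) / 2)  # ties get 2.5
        elif method == "dense":
            ranks.append(distinct.index(v) + 1)    # ties 2, 2 -> next gets 3
    return ranks

sales = [100, 90, 90, 80]
# competition: [1, 2, 2, 4]   average: [1, 2.5, 2.5, 4]   dense: [1, 2, 2, 3]
```

The dense scheme is why the fourth product shows rank #3 in the pivot table.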
This article is an excerpt from Power Excel With MrExcel
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00631.warc.gz
|
CC-MAIN-2023-06
| 950
| 10
|
https://nulledhub.net/blog/digital-marketing-institute/do-job-oriented-training-in-master-in-data-science-institute/
|
code
|
If you are looking for a job as a Data Analyst, you can apply at the Master in Data Science Institute in Gurgaon. This is one of the best masters in data science courses in Gurgaon that will help you get a decent salary. The institute has been offering courses in this field since 2021. This institute offers master-level courses for degrees starting with an associate to bachelor degree level. The course duration may differ from institute to institute. You need to check with the institute to understand the exact timings of the course.
The curriculum of the master in data science courses in Gurgaon covers both theoretical and practical aspects. You can expect to learn many new advanced techniques while you are in this program at the master in data science institute. There are various subjects to be covered under the course. The first two courses cover the basic concepts and principles of mathematics. You also get to learn advanced statistical concepts. The third course is about machine learning, which is quite helpful during your career as a Data Analyst because it helps you with advanced algorithms.
Most of the master in data science courses in Gurgaon are taught online so that they can be taken up by people from any part of the world. One of the major topics covered in the course is Artificial Intelligence. In India, this field is fast growing with good chances of job openings. Therefore, if you have the quality and expertise in this field, you can surely go for a better paying job in this field.
Take A Degree From A Master In Data Science Institute
Then you can work as a Research Analyst, Program Manager, or Principal Researcher. You can even become an Executive Director if you have more expertise and experience. With a master in data science, you can work in different types of organizations such as web, retail, health care, and others. You have the opportunity to be involved in a wide variety of tasks and develop your skills as you progress.
A master in data science course can be completed in three years, depending on the amount of education you get. Some of the subjects covered in the course include probability, statistics, algorithms, neural networks, distributed processing, supervised learning, natural language processing, human-computer interfaces, real-time data processing, optimization, and simulation. Other subjects may also be included in the course at the master in data science institute Techstack Institute. An MBA is also required for enrolling in a master in data science program. Other programs, such as master’s degrees, can also be done in double-degree programs where a PhD is also required.
After finishing your master in data science program, you will be able to perform a deep dive in your chosen field by enrolling into a master’s program. This will allow you to dig deeper into the subject matter and dig into your master’s level of expertise. Some of the master’s programs offered include the master’s of science in information systems, master in applied computer science, master in applied mathematical statistics from a master in data science institute, a master in computer science with a focus on artificial intelligence, and a master in business administration.
Upon graduation, you should be able to find a job in the field you have studied. It can be in a number of fields including pharmaceutical, finance, or any other industry that uses data and statistics. You can choose to work in government agencies or private companies. However, there are some limitations to the type of employment you can get if you have a master in data science degree. You cannot become an accountant, stockbroker, or a financial analyst because these types of positions require additional schooling and licensing than many of the entry-level positions. Some of these positions require additional professional certifications and licenses.
Once you have your master in data science degree from the master in data science institute, you may continue to work toward additional education and certification. There are master’s degrees in different concentrations. For example, those who have a master in chemical engineering can take classes towards a master in materials science and apply it to their career. Those who have master’s degrees in physics can pursue graduate courses in physics or apply it to computer science. Business administration students can take courses towards a master in business administration. Whatever your major, you should be able to find a number of master’s programs to suit your career goals.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.5/warc/CC-MAIN-20220123172206-20220123202206-00027.warc.gz
|
CC-MAIN-2022-05
| 4,594
| 9
|
https://chaoticgoodprogramming.wordpress.com/category/architecture/
|
code
|
Securing DevOps is an introduction to modern software security practices. It both suffers and succeeds from being technology- and tool-agnostic. By not picking any particular technology stack it will remain relevant for a long time; however, it is not a complete solution for anyone, since it gives you classes of tools to find but not a complete package for software security. If you need to start a software security program from zero, this lays out a framework to get started with.
While I’ve only been doing software security full time for a few months now, I feel like the identification of the practices to engage isn’t the hard part, it’s the specifics of the implementation where I feel I want additional guidance. I know I should be doing static analysis of the code as part of my CI pipeline, but I don’t know how to handle false positives in the pipeline or what is worth failing a build because of. I don’t know what sort of custom rules I should be implementing in the scanner for my technology stack.
The book did go further into detail on the subject of setting up a logging pipeline. It describes how to set up rules to look for logins from abnormal geographic locations and how to look for abnormal query strings. The described logging platform is nothing abnormal for a midsized web application; however, I don’t know if a small organization could have this level of infrastructure set up. The ELK stack, while open source, is not easy to hook up, and the Kibana portion requires a fair bit of customization and time to get everything together and working.
It feels as though we are missing a higher level of abstraction for dealing with these concerns. Perhaps it is reasonable that most software applications should have to go through this level of effort to get a ‘standard’ security setup for a web application. Even on the commercial tools side there seems to be a lack of complete solutions. Security information and event management (SIEM) tools try to provide this, but they each still require significant setup to get your logs in and teach the program how to interpret them. It feels like some of this could be accomplished by building more value into a web application firewall (WAF). WAFs were not fully endorsed by the book due to the author having had a bad experience with a bad configuration problem. Personally, I think a WAF seems necessary to protect against distributed denial of service style attacks.
Overall the book is an introductory to intermediate text, not the advanced practices I was looking for. If you’re bootstrapping an application security program this seems like a reasonable place to get started. If you’re trying to find new tactics for your established program, then you’ll probably be disappointed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000231.40/warc/CC-MAIN-20190626073946-20190626095946-00331.warc.gz
|
CC-MAIN-2019-26
| 2,791
| 5
|
https://bloqjane.com/mist/
|
code
|
Mist is an Ethereum wallet and decentralized application (dApp) browser that was initially developed as the official wallet for the Ethereum network. It was designed to provide users with a secure and user-friendly way to manage their Ethereum assets, interact with smart contracts, and access Ethereum-based decentralized applications.
Here are some key features and aspects of Mist:
Wallet Functionality: Mist serves as a digital wallet for Ethereum, allowing users to store, send, and receive Ether (ETH), the native cryptocurrency of the Ethereum blockchain. Users can create multiple accounts within Mist to manage their Ethereum holdings.
Smart Contract Interaction: One of the notable features of Mist is its ability to interact with Ethereum smart contracts. Users can deploy their own smart contracts or interact with existing ones directly from the Mist interface.
dApp Browser: Mist includes a built-in dApp browser, which allows users to access and use decentralized applications built on the Ethereum blockchain. These dApps cover a wide range of use cases, including finance, gaming, social networking, and more.
Security: Mist places a strong emphasis on security. It includes features like passphrase protection and encryption to safeguard users’ private keys and funds. Additionally, it provides a user-friendly interface for generating and storing Ethereum addresses securely.
Full Node: Mist can function as a full Ethereum node, meaning it downloads and synchronizes the entire Ethereum blockchain. This allows users to have a complete copy of the blockchain, enhancing their autonomy and security when interacting with the Ethereum network.
Open Source: Mist is an open-source project, which means its source code is publicly available for inspection and contributions from the community. This transparency helps ensure the security and integrity of the software.
Ethereum Wallet Backup: Users are encouraged to create backups of their Ethereum wallets in case of data loss or device failure. These backups are typically in the form of keystore files, which can be used to restore access to the wallet.
Development Status: It’s important to note that the development of Mist has slowed down, and it is no longer actively maintained by the Ethereum Foundation. Ethereum users and developers have shifted their focus to other wallets and dApp browsers, such as MetaMask and MyEtherWallet, which have gained popularity for their user-friendly interfaces and robust features.
While Mist played a significant role in the early Ethereum ecosystem, users are encouraged to explore other Ethereum wallet options based on their specific needs and preferences. It’s essential to choose a wallet that aligns with your security requirements and ease of use when managing Ethereum assets and interacting with the Ethereum network.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474746.1/warc/CC-MAIN-20240228211701-20240229001701-00492.warc.gz
|
CC-MAIN-2024-10
| 2,844
| 11
|
https://sub-amazon.icims.com/jobs/776113/software-development-engineer/job
|
code
|
Software Development Engineer - Amazon Web Services - Boston, Massachusetts
Do you want to shape the future of cloud storage? Are you passionate about building the next generation distributed file system? Do you want to grow and be part of a team that is building a service used by thousands of customers every day? Come join the Elastic File System (EFS) engineering team in the heart of Cambridge as we revolutionize the world of highly available, scalable file systems.
Elastic File System (EFS) is the newest AWS storage service poised to grow to hundreds of thousands of servers, exabytes of storage, and trillions of files; and we’re just getting started. EFS is a unique service that provides low-latency, shared file system access to tens of thousands of EC2 instances and on-premises datacenter applications. It is a distributed, highly-available, durable file storage service that is fully elastic, growing and shrinking as required. If you have the files, we have the storage!
Embark on a journey with us to build a distributed file storage service that can scale without limits. We need your passion, innovative ideas, and creativity to help take the service to new heights. This is an opportunity to shape the future of EFS. Our mission is to transform the way the world uses file storage. Listen to how we are enabling our customers to change the world - https://www.youtube.com/watch?v=hzialJerb5o
As a member of the EFS team, you will be a significant and autonomous contributor. You’re excited about rolling up your sleeves, implementing big ideas, and learning from those around you. You want the opportunity to grow your technical and professional skills while helping EFS and AWS grow. You solve complex problems, applying appropriate technologies and best practices. You are creative, responsible, and curious while working with others to move quickly in turning code into customer solutions. You relish the opportunity to dig into challenging operational issues and to help customers build the next generation of web applications. You’re somebody who knows how to be both productive and have fun with others.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204077.10/warc/CC-MAIN-20190325153323-20190325175323-00296.warc.gz
|
CC-MAIN-2019-13
| 2,135
| 5
|
http://www.aspmessageboard.com/showthread.php?160245-Jobs-and-Email-2nd-Post
|
code
|
Forgive my ignorance (and please don't tell me to read the FAQs). I just need a simple suggestion or tip.

I have a table with an expiration date field. It simply defines the date the record expires.

Suppose I have a regularly scheduled job (say - daily at 2am) that uses the stored procedure to check all records for ones that have expired.

I want to use SQL (MS SQL 2K) to send an email to the record owner to notify them to update the record. I know that SQL Mail can accomplish this, but my host does not offer this.

<ignorance>What other method(s) exist to cause an email to be sent without any user intervention? Others keep suggesting ASPEmail, JMail, etc. Doesn't this require that someone browse a page?</ignorance>

I need to send an email from SQL.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00525-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 790
| 1
|
https://www.heydullblog.com/uncategorized/502/
|
code
|
Hi folks! Here’s the part of your post that you want on the main page. Don’t forget the [MORE]

And here’s the rest of the post, which will have its own distinct page. When the reader clicks the “Read more” link, he/she will be magically transported. Don’t forget the closing tag.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499768.15/warc/CC-MAIN-20230129211612-20230130001612-00777.warc.gz
|
CC-MAIN-2023-06
| 524
| 4
|
https://www.warriorforum.com/search-engine-optimization/888632-how-submit-dmca-google.html
|
code
|
It's really infuriating:
You select something, it does some pretty ajax movement.... then nothing.
You can click "next" and you will be asked to log into Google plus. Then after that you can do it all over again!
I've cleared cache/cookies etc, I don't know why it's simply not working! Any one have any suggestions?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655894904.17/warc/CC-MAIN-20200707173839-20200707203839-00480.warc.gz
|
CC-MAIN-2020-29
| 316
| 4
|
https://community.teamviewer.com/English/discussion/117391/can-connect-a-to-b-but-not-b-to-a-please-help
|
code
|
I have a paid Remote Access license, I can connect from Computer A to Computer B, but I cannot from Computer B to Computer A. Please help!
Thank you for posting on TeamViewer Community.
If you would like to connect to device A, kindly add device A to the list of Add Computer.
More information can be found in this article:
Hope it helps.
Ying_Q, thanks for your reply. However, my Computer A is already added to my list of available devices. I have been successfully using TeamViewer for quite some time. Within this past week, something has changed, and I am no longer able to connect from my Computer B to my Computer A. I CAN, however, connect from my Computer A to my Computer B. I am attaching a screenshot of my Computer A. Can anyone help?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305277.88/warc/CC-MAIN-20220127163150-20220127193150-00141.warc.gz
|
CC-MAIN-2022-05
| 747
| 6
|
https://brickandmotorboutique.com/products/mini-cheetah-print-skirt
|
code
|
The most flattering cheetah print skirt! Skirt hits mid-shin and has a super stretchy elastic waistband! Runs true to size.
Model is approximately 5’9” and is wearing a size small.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538431.77/warc/CC-MAIN-20210123191721-20210123221721-00232.warc.gz
|
CC-MAIN-2021-04
| 267
| 3
|
https://www.physicsforums.com/threads/stars-in-the-early-universe-and-stellar-processes.849109/
|
code
|
Hey PF, since there are stars that can be powered predominantly (>50%) by the CNO cycle, which requires carbon as a catalyst, and I understand the core temperatures of these stars are about 10^6 K: does this mean that stars where the triple-alpha process is dominant (10^8 K) had to exist and die previously for there to be enough carbon available to predominantly CNO-power a star? I'd say the bigger question attached to this is "Is there a limit to the earliest period where it is possible for stars to exist powered more than 50% by the CNO cycle?" and maybe, to go a step further, "Alternatively, does this mean that regardless of the overdensity regions in primordial times, where it would be more likely for a hot star to form (correct me if I've misunderstood the consequences of over- and underdensity regions), there still couldn't be predominantly CNO-powered stars (occupying a region around 10^6 K) until enough hotter stars existed to generate enough carbon for the cooler stars to exist?" I feel like I'm missing something blatantly obvious here. Possibly I've made the assumption that I am ignorant of either an early-universe carbon nucleosynthesis event, or just that I am not appreciating the abundance of carbon generated by supernova nucleosynthesis and rare fusion events in some stars.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589536.40/warc/CC-MAIN-20180716232549-20180717012549-00285.warc.gz
|
CC-MAIN-2018-30
| 1,305
| 1
|
https://docs.cloudera.com/runtime/7.2.18/yarn-allocate-resources/topics/yarn-managed-parent-queues.html
|
code
|
Managed Parent Queues
Managed Parent Queues are auto dynamic child creation enabled queues in absolute and relative resource allocation mode.
In absolute and relative modes, when you enable auto dynamic child creation for a queue it becomes a Managed Parent Queue. It cannot have static child queues, and queues under it can be created only dynamically.
In absolute and relative modes, dynamically created queues always fall under a predefined (static) queue, which is the Managed Parent Queue. This limits the nesting to only one level. In addition, the queue properties set for the Managed Parent Queue will be applied to all of its dynamically created child queues. To change the queue properties of all its dynamic child queues, you have to change the configuration at the Managed Parent Queue level.
It is possible to dynamically create queues with zero capacity by incorrectly setting the Managed Parent Queue.
For example, you can create a Managed Parent Queue and assign a percentage based minimum capacity limit of 5%, to its dynamically created child queues. In this case, at most 20 queues can function with the 5% capacity limit. This forces all the following queues to wait until a queue is released (if no application is running in a queue, its capacity is set to zero). Therefore, it is vital to design the minimum capacity limit of child queues that belong to a Managed Parent Queue in a way that takes into account the number of queues that should be running in parallel.
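The capacity arithmetic in that example can be sketched as a small Python helper (hypothetical, for illustration only; it is not part of YARN):

```python
import math

def max_parallel_dynamic_queues(min_capacity_pct):
    """Upper bound on dynamic child queues that can hold a non-zero
    capacity at once, given each child's guaranteed minimum share
    (as a percentage) of the Managed Parent Queue."""
    return math.floor(100 / min_capacity_pct)

# A 5% minimum per child queue caps the parent at 20 concurrently
# active dynamic queues; a 21st queue waits until one is released.
```

This is why the minimum capacity of dynamic children should be derived from the number of queues expected to run in parallel.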
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297295329.99/warc/CC-MAIN-20240425130216-20240425160216-00813.warc.gz
|
CC-MAIN-2024-18
| 1,488
| 6
|
https://itknowledgeexchange.techtarget.com/itanswers/cobol400-subfile-single-page-loadpage-by-page-load/
|
code
|
Hi! Can anyone give me sample code for a COBOL/400 program which loads subfile records page by page? My problem here is that I am able to code page-by-page record loading from a PF, but when the subfile is displayed it does not display the pages properly. For example, after loading and displaying the first page (say SFLPAGE=5), I load the second page and display the subfile (the subfile relative record number is, say, 10 at the end of loading the second page), but it displays from the first page, and only when I press Page Down twice are the second page's contents displayed. Can anyone tell me where I went wrong?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540543850.90/warc/CC-MAIN-20191212130009-20191212154009-00365.warc.gz
|
CC-MAIN-2019-51
| 589
| 1
|
https://physics.stackexchange.com/questions/464170/four-momentum-squared-and-collisions
|
code
|
So, I am not asking whether the square of the four-momentum of a particle is invariant under Lorentz transformations, but rather whether it is invariant in dynamic situations. It seems to me that this also has to hold. So, is the four-momentum squared the same before and after a collision, not the total, but for one particle in that collision?
As long as the particle has neither
- Changed kind (as happens, for example, in charged-current weak scattering)
- Gotten excited (which can happen to atoms, nuclei, and hadrons; though in some cases this would be written as a change of type as in a proton turning into a Delta, for instance)
then the mass is the same.
But ... in a lot of ways what I wrote is a tautology. If a particle stays the same then it stays the same.
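The frame-invariance half of the question can be checked numerically: for a single particle, p·p = E^2 - |p|^2 is the same number in every frame. A minimal sketch in natural units (c = 1), with made-up numbers:

```python
import math

def mass_squared(p):
    """Invariant p.p = E^2 - |p|^2 for a four-momentum (E, px, py, pz)."""
    E, px, py, pz = p
    return E**2 - (px**2 + py**2 + pz**2)

def boost_x(p, beta):
    """Lorentz boost along x with velocity beta."""
    E, px, py, pz = p
    g = 1.0 / math.sqrt(1.0 - beta**2)
    return (g * (E - beta * px), g * (px - beta * E), py, pz)

p = (5.0, 3.0, 0.0, 0.0)   # m^2 = 25 - 9 = 16 in every frame
```

Whether m^2 is also unchanged *through* a collision is the physics content of the answer above: it holds exactly when the particle keeps its identity and internal state.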
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347400101.39/warc/CC-MAIN-20200528201823-20200528231823-00456.warc.gz
|
CC-MAIN-2020-24
| 745
| 6
|
https://www.itprotoday.com/compute-engines/jsi-tip-1793-access-denied-you-dont-have-permissions-or-file-use
|
code
|
If you receive:
Access is denied. You don't have permissions or the file is in use.
when attempting to delete a file, either:
1. You don't have permission.
2. The file is in use.
3. The file name is a reserved word.
4. The file is corrupt.
To resolve the issue, do the following until the file is deleted:
1. Grant yourself Change permission (and take ownership) and try to delete the file.
2. Use NTHANDLE.EXE from Systems Internals to determine who/what has the file open. Close the process and delete the file.
3. Use tip 0167 to delete the reserved word file name.
4. Install an alternate copy of NT and boot it. Delete the file. Use Control Panel / System / Startup to make your primary Windows NT install the default. If you wish to keep the alternate install, you can compress its folder. If you wish to delete the alternate install, type Attrib -r -h -s c:\boot.ini. Edit boot.ini and remove the two entries for the alternate install. Delete the folder that contains the alternate install.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571597.73/warc/CC-MAIN-20220812075544-20220812105544-00723.warc.gz
|
CC-MAIN-2022-33
| 998
| 15
|
https://www.add-in-express.com/creating-addins-blog/outlook-object-model-propertyaccessor-storageitem/
|
code
|
Don't stumble over a stone working with the PropertyAccessor and StorageItem classes in Outlook 2007
There are plenty of articles in the web dwelling upon new possibilities of Outlook 2007 programming. Among the most frequently discussed issues is how to access and set various MAPI properties and how the PropertyAccessor can help with this.
Let's have a close look at the new classes which were introduced in Outlook 2007 and let's cast even a closer look at possible traps that you may get in when using these classes.
With release of Outlook 2007 Microsoft added two key new classes to the Outlook Object Model (OOM): PropertyAccessor and StorageItem. That was yet another step to eliminate the need for lower-level programming interfaces such as Collaboration Data Objects, Outlook Redemption, Extended MAPI to retrieve and set property values that are not exposed in the Outlook Object Model. Let's have a short review of each.
PropertyAccessor provides the ability to create, get, set, and delete properties of objects. It allows getting and setting item-level properties that are not explicitly exposed in the Outlook Object Model, or properties for the following non-item objects: AddressEntry, AddressList, Attachment, ExchangeDistributionList, ExchangeUser, Folder, Recipient, and Store.
You can find more info about PropertyAccessor in the Outlook Developer Reference.
To see PropertyAccessor in action, I created a small add-in with a button in a command bar. By clicking on the button you get the HTML body of the email message. I used PropertyAccessor to read this property. The property tag is 0x10130102 (PR_HTML). To make sure this property exists in your email, you check Outlook settings, and set Mail format to HTML. To my great surprise it didn't work. I kept getting the following exception
The property “http://schemas.microsoft.com/mapi/proptag/0x10130102” does not support this operation.
Have a look at C# code snippets related to PropertyAccessor and the MAPI Store Accessor, a free .NET component that we developed specially for handling events as well as retrieving and setting MAPI properties:
private object GetValuePropertyAccessor()
{
    Outlook.Folder folderInbox = (Outlook.Folder)OutlookApp.GetNamespace("MAPI").GetDefaultFolder(Microsoft.Office.Interop.Outlook.OlDefaultFolders.olFolderInbox);
    Outlook.MailItem mailItem = (Outlook.MailItem)folderInbox.Items[1]; // Items is 1-based in the Outlook Object Model
    Outlook.PropertyAccessor accessor = mailItem.PropertyAccessor;
    return accessor.GetProperty("http://schemas.microsoft.com/mapi/proptag/0x10130102"); // PR_HTML
}

private object GetValueMapiAccessor()
{
    AddinExpress.MAPI.Folder folderInbox = adxmapiStoreAccessor1.MsgStores[1].RootFolder.Folders[1]; // indexers assumed for illustration
    AddinExpress.MAPI.MapiItem mailItem = folderInbox.MapiItems[1];
    return mailItem; // the PR_HTML value is then read through the accessor's API
}
The VB.NET code will be much the same. If you try to test this code, you will notice that the MAPI Store Accessor works fine while PropertyAccessor fails.
If you use PropertyAccessor, you must have good knowledge of exception handling logic. Below I list some roadblocks that you may run into:
- The body of an Outlook item and the content of an attachment are not accessible through PropertyAccessor.
- The PropertyAccessor ignores the seconds of a date/time value.
- String properties are limited in size depending on the information store type:
- For Personal Folders files (.pst) and Exchange offline folders files (.ost), values cannot be more than 4,088 bytes.
- For direct online access to an Exchange mailbox or the Public Folders hierarchy, the limit is 16,372 bytes.
- Binary properties: only those whose values are under 4,088 bytes can be retrieved or set. (If you try to use larger values, you get an out-of-memory error.)
- Date/time MAPI properties return Coordinated Universal Time (UTC). The PropertyAccessor.LocalTimeToUTC and .UTCToLocalTime methods can help you with this.
With the MAPI Store Accessor you can easily bypass all these limitations.
StorageItem is a message object. It stores private data for Outlook solutions. A StorageItem object is stored at the folder level. When Outlook needs to save folder information (or store-specific information) that is more complex than a simple folder property, Outlook keeps that information in an item that the user can't see. Views, custom form definitions, archive settings, and many other configuration options are contained in such hidden items. In Outlook 2007 using the Outlook Object Model, you can also create new hidden items to maintain data for your own solutions in mail and post folders. To return an existing or new StorageItem, you can use the Folder.GetStorage method which takes two parameters because the Outlook Object Model does not provide any collection object for StorageItem objects.
Set stItem = objFolder.GetStorage(StorageId, StorageIdType)
The StorageId parameter is a string that contains one of the following values:
- The EntryID for the StorageItem you want to get
- A MessageClass value. Return a StorageItem with a message class
- A Subject value. Return a StorageItem with specified subject
But as usual, there is another side of the coin. And here is a list of the StorageItem object limitations:
- Custom storage using StorageItem is possible only in mail and post folders. If you try to use the GetStorage method to create a new StorageItem object in a non-mail folder, Outlook will not create a StorageItem in that folder. Instead, it creates a visible item in the user's Inbox.
- It is also not possible to create new storage items in folders in the Exchange Public Folders hierarchy.
- Regarding existing storage items, the GetStorage method raises an error if you try to return a hidden item from an Exchange public folder or from an Exchange system folder, such as the Organizational Forms library.
So now you know all the weak points of PropertyAccessor and StorageItem. Knowledge is a great power, and armed with this knowledge you can get over all those limitations and live happily with the Outlook Object Model. Of course, there is a simpler way: third-party components such as the MAPI Store Accessor. In one of my previous posts I wrote about accessing hidden Outlook items via Extended MAPI with the above-mentioned tool.
The MAPI Accessor allows you to easily avoid the above listed traps with PropertyAccessor and StorageItem. All features of the MAPI Accessor, including hidden items, properties, event notification and others work in all Outlook versions: Outlook 2000, Outlook 2002, Outlook 2003, and Outlook 2007.
1. Outlook Developer Reference.
2. Professional Outlook 2007 Programming by Ken Slovak. Wrox Press: Oct.2007.
3. Microsoft Outlook 2007 Programming by Sue Mosher. Digital Press: Jun.2007.
https://phenotype.ca/design/
"Working alongside Emma was such an incredible experience. Not having a sure clue of what I wanted in a logo, she was able to gather bits and pieces off of my business page/persona (not really sure the correct term to use but I hope this makes sense) to get a better idea of how I represent, and also how I would like/want to continue representing myself as a company.
I've worked with Emma before in regard to another logo for another company of mine and I am proud to say that I am not surprised as to how everything had turned out. This logo so beautifully represents who I am and what I do. I am overly excited and ecstatic with it!
I'm at a loss for words as I can already see myself rambling about the same things over and over again. Emma has surpassed my overall expectations and willingly went above and beyond. I appreciate everything you have done for me, as well as your understanding throughout the whole project."
http://ideasunlimited.com.au/Snake.htm
Seven of the world’s 12 families of snakes live in Australia (although sometimes there seem to be a lot more) but only around 75 species of these are venomous. However, there are more than 30 species of deadly sea-snakes in Australian waters.
One of the most popular Australian snakes is the Diamond python. This snake isn’t venomous and has a green-black skin with pretty cream diamonds. This snake lives in trees but also hunts on the ground. It uses super-sensitive lips to find birds and small mammals by their body-heat. A female python can lay up to 50 eggs, coiling around them until they hatch much like the Celtic serpent that hatched the world.
Snakes are a symbol of magic and of healing; even the staff of Caduceus, our symbol for western medicine, features two twined snakes. Snakes remind us about life and death and the whole damned thing. They challenge us to look at the changes we need to make in our lives at a rhythm that’s right for us. Snakes encourage us to look at what needs to be healed and to look for and be ready to welcome life changes. You are being asked to leave behind your old outworn skin to move into what fits you now.
Like Snake, you also have a special sensitivity to your environment now. You’ll “taste the air” to fit in with what is necessary to your present environment. If your crowd is partying; you’ll party. If it is studying; you’ll become studious. At this time you need to choose your friends carefully and be selective about what influences you choose to accept in your life. Remember: with a fool no season spend, lest you be counted as a friend.
You do need to assert yourself to develop self-reliance. If you do, you will become more self-sufficient as you grow into your resplendent new skin.
https://thesemicolonexpert.com/2020/04/19/beginners-guide-to-setup-react-native-application/
Beginners Guide To Setup React Native Application
Are you a newbie looking for developing an app using React Native? Or are you stuck while setting up React Native Application? Or are you looking for a step-wise React Native setup guide to help you during the process? If your answer to any of these questions is yes, then look no further! Here, you can find a comprehensive React Native app setup tutorial which will help you to successfully set-up a React Native environment on your system.
The installation of the React Native development platform demands certain tools. A majority of them remain the same irrespective of the operating system. Here is a list of such prerequisite tools – common for both Windows and Mac OS.
- React Native command-line interface
- Xcode/Android Studio
Setting-Up React Native on MacOS
The following section presents you a step-by-step process to setup React Native on MacOS.
Step 1: Install Homebrew
To install this package manager,
- Open the terminal and use this command to install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
Step 2: Install Node and Watchman
- brew install node
- brew install watchman
Step 3: Install Xcode
- Install Xcode via the Mac App Store to compile iOS applications
Step 4: Install Xcode command line tools
The next step would be to use Xcode to install command-line tools. To do this,
- Open Xcode
- Click on the “Preferences” tab in the Xcode menu
- Choose “Locations” from the panel
- Install the latest command-line tools
Step 5: Installing CocoaPods
Next would be to install CocoaPods to manage library dependencies of Xcode projects. This can be done using the command
sudo gem install cocoapods
Step 6: Install JDK
JDK version 8 can be installed by using the following commands
brew tap AdoptOpenJDK/openjdk
brew cask install adoptopenjdk8
Step 7: Setup Android development environment
In this step, we will install and configure Android Studio on a Mac Operating System.
- Install Android Studio
- Download it from https://developer.android.com/studio/index.html
- Choose a “Custom” setup when prompted to select an installation type.
- Make sure to check all the following boxes:
- Android SDK
- Android SDK platform
- Performance (Intel ® HAXM)
- Android Virtual Device
- Configure the ANDROID_HOME environment variable by adding the following lines to your $HOME/.bash_profile or $HOME/.bashrc configuration file.
export ANDROID_HOME=$HOME/Library/Android/sdk export PATH=$PATH:$ANDROID_HOME/emulator export PATH=$PATH:$ANDROID_HOME/tools export PATH=$PATH:$ANDROID_HOME/tools/bin export PATH=$PATH:$ANDROID_HOME/platform-tools
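To confirm the exports are in place, you can source the profile and check the result. This sketch writes to a temporary file instead of your real ~/.bash_profile so it can be run safely anywhere:

```shell
# Write the exports to a temp file (a stand-in for ~/.bash_profile) and
# verify that ANDROID_HOME and platform-tools land in the environment.
PROFILE=$(mktemp)
cat >> "$PROFILE" <<'EOF'
export ANDROID_HOME=$HOME/Library/Android/sdk
export PATH=$PATH:$ANDROID_HOME/emulator
export PATH=$PATH:$ANDROID_HOME/tools
export PATH=$PATH:$ANDROID_HOME/tools/bin
export PATH=$PATH:$ANDROID_HOME/platform-tools
EOF
. "$PROFILE"
echo "ANDROID_HOME=$ANDROID_HOME"
echo "$PATH" | grep -q 'platform-tools' && echo "platform-tools on PATH"
```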
Step 8: Configure React Native command-line interface
Next, configure the command-line interface for the React Native development platform. Rather than installing react-native-cli globally, you can run commands on demand through npx:
npx react-native <command>
Step 9: Create a new application
That’s done! You have now successfully setup React Native on MacOS and have all that is necessary to build your first React Native project. Initialize by running the following command in your workspace
npx react-native init HelloWorld
Setting-Up React Native on Windows
This section helps you to setup React Native on the Windows operating system. At this point, it is worth mentioning that iOS applications cannot be built on a Windows-only system: developing native iOS code necessarily requires macOS. So, this section introduces a few additional steps that must be followed in conjunction with those mentioned in the earlier section, “Setting-Up React Native on Mac OS”.
Step 1: Install Android Studio along with JDK
Android Studio can be downloaded and installed from the link https://developer.android.com/studio/index.html
Step 2: Configure ANDROID_HOME environment variables
To do this
- Open Windows Control Panel
- Select “System and Security”
- Open “System” pane
- Click on “Change Settings”
- Navigate to “Advanced” tab
- Click on “Environment Variables”
- Select “New”. This results in SDK installation at the default location
Step 3: Add platform-tools to Path
Next, you will have to add the necessary platform-tools to the Path variable. To accomplish this
- Open “System” pane under “System and Security” from the Windows Control Panel
- Click on “Change Settings” and navigate to “Advanced” tab
- Click on “Environment Variables”
- Select the Path variable and click on “Edit”
- Click “New” and add the path to the Android SDK platform-tools directory
The next steps would be the same as that you did while setting-up React Native on MacOS.
That’s it! Once you complete all these steps, you can start building iOS apps either on Mac OS or on Windows.
https://docs.violet.io/apply-payment-method-to-cart
Apply Payment Method to Cart
There are currently two methods for applying a guest payment method to a cart. The first method is to apply a generated payment token where the card data was tokenized before sending it to Violet. The second method is to send raw card data directly to Violet.
If your checkout flow does not require an additional step after applying a payment method you can provide the complete_checkout property in your request body with a value of true. This will submit your cart as an order and complete the checkout process if the payment method is valid.
Apply a payment method to the given cart. API Reference: Apply Payment Method
cart_id - The ID of the cart for which the payment method is being applied to
Applying a Generated Token
If you are building a client-side application, we recommend using the Stripe.js library to first tokenize your customer's card data before sending it to Violet. You can learn more about how to do this in our Payments and Payouts section. Once you have a generated token, you can apply it to a cart with a request similar to the following example.
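The request body for such a call might look like the sketch below. The exact schema lives in the Apply Payment Method API reference; the field names and token value here are illustrative assumptions, not confirmed API fields:

```json
{
  "token": "tok_generated_by_stripe_js",
  "complete_checkout": false
}
```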
Applying a Credit/Debit Card
If using a generated payment token is not possible for your application and you plan on being PCI compliant, you can apply card data directly to a cart. The following snippet shows an example of this.
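A raw-card request might look roughly like this. Again, the field names are illustrative assumptions rather than the confirmed schema, and the card number shown is Stripe's standard test number:

```json
{
  "card": {
    "number": "4242424242424242",
    "exp_month": 12,
    "exp_year": 2030,
    "cvc": "123"
  },
  "complete_checkout": false
}
```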
Tokens will always take priority over card data when applying the payment method to a cart. If you wish to use card data please leave the token property empty or simply omitted from the request body.
https://flylib.com/books/en/2.733.1.47/1/
NICs play an important role in connecting a computer to the physical part of the network. No discussion of networking standards is complete without including drivers, the small software programs that enable a computer to work with a network card or other device. In this lesson we look at device drivers and how they relate to the OSI reference model.
After this lesson, you will be able to:
Estimated lesson time: 15 minutes
A driver (sometimes called a device driver) is software that enables a computer to work with a particular device. Although a device might be installed on a computer, the computer's operating system cannot communicate with the device until the driver for that device has been installed and configured. The software driver tells the computer how to drive or work with the device so that the device performs the job it is assigned in the way it is supposed to.
There are drivers for nearly every type of computer device and peripheral including:
Usually, the computer's operating system works with the driver to make the device perform. Printers provide a good illustration of how drivers are used. Printers built by different manufacturers all have different features and functions. It would be impossible for computer makers to equip new computers with all the software necessary to identify and work with every type of printer. Instead, printer manufacturers make drivers available for each printer. Before your computer can send documents to a printer, you must install the driver for that printer on your computer's hard drive.
As a general rule, manufacturers of components, such as peripherals or cards that must be physically installed, are responsible for supplying the drivers for their equipment. For example, NIC manufacturers are responsible for making drivers available for their cards. Drivers generally are included on a disk with the equipment when it is purchased, included with the computer's operating system, or made available for downloading from an Internet service provider such as the Microsoft Network (MSN), CompuServe, or others.
Network drivers provide communication between a NIC and the network redirector running in the computer. The redirector is the part of networking software that accepts input/output (I/O) requests for remote files and then sends, or redirects, them over the network to another computer. During installation, the driver is stored on the computer's hard disk.
NIC drivers reside in the MAC sublayer of the OSI reference model's data-link layer. The MAC sublayer is responsible for providing shared access to the physical layer for the computer's NICs. As shown in Figure 5.10, the NIC drivers provide virtual communication between the computer and the NIC. This, in turn, provides a link between the computer and the rest of the network.
Figure 5.10 Communication between the NIC and network software
It is common for a NIC manufacturer to provide drivers to the networking-software vendor so that the drivers can be included with the network operating software.
When purchasing a new hardware device, always make sure that it contains the correct drivers for the specified computer operating system on which it will be installed. If in doubt, or if you are missing the appropriate driver, consult the manufacturer before you install the device. Updated drivers or drivers for various operating systems often are available over the Internet for downloading.
The hardware compatibility list (HCL) supplied by operating-system manufacturers describes the drivers they have tested and included with their operating system. The HCL for a network operating system might list more than 100 NIC drivers. This does not mean that an unlisted driver won't work with that operating system; it means only that the operating-system manufacturer has not tested it.
Even if the driver for a particular card has not been included with the network operating system, it is usual for the manufacturer of the NIC to include drivers for most popular network operating systems on a disk that is shipped with the card. Before buying a card, however, make sure that the card has a driver that will work with a particular network operating system. Installation and configuration of drivers is discussed in detail in Chapter 8, "Designing and Installing a Network."
Network Driver Interface Specification (NDIS) is a standard that defines an interface for communication between the MAC sublayer and the protocol drivers. By permitting the simultaneous use of multiple protocols and drivers, NDIS allows for a flexible environment of data exchange. It defines the software interface, known as the NDIS interface. Protocol drivers use this interface to communicate with the NICs. The advantage of NDIS is that it offers protocol multiplexing, so that multiple protocol stacks can be used at the same time. Three types of network software have interfaces described by NDIS:
Microsoft and 3Com jointly developed the NDIS specification for use with Warp Server and Windows NT Server. All NIC manufacturers make their boards work with these operating systems by supplying NDIS-compliant software drivers.
Open Data-Link Interface (ODI) is a specification adopted by Novell and Apple to simplify driver development for their network operating systems. ODI provides support for multiple protocols on a single NIC. Similar to NDIS, ODI allows Novell NetWare drivers to be written without reference to the protocol that will be used on top of them. All NIC manufacturers can make their boards work with these operating systems by supplying ODI-compliant software drivers.
ODI and NDIS are incompatible. They present different programming interfaces to the upper layers of the network software. Novell, IBM, and Microsoft offer ODI-to-NDIS translation software to bridge the two interfaces. Two examples are ODI2NDI.SYS and ODINSUP.SYS.
Most network card manufacturers supply both NDIS- and ODI-compliant drivers with their boards.
The following points summarize the main elements of this lesson:
https://forums.sonic.net/viewtopic.php?f=10&t=5284&view=unread&sid=ce444202e8f3828f6b2aebde667ae12a
I download Linux distros quite often. In such cases torrents are not only not illegal but often encouraged. Is there a reason why I should not left the client running seeding on sonic network (e.g. it causes problem for sonic for whatever reason)?
I've had no problem seeding Linux distros over Tor using Sonic, other than the bandwidth will sometimes max out my connection and my connection would drop. I've been doing this occasionally for 10 years.
https://rodneydyer.com/tag/laboratory/
Here is where we are located.
I’m moving the site from my own hand-coded html over to an instance of WordPress for 2015. There are several reasons why I am doing this:
I don’t seem to update too often if I hand code it making my lab webpage rather stale.
To dilute primers for PCR, you need to first make a 100 µM stock solution. To accomplish this, find the number of nmol printed on the tube and add 10× that much water in µl.
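As a worked example (the 25 nmol tube here is hypothetical; use whatever amount is printed on yours):

```python
# A tube labeled 25 nmol: add 10x that number of µl of water.
nmol = 25
water_ul = 10 * nmol                  # 250 µl of water

conc_nmol_per_ul = nmol / water_ul    # 0.1 nmol/µl
conc_um = nmol * 1000 / water_ul      # 1 nmol/µl = 1000 µM, so this is 100 µM
print(conc_um)  # -> 100.0
```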
The key to the AFLP protocol is to be able to digest two restriction enzymes (RE’s) simultaneously and be able to ligate onto these sticky ends primers of known concentration. When you purchase new primers, you need to aliquot out usable volumes because repeated freeze/thaw cycles reduce RE efficiency.
http://nichesoftware.co.nz/2018/04/28/simple-fix-longer-than-you-think.html
I hope you’ve been considering the puzzle from my last post about how much effort you should put into fixing a simple problem.
For the sake of today’s discussion, let’s assume you’re working on an in-house system for your organisation, one that fulfills an important institutional need. You can actually go and talk to all of your users, as they’re located in the same building as you. Even better, there’s a fairly flat support hierarchy - if someone calls the Help Desk and the problem isn’t obvious, the kind folks on the Help Desk will pass the issue directly to you to resolve.
In this context, what is the cost of the recurring bug?
The cost of the fix
It never seems that applying the fix takes very long. You run the scripts, check the result, and give the end user a call to let them know it’s resolved.
But one day you’re asked to document the fix (so that someone else can do it if you’re not around), so you take the time to actually write down the steps required.
- Contact the Help Desk to enable production access
- Remote into the production system and open the logs
- Search the logs for the expected error message and note the details needed for your scripts
- Open the appropriate tool and connect to the production data store
- Load the fix scripts and modify them with the details noted earlier
- Test the scripts by running them with a rollback (or equivalent) to ensure they make exactly the correct change
- Run the scripts with a commit (or equivalent) to make the change
- Recycle the application role to ensure the faulty information isn’t lurking in a cache somewhere
- Call the affected user and let them know everything is fixed
Even though you’re practiced, it turns out this quick fix actually takes you around 15 minutes each time. Subjectively you thought it was just a couple of minutes, so this surprises you.
The cost of interruption
You can’t predict when production issues occur, so this issue is always an interruption that takes you away from another task.
Research shows that it typically takes an average of 23 minutes to get back to where you were after being interrupted.
You know that sometimes you’re able to get back on task really quickly, in just a few minutes. But, you also know that sometimes you end up distracted when you’re debugging some really complicated code and it takes you a long time to get back to where you were.
The cost to the end user
When this problem happens, it’s pretty serious for the affected user. They’re unable to use the system at all until it’s fixed.
For many of your users, this system is their key workday tool - they spend most of the day working with it and they can’t achieve their goals for the day when it goes down.
When the problem occurs, they have to contact the help desk. The folks at the help desk have to work out what sort of problem it is. Once recognised, they need to contact you - and you run your scripts to fix things up.
It turns out that there’s usually a delay of 15-20 minutes between the time the problem happens and when you’re notified. After that, it’s another 15 minutes for you to fix the issue - and for all that time, the end user is sitting idle.
The cost of being away from your desk
You’re not at your desk all of the time. We all have formal meetings, informal conversations, coffee breaks, biological considerations, and other issues going on that take us away from our desks during the day.
In most organisations, you also have some variation in working hours. The early birds arrive at work early every morning, drinking their freshly squeezed organic juice. By late afternoon they’re skipping jauntily out of the door after a full day of work. The night owls zombie-shuffle in after 9 am, extra large coffees in hand, and work through into the early evening.
When the Help Desk tries to find you, and you’re not at your desk, how much longer does it take before they make contact? How much longer does it take before you get back to your desk to apply the fix?
The cost of repetition
Remember that our hypothetical production issue happens around once a week - that’s around fifty times per year. This magnifies the cost of each occurrence, especially considering that you only need to fix the problem once.
Adding it all up
When you consider the time taken to fix the issue (15 minutes) and the cost of distractions (23 minutes), you find you’re spending 33 hours a year fixing the glitch.
Your help desk are spending up to 20 minutes triaging each issue before passing it on to you - that’s up to 17 hours per year.
And your end users, they find that each time the bug happens it carves nearly 35 minutes out of their day - totalling around 30 hours per year.
Now we see that the cost of our quick fix runs around 80 hours per year … that’s twice the time investment of a proper fix (and we haven’t yet accounted for the costs of being away from your desk).
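These figures can be sanity-checked with a little arithmetic. The sketch below assumes one occurrence per week, i.e. 52 per year, which is how the rounded numbers above come out:

```python
# Yearly cost of the "quick fix", assuming 52 occurrences per year.
occurrences = 52

dev_minutes      = occurrences * (15 + 23)  # 15 min fix + 23 min refocusing
helpdesk_minutes = occurrences * 20         # triage before the hand-off
user_minutes     = occurrences * 35         # idle time for the affected user

dev_hours      = dev_minutes / 60                         # ~33 hours
helpdesk_hours = helpdesk_minutes / 60                    # ~17 hours
user_hours     = user_minutes / 60                        # ~30 hours
total_hours    = dev_hours + helpdesk_hours + user_hours  # ~80 hours

print(round(dev_hours), round(helpdesk_hours), round(user_hours))
```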
https://www.scnsoft.com/case-studies/team-augmentation-go-node-js-python-react-for-the-development-of-a-kiosk-and-pwa-solution
Team Augmentation (Go, Node.js, Python, React) for the Development of a Kiosk and PWA Solution
The Customer is a US company driven by the mission to create a network of easily accessible power banks and wireless/EV charging stations for ‘green’ vehicles – electric automobiles and e-scooters.
The Customer wanted to create a self-service kiosk solution for rental stations of electric scooters, scooter batteries, and power banks. However, the Customer’s IT department didn’t possess the required skills to deliver the project.
The Customer turned to ScienceSoft with the team augmentation request. Their initial resource needs included 1 back-end developer experienced in Golang. Within just 2 days, ScienceSoft provided 3 CVs of available Go engineers and organized an interview with the shortlisted expert.
The developer joined the project in 5 days after the request and started working on the kiosk back-end development tasks under the guidance of the Customer’s CTO. The Go engineer used Jira for task reporting and Microsoft Teams for regular communication with the CTO.
A few weeks later, upon seeing the aptitude and professionalism of ScienceSoft’s Go developer, the Customer decided to scale the team up and requested 2 more talents from ScienceSoft:
- a front-end developer skilled in React – for the development of the PWA version of the kiosk app.
- a back-end engineer with experience in Node.js and Python – for the implementation of the web app’s back-end module.
In 3 days, the Customer received the CVs again, reviewed them, and interviewed the most fitting candidates, who then joined the team and began contributing to the project. The functional modules ScienceSoft’s developers implemented include:
- Log in/Registration.
- Product choice (an e-scooter, a charger, or a battery).
- Point-of-sale functionality (with payment gateway integrations).
- AWS deployment.
- Real-time view of the map with partner rental spots.
Over the course of 8 months, ScienceSoft’s team managed to deliver the full kiosk + PWA solution and integrate it with the Customer’s AWS infrastructure. Satisfied with the cooperation, the Customer has expressed the wish to tap ScienceSoft’s experts in their other projects in the future.
Technologies and Tools
Back end: Golang, Node.js, Python, Amazon Web Services.
https://www.techcompanynews.com/anomaly-science-builds-the-bridge-to-web3-making-it-easier-for-decentralized-projects-to-take-flight/
Below is our recent interview with Jacob Haap, Chief Executive Officer at Anomaly Science:
Q: Could you provide our readers with a brief introduction to your company?
A: Anomaly Science is an American-German blockchain company based in Berlin, Germany, and Cincinnati, Ohio, with operations distributed across Europe and North America. We are building the bridge to Web3, making it easier for decentralized projects to take flight, lowering the level of entry to software development, and giving power back to software developers. This vision is accomplished through a creation we call “Particles.”
Q: Any highlights on your recent announcement?
A: Anomaly Science recently announced our creation “Particles” with the publication of a recent whitepaper. Particles are a Patent-Pending creation by Anomaly Science to bring more power to software developers and aid in the transition to Web3. They are a method of project management built upon the tokenization of work, providing greater ease of access to software development and deployment and doing business in our growingly decentralized world. Of the three types of Particles, we currently have Repository Particles in a demo state preparing for a user beta, with the other two types following soon after.
Q: Can you give us more insights into your offering?
A: What we are building has the potential to bring substantial change to the growing decentralization movement and change what it means to create. Particles open a new door for creators, with greater collaboration and sharing of work enabled through the tokenization of intellectual property rights enabling the formation of knowledge pools and the creation of hybrid-source codebases.
Particles mean that independent developers can have more ownership in what they are doing, whether they are working solo or freelancing, or if they work for a larger company. Particles mean that whatever they build, they will receive fair compensation. We see this being especially useful in the open-source community, with a potential for a slight shift to hybrid-source. Our method of intellectual property tokenization would mean open source developers could see how their work is being used downstream from derivative projects. If their work contributes to something that becomes big, they get the reward and recognition they deserve.
For teams, Particles open the door to the seamless function of international teams and help to legitimize decentralized means of business further. Currently, these business methods lack recognition from world governments. We hope our new business method can act as a stepping stone in bringing more attention to decentralized teams and ways of doing business and further legitimizing those organizational structures.
Q: What can we expect from your company in the next 6 months? What are your plans?
A: Over the next six months, we plan to continue Particle development so that all three types are ready for use and extend our reach into developer communities. What is most important for our success is forming a community and staying engaged with the current developer world. To accomplish this, we want to remain engaged in existing developer communities and do what we can to support events like hackathons and developer conferences, especially among student developers. Additionally, we are kicking off our Seed Round funding to raise a minimum of USD 500K to support our further development efforts so we can get our product into the hands of developers and creators everywhere. We hope that six months from now, the effects of our vision can begin to be seen, with more work shared and more teams embracing decentralization and blockchain technology.
Q: What is the best thing about your company that people might not know about?
A: The best thing about Anomaly Science that people might not know is our team and dedication. We all knew each other before Anomaly Science started, having been brought together by our love of technology and building cool things. In the past, a few of us worked together when in school to host student hackathons. My team is not just my team, but they are also some of my best friends.
We are also very dedicated to making our ideas a reality. This dedication meant leaving my degree program and going my own way to do what I saw was correct. At eighteen, I left my hometown of Cincinnati, Ohio, and moved to Berlin, Germany, to continue my work. I see taking significant risks as necessary to building something incredible, and even though there are times of stress and confusion, it is just part of the journey! I am proud to say that this is a mindset shared across the team of taking risks and never giving up.
http://mocpages.com/moc.php/124458
Oh. Well, they're cool. I found some old TYCO compatible bricks in my gf's brother's old collection and they're brilliant (double studded 2X2 plates and double bottomed 2X2 half brick). Just keep in mind that most of the best here are purists and you could see some very good stuff down-rated because of it...
Quoting Yuri Fassio
I read the description, but I still don't understand how you made those hats... did you turn something inside-out?
Yuri, the hats are from a Chinese brand called Sluban, also known to some as Click brick or Oxford. They aren't as good as Lego, the quality is poor, but some of the bits like the hats and body armour are pretty good.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703635016/warc/CC-MAIN-20130516112715-00044-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 662
| 4
|
http://techreport.com/news/6957/windows-security-comparable-to-other-oses
|
code
|
Windows security comparable to other OSes?
Computer Weekly is reporting that Windows XP may not be as insecure, relative to other operating systems, as press coverage of critical flaws would have you believe. The report cites statistics released by Danish security firm Secunia that reveal a surprising number of serious security advisories for SuSE Linux Enterprise Server, Red Hat Advanced Server 3, Mac OS X, and even Sun Solaris 9.
Given its massive user base, the impact of critical Windows flaws is certainly more severe. However, the operating system may not be significantly less secure than others.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720238.63/warc/CC-MAIN-20161020183840-00079-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 608
| 3
|
https://samshouseblog.com/2020/11/25/aws-certified-devops-engineer-professional-exam-tips/
|
code
|
I recently passed the AWS DevOps Pro exam and wanted to share my feedback on the exam and what it takes to get the certification.
So first off… Shew… This is a hard exam… One of the toughest of my career.
This is an exam that requires some extended experience in AWS. The more you work in AWS, the better off you will be. Of all the AWS exams, this one will require the most real-life experience. Does this mean that if you have not done extensive work in AWS that you should not pursue this? No! Nobody knows everything and the most experienced professionals still learn every day.
My experience with this exam covers several of the following:
- AWS Code Developer Tools
- Code Commit
- Code Build
- Code Deploy
- Code Pipeline
- CloudWatch Events
- Auto Scale
- Elastic Beanstalk
All of these are crucial to understand. A good portion of the exam will focus on the Code Developer Tools, how they work together, how you build and deploy apps, how you provision infrastructure, etc. AppSpec and BuildSpec files will come up, so understand each Code developer service extensively.
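Since the exam leans on these files, it helps to have the shape of one in your head. A minimal CodeDeploy appspec.yml for an EC2/on-premises deployment looks roughly like this (the paths and hook script names here are illustrative, not from any real project):

```yaml
version: 0.0
os: linux
files:
  - source: /                      # everything in the revision bundle...
    destination: /var/www/myapp    # ...is copied here on the instance
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
```

BuildSpec files for CodeBuild follow a similar idea: a version key plus phases (install, pre_build, build, post_build) and an artifacts section.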
This is a tough exam, and it's harder to find good training materials such as courses and labs. There are several materials I recommend, not just for the exam, but also as general training for anyone in an AWS role.
The AWS DevOps Blog – https://aws.amazon.com/blogs/devops/
This is a very good blog with some awesome articles and solutions around DevOps in AWS. I highly recommend going through them, as a couple of scenarios showed up in the exam.
AWS Whitepapers – https://aws.amazon.com/whitepapers/
Reviewing this site will always be of use. There is a section on Developer Tools that was useful in studying for this exam
AWS Certified DevOps Engineer Professional 2020 course by Stephane Maarek – https://www.udemy.com/course/aws-certified-devops-engineer-professional-hands-on/
I would have to rank this course as one of the best AWS Training courses I have ever done!! The Instructor dives deep into all the sections of the exam and is a critical resource. This course is not just an exam prep, it is a serious training course that anyone who is an AWS DevOps Engineer needs. This course will prepare you for this exam and outranks any other training out there.
AWS Professional exams are not just tough because of the required knowledge and content tested; they are also a test of time management and focus.
- 75 questions in 180 minutes. It seems like a long time, but you will use every minute of it
- Time management will be key. Don’t get stuck on a question. You will have an average of 144 seconds per question. Don’t let one question cause you to rush other questions at the end
- Questions can be paragraphs long, with answer choices that can be a paragraph long. Don’t let this intimidate you. When reading a question, pick out the important pieces of what is being asked.
- Remember you can skip and flag questions to review at the end. Always a good strategy to use, and other questions in the exam could trigger a memory about something you may be fuzzy on at that moment in time.
- Do the AWS Practice Exam on the certification signup page and the AWS sample questions on the AWS exam details page. They are a good test to see where you are, and sometimes a question from there can pop up on the real exam.
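The time budget above is worth making concrete; a quick sanity check of the per-question arithmetic (75 questions, 180 minutes, from the exam format described above):

```python
# 75 questions in 180 minutes: how long can you afford per question?
questions = 75
minutes = 180

seconds_per_question = minutes * 60 / questions
print(seconds_per_question)  # 144.0, i.e. under 2.5 minutes per question
```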
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572163.61/warc/CC-MAIN-20220815085006-20220815115006-00313.warc.gz
|
CC-MAIN-2022-33
| 3,319
| 26
|
https://www.nineyardstore.co/products/diy-teeth-whitening-kit
|
code
|
Step 1: Heat the mouth tray under hot water for about 10 seconds to warm it up, then shake off the excess water. Note that the mouth trays should not stay in hot water too long, or they may lose their shape.
Step 2: Put the mouth tray over your teeth and use your fingers to gently try to mold it to your teeth.
Step 3: Take the mouth tray out and apply some of the gel evenly into the tray (0.25 to 0.5 ml for each tray; any more is just wasted).
Step 4: Put the tray back onto your teeth and leave it there for 25 to 30 minutes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057347.80/warc/CC-MAIN-20210922102402-20210922132402-00080.warc.gz
|
CC-MAIN-2021-39
| 523
| 4
|
https://community.nxp.com/thread/381465
|
code
|
I need to rebuild the whole Kinetis toolchain from scratch.
I need to start from the source code, apply patches, configure, and build. In other words, I need to replicate exactly the steps followed by Freescale to build the toolchain delivered with KDS 3.0.0.
Where can I find detailed instructions?
I mean: where to find source code, patches and other stuff? How to configure?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316194.18/warc/CC-MAIN-20190821194752-20190821220752-00065.warc.gz
|
CC-MAIN-2019-35
| 365
| 4
|
https://www.nellyisabelleartstudio.com/product-page/escape
|
code
|
Sold - This canvas is sold; however, you can place a custom order for one similar. Each painting is done by me, so it will not be exact. Contact me for details.
This is a canvas wrap available in different sizes.
All sales are final
Contact me for a custom shipping quote prior to ordering.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154163.9/warc/CC-MAIN-20210801061513-20210801091513-00506.warc.gz
|
CC-MAIN-2021-31
| 288
| 4
|
https://pinside.com/pinball/forum/topic/should-stern-do-multiple-themes-for-the-same-playfield-designs
|
code
|
Yes! Of course, they should have done this a long time ago.
It will bring the costs down, better prices for customers. But they should also allow people to play all the games on their pin.
So you have an AC/DC pin, as it is now. They retheme it to, let's say... uhmmm... Deadpool, and give it Deadpool code. And then players who have an AC/DC pin can also swap to the Deadpool code. This way they have multiple games on one layout.
Of course you are playing Deadpool with AC/DC toys on it, but it is only an extra on top of your AC/DC pin.
And why only 2? Retheme the shit out of great layouts. It would be great if you could buy your dream theme with a nice layout, and also swap to a few other game codes.
Awesome for home and in barcades.
When creating a new layout, you can also pick your toys and graphics smartly, knowing the multiple themes you are gonna use, so they fit both themes. For example an airplane could fit on a Mission: Impossible pin but also on a Die Hard pin.
So many possibilities for customers to get way more out of one pin, rather than just that one (unfinished) game mode.
Stern could sell extra toys, lower costs, more fun for all, etc. etc.... everybody would benefit. They could even sell the new code as a download package.
To the people that don't like this, I want to ask: if you have a Medieval Madness pin, or imagine you have one, would you be interested if you could buy a Monty Python's Flying Circus game mode for $200? So you can swap from Monty Python to the original Madness code and back. Is there anybody in this world with a Medieval Madness that would not like that?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00561.warc.gz
|
CC-MAIN-2021-31
| 1,562
| 10
|
https://glints.com/id/opportunities/jobs/automation-software-tester/7df10fb2-0494-4d70-92df-bf6cc228cf36
|
code
|
Make sure the company you are applying to is legitimate by checking their website and job listings.
Job description: Automation Software Tester at PT Geek Seat Indonesia
- Design and develop test automation scripts.
- Use test automation guidelines.
- Research issues related to software testing.
- Collaborate with QA Analysts and Software Developers to develop solutions.
- Keep up to date with the latest industry developments.
- At least a bachelor's degree from a reputable university, with a Computer Science/IT/Management specialization
- At least 3 years of experience actively working in a testing position.
- A strong understanding of the fundamentals within testing — test design, planning, strategy, test approaches & techniques, and bug advocacy.
- Strong interest in the software development business area, especially software testing.
- Experience of test script writing, test cases, test documentation and compiling bug reports.
- Demonstrated experience in Automation Testing.
- Willing to learn and follow processes carefully.
- Professional attitude towards clients and colleagues.
- Fluent in English – both written and oral communication skills
- Experience of the full test life cycle.
- Experience of testing practices in addition to manual tests, such as unit testing, regression testing, sanity testing, stress testing, and load testing.
- Experience using testing tools.
- A wide knowledge of Agile Development Methodologies
- Experience with Selenium.
- Experience with using GitHub, Jenkins, and Azure Integration
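As a sketch of the test-design fundamentals this role asks for, here is a minimal page-object-style test in plain Python. The LoginPage class and the FakeDriver stub are hypothetical; in a real suite the driver would be Selenium's WebDriver and the locators would target real elements:

```python
# Page Object pattern: each page wraps its locators and actions behind methods,
# so tests read as intent ("log in") rather than raw element lookups.

class FakeDriver:
    """Stand-in for a Selenium WebDriver; records actions instead of driving a browser."""
    def __init__(self):
        self.actions = []
        self.current_url = "https://example.test/login"

    def fill(self, locator, value):
        self.actions.append(("fill", locator, value))

    def click(self, locator):
        self.actions.append(("click", locator))
        # Pretend the click submitted the form and navigated.
        self.current_url = "https://example.test/dashboard"


class LoginPage:
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.fill(self.USERNAME, user)
        self.driver.fill(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


def test_login_navigates_to_dashboard():
    driver = FakeDriver()
    LoginPage(driver).login("qa_user", "s3cret")
    assert driver.current_url.endswith("/dashboard")
    assert ("click", LoginPage.SUBMIT) in driver.actions
```

The same shape carries over to a real Selenium suite run under pytest in a CI pipeline (e.g. Jenkins): only the driver changes.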
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363510.40/warc/CC-MAIN-20211208114112-20211208144112-00623.warc.gz
|
CC-MAIN-2021-49
| 1,535
| 22
|
https://gentlebamboo.com/case-studies-the-5-stages-of-a-case-discussion/
|
code
|
[This week, Rakshith explores Case Studies as a training tool. Over to him.]
Building on the premise of creating open spaces in the classroom, I’ve been thinking about how I might facilitate a case discussion. Here’s the situation as I see it.
It’s a 3-hour program (180 minutes). With 15 minutes set aside for a break, 15 minutes for an ice-breaker, and 15 minutes to close the session and gather feedback, I have 135 minutes of discussion time.
I am making three assumptions:
- The case is complex and has the characteristics I described in Tuesday’s mail.
- All the objectives and outcomes set for this training can be accomplished with this one case.
- The participants haven’t come prepared. They haven’t read the case in advance.
My role is to enable the learners to think deeply about the case and discuss it fearlessly in my classroom. To enable fearless discussion, I will have to create a motivating (Gamification), safe (Personalisation), and collaborative (Socialisation) environment. We’ll explore that in a different mail. But, for learners to think deeply about the case, I think there are 5 stages a learner has to pass in the case discussion:
Stage 1: What is the situation being described in the case? (15 minutes)
Stage 2: What else do I need to know about this situation and can I find it in the case? (15 minutes)
Stage 3: What is my hypothesis? (45 minutes)
Stage 4: How do I support my hypothesis and what am I recommending as immediate next steps to act on my hypothesis? (45 minutes)
Stage 5: What if my hypothesis is wrong? What is my next best option? (15 minutes)
While this may intuitively seem like an iterative process, in a training session I would approach this in a sequential manner, very much like how you would when facilitating a session on Six Thinking Hats or Design Thinking. This is not to discourage ideas but to gently park them for the appropriate time so that the case discussion can at all times be focused. That’s my interpretation of balancing empty and full space in a classroom; provide freedom within a structure.
Will this lead to a strong conclusion every time? I doubt it. Having watched a few case discussions now and having talked to people who have both participated in them and facilitated them, I think most case discussions lead to dubious conclusions. But the discussion and the thinking it sparks off in you can be an invaluable experience.
In the final stage of this series,
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056656.6/warc/CC-MAIN-20210919005057-20210919035057-00096.warc.gz
|
CC-MAIN-2021-39
| 2,454
| 16
|
https://www.barneyartistmusic.com/reuters-etoro-redesign-interview/
|
code
|
Empowering Investors and Changing the Way We Trade. Reuters Etoro Redesign Interview …
Over the years, I have come across numerous trading platforms, but one that stands out from the rest is Etoro. Having used Etoro for the past 8 years, I can confidently say that it has reinvented the way people trade and invest in the financial markets.
The Power of Etoro
Helping My Son Thrive with Etoro
Pros and Cons of Etoro
Top Tips for Using the Etoro Platform
Etoro vs. Binance: A Comparative Analysis
Etoro’s Advertising and Football Connection
The Power of Etoro:
Etoro has disrupted the traditional investment landscape by introducing a social trading platform that connects investors worldwide. It allows users to trade a wide range of financial instruments, including stocks, cryptocurrencies, commodities, and more. What sets Etoro apart is its unique feature that lets users automatically copy the trades of successful investors, thereby democratizing investment strategies and knowledge-sharing.
Helping My Son Thrive with Etoro:
Through Etoro’s CopyTrading feature, he gained exposure to the trading activities of seasoned investors, which helped him make informed decisions. Etoro’s virtual trading platform allowed him to practice risk-free before investing real money.
Pros and Cons of Etoro:
Social Trading Community: Etoro’s social trading aspect fosters an engaging environment for knowledge exchange and idea sharing.
CopyTrading: Users can replicate the trades of successful investors, allowing novices to learn from experienced professionals.
Diverse Asset Selection: Etoro offers a wide array of financial instruments, catering to the varied interests and preferences of investors.
User-Friendly Interface: The platform’s intuitive design makes it easy for users to navigate and execute trades.
Limited Cryptocurrency Selection: While Etoro supports popular cryptocurrencies, it has a narrower range compared to specialized crypto exchanges.
Withdrawal Fees: Etoro charges a fee for withdrawing funds, which can be a drawback for some investors.
Inactivity Fees: Users who are inactive for extended periods may incur charges, which can be a concern for infrequent traders.
Top Tips for Using the Etoro Platform:
Conduct Thorough Research: Before investing, make use of Etoro’s extensive educational resources and do independent research to make informed decisions.
Diversify Your Portfolio: Spread your investments across different asset classes to reduce risk and maximize potential returns.
Engage with the Community: Interact with other users, follow successful traders, and learn from their strategies and insights.
Set Realistic Goals: Define your investment objectives and establish a well-defined plan to achieve them.
Regularly Review Your Portfolio: Track your investments, monitor their performance, and make adjustments as needed.
Etoro vs. Binance: A Comparative Analysis:
While Etoro and Binance are both popular trading platforms, they cater to different needs. Etoro excels in social trading, providing a comprehensive platform for both novice and experienced investors to connect and learn.
How do you make money from eToro as a complete novice? As you can see here, this is eToro. Essentially, eToro is a trading platform, and the thing that makes it stand out, the thing that is actually great about this platform, is the news feed. As a beginner you probably don't know what this is, but think of this platform as Facebook meets, say, Coinbase. You have a selection of different areas, and the unique part of the platform is that you can actually talk with other people and see what they are buying. In my news feed, based on the type of stocks I want to follow, I'll be interacting with various other users. Different people post different things; someone posts about a stock, and I can click that stock and, via the button next to it, literally invest if I want to.
What's also interesting is that I can go to the statistics bar: I can see the six-month view and the trading chart, and I can use drawing tools and several other things, so it's actually very nice. There is also a news tab, so if you believe there is going to be any negative or positive news, you can see it marked positive or neutral, and then of course you will see whether it is going to affect the stock. Now, the biggest way you can actually make money from this platform, and the biggest reason people use eToro,
is the social trading aspect of it. What you want to do is go to Discover and scroll down to CopyTrader. (If you're wondering about my account balance, I am currently in the virtual one.) CopyTrader allows you to mirror the portfolios of the top investors, meaning that if there is a top investor, you can simply copy their trades and you don't need to do anything yourself. Here is the list of the top investors and their returns over two years: one has posted a return of 72%, another 33%, another 127%. You get the idea; you can click on these traders and, with the button at the top, copy their trades.
The reason this is quite cool is that if you are able to copy someone's trades, you can invest fairly wisely without really doing any work. It shows each trader's risk score: a risk score of 2 tells us the risk is very low, while a risk score of 10 means the trader is likely to be volatile, with big gains and big losses. All you would do is click the Copy button, then invest the amount you want over a certain period of time. There is also a box underneath that protects your losses: say you had ten thousand dollars to invest and you didn't want to lose more than a thousand, i.e. you didn't want your investment to drop below nine thousand dollars; you would say, okay, close the investment if it drops below nine thousand. If you tick the box, you copy the open trades, and as a beginner you can put your money on this platform and
simply copy their open trades. You don't actually have to do anything here; this is 100% passive, and if I click Invest, it will go ahead and copy their trades for the next year or so. You can also look at similar traders and see what they are doing, because you can actually see what's inside their portfolios: what they're buying, what they're holding, what they've been selling, exactly what has been giving them a big profit. You can also see the statistics, every single month in which they have made money, and the performance for 2022, month by month, some red months and some green months. It's definitely a very interesting way to do this, and of course there is also a section where you can talk with people. That is the main feature of eToro, and like I said, it's excellent. Now, there are many different ways you can trade as well. For example, say I go to my watch list, the default watch list they give you, and I want to trade Tesla; I think the stock market did just open. You can of course trade, and you can trade using leverage as well, so even if you only have a thousand dollars to start, you can use 5x leverage, which of course means that you can
increase your position size. What's also worth knowing is that you don't want a situation where the overnight fee ruins you, because when you trade with leverage there is an overnight cost: you are using someone else's money. You can see the overnight fee here (you may have to zoom in): four dollars on a weekend and one dollar per day. That means every single day, that is the fee you pay to eToro for using their money. If you just use regular 1x leverage, you are not going to pay that much; depending on the leverage you use, there will be an increased fee, because of course you are borrowing to cover that money. Note that for any significant amount of Tesla, it actually shows the real value of your trade. What I would do is experiment with the virtual account first, because this will get you to grips with how you can actually use the platform to make money. If you don't want to buy, if you think a stock is going to go down, you can sell, which is essentially shorting the market. There are also different order types: if you think Tesla is going to go to 120, you can place an order, set it to 120, and say, okay, when Tesla hits 120 I want 2,500 to go in, and of course you will see the price there. It will be bought with leverage,
and you can set your take-profits and things like that. What's also interesting is that you can literally talk with all the people who are buying Tesla and see their posts; someone here is asking whether it's going up or down. The reason I picked Tesla is that it's a very interesting stock at the moment. You can also see the market sentiment, which is certainly something you need to look at, and if you want to look at the company's balance sheet, you can see whether it's going up or down: overall there's an uptrend, and revenue is actually pretty stagnant but still in quite an uptrend. The last three years for Tesla have been absolutely insane. When you go to the Analysis tab, I believe this is where we have some really good indicators. You can go to any stock and click the Analysis tab; when you scroll down, it shows you whether it's a buy or not. Based on 16 analysts covering Tesla, it says moderate buy. It also shows you the forecast: the prediction is that it's going to be at 232, which is an 87% gain, based on 16 rated
analysts. Over the next 12 months, analysts' targets run from 87 at the bearish estimate through 258 up to 409, so it gives you a good idea of what you want to do. It links to the articles as well, so you can read them, because they actually post these for the public. When you scroll down, you can see a lot more information about the stock, including, of course, the sentiment.
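The copy-trade stop-loss described in the walkthrough above (close a $10,000 copy allocation once it falls below $9,000) reduces to a one-line check; the function below is an illustrative sketch, not eToro's actual API:

```python
def should_close_copy(current_value: float, invested: float, stop_loss_pct: float) -> bool:
    """Return True when a copied position has lost more than the allowed fraction.

    e.g. invested=10_000 with stop_loss_pct=0.10 closes once value drops below 9_000.
    """
    floor = invested * (1.0 - stop_loss_pct)
    return current_value < floor


# Walk through the example from the text above.
invested = 10_000
assert not should_close_copy(9_500, invested, 0.10)  # still above the 9,000 floor
assert should_close_copy(8_900, invested, 0.10)      # below 9,000: close the copy
```

In the platform, this threshold is the "close the investment if it drops below" value attached to each copy.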
Etoro’s Marketing and Football Connection:
Etoro has effectively used strategic partnerships and sponsorship deals with top football clubs to raise brand awareness. By associating itself with the excitement and passion of football, Etoro has reached a wider audience and captured the attention of sports fans who are also interested in investing.
Etoro has transformed the investment landscape by creating a platform that combines social trading, educational resources, and a diverse range of financial instruments. With its user-friendly interface and unique features, Etoro empowers both novice and seasoned investors to navigate the financial markets with confidence. As a retired hedge fund manager, I highly recommend Etoro to those looking to begin their investment journey or improve their trading skills. Embrace the power of Etoro, and embark on your path toward financial success.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816832.57/warc/CC-MAIN-20240413180040-20240413210040-00277.warc.gz
|
CC-MAIN-2024-18
| 14,886
| 39
|
https://discuss.96boards.org/t/hikey-boot-u-boot-without-atf-fastboot-l-loader-and-other-components/6940
|
code
|
The Documentation on U-Boot has a nice Wiki to build and run U-Boot on Hikey Lemaker : https://github.com/qemu/u-boot/tree/master/board/hisilicon/hikey.
However, it uses lots of other components, which I guess are not mandatory:
git clone https://github.com/96boards-hikey/edk2 -b testing/hikey960_v2.5
git clone https://github.com/ARM-software/arm-trusted-firmware
git clone https://github.com/96boards-hikey/l-loader -b testing/hikey960_v1.2
git clone https://github.com/96boards-hikey/OpenPlatformPkg -b testing/hikey960_v1.3.4
git clone https://github.com/96boards-hikey/atf-fastboot
However, I want to get rid of these components, as my sole objective here is to run U-Boot and understand the U-Boot flow.
Similar to the BeagleBone Black, I just want to create an SPL like MLO, which then loads U-Boot. Is there a way to run U-Boot directly from the HiKey ROM code and get rid of these additional components (ATF, l-loader, etc.)?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660258.36/warc/CC-MAIN-20190118172438-20190118194438-00329.warc.gz
|
CC-MAIN-2019-04
| 936
| 5
|
http://tech.fortune.cnn.com/tag/open/
|
code
|
What we have here folks is a good old fashioned geek war.
Apple CEO Steve Jobs spent about five minutes of yesterday's earnings call (embedded below) berating Google's Android Model and its "disingenuous" "smokescreen" while touting Apple's "integrated, vertical" model where Apple controls most of the hardware and software stack.
the definition of open: "mkdir android ; cd android ; repo init -u git://android.git.kernel.org/platform/manifest.git ; repo sync ; make"
That isn't going to mean much to mainstream smartphone users, but to developers, those are the simple instructions for downloading and installing the Android open-source OS on a device.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00147-ip-10-147-4-33.ec2.internal.warc.gz
|
CC-MAIN-2014-15
| 912
| 9
|
http://jaxenter.com/discussing-vaadin-7-with-joonas-lehtinen-47697.html
|
code
|
Discussing Vaadin 7 with Joonas Lehtinen
Originally in March's JAX Magazine, we chatted to the creator and CEO of Vaadin about the team's release that had been 18 months in the making, as well as their commitment to Google Web Toolkit
JAX Magazine: For those who are unaware, just what exactly is Vaadin?
Joonas Lehtinen: As a starting point, Vaadin is a framework for building rich Web applications. The basic idea is that you can write fully modern rich Web applications just by writing Java on the server side. So when you're running Java on the server side, it basically means you have all the tooling, all the libraries, all the frameworks – everything you already have been using and you know already. All of that is available for you. Vaadin then creates the UIs automatically on the browser side.
So it’s trying to find a middle-ground between the two?
So how many people use Vaadin at the moment?
Statistically, we are looking most carefully at how many unique visitors we are seeing on the community side: at the moment, 105,000 on a monthly basis.
What reasons did you have originally for creating Vaadin, and do those core values still apply today or has the role changed over the years?
The original reason was that in 1997 I was leading a team that was building a hospital system. That was quite a complex web application that we built in Perl and the tools of the day. It became a kind of huge pile of spaghetti in the end, because we didn't have any proper tools or libraries for it, so we actually started thinking of Vaadin already in 2000.
First, we wanted to build a tool for ourselves; then we kind of fell in love with the tool and decided to open source it and release it to the public as a distribution in 2002. I guess the reasons for building Vaadin are still there, and the values that we set in 2000 are still there. We still try to reduce complexity. We still try to make building nice-looking UIs for the web as simple as possible. This is in the business context: not a tool for writing a website, but a tool for writing business applications. But as you can hear, the history is quite long; it's already a 12-year-old framework, and that's part of the reason we wanted to build Vaadin 7.
Can you explain someone of the core concepts behind Vaadin?
It's a UI library, so everything you see is concentrated on UI components. The basic idea is that you build the whole UI just by composing it from ready-made components. It works like a few other UI frameworks, such as Swing: you just put components within other components, wire them up, and they start behaving as a user interface. We have been really focused on keeping our thinking on the UI layer and on the web. Extending from there, we have data bindings from the UI layer to various kinds of data sources. We have different tools for customising the user experience: how it looks, how it behaves and so on. But still, the core is the UI components.
I would say that makes Vaadin quite different from most of the competition, because the UI components actually live in server-side memory. You can just compose in pure Java, or in fact in any JVM language, on the server side and expect all of the user interactions to be handled for you on the server side. You don't have to think about how these user interactions are actually sent over the wire to the web browser, how they are rendered, or how browsers behave differently. Most of the time, you can think at quite a bit higher level, in trees and tables and tabsheets, not in HTML elements and CSS styles.
I think you touched upon on it there – what separates you from other JVM frameworks?
It's mostly used by quite large projects. We try to find a nice balance between developer abstracts and how much you help control the presentation, but still keeping the building of the UI and maintaining of the UI relatively easy and compact.
There also seems to be an ability to mix and match too – it seems quite customisable. Is that a goal you had in mind also?
Definitely. One of the most powerful concepts we introduced a couple of years ago was the component directory. So now we have a directory where the community can submit their own components so that people can reuse them. The nice thing is that we really nailed the packaging of the components, their distribution, and how you start to use them. The community has now submitted 300 components there. You can add a Maven definition to your pom file, or download the component, and Vaadin takes over and integrates the component automatically into your application.
- Origins of Vaadin
- Vaadin 7 and Google Web Toolkit
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657116650.48/warc/CC-MAIN-20140914011156-00029-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
CC-MAIN-2014-41
| 4,646
| 19
|
https://marcusclasson.com/2013/07/14/reading-a-windows-azure-service-bus-queue-from-net/
|
code
|
This is the second part in a multi-part blog post; the previous part is here:
We have a couple of message in the queue. Let’s pull them out. This is the design of these objects we did in the previous post:
This is simulating a mobile application that can place orders of some kind.
To easily deserialize these into POCOs we create the container classes like this:
So, first of all go to the Azure portal and get the connection string to the queue. Click on the link at the bottom saying connection information.
In the following popup you click the “copy to clipboard“-icon to the right of the “ACS Connection string” box. There, now you have the connection to the endpoint in your clipboard.
The connection string goes right where it says in the following snippet
Here you get an instance of the NamespaceManager and then use it to check if the queue exists or, otherwise, create it. Orderqueue is the name we chose in the previous sample. Change to whatever your queue is called.
If everything is fine we go ahead and create the QueueClient pointing straight at our queue.
So, that’s all the setup needed.
Let’s go get some objects then. Now, remember this example is overly simplified. It’s just blocking forever until a message arrives. It could very well be that you want some timeouts to act upon. The Client.Receive() optionally takes a TimeSpan for specifying the time to wait, just as in the MSMQ counterpart.
You perhaps also would like to check out the Async-methods and see if they better suit your needs. This sample is suitable for a Windows Service or any non-server application where you control the flow. I’m just using one thread for the loop so it’s fine.
A few comments on the code. The BrokeredMessage contains a lot of metadata that you would normally want to extract. I’m just using the EnqueueTime as a sample. One interesting property is DeliveryCount: this is the number of times this message has been picked up for delivery but then just dropped (or at least not completed). SequenceNumber is another one; it can be used to check the ordering of the received messages if that is important in your application.
Ok. We have the message received alright. Since we didn’t go for the XML formatting but used JSON instead, we cannot use just a one-liner for the deserialization. Instead we take the body in the form of a Stream. Remember, the GetBody method checks for the property “body” in the message.
As you see on line 11 we add a property named “body” to hold the user request.
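The post's C# snippets are not reproduced here, but the pattern it describes (reading the message body as a stream and deserializing the JSON stored under its "body" property into a plain container class) can be sketched in Python with just the standard library. The Order fields below are hypothetical stand-ins for the POCOs from the previous post:

```python
import io
import json
from dataclasses import dataclass


@dataclass
class Order:
    # Hypothetical container class mirroring the POCOs from the previous post.
    order_id: int
    product: str
    quantity: int


def deserialize_body(body_stream: io.IOBase) -> Order:
    """Read the message body as a stream and deserialize the JSON payload."""
    payload = json.load(body_stream)
    # The sender stored the user request under a "body" property.
    data = payload["body"]
    return Order(order_id=data["orderId"],
                 product=data["product"],
                 quantity=data["quantity"])


# Simulate a received message body as a byte stream.
raw = json.dumps({"body": {"orderId": 42, "product": "widget", "quantity": 3}}).encode()
order = deserialize_body(io.BytesIO(raw))
print(order.product)  # widget
```

The same shape applies in C#: take the body as a Stream, parse the JSON, and map the "body" object onto your container classes.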
An excerpt from the documentation:
“Completes the receive operation of a message and indicates that the message should be marked as processed and deleted.”
And in our catch we just send it to the deadletter queue where all the garbage ends up. A bit crude but in this sample it is fine.
A few notes at the end:
In the Azure Portal you can set the default behaviour of the queue, like the DeliveryCount I mentioned.
But also, and this is a tip, the lock duration. When you pull the message out of the queue to process it, you have the amount of time specified here at your disposal. After that time has elapsed the message is unlocked, and any message.Complete() after that will fail. The DeliveryCount will be increased on the message and it is ready to be retrieved again (by you or another application). I’m mentioning this because during debugging you will probably want to increase this to a very large number to avoid problems.
Another tip to make this work is that you will have to install the Azure SDK to get it to compile. Nowadays this is done preferably through NuGet.
Right-click on your project, select “Manage NuGet packages…” and search for “service”. There you will find the Azure Service Bus SDK. Click install and you are good to go.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500158.5/warc/CC-MAIN-20230205000727-20230205030727-00132.warc.gz
|
CC-MAIN-2023-06
| 3,803
| 23
|
https://www.antisip.com/windows-demo
|
code
|
Windows Desktop Demo: Emansip
We offer a demo on Windows that will help you evaluate amsip key features. This application will help you test most of the scenarios you can implement with the amsip SDK.
Download emansip here!
- initiate an audio and video call
- add video, switch from static image (privacy) to real camera
- basic friends presence using SIP/SIMPLE
- test low and high video quality (upload and download settings are configurable and negotiated by the stack)
- test conference audio & video calls
- configure a secondary account to test YOUR server (then dial sip:email@example.com OR digit-only numbers to use your server)
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400210996.32/warc/CC-MAIN-20200923113029-20200923143029-00664.warc.gz
|
CC-MAIN-2020-40
| 650
| 9
|
https://forum.newsblur.com/t/dont-automatically-open-first-article-when-switching-folder/1802
|
code
|
I tried searching for this in multiple ways but I didn’t come across anything like what I intend, yet I can’t believe no one else has previously requested it.
Basically, when I change folders it automatically opens the first story in the article list. I often switch folders looking for any new articles and interesting headlines but don’t always read the first one as soon as I switch.
It would be nice to either disable the automatic story opening as soon as the folder is selected (maybe show the stats/splash page instead) or add a significant timeout delay before that article is read - but only that one. I select uninteresting articles to mark them as read so having to wait 10-30 seconds for each story would become tedious.
Thanks for all your hard work with NewsBlur, I’m glad you survived through the teething problems after the Reader news broke.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572043.2/warc/CC-MAIN-20220814143522-20220814173522-00265.warc.gz
|
CC-MAIN-2022-33
| 865
| 4
|
https://www.warriorforum.com/programming/1361429-how-make-online-paid-service.html?utm_source=internal&utm_medium=discussion-list&utm_campaign=feed&utm_term=title
|
code
|
I was wondering, how can I make an online paid service?
The thing is, I have a piece of software that I want to monetize. The software is a collection of tools for a chemistry/medical lab;
specifically, it processes spectrograms/chromatograms, with functions like noise reduction, baseline drift correction,
advanced smoothing, etc...
The user upon payment will be able to upload the files to process and select the tools to use
I was thinking about running the software serverside, and using Paypal API's on the website to control the payments and the UI,
but since I never tried to do something like this I'm searching for hints tutorials.. anything =D
thanks in advance
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318986.84/warc/CC-MAIN-20190823192831-20190823214831-00393.warc.gz
|
CC-MAIN-2019-35
| 654
| 8
|
https://docs.onapp.com/pages/viewpage.action?pageId=35652143
|
code
|
This section lists the recommended network configurations for an OnApp Cloud installation.
Server Config Reminder - supported versions of the servers
Suggested Specifications for OnApp
Types of Cloud Service with OnApp
Provisioning network is not required for clouds using Integrated Storage with dedicated backup servers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00263.warc.gz
|
CC-MAIN-2023-40
| 322
| 5
|
https://discussion.dreamhost.com/t/mail-app-imap-config-with-ssl/21726
|
code
|
How do I configure Mail.app for my IMAP account with SSL? Before checking the SSL box in “Server Settings,” sending and receiving worked fine. Now I can receive but not send.
Here’s the setup.
-Outgoing Mail Server: (mail.mydomain.com, I assume)
-Server port: (587? 993?)
-Use Secure Sockets Layer (SSL)
-User Name: (mine)
Essentially, what’s my outgoing mail server, port, and authentication method?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991224.58/warc/CC-MAIN-20210516140441-20210516170441-00483.warc.gz
|
CC-MAIN-2021-21
| 408
| 7
|
https://syntaxfix.com/question/13931/how-to-create-cron-job-using-php
|
code
|
I'm new to using cron job. I don't even know how to write it. I have tried to search from internet, but I still don't understand it well. I want to create a cron job that will execute my code every minute. I'm using PHP to create it. It is not working.
run.php (Code that will be executed every minute)
<?php echo "This code will run every minute"; ?>
<?php $path = dirname(__FILE__); $cron = $path . "/run.php"; echo exec("***** php -q ".$cron." &> /dev/null"); ?>
Suppose that these two files are in the same folder.
Is the code that I did wrong? If wrong, please kindly tell me how to fix it.
This is the best explanation with code in PHP I have found so far:
Although the syntax of scheduling a new job may seem daunting at first glance, it's actually relatively simple to understand once you break it down. A cron job will always have five columns each of which represent a chronological 'operator' followed by the full path and command to execute:
* * * * * /home/path/to/command/the_command.sh
Each of the chronological columns has a specific relevance to the schedule of the task. They are as follows:
- Minutes represents the minutes of a given hour, 0-59 respectively.
- Hours represents the hours of a given day, 0-23 respectively.
- Days represents the days of a given month, 1-31 respectively.
- Months represents the months of a given year, 1-12 respectively.
- Day of the Week represents the day of the week, Sunday through Saturday, numerically, as 0-6 respectively.
So, for example, if one wanted to schedule a task for 12am on the first day of every month it would look something like this:
0 0 1 * * /home/path/to/command/the_command.sh
If we wanted to schedule a task to run every Saturday at 8:30am we'd write it as follows:
30 8 * * 6 /home/path/to/command/the_command.sh
There are also a number of operators which can be used to customize the schedule even further:
- Commas are used to create a comma-separated list of values for any of the cron columns.
- Dashes are used to specify a range of values.
- Asterisks are used to specify 'all' or 'every' value.
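To make the five-column semantics concrete, here is a small illustrative Python sketch (not part of the original answer) that checks whether a given time matches a cron expression, supporting `*`, single values, comma lists, and dash ranges:

```python
from datetime import datetime


def field_matches(field: str, value: int) -> bool:
    """Return True if one cron field (e.g. '*', '5', '1,15', '9-17') matches value."""
    if field == "*":
        return True
    for part in field.split(","):
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False


def cron_matches(expr: str, when: datetime) -> bool:
    """Check a 5-field cron expression (minute hour day month weekday) against a datetime."""
    minute, hour, day, month, weekday = expr.split()
    # Cron weekdays run 0-6 with Sunday=0; Python's weekday() is Monday=0.
    cron_weekday = (when.weekday() + 1) % 7
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(day, when.day)
            and field_matches(month, when.month)
            and field_matches(weekday, cron_weekday))


# Every Saturday at 8:30am, as in the answer's example (June 3, 2023 was a Saturday).
print(cron_matches("30 8 * * 6", datetime(2023, 6, 3, 8, 30)))  # True
```

This is only a model of the matching rules; the real cron daemon also supports step values (`*/5`) and names, which are omitted here.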
Visit the link for the full article; it explains:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646257.46/warc/CC-MAIN-20230531022541-20230531052541-00741.warc.gz
|
CC-MAIN-2023-23
| 2,108
| 18
|
https://www.howtogeek.com/286931/are-ntfs-compressed-files-decompressed-to-disk-or-memory/
|
code
|
If you are looking for ways to tweak your Windows system to conserve disk space, you might be looking at NTFS compression as an option. But if you choose this option, then how does the decompression process work? Today’s SuperUser Q&A post has the answer to a curious reader’s question.
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
SuperUser reader CausingUnderflowsEverywhere wants to know if NTFS compressed files are decompressed to disk or memory:
How does NTFS decompression work in Windows? According to Microsoft, NTFS decompression is done by expanding the file, then using it. That sounds right, but my question is how does this process occur technically?
Does Windows load the compressed file into memory, expand it in memory, then read it from memory? Or does it load the compressed file into memory, expand it to disk or memory, write it to disk, then read it?
I am trying to figure out if I can improve my computer’s performance by using NTFS compression. That way, a slow hard drive or SSD that is unable to handle that many write operations will always have less data to write and read, and the powerful processor that is idling most of the time can decompress the files and improve my computer’s storage speed and health.
Are NTFS compressed files decompressed to disk or memory?
SuperUser contributor Ben N has the answer for us:
Windows decompresses files into memory. Doing it to disk would completely obliterate any speed improvements and would cause a lot of unnecessary disk writing. See the end of this Microsoft blog article on NTFS sparse files and compression.
Of course, if you are low on memory, the memory used by the decompression process could cause other memory to be paged out and written to disk in the page file. Fortunately, only the chunks containing sections that your programs actually read will be decompressed. NTFS does not have to decompress the whole thing if you only need a few bytes.
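NTFS uses its own LZ-based format over fixed-size compression units, but the chunking idea, decompressing in memory only the chunks a read actually touches, can be illustrated with a rough Python sketch. Here zlib and a 64 KB chunk size merely stand in for the real NTFS algorithm and unit size:

```python
import zlib

CHUNK = 64 * 1024  # illustrative chunk size; NTFS compression units differ


def compress_chunks(data: bytes) -> list[bytes]:
    """Compress each chunk independently so any one chunk can be expanded alone."""
    return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]


def read_range(chunks: list[bytes], offset: int, length: int) -> bytes:
    """Decompress, in memory, only the chunks that overlap the requested range."""
    first, last = offset // CHUNK, (offset + length - 1) // CHUNK
    out = bytearray()
    for idx in range(first, last + 1):
        out += zlib.decompress(chunks[idx])
    start = offset - first * CHUNK
    return bytes(out[start:start + length])


data = bytes(range(256)) * 2048   # 512 KB of sample data
chunks = compress_chunks(data)
# Reading 16 bytes near the end touches only one chunk, not the whole file.
print(read_range(chunks, 500_000, 16) == data[500_000:500_016])  # True
```

The takeaway matches the answer above: nothing is written back to disk during reads, and only the touched chunks cost CPU time.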
If your SSD is fast, you are probably not going to get any speed improvements from NTFS compression. It is conceivable that the time your processor spends decompressing data plus the time your disk spends reading the compressed data could add up to be more than the time your SSD takes to read the uncompressed data.
It also depends on the size of the files you work with. The minimum size of a compressible file ranges from 8 – 64 KB, depending on your cluster size. Any files less than that in size will not be compressed at all, but a tiny amount of bookkeeping would be added. If you do a lot of writing to compressed files, you could see a lot of variance in speed due to the compression algorithm used (LZ).
Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
Image Credit: Jannis Andrija Schnitzer (Flickr)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224651325.38/warc/CC-MAIN-20230605053432-20230605083432-00536.warc.gz
|
CC-MAIN-2023-23
| 3,335
| 20
|
http://lists.seas.upenn.edu/pipermail/types-announce/2013/003816.html
|
code
|
[TYPES/announce] GlynnFest Workshop, May 31st and June 1st, Cambridge University Computer Laboratory
hilde at itu.dk
Tue May 14 05:25:01 EDT 2013
We are happy to announce a workshop to honour Glynn Winskel on the occasion of his 60th birthday.
The workshop will take place at Cambridge University Computer Laboratory on May 31st and June 1st.
The speakers will be:
- Samson Abramsky, University of Oxford
- Henrik R. Andersen, Configit
- Steve Brookes, Carnegie-Mellon University
- Pierre-Louis Curien, University of Paris 7
- Olivier Danvy, University of Aarhus
- Marcelo Fiore, University of Cambridge
- Thomas T. Hildebrandt, IT University of Copenhagen
- Martin Hyland, University of Cambridge
- Kim G. Larsen, University of Aalborg
- Ugo Montanari, University of Pisa
- Mogens Nielsen, University of Aarhus
- Prakash Panangaden, McGill University
- Andy Pitts, University of Cambridge
- Gordon Plotkin, University of Edinburgh
- Vladimiro Sassone, University of Southampton
Participation to the workshop is open, but attendees are kindly requested to register in advance.
More details about the workshop venue and program and about the registration procedure can be found at the following link
Best wishes on behalf of the organizing committee,
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00641.warc.gz
|
CC-MAIN-2023-14
| 1,366
| 27
|
https://www.flatpyramid.com/help/what-is-flat-pyramid/
|
code
|
What Is Flat Pyramid?
Flat Pyramid is an organization that specializes in the distribution of 3D models and other digital content and interactive media. It is a crowd-sourced marketplace for augmented-reality, 3D-printing, mobile-ready, and online-game-ready 3D models.
Flat Pyramid is an intermediary platform where digital artists generate revenue from selling their 3d content. Flat Pyramid’s growing library has tens of thousands of products for purchase or for free.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813431.5/warc/CC-MAIN-20180221044156-20180221064156-00183.warc.gz
|
CC-MAIN-2018-09
| 502
| 3
|
http://facetimeforandroidd.com/microsoft-exchange/microsoft-exchange-information-store-service-error-0x80004005.php
|
code
|
This was the last missing piece of the puzzle. I tried to manually start this service and I get this error: "Windows could not start the Microsoft Exchange Information Store on Local Computer." The setup log also shows: [6/5/2007 3:14:05 PM] Will wait '10000' milliseconds for the service 'MSExchangeIS' to reach status 'Running'. [6/5/2007 3:14:15 PM] Service 'MSExchangeIS' failed to reach status. Another related event reads: "Failed to find the working directory parameter from the registry - Error 0x80004005." These related events may help you find the root cause of this error.
In my case the cause was a registry key used by the Intelligent Message Filter (IMF). This key is outlined in: http://www.petri.co.il/installing_imf_with_exchange_2003_sp2.htm. Now IMF is not available in the tabs, but the store is mounting and running again. (I had tried going into the key to make sure the permissions were set to run with the system user, and that alone did not work.)
If the store still fails to start, the usual causes are missing security permissions:
1. Run Policytest.exe to verify that all domain controllers grant the "Manage auditing and security log" permission to the Exchange Enterprise Servers group. If some or all domain controllers do not have the correct permissions, assign that permission to the group, then run the setup /domainprep command again. Alternatively, reset the Exchange Enterprise Servers default permissions at the domain level and wait for the domain controllers to replicate the changes throughout the domain. Note that the Policytest.exe tool cannot be used in a native Exchange 2007 environment.
2. Manually grant the Local System account the "Generate security audits" right on one of the following policies: the domain controller's policy, the domain policy, or the Local Machine Security policy. In the right pane, double-click Generate security audits, click Add, enter the MACHINENAME$, and then click OK two times. Then try to start the Information Store service again. For more information about why Exchange 2000 Server and Exchange Server 2003 use the Local System account to start Exchange services, see Microsoft Knowledge Base article 239762.
When this condition is true, one or more Exchange Server related services will not start, so review Operations Manager (or the Events tab of the MOM Operator Console) for detailed information about the cause. If you are not already doing so, consider running the tools that Microsoft Exchange offers to help administrators analyze and troubleshoot their Exchange environment; they can also help you identify and resolve performance issues, improve mail flow, and better manage disaster recovery scenarios. Useful counters to collect while troubleshooting include MSExchangeIS Client: JET pages preread/sec, MSExchange Database: I/O Database Writes/sec, MSExchangeIS Client: RPCs succeeded, and MSExchangeIS Mailbox: Messages Queued for Submission.
For a clean uninstall, in ADSIEdit go to Configuration > Services > Microsoft Exchange, expand the Organization object, and then expand the Recipients container. Because your organization may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review your organization's guidelines first.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657510.42/warc/CC-MAIN-20190116134421-20190116160421-00140.warc.gz
|
CC-MAIN-2019-04
| 5,876
| 14
|
http://bydgoskiemeble.net/neapolitan-wafer-nleh/0e7d29-git-and-github
|
code
|
Here we verify that RStudio can issue Git commands on your behalf. Git and GitHub are two popular terms used regularly in the Coding Platforms. When you connect to a GitHub repository from Git, you'll need to authenticate with GitHub using either HTTPS or SSH. When I was in my first Computer Science course, I never knew what version control was or how Github even worked. Incomplete. Next steps: Authenticating with GitHub from Git. GitHub is a Git-based repository hosting platform with 40 million users (January 2020) making it the largest source code globally. Set up ssh on your computer. In later chapters and in live workshops, we revisit these operations with much more explanation. By downloading, you agree to the Open Source Applications Terms. And Github is just a … Do that now by using the git commit command. Add to Trailmix. Set your username in Git. Using Git. New Branches are for bug … You can look at other people’s code, identify issues with their code and even propose changes. Open RStudio. GitHub Learning Lab offers free interactive courses that are built into GitHub with instant automated feedback and help. The first of these will enable colored output in the terminal; the second tells git that you want to use emacs. Learn Where GitHub Fits in the Development Lifecycle ~10 mins. Git is installed locally on the system. Git vs. TFS. Using GitHub. Download for macOS Download for Windows (64bit) Download for macOS or Windows (msi) Download for Windows. Any other branch is a copy of the master branch (as it was at a point in time). Git and GitHub are used frequently by developers everywhere. GitHub is a web-based platform that incorporates git’s version control features so they can be used collaboratively. 18 Git and GitHub. GIT and GitHub Git is a distributed version control software which you need to install on your local system in order to use it. Chapter 12 Connect RStudio to Git and GitHub. 
Now that you have what you need installed locally, let’s create the repository that will hold your new website. GitHub Desktop Focus on what matters instead of fighting with Git. This course is designed to jump right into showing how Git and GitHub work together, focusing on the Git basic workflow. TFS users “check-in” which invokes file locking whereas Git users do commits based on distributed full versions with difference checking. Version control is an essential skill for developers to master, and Git is by far the most popular version control system on the web. GitHub is a treasure trove of some of the world's best projects, built by the contributions of developers all across the globe. Learn Why Version Control Is Important for Team-Based Development ~10 mins. Git Started with GitHub. When you commit changes, you are telling Git to make a snapshot of this state in the repo. Assuming that you’ve gotten local Git to talk to GitHub, this means you’ll also be able to pull from and push to GitHub from RStudio. We've released a crash course video from Gwen Faraday that will teach you the basics of Git In this fast-paced course, author Ray Villalobos shows you how to install Git and use the fundamental commands you need to work with Git projects: moving files, managing logs, and working with branches. 39.4 Overview of Git. Anyway, let’s start with our list: 1. 3. Download and install Git. Add to Favorites. Photo by Matty Adame on Unsplash. Git is a command-line tool: GitHub is a graphical user interface: 3. Git and GitHub Basics. Happy Git aims to complement existing, general Git resources by highlighting the most rewarding usage patterns for data science. The Git Started with GitHub. Learn about version control systems and practice using Git and GitHub. This course is designed to jump right into showing how Git and GitHub work together, focusing on the Git basic workflow. I was blindly doing commits for school work and hoping it went through. 
GitHub is a web-based service for version control using Git. Download and install RStudio (1.1.383 or higher). It also includes project and team management features, as well as opportunities for networking and social coding. For Ubuntu: First, update your packages. By default a repository has a master branch (a production branch). 2. git push origin master -> pushes your files to github master branch git push origin anyOtherBranch -> pushes any other branch to github. GitHub official web page Git installation. Create the remote repository on GitHub. GitHub is a service. Git is a version control system, a tool that tracks changes to your code and shares those changes with others.Git is most useful when combined with GitHub, a website that allows you to share your code with the world, solicit improvements via pull requests and track issues. I like Roger Peng’s guide to setting up password-less logins.Also see github’s guide to generating SSH keys.. Look to see if you have files ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub. Set your commit email address in Git. sudo apt-get install git. Git comes with built-in GUI tools (git-gui, gitk), but there are several third-party tools for users looking for a platform-specific experience. 9: Use gists to share snippets and pastes GitHub “gists”—shared code snippets—are not a Git feature, but they use Git. The use of Git/GitHub in data science has a slightly different vibe from that of pure software develoment, due to differences in the user’s context and objective. Git is … To get started, you can create a new repository on the GitHub website or perform a git init to create a new repository from your project directory.. You will learn how to set up Git for Windows and Mac OS X and then how to use Git’s help command. In the end of this course you'll be able to manipulate git and github like a master , and word in team very good and fine tags ~1 hr 50 mins. Download and install the latest version of Git. 
Use Git not just in terminal but also in graphical user interfaces like GitHub Desktop, SourceTree, Visual Studio Code Learn different GIt objects - blobs, trees, commits and annotated tags Create local and remote Git repositories If you’re serious about software development, you need to learn about Git. git --version. This course is designed to reduce academic theory and to key concepts and focus on the basic tasks in Git in order to be productive. Git is the magic sauce that allows you to track and host versions of files on Github. 2. Git is maintained by linux. Git is a command-line tool, but the center around which all things involving Git revolve is the hub—GitHub.com—where developers store their projects and network with like minded people. Committing Changes. Familiarize yourself with Git by visiting the official Git project site and reading the ProGit ebook.You can review the Git command list or Git command lookup reference while using the Try Git simulator.. Originally, GitHub launched in 2008 and was founded by Tom Preston-Werner, Chris Wanstrath, and PJ Hyett. Git/GitHub tip No. The repository consists of three ‘trees.’ First is the working directory, which holds the actual files.The second one is the index or the staging area. These tools are used at most software companies and so they are important to understand if you want a job in the software industry. Connecting over HTTPS (recommended) Git GitHub; 1. GitHub is hosted on the web 4. sudo apt update. Basically, it is a social networking site for developers. GitHub, on the other hand, is a website that hosts Git repositories in a central server to share them with the rest of the world. Now we need to install Git's tools on our computer. Let’s go over a few of the main reasons that geeks like to use GitHub, and learn some terminology along the way. 
git log → see all your commits. git checkout commitObject (first 8 characters) file.txt → revert file.txt back to this previous commit. The -m option tells Git to use the commit message that follows. 4. Next, install Git and GitHub with apt-get. If you don't use -m, Git will bring up an editor for you to create the commit message. In general, you want your commit messages to reflect what has changed in the commit. In other words, you use the commands of Git to track versions of your files. Basically, it is a social networking site for developers. The main actions in Git are to: pull changes from the remote repo, in this case the GitHub repo; add files, or as we say in the Git lingo, stage files; commit changes to the local repo; push changes to the remote repo, in our case the GitHub repo. To effectively permit version control and collaboration in Git, files move across four different areas. The GitHub workflow with Git LFS and file locking support, all within Unity. 1. View GUI Clients → Logos. You'll also learn to use Git and GitHub, troubleshoot and debug complex problems, and apply automation at scale by using configuration management and the Cloud. Finally, verify that Git is installed correctly. GitHub projects can be made public, and all publicly shared code is freely open to everyone. GitHub is maintained by Microsoft. 6 min read. Setting up Git. If not, create such public/private keys: open a terminal/shell and type: TFS has its own language: check-in/check-out is a different concept. A GitHub branch is used to work with different versions of a repository at the same time. Git is software. For this tutorial you will use Git and RStudio to work with your GitHub repository. This simple, yet extremely powerful platform helps every individual interested in building or developing something big to contribute and get recognized in the open source community. Whether you're new to Git or a seasoned user, GitHub Desktop simplifies your development workflow.
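The add/commit/log commands mentioned above can be tried end to end in a scratch repository (a minimal sketch; it assumes git is installed, and the repository name, file name, and identity are placeholders):

```shell
# Demonstrate the basic Git workflow in a throwaway repository.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "demo@example.com"   # local identity for this repo only
git config user.name "Demo User"
echo "hello" > file.txt
git add file.txt                           # stage the file
git commit -q -m "Add file.txt"            # -m supplies the commit message inline
git log --oneline                          # shows the new commit
```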
We'll use the CLI to communicate with GitHub. Students can expect to learn the minimum needed to start using Git in about 30 minutes. TFS is a centralized version control system, while Git is distributed: everyone has a full copy of the whole repo and its history. TFS users do check-in/check-out with file locking, whereas Git users make commits based on distributed full versions with difference checking. Git is version control software which you need to install on your local system in order to use it; GitHub is a web-based platform that incorporates Git's version control features so they can be used collaboratively, letting developers review other people's code, identify issues with it, and propose changes. A GitHub branch is a copy of the master branch (as it was at a point in time), which lets you work on different versions of a repository at the same time. Verify that RStudio can issue Git commands on your behalf, and make sure you have what you need to authenticate with GitHub using either HTTPS or SSH. Once Git is installed locally, let's create the repository that will hold your new website. GitHub Desktop lets you focus on what matters instead of fighting with Git. This guide complements more general Git resources by highlighting the most rewarding usage patterns for data science.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104506762.79/warc/CC-MAIN-20220704232527-20220705022527-00651.warc.gz
|
CC-MAIN-2022-27
| 15,812
| 2
|
https://www.metaltoad.com/blog/first-class-dependency-objects-ecmascript-harmony
|
code
|
First class dependency objects in ECMAScript Harmony
My favorite perk of Require.JS is how dependencies are sent as arguments to a factory function. Treating dependencies as first class objects makes much more sense to me than polluting the window object - something that happens too much with other dependency management solutions (I'm looking at you, Steal.JS).
First class dependency objects via Require.JS
In this example, helloworld.js and jquery.js are fully loaded first, then passed as objects named helloworld and $, respectively. Once in the factory function, I have free rein over the dependencies' methods, properties, and prototypes.
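The original Require.JS snippet did not survive extraction, so here is a minimal stand-in that mimics how an AMD-style loader hands dependencies to the factory (the loader shim and module contents are invented for illustration, not Require.JS internals):

```javascript
// Toy AMD-style loader (NOT Require.JS internals): maps module paths to
// already-"loaded" values and hands them to the factory as arguments.
function requireShim(deps, factory) {
  const registry = {
    '../src/helloworld.js': { greet: (who) => `hello, ${who}` },
    '../lib/jquery.js': (selector) => ({ selector }), // stand-in for $
  };
  return factory(...deps.map((path) => registry[path]));
}

const result = requireShim(
  ['../src/helloworld.js', '../lib/jquery.js'],
  function (helloworld, $) {
    // The dependencies arrive as plain first class objects, not globals.
    return helloworld.greet($('body').selector);
  }
);
console.log(result); // → "hello, body"
```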
First class dependency objects via ECMAScript Harmony
module helloworld from '../src/helloworld.js';
module $ from '../lib/jquery.js';
This is the same result as Require.JS - the dependencies are loaded asynchronously (Harmony loads modules at compile time), and the dependencies are nicely encapsulated in their respective first class objects.
As it stands, I find the Require.JS syntax to be more readable. The list of dependencies and their mapping to objects is crystal clear. Since I haven't been able to test ECMAScript Harmony modules in a browser yet, the jury is still out on the pleasures of the new module syntax.
It should be noted that in your Require.JS example, without wrapping jQuery in a define that returns jQuery, it does pollute the window object, with window.jQuery.
Tue, 02/21/2012 - 20:15
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00016.warc.gz
|
CC-MAIN-2023-23
| 1,427
| 13
|
https://chapel.discourse.group/t/new-issue-should-we-rename-isfloattype-and-friends-to-something-else/10943
|
code
|
19362, "stonea", "Should we rename isFloatType (and friends) to something else?", "2022-03-04T22:08:23Z"
This is something that came up during the types module review (https://github.com/Cray/chapel-private/issues/3034)
In Chapel we have the following query functions that ask something about a kind of a type:
proc isPrimitiveType(type t) param
proc isPrimitiveValue(e) param
proc isPrimitive(e) param
proc isNumericType(type t) param
proc isNumericValue(e) param
proc isNumeric(e) param
proc isIntegralType(type t) param
proc isIntegralValue(e) param
proc isIntegral(e) param
proc isFloatType(type t) param
proc isFloatValue(e) param
proc isFloat(e) param
Where each kind is defined as being one of the more specific types on the right hand side of this table:
Primitives: nothing, void, bool, <anything numeric>, string, bytes
Numeric: <anything integral>, <anything float>, complex
Integral: int, uint
Float: real, imaginary
Enum: enum
For the most part I think the categories/names are good. The one that sticks out to me is 'float'. As a C programmer I'm used to thinking of float as being a concrete (not a generic) type, so I'm afraid some people might find this confusing.
During the module review meeting we discussed renaming this to 'isRealType' or 'isFloatingPointType' but didn't reach any consensus at the time.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00291.warc.gz
|
CC-MAIN-2022-33
| 1,322
| 8
|
https://www.twosigma.com/businesses/
|
code
|
We discover value in the world’s data.
More than 1,600 people who believe the scientific method is the best way to approach financial markets. Ideas backed up with information. And improved by iteration. That’s Two Sigma.
Expertise across financial services.
Since 2001 we’ve brought advanced data science and technology to a range of financial categories to deliver value to our clients.
We believe that using data and a technology platform, and trying to accumulate as much information as you can to make the best predictions and manage risk effectively, is the right way to go.
Chief Business Officer, Two Sigma
Data science and technology are two of the central forces driving breakthroughs in many industries, and they will be key ingredients in the creation of some of the most important businesses of our lifetime.
Partner, Two Sigma Ventures
Two Sigma brings data expertise to the projects it takes on, but is guided by the nonprofit's experience and knowledge. The innovation is meshing these two together.
Rachael Weiss Riley
Director, Data Clinic
This is an acceleration point. We recognize that this industry, like many, desperately needs a new approach around data science and technology infrastructure.
CEO, Insurance Quantified
The workforce is losing the upper hand. Most businesses are thinking about how to automate away.
Yet it turns out humans are incredibly remarkable machines. Empathy, creativity, and dexterity are unique human skills and we need to incentivize society to think of ways to use unique human talents. This is a business opportunity.
Co-founder and Co-chairman, Two Sigma
We bring Two Sigma’s people, data science skills, and technological know-how to help our non-profit community partners use data and tech more effectively.
Our Two Sigma experts collaborate with the world’s top data scientists on cutting-edge research ideas at the intersection of data science, technology and finance.
We contribute to open source technologies and we believe in open sourcing the tools we’ve developed to help others discover value in the world’s data.
Partnering with talented professors and doctoral students is crucial to our mission of finding value in the world’s data.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00061.warc.gz
|
CC-MAIN-2023-50
| 2,456
| 21
|
http://www.engr.iupui.edu/~skoskie/ECE362/lecture_notes/LNB04_html/text8.html
|
code
|
Mapping of Internal Resources
Internal resource addresses can be changed from the defaults.
Relocatable resources include
- The internal register block,
- Flash EEPROM (B32) or ROM (BE32), and
Resource mapping registers are
- INITRG (0x0011), INITRM (0x0010), and INITEE (0x0012).
Resource mapping is covered in section 5.3 of the M68HC12B Family Data Sheet.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743351.61/warc/CC-MAIN-20181117082141-20181117104141-00178.warc.gz
|
CC-MAIN-2018-47
| 411
| 9
|
http://www.dlxedu.com/askdetail/3/bdb93198464882e2fc68cf2118fd2d13.html
|
code
|
I am trying to define a shape in an XML doc as follows:
<corners android:radius="3dp" />
<stroke android:width="5px" android:color="#000000" />
However, I get the following warnings and errors:
How can I clear these errors?
That XML file should be located in res/drawable. Given the errors you just mentioned I assume it is located in res/layout, which is an incorrect location.
I had the same problem because I had surrounded the same code with a "selector" tag, which Android Studio inserted automatically. "selector" must be changed to "shape" to get the border and avoid the warning.
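Putting the two fixes together, the complete drawable would look roughly like this (the file name and the rectangle shape are assumptions; the corner and stroke values come from the question), saved under res/drawable rather than res/layout:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/drawable/border.xml: the root element must be <shape>, not <selector> -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
    android:shape="rectangle">
    <corners android:radius="3dp" />
    <stroke android:width="5px" android:color="#000000" />
</shape>
```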
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256147.15/warc/CC-MAIN-20190520202108-20190520224108-00557.warc.gz
|
CC-MAIN-2019-22
| 590
| 9
|
http://stackoverflow.com/questions/7551025/use-uint-or-int/7551077
|
code
|
Definitely, I know the basic differences between unsigned integers (uint) and signed integers (int).
I noticed that in .NET public classes, a property called Length is always using signed integers.
Maybe this is because unsigned integers are not CLS compliant.
However, for example, in my static function:
public static double GetDistributionDispersion(int tokens, int positions)
tokens and all elements in positions cannot be negative. If any is negative, the final result is useless. So if I use int both for tokens and positions, I have to check the values every time this function is called (and return non-sense values or throw exceptions if negative values are found???), which is tedious.
OK, then we should use uint for both parameters. This really makes sense to me.
I found, however, that a lot of public APIs almost always use int. Does that mean inside their implementation, they always check the negativeness of each value (if it is supposed to be non-negative)?
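For what it's worth, the usual pattern in int-based APIs is exactly that: validate at the boundary and throw on negative input. A sketch of that pattern (transliterated to Java here purely for illustration, with a placeholder computation in place of the real dispersion formula):

```java
public class Dispersion {
    // Validate-and-throw instead of encoding non-negativity in the type.
    public static double getDistributionDispersion(int tokens, int[] positions) {
        if (tokens < 0) {
            throw new IllegalArgumentException("tokens must be non-negative: " + tokens);
        }
        for (int p : positions) {
            if (p < 0) {
                throw new IllegalArgumentException("position must be non-negative: " + p);
            }
        }
        // Placeholder body; the real dispersion computation is not shown in the source.
        return positions.length == 0 ? 0.0 : (double) tokens / positions.length;
    }

    public static void main(String[] args) {
        System.out.println(getDistributionDispersion(4, new int[] {1, 2})); // → 2.0
    }
}
```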
So, in a word, what should I do?
I could provide two cases:
- This function will only be called by myself in my own solution;
- This function will be used as a library by others in other team.
Should we use different schemes for these two cases?
P.S.: I did do a lot of research, and there is still no reason to convince me not to use uint.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446300.49/warc/CC-MAIN-20151124205406-00103-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 1,302
| 21
|
https://domalab.com/pfsense-squid-vmware/
|
code
|
How many times are the same files downloaded over and over again, maybe from different machines? Not to mention the common web sites frequently accessed every day. One way to reduce network bandwidth and improve response times is by caching content in a temporary location. This article covers the main steps to install and run the pfSense SQUID configuration in a VMware homelab.
There are many benefits for doing so, not least the ability to cache redundant data, like Windows Updates recurring on a number of machines. Certainly, purpose-built update managers like Windows Server Update Services (WSUS) offer far more specific features. But as a more agnostic and flexible approach for a heterogeneous environment like a homelab, the pfSense SQUID configuration is probably one of the most cost-effective options in terms of resources.
pfSense SQUID setup
First steps to proceed with the pfSense SQUID configuration is to download and install the package directly from the pfSense System > Package Manager > Available Packages.
Next is to search for “SQUID” and the first package to install is the one with same name (version 0.4.44_9 at the time of installing the package). Lightsquid and squidGuard will be covered in a different article. Once ready press install.
An additional screen will appear to confirm the package install. Upon confirmation, the selected package is downloaded and automatically installed. Before closing the window it is a good idea to copy the info on the package installation screen into a notepad or similar as it contains interesting extra info including documentation.
Now that the pfSense SQUID package is installed, the next step is to proceed with the first configuration. It is recommended to set up all the desired configurations, save, and then check the box to enable the Squid proxy service with the latest config. The Proxy interface(s) field allows the selection of the individual networks or VLANs where the Web Proxy is enabled; multiple networks can be selected. By default the Squid port is 3128. It can be changed if desired. In general, default settings work for the majority of installs.
In addition, it is also possible to specify for which network pfSense SQUID works as Transparent Web Proxy.
Next step is to move to Local Cache tab. From here the option to define finer details like the Cache Replacement Policy and the associated algorithms. In particular there are the following options:
- Heap LFUDA
Keeps popular objects in cache regardless of their size and thus optimizes byte hit rate at the expense of hit rate.
- Heap GDSF
Optimizes object-hit rate by keeping smaller, popular objects in cache.
- Heap LRU
Works like LRU, but uses a heap instead.
- LRU
Keeps recently referenced objects (i.e., replaces the object that has not been accessed for the longest time).
Following on the screen it is possible to define the Hard Disk Cache Size. This was already part of the initial considerations when deploying pfSense to VMware. Depending on the size of the environment it might be a good idea to place the cache on a separate disk. Even though this is the first install it is always a good idea to Clear Disk Cache NOW to start from a clean state. The other values can be left as default.
Next item to review and config is the Squid Memory Cache Settings. In this case it is possible to specify the Memory Cache Size in MB and the maximum size for the object to keep in cache. In this case the value is in KB.
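For reference, the GUI fields described above correspond roughly to these squid.conf directives (the directive names are standard Squid; the values are examples only, not recommendations):

```
cache_replacement_policy heap LFUDA          # or: lru, heap GDSF, heap LRU
cache_dir ufs /var/squid/cache 2000 16 256   # hard disk cache, size in MB
cache_mem 512 MB                             # memory cache size
maximum_object_size_in_memory 512 KB         # largest object kept in RAM
```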
In order to optimise and allow a smoother experience to download content from Windows Updates it is possible to enable a sort of dynamic cache with specific instructions. It is a matter of enabling the Squid refresh pattern. A sample below includes the following:
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[i|u|f|p]|[ap]sf|wm[v|a]|dat|zip|psf) 43200 80% 129600 reload-into-ims
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[i|u|f|p]|[ap]sf|wm[v|a]|dat|zip|psf) 43200 80% 129600 reload-into-ims
refresh_pattern -i windows.com/.*\.(cab|exe|ms[i|u|f|p]|[ap]sf|wm[v|a]|dat|zip|psf) 43200 80% 129600 reload-into-ims
refresh_pattern -i microsoft.com.akadns.net/.*\.(cab|exe|ms[i|u|f|p]|[ap]sf|wm[v|a]|dat|zip|psf) 43200 80% 129600 reload-into-ims
refresh_pattern -i deploy.akamaitechnologies.com/.*\.(cab|exe|ms[i|u|f|p]|[ap]sf|wm[v|a]|dat|zip|psf) 43200 80% 129600 reload-into-ims
After all changes a final Save to amend the configuration.
One last setting to check is the Squid Proxy Server service status; if necessary, enable it from the pfSense > Status > Services panel.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00745.warc.gz
|
CC-MAIN-2023-50
| 4,551
| 26
|
https://www.innoworks.tech/blog/react-tailwind-redux
|
code
|
React, Tailwind CSS, and Redux are three powerful tools that have gained significant popularity in the world of modern web development. Together, they offer a robust foundation for building scalable, responsive, and state-driven applications. In this article, we will explore how React, Tailwind CSS, and Redux work together and the benefits they bring to application development.
2. Tailwind CSS: Streamlined Styling and Customization
Tailwind CSS is a utility-first CSS framework that provides a comprehensive set of utility classes. It takes a different approach from traditional CSS frameworks by focusing on providing low-level utility classes that can be combined to create custom styles. Tailwind CSS enables rapid prototyping and easy customization, allowing developers to create unique and visually appealing user interfaces. It eliminates the need for writing custom CSS, resulting in faster development and streamlined styling.
Benefits of Using React, Tailwind CSS, and Redux Together
By combining React, Tailwind CSS, and Redux, developers can leverage the strengths of each tool to create powerful applications. React's component-based architecture allows for reusable and modular code, Tailwind CSS provides a streamlined approach to styling and customization, and Redux simplifies state management and ensures data consistency across the application. Together, they enable developers to build scalable, responsive, and state-driven applications that deliver a smooth user experience.
Efficient Development Workflow
React, Tailwind CSS, and Redux work seamlessly together, offering an efficient development workflow. With React's component-based structure, developers can easily integrate Tailwind CSS classes within their components to style and customize the UI. Redux provides a centralized store to manage application state, making it straightforward to connect React components to the Redux store and access the required data.
Ecosystem and Community Support
React, Tailwind CSS, and Redux have vibrant communities and extensive ecosystems. There are numerous libraries, tools, and resources available that can enhance the development process and address specific needs. The React ecosystem, for example, offers libraries like React Router for handling routing, React Testing Library for testing components, and React Developer Tools for debugging.
Reusability and Maintainability
The component-based nature of React promotes reusability, allowing developers to create independent and reusable components that can be used across different parts of the application. This reusability leads to a more efficient and maintainable codebase, reducing redundancy and making it easier to add new features or make updates.
Scalability and Performance Optimization
React's virtual DOM and reconciliation algorithm enable efficient rendering, resulting in better performance and scalability. Additionally, Tailwind CSS's utility-first approach ensures that only the necessary styles are applied, reducing the size of the CSS file and improving load times. Redux's unidirectional data flow pattern simplifies state management and makes it easier to optimize performance by minimizing unnecessary re-renders.
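The unidirectional flow Redux enforces can be boiled down to a few lines (a hand-rolled store for illustration, not the real Redux API surface):

```javascript
// Minimal store: state changes only via dispatch(action) -> reducer.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // reducer computes the next state
      listeners.forEach((listener) => listener());
    },
    subscribe(listener) { listeners.push(listener); },
  };
}

// Reducer: a pure function of (state, action) -> new state.
const counter = (state, action) =>
  action.type === 'INCREMENT' ? state + 1 : state;

const store = createStore(counter, 0);
store.subscribe(() => console.log('state is now', store.getState()));
store.dispatch({ type: 'INCREMENT' }); // prints "state is now 1"
```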
Developer Productivity and Collaboration
React's component-based architecture, along with the utility classes provided by Tailwind CSS, speeds up development and allows for better collaboration among team members. Developers can work on different components in parallel, and the predefined utility classes in Tailwind CSS enable consistent styling across the application. Redux's centralized state management further enhances productivity by providing a clear structure for data flow and enabling effective debugging.
Continuous Improvement and Support
React, Tailwind CSS, and Redux have active developer communities and are continually improved and updated. New features, performance optimizations, and bug fixes are regularly released, ensuring that developers have access to the latest enhancements and best practices. This active support and community involvement make React, Tailwind CSS, and Redux reliable and future-proof choices for application development.
In conclusion, the combination of React, Tailwind CSS, and Redux provides developers with a powerful stack for building modern, scalable, and state-driven applications. Their seamless integration, extensive ecosystems, and focus on reusability, performance optimization, and developer productivity make them ideal tools for both small and large-scale projects. By harnessing the capabilities of React, Tailwind CSS, and Redux together, developers can create highly interactive, responsive, and efficient applications that deliver an outstanding user experience.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474641.34/warc/CC-MAIN-20240225171204-20240225201204-00882.warc.gz
|
CC-MAIN-2024-10
| 4,744
| 10
|
http://www.aikiweb.com/forums/showpost.php?p=262419&postcount=3
|
code
|
I turn away simply so I don't get my face kicked. Though come to think of it, having my face kicked may be an improvement on my looks.
As for staying "live", well .... one could argue you shouldn't have to SEE it with your eyes to stay "live". What if you're blinded or it's pitch black?!? You should be able to "feel" the other person's energy and connection or lack thereof.
Good point about getting kicked - but then, isn't getting hit in the face preferable (by some small degree) to getting hit in the back of the head?
And if you can't see, but detect an opening and move into it, then you could move straight into a strike; whereas if you can see where you're going...
I know what you're saying though.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865411.56/warc/CC-MAIN-20180523024534-20180523044534-00587.warc.gz
|
CC-MAIN-2018-22
| 708
| 5
|
https://github.com/nvaccess/nvda/wiki/Copyright-headers
|
code
|
In most files (covered by the GNU GPL) use the following copyright header:
Replace <YOUR NAME> with your name, <CREATED YEAR> with the year the file was created, and <LAST UPDATED YEAR> with the year the file was last updated. For new files the last updated year can be missing; e.g. a file created in 2017 might have just #Copyright (C) 2017.
# A part of NonVisual Desktop Access (NVDA)
# Copyright (C) <CREATED YEAR>-<LAST UPDATED YEAR> NV Access Limited, <YOUR NAME>
# This file may be used under the terms of the GNU General Public License, version 2 or later.
# For more details see: https://www.gnu.org/licenses/gpl-2.0.html
Some files are covered by the GNU LGPL, for example the controller client and the examples. This allows someone to use them (or parts of them) as-is. In this case use the following header:
# A part of NonVisual Desktop Access (NVDA)
# Copyright (C) <CREATED YEAR>-<LAST UPDATED YEAR> NV Access Limited, <YOUR NAME>
# This file may be used under the terms of the GNU Lesser General Public License, version 2.1.
# For more details see: https://www.gnu.org/licenses/lgpl-2.1.html
In some files an older style of referring to the contributors is used. Instead of a list of names, the header may just say something like:
Copyright (C) 2006-2016 NVDA Contributors.
We are in the process of trying to convert these to the new style of explicit contributors. That process is a bit tricky: you have to use the git logs to figure out when the file was created and who touched it, and then build a copyright line accordingly.
Note: We decided to remove the filename comment from the top of the file, since it doesn't really add anything, and is a source of error on file rename / copying copyright headers between files.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660258.36/warc/CC-MAIN-20190118172438-20190118194438-00273.warc.gz
|
CC-MAIN-2019-04
| 1,854
| 13
|
https://www.greaterwrong.com/posts/YnBFravZQ5qm6Nmyh/alignment-newsletter-41/comment/ujMB8us6KDR8REJ6e
|
code
|
This seems like it is not about the “motivational system”, and if this were implemented in a robot that does have a separate “motivational system” (i.e. it is goal-directed), I worry about a nearest unblocked strategy.
I am confused about where you think the motivation system comes into my statement. It sounds like you are imagining that what I said is a constraint, which could somehow be coupled with a separate motivation system. If that's your interpretation, that's not what I meant at all, unless random sampling counts as a motivation system. I'm saying that all you do is sample from what's consented to.
But, maybe what you are saying is that in “the intersection of what the user expects and what the user wants”, the first is functioning as a constraint, and the second is functioning as a motivation system (basically the usual IRL motivation system). If that’s what you meant, I think that’s a valid concern. What I was imagining is that you are trying to infer “what the user wants” not in terms of end goals, but rather in terms of actions (really, policies) for the AI. So, it is more like an approval-directed agent to an extent. If the human says “get me groceries”, the job of the AI is not to infer the end state the human is asking the robot to optimize for, but rather, to infer the set of policies which the human is trying to point at.
There’s no optimization on top of this finding perverse instantiations of the constraints; the AI just follows the policy which it infers the human would like. Of course the powerful learning system required for this to work may perversely instantiate these beliefs (ie, there may be daemons aka inner optimizers).
(The most obvious problem I see with this approach is that it seems to imply that the AI can’t help the human do anything which the human doesn’t already know how to do. For example, if you don’t know how to get started filing your taxes, then the robot can’t help you. But maybe there’s some way to differentiate between more benign cases like that and less benign cases like using nanotechnology to more effectively get groceries?)
A third interpretation of your concern is that you’re saying that if the thing is doing well enough to get groceries, there has to be powerful optimization somewhere, and wherever it is, it’s going to be pushing toward perverse instantiations one way or another. I don’t have any argument against this concern, but I think it mostly amounts to a concern about inner optimizers.
(I feel compelled to mention again that I don’t feel strongly that the whole idea makes any sense. I just want to convey why I don’t think it’s about constraining an underlying motivation system.)
But, maybe what you are saying is that in “the intersection of what the user expects and what the user wants”, the first is functioning as a constraint, and the second is functioning as a motivation system (basically the usual IRL motivation system).
This is basically what I meant. Thanks for clarifying that you meant something else.
The most obvious problem I see with this approach is that it seems to imply that the AI can’t help the human do anything which the human doesn’t already know how to do.
Yeah, this is my concern with the thing you actually meant. (It’s also why I incorrectly assumed that “what the user wants” was meant to be goal-directed optimization, as opposed to about policies the user approves of.) It could work combined with something like amplification where you get to assume that the overseer is smarter than the agent, but then it’s not clear if the part about “what the user expects” buys you anything over the “what the user wants” part.
This does seem like a concern, but it wasn’t the one I was thinking about. It also seems like a concern about basically any existing proposal. Usually when talking about concerns I don’t bring up the ones that are always concerns, unless someone explicitly claims that their solution obviates that concern.
[Crawl record: s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066568.16/warc/CC-MAIN-20210412023359-20210412053359-00144.warc.gz, dump CC-MAIN-2021-17, 4,048 bytes, 12 lines]

Source: http://xiaodiancuns.xyz/archives/4950
Amazingnovel Jing Wu Hen – Chapter 2560: The Unique Thing About Wang Xiao?
Novel – The Legend of Futian
Chapter 2560: The Unique Thing About Wang Xiao?
As time passed, a few of the divine hands have been forged. The first to total was an armorer out of the lower ranking Renhuang “division.” Then, cultivators did start to comprehensive forging their divine forearms one just after another. Obviously, some of them was unsuccessful.
No wonder there have been rumors around praoclaiming that Tianyan Community was even eyeing the Donghuang Imperial Palace because the princess didn’t have a Course Partner for cultivation yet still.
Aside from that, they had to announce the effects joyfully, and then give away the important awards.
As the results of the 1st showdown were definitely becoming revealed, the seven wonderful armorers in the atmosphere obtained just completed the smelting from the armoring components, which was suggestive of the main difference with their levels.
If Wang Xiao was this spectacular, he was indeed capable of contending for the place. Of course, he had only just qualified for it.
Xi Chiyao was also an heir of an Ancient God Clan, so naturally, she knew a bit more about Ancient God Clans. If Wang Xiao was already able to…
He said that Wang Xiao had perfectly received the inheritance of Tianyan the Great. That would mean that he was above the Lord of Tianyan City.
These seven Tribulation Plane armorers had only just finished half the steps in forging their weapons. Yet all of the seven Armory Zones had already turned into dominions of fire. Even their forging processes were pleasing to the eyes of the audience.
She didn’t think that the Tianyan City Lord also had such motives in introducing Wang Xiao this time.
Wang Xiao of the City Lord’s Office. He’d been honing his sword for a hundred years, and this sword should be extremely sharp, right? most people in Tianyan City thought. It had been a hundred years since the previous Armorer Competition. Wang Xiao was probably old enough to remember things back then and may have seen the previous Armorer Competition.
He looked toward just where Wang Xiao was.
“It is like I am checking out the performance on the Historic Armorer Good Emperor,” a cultivator from an institution affiliated into the Area Lord’s Company reported. There had been a certain amount of fawning within that thoughts, nevertheless it worked tirelessly on the area Lord’s Place of work cultivators.
When the armorer’s amount wasn’t high enough, along with his Direction Fireplace wasn’t sufficiently strong enough, he wouldn’t even be capable of smelt the greater-tier armoring elements, let alone implement the armoring.
The highest armorer would, of course, come up in the Tribulation Plane cultivators. The initial place will bring away four Sub-divine Hands. But depending on the seem from the circumstance now, Wang Xiao might be getting this done himself.
Over this, that they had the capability to get a Two-tribulation Sub-divine Arm.
Unexpectedly, the City Lord’s Office lost in every one of the important competitions in the seventh, eighth, and ninth level upper-ranking Renhuang divisions and couldn’t defend the championship title. This made cultivators from the City Lord’s Office lose a lot of face, but all they could do was swallow this defeat.
But even then, throughout the ultimate showdown this time, you could begin to see the dominating advantages how the Town Lord’s Office has around the world of armory. Should the unfamiliar cultivator didn’t show up, they are able to probably clinch substantially more variety styles.
Not just that, that they had to broadcast the results joyfully, and after that share the important rewards.
Provided that Wang Xiao was the brightest armorer of this competition, everyone else wasn’t that important. Even if they lost in the ninth-level Renhuang division, they could make up for it. Wang Xiao’s splendour would cover everything else and make people overlook the failure of others.
Based on the suggestions displayed within Tianyan Community, it was subsequently probably pretty next to the facts. Her imagine was probably right. In that case, she was setting up to secure a little nervous for Ye Futian.
Interior Tianyan Metropolis, where all was noiseless, a great number of people today searched up toward the sky.
One right after another, the cultivators completed forging their weaponry; all eight arenas concluded the competition, and the winners had emerged.
In addition, the ninth-point Renhuang section seemed to be dangerous. The armorer who beaten Gents Yan have also been harmful the position of the armorer in the Town Lord’s Workplace.
Xi Chiyao’s cardiovascular quivered. Does she, sad to say, suppose it appropriate?
[Crawl record: s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710813.48/warc/CC-MAIN-20221201121601-20221201151601-00467.warc.gz, dump CC-MAIN-2022-49, 5,648 bytes, 31 lines]

Source: http://termotools.com/equivalent-airspeed/repair-equivalent-airspeed-error.php
Equivalent Airspeed Error
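For context, the relation this page's title gestures at is standard aerodynamics (general background, not taken from this page): equivalent airspeed is true airspeed scaled by the square root of the density ratio, with $\rho_0$ the ISA sea-level density of 1.225 kg/m³.

```latex
V_{EAS} = V_{TAS}\,\sqrt{\frac{\rho}{\rho_0}}
```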
[Recoverable fragments from the interleaved forum posts on this page: a desktop PC whose LEDs and fans come on but whose screen stays blank after a restart; a Sprint SmartView 595U air card that shows "No Device" despite installing both of Sprint's WWAN controller packages; a missing C-Media CMI8738 audio driver (suggested download: http://download.cnet.com/C-Media-CMI8738-WDM-Driver-Windows-XP/3000-2120_4-10495782.html); a Toshiba Portege M200 tablet with no built-in optical drive, whose recovery disk requires one of the external CD/DVD drives listed on Toshiba's support site and whose BIOS can boot from HDD, FDD, CD-ROM, or LAN; an AVG 8.5 install that blocked logging on in normal mode until removed; and a budget build (about $800 CAD) around an Athlon 64 X2 4000+, 2x 1 GB DDR2 RAM, a Radeon HD 4850 1 GB AGP card, and an Antec EarthWatts EA-430 430 W power supply, with temps not going past 50 °C.]
[Crawl record: s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161902.89/warc/CC-MAIN-20180925163044-20180925183444-00167.warc.gz, dump CC-MAIN-2018-39, 5,167 bytes, 20 lines]

Source: http://pushingpodio.globi.ca/using-google-drive-to-scan-documents.php
Using Google Drive to Scan Documents
I’ve recently discovered the power of the scan option in the Google Drive app for Android. The scans actually turn out pretty decently. Just when I thought this was great, I discovered that you can open a scanned PDF from GD using Google Docs, and the whole thing is OCR’d into text. If we can pipe this all together into a single workflow, that would be awesome.
The goal is to be able to scan a document using the GD app, and have it appear in Podio along with a text version. Automagically.
The first thing we’ll need for this is a Scans app in Podio. The fields the app needs would be:
- Title - used for the name of the PDF
- Transcription - multi-line text field for the pretty OCR version of the document
- Plain Text - multi-line text field for a plain text version of the document
- File ID - single-line always-hidden text field to store the Google Drive file ID
With my setup, I also have a folder in GD called “Scans”. I don’t want to have to select a folder when scanning from the app, and just want GF to move my PDF to the Scans folder automatically once it’s been processed. The ID of this Scans folder is required for the flow, so just make a note of it for now.
To trigger the process you’ll also need Zapier or IFTTT or other automation platform that can trigger on Google Drive files. I chose IFTTT for this exercise, but the principle will be similar if you’re using another automation platform.
We’re going to need a webhook in GF to start the process. Create a new webhook and make a note of its URL:
Now go to IFTTT (or Zapier or whatever) and create a new applet to trigger when a new GD file is added and to do a POST to the GF webhook address:
Now open up the GD app on your mobile device and scan something. Anything. Doesn’t matter what you scan. The point is just to get the trigger to fire and for GF to receive some payload.
Once the trigger has fired, go back to your webhook flow in GF and click on the “(refresh)” link in the trigger. You should see the result of the last event and the variables that were passed:
The rest of the webhook flow is pretty straightforward:
- Make sure the filename starts with “Scanned_”
- Create a new item in the Scans app passing the file ID
Note that the code to parse out the file ID from the url is
preg_match_gf("/id=([a-zA-Z0-9-_]+)/", [(WebHook) url], 1).
Important: make sure that the hook event is turned on for the create action so that the subsequent flow is triggered.
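For readers who want to check the extraction outside GlobiFlow, here is an illustrative Python equivalent of that `preg_match_gf` call (the sample URL below is made up; any URL containing an `id=` query parameter works the same way):

```python
import re

def extract_drive_file_id(url):
    """Pull the Google Drive file ID out of a webhook URL, mirroring
    preg_match_gf("/id=([a-zA-Z0-9-_]+)/", url, 1)."""
    match = re.search(r"id=([a-zA-Z0-9_-]+)", url)
    return match.group(1) if match else None

print(extract_drive_file_id(
    "https://drive.google.com/open?id=1aBcD_ef-Gh234"))  # 1aBcD_ef-Gh234
```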
The Main App Flow
In the Scans app, we now need a flow to do the heavy lifting. This flow should trigger on create (and will be triggered by the previous webhook flow):
We’ll want this flow to:
- Copy the file from GD to Podio
- Get the OCR version of the file in HTML
- Move the file to the Scans folder in GD
For extra credit we also convert the HTML version into plain text for easier parsing later on if required.
The flow looks like this:
To achieve the desired result, the flow uses the following ProcFu scripts:
- google_file_to_podio.pf - to copy the PDF file from GD to the Podio item
- google_drive_pdf_to_html.pf - to get the HTML version of the PDF (note that PF will create a temporary Google Docs file and will delete it again afterwards)
- google_drive_move.pf - to move the PDF into the Scans folder in GD
That’s all there’s to it. Now it’s time to try this out.
Find a document to scan, and scan it using the GD app.
I decided to scan the Podio API fact sheet (again):
Which turned up in Podio less than a minute later:
The transcription wasn’t all too bad:
And the plain text version of that was incredibly accurate:
So there you have it. Easy Scanning + OCR using Google Drive :-)
[Crawl record: s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649302.35/warc/CC-MAIN-20230603165228-20230603195228-00144.warc.gz, dump CC-MAIN-2023-23, 3,724 bytes, 39 lines]

Source: https://stackoverflow.com/questions/28757902/how-to-install-mysqldb-in-pycharm-windows/31866350
I am new to Python (I am using Python 2.6) and PyCharm, but I need to use the MySQLdb module to complete my task.
I found How to install MySQLdb in Python 2.6 CentOS, but I need it on Windows 7 (64-bit).
Is there any way to easily install this module using PyCharm? I am using PyCharm 3.1.1 Professional.
I spent time searching for guides or tips and finally got here, but did not find MySQLdb to install.
Any help will be appreciated, thank you!
[Crawl record: s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107882102.31/warc/CC-MAIN-20201024051926-20201024081926-00374.warc.gz, dump CC-MAIN-2020-45, 448 bytes, 7 lines]

Source: https://teksolutionz.com/open-source-network-monitoring-tools/
Networks have gone through a steady evolution in the past few decades, from flat layouts to much more complex designs. Newer technologies have been thrown into the mix, like mobile devices, VPNs, IoT, and the cloud. However, one thing has remained the same: the need for good open source network monitoring tools.
Why Use Open Source Network Monitoring Tools?
Monitoring lets you know the state of the network elements and the connections so you can understand and fix any issues that may cause problems. Open source network monitoring tools allow you to detect failing network components like frozen servers, failed switches and failed routers etc. We are going to list some of the best open source network monitoring tools out there so you can manage your IT infrastructure efficiently.
1. OpenNMS
OpenNMS is one of the open source network monitoring tools that offers a lot of flexibility. It provides service monitoring, event management and performance monitoring for enterprise-level businesses. You can build a network monitoring solution for any infrastructure and collect system data using WMI, XML, HTTP, JSON, JMX, NRPE, JDBC etc.
One of the main advantages of OpenNMS is its innovative user interface. You can view the reports in a dashboard or a chart. Even though it is designed for Linux, you can still use it on Windows, Solaris, and Mac. OpenNMS also supports both IPv4 and IPv6 over ICMP along with event notifications via email, SMS etc. You can monitor device temperatures and power supply as well. If you’re looking for open source network monitoring tools with amazing UI and compatibility, OpenNMS is the one for you.
2. Cacti
Cacti is one of the open source network monitoring tools that connects to RRDtool and generates easy-to-understand graphs and charts of network data. You can install Cacti on both Linux and Windows. Cacti also allows unlimited graph items per graph and can use CDEFs or data sources defined inside Cacti. This tool is usually used to show network data over time, like CPU load or bandwidth usage.
Cacti supports RRD files that have two or more data sources, and it is able to fetch and use any RRD file that is stored locally. Other features include graph, data source, and host templates, custom data-gathering scripts, and user-based management and security. Cacti requires a web server such as Apache or IIS with PHP support, plus a MySQL database. If time-based network graphs are what you need, then Cacti is one of the best open source network monitoring tools out there.
3. Nagios
Nagios is one of the industry leaders in open source network monitoring tools. It provides monitoring solutions to both small and enterprise level networks. Nagios is also extremely diverse and can monitor almost all types of components like a web server, website, network protocols, operating systems etc. Moreover, it consumes very little server resources due to its high-performance Core 4 monitoring engine.
Nagios has a ton of plugins available that let you integrate with all kinds of third-party software. Furthermore, it can also monitor Middleware such as Tomcat, URL, Apache, JBoss, WebLogic, and WebSphere etc. It gives you a central view of your entire IT infrastructure and has multi-user access as well as selective access. Currently, Nagios has a huge active community of over a million users. So, if you’re looking for versatile and diverse open source network monitoring tools, take a look at Nagios.
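To give a flavour of how Nagios is driven, its object configuration files look roughly like this. The host name and address below are made-up placeholders; `linux-server` and `generic-service` are the stock templates shipped with Nagios Core:

```cfg
# Minimal Nagios object configuration (illustrative values only)
define host {
    use         linux-server      ; inherit the stock host template
    host_name   web01             ; hypothetical host
    address     192.0.2.10        ; placeholder address, replace with yours
}

define service {
    use                  generic-service
    host_name            web01
    service_description  HTTP
    check_command        check_http
}
```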
4. Zabbix
In an array of open source network monitoring tools, Zabbix is one of those that are used by huge companies because of its enterprise-level software. Some users include DELL, ICANN and Orange etc. It is capable of monitoring everything including the performance of servers, network equipment, web applications, and database management.
It has a wide array of operating system support and therefore you can install it on Linux, Windows, Mac, Solaris, AIX, and FreeBSD etc. Zabbix also supports VM monitoring that allows VMWare. It allows automation using scripts in various language and is capable of integration with other system management tools such as bcfg2, Chef and cfengine etc. It can also monitor JAVA applications directly. So, in a nutshell, if you’re looking for open source network monitoring tools for a large organization, Zabbix is a great choice.
5. Paessler PRTG
Paessler PRTG is an all-round network monitoring solution that allows centralized monitoring, i.e. letting you monitor all areas of the network. It has a built-in notification system that notifies you before a problem arises. It also has a mobile app for on-the-go monitoring of devices in data centers. You can monitor services other than networks, like hardware, cloud, and performance. It also supports multiple languages. Note that PRTG is proprietary freeware rather than open source; the free version supports up to 100 sensors. So, if you want an all-in-one monitoring tool that also allows monitoring on the fly, then Paessler PRTG might suit you.
[Crawl record: s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741491.47/warc/CC-MAIN-20181113194622-20181113220622-00246.warc.gz, dump CC-MAIN-2018-47, 4,927 bytes, 14 lines]

Source: https://sendylc.wordpress.com/software-pendukung/
The following are supporting software packages for lab activities, all in local IEX (Internet Elektro Exchange) versions:
1. Packet Tracer
Packet Tracer is a Cisco router emulator that can be utilized in training and education, but also in research for simple computer network simulations. The tool is created by Cisco Systems and provided for free distribution to faculty, students, and alumni who are or have participated in the Cisco Academy program. The purpose of Packet Tracer is to offer students and teachers a tool to learn the principles of networking as well as develop Cisco Technology specific skills.
2. Wireshark
Wireshark is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Originally named Ethereal, in May 2006 the project was renamed Wireshark due to trademark issues.
3. GNS3
GNS3 is a graphical network simulator that allows simulation of complex networks.
To allow complete simulations, GNS3 is strongly linked with :
- Dynamips, the core program that allows Cisco IOS emulation.
- Dynagen, a text-based front-end for Dynamips.
- Qemu, a generic and open source machine emulator and virtualizer.
GNS3 is an excellent complementary tool to real labs for network engineers, administrators and people wanting to pass certifications such as CCNA, CCNP, CCIP, CCIE, JNCIA, JNCIS, JNCIE. It can also be used to experiment features of Cisco IOS, Juniper JunOS or to check configurations that need to be deployed later on real routers.
4. Nmap
Nmap (Network Mapper) is a security scanner originally written by Gordon Lyon (also known by his pseudonym Fyodor Vaskovich) used to discover hosts and services on a computer network, thus creating a “map” of the network. To accomplish its goal nmap sends specially crafted packets to the target host and then analyzes the responses. Unlike many simple port scanners that just send packets at some predefined constant rate, nmap accounts for the network conditions (latency fluctuations, network congestion, the target interference with the scan) during the run.
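Nmap itself is far more sophisticated than this, but the core idea behind its simplest mode, the TCP connect scan, can be sketched in a few lines of Python (illustrative only, not nmap's implementation; the demo scans a listener we start ourselves so the result is deterministic):

```python
import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connect() to host:port succeeds (a 'connect scan')."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo: open a listener on an OS-chosen free port, then probe it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
print(tcp_port_open("127.0.0.1", port))  # True
listener.close()
```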
5. FileZilla
FileZilla is free, open source, cross-platform FTP software, consisting of FileZilla Client and FileZilla Server. Binaries are available for Windows, Linux, and Mac OS X. It supports FTP, SFTP, and FTPS (FTP over SSL/TLS). As of 5 March 2009, FileZilla Client was the 5th most popular download of all time from SourceForge.net. FileZilla Server is a sister product of FileZilla Client. It is an FTP server supported by the same project and features support for FTP and FTP over SSL/TLS.
6. Metasploit
The Metasploit Project is an open-source computer security project which provides information about security vulnerabilities and aids in penetration testing and IDS signature development. Its most well-known sub-project is the Metasploit Framework, a tool for developing and executing exploit code against a remote target machine. Other important sub-projects include the Opcode Database, shellcode archive, and security research.
7. XAMPP
XAMPP (pronounced /ˈzæmp/ or /ˈɛks.æmp/) is a free and open source cross-platform web server package, consisting mainly of the Apache HTTP Server, MySQL database, and interpreters for scripts written in the PHP and Perl programming languages.
Many people know from their own experience that it’s not easy to install an Apache web server and it gets harder if you want to add MySQL, PHP and Perl.
XAMPP is an easy to install Apache distribution containing MySQL, PHP and Perl. XAMPP is really very easy to install and to use – just download, extract and start.
8. Cain and Abel
Cain & Abel is a password recovery tool for Microsoft Operating Systems. It allows easy recovery of various kind of passwords by sniffing the network, cracking encrypted passwords using Dictionary, Brute-Force and Cryptanalysis attacks, recording VoIP conversations, decoding scrambled passwords, recovering wireless network keys, revealing password boxes, uncovering cached passwords and analyzing routing protocols. The program does not exploit any software vulnerabilities or bugs that could not be fixed with little effort. It covers some security aspects/weakness present in protocol’s standards, authentication methods and caching mechanisms; its main purpose is the simplified recovery of passwords and credentials from various sources, however it also ships some “non standard” utilities for Microsoft Windows users.
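The dictionary attack mentioned above is conceptually simple: hash each candidate word and compare it against the recovered hash. A hedged Python sketch of the idea (hashlib is the standard library; the word list and "stolen" hash here are made up for the demo, and real tools use far larger lists and faster cores):

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Return the word whose MD5 hex digest matches target_hash, else None."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == target_hash:
            return word
    return None

stolen = hashlib.md5(b"letmein").hexdigest()  # pretend this was sniffed
print(dictionary_attack(stolen, ["password", "123456", "letmein"]))  # letmein
```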
9. Python
Python is a general-purpose high-level programming language whose design philosophy emphasizes code readability. Python aims to combine “remarkable power with very clear syntax”, and its standard library is large and comprehensive. Its use of indentation for block delimiters is unusual among popular programming languages. Python supports multiple programming paradigms, primarily but not limited to object oriented, imperative and, to a lesser extent, functional programming styles. It features a fully dynamic type system and automatic memory management, similar to that of Scheme, Ruby, Perl, and Tcl. Like other dynamic languages, Python is often used as a scripting language, but is also used in a wide range of non-scripting contexts.
10. MikroTik
Mikrotīkls Ltd., known internationally as MikroTik, is a Latvian manufacturer of computer networking equipment. It sells wireless products and routers. The company was founded in 1995, with the intent to sell in the emerging wireless technology market. As of 2007, the company had more than 70 employees.
11. Notepad++
Notepad++ is a free (as in “free speech” and also as in “free beer”) source code editor and Notepad replacement that supports several languages. Running in the MS Windows environment, its use is governed by GPL License.
Based on the powerful editing component Scintilla, Notepad++ is written in C++ and uses pure Win32 API and STL which ensures a higher execution speed and smaller program size. By optimizing as many routines as possible without losing user friendliness, Notepad++ is trying to reduce the world carbon dioxide emissions. When using less CPU power, the PC can throttle down and reduce power consumption, resulting in a greener environment.This project is mature. However, there may be still some bugs and missing features that are being worked on. If you have any questions or suggestions about this project, please post them in the forums. Also, if you wish to make a feature request, you can post it there as well. But there’s no guarantee that I’ll implement your request.
12. Backtrack 4
BackTrack is intended for all audiences from the most savvy security professionals to early newcomers to the information security field. BackTrack promotes a quick and easy way to find and update the largest database of security tools collection to-date. Our community of users range from skilled penetration testers in the information security field, government entities, information technology, security enthusiasts, and individuals new to the security community.
13. PuTTY
PuTTY is a free and open-source terminal emulator, serial console, and network file transfer application. It supports several network protocols, including SSH, Telnet, rlogin, SCP, and raw socket connections, and can also connect to a serial port. It was originally written for Microsoft Windows, but has since been ported to various other operating systems.
14. Nessus (Tenable)
Tenable Network Security provides a suite of solutions that unify real-time vulnerability, event and compliance monitoring into a single, role-based, interface for administrators, auditors and risk managers to evaluate, communicate and report needed information for effective decision making and systems management.
15. Firesheep
Firesheep is an extension developed by Eric Butler for the Firefox web browser. The extension uses a packet sniffer to intercept unencrypted cookies from certain websites (such as Facebook and Twitter) as the cookies are transmitted over networks, exploiting session hijacking vulnerabilities. It shows the discovered identities on a sidebar displayed in the browser, and allows the user to instantly take on the log-in credentials of the user by double-clicking on the victim’s name.
16. pcap
In the field of computer network administration, pcap (packet capture) consists of an application programming interface (API) for capturing network traffic. Unix-like systems implement pcap in the libpcap library; Windows uses a port of libpcap known as WinPcap.
Monitoring software may use libpcap and/or WinPcap to capture packets travelling over a network and, in newer versions, to transmit packets on a network at the link layer, as well as to get a list of network interfaces for possible use with libpcap or WinPcap.
The implementors of the pcap API wrote it in C, so other languages such as Java, .NET languages, and scripting languages generally use a wrapper; no such wrappers are provided by libpcap or WinPcap itself. C++ programs may link directly to the C API; only one partial object-oriented C++ wrapper is currently available from an external source.
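Tools built on libpcap/WinPcap receive raw frames and then decode them field by field. That decoding step can be illustrated with Python's standard struct module on a hand-built IPv4 header, so no capture library or privileges are needed (the addresses are documentation-range placeholders):

```python
import socket
import struct

def parse_ipv4_header(data):
    """Decode the fixed 20-byte IPv4 header, the kind of unpacking a
    pcap consumer does after stripping the link-layer frame."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,              # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A synthetic header, as it might arrive from a capture:
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("192.0.2.1"), socket.inet_aton("192.0.2.2"))
print(parse_ipv4_header(hdr))
```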
The NetBeans IDE is written in Java and runs anywhere a JVM is installed, including Windows, Mac OS, Linux, and Solaris. A JDK is required for Java development functionality, but not for development in other programming languages.
Rapid Leech is a free server-transfer script for use with popular upload/download sites such as megaupload.com, Rapidshare.com and more than 45 others. The script transfers files from Rapidshare, Megaupload, Depositfiles.com, Easy-share.com, etc., using your server's fast connection, and dumps the file on your server, from which you can download it anytime later.
The Rapidleech script has been used by more than 5 million users worldwide and has been installed on more than 2,000 servers.
19. XenServer + XenCenter
In computing, Xen (pronounced /ˈzɛn/) is a virtual-machine monitor for IA-32, x86-64, Itanium and ARM architectures. It allows several guest operating systems to execute on the same computer hardware concurrently. The University of Cambridge Computer Laboratory developed the first versions of Xen; as of 2010, the Xen community develops and maintains Xen as free software, licensed under the GNU General Public License (GPLv2).
TeamViewer is a computer software package for remote control, desktop sharing, and file transfer between computers. The software runs on the Microsoft Windows, Mac OS X, Linux, iOS, and Android operating systems, and it is also possible to access a machine running TeamViewer from a web browser. While the main focus of the application is remote control of computers, collaboration and presentation features are included. TeamViewer GmbH was founded in 2005 in Uhingen, Germany.
When you purchase Windows 7 from Microsoft Store, you have the option to download an ISO file or compressed files. The Windows 7 USB/DVD Download tool allows you to create a copy of your Windows 7 ISO file on a USB flash drive or a DVD. To create a bootable DVD or USB flash drive, download the ISO file and then run the Windows 7 USB/DVD Download tool. Once this is done, you can install Windows 7 directly from the USB flash drive or DVD.
Synergy is Free and Open Source Software that lets you easily share your mouse and keyboard between multiple computers, where each computer has its own display. No special hardware is required, just a network connection. Synergy is supported on Windows, Mac OS X and Linux. Usage is as simple as moving the mouse off the edge of your screen. You can even share your clipboard.