url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://www.my.freelancer.com/u/rogeliog/portfolio/gitmetrics-109103
code
Gitmetrics provides graphical information about the commits made to a repository on GitHub. Through it you can view a repository's development summarized by the last 100 commits and analyze each developer's participation in the project, as well as the times and days when commits are made. http://gitmetrics.com I am Mexican and currently studying for a bachelor's degree in software engineering. I have several open source projects, and working experience at an agile development company (INNKU). NEW FREELANCER
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510575.93/warc/CC-MAIN-20230930014147-20230930044147-00588.warc.gz
CC-MAIN-2023-40
512
3
https://es.mathworks.com/help/rtw/ug/identifier-name-collisions-and-mangling.html
code
Identifier Name Collisions and Mangling In identifier generation, a circumstance that would cause generation of two or more identical identifiers is called a name collision. When a potential name collision exists, unique name-mangling text is generated and inserted into each of the potentially conflicting identifiers. Each set of name-mangling characters is unique for each generated identifier. Identifier Name Collisions with Referenced Models Referenced models can introduce additional naming constraints. Within a model that uses referenced models, collisions between the names of the models cannot exist. When you generate code from a model that includes referenced models, the Maximum identifier length parameter must be large enough to accommodate the root model name and name-mangling text. A code generation error occurs if Maximum identifier length is too small. When a name conflict occurs between an identifier within the scope of a higher-level model and an identifier within the scope of a referenced model, the identifier from the referenced model is preserved. Name mangling is performed on the identifier from the higher-level model. For more information on referenced models, see Parameterize Instances of a Reusable Referenced Model.
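The collision-plus-mangling idea described above can be sketched in a few lines of Python. This is an illustrative toy, not MathWorks' actual algorithm: the function name and the counter-based mangling text are my own inventions.

```python
from collections import Counter

def make_identifiers(names, max_len=31):
    """Illustrative sketch of collision handling during identifier
    generation -- not MathWorks' actual algorithm. Identifiers are
    truncated to max_len; when truncation would make two names
    collide, unique name-mangling text is inserted into each of the
    conflicting identifiers."""
    counts = Counter(name[:max_len] for name in names)
    next_suffix = {}
    out = []
    for name in names:
        base = name[:max_len]
        if counts[base] == 1:
            out.append(base)
        else:
            # Collision: append mangling text unique to this identifier
            # (a counter here; the real code generator derives its own
            # mangling characters).
            k = next_suffix.get(base, 0)
            next_suffix[base] = k + 1
            mangle = "_%02d" % k
            out.append(name[:max_len - len(mangle)] + mangle)
    return out
```

Note how the sketch also shows why a too-small Maximum identifier length is fatal: when there is no room left for both the base name and the mangling text, identifiers can no longer be disambiguated.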
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652494.25/warc/CC-MAIN-20230606082037-20230606112037-00185.warc.gz
CC-MAIN-2023-23
1,254
6
https://www.meadmaderight.com/profile/csg_surferdude2003/forum-posts
code
Nov 01, 2021 In Question Submission I'm trying to find published academic papers and/or white papers on TOSNA. I'm looking for low-level papers, and not just somebody saying "It's a way to feed your yeast." :-) And I'm trying to find out what the differences are between versions 1, 2, and 3. Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00755.warc.gz
CC-MAIN-2022-21
299
3
https://blog.hartwork.org/posts/uriparser-063-released/
code
Sorry for the noise but... this release fixes major bugs for normalization again. Blame the guy who first said "release early, release often"... Thanks for the patch to - you guessed it - Adrian Manrique. Actually I have no idea how I could even find normalization test cases that worked; very odd. Besides the bugfixes, I finally wrote a short tutorial on using uriparser, which can also be found online. This release introduces documentation as a .chm file for download. As another goodie, the six warnings are finally gone. As questions from users keep coming up, I have set up a uriparser-users mailing list. Please join the discussion to find help and/or to help others. Before I forget: dear maintainers, this release is both source- and binary-compatible. Thanks for your support!
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247515149.92/warc/CC-MAIN-20190222094419-20190222120419-00099.warc.gz
CC-MAIN-2019-09
773
1
https://lab.popul-ar.com/t/playing-sound-while-recording-in-spark/17
code
This is a question that I've seen in the Spark FB group about once a week, so I’d like to document the solutions and limitations here. Let’s say you want to make a karaoke effect where a song plays in the background and the user sings along. You’ll set up your project and throw a sound clip into it and test it out on your device. Immediately, you’ll notice that the sound is muted while recording, but it actually gets recorded in the output. It’s not ideal because then the user can’t hear the song while they record, which ruins the whole effect. You can’t record with the microphone and play sounds at the same time. If you could, it would result in horrible sound from the speakers. There are a few solutions but they come with a cost. - Disable the microphone. This will unmute the audio while recording, but obviously the downside is that you won’t hear the user any more. - Tell the user to wear headphones (there are custom instructions available for this). Since the speakers won’t be playing the sounds, they won’t be muted! I’ve heard that bluetooth headphones don’t work for this, and many phones require adapters for corded headphones. This is the only way to have audio and the mic active at the same time during recording, but it comes with some serious UX implications.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363211.17/warc/CC-MAIN-20210302003534-20210302033534-00475.warc.gz
CC-MAIN-2021-10
1,308
5
http://mmgid.com/oracle-error/oracle-sql-error-numbers.html
code
ORA-00053: Maximum number of enqueues exceeded. ORA-00054: Resource busy and acquire with NOWAIT specified. ORA-00055: Maximum number of DML locks exceeded. ORA-00056: DDL lock on object "string.string" is already h...

For example, PL/SQL raises the predefined exception NO_DATA_FOUND if a SELECT INTO statement returns no rows. Exceptions declared in a block are considered local to that block and global to all its sub-blocks.

Raising Exceptions with the RAISE Statement: PL/SQL blocks and subprograms should raise an exception only when an error makes it undesirable or impossible to finish processing. For example, in the Oracle Precompilers environment, any database changes made by a failed SQL statement or PL/SQL block are rolled back.

Exception handling is usually written with the following syntax:

EXCEPTION
  WHEN exception_name1 THEN [statements]
  WHEN exception_name2 THEN [statements]
  WHEN exception_name_n THEN [statements]
  WHEN OTHERS THEN [statements]
END [procedure_name];

For example, the following GOTO statement is illegal:

DECLARE
  pe_ratio NUMBER(3,1);
BEGIN
  DELETE FROM stats WHERE symbol = 'XYZ';
  SELECT price / NVL(earnings, 0) INTO pe_ratio FROM stocks WHERE symbol = ...

When you see an error stack, or sequence of error messages, the one on top is the one that you can trap and handle. You can also perform a sequence of DML operations where some might fail, and process the exceptions only after the entire operation is complete, as described in "Handling FORALL Exceptions...". Passing a zero to SQLERRM always returns the message "normal, successful completion".

Continuing after an Exception Is Raised: an exception handler lets you recover from an otherwise fatal error before exiting a block. Without exception handling, every time you issue a command, you must check for execution errors:

BEGIN
  SELECT ... -- check for 'no data found' error
  SELECT ... -- check for 'no data found' error
  ...

These statements complete execution of the block or subprogram; control does not return to where the exception was raised. A cursor FOR loop automatically opens the cursor to which it refers. In procedural statements, VALUE_ERROR is raised if the conversion of a character string into a number fails. (In SQL statements, INVALID_NUMBER is raised.) ZERO_DIVIDE: your program attempts to divide a number by zero. Predefined exceptions are declared for you, so you need not declare them yourself.

Consider the following example: BEGIN ... Also, a program can use the pragma EXCEPTION_INIT to map specific error numbers returned by raise_application_error to exceptions of its own, as the following Pro*C example shows: EXEC SQL EXECUTE /* ... SELF_IS_NULL: your program attempts to call a MEMBER method on a null instance. Therefore, the RAISE statement and the WHEN clause refer to different exceptions.

Scope Rules for PL/SQL Exceptions: you cannot declare an exception twice in the same block. Predefined exceptions have names, which lets you refer to any internal exception by name and write a specific handler for it. You can pass an error number to SQLERRM, in which case SQLERRM returns the message associated with that error number. However, other user-defined exceptions must be raised explicitly by RAISE statements. If earnings are zero, the function DECODE returns a null. Each handler consists of a WHEN clause, which specifies an exception, followed by a sequence of statements to be executed when that exception is raised. If the transaction fails, control transfers to the exception handler, where you roll back to the savepoint, undoing any changes, then try to fix the problem.

For example, the following declaration raises an exception because the constant credit_limit cannot store numbers larger than 999:

DECLARE
  credit_limit CONSTANT NUMBER(3) := 5000; -- raises an exception
BEGIN ...

For example, when your program selects a column value into a character variable, if the value is longer than the declared length of the variable, PL/SQL aborts the assignment and raises VALUE_ERROR. Figures 7-1, 7-2, and 7-3 illustrate the propagation rules (Examples 1 through 3).

The FETCH statement is expected to return no rows eventually, so when that happens, no exception is raised. SUBSCRIPT_BEYOND_COUNT: your program references a nested table or varray element using an index number larger than the number of elements in the collection. SUBSCRIPT_OUTSIDE_LIMIT: your program references a nested table or varray element using an index number (-1, for example) that is outside the legal range. In the following example, you declare an exception named past_due: DECLARE past_due EXCEPTION; Exception and variable declarations are similar. NOT_LOGGED_ON: your program issues a database call without being connected to Oracle. Otherwise, DECODE returns the price-to-earnings ratio. However, when an exception is raised inside a cursor FOR loop, the cursor is closed implicitly before the handler is invoked.

Retrying a Transaction: after an exception is raised, rather than abandon your transaction, you might want to retry it. A PL/SQL block cannot catch an exception raised by a remote subprogram, because exceptions cannot propagate across remote procedure calls (RPCs). You can still handle an exception for a statement, then continue with the next statement. VALUE_ERROR: an arithmetic, conversion, truncation, or size-constraint error occurs.

  THEN
    RAISE out_of_balance; -- raise the exception
  END IF;
EXCEPTION
  WHEN out_of_balance THEN
    -- handle the error
    RAISE; -- reraise the current exception
END; ------------ sub-block ends
EXCEPTION
  WHEN out_of_balance THEN ...

So, an exception raised inside a handler propagates immediately to the enclosing block, which is searched to find a handler for the newly raised exception. If the transaction succeeds, commit, then exit from the loop. (Source: PL/SQL User's Guide and Reference, Release 2 (9.2), Part Number A96624-01, Chapter 7, "Handling PL/SQL Errors".)

Consider the following example:

DECLARE
  pe_ratio NUMBER(3,1);
BEGIN
  DELETE FROM stats WHERE symbol = 'XYZ';
  BEGIN ---------- sub-block begins
    SELECT price / NVL(earnings, 0) INTO pe_ratio FROM stocks WHERE symbol = ...

NO_DATA_FOUND: a SELECT INTO statement returns no rows, or your program references a deleted element in a nested table or an uninitialized element in an index-by table. When an exception is raised, normal execution stops and control transfers to the exception-handling part of your PL/SQL block or subprogram.

  WHEN OTHERS THEN ROLLBACK;
END;

Because the block in which exception past_due was declared has no handler for it, the exception propagates to the enclosing block.

The message begins with the Oracle error code. SQLSTATE codes (SQLSTATE code, condition, Oracle error): 00000 successful completion (ORA-00000); 01000 warning; 01001 cursor operation conflict; 01002 disconnect error; 01003 null value eliminated in set function; 01004 ... If an error occurs in the sub-block, a local handler can catch the exception. PL/SQL declares predefined exceptions globally in package STANDARD, which defines the PL/SQL environment. That is, the built-in parameter SELF (which is always the first parameter passed to a MEMBER method) is null. An exception handler written for a predefined exception can also process other errors, as the following example shows:

DECLARE
  acct_type INTEGER := 7;
BEGIN
  IF acct_type NOT IN (1, 2, ...

For example, you might want to roll back a transaction in the current block, then log the error in an enclosing block. However, exceptions cannot propagate across remote procedure calls (RPCs). But when the handler completes, the block is terminated.
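The retry pattern described above (trap the exception, undo the partial work, fix the problem, and loop until the transaction succeeds) can be sketched outside PL/SQL as well. The following is a hedged Python analogue, in which run_with_retries, attempt and fix are hypothetical stand-ins for the transactional sub-block and its recovery logic:

```python
class TransactionError(Exception):
    """Stand-in for a database error such as a failed INSERT."""

def run_with_retries(attempt, fix, max_tries=3):
    # Mirrors the PL/SQL pattern: try the transaction; on failure the
    # exception handler undoes the partial work (here, attempt() is
    # assumed to leave no state behind), tries to fix the problem,
    # and the loop retries; on success we "commit" by returning.
    for _ in range(max_tries):
        try:
            return attempt()
        except TransactionError:
            fix()
    raise TransactionError("giving up after %d tries" % max_tries)
```

In PL/SQL the same shape is a FOR loop containing a sub-block with its own EXCEPTION section; the handler rolls back to a savepoint and the loop body tries again.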
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817437.99/warc/CC-MAIN-20180225205820-20180225225820-00375.warc.gz
CC-MAIN-2018-09
8,746
17
https://colleges.claremont.edu/ccms/event/tba-16/
code
What do swarm robotics and political redistricting have in common? One answer is Markov chains, which have recently been used in very different ways to address problems in both these areas. To get a large swarm to exhibit a desired behavior, one solution is to make each individual in the swarm fairly intelligent; another is to make the individuals simple, but to let the desired behavior emerge as a result of their interactions. My collaborators and I recently used Markov chains and ideas from statistical physics to develop distributed algorithms that follow this second paradigm. We also worked with physicists to create a physical robot system where each individual cannot compute anything, but the system as a whole can still accomplish complex tasks. For political redistricting, the main mathematical technique developed in the last few years for detecting gerrymandering is to compare a proposed plan to the space of all possible alternative plans; if the proposed plan is an outlier, that’s an indicator it might be gerrymandered. However, the space of all possible districting plans is far too large to ever be studied in its entirety. Instead, Markov chains are used to generate random samples of alternative plans, where the hope is that the sampled plans are reasonably representative of all possible plans. This approach has already been used successfully in court cases around the country, though questions still remain about what mathematical guarantees we can give about the randomly sampled districting plans.
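The sampling idea described above can be illustrated with a toy random-walk Markov chain. This is purely illustrative: real redistricting chains walk over districting plans with far more elaborate proposal moves, and the state space and function names below are my own.

```python
import random

def markov_chain_samples(start, neighbors, steps, seed=0):
    """Random-walk Markov chain: from the current state, move to a
    uniformly chosen neighboring state. The visited states form a
    (correlated) sample of the state space."""
    rng = random.Random(seed)
    state = start
    visited = [state]
    for _ in range(steps):
        state = rng.choice(neighbors[state])
        visited.append(state)
    return visited

# Toy state space: four "plans" arranged in a cycle, each adjacent
# to two alternatives that differ by one hypothetical boundary move.
plan_neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
```

Running the chain for many steps and comparing a proposed plan against the distribution of visited states is, in miniature, the outlier analysis described above.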
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876307.21/warc/CC-MAIN-20201021093214-20201021123214-00152.warc.gz
CC-MAIN-2020-45
1,532
1
https://www.whencanistop.com/2010/01/what-is-web-analyst.html
code
What is a Web Analyst “Oh Alec!” I hear you saying, “Are you feeling melancholy and wondering what life is all about?” No more than normal is the answer that you’ll all be happy to hear. It is a new year, but there is no new whencanistop out there, I am afraid. Obviously speaking about my alter ego in the third person could be slightly new, but I’m going for the personal interpersonal touch. If you know what I mean. And I’m sure you don’t, because I’ve lost it already and we’re only on the first post of 2010 and already I’m drivelling in the opening paragraph. Oh wait – no, that is normal behaviour. Actually what has really been going on with me is a little discussion that I eavesdropped on at the end of last year (that sounds so long ago, whilst actually only being last week!) that I didn’t want to comment on until I had read some of the stuff that has been going on. So let’s start at the beginning. Or the end, as the case may be. It started with me picking up a ‘conversation’ that Stéphane Hamel and Eric Peterson had been having on Twitter around ‘models’ (in the Business sense, not the entertainment sense :)). To cut a long story short, Stéphane has created a model (which you can download, read yourself and then pass comment back to Stéphane, as I have) which describes how ‘mature’ or advanced a company is with its Web Analytics. The model describes through six different ‘pillars’ how mature your organisation is on a scale of 0 to 5 (0 being not at all and 5 being brilliant – to paraphrase). This also led me into reading Joseph Carrabis’ “The unfulfilled Promise of Online Analytics” Part 1 and Part 2. They are lengthy reads – I’d recommend that you plan your dinner for a bit in the middle. Yes, I was researching this during my spare time and no, I should probably not have been doing so.
Part 1, in case you are wondering, ponders what the problem is and part 2 ponders what to do about it, using some of the methodology that is picked up by the Maturity model suggested by Stéphane (see, there is a point to this). Anyway, all the pondering led me to an epiphany and I stopped asking What is Web Analytics? and started asking: What is a Web Analyst? This should be something that is simple to answer for whencanistop given that it is in my job title. I even proclaim to be one in that little blurb at the top of the screen. What is Web Analytics? Stéphane proclaims to be able to define Web Analytics at the start of his Maturity model as: “The extensive use of quantitative and qualitative data (primarily, but not limited to online data), statistical analysis, explanatory (e.g. multivariate testing) and predictive models (e.g. behavioural targeting), business process analysis and fact-based management to drive a continuous improvement of online activities; resulting in higher ROI.” Which is, as he quite rightly points out, a bit of an increase on what Wikipedia says about it (although the Wikipedia article looks a bit like it has been hijacked by someone from Nielsen). Really what we’re talking about is collecting data about what people do on the site (by using a Web Analytics tool or by asking them in a survey) and then using it to improve the performance of the site. What is a Web Analyst? So a Web Analyst (like yours truly) does stuff with that information. What sort of stuff? Well, we take all that data from the tools, the surveys and any other source that we can find and we turn it into actionable insight (interestingly this last word doesn’t appear anywhere in the Web Analytics page on Wikipedia). So we take all that information and turn it into insight so that the Business people can make their website better. Is that all we do? Well, no, not at all. When those Business people implement it, we then tell them afterwards whether it has worked or not.
In theory, you then use a continual improvement programme of measuring and providing insight and changing. However, actionable insight relies on a number of things. Any insight can be actionable. One of the insights I’d probably have made about this blog very early on is that I should have put it on WordPress. So I should transfer it to WordPress, copy all the content across, redirect all the links, etc, etc. But that isn’t going to happen, because the effort and time it would take me to do it isn’t worth my while. The benefits are there, but the payback from it would probably take a long time to come to fruition. That means that not only do I have to come up with insights that are actionable, but the benefits have to outweigh the cost quickly. Not only do I have to work out the benefits, but I have to work out what the cost is to work out if the ROI is there. How do I do that? I need to have a vague working knowledge of the systems that we are going to need to change. If we’re making a change to a site, then we need to work out the Architecture of the IT, how competent the developers are (if there are any), whether it will have knock-on effects, will there need to be lots of testing, etc, etc. This means I have to have a quite detailed knowledge of the IT systems (or at least access to someone who does). Now, once I’ve worked out that the change I’m suggesting has benefit, when it has benefit, how much and how long it will take to do, then there is the next step. I have to persuade the Business that it is more important than any of the other projects that they are doing. Not least those pesky ones where nobody knows how long they will take, how much they will cost and what the benefits will be. When you’re working with numbers, you can be very precise about the benefit. When you’re suggesting that you turn your logo into a dancing hamster, you can make it up as you go along, because nobody will be able to argue with you through numbers.
So we’ve got to know: - The information to provide the insight - The IT infrastructure and how to alter it (plus all those processes that always go off in IT) - The Business framework (how you can get funding) - The current projects so that you can push your project ps – would you have ever thought you’d see a picture of me and Kate Moss on the same blog post?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818105.48/warc/CC-MAIN-20240422082202-20240422112202-00010.warc.gz
CC-MAIN-2024-18
6,237
22
https://forum.gitlab.com/t/mr-conflict/33694
code
Describe your question in as much detail as possible: People, recently I’ve updated my Gitlab instance from 11.11.2 to 12.6.4. After that the merge requests are showing that there are conflicts and the merge cannot be done, like in the image attached. But in those MRs there are no conflicts at all. When a conflict really happens it shows the button to ‘resolve conflicts’. What are you seeing, and how does it differ from what you expect to see? I expected to see the diff showing line by line or side by side or, when there are no conflicts, with the merge button enabled. Consider including screenshots, error messages, and/or other helpful visuals What version are you on (Hint: /help) ? and are you using self-managed or gitlab.com? Self-managed, updated from 11.11.2 to 12.6.4 What troubleshooting steps have you already taken? Can you link to any docs or other resources so we know where you have been? Only old issues from 2016 that looked like the same problem, but I discarded them due to being old
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00365.warc.gz
CC-MAIN-2022-49
1,007
10
https://ce3c.ciencias.ulisboa.pt/research/publications/ver.php?id=1330
code
Aspin, T.W.H., Khamis, K., Matthews, T.J., Milner, A.M., O’Callaghan, M.J., Trimmer, M., Woodward, G. & Ledger, M.E. (2019) Extreme drought pushes stream invertebrate communities over functional thresholds. Global Change Biology, 25, 230-244. DOI:10.1111/gcb.14495 (IF2019 8,555; Q1 Ecology) Functional traits are increasingly being used to predict extinction risks and range shifts under long‐term climate change scenarios, but have rarely been used to study vulnerability to extreme climatic events, such as supraseasonal droughts. In streams, drought intensification can cross thresholds of habitat loss, where marginal changes in environmental conditions trigger disproportionate biotic responses. However, these thresholds have been studied only from a structural perspective, and the existence of functional nonlinearity remains unknown. We explored trends in invertebrate community functional traits along a gradient of drought intensity, simulated over 18 months, using mesocosms analogous to lowland headwater streams. We modelled the responses of 16 traits based on a priori predictions of trait filtering by drought, and also examined the responses of trait profile groups (TPGs) identified via hierarchical cluster analysis. As responses to drought intensification were both linear and nonlinear, generalized additive models (GAMs) were chosen to model response curves, with the slopes of fitted splines used to detect functional thresholds during drought. Drought triggered significant responses in 12 (75%) of the a priori‐selected traits. Behavioural traits describing movement (dispersal, locomotion) and diet were sensitive to moderate‐intensity drought, as channels fragmented into isolated pools. By comparison, morphological and physiological traits showed little response until surface water was lost, at which point we observed sudden shifts in body size, respiration mode and thermal tolerance.
Responses varied widely among TPGs, ranging from population collapses of non‐aerial dispersers as channels fragmented to irruptions of small, eurythermic dietary generalists upon extreme dewatering. Our study demonstrates for the first time that relatively small changes in drought intensity can trigger disproportionately large functional shifts in stream communities, suggesting that traits‐based approaches could be particularly useful for diagnosing catastrophic ecological responses to global change.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652959.43/warc/CC-MAIN-20230606150510-20230606180510-00166.warc.gz
CC-MAIN-2023-23
2,434
2
http://forum.xda-developers.com/showthread.php?t=1612413
code
PengDroid: a healthy fusion of Debian and Android Notice: Now that PengDroid has been rolled into BotBrew, this installation method is now deprecated. You'd get all this, and a package manager GUI, by installing BotBrew "basil". I love Debian, and I think it's nice to have a chroot environment full of useful tools. However, there's always a barrier keeping Android and Linux from talking to each other. This is why BotBrew is designed to not rely on a chroot system. PengDroid is an experimental chimera of Android and Linux that gives you access to a chrooted Linux userland while preserving access to the Android system. If this sounds dangerous, let me explain why this is safe. Have a look at the root directory of a Linux system and the root directory of an Android system; notice how they don't overlap much. This means that we could safely map some of the most useful Android directories into the Linux namespace; there's no need to modify the Android side. Let's see the code. I made a prebuilt archive to demonstrate this method. In exchange for trying, you get a nice Debian chroot. What's not to love?! - download pengdroid.tgz to your Android device - unpack it to /data (or /sd-ext): tar zxvf path/to/pengdroid.tgz -C /data - run it: /data/pengdroid/init For a quick sanity check, run: getprop ro.product.model (whoa, Android inside Linux) Then, for some more fun: apt-get By default, no repositories are enabled. Before we start installing packages from Debian, we should actually install Debian. Wait, what? Right, PengDroid is so small because it packs just enough for us to use dpkg/apt; but many Debian packages assume that we have a complete setup. To bootstrap a minimal Debian installation, run /debian.sh (which makes the final installed size a whopping 58 MB). If that's too much, keep reading. Installing a minimal Debian gives us a fairly complete, but still small, *nix system.
If we wanted to go even lighter, but still have a reasonably robust setup, we could run the alternative installer: /debian.sh apt -- which installs a complete dpkg/apt system. It's a hassle to run /data/pengdroid/init all the time, but we could fix that by making a shortcut:

busybox mount -o remount,rw /system
echo '/data/pengdroid/init -- "$@"' > /system/bin/pengdroid
chmod 0755 /system/bin/pengdroid

Now, we could just prefix everything with pengdroid, like so: pengdroid python Or, if we just want a shell: pengdroid
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703682988/warc/CC-MAIN-20130516112802-00086-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
2,429
28
https://wpirover.com/2011/09/07/drive-module/
code
One of the problems raised by the original Robo-Ops rover was that the drive modules left the motors and gearboxes entirely exposed to the environment. This is clearly not suitable for a potential planetary rover, so we knew we would have to build enclosures for the drive system. This is particularly challenging because the enclosure needs to provide both mechanical attachment to the chassis as well as support the 20 electrical connections necessary for our Maxon brushless motors (3 motor power, 5 position sense, 10 quadrature differential encoder, and 2 thermistor leads). To accomplish this task, we decided to build the enclosure to have two distinct sections. The front section, constructed of thick-wall aluminum tube and plate, mounts the motor, accommodates a lip seal on the shaft, and provides a connection to the chassis. The rear section, constructed from a modified Hammond 1550Z103 enclosure, accommodates the electrical connections and protects the differential encoder on the rear of the motor. Amphenol Size 28, 20 pin connectors (MS-3102A-28-16P/MS-3106B-28-16S) will connect each motor housing with its corresponding cable harness.
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323889.3/warc/CC-MAIN-20170629070237-20170629090237-00275.warc.gz
CC-MAIN-2017-26
1,155
1
https://leftasexercise.com/2020/01/24/wsgi-middleware-pastedeploy-and-all-that/
code
When you are a Python programmer or study open source software written in Python, you will sooner or later be exposed to the WSGI standard and to related concepts like WSGI middleware. In this post, I will give you a short overview of this technology and point you to some additional references. What is WSGI? WSGI stands for “Web Server Gateway Interface” and is a standard that defines how Python applications can run inside a web container (“server”), quite similar to Java servlets running in a servlet container. The WSGI standard is defined in PEP 333 (and, for Python3, in PEP 3333) and describes the interface between the application and the server. In essence, the standard is quite simple. First, an application needs to provide a callable object (that can be a function, an instance of a class with a __call__ method or a method of a class or object) to the server which accepts two arguments. The first argument, traditionally called environ, is a dictionary that plays the role of a request context. The standard defines a set of fields in that object that a server needs to populate, including:

- REQUEST_METHOD: the HTTP request method (GET, POST, ...)
- HTTP_*: variables corresponding to the various components of the HTTP request header
- QUERY_STRING: the part of the request string after the ?
- wsgi.input: a stream from which the request body can be read, using methods like read(), readline() or __iter__
- wsgi.errors: a stream to which the application can write error logs

The second argument that is passed to the application is actually a function, traditionally called start_response, which takes a status and a list of response headers. This function is supposed to return a stream-like object implementing the write method. The application can use this object to write the response into it (which, however, is not the preferred way; in general, the application should simply return the response data). The argument status is an HTTP status code along with the respective string, like “200 OK”.
The response_headers argument is a list of tuples of the form (name, value) which are added to the HTTP header of the response. The idea of this function is to give the server a chance to prepare the HTTP header of the response before the actual response body is written. In fact, there is a third, optional argument to this method, which is exception information as returned by sys.exc_info and which can be used to ask the server to re-raise an exception caught by the application; we will ignore it here.

The application function is supposed to return the response data, i.e. the data that should go into the HTTP response body. Note that with Python 3, this is supposed to be a bytes object, so text needs to be converted to bytes first.

Armed with this information, let us now write our first WSGI application. Of course, we need a WSGI server, and for our tests, we will use the very simple embedded WSGI server that comes as part of the wsgiref module.

Let us see what this application does. First, there is the application function with the signature defined by the standard. We see that we call start_response and then create a response string. The response string contains an HTML table with one entry for each key/value pair in the environ dictionary. Finally we convert this to a bytes object and return it to the server. In the main processing, we create a wsgiref.simple_server that points to our application and start it.

To run the example, simply save the code as wsgi.py (or whatever name you prefer) and run it with python3 wsgi.py. When you now point your browser to 127.0.0.1:8800, you should see a table containing your environment values (the simple_server includes all currently defined OS-level environment variables, so you will have to scroll down to see the WSGI-specific parts).

Let us now try something else. Our application actually returns a sequence of byte objects.
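The example just described can be sketched as follows (a reconstruction following the text, not the author's original listing; the server start is wrapped in a run() function here):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # Announce a successful response and its content type to the server
    start_response("200 OK", [("Content-Type", "text/html")])
    # Build an HTML table with one row per key/value pair in environ
    rows = "".join(
        "<tr><td>%s</td><td>%s</td></tr>" % (key, environ[key])
        for key in sorted(environ)
    )
    # With Python 3, the response body must be an iterable of bytes objects
    return [("<html><body><table>%s</table></body></html>" % rows).encode("utf-8")]

def run():
    # Create the embedded server from the wsgiref module and start it
    httpd = make_server("", 8800, application)
    httpd.serve_forever()

# Calling run() starts the server on 127.0.0.1:8800
```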
The server is supposed to iterate over this sequence and assemble the results to obtain the entire response. Thus the only thing that matters is that our application is something that can be called and returns something that has a method __iter__. Instead of using a function which returns a sequence, we can therefore just as well use a class that has an __iter__ method. When the server receives a request, it will call the "thing called application", i.e. it will do something like Application(). This will create a new instance of the application object, i.e. call the __init__ method, which simply stores the parameters for later use. Then, the server will iterate over this object, i.e. call __iter__, where the actual result is assembled and returned. Finally, we could also pass an instance of a class instead of a class to make_server. This instance then needs a __call__ method so that it can be invoked like a function.

As we have seen, the WSGI specification has two parts. First, it defines how an application should behave (call start_response and return response data), and second, it defines how a server should behave (call the application). A WSGI middleware is simply a piece of Python code that implements both behaviours: it can act as a server and as an application. This allows middleware components to be chained: the server calls the middleware, the middleware performs whatever action it wishes, for instance manipulating the environ dictionary, and then invokes the application, and the application prepares the actual response. Of course, instead of just passing through the start_response function to the application, a middleware could also pass in a different function and then call the original start_response function itself. A nice feature of middleware is that it can be chained.
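As a sketch (the class name Middleware and the key added_by_middleware follow the text, and the function-based application from the first example is assumed), a middleware that injects an extra key into the environ dictionary could look like this:

```python
class Middleware:
    """A WSGI middleware: it behaves like a server towards the wrapped
    application and like an application towards the real server."""

    def __init__(self, wrapped_app):
        # Store the application (or the next middleware) we delegate to
        self._wrapped_app = wrapped_app

    def __call__(self, environ, start_response):
        # Act on the request context, then pass the request on unchanged
        environ["added_by_middleware"] = "Hello from the middleware"
        return self._wrapped_app(environ, start_response)
```

It would then be started with something like make_server('', 8800, Middleware(application)).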
You could for instance have a middleware which performs authorization, followed by a middleware to rewrite URLs and so forth, until finally the application is invoked.

If you run such an example as before, you will see that in addition to the environment variables produced by our first example, there is an additional key added_by_middleware which has been added by the middleware. In this example, the full call chain is as follows.

- When the server starts, it creates an instance of the class Middleware that points to the function application
- This instance is passed as argument to make_server
- The server gets the request from the browser
- The server makes a call on the "thing" supplied with make_server, i.e. the middleware instance
- The server calls the middleware instance, i.e. it invokes its __call__ function
- The __call__ function adds the additional key to the environment and then delegates the request to the function application

Building middleware chains with PasteDeploy

So far, we have chained middleware programmatically, but in real life, it is often much more flexible to do this via a configuration. Enter PasteDeploy, a Python module that allows you to build chains of middleware components from a configuration. To make sure that you have it installed, run

pip3 install PasteDeploy

PasteDeploy is able to parse configuration files and to dynamically pipe together WSGI applications and WSGI middleware. To understand how this works, let us first consider an example. Suppose that in our working directory, we have a file wsgi.py that defines a factory function app_factory returning our WSGI application. In addition, let us create a configuration file paste.ini in the same directory, with the following content:

[app:main]
use = call:wsgi:app_factory

When we now run wsgi.py, we again get the same server as in our first, basic example. But what is happening behind the scenes? First, we invoke the PasteDeploy API by calling loadapp.
This function will evaluate the INI file passed as argument for the different types of objects PasteDeploy knows. In our case, the section name app:main implies that we want to define an application and that this is the main entry point for our WSGI server. The argument that PasteDeploy expects here is the full path to a factory function (in our case, the function app_factory in wsgi.py). PasteDeploy will then simply call this factory and return the result of this call as an application. We then start a server using this application as before. Note that PasteDeploy can also pass configuration data in the INI file to the factory.

A second basic object in PasteDeploy are filters. Filters are used to create filtered versions of an application, i.e. the application behind a defined middleware (the filter). In the configuration file, filters are specified in a section starting with the keyword filter, and refer to a filter factory. A filter factory is a callable which is called with the configuration in the INI file as argument and returns a filter. A filter, in turn, is a function which receives an application as an argument and returns a WSGI application wrapping this application. This sounds a bit confusing, so it might be a good idea to look at an example. Suppose we add a filter_factory function to wsgi.py and use the following configuration:

[app:main]
use = call:wsgi:app_factory
filter-with = filter1

[filter:filter1]
use = call:wsgi:filter_factory
key = "abc"

What happens if you run the example? First, PasteDeploy will create an application as before, by calling the app_factory function. Then, it will find the configuration option filter-with that tells the library that we wish to wrap the application. Here, we refer to a filter called filter1 which is defined in a separate section of the INI file. When evaluating this section, PasteDeploy will call the provided filter factory filter_factory, passing the additional configuration in the section as parameters.
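A sketch of what wsgi.py might contain for this setup (the function names follow the INI snippets above; the author's original code is not shown, and the filter here simply injects the configured key into the environ dictionary):

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # A trivial application: list the keys of the request context
    start_response("200 OK", [("Content-Type", "text/plain")])
    return ["\n".join(sorted(environ)).encode("utf-8")]

def app_factory(global_config, **local_config):
    # Referenced by "use = call:wsgi:app_factory" in paste.ini
    return application

def filter_factory(global_config, key=None, **local_config):
    # Referenced by "use = call:wsgi:filter_factory"; the key option
    # from the INI section arrives as a keyword argument
    def filter(app):
        # A filter takes an application and returns a wrapped application
        def wrapped(environ, start_response):
            environ[key] = "added by filter"
            return app(environ, start_response)
        return wrapped
    return filter

def run():
    # loadapp parses paste.ini and assembles the filtered application
    from paste.deploy import loadapp  # pip3 install PasteDeploy
    wsgi_app = loadapp("config:paste.ini", relative_to=".")
    make_server("", 8800, wsgi_app).serve_forever()
```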
The filter factory returns a function, the filter function. PasteDeploy will now take the application and call the filter function with this application as argument. The return value of this call will then be used as the actual application that is returned by loadapp and started using the simple_server (in fact, PasteDeploy will first call the filter factory, then the app factory and then the filter itself).

Of course, you can apply more than one filter to an application. To make this as easy as possible, PasteDeploy offers a third type of object called pipelines. A pipeline is just a sequence of filters which are applied to an application. The nice thing about pipelines is that they are piped together by PasteDeploy automatically, without any need to write additional factory objects. So our source code remains the same, we only have to change the configuration:

[pipeline:main]
pipeline = filter1 filter2 myapp

[app:myapp]
use = call:wsgi:app_factory

[filter:filter1]
use = call:wsgi:filter_factory
key = "abc"

[filter:filter2]
use = call:wsgi:filter_factory
key = "def"

Here, we define a pipeline which will first apply filter1, then filter2 and finally pass control to our app. These three objects are created by the same calls to factory functions as before, and PasteDeploy will automatically load the pipeline and plumb the objects together. The result will be that once the application is reached, both keys (abc and def) will be present in the request context. This is exactly what we want. We can, of course, have filters in different Python modules, and thus completely decoupled. PasteDeploy will then happily plumb together the final WSGI application according to the configuration, and we can easily add middleware components to the pipeline or remove them, without having to change our code.

Finally, there is another approach to configuring a pipeline, which is also the one described in the documentation. Here, we realize a pipeline as a composite object.
This object again corresponds to a factory function with a specific signature. Part of this signature is a loader object which we can use to load the individual filters by name and apply them step by step to the application. A nice example where this is implemented is the configuration of the OpenStack Nova compute service, with the factory being implemented here. And yes, it was an effort to understand this example which eventually made me carry out some research and write this blog post – expect to see a bit more on OpenStack soon on this blog!
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655881984.34/warc/CC-MAIN-20200703091148-20200703121148-00224.warc.gz
CC-MAIN-2020-29
12,178
50
https://en.worldwidescripts.net/animation-sequences-for-android-38521
code
The Android Animation Sequences Pack #1 provides numerous tween animation sequence resource files for use within your Android applications. Works with Android API Levels 3+ Each animation resource is saved in its own file. Over two dozen animations are included in this package, including: The “Jiggle” family of animations that cause the target control to move around “randomly.” The “Spin” family of animations that cause the target control to rotate. The “FlyIn” family of animations that cause the target control to fly onto the screen. And many more! 22 November 10
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948126.97/warc/CC-MAIN-20180426105552-20180426125552-00449.warc.gz
CC-MAIN-2018-17
629
10
http://brewwiki.win/wiki/Post:Wonderfulfiction_Dragon_Kings_SonInLawblog_Chapter_713_Three_Against_One_Whos_Victory_cap_wink_readingp3
code
Post:Wonderfulfiction Dragon Kings SonInLawblog Chapter 713 Three Against One Whos Victory cap wink readingp3 Brilliantfiction Dragon King's Son-In-Law novel - Chapter 713 - Three Against One! Who's Victory?! tall drain -p3 Novel - Dragon King's Son-In-Law - Dragon King's Son-In-Law Chapter 713 - Three Against One! Who's Victory?! learned flat The gold s.h.i.+eld shone while sucking in the ashes of your religious natural herbs. Then, it devoured a handful of Qiu Niu's blood stream that had been on the ground and infected Qiu Niu's tummy. Zhao Kuo condensed his sword energies and instantly drawn the Black colored Dragon Spike. Then, the 40,960 sword energies concentrated on the Dark Dragon Surge, which makes it black yet s.h.i.+ny. the cook and housekeeper's complete and universal dictionary pdf “Your lifestyle doesn't really issue, but Zi's happiness is of significant worth!” he idea. Whitened mists have been encircling them… that was the Demon Ocean! Normally, he wouldn't show up in the Nine Dragon Palace. Nevertheless, when he heard that Hao Ren is in the Nine Dragon Palace from Zhao Yanzi, he was worried that Hao Ren would perish here. Which had been why he instantly compelled one other three ocean dragon clans to work alongside Eastern Seas and reveal the position of the Nine Dragon Palace. Then, he utilised his complete chance to open up a hole on the assortment growth in the Nine Dragon Palace in order that he could endeavor in by himself. “Sorry!” Su Han stepped lightly on the longsword and had taken the Nuwa Natural stone with the very stop in the range. “What? You will get out from my Seven Eliminating Selection?” Qiu Niu was stunned. Should they didn't have got to combat an effective foe, he could reduce Hao Ren by 50 percent regarding his sword! Woman Zhen acquired selected a fantastic time for Su Han because she believed it was actually when Qiu Niu was weakest! Qiu Niu had never required Su Han, Hao Ren, and Zhao Kuo so that you can reach this place! 
rise of the undead legion She understood there had been G.o.dly products stored in the Nine Dragon Palace but didn't know there were clearly ten! Ding, ding, ding, ding… The black colored palace begun to crumble, and 15 of decorative treasures set about showing up. Hao Ren believed that he acquired been told this audio somewhere ahead of, so he shouted out noisy. The gold s.h.i.+eld was very petty. It had been such as a modest dog and organised grudges. Considering the fact that Qiu Niu attack it, it desired vengeance. Nevertheless, Hao Ren coming over to the Nine Dragon Palace was harmful and was not undertaking Zhao Yanzi appropriate. Bam! Qiu Niu searched very vulnerable, but it really instantly shook its tail. The fantastic s.h.i.+eld shone while sucking up the ashes in the psychic herbal remedies. Then, it devoured a number of Qiu Niu's blood stream which has been on the ground and infected Qiu Niu's stomach area. is faceless void good Now that he observed Hao Ren was safe and sound, he was relieved but in addition mad. Whoos.h.!.+ The gold s.h.i.+eld that have been resting by Hao Ren's feet out of the blue bounced up. The simple truth is, he was just behind Su Han and Hao Ren a little. Having said that, the tower was surrounded by heavy mist, hence they weren't capable of seeing the other. read avoid the death route Out of the blue, in all places was pitch dimly lit. Actually, he was just behind Su Han and Hao Ren a bit. Having said that, the tower was flanked by dense mist, in order that they weren't able to see the other. Su Han waved lightly to recover her longsword. Whoos.h.!.+ The fantastic s.h.i.+eld that had been lying down by Hao Ren's ft abruptly bounced up. Bam! Qiu Niu clawed in front, and also that directly knocked away the greater number of than 80,000 sword energies and Su Han's longsword. He couldn't use any kind of his dharma treasures and methods! He was at the stage where he got just started cultivating! 
If his world possessed not fallen, he wouldn't care if there had been 100 optimum point Qian-levels cultivators. Nevertheless, he was at the smallest part of his kingdom, and the the outdoors essence wasn't streaming perfectly. That had been why he arrived at the Nine Dragon Palace to start with. The 15 old G.o.dly things in stories were definitely all stored on this page! Even Zhao Kuo sensed the chill to his your bones, and this man s.h.i.+vered. Su Han utilized her strategy, and snowfall fell in the area. She was already very skillful along with her strategy. Now that she was at optimum Qian-stage, she was not a compel to become reckoned with! shameless great marshall Usually, he wouldn't show up in the Nine Dragon Palace. Even so, as he listened to that Hao Ren was in the Nine Dragon Palace from Zhao Yanzi, he was anxious that Hao Ren would pass on listed here. Which had been why he immediately forced another three water dragon clans to do business with Eastern side Sea and disclose the positioning of the Nine Dragon Palace. Then, he employed his full power to opened an opening within the assortment creation in the Nine Dragon Palace so that he could opportunity in by themselves. Not alone was Qiu Niu the ancestor of the dragons, but he have also been the grandmaster at helping to make treasures. The Hundun Perfect Fireplace which he had could dissolve the fantastic s.h.i.+eld simply! the clockwork universe Su Han utilised her approach, and snowfall decreased in the community. She was already very skilled together procedure. Since she was at optimum point Qian-amount, she was not a compel being reckoned with! Su Han's longsword simply let out ice cold and well-defined icicles that behaved like sharp swords, and she pierced it into Qiu Niu's dragon scales. Su Han's longsword just let out freezing and sharp icicles that acted like very sharp swords, and she pierced it into Qiu Niu's dragon scales.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00412.warc.gz
CC-MAIN-2023-14
5,908
44
https://forum.cosmicpvp.com/threads/galaxy-needs-active-members.302881/page-3
code
Discussion in 'Recruitment' started by Masterville, Jan 20, 2019. Discord name: [bad] g0d #2536 Hours per day: 4 to 5 on and off Basework or pvp: Basework Extra info or questions for me: I’ll do pvp sometimes too Hey daddy its virtual national we gotta get me hero and ill get u some godly crates ~mum credit card cough~
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250592261.1/warc/CC-MAIN-20200118052321-20200118080321-00180.warc.gz
CC-MAIN-2020-05
322
6
https://www.thegeekdiary.com/asdf-command-line-interface-for-managing-versions-of-different-packages/
code
“asdf” is a powerful command-line interface (CLI) tool that simplifies the management of multiple versions of different packages. It provides a convenient way to install, manage, and switch between various versions of programming languages, runtimes, frameworks, and other software packages. With asdf, developers can effortlessly handle versioning complexities, ensuring compatibility and flexibility within their development environments. Here are the key features and functionalities of asdf:

- Version Management: asdf is specifically designed to manage versions of different packages. It supports a wide range of programming languages and tools, including but not limited to Python, Ruby, Node.js, Java, Elixir, Erlang, and more. Users can easily install and switch between multiple versions of these packages, allowing them to work on different projects or maintain compatibility with specific application requirements.
- Simple Command-Line Interface: asdf offers a user-friendly command-line interface, making it easy to install, manage, and switch between package versions. Users can utilize simple commands to install specific versions of packages, list available versions, set the active version, and switch between different versions as needed. The intuitive CLI enables efficient package management without the need for manual installation or complex configuration.
- Plugin System: asdf features a plugin system that allows users to extend its capabilities by adding additional package managers. These plugins enable support for additional programming languages and tools, allowing users to manage an even broader range of software packages through a unified interface. The plugin system enhances the flexibility and versatility of asdf, making it adaptable to various development environments.
- Dependency Management: asdf takes care of managing package dependencies and their respective versions.
It ensures that the correct versions of dependencies are installed and used, avoiding conflicts or compatibility issues between different packages. This simplifies the process of managing dependencies and reduces the effort required to set up and maintain a consistent development environment.

- Version Overrides: asdf allows users to define version overrides on a per-project basis. This means that developers can specify a particular version of a package to be used for a specific project, regardless of the globally set version. This feature ensures that projects remain isolated and can have their own unique package versions, facilitating project-specific requirements and maintaining consistency across different development projects.
- Scriptable and Automatable: asdf is designed to be scriptable and automatable, allowing developers to incorporate it into their build systems or continuous integration workflows. Users can leverage the command-line interface and integrate asdf commands into scripts or automation pipelines to manage package versions seamlessly. This facilitates the setup and maintenance of consistent development environments across different stages of the software development lifecycle.
- Community-Driven and Extensible: asdf benefits from an active and supportive community of users and contributors. The community actively maintains and updates plugins, provides documentation, and offers support through forums and chat channels. This ensures that asdf remains up-to-date, compatible with the latest package versions, and well-supported by the community.

1. List all available plugins: # asdf plugin list all
2. Install a plugin: # asdf plugin add name
3. List all available versions for a package: # asdf list all name
4. Install a specific version of a package: # asdf install name version
5. Set the global version for a package: # asdf global name version
6. Set the local version for a package: # asdf local name version

In summary, asdf is a command-line interface that simplifies the management of multiple versions of different packages. With its user-friendly CLI, plugin system, dependency management capabilities, and version overrides, asdf provides developers with a convenient way to handle versioning complexities and maintain consistent development environments. Its scriptability, automation support, and vibrant community make asdf a valuable tool for managing package versions across a wide range of programming languages and tools.
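For instance, the per-project override set by `asdf local` is persisted in a `.tool-versions` file in the project directory (the tool names and version numbers below are hypothetical examples):

```
# .tool-versions, created by commands such as "asdf local nodejs 18.16.0"
nodejs 18.16.0
python 3.11.4
```

Checking this file into version control lets every developer on the project resolve the same tool versions.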
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474688.78/warc/CC-MAIN-20240227220707-20240228010707-00876.warc.gz
CC-MAIN-2024-10
4,396
22
https://about.psyc.eu/HTTP
code
HyperText Transfer Protocol

Yeah yeah, you know about it. TBL and everything. The original HTTP was just... throw a connect at a host, tell him GET this or that, then you receive the file as is and the socket is closed, which means that all the time TCP needed to get its window size properly set was thrown away, as for the next click a new connection is made. HTTP was designed with an idea in mind, a vision, the same vision that led us to the URL (we call it a uniform) concept and the HTML. And yet over the years it got fixed up quite impressively and now even serves the job of a File Transfer protocol better than FTP (that's one of the reasons why we'd like to have an HTTP Server built into PsycZilla).

So what does this really have to do with PSYC?

Well, as Larry Masinter suggested, psyced and the early WWW have one thing in common: they support most existing technologies before coming up with a new one. That was easy with Gopher and WAIS, but won't be with IRC and XMPP.

What else?

psyced comes with a builtin HTTP server that features a little CMS based on textdb. It is also a good framework for dynamic web applications, especially when they involve multi-user interaction or realtime aspects like pushing data; and yes, even AJAX and Comet are fancy newer words for things that symlynX has been doing with psyced Webchat technology since 1997. In psyced, persons can define content for their web profile and have it customized using style sheets, and so can places. With the advent of PsycZilla we even have web applications that do not use HTTP at all, because the HTML content is delivered directly via PSYC. This could lead to a truly Private Web.

the POST superhighway

For clients behind firewalls we are planning to support the POST superhighway, as Jim Gettys once called it. I'll fill you in on the details at a later time, although symlynX servers already support it. Essentially, you can access any interface protocol from the HTTP port.
Scalability and the REST

Fundamentally, HTTP's query/response architecture still gets in the way of bidirectional messaging a lot, even after keep-alive and pipelining have been introduced (which took a decade!). In exchange it offers caching and expiry logic, which so-called RESTful HTTP applications such as OStatus make use of. For PSYC applications that aspect isn't very interesting, since PSYC models all data distribution in a publish/subscribe way, where everything you need is delivered to you as it happens, so you don't need to cache. Also, the PSYC syntax has the advantage of letting you deliver complex data with multiple binary elements (encrypted data for example) in a single message, while HTTP requires you to introduce an actual syntax to convey complex data, popularly JSON or XML, both not being capable of raw binary data. In theory you could use HTTP headers for structured payload, but that's not the intended way of operation.

2011: Google improves on HTTP with SPDY

While some folks develop entire APIs on top of interserver HTTP exchanges (see Activity for an example), others realize that HTTP isn't fast enough even for its original purpose, let alone intense interserver traffic. Read about SPDY.

How to implement the web over PSYC
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710417.25/warc/CC-MAIN-20221127173917-20221127203917-00743.warc.gz
CC-MAIN-2022-49
3,220
15
https://saurav-samantray.medium.com/using-mock-servers-in-microservice-development-oversimplified-d0fdc9a6ef3f
code
Consider a simple microservice architecture. Microservice A produces a message to a Kafka topic. Microservice B is a consumer for the same topic and reads the message. Microservice B also needs to make a synchronous HTTP REST call to Microservice C to fetch additional information, compute data, and then save it to the database. Team A manages Microservice A, Team B manages Microservice B, and …you get the idea.

A developer named Saurav 😊 in Team B has been assigned the development of a new feature in Microservice B. After completing local development and unit testing, Saurav publishes his changes to GitHub and gets them deployed to the development environment for testing. However, he finds out that Team A and Team C don’t have a development environment! We all have faced this problem. What can Saurav do?

Skip development testing!

He could do away with development testing, move directly to the release environment, and hand it to the QA for validation. Well, you know how untrusting the QAs are 🤨. So what other option does Saurav have?

Wait for the other teams to set up their development environment

He could reach out to Team A and Team C to set up their development environments and share the details for integration. Saurav definitely doesn’t want to end up like the meme above.

Mock with Microcks

Microservice A, being a Kafka producer in an event-driven architecture, should have an AsyncAPI specification that describes and documents its message-driven API in a machine-readable format. Similarly, Microservice C, a REST-based service, should have an OpenAPI specification describing a standard, language-agnostic interface to its HTTP API. We can leverage these specification documents to spin up mock instances of Microservice A and Microservice C.

Mocking Kafka Producer — https://microcks.io/blog/apache-kafka-mocking-testing/

Mocking HTTP Rest API — https://microcks.io/documentation/using/advanced/templates/
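As a sketch of what such an OpenAPI document for Microservice C might look like (the path, field names, and example values here are hypothetical; Microcks pairs request and response examples by name to produce live mock responses):

```yaml
# Hypothetical OpenAPI 3 spec for Microservice C; a tool like Microcks
# can import such a file and serve the example response as a mock.
openapi: 3.0.3
info:
  title: Microservice C
  version: "1.0"
paths:
  /customers/{id}:
    get:
      operationId: getCustomer
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
          examples:
            sample:          # request example, matched by name
              value: "42"
      responses:
        "200":
          description: Additional customer information
          content:
            application/json:
              examples:
                sample:      # response example paired with the request above
                  value:
                    id: "42"
                    segment: "gold"
```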
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00139.warc.gz
CC-MAIN-2024-18
1,943
16
https://toc.123doc.org/document/2692723-16-where-to-draw-the-line-kernel-versus-user-space-drivers.htm
code
16. Where to Draw the Line: Kernel Versus User-Space Drivers

Starting a Local X Server

One Size Doesn’t Fit All

An X server can be started in different ways to suit different types of use. In this chapter, we’ll examine the techniques available for starting X and discuss the best approach for some common scenarios, including:

• Presenting a graphical login display (Section 2.4)
• Configuring a home system with two graphical login displays, so that two people can alternately use it without disturbing each others’ work (Section 2.7)
• Starting X on a server system only when it is really needed, in order to conserve system resources for more important uses (Section 2.9)
• Starting an X server that is displayed within another X server (Section 2.11)

We’ll also take a look at how to use Virtual Terminals (Sections 2.2 and 2.10), how to simulate a mouse when a bad configuration leaves you without one (Section 2.12), and how to terminate X (Sections 2.13 and 2.14).

Linux, FreeBSD, and many other modern Unix kernels support a virtual terminal (VT) (or virtual console) capability, which provides independent virtual video cards. The monitor, keyboard, mouse, and physical video card are associated with only one VT at a time, and each virtual video card can be in a different display mode: some may be in character mode while others are in graphical mode. This enables multiple X servers and nongraphical sessions to be active at the same time. To switch virtual terminals on Linux, press Ctrl-Alt-Fx (where Fx is a function key from F1 through F12, corresponding to a virtual terminal from VT1 to VT12; you can also use Alt-Fx if the current VT is in character mode). When you are connected to a virtual terminal that isn’t running an X server, you can use Alt-LeftArrow to go to the previous VT and use Alt-RightArrow to switch to the next VT.
Some Linux distributions also configure the Windows key to advance to the next VT; you can also switch virtual terminals using the switchto or chvt commands (Section 2.10). By default, most Linux distributions boot up with six nongraphical logins on VT1–VT6 and one X server running on VT7. FreeBSD provides a very similar VT capability, except that the VTs are numbered starting at zero, and the key combination to switch VTs when in character mode is Alt-Fx. Virtual terminals are numbered one off from Alt keys, because there is no F0 key. Therefore, if you’re on VT3 in character mode and press Alt-F1, the kernel will take you to VT0. System V Release 4.x systems such as UnixWare use Alt-SysReq followed by Fx to switch virtual terminals. Although most kernels support more than 12 virtual terminals, this capability is rarely used because you can’t usually use the keyboard to go directly to higher-numbered VTs.

Starting a Raw X Server Manually

The simplest way to start an X server is also the least-used technique: simply type the name of the server at a shell prompt:

$ X

Most Unix command and program names are lowercase, but the X server is an exception. You must enter “X” as a capital letter. X is actually a symbolic link to the installed server binary, which is named Xorg if you’re using the X.org server, XFree86 if you’re using the XFree86 server, and so on. If an X server is already running on display :0, you will get an error message, because the network port will already be in use. In that case, you can give the new X server a different display number:

$ X :1

By default, the X server will start on the first unused VT (usually VT8).
You can request a specific VT by specifying it on the command line:

$ X :1 vt10

You can also specify that a particular configuration file should be used, or a particular ServerLayout within a configuration file:

$ X :1 -config configFile
$ X :1 -layout layoutName

Chapter 2: Starting a Local X Server

The downside to starting the X server this way is that no clients are started. Until you start some manually, you’ll be left staring at a blank screen with only a mouse pointer to amuse yourself. You can start the X server and a client at the same time like this:

$ X :1 -terminate & sleep 2 ; DISPLAY=:1 xterm

The -terminate option will cause the X server to exit when the last client disconnects, and the sleep 2 ensures that the X server has time to start before the xterm client attempts to connect to it: not usually required, but it’s good practice to ensure that your commands will work reliably. Note that this command line does not start a window manager or a desktop environment, so you will not be able to move or resize the xterm window, start additional programs (except by typing in the terminal), or set the keyboard focus. The advantage of starting X directly is that you have precise control over the X server startup options and the list of clients displayed, which is perfect for a kiosk.

Using a Display Manager to Start the X Server

One of the possible layers of an X-based GUI is a display manager, which is the graphical equivalent of the login program. It is usually configured to start one or more local X servers to present a greeter dialog that collects the user’s name and password. Once the user is authenticated, the display manager starts some preconfigured clients—typically a session manager that goes on to start a window manager and desktop environment such as KDE or GNOME. Many display managers let you select a session type, which will in turn activate a specific desktop environment. When the user exits the client(s), the process starts over again.
Three display managers are in common use. The biggest difference between them is the toolkit upon which they are built:

- GDM: GNOME Display Manager (built on GTK)
- KDM: KDE Display Manager (Qt)
- XDM: X Display Manager (Xt)

KDM and GDM offer some advanced features not present in the older XDM program, such as a picture-based face browser and the ability to select the desktop environment that will be loaded once the user authenticates. You may be able to recognize the display manager used on your system by its appearance, since each toolkit has a distinctive look. Alternately, you can search the process table to see what's running, using the following:

$ ps -e | grep '[gkx]dm'

If you prefer BSD-style arguments, or if your version of ps permits only those arguments, use ps ax in place of ps -e.

Enabling or Disabling the Display Manager at Boot Time

Many commercial Unix systems and Linux distributions borrow a boot technique pioneered in Unix System V: the use of runlevels to start and stop software sets. Table 2-1 lists the standard runlevels.

Table 2-1. The standard runlevels observed by most System V Unix variants and Linux distributions

  Runlevel           Description
  s or S             Single-user mode: no per-runlevel scripts executed; /etc/inittab not required (emergency use only)
  1                  Single-user maintenance mode
  2                  Multiuser, nonnetworked mode (the default runlevel for Debian-based systems, including Ubuntu, but rarely used on other systems)
  3                  Multiuser, networked mode
  5                  Multiuser, networked mode with local graphical login
  7, 8, 9, a, b, c   Valid but rarely used

Runlevel s or S is a special case: it's used internally by init and normally shouldn't be entered directly by the user, who can enter runlevel 1 for single-user mode instead. But it has a special quality: it's the only runlevel that does not require /etc/inittab and is therefore useful in emergency recovery situations.
When you boot a Linux or Unix system into runlevel 5 (the default for most distributions except Debian/Ubuntu when an X Window server is installed), the display manager will start automatically. To prevent this, you can boot your system into runlevel 3 by editing the kernel boot parameters, either temporarily or permanently.

To temporarily boot into a different runlevel if you are using the grub bootloader, take the following steps:

1. At the start of the system boot process, access the boot menu (you may or may not need to press a key to do this; watch the screen prompts closely), highlight the menu entry you wish to use, and press A (to append kernel arguments).
2. You will be taken into an editor mode that lets you adjust the kernel boot arguments. Add the number 3 at the end of the argument line and press Enter to boot.

If you are using a system that uses Xen virtualization, the kernel entry specifies the hypervisor instead of the Linux kernel. To edit the kernel boot parameters, press E (for Edit) at the main grub menu, which will display the details of your boot configuration. Select the module line that specifies the kernel file and press E. Add the desired runlevel (3) at the end of this line and press Enter to save your change, then press B to boot.

Or, if you are using the LILO bootloader:

1. At the start of the system boot process, access the LILO: prompt, then type the name of the boot configuration you wish to use (the Tab key will display the list of possibilities) and append the number 3 at the end (for example, linux 3).
2. Press Enter to boot.

You can change the runlevel of the system after it has been booted by executing the init or telinit command with the desired runlevel:

# init 3

To return to the graphical login state, switch to runlevel 5:

# init 5

Permanently changing the default runlevel requires editing /etc/inittab.
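Before editing, you can confirm the current default mechanically. This sketch pulls the default runlevel out of an inittab-style file; it runs against a sample copy here, and assumes the standard id:runlevel:initdefault: layout of the line:

```shell
# Extract the default runlevel: the second colon-separated field
# of the initdefault entry (demonstrated on a sample file).
cat > /tmp/inittab.sample <<'EOF'
# Default runlevel.
id:5:initdefault:
EOF
awk -F: '$3 == "initdefault" { print $2 }' /tmp/inittab.sample   # prints: 5
```

Running the same awk command against the real /etc/inittab tells you which runlevel the system will enter at the next boot.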
The runlevel is controlled by this line:

id:5:initdefault:

Change the second field to 3 to disable the automatic start-up of the display manager.

When you boot into any runlevel that does not start X automatically, you can start the display manager manually by typing its command name (gdm, kdm, or xdm) at a root shell prompt.

By default, Debian-based systems (including Ubuntu) start the display manager in all runlevels. You can easily disable the startup of the display manager in runlevel 3 by executing these commands:

# update-rc.d -f gdm remove
# update-rc.d gdm start 31 2 4 5 . stop 31 1 3 .

What Started the Display Manager?

Depending on your system configuration, the display manager may be started directly by init, or through an init script. It's useful to know how the display manager starts so that you can make changes and so that you know what will happen if the display manager exits (or crashes!).

Started Directly by init

In some Linux distributions, the display manager is started directly by init. For example, in Fedora's /etc/inittab, you will find this entry:

# Run xdm in runlevel 5
x:5:respawn:/etc/X11/prefdm -nodaemon

In the second line, the second field specifies that this command is executed only in runlevel 5, and the third field directs that it is to be respawned (executed again) if it exits.

The script /etc/X11/prefdm will execute /usr/sbin/autologin to automatically log in one user if that feature has been set up. Otherwise, it will start one of the display managers (GDM, KDM, or XDM) depending on the specification in /etc/sysconfig/desktop. If that file does not exist, then the first display manager found in alphabetical order will be used.

Since init has been set up to respawn the display manager automatically, it is relatively easy to load and test changes to the display manager configuration file: just kill the display manager!
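To see which display manager you would be killing, the process-table check shown earlier can be wrapped in a small script. This is a sketch; it matches only the common process names discussed in this chapter:

```shell
# Report the first display manager process found, if any.
dm=$(ps -e | grep -E ' (gdm|gdm-binary|kdm|xdm)$' | head -n 1)
if [ -n "$dm" ]; then
    echo "display manager process: ${dm##* }"
else
    echo "no display manager found"
fi
```

The reported name tells you whether to kill xdm/kdm by name or to target gdm-binary, as described next.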
If you're using XDM or KDM, you can kill the display manager by name:

# killall xdm

Killing the display manager will also kill all the display manager's child processes, including X servers, so if you do this through the graphical interface, expect your session to disappear!

GDM is a wrapper script for gdm-binary, so if your system uses GDM, you'd have to kill the display manager with the following:

# killall gdm-binary

Alternately, you can restart GDM immediately using its restart script:

# gdm-restart

Or you can specify that a restart should take place as soon as everyone is logged out:

# gdm-safe-restart

In FreeBSD, the display manager is started by init, but the configuration information is in /etc/ttys instead of /etc/inittab:

ttyv8 "/usr/local/bin/xdm -nodaemon" xterm off secure

The fourth field can have a value of on or off to enable or disable the display manager.

Started by an init Script

Some Linux distributions use startup scripts to execute the display manager. For example, on a SUSE system, the display manager is started by /etc/rc.d/rc5.d/S17xdm (which is a symbolic link to /etc/rc.d/xdm). Similar to the prefdm script used by Fedora, this script finds your preferred display manager using a configuration file (in this case, /etc/sysconfig/displaymanager) or uses XDM if that file is missing.

Since this is a regular init script, it is executed only once at startup; when the display manager terminates, it will not be restarted. After editing the display manager configuration file, you can reinvoke the XDM init script with the restart option to put your changes into effect:

# /etc/rc.d/xdm restart

Or you can use the SUSE shortcut:

# rcxdm restart

Starting Multiple X Servers Using a Display Manager

On a home computer, it can be useful to configure the display manager to start two or more X servers. You can then flip between them using the virtual terminal mechanism (Section 2.2).
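When several servers are running, it helps to know which display lives on which VT. The pairing can be read off the server command lines; this sketch parses sample command lines of the form shown in this chapter (on a live system you would feed it the output of ps -e -o args= filtered for X processes, and it assumes the :n and vtN arguments are present):

```shell
# Map each X server command line to its display number and VT.
# (Sample data here; on a real system the :n and vtN arguments
#  may be absent, in which case :0 and the first free VT apply.)
printf '%s\n' \
    '/usr/bin/X :0 vt7 -audit 0' \
    '/usr/bin/X :1 vt8' |
awk '{
    disp = "?"; vt = "?"
    for (i = 1; i <= NF; i++) {
        if ($i ~ /^:[0-9]+$/) disp = $i
        if ($i ~ /^vt[0-9]+$/) vt = $i
    }
    print "display " disp " is on " vt
}'
```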
A few years ago, I used this configuration on my home computer, so that when I wasn't using it, other members of my family could change VTs and log in without disturbing my work. When they finished, I would just switch back to my VT and continue where I left off. (Now I've extended this configuration by adding additional video cards, keyboards, mice, and monitors so we can log in simultaneously.)

Starting Multiple X Servers Using XDM (or Early Versions of KDM)

XDM and older versions of KDM (pre-3.4) use the Xservers file to configure the number of servers started by the display manager. The location of this file varies; try /etc/X11/xdm/Xservers or /opt/kde3/share/config/kdm/Xservers. This is a fairly standard Xservers file:

# $Xorg: Xserv.ws.cpp,v 1.3 2000/08/17 19:54:17 cpqbld Exp $
#
# Xservers file, workstation prototype
#
# This file should contain an entry to start the server on the
# local display; if you have more than one display (not screen),
# you can add entries to the list (one per line). If you also
# have some X terminals connected that do not support XDMCP,
# you can add them here as well. Each X terminal line should
# look like:
#       XTerminalName:0 foreign
#
:0 local /usr/bin/X

Lines that start with # are comments. The active line, at the bottom, specifies that display 0 is a local X server, and gives the command line to be used to start that X server.

To start additional X servers, simply add lines at the bottom of this file:

:1 local /usr/bin/X :1 vt8
:2 local /usr/bin/X :2 vt9

Although it's not strictly necessary to specify the VT on these lines, it's a good idea, because then you will confidently know which display is paired with which VT.

If you wish to specify a different configuration file for one of the X servers, you can add a -config argument to the command:

:3 local /usr/bin/X -config configFile :3 vt10

This must all appear on a single line in the configuration file.
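Because the extra entries follow such a regular pattern, they can be generated rather than typed. This sketch emits Xservers lines that pair display :n with VT n+7, matching the examples above:

```shell
# Emit Xservers entries for displays :1 through :3, pinning each
# display :n to vt(n+7) so the display/VT pairing stays predictable.
for d in 1 2 3; do
    echo ":${d} local /usr/bin/X :${d} vt$((d + 7))"
done
# prints:
# :1 local /usr/bin/X :1 vt8
# :2 local /usr/bin/X :2 vt9
# :3 local /usr/bin/X :3 vt10
```

Appending the output to the Xservers file (and restarting the display manager) brings the extra servers up.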
Starting Multiple X Servers Using KDM

If you're using KDE 3.4 or higher, the local X server configuration is controlled by the kdmrc file (/etc/X11/xdm/kdmrc or /opt/kde3/share/config/kdm/kdmrc). In the [General] section of that file, you can specify a list of local displays to be started by adding a StaticServers key:

StaticServers=:0,:1

If this line is missing, the default is to start only display :0.

Starting Multiple X Servers Using GDM

GDM is configured using two files: the first specifies default values, which may be overwritten when GDM is updated, and the second provides local values, which are never overwritten. The name and location of these files vary; on an Ubuntu system, the defaults are in /etc/gdm/gdm.conf and the local settings are in /etc/gdm/gdmcustom.conf, while on a Fedora system, the defaults are in /usr/share/gdm/defaults.conf and the local settings are in /etc/gdm/custom.conf.

There are two sections in the GDM default configuration file that deal with local X servers. The first defines the command to be used to start a new server, and it looks like this:

[server-Standard]
name=Standard server
command=/usr/bin/X
flexible=true

The name field is for your reference only. The last line enables GDM to start additional servers on the fly when instructed to do so by the gdmflexiserver command.

Once it has been defined, the configuration is associated with a display number by a servers section elsewhere in the file:

[servers]
0=Standard

This will start a single server with a display number of :0. To configure GDM to initially start additional servers with the same configuration, add a servers section to the local configuration file:

[servers]
0=Standard
1=Standard
2=Standard

If you wish to use a different configuration for a specific display, you can add a new configuration section to the local configuration file:

[server-LowRes]
name=Low-resolution server
command=/usr/bin/X -config /etc/X11/xorg.conf-lowres
flexible=true

Then specify that configuration for one of your displays:

[servers]
0=Standard
1=LowRes

GDM automatically adds an argument to the X server command to specify the display to be used.
Starting Additional X Servers on Demand Using a Display Manager

Recent versions of both GDM and KDM are capable of starting additional X servers on demand. This is useful when you occasionally want to use multiple X servers but don't want the extra overhead when only a single X server is in use. The GNOME developers call these additional servers flexible servers; the KDE folks call them reserve servers.

Starting Additional X Servers Using gdmflexiserver

The GDM display manager provides a command-line utility, gdmflexiserver, which communicates with a running gdm process and instructs it to start a new X server.

Assuming that you have flexible=true in at least one of your GDM server configurations (Section 2.6), which is the default, the GNOME menu contains a New Login option in the System group. If you're not running GNOME, you won't have a New Login option on the menu. If you prefer to use a shell prompt, simply run gdmflexiserver:

$ gdmflexiserver

If more than one X server is already active, you will be given the option of switching to an existing session or starting a new one; otherwise, a new X server will be started and a new session login prompt will appear automatically. Your existing X session will be locked automatically (via the screensaver) and can be unlocked with your password when you switch back to the original VT. If you don't want this automatic locking, add the -l option to the preceding command line.

gdmflexiserver can also start a nested X server (using Xnest) and present a session login prompt there:

$ gdmflexiserver -n

Starting Additional X Servers Using KDM

Although it doesn't provide a command-line interface, KDM can start new sessions. Before you can use this, you must edit the kdmrc file. In the [General] section, add a line that specifies some reserve servers:

ReserveServers=:2,:3

If you also have a StaticServers line (Section 2.7), make sure that no display numbers appear in both lists.
In order to start a reserve server, you must be running KDE as the desktop environment (this isn't a given, since you can run any desktop using any display manager). Select "Start new Session" from the Switch User menu group on the K Menu, and a new X server will start with a session login prompt. If you lock your session (either using the menu option or by configuring session locking for the KDE screensaver), a "Start new Session" button will appear on the locked-screen password dialog as well. You can switch between open sessions, including character-mode VT logins, by using the Switch User options on the K Menu or screensaver password dialogs (as an alternative to using the switch-VT key combinations; Section 2.2).

Starting an X Server with Clients Only When Needed

Systems used primarily as network servers don't need to have an X server running all the time and should be configured to boot into runlevel 3. This saves some memory that is best used for network services. However, it's handy to run an X server when performing administration on a server system; for example, to start a web browser to search for documentation.

The xinit utility can be used to start an X server with specified clients, but the startx wrapper script provides a friendlier interface. After logging in at a character-based login prompt, simply execute:

$ startx

startx permits you to specify which client is to be started as well as any options for the X server. A double-dash (--) is used to separate the client arguments (left) from the X server options (right). You can explicitly specify a client to be started:

$ startx /usr/bin/xterm -bg yellow -geometry 180x50

Or you can specify the X server options to be used.
If an X server is already running on display :0, for example, you could specify that display :1 should be used for the new server:

$ startx -- :1

Or you can specify both the client to be started and some server options:

$ startx /usr/bin/xterm -bg yellow -geometry 180x50 -- :1 -config /etc/testconfig

When specifying a client for startx, the client command pathname must begin with a single dot or a slash; otherwise, it will be treated as an argument to the default client (typically xterm). Likewise, you can specify the pathname of the X server on the righthand side of the double-dash by using a pathname that starts with a dot or slash; if you omit the dot or slash, the value is treated as an argument to the standard X server (which is specified in ~/.xserverrc on a user-by-user basis or in /etc/X11/xinit/xserverrc as the system-wide default). For example, Xorg would be interpreted as an argument to the standard X server, while ./Xorg or /usr/local/test/Xorg would be interpreted as the name of an alternate X server.

To start multiple clients, create a shell script and specify that shell script on the startx command line.

startx is usually used without any arguments; it will then start an X server with a default set of clients. The clients are specified in the script ~/.xinitrc in your home directory (or, if that file does not exist, in a system-wide default script such as /etc/X11/xinit/xinitrc).
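The dot-or-slash rule is easy to trip over, so here is a sketch of the decision startx applies to the first client-side word. This is a simplified model of the behavior described above, not startx's actual code, and the file names are invented for illustration:

```shell
# Classify a word the way startx classifies its first client argument:
# a leading "/" or "." names a client program; anything else is an
# argument to the default client (typically xterm).
classify_client_word() {
    case "$1" in
        /*|.*) echo "client program: $1" ;;
        *)     echo "argument to default client: $1" ;;
    esac
}

classify_client_word /usr/bin/xterm   # prints: client program: /usr/bin/xterm
classify_client_word ./myclient       # prints: client program: ./myclient
classify_client_word -geometry        # prints: argument to default client: -geometry
```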
Software and website developer from Cape Town, South Africa. Ex-WooCommerce Support Ninja from WooThemes: support and customisation on WooThemes, WooCommerce, and WooCommerce extensions. In a nutshell, I am a software and website developer from Cape Town, South Africa. My passion is coding and building awesome, exciting, and functional websites for clients. I believe in honest business practices and walking the extra mile with my clients. I always strive to get the job done on time and keep my clients happy and interested. Software development, support, and maintenance. Custom PHP MVC frameworks.
- PHP & MySQL Certified Developer, C# .NET Certified Developer
- HTML5 Developer, CSS3 Developer
- C++ Developer, Drupal Developer
- 13 years' business experience, 5 years' retail experience
- 5 years' e-commerce experience
I am resorting to posting on here because every Google search for the past week leads me to nothing but guides on how to set up the Cisco router as a VPN server. This isn't what I want. I have a client who has a Windows Server in a colocation data center. The customer has to set up a VPN connection on each of the desktops and laptops that need to connect, which is becoming a usability nightmare. Would it be possible to drop in a Cisco 1841 and use that to "site-to-site" VPN into the Windows Server? No hardware firewall or Cisco box is on the other end. The server/clients are currently using PPTP, but I would most likely end up reconfiguring the server with L2TP or any other more secure protocol that the server and Cisco device both support. Unfortunately, placing hardware at the colocation site isn't an option due to the setup: the servers are virtual, and the extra rack space would add quite a bit to the monthly bill. I was pretty sure this would work, but I didn't want to try without hearing from someone who has attempted it.
Provides access to the OSSP UUID Library. This document does not supplant the OSSP UUID Library documentation. The OSSP UUID Library is not included in this distribution; the library source is available at the above URL. See the Issues section below for more information.
DCE 1.1 variant UUID of version 1
DCE 1.1 variant UUID of version 1 with random MAC address
DCE 1.1 variant UUID of version 3
DCE 1.1 variant UUID of version 4
DCE 1.1 variant UUID of version 5
Is UUID a uuid?
Is UUID the "nil" uuid?
Are UUID1 and UUID2 equal?
Are UUID1 and UUID2 not equal?
Does UUID1 order below UUID2?
Does UUID1 order above UUID2?
Does UUID1 order below or the same as UUID2?
Does UUID1 order above or the same as UUID2?
Returns a copy of the UUID.
Returns a uuid for the NAMESPACE.
Returns a uuid VARIANT, or the nil uuid when missing. The NAMESPACE and NAME are required for V3 and V5.
Returns a uuid from the external STRING representation.
Returns a uuid from the external BINARY-STRING representation.
Returns the external string representation of UUID.
Returns the external binary-string representation of UUID.
Returns the external text representation of UUID.
Returns the OSSP UUID Library version as an unsigned-long.
Errors generated by the OSSP UUID Library are signalled using a condition of (exn uuid), with properties message (the error message) and code (the error code). Argument errors are signalled using
Requires at least release 1.3.0 of the OSSP UUID Library. The use of the "uuid" prefix/name by the OSSP UUID Library is problematic: it is a generic name, often with an existing meaning. Reed Sheridan has pointed out that the Debian system renames the library, as an existing library of the same name already exists. Without the use of a build system that can deal with platform-specific conditions, installation of this egg might require manual intervention. The file "uuid-ossp-fix.c" has the references to the include file. Copyright (c) 2006, Kon Lovett. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
What do you guys think of Newsstand? Until Marco Arment's Magazine, I didn't use Newsstand at all, mainly because I can't really afford subscriptions (and I could just go to the website for free), and judging from the reviews, the apps range from bad to mediocre. Do any of you guys use it, and if so, which Newsstand apps do you use? Update (1/1/13): I think Newsstand is far more promising after using some more apps. I know Engadget is ballyhooed round these parts, but their Distro app is very good (and free). TNW is pretty good as well.
Hello. Attached are 2 scripts. The main IVR script and a Callback Queue script. The main script will determine if the callback option is offered to the caller. This is referred to as VIRTUALHOLD. It will prompt the caller for a callback number, then it will put a call into the CSQ to hold the caller's place in line. Next, the Callback Queuing script will, once the callback is at the top of the queue, loop for an available agent. Once an agent is READY, my objective is for the agent to be prompted to initiate the callback; once the call is answered, the remote person will be prompted to accept the call. If accepted, the call will commence. I don't know if this is the best way to handle it. I have passed session info between the scripts, but this may be causing trouble with the way I have it written. Any assistance would be appreciated. NOTE: when I answer a call as an agent after putting the agent in the READY state, I hear either a fast busy or a system message. I have also attached an IVR log.
What is Deep Learning? Everything you need to know!! What is Deep Learning? Deep learning is an artificial intelligence technique that imitates the workings of the human brain in processing data and creating patterns for use in decision making. Deep learning is a subset of artificial intelligence that is capable of learning, without supervision, from data that is unstructured or unlabeled. It is also known as deep neural learning or deep neural networks. Deep learning requires substantial computing power. High-performance GPUs have a parallel architecture that is efficient for deep learning. When combined with clusters or cloud computing, this enables development teams to reduce training time for a deep learning network from weeks to hours or less. Deep learning is also a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Deep learning learns from vast amounts of unstructured data that would normally take humans decades to understand and process. In deep learning, we don't have to explicitly program everything. The idea of deep learning isn't new; it has been around for quite a few years now, but deep learning and AI came into the picture over the most recent 20 years. How Does Deep Learning Work? Deep learning has developed alongside the digital era, which has brought about an explosion of data in all forms and from every region of the world. This data, referred to simply as big data, is drawn from sources like social media, internet search engines, e-commerce platforms, and online film services, among others. This huge amount of data is readily accessible and can be shared through applications like cloud computing. However, the data, which often is unstructured, is so immense that it could take decades for humans to comprehend it and extract the significant information.
Organizations understand the amazing potential that can come from unraveling this wealth of information and are increasingly adopting AI systems for automated support. Implementation of Deep Learning Deep learning is used in industries from automation to research. Automated driving, aerospace and defense, medical research, cancer treatment, industrial automation, object detection, electronics, and home assistants are some example fields of deep learning implementation. Machine Learning vs. Deep Learning Deep learning is a subset of machine learning which utilizes artificial neural networks to carry out the process of machine learning. These artificial neural networks are built like the human brain, with neuron nodes connected like a web. While traditional programs analyze data in a linear way, deep learning systems enable machines to process data with a nonlinear approach. Machine learning makes big data processing possible. A key advantage of deep learning networks is that they continue to improve as the size of your data increases. When choosing between machine learning and deep learning, you should consider whether you have a high-performance GPU and lots of labeled data. If you don't have either of those things, it may make more sense to use machine learning instead of deep learning. Deep learning is generally more complex, so you'll need at least a few thousand images to get the desired results. Having a high-performance GPU means the model will take less time to analyze all those images, as it processes the data model more effectively. *Image source: Internet
Subject: libtool problem with runtime shlib path propagation To: None <email@example.com, firstname.lastname@example.org> From: Matthias Drochner <M.Drochner@fz-juelich.de> Date: 02/13/2002 16:56:41 I've found a case where the runtime library search path is not set as expected (with i386-current and the newest pkgsrc/devel/libtool): A package builds a shared library (using libtool) which depends on X11 libraries. The linker call looks like libtool --mode=link cc -o lib.la *.lo -L/usr/X11R6/lib -Wl,-R/usr/X11R6/lib The "-Wl,-R" flag gets passed to the linker, the shared library looks well, "ldd" shows that its search path is OK. Then an executable is built which needs this library, but has no dependency on X libraries itself: libtool --mode=link cc -o mist path/to/lib.la The linking succeeds, but the resulting binary doesn't have the "/usr/X11R6/lib" in its runtime search path. The problem seems to be that "libtool" does store the "-L" part of the dependency in its lib.la file, but not the runtime path - if it is given by "-Wl,-R". "libtool" supposedly dtrt if the runtime path is given by a "-R" option. Otoh, while this is documented behaviour, it is not portable and hardly usable with these "gnome" style xxx-config scripts. It would be better if "libtool" handled "-Wl,-R" the same way like just "-R". Worth a PR? Or should I expect the linker itself to propagate runtime library search paths?
The Hide and Seek Tube Game What to do: - Tie the three objects to the string, leaving several inches space between each object. Be sure the string is long enough on each end so you can drop it all the way through the tube and then pull string through. - Talk about each object and have the children watch as you push each one into the tube, until all objects are hidden in the tube. Continue to pull the string slowly. - Ask children to guess what will come out of the other end first, next and then last. - Repeat several times using different objects. Change the order of the objects. - Rather than cardboard paper towel tubes, use a shoe box, a cereal box, oatmeal box or small paper sacks. For older children, add more objects to the string.
Resilient Design 101 Queueing theory is perhaps one of the most important mathematical theories in systems design and analysis, yet only a few engineers learn it. This talk teaches the basics of queueing theory and explores the ramifications of queue behaviour on system performance and resiliency. It aims to give you practical skills that you can apply to better build and tune your systems. The talk covers:
https://www.internetnews.com/blog/sabayon-linux-5-3-adds-anaconda/
A new version of Sabayon Linux is out this week, and for new users I think it offers a good entry point into the world of Gentoo (and Sabayon, of course). Sabayon is a Gentoo Linux based distribution providing a slick user interface and the latest open source applications. Gentoo Linux is a really solid source-based Linux distribution that has been around for over 10 years, but it has always been a bit difficult to install (though it has gotten better in recent years thanks to a Gentoo installer). Sabayon itself seems to be all about making Gentoo more accessible, so it makes sense that the new release has a new installer. The new installer isn't born from the Gentoo Linux project, though; it's from Red Hat. Sabayon Linux 5.3 now uses the Anaconda Linux installer, which is used in Fedora and has been a part of Red Hat Linux distributions for as long as I can remember. In my opinion, Anaconda is one of the things that helped make Red Hat popular in the late '90s. The modern Anaconda has continued to improve, and it makes for an easy yet powerful installation experience. It's great to see open source at work, isn't it?
https://meta.stackexchange.com/questions/43383/communication-breakdown-on-the-global-rep-recalc-a-userbase-uninformed
On the back of the Global Reputation Recalc there has been a flurry, nay, swarm of questions along the lines of:
Why my reputation decrease from XXXX to YYY?
How are rep change for today, brokened?
Users asking such questions are being downvoted and their questions closed as duplicates. Even though a large base of users have been around the site for a while, many, it seems, were not aware of the realignment of question upvotes and other such clearing-of-house matters (like deleted and migrated posts). As not everyone reads the blog, nor hangs out reading the posts on Meta, have the crumbs of information about the Global Reputation Recalc failed in delivery?
https://fvwmforums.org/t/fvwm-2-6-0-finally-released/2057
In no particular order (and neither an exhaustive list): EWMH support, including support for managing different window types (“docks” for example). Colorset commands replace FvwmTheme (which is now deprecated) WindowStyle command applies a style to a specific window only. XFT fonts are supported, per locale. Focus-specific policies supported as styles (FP*, !FP*) FocusStyle command allows styling of focus policies specifically. Dynamic actions can be sent to most modules whilst they’re running to customise their behaviour (SendToModule). Gettext support introduced for output strings (most commonly seen on Mouse gestures (libstroke) bindings are available. New module FvwmProxy to manage moving windows around. New module FvwmWindowList to make the list of running windows more customisable over the builtin “WindowList” command. Variable placeholders ($w, $d, $c) deprecated in favour of newer formats. Also, the number of variable placeholders has been expanded. - $[func.context], $[w.desk], $[w.layer], etc. Nesting of placeholders is also allowed. FvwmEvent: The PassId option is deprecated as actions always run within a Many new conditional commands with different options. New style command “Unmanaged” to make certain windows completely divorced from FVWM’s control. New command FakeKeyPress. Window-specific key/mouse bindings. (Bindings no longer have to be global.) Many new style options: Plus many others… Window states are now available to identify windows to perform “groups” of fvwm-menu-desktop uses the XDG menu specification. PNG/SVG support for icons. The name style names match against can be augmented by the X-resource “fvwmstyle”. New fvwm-convert-2.6 script to convert older fvwm 2.4.x config files. There are of course significant bug fixes gone into this release; far too many to list here, and many of them so specific to the development version alone that listing them is out of context when comparing them to the last stable release. 
Overall, a number of bug fixes for memory management and for ease of managing windows have happened.

Upgrading from FVWM 2.4.X -> 2.6.0

It is worth spending some time looking at the dependencies at FVWM’s disposal. None of them are required per se for FVWM to work: - SM (Session Manager) Note that not all of these are external libraries – some of them are merely facets of an XServer’s configuration, but a lot of it will depend on the platform FVWM is running on.

Preferred way of upgrading your configuration file

In the old stable (2.4.X), the path to the default user config file is now: by default, although the older paths of: are still supported; just deprecated in favour of ~/.fvwm/config. Note that the “INITIALIZATION” section in “man fvwm” lists the other locations FVWM might also look in to find a valid config file. Since there have been some syntax changes, a handy script – “fvwm-convert-2.6” – can be used to convert a 2.4.X style config file. Please see the man page for “fvwm-convert-2.6”.

It’s taken almost ten years for FVWM 2.6.0 to arrive. In that time, FVWM has had contributions from numerous people – many of them can be found Whilst it is unfair to single any one particular person out, it is without question that I (Thomas Adam) would like to pay particular homage to the following people (again in no particular order; and far from exhaustive):
- Dominik Vogt – for being one of the most useful sources of information on FVWM to date, and making it such a great program.
- Viktor Griph – for implementing some cool features
- Dan Espen – just for being completely reliable with any request I’ve thrown at him, and for helping to cobble this release together.
- Olivier Chapuis – for writing far more features than can be listed here, but responsible for things like: new conditional commands, original EWMH support, XFT support, gettext support, numerous bug fixes,
- Mikhael Goikhman – for perllib, Session management support, various perl helper scripts FVWM uses (fvwm-config, fvwm-menu-desktop, etc.), as well as, along with Olivier Chapuis, starting fvwm-themes.
- Scott Smedley – converting documentation to use Docbook, window-specific binding support, FvwmTabs, perllib fixes, FvwmButtons enhancements.

But that list is far from the full picture. There are numerous people on support, and that’s just as important as development. So I would like to also thank the following people (in no particular order): This list is not exhaustive either, but they have in particular been the life-blood of the channel in ensuring it runs smoothly and helping provide the best support possible. Thank you all!

During the course of FVWM’s development, the community lost one of its members, Alex Wallis (“awol” on IRC). Alex founded the IRC channel and brought together a community on IRC for FVWM, which still continues to this day. He was also a large contributor to the fvwm-themes project. Sadly, Alex is no longer with us. He would have been proud to see the FVWM he knew being released as stable, so this release is dedicated to him. May he rest in peace.

The official fvwm homepage: Questions about the release can be asked on our mailing list: fvwm at fvwm.org Bugs can be reported to the fvwm-workers mailing list.

– Thomas Adam (2011/04/15), on behalf of the FVWM community.
https://ez.analog.com/thread/83227-adv7611-without-any-hpa-and-edid
I have a video source that doesn't have any Hot Plug, DDC or CEC signals, so I handled the related pins as stated in the user guide of the ADV7611. For example, HPA_A and DDCA_SDA are left floating, DDCA_SCL is connected to ground via a 10K resistor, and so on. I am trying to receive a 720p60 signal but I cannot get any signal. I am trying to init the 7611 with the scripts from the design support files. In my custom circuit, the ADV7611 is connected to a Blackfin 609, and I am trying to modify and use the 7842 drivers from the EI3 decoder board. If anyone can hint at what I might be missing, it would be very helpful.
https://discourse.suttacentral.net/t/performance-and-font-optimization-strategy/333
I’m developing in my head a strategy for improved performance, particularly font handling. Google Chrome has this super nifty feature where you can emulate another device (particularly mobile devices) including connections, so you can see how the site behaves. At the moment our load time on an unprimed cache is really bad for poor mobile connections, I’m talking 10s or more. I saw that for real when I used an android phone and going to suttacentral.net was a traumatically bad experience. I believe the standard to strive for is 2s maximum load time, as users tend to say ‘stuff this’ if a site takes more than 2s to load. I am quite confident we could cut load time by 80% - or to put it another way, after 2s, be able to display the content of the site, even if not absolutely everything is loaded. In terms of size (all sizes are gzipped) for components of the home page, it goes like this: - CSS: 14kb. Although 80% of css is not needed for displaying the home page (which includes header and footer), 14kb is small by any standard. However it might be worthwhile inlining some of the CSS as an inline stylesheet, for example the quicker the browser sees a font-face declaration, the quicker it can start downloading the font. Important on high latency connections. - Fonts ~400kb: The fonts in total to render the home page come to about 400kb, mostly Skolar (there are also some font variants not used on the homepage). The two most important fonts are the header fonts; Skolar sc and Hetu sc. The load time reduction strategy would go like this: - Divide the JS into essential and deferable. Essential is probably like 2kb. - Divide the fonts into essential and deferable. Essential fonts are inlined into the CSS using Base64, this saves one or more requests and means the fonts are available the instant the CSS is parsed. I consider the essential fonts to be skolar sc and hetu sc, both for displaying the logo correctly, and because a FOUT from normal -> small caps is quite jarring! 
Skolar sc is quite large, so the inline component should be subsetted to ‘SutaCenrl’, reducing it to ~5kb. Deferable fonts would be loaded in the background without delaying page rendering; a fallback font would be used until the webfont is downloaded. A 500ms (or perhaps 1000ms) FOUT guard could be used for the benefit of fast connections. - The final high-powered option is to inline stuff into the HTML. Both the JS and CSS are candidates for this (inlining as in inserting a <script>...</script> element into the HTML, not the other sense as attributes on elements). What happens at the moment at Jhanagrove, with its 700ms round trip to geosynchronous orbit and an unprimed browser cache, is this:
0ms: Browser pointed at suttacentral.net.
700ms: DNS server responds with IP address, browser requests page.
1400ms: HTML arrives, parses it and sends requests for CSS and JS assets.
2100ms: CSS arrives, parses it and sends requests for webfonts.
2800ms: Webfonts arrive. Page can finally be displayed.
What happens if the CSS and fonts are inlined into the HTML is this:
0ms: Browser pointed at suttacentral.net.
700ms: DNS server responds, browser requests page.
1400ms: Everything arrives. Parses it and renders the page.
Inlining can thus easily cut load time by 50% on high latency connections. All of the big sites do inlining for this reason. Inlining is very powerful but like most very powerful things it should be used sparingly. With quite minimal changes we could get the assets download required down from 500kb to about 100kb - the 80% load time reduction. I believe that is the low hanging fruit; if you went for higher hanging fruit a 90% reduction would be possible. I believe targeting the ‘unprimed cache’ situation is important because generally about 50% of page views are from new visitors. Maybe 50% of those users are in the fast internet belt around USA/Europe, but that still leaves 25% of page views being on poor connections.
Implementing a more explicit ‘fall back font’ strategy will also go well with pure WOFFification. For example inlining quickly blows out if you have to inline more than one font type, but if we are using only WOFF, it is easy to just inline the WOFF font, end of problem.
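The Base64 inlining step itself is mechanical; here is a sketch (not the actual SuttaCentral build code, and with the font bytes faked) of turning a subsetted WOFF file into an inline @font-face rule:

```python
# Emit an @font-face rule with the font embedded as a Base64 data URI, so the
# browser has the font the instant the CSS is parsed; no extra round trip.
# In practice you'd read the ~5kb 'SutaCenrl' subset of Skolar sc from disk.
import base64

def inline_font_face(family, woff_bytes):
    b64 = base64.b64encode(woff_bytes).decode("ascii")
    return (
        "@font-face {\n"
        f"  font-family: '{family}';\n"
        f"  src: url(data:font/woff;base64,{b64}) format('woff');\n"
        "}"
    )

css = inline_font_face("Skolar SC", b"fake-woff-data")
print(css)
```

The resulting rule is pasted into the inline stylesheet; the deferable fonts keep ordinary url(...) sources and load in the background.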
https://www.mindprod.com/jgloss/encoding/foot/koi8.htm
Case to Abolish The Electoral College
If you live in California (a blue state) or Utah (a red state) your vote has no effect on the outcome of the election. You can vote either way. It won’t make any difference. In contrast, if you live in a swing/battleground state, such as Florida, your vote can, in theory, flip the entire election. Everybody’s vote should count equally, no matter where they live. There should be no especially electorally privileged Americans. In 2000 and again in 2016 the electoral college vote (the official vote) anointed one candidate while the popular vote selected another, creating huge resentments. The president should be the one wanted by the most Americans, by the popular vote. Why is there an electoral college? Three reasons:
- In former times, people tended to vote for favourite sons. The electoral college was intended to stop the largest state from always picking the president.
- The founding fathers were not nearly as enthusiastic about democracy as we are now. They felt a layer of indirection with the electoral college could block wrong-headed popular enthusiasms. Electoral college members are not required by law to elect the presidential candidate the people voted for them to represent.
- The number of electoral votes assigned to a state gives it more or less weight than it would naturally have by population. This was a non-obvious political tool to rig elections.
~ Roedy (1948-02-04 age:70)
http://serverfault.com/questions/494522/huge-performance-difference-between-two-postgresql-8-4-releases-on-freebsd-8-2-v?answertab=oldest
I have a question about PostgreSQL performance differences. I am developing a web application on a Mac OS X system, and the web application has to be deployed on a FreeBSD server. On a page of the system there is an ajax-controlled data entry field. In this field you can enter a city name, and when you enter two characters or more the system starts to look for cities in the database and presents a drop-down of the cities whose names begin with those 2 (or more) characters. All this seemed to work well until I deployed it on FreeBSD servers. The first deployment was alright, but the second deployment shows a huge performance difference. This is a list of test results:

system1
proc: Intel Core 2 Duo 3.06 GHz, mem: 8GB
OS: OS X 10.6.8, 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun 7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
DB: PostgreSQL 8.4.5 on i386-apple-darwin10.5.0, compiled by GCC i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664), 64-bit

Tests system1:
query  parameter    time in ms
01     'de%'        909
02     'de%'        886
03     'den%'       132
04     'den %'      115
05     'den h%'     115
06     'den ha%'    117
07     'den haa%'   95
08     'den haag%'  100

host: system1, guest: Parallels virtual machine, proc: 2 cpu, mem: 1 GB
system2
OS: 8.2-RELEASE FreeBSD 8.2-RELEASE #0: Thu Feb 17 02:41:51 UTC 2011 firstname.lastname@example.org:/usr/obj/usr/src/sys/GENERIC amd64
DB: PostgreSQL 8.4.7 on amd64-portbld-freebsd8.2, compiled by GCC cc (GCC) 4.2.1 20070719 [FreeBSD], 64-bit

Tests system2:
query  parameter    time in ms
01     'de%'        1178
02     'de%'        857
03     'den%'       298
04     'den %'      233
05     'den h%'     134
06     'den ha%'    132
07     'den haa%'   132
08     'den haag%'  136

host: system1, guest: Parallels virtual machine, proc: 2 cpu, mem: 1 GB
system3
OS: 8.3-RELEASE FreeBSD 8.3-RELEASE #0: Mon Apr 9 21:23:18 UTC 2012 email@example.com:/usr/obj/usr/src/sys/GENERIC amd64
DB: PostgreSQL 8.4.11 on amd64-portbld-freebsd8.3, compiled by GCC cc (GCC) 4.2.2 20070831 prerelease [FreeBSD], 64-bit

Tests system3:
query  parameter    time in ms
01     'de%'        7096
02     'de%'        7012
03     'den%'       6228
04     'den %'      6237
05     'den h%'     6145
06     'den ha%'    5640
07     'den haa%'   5512
08     'den haag%'  5561

The parameters mimic the way data is entered in the ajax application. These results are from queries issued directly to the database through pgAdmin3, not through the application. I have not put the query here because I don't think it should be relevant. The databases are identical and the same query is used on all three database instances. Now I can understand the performance difference between system1 (the OS X system, on bare hardware) and system2 (a virtual machine running FreeBSD). What I don't understand is the huge performance difference between system2 and system3, which are both VMs running under the same host. The tests were done with each VM running individually. Does anyone have a clue why this could be happening?
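The actual query isn't shown in the post, so purely as a hypothetical stand-in (SQLite in place of PostgreSQL, and an invented one-column schema), the access pattern being timed looks roughly like this:

```python
# Stand-in for the ajax city lookup: a prefix search issued per keystroke,
# timed the way the pgAdmin3 tests above time each parameter.
# SQLite here is only to make the sketch runnable; the post is about PostgreSQL.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (name TEXT)")
conn.executemany("INSERT INTO cities VALUES (?)",
                 [("Den Haag",), ("Denver",), ("Delft",), ("Utrecht",)])

def lookup(prefix):
    t0 = time.perf_counter()
    rows = conn.execute(
        "SELECT name FROM cities WHERE name LIKE ? ORDER BY name",
        (prefix + "%",)).fetchall()
    return [r[0] for r in rows], (time.perf_counter() - t0) * 1000

names, ms = lookup("Den")
print(names)   # ['Den Haag', 'Denver']
print(f"{ms:.2f} ms")
```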
http://communications.emory.edu/resources/identity/school-logos.html
An institution as large and varied as Emory requires a consistent visual identity that unifies its various affiliates. Each Emory school has its own signature for print and web, and you can download the logo full set for each school below. Each folder contains horizontal, square, and vertical logos in Emory PMS 280 blue, black, and reverse white. Some schools have two logo configurations and some have three, depending on the name of the school. Downloads include a .zip-formatted compressed folder structure that includes all available variations, including versions with and without the Emory shield.
https://blog.gordon-chan.net/2017/11/10/listed-top-20-technology-blog-2017/
Honestly I was more shocked than surprised when a website called MrDiscountCode.hk sent me an email saying my blog was listed in their Top 20 Technology Blogs 2017, as I only use my blog to share the websites that I have made, or techniques that I think could work well for others. Of course I subsequently received a follow-up email asking if I could add their badge above, which I know is a way not only to help me by having my blog listed on their site, but also to have their site URL mentioned in my blog so as to improve SEO. Even though I know very little about MrDiscountCode.hk, as long as it is not a spam site, and it is a Hong Kong startup, I am happy to receive this appreciation and recognition. And this may also be a chance to really examine how two sites mentioning each other can affect SEO. I know my blog is still very inadequate in knowledge sharing, and it is still a long journey for me to learn how to run an e-commerce business better. Such recognition is a small encouragement for me to share more, and more often. Hope you find my site really helpful.
https://esj.com/articles/1999/01/06/rogue-wave-without-limits_633718593540683691.aspx
Rogue Wave without Limits Like the component software market, Rogue Wave Software Inc. (www.roguewave.com) is evolving. The company recently announced that it is expanding its horizons by adding a component framework to its existing software components product line. "We eventually want to be able to go from modeling to component assembly tools, provide a set of components, provide tools to build your own components, and then give you an application framework on which those components will live," says Tom Kim, a product manager at Rogue Wave. As a result, he adds, Rogue Wave will be able to provide solutions earlier in the development cycle than it has in the past. The first product in this new framework is RW-Metro, which includes a set of customizable tools for mapping C++ class definitions to database schemas. Kim explains that RW-Metro tackles what he calls the "model mismatch problem -- components developed using relational database models don’t work well with components developed using application models, which focus on behavior." RW-Metro can import the object hierarchy and definitions a developer creates into an object-relational mapping tool. The generated map enables those objects to persist in a relational database. According to a Rogue Wave white paper, developers creating object-relational maps are often forced to convolute the object model to suit the relational model. As a result, the object model becomes bound to the specific relational model that existed when the mapping was done. If the relational model is changed, the object model would have to be changed as well. According to William Blundon, executive vice president of the Extraprise Group Inc. (www.extraprise.com), an e-business consulting firm, RW-Metro will extend Rogue Wave’s market reach, "by making [its] technology more accessible to an IT organization, not just the sort of rocket scientist people in the past that have been doing object-oriented development." 
Blundon agrees that developers have been struggling with object-relational mapping, and that RW-Metro can provide them with a strong approach to mapping components to legacy data in relational databases. Future plans for RW-Metro include integration of a reverse (relational-to-object) mapping tool called DBFactory, generation of Java source code, support for XML, CORBA and transaction processing monitors, addition of an application server framework and integration with modeling tools, such as VisualCase from Rogue Wave’s Stingray division and Rational Rose from Rational Software Corp. (www.rational.com). Rogue Wave also plans to announce additional products that will fill out its frameworks product line.
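RW-Metro itself targets C++; as a language-neutral sketch of what object-relational mapping does (names and API invented here, not Rogue Wave's), consider generating a table definition from a class description:

```python
# Toy object-relational mapping (invented for illustration; RW-Metro's actual
# tooling is C++ and far richer): map a class's typed fields to a CREATE TABLE
# statement so instances can persist in a relational database.
SQL_TYPES = {int: "INTEGER", str: "VARCHAR(255)", float: "DOUBLE PRECISION"}

def create_table_ddl(cls):
    cols = ", ".join(f"{name} {SQL_TYPES[py_type]}"
                     for name, py_type in cls.__annotations__.items())
    return f"CREATE TABLE {cls.__name__.lower()} ({cols})"

class Customer:
    id: int
    name: str
    balance: float

print(create_table_ddl(Customer))
# CREATE TABLE customer (id INTEGER, name VARCHAR(255), balance DOUBLE PRECISION)
```

The "model mismatch" Kim describes shows up as soon as the class hierarchy or relational schema evolves independently, which is exactly what a generated, regenerable map is meant to absorb.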
https://www.istudiotech.in/2018/04/17/website-framework-trending-web-frameworks-for-this-year/
Delivering the product your customers demand, keeping your products and services up to date, choosing a web framework suited to the industry and service you provide, and marketing your website properly in digital media all contribute to an organization's success in reaching its customers. Every contributing factor and every contributing person needs to stay properly up to date to satisfy the targeted audience. Developers who adopt the latest trends and technologies of web development and deliver a forward-looking website will lessen the burden on marketers. Here are some trending frameworks that will have a great impact on the web development industry in 2018.

Angular JS is a JavaScript-based open source web framework used for developing web pages. If you are an active user or follower of trending web frameworks then you must already know about this framework. Its operation is based on the MVC (Model View Controller) architectural pattern; it is not full stack but a front-end framework used in your web pages. This Google product gets updated with every version: the second version was much better than the first, adding a lot of features, and Google has now released its fourth version with all the latest required features.

If Angular JS is a well-recommended framework for front-end development, then Laravel is known for being among the best for back-end development. Like Angular, it follows the MVC architectural concept. This back-end framework, developed in 2011, possesses all the required and in-demand features: a high-end dependency manager, up-to-date utilities for application deployment and maintenance, varied methods of approaching relational databases and, most importantly, an orientation towards syntactic sugar.
Most importantly, this back-end framework has kept on updating since its inception, which makes it the most important PHP framework.

React.js is a framework created for maintaining user-interface-based web applications. It runs on a JavaScript library, is maintained by Facebook, and can be adapted for many web applications. It can easily handle large applications: running, maintaining and updating them over time, data included. React.js was implemented with the intention of developing challenging, real-time applications that implement complex algorithms.

Another JavaScript framework is lightweight and highly efficient; it is not just a framework but a complete environment which provides all the tools and functionality developers require. It is a high-performing, scalable and secure application platform with fast and complete connectivity to varied network applications. Its very existence is due to its ideology of non-blocking, event-driven input and output from the application.

Ruby on Rails (RoR) is one of the few web development frameworks which keeps developers highly engaged and interested in application development. Well-established brands like Airbnb, Hulu and Basecamp were developed using RoR. The framework is open source, available free of cost, and runs on Linux. It will carry out your development work in a much smoother way across the entire process cycle.

Symfony is built on PHP in order to create complex applications in a highly scalable and flexible manner. This framework is used above all for developing applications for large projects. The recent version of Symfony (version 3.1) is flexible enough to create any kind of application.
The framework works with various established open source platforms like phpBB, Piwik and Drupal. Symfony is a bundled pack of PHP components, an application framework, a community and a philosophy.

ASP.NET is a web framework developed by Microsoft in 2002. It is run by the Common Language Runtime (CLR), which helps coders write ASP.NET programs under the .NET framework. The framework is well known for handling dynamic websites and creating complex applications. As the most preferred framework among web developers, it alone holds 15% of the market share in the web framework industry.

Yii possesses much of the same functionality as ASP.NET in providing web-based applications. It is well known for working with applications performing repetitive tasks, and it features a built-in component model, database abstraction layers, event-driven programming and a modular application architecture. One notable feature of Yii is that it runs applications in a shorter time, with high efficiency, and in a way that can be customized to market demand. Going one step further, this framework gives you the advantage of upgrading or downgrading its versions as required during installation.

Meteor.js is an out-and-out JavaScript framework covering both the front end and the back end. Its primary purpose is to make things easy and convenient, so that the output can also be efficient and scalable. The amount of code this framework requires is comparatively small, which helps developers finish quickly. It is a bundle of JavaScript frameworks, library collections and packages, and it has also borrowed concepts from peer frameworks in order to support strong applications.

CakePHP is a highly demanded open source framework developed in PHP; it utilizes the MVC concept together with data-mapping concepts.
The most important reason behind this framework's fame is that developers feel highly comfortable using it for application creation, and it has a well-structured, simple guide to help developers use CakePHP for web page creation. At the end of the day it is not the features present in a framework but its functionality that decides its place in a project. Understand your requirements, study the functionality your project demands, and then select the best fit.
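Since MVC comes up for Angular, Laravel, Symfony and CakePHP alike, here is a stripped-down sketch of the pattern itself (generic Python, not any one framework's API):

```python
# Minimal MVC: the model holds state, the view renders it, and the controller
# mediates user actions. Every framework above elaborates this same split.
class Model:
    def __init__(self):
        self.items = []

class View:
    def render(self, items):
        return "\n".join(f"- {item}" for item in items)

class Controller:
    def __init__(self, model, view):
        self.model, self.view = model, view

    def add_item(self, item):   # a "user action" arriving at the controller
        self.model.items.append(item)

    def page(self):             # controller asks the view to render the model
        return self.view.render(self.model.items)

c = Controller(Model(), View())
c.add_item("first post")
print(c.page())   # - first post
```

The payoff of the split is that the model can be persisted or tested without a view, and the view can be swapped (HTML, JSON, CLI) without touching business logic.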
http://www.cplusplus.com/forum/general/84559/
Hey, I was working on a few of my programs using dynamically allocated arrays and they never seem to leave me alone. I even deleted them when I was done using them; however, I got 'Heap Corruption' errors. My computer has not been able to connect to the internet and Windows gives me random errors; when I run applications I get peer-connection-reset errors and NULL pointer exceptions. Please help! Any pre-boot programs you guys recommend?
http://www.itprotoday.com/microsoft-sql-server/trying-recover-without-ldf
I'm trying to recover a database that has one master data file (.mdf) and one log data file (.ldf). I got the .mdf file from a standard OS backup tape. But sp_detach_db wasn't run on the database before the .mdf backup, so I don't have the .ldf file. I know that the stored procedure sp_attach_single_file_db can recreate the log file in some cases, and I've tried to use it to simply reattach the database, but I get the following error:
Server: Msg 1813, Level 16, State 2, Line 1
Could not open new database 'db'. CREATE DATABASE is aborted. Device activation error. The physical file name 'C:\Program Files\Microsoft SQL Server\MSSQL\data\db_log.LDF' may be incorrect.
Can I Recover My Database?
SQL Server Books Online (BOL) clearly documents that you must run sp_detach_db on a database to let the database reattach with sp_attach_db or sp_attach_single_file_db. Using sp_detach_db ensures transactional consistency within the database and ensures data integrity. However, if complete data integrity isn't important or you know that no data has changed recently, you might be able to use the undocumented Database Consistency Checker (DBCC) REBUILD_LOG command that Listing 1 shows to attach the database. REBUILD_LOG will recreate a new log file and let you reattach a database even if a good log file doesn't exist. However, the data might not be transactionally consistent because you might have thrown away active and uncommitted transactions. Use this command only for emergency recovery when you move data to a new database. Use caution when you apply any undocumented technique in a production environment. I strongly encourage you to contact Microsoft Product Support Services (PSS) for recovery of production data rather than use undocumented recovery techniques. But sometimes, tips such as this one are good to have in your bag of tricks.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863684.0/warc/CC-MAIN-20180520190018-20180520210018-00435.warc.gz
CC-MAIN-2018-22
1,853
5
https://www.alibabacloud.com/help/doc-detail/84514.htm
code
This topic describes the mechanism for Function Compute to access databases in a virtual private cloud (VPC). Function Compute dynamically allocates instances for running functions. Therefore, you cannot add the dynamic IP addresses of these instances to a database's whitelist. Specifically, you cannot control the access of Function Compute to a database by using a whitelist. In addition, based on the principle of least privilege, we recommend that you do not add the IP address 0.0.0.0/0 to the whitelist of your database in your production environment. To resolve the preceding problem, you can create a VPC and grant Function Compute the permissions to access resources in the specified VPC. You can deploy your database in a secure VPC and enable the service to which the function belongs to access the VPC. - The client sends a request to Function Compute. - Function Compute accesses the database in the specified VPC when the service to which the target function belongs is enabled to access VPC resources. VPC is an isolated cloud network built for private usage. Function Compute and the database reside on different VPCs. Therefore, Function Compute must use an elastic network interface (ENI) to access the database across VPCs. You must authorize an ENI to access the specified VPC, and bind the ENI to the function instance. For more information, see VPC access. Note: VSwitches in the same VPC can communicate with each other. If the VSwitch of the VPC where the database resides is not in a zone supported by Function Compute, you can create a VSwitch in a zone supported by Function Compute in this VPC and configure the ID of the new VSwitch in the VPC configuration of Function Compute to achieve interconnection by using VSwitches in different zones. - Function Compute returns the obtained data to the client.
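The whitelist problem described above reduces to matching a caller's IP against fixed CIDR ranges: since Function Compute instances get dynamic IPs, no finite whitelist can cover them. A minimal illustration using Python's standard ipaddress module (the addresses and range are made up for the example):

```python
import ipaddress

# A hypothetical database whitelist of fixed CIDR ranges (e.g. the VPC's
# VSwitch range). Dynamic function-instance IPs fall outside any such list.
whitelist = [ipaddress.ip_network("10.20.0.0/16")]

def allowed(ip: str) -> bool:
    """Return True if the caller IP falls inside any whitelisted range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in whitelist)

print(allowed("10.20.3.7"))    # True  - a caller inside the VPC range
print(allowed("203.0.113.9"))  # False - a dynamically allocated public IP
```

This is why the recommended design routes Function Compute into the database's VPC via an ENI instead of trying to whitelist individual addresses.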
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565376.63/warc/CC-MAIN-20210125061144-20210125091144-00696.warc.gz
CC-MAIN-2021-04
1,831
8
http://www.linuxquestions.org/questions/linux-newbie-8/installer-prompt-602246/
code
Originally Posted by vonedaddy I am playing around with installing fc8 on a HP proliant DL360 and I need to type linux isa at an installer prompt in order to select the smart array. The problem is I am really new to linux and am not sure where the installer prompt is. I am booting to a Live CD and select install to HD at the moment. can someone help me with this? where is the installer prompt? Is it just a terminal opened from the live CD? Given the slight differences in language used between here (UK) and "your side of the pond" I'd guess that that's exactly what it means. I don't know whether it would mean that the command is required as user or root (normally indicated by either $ for user and # for root - these can be changed of course), so open a terminal and try it, it will either work or not. Try as user, if you get a negative response from user, then log in as root and try it.
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171163.39/warc/CC-MAIN-20170219104611-00234-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
897
5
https://www.ironistic.com/ironisticlogo_blackbgdonly/
code
Meet the team: Integrated Digital Marketing Services How 280 Characters Will Change Your Life #Hashtags: A How-To on Social Media’s Most Popular Character Basic Guide to LinkedIn Etiquette The Benefits of Being Spied On
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807660.32/warc/CC-MAIN-20180217185905-20180217205905-00452.warc.gz
CC-MAIN-2018-09
254
7
https://meta.stackoverflow.com/questions/392764/why-is-this-answer-getting-downvoted
code
I just need some help or an explanation of what is wrong with this answer: https://stackoverflow.com/a/59644587/4171008. How can I improve it in order not to get another downvote? I don't think you can really improve that answer - it will stay "not useful" for future visitors pretty much with any edits. There is really no reason to answer spelling mistake questions as answers - a comment would be enough. The only thing that may make it marginally better is to change "because it's not written right:" (that sounds like some real problem that needs to be fixed) into "you misspell For the second problem (undeclared variable) I seriously doubt that no one ever asked about it in Python... Again, a comment/duplicate would be a better option.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00490.warc.gz
CC-MAIN-2021-10
744
5
https://www.my.freelancer.com/projects/internet-marketing-seo/generate-million-page-views-daily/
code
Looking for someone who can do this. Please provide your information, how you plan on doing this, where the traffic is being generated from, details people details..... Do not just say you can do it, then when we ask you how you have no answer or clue. Must be able to provide proof. Will need to show real life examples that we can test with Google Analysis. Tests proving you can send 1 million + unique page views daily to websites we want traffic directed to. Unique ip addresses for each Traffic generating Marketing software Basically software that will send real human traffic viewers (not bots) to websites in the database No bots, no fake traffic, nothing a user has to download, no buying traffic, no pop ups/unders, no having to go to their site first. Real human viewers/Real traffic Software program that promotes any site. Marketing software Software should send targeted visitors (real people not bots) to affiliate sites to promote all its sites The software you create is supposed to go out and find the traffic sources to redirect to other website locations. That is the whole concept of the software. No bot traffic, no fake traffic, nothing a user has to download, no spyware / malware, no toolbar, no buying traffic, no pop ups/unders, no having to go to their site first. The software you create will somehow go out find traffic ( scrape it, redirect it, generate it, etc) from websites across the internet and then send those people (real people) to other website urls that are in the system. 
The whole concept and purpose is that you are to develop software that will go out and find the traffic sources on its own (using some method) and then send / redirect that traffic to the websites in the system The criteria the software should have: -Software should be able to operate 24/7 -Software should send targeted traffic to thousands of sites in real time -Should be able to extract site urls from any file type for fast input -Should be able to change the websites (where the traffic is going to) in real time -Need to be able to control how many, where and time frames traffic is being sent in real time -All sites that are put in to the software must be targeted traffic -Visitors must be real people (not bots) -Software must provide the number of visitors that the site has received -Software should be simple to use, ( a child must be able to easily use) -Software should produce 1 million - 50 million + unique targeted visitors per day to website urls -Unique ip addresses for each -Must use Windows, Ubuntu, Mac,Linux -The software should have English and French languages -The software should have options for increased performance -The software should have a mode of jobs with full explanations Software should send real human targeted traffic visitors (real people not bots) Software needs to send actual real human traffic, not bots. Purpose of this is so that the client can send real viewing people to websites so they view and buy their products and services. Has to be real people being directed / redirected to these sites. Bots will not be able to buy products or services, only inflate bogus page views with no purpose and be banned. Example: You are using Chrome firefox right now. All of a sudden We send you right now through your browser to [login to view URL] and you are actually there looking at the [login to view URL] webpage. Now you are there you look around maybe buy something. You, real person, not a bot.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825349.51/warc/CC-MAIN-20181214022947-20181214044447-00200.warc.gz
CC-MAIN-2018-51
3,464
31
https://forum.arduino.cc/t/enigma-machine-simulator-using-arduino-uno-and-touchscreen-lcd/272052
code
This enigma machine simulation using an Arduino Uno and a Touch Screen LCD focuses on an accurate implementation of the three wheel Enigma I and Enigma M3 and the four wheel Enigma M4. The double stepping anomaly is correctly implemented as well. The plugboard can be left empty, set up with the standard 10 plugs or up to 13 plugs can be used. When 10 plugs are used, the Uhr switch, a device that was invented to further scramble the plugboard settings can be used. You can read more about this at the project web page: The source code for the enigma engine used is available at the Google Drive linked below. A couple of examples are available that decode encrypted text provided by APC Magazine and another product called the Enigmuino. The files in question are called EnigmaSerial.ino, EnigmaSerialAPCMAG.ino and EnigmaSerialEnigmuino.ino At the link below there is a product video showing how the machine is set for operation. EPUB KXQY DDPR YURG DTKB WOZI UVN
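The double-stepping anomaly mentioned above is easy to see in code. Below is a minimal, illustrative Python model of just the stepping mechanism (rotor wiring, reflector and plugboard are omitted; the notch positions are those of historical rotors I, II and III):

```python
# Notch positions (0 = A) at which each rotor kicks its left neighbour:
# rotor I -> Q, rotor II -> E, rotor III -> V.
NOTCH = {"I": 16, "II": 4, "III": 21}

def step(pos, order=("I", "II", "III")):
    """Advance rotor positions [left, middle, right] by one key press."""
    left, middle, right = pos
    if middle == NOTCH[order[1]]:
        # A middle rotor sitting on its own notch steps itself AND the
        # left rotor -- the double-stepping anomaly.
        middle = (middle + 1) % 26
        left = (left + 1) % 26
    elif right == NOTCH[order[2]]:
        middle = (middle + 1) % 26
    right = (right + 1) % 26  # the right rotor steps on every key press
    return [left, middle, right]

def show(pos):
    return "".join(chr(ord("A") + p) for p in pos)

# Classic demonstration: ADU -> ADV -> AEW -> BFX, where the middle rotor
# steps on two consecutive key presses.
pos = [0, 3, 20]  # A D U
for _ in range(3):
    pos = step(pos)
print(show(pos))  # BFX
```

An accurate simulator such as the one described here must reproduce exactly this behaviour, since a machine without the anomaly produces a different rotor period and will not decrypt genuine Enigma traffic.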
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00409.warc.gz
CC-MAIN-2022-33
967
5
https://www.influxdata.com/blog/benchmarking-leveldb-vs-rocksdb-vs-hyperleveldb-vs-lmdb-performance-for-influxdb/
code
Benchmarking LevelDB vs. RocksDB vs. HyperLevelDB vs. LMDB Performance for InfluxDB For quite some time we’ve wanted to test the performance of different storage engines for our use case with InfluxDB. We started off using LevelDB because it’s what we had used on earlier projects and RocksDB wasn’t around yet. We’ve finally gotten around to running some basic tests against a few different engines. Going forward it looks like RocksDB might be the best choice for us. However, we haven’t had the time to tune any settings or refactor things to take advantage of specific storage engine characteristics. We’re open to suggestions so read on for more detail. Before we get to results, let’s look at the test setup. We used a Digital Ocean droplet with 4GB RAM, 2 Cores, and 60GB of SSD storage. The next release of InfluxDB has a clearly defined interface for adding different storage engines. You’ll be able to choose LevelDB, RocksDB, HyperLevelDB, or LMDB. Which one you use is set through the configuration file. Under the covers LevelDB is a Log Structured Merge Tree while LMDB is a mmap copy on write B+Tree. RocksDB and HyperLevelDB are forks of the LevelDB project that have different optimizations and enhancements. Our tests used a benchmark tool that isolated the storage engines for testing. The test does the following: - Write N values where the key is 24 bytes (3 ints) - Query N values (range scans through the key space in ascending order and does compares to see if it should stop) - Delete N/2 values - Run compaction - Query N/2 values - Write N/2 values At various steps we checked what the on disk size of the database was. We went through multiple runs writing anywhere from 1 million to 100 million values. Which implementation came out on top differed depending on how many values were in the database. For our use case we want to test on databases that have more values rather than less so we’ll focus on the results for the biggest run. 
We’re also not benchmarking put operations on keys that already exist. It’s either inserts or deletes, which is almost always the use case with time series data. The keys consist of 3 unsigned integers that are converted into big endian bytes. The first is an id that would normally represent a time series column id, the second is a time stamp, and the third is a sequence number. The benchmark simulates values written into a number of different ids (the first 8 bytes) and increasing time stamps and sequence numbers. This is a common load pattern for InfluxDB. Single points written to many series or columns at a time. Writes during the test happen in batches of 1,000 key/value pairs. Each key/value pair is a different series column id up to the number of series to write in the test. The value is a serialized protobuf object. Specifically, it’s a FieldValue message. Here are the results of a run on 100 million values spread out over 500k columns (the results table itself was lost in extraction): A few interesting things come out of these results. LevelDB is the winner on disk space utilization, RocksDB is the winner on reads and deletes, and HyperLevelDB is the winner on writes. On smaller runs (30M or less), LMDB came out on top on most of the metrics except for disk size. This is actually what we’d expect for B-trees: they’re faster the fewer keys you have in them. I’ve marked the LMDB compaction time as a loser in red because it’s a no-op and deletes don’t actually reclaim disk space. On a normal database where you’re continually writing data, this is ok because the old pages get used up. However, it means that the DB will ONLY increase in size. For InfluxDB this is a problem because we create a separate database per time range, which we call a shard. This means that after a time range has passed, it probably won’t be getting any more writes. If we do a delete, we need some form of compaction to reclaim the disk space.
On disk space utilization, it’s no surprise that the Level variants came out on top. They compress the data in blocks while LMDB doesn’t use compression. Overall it looks like RocksDB might be the best choice for our use case. However, there are lies, damn lies, and benchmarks. Things can change drastically based on hardware configuration and settings on the storage engines. We tested on SSD because that’s where things are going (if not already there). Rocks won’t perform as well on spinning disks, but it’s not the primary target hardware for us. You could also potentially create a configuration with smaller shards and use LMDB for screaming fast performance. We’re open to updating settings, benchmarks, or adding new storage engines. In the meantime we’ll keep iterating and try to get to the best possible performance for the use case of time series data.
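The 24-byte key layout described above (three big-endian unsigned integers: series/column id, timestamp, sequence number) can be sketched in a few lines of Python. Big-endian packing is what makes a plain bytewise sort of the keys equal to ascending (id, time, seq) order, which is exactly what the range scans in the benchmark rely on. The 8-bytes-per-field width is an assumption consistent with the stated 24-byte total:

```python
import struct

def make_key(column_id: int, timestamp: int, seq: int) -> bytes:
    # ">QQQ" = three big-endian uint64s -> 24 bytes total
    return struct.pack(">QQQ", column_id, timestamp, seq)

k1 = make_key(1, 1000, 0)
k2 = make_key(1, 1001, 0)
k3 = make_key(2, 999, 0)
print(len(k1))        # 24
print(k1 < k2 < k3)   # True: bytewise order matches (id, time, seq) order
```

Note that k3 sorts after k2 even though its timestamp is smaller, because the column id occupies the most significant bytes of the key.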
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510412.43/warc/CC-MAIN-20230928130936-20230928160936-00819.warc.gz
CC-MAIN-2023-40
4,829
26
http://www.linuxquestions.org/questions/linux-newbie-8/how-to-boot-linux-without-starting-services-628282/
code
How to boot linux without starting services? I setup a program to run as a "service" by creating a script to run it and putting the script in the init.d directory to see what it would do (it was a dyndns daemon program to update my ip). So now on the bootup it hangs on "starting 0inadyn". Presumably the program is running and linux is doing everything I told it to do, I just screwed up how to execute the service. Is there a way to boot up without services (like some light weight command line run level?) and remove the script so that I can boot up normally? (Running CentOS 4.5)
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121665.69/warc/CC-MAIN-20170423031201-00148-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
583
2
https://www.singlestore.com/forum/t/singlestore-7-3-released/2617
code
SingleStore is proud to announce its 7.3 release, for SingleStore Managed Service and SingleStore DB. To try it free, visit singlestore.com/free. - the latest Universal Storage features: columnstore as the default, upserts on columnstore tables and support for unique key enforcement for very large INSERT/UPDATE statements (blog, video). - developers, check out - user-defined session variables - DDL forwarding (any aggregator can run DDL) - vastly improved support for optimizing queries with >> 18-way joins - five new ingest features - admins, check out - backup history and progress tracking - ability to check if DR site is caught up with the primary - cross-database rebalancing See the 7.3 Release Notes for further details.
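As a hedged illustration of the upsert-on-columnstore feature (columnstore is now the default table type), something like the following MySQL-compatible upsert becomes possible on a columnstore table with an enforced unique key; the table and columns here are hypothetical, not from the release notes:

```sql
-- Hypothetical schema; columnstore is the default table type in 7.3,
-- and unique keys on columnstore tables can be enforced via hash indexes.
CREATE TABLE events (
    id  BIGINT NOT NULL,
    cnt BIGINT NOT NULL,
    UNIQUE KEY (id) USING HASH
);

-- Upsert: insert a new row, or bump the counter if the unique key exists.
INSERT INTO events (id, cnt) VALUES (42, 1)
ON DUPLICATE KEY UPDATE cnt = cnt + 1;
```

Before 7.3, this pattern was effectively limited to rowstore tables, so moving it to the default columnstore engine is the headline Universal Storage change.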
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103344783.24/warc/CC-MAIN-20220627225823-20220628015823-00613.warc.gz
CC-MAIN-2022-27
733
12
https://gisinfo.net/newspages-news-1899-0
code
Copyright © Panorama Group 1991 - 2021 In KB "Panorama", a parameter has been added to all requests that controls the order of coordinates (latitude/longitude or longitude/latitude). Now any GIS that supports OGC standards can receive data from GIS WebService SE. The service can automatically cluster objects on a map using classifiers: the objects indicated in the classifier are grouped on the map automatically by the standard GetMap and GetTile requests. New requests have been added for saving and deleting documents located in the semantics of map objects. The output of images for requests under the OGC WMS and WMTS standards has been improved. For point (raster) objects, rendering with a transparent background is now supported, and parameters have been added for setting the color of the transparent background and for filtering drawn data by key and semantics. For height-at-a-point requests, a parameter has been added to obtain a geodetic height. The new version also implements the ability to deactivate a license such as a registration key. The request parameters for obtaining information from the 3D layer have been expanded: information can now be output at various levels of representation (near, middle, far), and textures and object descriptions can be requested separately, which speeds up building a model on the client. The structure of the returned parameters has also been expanded, giving more accurate information about the 3D layer. GIS WebService SE supports the relevant international standards (OGC WFS, WFS-T, WMS, WMTS, WCS) for transmitting and displaying spatial data. The program can issue tiles for any user or local coordinate system. The application runs on Windows and Linux and is compatible with the Apache, IIS and nginx web servers.
The new version of the program and documentation are available on the website in the Specialists of KB "Panorama" completed the preparation of educational and presentation materials in English about the GIS "Panorama" - the universal geographic information system having the tools of creating and editing the digital maps and plans of cities, processing of remote sensing data, execution of various measurements and calculations, the overlay operations, building 3D models, processing of the raster data, tools of preparing the graphic documents in a digital and printing form, and also tools for work with databases. You can familiarize yourself with the presentation about the possibilities of the GIS "Panorama" version 13 on the In KB "Panorama" the GIS ToolKit Active version 13 toolkit has been developed for creating geographic information systems. The new version allows you to create 32x and 64-bit applications with a free license in any programming environment that supports ActiveX technology, for example, Visual Studio. GIS ToolKit Active is delivered with source codes of components and the usage examples. The product provides the use of all types of spatial data (bases of geodata), prepared in the GIS ToolKit Active contains a set of the visual and non-visual components allowing to use the spatial and attributive data for display and execution of special calculations. Using these components, geographic information systems of various levels are created (federal, regional, municipal, corporate). The toolkit supports the local, state and international coordinate systems (parameters of more than 4 000 different coordinate systems are included into the delivery complete set). For use of toolkit the extensive Bank of geospatial data is available. 
The base of the geodata can be located both at the workplace (direct access to data) and on a local network or the Internet (access components to the A set of components allows you to create monitoring systems for the moving objects that store the coordinate description in popular databases (Oracle, MS SQL Server, Postgre SQL). Use of special components of navigation and geodata management makes it easy to work with large areas. In the new version, the dialog of information about the map object has been improved. There is added the ability to save object descriptions in file formats: SHP/DBF, OGC GML, GeoJSON. You can save the object through the menu item "Save Object" by right-clicking on the bookmark "Metric". The installation package includes examples for Visual Studio 2015, Visual Studio 2012. The examples demonstrate the possibilities of searching and highlighting map objects, scaling images, navigation, accessing data on GIS Servers using the TMapGISServer component, displaying the movement of an object by given coordinates using the TMapView and TMapWindow components and others. The version of the toolkit in the Free version allows you to create geographic information systems that can be freely distributed without requiring additional licensing of the toolkit supplier or installation of additional modules. The new version of the program is available for download on the The map was led to the modernised classifier of large-scale plans of scale 1: 5 000 (map5000m.rsc). Publishing of updates in bank of spatial data is made by using the program of Free maps on the basis of OpenStreetMap data are available for download on the page "Digital maps and images". In KB "Panorama" the In the "Preparing for printing" section of the list of applications, the "Report Designer" application has been added for quick creation of graphic documents using maps insets and image insets. 
The wizard for creating a new project allows by "one-click" to form a template of graphic document with ready-made layout and footers (headers, corner stamps and other elements). Modes for drawing maps insets and images inserts, editing content, positioning and scaling images inside the insets make it easy to create visual and informative graphic documents. The creation of new and an editing of existing templates is supported for their further replication when creating new graphic documents. The dimensions of the printed fields of created document are automatically transferred to the print dialog, which allows you to quickly print a document without selecting a printable area and setting indents. New types of objects semantics are added: semantics the numerical formula and semantics the symbolical formula. New types complement the programmable semantics the formula, the value of which is calculated in the connected iml-libraries. For new types of computed semantics, mathematical or logical expressions are defined that are executed without programming. For the numerical formula in semantics description the mathematical expression (formula) is indicated that contains numbers, mathematical operations, links to the values of semantics of object, its area, perimetre, coordinates of the first point in meters or degrees, coordinates of the geometrical center of object and other properties. For example, the following operations can be used as mathematical operations: +, -, *, /, max, min, arm, sin, cos, tg, ctg, abs, sqrt2, sqrt3, pow2, pow3. To insert the semantics value into the formula, the symbol # is indicated behind which there is a code of semantics and in brackets a value by default is contained. For example, # 1 (10) - take the value of semantics 1, if the object does not have semantics with code 1, take the value 10. For an approximate estimate of the timber stock in the forest, one can use the expression: S/(#61(1)*#61(1))*PI*#60(0.5)*#60(0.5)/4*#1(4). 
Divide the forest's area S by the area occupied by one tree - semantics 61 (Distance between trees) squared, and multiply by the volume of wood of the tree obtained by the semantics 60 (Thickness) and 1 (Relative height). For a symbolic formula, in the semantics description, a string template is specified containing one or more links to semantics of an object of the form #XX (YY), where XX is the semantics code, YY is the value of the semantic characteristic if it is absent. For example: #101(5)storey building\#3(RESIDENTIAL) - take from the object the value of semantics with code 101, if not, the value 5 will be assigned, add the symbols "storey building" and the following semantics with code 3, the default value is RESIDENTIAL. The symbolical formula supports special characters for code recording of values, the same as for the formation of the title text. For example, #XX.*(Z^P), where #XX is the digital code of semantics (from 1 to 65535), * is an indication of accuracy (from 0 to 9), reduction of the value (s) or line break (w), Z is the default value, P is the string formatting options (^P1^P2^P3...). For example: X=#32205.3 Y=#32206.3 H=#32207.3(0) - form a coordinate line with an accuracy of the 3rd sign from the service semantics; #46!1()(#11!1())#55.s() - form a reference designation for the road, including the width of the coating, the total width, the designation of the coating material. When editing the coordinates or semantics of an object, the values of semantic formulas are automatically recalculated. The choice of type of semantics and entering of formulas is carried out in the task the Classifier Editor. The mode "Recalculation of coordinates in text files" from the Map Computer task is improved. As the initial data the text files with coordinates in arbitrary coordinate system are used. Text files with coordinates can be loaded from XLS tables, databases and other sources. 
The description of a format of the used text file is presented in the electronic help for the mode. The choice of parameters of the input and output coordinate systems of a list of points is carried out in the following ways: by entering numerical values in the dialog; by using the EPSG code; from the list of coordinate systems of the XML file (for example, "LCS of the Subjects of the Russian Federation.xml"). All types of maps, projections and ellipsoids supported in the GIS Panorama system are available for recalculation of coordinates. The transformation of coordinates from the input user projection to the output projection is performed through geodetic coordinates on the common earth ellipsoid WGS84. The dialog "Select Object" has been improved. On the bookmark "Metrics" the possibility of updating of object from files of formats: SHP/DBF, OGC GML, GeoJSON is added. You can update an object or only its metric through the items in the context menu "Update object metric" or "Update object". In the Map Editor, a mode for copying metric data of one map object into another object was added. The task is intended to perform actions on copying the coordinates from a specified object or group of selected objects. A new way of a choice of square objects on a map - only on the object contour is added. The main mode of the object choice is the mouse-click inside the object or along its contour. This mode complicates the enumeration of objects if in one point there is a large number of areal objects. For example, areal objects of territories are displayed in the following order: City - District - Region. The choice of objects is performed in the reverse order: Region - District - City. With the main mode of a choice the city territory will be chosen only from the third time. At a choice of the areal object on the contour the city territory can be chosen from the first time. 
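The timber-stock expression above is ordinary arithmetic once the semantic codes are substituted. A small Python sketch of the same computation, using the fallback values given in the text for #61 (distance between trees), #60 (thickness) and #1 (relative height); the sample forest area is made up for the example:

```python
import math

def timber_stock(area, spacing=1.0, thickness=0.5, rel_height=4.0):
    """S/(#61*#61) * PI * #60*#60/4 * #1:
    number of trees in the area times a per-tree trunk volume estimate."""
    trees = area / (spacing * spacing)                    # S / (#61 * #61)
    per_tree = math.pi * thickness ** 2 / 4 * rel_height  # PI * #60^2/4 * #1
    return trees * per_tree

# 100 m^2 of forest with the default semantics values:
print(round(timber_stock(100.0), 2))  # 78.54
```

The defaults mirror the formula-language convention #XX(YY), where YY is used when the object lacks semantics with code XX.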
Customizing the mode of a choosing the areal objects is carried out through the main menu of the program:Options\Square objects choice on the contour. The task "Search by name" has been improved. Possibility of line-by-line search or text marking by full (partial) coincidence is added. Multiline text can be loaded from a text file, pasted from a buffer, or entered in the appropriate field. For example, the mode allows you to select on the map all objects containing in the semantics of "Name" one of the lines of a multi-line list. The restriction on the size of a multi-line title on the map is excluded. When a line longer than 126 characters is entered in the "Label editing" dialog, an service record with code 32860 with the entered text is created in the semantics of the object, and the link "# 32860" is saved in the label text. When editing text, an automatic updating the text of label and semantics is carried out. If the text becomes shorter than 126 characters, the service semantics will be deleted. The task "Calculation of semantics by entering objects" is improved. The process of transferring the established semantic characteristics from entering objects into selected polygons is accelerated. The acceleration was achieved by improving the search algorithm for objects included in the polygon by determining their belonging by the calculated center of coordinates. When exporting a vector map to DXF and MIF\MID formats, the title text of the form #XXX is replaced with the value of the corresponding semantics. The new version of the program is available for download in the
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991252.15/warc/CC-MAIN-20210512035557-20210512065557-00439.warc.gz
CC-MAIN-2021-21
13,036
40
https://mail.python.org/pipermail/python-list/2003-February/228814.html
code
float / double support in Python? aahz at pythoncraft.com Fri Feb 14 01:22:10 CET 2003 In article <3E4C3006.DA9C0F18 at alcyone.com>, Erik Max Francis <max at alcyone.com> wrote: >This entire subthread has been a complete red herring, with you playing >the part of the herring. Smells fishy to me, too. (At last. Something Erik and I agree on. ;-) Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Register for PyCon now! http://www.python.org/pycon/reg.html More information about the Python-list
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886860.29/warc/CC-MAIN-20180117082758-20180117102758-00363.warc.gz
CC-MAIN-2018-05
510
11
http://www.softcity.com/question/os-utilities/fix-it/fix-it-utilities-11-pro-installation/2kjM5MzN
code
I am using Windows XP SP3. After uninstalling my McAfee Security Suite and Fix-It 10 as instructed, I attempted to install Fix-It 11. I received the following message: Threat Definition Error Error(s) occurred during Threat Definition file copy. Please run update check after finishing installation. OK After pressing OK I receive the following: Installation ended prematurely because of an error OK After pressing OK the install aborts. On the next attempt, after the Wise Installation Wizard runs, I get the following message: Corrupt installation detected. Check source media or re-download. How can I complete my installation of Fix-It Utilities 11 Pro?
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507443869.1/warc/CC-MAIN-20141017005723-00038-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
657
9
https://stor-tec.com/2017/08/23/redesign-your-warehouse-layou/
code
3 Reasons To Redesign Your Warehouse Layout! After remodeling, moving, or building your warehouse and getting settled in, there are usually a few revelations about places where the old design was hurting the ability to work efficiently and cost effectively. If they had been discovered in advance, it would have been a priority to address them. That’s where we come in! Typically people come to Stor-Tec USA with warehouse layout design questions because they are building, expanding or moving! Here are some things to keep an eye out for! - Misappropriated Space When looking at current warehouse utilization, it’s common to find slack and over-utilized spaces, and usually both in the same facility. While the slack space is simply wasteful, over-utilized space often leads to decreased productivity and higher risk of product damage and employee injury. Having to dig out product and not being able to access heavy items with a lift are symptoms of over-utilized space. Addressing it early instead of waiting for it to become an issue is the least expensive path. - Constant Cost Increases It wouldn’t be unusual to trace cost increases to warehouse design problems. As more material is handled and more employees are brought in, there should not be any increase in warehouse costs beyond anticipated maintenance. But a poorly designed warehouse requires constant attention and adjustments as the workload increases. At some point, someone needs to say, “Enough!” - Workflow and Process Collisions It isn’t unusual to discover workers at odds at work, but your warehouse layout should not be a contributing reason for it. A poorly designed facility will create collision points that impede natural workflows and processes. This not only decreases productivity, but also morale.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100686.78/warc/CC-MAIN-20231207185656-20231207215656-00712.warc.gz
CC-MAIN-2023-50
1,807
10
https://supportforums.cisco.com/discussion/9975791/ssl-vpn-client-error
code
I set up a Cisco ASA 5510 SSL VPN with the following:
SSL VPN Client sslclient-win-184.108.40.206.pkg
Out of 400 users, there is one user having problems installing the SSL Client on his laptop. The user laptop information is:
IBM Thinkpad T40
Windows XP SP 2
Internet Explorer 7
All patches up-to-date
All drivers up-to-date
SSL VPN Client connection process:
- User logs in with a valid account and password
- The SSL VPN Client package is automatically downloaded and installed
- User is then connected to the SSL VPN
1. GUI (Cisco SSL VPN Client installation process)
"The SSL VPN Client driver has Encountered an Error"
2. Event Viewer
The only errors in this user's Event Viewer that differ from those of users who successfully connected are:
Return code: 0
Return code: 0xFE080007
Anyone know what this error means?
BTW, anyone know the link to an SSL VPN knowledge base, i.e. errors, root causes, solutions?
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00197.warc.gz
CC-MAIN-2017-34
931
21
https://www.my.freelancer.com/projects/Excel/Excel-VBA-TIME-SERIES-DECOMPOSITION/
code
I have the history for some stocks and shares. The history is in a worksheet and there will be a column of dates in column A and columns of prices (starting at row 8). I want this project to let me select a price column and create the study below in a new worksheet: [url removed, login to view] You can store the source sheet & column in cells in the new sheet. We need the data & chart in a new worksheet with a refresh button to recalculate the data & forecast. The forecast data must be present in the sheet under the chart. Clicking on the chart plot area should calculate a forward curve on the date along the X axis so we can see what the forecast was on that date and compare it with the actual data. We need to be able to draw the 3 forecasts on the clicked date if they are displayed in the forecast. I'd like to add some analysis to the chart like Confidence interval, [url removed, login to view] Regression : [url removed, login to view] Excel FORECAST : [url removed, login to view] and STD Deviation plus 2 user-defined moving averages. These can be toggled & adjusted by checkboxes on a user form where necessary. Add 2 buttons on the sheet to let us select a different source series... VBA only - if formulas are used they must be added by VBA. Prices can have #N/A values so they must be allowed for in the calculations. Obviously changing the data source (column) can change the range of data used. It has to look smart and be designed to add more analytics or perhaps plot a second column in the chart. Knowledge of the subject is important... Impress me :) Let me know what you can do to improve the concept... More work available to the successful bidder if all goes well. I'm looking for an Excel VBA expert with statistical/stock market experience. Sorry, but if your resume doesn't show this I'm afraid I can't hire you. Please don't bid if you do not have in-depth experience of VBA and time series analysis. Update to Clarify.
I have been asked to expand on what I need/expect. I take end-of-day prices (via JSON) from our web database and display them in Excel. It is mostly commodity & currency prices. I want to add some 'forecasting' or prediction analysis. I realize it is not good to predict based on one time series; at best we are only showing simple probability on patterns. This stage is to set the basis for expanding this project to stages 2 & 3 below. Stage 1 here is easy enough, but charts/reports must be dynamic and let the user select which studies to display (Confidence interval, Excel FORECAST, STD Deviation, Moving Averages etc). If moving average periods change they must update in real time. We must allow the user to refresh and change the data source series, and run the prediction studies on any date in the chart so we can see the accuracy of the prediction (this can be a second chart if needed). We will want to see a table/chart plot of prediction % accuracy over time. I'd like you to tell me what you can offer to improve the project... more analysis techniques, charts. The VB is simple enough. It is the analysis I am interested in. Where it could get interesting is if we let the user use a sheet containing multiple time series (we have a standard format/layout) and select one primary series, then check the other series in the sheet to see if, individually or collectively, they act as signals/influences for the primary series. It would obviously depend on the series the user selects, but perhaps change in the prime series could be more accurately predicted by change in the other series - or not. A report/chart showing the influence of each series would be good. It would be slow, but perhaps it could be expanded to database searches for series that appear to influence the prime series. This would involve searching through 10000's of database series, probably using Python. This project is for stage 1 only. I need the developer to be able and willing to do stage 2.
I have updated the project description to add what we discussed and explain where I want to go with it. You can of course withdraw/change your bid if you feel it changes the original project's scope.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867644.88/warc/CC-MAIN-20180625092128-20180625112128-00121.warc.gz
CC-MAIN-2018-26
4,103
40
http://www.tannerhelland.com/4616/photodemon-5-2-update/
code
PhotoDemon v5.2 is now available. New features include selection tools, arbitrary rotation, HSL adjustments, CMYK support, new user preferences, multiple monitor support, and more. Download the update here.
New Feature: Selection Tool
Selections have been one of the top-requested PhotoDemon features since it first released, so I'm glad to finally be able to offer them. A lot of work went into making selections as user-friendly and powerful as possible. Three render modes are provided. On-canvas resizing and moving are fully supported, as are adjustments by textbox (see screenshot above). Everything in the Color and Filter menus will operate on a selection if available, as well as the Edit -> Copy command. (Note: as of v5.2, selections are not yet tied into Undo/Redo, and selections will not be recorded as part of a Macro. These features will be added in the next release.)
New Feature: Crop to Selection
New Feature: HSL Adjustments
New Feature: Arbitrary (Free) Rotation
New Feature: CMY/K Rechanneling
New Feature: Sepia (W3C formula)
New Feature: Preferences Dialog (rewritten from scratch)
New preferences include:
- Render drop shadows between images and canvas (similar to Paint.NET)
- Full or compact file paths for image windows and Recent File shortcuts
- Improved font rendering on Vista, Windows 7, and Windows 8 (via Segoe UI)
- Remember the main window's location between sessions
Loading and Saving:
- Tone map imported HDR and RAW images
- Options for importing all frames or pages of multi-image files (animated GIFs, multipage TIFFs)
- Automatically clear selections after "Crop to Selection" is used
- Pick your own transparency checkerboard colors
- Pick from three transparency checkerboard sizes (4×4, 8×8, 16×16)
- Allow PhotoDemon to automatically remove empty alpha channels from imported images
All preferences from v5.0 remain present, and there is now an option to reset all preferences to their default state - so experiment away!
New Feature: Recent File Previews (Vista, Windows 7, Windows 8 only)
New Feature: Multi-Image File Support (animated GIFs, multipage TIFFs)
New Feature: Waaaay better transparency handling, including adding/removing alpha channels
It's hard to overstate how much better transparency support is in v5.2 compared to v5.0. Images with alpha channels are now rendered as alpha in all viewport, filter, and tool screens. When printing, saving as 24bpp, or copying to the clipboard, transparent images are automatically composited against a white background. As mentioned previously, user preferences have been added for transparency checkerboard colors and sizes. PhotoDemon also allows you to add or remove alpha channels entirely. Here's an example of an image with an alpha channel, and the associated "Image Mode" setting. And here it is again, after clicking the "Mode -> Photo (RGB | 24bpp | no transparency)" option.
Finally, PhotoDemon now validates all incoming alpha channels. If an image has a blank or irrelevant alpha channel, PhotoDemon will automatically remove it for you. This frees up RAM, improves performance, and leads to a much smaller file size upon saving. (Note: this feature can be disabled from the Edit -> Preferences menu if you want to maintain blank alpha channels for some reason.)
New Feature: Custom "Confirm Unsaved Image(s)" Prompt
Improved Feature: Edge Detection
New Feature: Thermograph Filter
This Wikipedia article describes thermography in great detail. PhotoDemon's thermography filter works by correlating luminance with heat, and analyzing the image accordingly. Here's a sample, using a picture of the lovely Alison Brie, of Mad Men and Community fame:
New Feature: JPEG 2000 (JP2/J2K), Industrial Light and Magic (EXR), High-Dynamic Range (HDR) and Digital Fax (G3) image support
PhotoDemon now supports importing the four image types mentioned above, and it also supports JPEG 2000 exporting.
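As an aside on how a luminance-based thermograph can work: map each pixel's luma onto a cold-to-hot palette. This is only a toy sketch with a two-stop blue-to-red ramp — PhotoDemon's actual palette and implementation are not shown in this post:

```python
def thermograph(pixel):
    """Toy thermography mapping: treat Rec. 601 luminance as 'heat'
    and ramp from pure blue (cold) to pure red (hot)."""
    r, g, b = pixel
    luma = 0.299 * r + 0.587 * g + 0.114 * b  # perceived brightness
    t = luma / 255.0                          # normalize to 0..1
    return (round(255 * t), 0, round(255 * (1 - t)))
```

Black maps to pure blue and white to pure red; a production filter would use a multi-stop palette (black-blue-purple-red-yellow-white) for the familiar thermal-camera look.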
Other New and Improved Features:
- Much faster resize operations, thanks to an updated FreeImage library (v3.15.4)
- Multiple monitor support during screen captures (File -> Import -> Screen Capture)
- Many miscellaneous interface improvements, including generally larger command buttons, text boxes, labels, and more uniform form layouts.
- Many new and improved menu icons.
- Heavily optimized viewport rendering. PhotoDemon now uses a triple-buffer rendering pipeline to speed up actions like zooming, scrolling, and using on-canvas tools like the new Selection Tool. Even when working with 32bpp images, all actions render in real-time.
- Bilinear interpolation is now used during Isometric Conversion. This results in a much higher-quality transform. Hard edges are still left along the image border to make mask generation easy for game designers.
- Vastly improved image previewing when importing from VB binary files.
- Better text validation throughout the software. Invalid values are now handled much more elegantly.
- More accelerator hotkey support, including changes to match Windows standards (such as Ctrl+Y for Redo, instead of the previous Ctrl+Alt+Z).
- Update checks are now performed every ten days (instead of every time the program is run).
- All extra program data - including plugins, preferences, saved filters and macros - has been moved to a single /Data subfolder. If you run PhotoDemon on your desktop, this should make things much cleaner for you.
- PhotoDemon's current and max memory usage is now displayed in the Preferences -> Advanced panel.
- Tons of miscellaneous bug fixes, tweaks, and optimizations.
For a full list of changes, visit https://github.com/tannerhelland/PhotoDemon/commits/master
Not bad for two months' work, eh? I hope you enjoy all the new features in v5.2, and please remember to donate if you find the software useful!
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587577.92/warc/CC-MAIN-20171216104016-20171216130016-00707.warc.gz
CC-MAIN-2017-51
5,826
52
https://findanexpert.unimelb.edu.au/scholarlywork/329965-the-limiting-kac-random-polynomial-and-truncated-random-orthogonal-matrices
code
The limiting Kac random polynomial and truncated random orthogonal matrices Peter J Forrester Journal of Statistical Mechanics: Theory and Experiment | IOP PUBLISHING LTD | Published : 2010 This work was done while the author was a member of the MSRI, participating in the Fall 2010 program 'Random matrices, interacting particle systems and integrable systems'. Partial support for this research was also provided by the Australian Research Council. Comments on the original draft of this paper by A Mays and the referee are acknowledged.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00491.warc.gz
CC-MAIN-2020-34
539
4
http://thetubeguru.com/video/input-reasoning-ibps-sbi-bank-po-clerk-exam-preparation-material/
code
Friends, in today's video I will tell you about input & output questions and how to solve them as quickly as possible. Specifically, I will show you all how to solve the 5 questions from this segment in 2 minutes. In all the bank-related exams there are around 5 questions asked from this topic, which can be solved very easily through the technique I am going to show you in this video.
PART 1 : Simple rearrangement
In this section we are going to discuss all the patterns from this topic.
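For what it's worth, the "simple rearrangement" pattern these questions usually follow — one word moved into its alphabetical position per step — can be simulated. This sketch assumes that particular machine, since the text above does not pin down the exact rule:

```python
def rearrangement_steps(sentence):
    """Simulate a classic input-output machine: each step moves the
    next word (in alphabetical order) to the front of the unsorted
    part, and we record the line produced by each step."""
    words = sentence.split()
    steps = []
    for i in range(len(words)):
        # index of the alphabetically smallest remaining word
        j = min(range(i, len(words)), key=lambda k: words[k])
        if j != i:  # a step is only counted when something moves
            words.insert(i, words.pop(j))
            steps.append(" ".join(words))
    return steps
```

Input "sun cat apple" gives Step I "apple sun cat" and Step II "apple cat sun" — two steps, which is how "how many steps to rearrange?" questions are read off.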
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948537139.36/warc/CC-MAIN-20171214020144-20171214040144-00528.warc.gz
CC-MAIN-2017-51
512
4
https://learn.microsoft.com/en-us/azure/devops/artifacts/maven/install?view=azure-devops&viewFallbackFrom=vsts
code
Install Maven Artifacts
Azure DevOps Services | Azure DevOps Server 2022 - Azure DevOps Server 2019 | TFS 2018
With Azure Artifacts, you can publish and restore Maven packages from Azure Artifacts feeds and public registries. In this article, you will learn how to connect to Azure Artifacts feeds and restore your Maven packages.
An Azure DevOps organization. Create an organization, if you don't have one already.
An Azure Artifacts feed. Create a new feed if you don't have one already.
Connect to feed
From your Azure DevOps project, select Artifacts, and then select your feed from the dropdown menu. Select Connect to feed. Select Maven from the left navigation panel. Follow the instructions in the Project setup section to set up your config files and generate a new personal access token. If your settings.xml file is shared within your team, you can use Maven to encrypt your passwords.
Restore Maven packages
Run the build command in an elevated command prompt to download your Maven packages. Maven automatically downloads all your dependencies to your local repository when the build command is executed. The <id> tags in your settings.xml and pom.xml files must be the same.
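To make that last point concrete, here is an illustrative pair of config fragments showing the matching <id> tags. The organization, project, and feed names are placeholders I made up, not values from this article; the feed URL follows the usual pkgs.dev.azure.com pattern:

```xml
<!-- pom.xml: tell Maven where the Azure Artifacts feed lives -->
<repository>
  <id>my-feed</id>
  <url>https://pkgs.dev.azure.com/MyOrg/MyProject/_packaging/my-feed/maven/v1</url>
  <releases><enabled>true</enabled></releases>
  <snapshots><enabled>true</enabled></snapshots>
</repository>

<!-- settings.xml: credentials for the server with the same <id> -->
<server>
  <id>my-feed</id>
  <username>MyOrg</username>
  <password>[PERSONAL_ACCESS_TOKEN]</password>
</server>
```

If the two <id> values differ, Maven cannot associate the credentials with the repository, and the restore typically fails with a 401.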
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475422.71/warc/CC-MAIN-20240301161412-20240301191412-00726.warc.gz
CC-MAIN-2024-10
1,187
15
http://www.sevenforums.com/performance-maintenance/204419-keyboard-filtering.html
code
I have a computer installed with Windows 7 64-bit Home edition and I have a strange problem. This happens in every program, be it web browser, Notepad etc. The problem is that when I type the " or ' or ` keys these do not print immediately when I press them; I have to press another key for them to be written. If, say, I press " and then any other key, both get printed in, say, Notepad (but only after pressing the second key), but if I press " and the space bar only the " gets printed. The problem is not from the keyboard, as I have used this keyboard with another computer with Windows 7 32-bit on it and it works normally. Also the problem is not from the computer hardware either, because I have installed Ubuntu Linux on another hard disk partition of this computer and there the problem does not occur. It occurs only when running the Windows 7 partition. Is there a filtering thing going on? If so, how can I remove it?
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898751.26/warc/CC-MAIN-20141030025818-00002-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
916
1
https://hykdq.real-team.eu/how-to-access-mongodb-from-android-studio.html
code
MongoDB is one of those databases used for storing data. It is an open-source, document-oriented database. If you need a step-by-step explanation of the topic, you can check our article here: MongoDB For Storing and Retrieving Data For Android. Here we are using MongoLab to access MongoDB. Follow the steps below to complete the task.
Node.js and MongoDB API with Android - Part I, February 25, 2016. We are starting a multi-part tutorial series consisting of the creation of a simple Node.js API with MongoDB integration; later we will connect the API to an Android app.
How to Use MongoDB Stitch in Android Apps:
1. Create a MongoDB Atlas Cluster. MongoDB Stitch is meant to be used with a MongoDB Atlas cluster. You're free to use a cluster you already have ...
2. Create a MongoDB Stitch Application.
3. Configure Users and Rules.
4. Prepare Android Project.
5. ...
Mar 18, 2019 - The tutorial on how to see the data stored in SQLite in Android Studio is a simple step-by-step way to access your data in an SQLite database. An easy way to access an SQLite database in Firefox is by using the SQLite Manager add-on.
Create a Web API and consume it through an Android app: a custom API is the way most apps solve this kind of situation. MongoDB's documentation has several references to existing frameworks for interacting with a MongoDB database through HTTP. It is recommended to use such a framework for robustness, security, and community support.
Yes, you can connect your Android app to a MongoDB database: it is possible to connect MongoDB with Java. You just have to set up the MongoDB database, create RESTful APIs in Java and deploy them. Then you can consume those APIs in your Android app with Retrofit or Volley or whatever is feasible for you.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370492125.18/warc/CC-MAIN-20200328164156-20200328194156-00295.warc.gz
CC-MAIN-2020-16
1,973
6
http://www.vogons.org/viewtopic.php?f=46&t=93536&p=1153615
code
Look for local ads. Garage sales. Some day one will be available for a decent price. I started my collection with 2001-2005 hardware, so 3Dfx was not in my scope. When I expanded my interests to earlier things, I bought cheap parts first and improved my parts step by step. In 2020 or 2021 I found a Voodoo Banshee, mis-labeled as an SiS 315, for 20€ on a local second-hand website. In 2022 I bought a lot of two Voodoo 1 cards for 40€. A few months later I was given a Voodoo2 12Mb for free. Only after that did I pay good money to get a second matching Voodoo2. This is the only one I paid 100€+ for, and only because I wanted a matched pair (Orchid Righteous 3D II 12Mb). So, as advice: just wait and look for the deals. In the meantime, there is a lot of fun to be had with cheap parts you probably already have.
«Story in a game is like a story in a porn movie. It's expected to be there, but it's not that important.» - John Carmack
My collection (not up to date)
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506045.12/warc/CC-MAIN-20230921210007-20230922000007-00119.warc.gz
CC-MAIN-2023-40
962
10
https://mail.python.org/archives/list/buildbot-status@python.org/2018/10/
code
My x86 Gentoo buildbot worker is down until further notice due to a dead power supply fan. Given the age of this machine, I'm not sure that it's even worthwhile to replace the power supply, though I do plan to bring the worker back in some form as resources allow. Unfortunately, this worker provided three unique functions to the buildbot fleet, namely 32-bit (x86) builds, installation tests, and refleak checks. Does anyone else have the capacity to pick up some of the slack until this worker is back, on either new or existing
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711712.26/warc/CC-MAIN-20221210042021-20221210072021-00272.warc.gz
CC-MAIN-2022-49
531
8
http://cc.oulu.fi/~jarioksa/pages/dosdev.html
code
My primary environment is Linux. However, I switched from MS-DOS/Windows in November 1998, and so a large part of my programs were first developed for that platform. This page gives information about the MS-DOS tools I used. Most of these tools were actually ported to MS-DOS from GNU, and so they are more at home in Linux. This means that I may produce DOS ports of my future Linux programs as well.

| Purpose | Tool | Notes |
| --- | --- | --- |
| Operating system | MS-DOS / MS-Windows | All programs are compiled to run in the MS-DOS box of Microsoft Windows. |
| Programming environment | djgpp | djgpp is an MS-DOS port of the GNU C compiler. |
| Linking Fortran subroutines to C | f2c | Fortran is used in reading in Fortran-formatted data files and in some mathematical subroutines. f2c allows mixed development with C. |
| Numerical routines library | Numerical Recipes | So far the only C library for numerical analysis I have found, but I'm working to replace this with some alternatives. With updates it may be fairly good (I'm running v2.08 now; I had to patch to this version with my Unix utilities, since automatic patching did not work). |
| Random number generation | ranlib.c | Again from netlib. The basic generator in gcc is OK, but with this it is possible to generate random numbers from several probability (density) functions. |
| Matrix algebra | Meschach | More reliable in matrix algebra than Numerical Recipes. For instance, SVD does not work correctly in NR, and they seem to have no intention to correct or even confess their bugs. |
| Presentation graphics | DISLIN | Basic graphics routines are provided by DISLIN. |
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256546.11/warc/CC-MAIN-20190521182616-20190521204616-00430.warc.gz
CC-MAIN-2019-22
1,554
8
https://devrant.com/rants/1617863/shit-recruiters-say-we-need-solution-experts-not-language-experts-because-a-lang
code
"THE SOLUTION IS ALSO JUST A TOOL YOU FUCKING DIPSHIT"
And that's why I keep getting job posts for PHP positions after I worked almost entirely on Android.
Yep. There is a point when the solution becomes the problem.
I have yet to meet a recruiter who isn't a self-serving, lying cunt with next to no knowledge about development anyway. Even if that logic was applicable for every developer or programmer, you'd still have to learn the language(s), and it's almost never lucrative for employers to teach you and pay you for learning first, and have you actually be productive for them second. Also, as long as you're just learning you're even more easily replaceable, which is a disadvantage, and it's not as if programming languages are learned in a few days until you're worth your salt using them. Recruiters don't care, of course, as long as they get their bonuses.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213737.64/warc/CC-MAIN-20180818193409-20180818213409-00210.warc.gz
CC-MAIN-2018-34
1,524
14
http://blog.practicingitpm.com/2019/04/21/new-pm-articles-for-the-week-of-april-15-21-2/
code
New project management articles published on the web during the week of April 15 - 21. And this week's video: Mike Clayton explains the PMI, APM, and Prince2 definitions of an issue, and what project managers should do to manage them. 5 minutes, safe for work.
Business Acumen and Strategy
- Susan Lund and Jacques Bughin describe the changing nature of globalization, driven by flows of information and data. 8 minutes to read.
- Stephen Bungay debunks five popular myths about strategy. 6 minutes to read.
- Brad Plizga argues that human rights must always come before business. It's time for Big Tech to say no to oppressive governments.
- Valaiporn Niramai does a deep dive on what it takes to organize and manage a transformation project. 7 minutes to read.
- Sarah Hoban explains the fundamentals of project branding. 3 minutes to read, or listen to her podcast: 6 minutes, safe for work.
- Nenad Trajkovski suggests that we consider what type of task to use, based on constraints and drivers, before we start up MS Project. 2 minutes to read.
- Elise Stevens interviews Laura Dallas Burford on how to become a project management consultant. Podcast, 33 minutes, safe for work.
- The Nice Folks at Clarizen explain one of my favorite methodologies: Gap Analysis. Loved by business analysts and implementation project managers everywhere. 3 minutes to read.
- Tapera Mangezi tells how to maintain positive stakeholder engagement during business analysis processes. 3 minutes to read.
Managing Software Development
- Stefan Wolpers curates his weekly list of Agile content, from the merits of less communication to the demerits of A/B testing to multiple team Scrum. 7 outbound links, 3 minutes to read.
- Tamás Török gives us an executive summary of Coding Sans Software Development Trends 2019 annual report. Full report and data available for download. 7 minutes to read.
- Doug Bradbury suggests a less risky alternative to a major re-write of your current software product in order to exploit a new market. 3 minutes to read.
- Barry Weston observes some of the challenges in testing AI solutions. 4 minutes to read.
- Brendan Wovchko coaches us on the choice between using Scrum and using Kanban. 4 minutes to read.
- Kristin Jackovny, professional tester and former professional organizer, tells us how to organize for testing success. 5 minutes to read.
- Michael Lopp considers the leadership responsibilities of meetings. 4 minutes to read.
- Melody Wilding coaches us on managing the complainers who come to our meetings. You can give people a voice without losing control. 4 minutes to read.
- Pawel Brodzinski reflects on the co-dependent nature of autonomy and transparency: you can't have one without the other. 7 minutes to read.
- Leigh Espy shares a few simple things you can do to endear yourself to your project team. It's easier to lead people who like you. 4 minutes to read.
Research and Insights
- Greg Satell gives us an executive summary of quantum computing. My take: they aren't faster general-purpose computers. 5 minutes to read.
- MIT Technology Review recaps what we've learned in the 20 years since the first distributed denial of service (DDoS) attack. 6 minutes to read.
- Raconteur shares an infographic that illustrates how much new data is created in a single day. I wouldn't call the 4,000 terabytes generated on Facebook useful, but it's data. A minute or two to read.
Working and the Workplace
- Jenny Foss suggests we send a letter of interest to that company we'd really like to work for. Even if they don't have a position open. 5 minutes to read.
- Mayo Oshin looks at the science of how music affects your productivity. Sad news: although music reduces anxiety, lyrics reduce mental performance. 5 minutes to read the rest.
- Lauren Adley on motivation: “It’s easier to get things done when we’re driven, but it’s not a necessary precondition in order to do so.” 4 minutes to read.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545548.56/warc/CC-MAIN-20220522125835-20220522155835-00224.warc.gz
CC-MAIN-2022-21
3,965
30
http://math.stackexchange.com/questions/83322/how-to-find-mode-from-a-sample-of-continuous-distribution
code
First, my background is not math. My objective is to find the value that occurs most frequently in sample data, OR the value that the data is most likely to take. Let's say my sample data is [1,5,6,6,7,10]. Finding the mode for this sample is simple (the mode is 6). But if, let's say, I change the sample data to [1,5,6,7,10], I don't know how to find the mode. The result I expect is 6, since 6 is the most probable value that I will get. Problem is, I don't even know what to google (tried for hours), and even when I find something that MAY BE the answer (kernel density estimation, continuous probability distribution), I don't understand what they are talking about. The actual situation consists of hundreds of data points (floats) that I save in Excel. I would appreciate it if someone could demo it in Excel.
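What the question is circling around is exactly kernel density estimation: smear each sample point into a smooth bump, add the bumps up, and take the x where the total density is highest. A minimal pure-Python sketch (the function name, grid size, and Silverman bandwidth default are my choices, not from the question):

```python
import math
import statistics as st

def kde_mode(data, bandwidth=None, grid_steps=1000):
    """Estimate the mode of a continuous sample with a Gaussian
    kernel density estimate: evaluate the summed kernels on a grid
    between min and max, and return the grid point of max density."""
    if bandwidth is None:
        # Silverman's rule of thumb as a default bandwidth
        bandwidth = 1.06 * st.stdev(data) * len(data) ** -0.2

    def density(x):
        return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                   for xi in data)

    lo, hi = min(data), max(data)
    step = (hi - lo) / grid_steps
    grid = [lo + i * step for i in range(grid_steps + 1)]
    return max(grid, key=density)
```

For [1, 5, 6, 7, 10] the estimate lands a little above 6, matching the intuition in the question that 6 is the most likely value. The same idea can be rebuilt in Excel with a column of grid values, a SUMPRODUCT of EXP terms for the density, and INDEX/MATCH on the maximum.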
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00138-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
786
6
https://learn.adafruit.com/adafruit-metro-esp32-s2/circuitpython-pin-names
code
CircuitPython for the Metro ESP32-S2 uses different pin names than you may be used to. Many CircuitPython boards use the D prefix for digital pin names, such as D1 or D12. The pin names for the Metro ESP32-S2 use the IO prefix, such as IO1 or IO12. The pin numbers on the Metro ESP32-S2 match the ESP32-S2 'low level chip pin numbers' that ESP32 users are most familiar with. The pins are not numbered like other typical Metro-shaped boards, so where you may expect pin 0 to be, it's actually IO5. We're not yet using D-prefix names, to avoid the confusion of having D-prefix names not match the IO pins. The following pins have both the standard CircuitPython pin name and the IOx pin name available: - Analog pins A0-A5 - Default I2C port SCL & SDA - Default SPI port SCK, MISO, MOSI - Default hardware Serial port RX, TX - LED (red LED) - NEOPIXEL (built in RGB LED) The following diagram shows the standard CircuitPython pin names, the IO pin names, the singleton names and the debug/DFU pins.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00031.warc.gz
CC-MAIN-2023-40
995
11
https://www.sqlservergeeks.com/sql-server-transparent-data-encryption/
code
Transparent Data Encryption (TDE) was introduced in SQL Server 2008. This feature provides more security for data stored at the database level: it secures data by exploiting multiple CPUs, with little increase in the disk space used to store encrypted data.
Why Transparent Data Encryption?
- Companies which maintain sensitive data must comply with stringent security standards.
- Data encryption should be transparent to the applications using the database.
- Encryption should be extended to the data at rest.
- TDE provides real-time I/O encryption of data and log files.
How Transparent Data Encryption Works
TDE is enabled by using the ALTER DATABASE command. After the ALTER runs, SQL Server performs basic checks such as the edition, read-only filegroups, the presence of a DEK, etc. It returns immediately with a success message, but the database is not yet completely encrypted. A background process with a shared lock runs to encrypt the database. Encryption is done in the I/O path - the RE-ENCRYPTION SCAN / ENCRYPTION SCAN. All the data is read into memory and written back after encrypting. SELECT * FROM sys.dm_exec_requests returns command type "ALTER DATABASE E" and status "background process". The encryption status is stored at regular intervals so it can recover in case of a server restart. TDE-related DDLs do not work when any of the filegroups is in read-only mode. Data files are encrypted at page level: 32 pages are encrypted in a single go, and a checkpoint is issued for every 1024 pages. The process sleeps for 250ms after every 32 pages. TDE works on the concept of a second checksum. The page header is also encrypted, and the checksum is for the encrypted data in the page; it is calculated and saved in the header before encryption. The second checksum is used to check the decrypted data. TDE does not encrypt already-written log records in log files. The granularity of encryption in log files is at the VLF level.
This means that once TDE is enabled, the next active VLF will be encrypted. An overhead of one log record for every 4 extents (32 pages) encrypted is added to the log file. TEMPDB is encrypted as soon as any user database is encrypted, and TEMPDB encryption always uses the AES_256 algorithm. Once encrypted, TEMPDB stays encrypted even after encryption is disabled on all user databases. TEMPDB is the only system database that can be encrypted. The encryption scan cannot be rolled back, but the background encryption process can be paused using trace flag 5004 and slowed down using trace flag 5005.

Implementation of TDE

Below is a demo that implements TDE.

1. Create the database master key:

```sql
USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Myp@ssword123';
GO
```

2. Create a certificate:

```sql
USE master;
GO
CREATE CERTIFICATE MyCertDEK WITH SUBJECT = 'My DEK Certificate';
GO
```

3. Back up the certificate:

```sql
BACKUP CERTIFICATE MyCertDEK TO FILE = 'path_and_file_name.cer'
WITH PRIVATE KEY (
    FILE = 'C:\MyCertDEK_Bak.pvk',
    ENCRYPTION BY PASSWORD = 'Myp@ssword456'
);
GO
```

4. Create a database on which to implement TDE:

```sql
CREATE DATABASE MyDBForTDE;
GO
```

5. Create the database encryption key in the user database:

```sql
USE MyDBForTDE;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE MyCertDEK;
GO
```

6. Enable encryption on the database:

```sql
ALTER DATABASE MyDBForTDE SET ENCRYPTION ON;
GO
```

Impact of TDE

Note these important points when you are going to implement TDE.
- Backup/Restore and Attach/Detach
  - The certificate must be backed up along with the database backup.
  - The target server must have a database master key and the certificate used for encrypting the DEK on the source.
  - Backup compression has little or no effect.
  - Third-party backup tools that use log scan will break.
  - Database compression remains effective, as it reduces I/O.
- Key Management
  - Changing the certificate is fast, as it re-encrypts only the DEK: ALTER DATABASE ENCRYPTION KEY ENCRYPTION BY SERVER CERTIFICATE ...
  - The DEK itself can be rotated: ALTER DATABASE ENCRYPTION KEY REGENERATE WITH ALGORITHM = AES_128
- High Availability
  - No effect on clustering and replication.
  - Mirroring and log shipping:
    - Create the database master key on the secondary/mirror.
    - Back up and restore the certificate from the primary/principal to the secondary/mirror.
- Recovery Process
  - The recovery process is single-threaded.
  - Recovery time is longer.

Impact on Tools

Utilities that scan transaction logs will fail. Built-in functions that use the log and VDI are not affected.

Issues with TDE

Below are a few issues you may see when you implement TDE.

High CPU utilization - Expect high CPU utilization when you implement TDE; the security benefit is paid for in CPU. It is recommended to implement TDE during low or no business hours and only on servers with spare CPU capacity. When you see CPU spikes as soon as you implement TDE, check the encryption status using the DMV sys.dm_database_encryption_keys. You can also use trace flags 5004 (pause the background encryption process) and 5005 (slow down the background encryption process).

Slow query performance - Expect slower query performance even on unencrypted databases. Performance degradation of 2-3% under normal CPU utilization and ~30% under 100% CPU utilization is expected.

High disk space utilization - Check whether snapshots are running for the database. Encrypting a large database can increase the log file size significantly, as the encryption of the whole database is logged.

Backup/restore issues - A database master key may already exist on the target server, or a certificate may be missing. The certificate must be backed up using the WITH PRIVATE KEY option. A DEK that was changed on the source between log records must be created on the destination.

Cannot encrypt database - Check the edition of SQL Server and any read-only filegroups. Check the DMV sys.dm_database_encryption_keys to see whether encryption is already enabled.

Mirroring and log shipping - When the data is out of sync, check for certificates on the secondary/mirror server. Expect significant overhead due to the log records written for encryption.
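The pacing described above (32 pages per burst, a 250 ms sleep after each burst, a checkpoint every 1024 pages) can be modelled to estimate how long the background scan spends just sleeping for a database of a given size. This is only an illustration of the documented pacing, not SQL Server's actual scheduler:

```python
# Illustrative model of the TDE background-scan pacing described above:
# pages are encrypted in bursts of 32, the process sleeps 250 ms after
# each burst, and a checkpoint is issued every 1024 pages.
BURST_PAGES = 32
SLEEP_MS = 250
CHECKPOINT_EVERY = 1024

def scan_pacing(total_pages):
    """Return (bursts, checkpoints, total_sleep_seconds) for a scan."""
    bursts = -(-total_pages // BURST_PAGES)      # ceiling division
    checkpoints = total_pages // CHECKPOINT_EVERY
    sleep_seconds = bursts * SLEEP_MS / 1000.0
    return bursts, checkpoints, sleep_seconds

# A 1 GB database holds 131072 pages of 8 KB each:
bursts, checkpoints, sleep_s = scan_pacing(131072)
print(bursts, checkpoints, sleep_s)  # 4096 128 1024.0
```

Even ignoring I/O, the sleeps alone add roughly 17 minutes per gigabyte with these defaults, which is consistent with the advice to run the scan outside business hours.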
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224650409.64/warc/CC-MAIN-20230604225057-20230605015057-00472.warc.gz
CC-MAIN-2023-23
6,191
61
https://christianity.meta.stackexchange.com/questions/6423/if-i-vote-up-an-answer-should-i-also-vote-up-the-question
code
Often I see questions with very few upvotes whose answers get quite a bit of upvoting. I myself have read a question, found it interesting, and then read a very good answer. Later, when re-reading, I see that I upvoted the answer but never bothered to vote on the question. Shouldn't we be upvoting the question if we found an answer worthy of our approval?

In most cases, if the question has great answers, it's probably because it was in response to a great question. However, it's possible for low-quality questions to have high-quality answers, and that would be a case where you probably wouldn't want to upvote the question. There's an intentionally small amount of guidance on when you should vote on posts, because it's often subjective. The primary bit of guidance that the site gives on reasons to upvote/downvote are the tooltips:

This question shows research effort; it is useful and clear

This question does not show any research effort; it is unclear or not useful

If you think a question meets one of these criteria, feel free to vote. Just make sure you're voting on the quality of the content, and not on the user (that would be voting fraud).
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817438.43/warc/CC-MAIN-20240419141145-20240419171145-00246.warc.gz
CC-MAIN-2024-18
1,169
6
https://racheleditullio.com/blog/2023/11/autocomplete-accessibility-bookmarklet/
code
Drag the autocomplete bookmarklet to your bookmarks bar.

I created this bookmarklet for testing WCAG success criterion 1.3.5 Identify Input Purpose (AA), which requires that if an input field requests personal information about the user, the appropriate autocomplete value be applied to the field. For example, an input field requesting the user's full name would have an attribute of autocomplete="name". This allows the browser to attempt to autofill the input field with previous values entered for the same information. It is very much browser-specific.

This bookmarklet checks the webpage for any input fields that contain the autocomplete attribute. If none are found, it returns an alert message. If autocomplete attributes are found, the script returns the values and displays them in context of the input field. This makes it easy to determine whether input fields have autocomplete attributes defined and whether they are valid. Click the bookmarklet again to remove the overlay text.
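The validation step the bookmarklet performs can be approximated in a few lines. The sketch below is not the author's actual bookmarklet (which runs as JavaScript over the live DOM and draws an overlay); it isolates the check against a small subset of the autocomplete field names defined in the HTML spec:

```python
# Simplified sketch of the bookmarklet's validation step (not the
# author's actual code). Classifies an autocomplete value against a
# *subset* of the field names defined by the HTML spec.
KNOWN_FIELD_NAMES = {
    "name", "email", "username", "tel", "street-address",
    "postal-code", "cc-number", "bday", "organization", "on", "off",
}

def classify_autocomplete(value):
    """Return 'missing', 'valid', or 'unknown' for an autocomplete value."""
    if value is None or not value.strip():
        return "missing"
    # autocomplete may hold several space-separated tokens; the field
    # name is the last one (e.g. "shipping postal-code").
    field_name = value.strip().lower().split()[-1]
    return "valid" if field_name in KNOWN_FIELD_NAMES else "unknown"

print(classify_autocomplete("name"))                  # valid
print(classify_autocomplete("shipping postal-code"))  # valid
print(classify_autocomplete("fullname"))              # unknown
print(classify_autocomplete(""))                      # missing
```

In a browser, the equivalent logic would run over the elements returned by document.querySelectorAll("input, select, textarea") and display each result next to its field.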
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948223038.94/warc/CC-MAIN-20240305060427-20240305090427-00344.warc.gz
CC-MAIN-2024-10
988
6
https://support.google.com/adwords/editor/answer/47656?hl=en&ref_topic=14626
code
Advanced URL changes

If you need to edit multiple URLs quickly, the advanced URL changes tool is an easy way to change final URLs, final mobile URLs, and tracking templates. The tool allows you to:
- Change all selected URLs to a new URL.
- Append text to all selected URLs.
- Remove a specified parameter from all selected URLs.

To use the tool:
- Select the items that you want to edit. For example, select Ads > Text ads to edit text ad final URLs.
- In the data view, select the rows to edit. To filter the data view so that you see fewer rows, use the tree view or advanced search.
- Go to the Edit menu > Change URLs.
- Choose the type of URL from the "Perform action in" drop-down menu.
- Enter your changes.
- Click Change URLs.

Good to Know

When adding final URLs to existing ads, make sure to post the URL upgrades before posting the rest of your changes. If you post normally without upgrading the URLs, you will lose all stats and history associated with that ad. Here's how to upgrade URLs in AdWords Editor:
- Enter final URLs in the edit panel.
- Click the drop-down arrow next to the "Post" button.
- Select Post URL upgrades.
- Select the campaigns you want to upgrade.
- Click Post URL upgrades.

After completing these steps, you can post other changes to your account normally, and the ads will retain their historical data. You only need to post URL upgrades once per ad. Final URLs offer better tracking options and have replaced destination URLs. Your ads' historical data will be lost unless you upgrade. Learn more about upgraded URLs and how to upgrade URLs on your account.

AdWords Editor can add multiple final URLs and final mobile URLs to an item, but not in the Change URLs dialog. Instead, use the "Make multiple changes" tool at the top of the data view. Here's how:
- Select the items you want to edit in the type list and data view.
- Click Make multiple changes.
- Paste your changes from a spreadsheet, or enter information manually by using the column drop-down menus to add columns for items that will be affected (such as keywords or ad groups) and columns for Final URL or Final mobile URL.
- If entering manually, fill in the fields with the related information, and then enter URLs for each row in a single field, separated by spaces (it won't work if you use semicolons to separate URLs).
- Click Process.

| Keyword | Final URL | Final mobile URL |
| --- | --- | --- |
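The space-separated URL rule above is easy to get wrong when preparing a spreadsheet. As a quick illustration of that separator rule (a hypothetical helper, not part of AdWords Editor):

```python
# Hypothetical helper (not part of AdWords Editor): prepare a cell that
# lists several final URLs for the "Make multiple changes" tool, which
# expects URLs separated by spaces -- semicolons won't work.
def format_url_cell(urls):
    """Join URLs with single spaces, rejecting semicolon-separated input."""
    cleaned = []
    for url in urls:
        url = url.strip()
        if ";" in url:
            raise ValueError(f"semicolons are not a valid separator: {url!r}")
        if url:
            cleaned.append(url)
    return " ".join(cleaned)

cell = format_url_cell(["http://example.com/a", "http://example.com/b"])
print(cell)  # http://example.com/a http://example.com/b
```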
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158958.72/warc/CC-MAIN-20180923020407-20180923040807-00402.warc.gz
CC-MAIN-2018-39
2,382
28
https://www.physicsforums.com/threads/rooted-tree-impossible-constructs.748646/
code
1. The problem statement, all variables and given/known data

For a UVa problem, I am working on constructing a rooted tree with the following constraints.
1. A tree of depth D means that the tree should contain at least 1 node which is exactly D distance away from the root, and there is no node more than D distance from the root.
2. The degree of a node of the tree cannot be greater than V. Degree of a node is simply measured by the number of nodes it is directly connected to, via a single edge.

2. Relevant equations

3. The attempt at a solution

The goal is to determine the maximum possible number of nodes. To find that, I am looking to sum over all V^i, where i ranges from 0 to D. This summation appears to give the maximum number of nodes correctly in many cases, so I'm assuming it's correct. However, the question also states that 'If it is not possible to construct the tree, print -1'. I can think of no possible case where this might occur. Do you think this is supposed to be printed when the user enters V and D outside the range given in the problem?
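One reading of constraint 2 worth checking against the summation above: since "degree" counts the edge back to the parent, a non-root internal node can have at most V-1 children (only the root gets all V edges), and a tree becomes impossible when, e.g., V = 1 and D > 1 (the root's single edge reaches depth 1, but its child has no spare edge to go deeper). That would also give a natural -1 case. A sketch under that interpretation:

```python
def max_nodes(depth, degree):
    """Max nodes in a rooted tree of exact depth `depth` where every
    node has at most `degree` neighbours (parent edge included).
    Returns -1 when no such tree exists."""
    if depth == 0:
        return 1                     # just the root
    if degree == 0 or (degree == 1 and depth > 1):
        return -1                    # cannot even reach the required depth
    total, frontier = 1, degree      # root may spend all `degree` edges
    for _ in range(depth):
        total += frontier
        frontier *= degree - 1       # children spend one edge on the parent
    return total

print(max_nodes(2, 3))  # 10, i.e. 1 + 3 + 3*2 (not 1 + 3 + 9 = 13)
print(max_nodes(3, 2))  # 7: root with two chains of length 3
print(max_nodes(5, 1))  # -1: impossible
```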
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00453.warc.gz
CC-MAIN-2018-30
1,076
1
https://backwpup.com/backwpup-release-3-9-0-migrate-website-to-another-url/
code
BackWPup release 3.9.0 is available: with our backup plugin it is now possible to migrate your site to another URL. Along with this main feature, several fixes have been provided. Further details follow.

Migrate website to another URL

This feature had been requested by our customers for a long time. They needed more than restoring their site from an archive backup: they wanted to create a backup of their site and then restore it to another domain. This means that if you need to change your domain, or create a copy of your site in a staging environment (for example, for testing and development), you can now easily achieve that with BackWPup in just a few clicks. For more details about this topic, see How to migrate your site with BackWPup.

Validation for database credentials on restore

During the restore process, the user was able to bypass the database credentials by simply clicking continue without entering anything. Validation has now been added to ensure that correct credentials are entered.

Unable to download backup from Google Drive

A bug in the Google Drive module was detected and reported by a customer: after saving a backup in Google Drive, it was not possible to download that backup from the plugin interface. The issue has been inspected and properly fixed in this BackWPup version.

Don't pre-fill database credentials when backing up non-WordPress database

This is an important security issue that has been solved. In the settings panel, when the user chose to back up a database, the related credentials were filled into the form; this way, through a browser inspection tool, a malicious user could read the database password. Even though this kind of action could be performed only by a user with administrator privileges, it was not good practice to expose such information.
So the dev team promptly fixed it: the credentials are no longer pre-filled, and they are kept safely secret.

Changelog for BackWPup 3.9.0

Here follows the changelog for the BackWPup release 3.9.0:
- (Pro) Migrate website to another URL
- (Pro) Validation for database credentials on restore
- PHP notice for outdated PHP versions less than 7.2
- (Pro) License deactivated on settings save
- (Pro) Corrupted path name in Google Drive destination
- (Pro) Unable to download backup from Google Drive
- Unable to connect to custom S3 endpoints
- Intermittent error selecting restore strategy
- Memory leaks when uploading to S3
- PHP 7.4 Deprecation notices
- PHP 8 compatibility issues
- Remove BackWPup user roles on uninstall in multisite
- Correctly handle relative upload paths
- Display welcome page even after consent dialog clicked
- Exclude non backup files from the backups page
- Format dates as ISO-formatted dates instead of binary hex in MySQL backup
- Don't pre-fill database credentials when backing up non-WordPress database
- Description of replacement patterns for archive name
- Added missing destinations to destination list in about page

Still Problems? Contact us!

Did you find another bug in BackWPup? Please let us know over at Github.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817438.43/warc/CC-MAIN-20240419141145-20240419171145-00782.warc.gz
CC-MAIN-2024-18
3,732
44
https://roxy-wi.org/services/portscanner
code
Some basics you may not know

How do devices successfully transfer big chunks of different data over a network connection? In the client–server model of application architecture, multiple simultaneous communication sessions may be initiated for the same service, and usually several services or applications run at the same time; this is what gives us multitasking and high transfer rates. How do computers manage it? Network ports are the key factor here.

What is a network port?

A network port is a virtual point where network connections start and end. Ports are software-based and managed by a computer's OS. Each port is associated with a specific process or service. Ports allow computers to easily differentiate between various kinds of traffic: emails go to a different port than webpages, despite the fact that both reach a computer over the same Internet connection. Ports are standardized across all network-connected devices, with each port assigned a number.

What is a port number?

A port number is a 16-bit integer (1 to 65535) which helps devices identify the specific service or application to which an internet or other network message should be forwarded when it arrives at a server. Port numbers are assigned automatically by the OS, manually by the user, or set as a default for some popular applications. They are mainly used in TCP- and UDP-based networks and are always associated with an IP address of a host.

There are 65,535 possible port numbers, although not all are in common use. Some of the most commonly used ports, along with their associated networking protocol, are listed below:
- Ports 20 and 21: File Transfer Protocol (FTP). FTP is for transferring files between a client and a server.
- Port 22: Secure Shell (SSH). SSH is one of many tunneling protocols that create secure network connections.
- Port 25: Simple Mail Transfer Protocol (SMTP). SMTP is used for email.
- Port 53: Domain Name System (DNS).
DNS is an essential process for the modern Internet; it matches human-readable domain names to machine-readable IP addresses, enabling users to load websites and applications without memorizing a long list of IP addresses.
- Port 80: Hypertext Transfer Protocol (HTTP). HTTP is the protocol that makes the World Wide Web possible.
- Port 123: Network Time Protocol (NTP). NTP allows computer clocks to sync with each other, a process that is essential for encryption.
- Port 179: Border Gateway Protocol (BGP). BGP is essential for establishing efficient routes between the large networks that make up the Internet (these large networks are called autonomous systems). Autonomous systems use BGP to broadcast which IP addresses they control.
- Port 443: HTTP Secure (HTTPS). HTTPS is the secure and encrypted version of HTTP. All HTTPS web traffic goes to port 443. Network services that use HTTPS for encryption, such as DNS over HTTPS, also connect at this port.
- Port 500: Internet Security Association and Key Management Protocol (ISAKMP), which is part of the process of setting up secure IPsec connections.
- Port 3389: Remote Desktop Protocol (RDP). RDP enables users to remotely connect to their desktop computers from another device.

Do open network ports serve only the good, though?

What is port scanning?

Port scanning is a process in which a special tool (a port scanner) sends client requests to a range of server port addresses on a host to find open/active ports and any vulnerabilities in the received data. In most cases, port scanning is used not for attacking or hacking but rather for identifying which services are available on a remote machine. A port scanner is an application designed to probe a server or host for open ports. Such an application may be used by administrators to verify the security policies of their networks, as well as by cyberattackers to identify network services running on a host and exploit vulnerabilities.
Port sweeping is the process of scanning several hosts for a specific listening port. It is typically used to search for a certain service on a certain port. For example, an SQL-based computer worm may look for hosts listening on TCP port 1433.

Why am I reading about cyberattacks and open port vulnerabilities here? Roxy-WI is capable of discovering security risks through port scanning and, therefore, can prevent possible network attacks.

About Roxy-WI Port scanner

Since version 4.5.3, Roxy-WI provides the opportunity to scan a remote system for open ports. Scanning is performed on demand, not regularly. Due to the irregular frequency, it is impossible to track changes and make sure that all unnecessary ports are closed. Since version 5.1.0, Roxy-WI has a service which tracks all open ports, compares them, keeps history, and notifies you if any changes occur. You now have up-to-date information about the network status of your servers.

How Roxy-WI Port scanner works

Roxy-WI Port scanner uses SYN scan. SYN scan is another form of TCP scanning. Rather than using the operating system's network functions, the port scanner sends raw IP packets itself and waits for responses. This scan type is also known as "half-open scanning", because it never actually opens a full TCP connection. The port scanner generates a SYN packet. If the target port is open, it will respond with a SYN-ACK packet. The scanner host responds with an RST packet, closing the connection before the handshake is completed. If the port is closed but unfiltered, the target will instantly respond with an RST packet. The use of raw networking has several advantages, giving the scanner full control of the packets sent and of the timeout for responses, and allowing detailed reporting of the responses. There is debate over what type of scan is less intrusive on the target host. SYN scan has the advantage that the individual services never actually receive a connection.
However, the RST during the handshake can cause problems for some network stacks, in particular for simple devices like printers. There are no conclusive arguments either way.

The port scanning service scans the remote systems (the ones this option is enabled for) every 5 minutes by default. For Port scanner service installation you should run:

Read here how to start using rpm.

Notifications about open and closed ports

The Port scanner can send you notifications via Roxy-WI when a port on the selected server changes state from open to closed or vice versa. To enable this function, select Monitoring → Port scanner in the main menu and tick the Notify checkbox.

Port scan history

You may also enable history for the Port scanner by ticking the Keep history checkbox. It may be helpful for future debugging.
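A SYN scan like the one described above needs raw sockets and elevated privileges. For illustrating what "open" versus "closed" means, a plain TCP connect() check from the standard library is a reasonable stand-in (a generic sketch, not Roxy-WI's implementation):

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Full-handshake TCP check: True if host:port accepts a connection.
    Unlike a SYN scan, the target service *does* see a (brief) completed
    connection, so this is more intrusive than half-open scanning."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a few well-known local ports.
for port in (22, 80, 443):
    state = "open" if port_is_open("127.0.0.1", port) else "closed/filtered"
    print(f"127.0.0.1:{port} is {state}")
```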
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00724.warc.gz
CC-MAIN-2023-14
6,726
38
https://lists.linux.org.au/pipermail/linux-aus/2012-April/019579.html
code
[Linux-aus] Should Linux Australia change its name
peter at cc.com.au
Thu Apr 26 12:50:42 EST 2012

----- Original Message -----
> On Thu, 2012-04-26 at 09:28 +1000, Chris Neugebauer wrote:
> Only if it didn't directly affect our ability to sign up sponsors, and
> restrict the delegate pool that would otherwise be interested in the
> And even if it *were* fair to say that, as an organiser, I'd much
> rather have the time to chase more sponsors so that we can put on a
> better conference. The time spent explaining the role of LA to one
> large, potential sponsor could have been used tracking down further
> keynote speakers.
> Our time is limited. If an *avoidable* task has been created, it
> directly affects our ability to serve the Python programming community
> through running PyCon Australia.
>> So, you think that sponsors are NOT going to expect you explain who
>> <insert some open source generalised name> organisation is?

It's not about the sponsors who contact you and ask why the website says "Linux". It's about the potential sponsors that see "Linux" on the website and the prospectus and decide not to bother contacting you at all. The fact that there are organisations contacting you about the word "Linux" being present implies, to me, that you're missing out on others who didn't bother to contact you at all. And that means conference delegates missing out and the community missing out.

A sponsor isn't just about getting money for your event; a sponsor is an employer who might send delegates and can have people submit presentations. Sponsors have clients, so they're also an entry point into a network to further promote your event.

More information about the linux-aus mailing list
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646350.59/warc/CC-MAIN-20230610200654-20230610230654-00366.warc.gz
CC-MAIN-2023-23
1,692
22
https://www.axigen.com/documentation/configuring-mailing-list-quotas-and-restrictions-p45254116
code
The "Mailing Lists" → "Quotas and Restrictions" page contains parameters at mailbox and folder level, notifications to be sent to the list members, and restrictions imposed on the mailing list being edited.

Managing Mailing List Quotas

At mailbox level, the total mailbox size, the total number of folders, and the total number of messages can be limited by selecting the respective options in the "Mailbox Level" area and using the up and down arrows to adjust the limits to the desired value.
For the total size limit, use the available drop-down menu to select whether you want it calculated in KB, MB, or GB. At folder level, you can set limits for the size of each folder and the total number of messages per folder by checking the respective options in the "Folder Level" section and using the up and down arrows to adjust the limits to the desired value. For the folder size limit, use the available drop-down menu to select whether you want it calculated in KB, MB, or GB.

To have the account user notified when reaching a certain level of their allowed quota (through a pop-up displayed when accessing the WebMail interface), check the respective option in the "Notifications" section and use the up and down arrows to increase or decrease the default percentage of the quota.

The number of POP3, IMAP, and WebMail sessions can be limited using the up and down arrows or by directly editing the text fields pertaining to each type of session. POP3 and IMAP sessions take values from 1 to 16, while WebMail sessions take values from 1 to 2048. To limit the attachment and message size, check the respective options in the WebMail section and use the up and down arrows to select the desired size. To have the size measured in KB, MB, or GB, use the available drop-down menu. Use the up and down arrows of the "Limit number of attachments per message" and "Limit number of recipients" options, or edit their corresponding text fields, to set the maximum number of attachments and recipients in an email message.

Message Sending Restrictions

Limits imposed on sent messages offer an easy way to prevent account users from generating spam: you can limit the total number of messages to be sent, and their size, within a time interval. Use the up and down arrows to select the desired size or edit the corresponding text field. To have message size calculated in KB, MB, or GB, use the respective drop-down menu.
When you are done configuring these parameters, remember to click the "Save Configuration" button to preserve your changes.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100229.44/warc/CC-MAIN-20231130161920-20231130191920-00193.warc.gz
CC-MAIN-2023-50
4,026
61
https://www.efinancialcareers.co.uk/jobs-Germany-Oststeinbek-Quantitative_Developer_in_the_area_of_Trading_Analytics.id14495817
code
SSW-Trading is the perfect combination of innovative trading strategies, state-of-the-art technology and a unique, interdisciplinary team. This is how we have secured our place at the top of automated trading of financial instruments. Continuous investments in qualified personnel and IT enable us to trade around the clock on the global financial markets. And we continue to grow. Become part of our Trading division and contribute your interests and skills in a motivating environment. In your team of quantitative developers, you will develop technically sophisticated trading solutions. You are responsible for scripts and tools that efficiently process large amounts of data in a high-performance infrastructure. Alongside trading experts and analysts, you will support the development and advancement of quantitative control and optimization models. You support the implementation of the analysis/reporting setup and actively contribute to process automation. You will develop into an expert in the technical implementation of commercial logic or infrastructural challenges. You have experience in functional and object-oriented programming, preferably with R or Python. You know methods for dealing with big data or have a high interest in getting to know them. Ideally, you have already implemented your own software projects with a focus on scripting and automation. You program result-oriented, always ensuring the highest quality standards and sustainable resilience of your code. You are interested in new technologies and proactively share your know-how in the team. Fluent German and good spoken and written English skills are required.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00743.warc.gz
CC-MAIN-2022-33
1,650
12
https://www.dctriclub.org/calendars/hagerstown-duathlon-2-2022/
code
Hagerstown Duathlon #2 2022 This "sprint" distance duathlon, designed for all levels, will start and end at Halfway Park in Hagerstown, MD, just 25 minutes from Frederick, MD and 90 minutes from Baltimore and DC. This is a great course for the beginner and for the experienced duathlete who wants to go fast! We will host the Hagerstown Youth Duathlon #2 and Hagerstown 5K Run #3 during this weekend. See those pages for details.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683683.99/warc/CC-MAIN-20220707033101-20220707063101-00187.warc.gz
CC-MAIN-2022-27
436
2
https://www.coderanch.com/t/304475/databases/Query-MySQL
code
I'd love to help you with this - SQL problems are always fun to solve - but I don't get what you're trying to do. First of all, 18 is correct. What answer were you expecting? Second, I don't understand the join you're trying to do. Perhaps it would help if you provided the results you're trying to get out of the table. Maybe come up with the resultset you're looking for - then, perhaps, we can help you write the SQL to produce the resultset.

The correct result should be 3, because only the best result is counted for each player. So the query to get a player's position in the list should be executed against a result set created with something like this: "select * from result group by player_id order by points desc". So first I need a results list which contains only the best result from each player, and from there I need to read the player's position in the list.

select player_id, max(points) as points from result where game_id = 3 group by player_id order by points desc;

My "order by" statement may not work in your db as is, but does the general idea work? This will list all the players who played game 3, and put them in order from most points to least. Is that closer to what you're looking for?
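The suggestion above can be tried end-to-end with an in-memory database. A sketch using Python's sqlite3 with made-up sample data (the real schema is only partially shown in the thread), adding player_id as a tiebreaker so the ordering is deterministic:

```python
import sqlite3

# Made-up sample data modelled on the thread's `result` table:
# each player may have several results for the same game.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE result (player_id INTEGER, game_id INTEGER, points INTEGER)"
)
conn.executemany(
    "INSERT INTO result VALUES (?, ?, ?)",
    [(1, 3, 10), (1, 3, 25), (2, 3, 40), (2, 3, 15), (3, 3, 25)],
)

# Best result per player for game 3, ranked from most points to least.
rows = conn.execute(
    """SELECT player_id, MAX(points) AS points
       FROM result
       WHERE game_id = 3
       GROUP BY player_id
       ORDER BY points DESC, player_id"""
).fetchall()
print(rows)  # [(2, 40), (1, 25), (3, 25)]
```

A player's position in the list is then just the row index plus one.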
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00585.warc.gz
CC-MAIN-2021-43
1,317
10
https://blog.tlocke.org.uk/2008/07/varlibtomcat55webapps.html
code
I've used GNU/Linux since (February?) 1999. I started with Slackware, and now I'm running Ubuntu. I think Linux is fantastic, but one thing always irritates me. Whereas MS-Windows has a 'Program Files' directory with a sub-directory for each application, Ubuntu stores application files across the file-system according to type, eg. /etc, /var/log, /bin. I find the Windows approach much easier to work with. The Ubuntu method means it takes much longer to find where things are. For log files it may be easy, but where is the webapps directory for apache-tomcat?
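For the record, Debian/Ubuntu's tomcat5.5 package keeps its deployment directory at /var/lib/tomcat5.5/webapps. When a package has scattered its files somewhere you can't guess, a name search (the equivalent of `find / -type d -name webapps`) does the job. The sketch below builds a throwaway directory tree standing in for the root filesystem so it runs anywhere; the layout is illustrative, not a claim about your machine:

```python
import os
import tempfile

# Mock filesystem mimicking the Ubuntu tomcat5.5 layout (illustrative paths).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "var", "lib", "tomcat5.5", "webapps"))
os.makedirs(os.path.join(root, "etc", "tomcat5.5"))

def find_dirs(top, name):
    """Walk the tree under `top` and return every directory called `name`."""
    hits = []
    for dirpath, dirnames, _ in os.walk(top):
        if name in dirnames:
            hits.append(os.path.join(dirpath, name))
    return hits

matches = find_dirs(root, "webapps")
print(matches)  # one hit, under var/lib/tomcat5.5
```

On a real system you would pass "/" as `top` (or just use `find`), trading the per-application directory Windows gives you for one search.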
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257481.39/warc/CC-MAIN-20190524004222-20190524030222-00391.warc.gz
CC-MAIN-2019-22
563
2
https://demo4.simplifyyourweb.com/index.php
code
Multi-purpose template, designed to be easily modifiable and a base for building fluid templates in a responsive environment. Built like Cassiopeia, so if you are familiar with the core template, you'll be familiar with Bare 960 Responsive in no time. This website (and all our projects) is built with Bare 960 Responsive. - based on Bootstrap 5, packaged with Joomla, - optional slide pane with hamburger menu, - optional scroll indicator, - possible use of breakpoints for the logo (show different logos for different screen sizes), - ability to fix the main menu on scroll, - GDPR-compliant web fonts (when selecting Bunny fonts), - available Bootstrap 2 styles for backward compatibility with pages that contain Bootstrap 2.3.2 classes (label, badge, alerts, buttons, tables).
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100172.28/warc/CC-MAIN-20231130062948-20231130092948-00303.warc.gz
CC-MAIN-2023-50
769
10
http://y2fox.com/projectmethod/
code
Meeting our obligations is not just a business goal but a moral imperative. Our Project Management Methodology encapsulates well-tested approaches and describes in detail the phases, activities and tasks required to conduct a project from start to finish. Our Project Management Methodology: The bottom line of our project management methodology is based on the following best practices - Decrease cost by saving the time and effort needed to build deliverables - Reduce the time spent completing project deliverables - Minimize change, risks and issues by defining the project properly before work starts on the first task - Assure the quality of deliverables, increasing the likelihood of meeting the customer’s requirements - Monitor and control the project more efficiently, especially during the Execution phase - Manage suppliers more effectively with comprehensive supplier contracts - Improve staff performance by clarifying roles, responsibilities and delivery expectations - Manage resources to avoid conflict or shortage - Establish a clear line of communication, collaboration and authority between project staff and project

Project Management Techniques and Tools we use: We use agile project management techniques, being a technology company. We champion Agile methodology because the project is organized as a series of relatively small tasks conceived and executed to conclusion as the situation demands, in an adaptive manner, rather than as an entirely pre-planned process. We love the approach because it keeps the project owner (the client) engaged with our team from start to finish.

Agile is an umbrella for multiple methodologies:
- Rapid Application Development, which we mainly use for data conversion, migration and implementation processes
- eXtreme Manufacturing, which we mainly use to create prototypes for custom software
- eXtreme Programming, which we mostly use for COTS configuration/implementation
- Scrum, which we mostly use for development that focuses on iterative goals set by the Product Owner
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526359.16/warc/CC-MAIN-20190719202605-20190719224605-00044.warc.gz
CC-MAIN-2019-30
2,028
18
https://www.experts-exchange.com/questions/25006762/How-can-I-resolve-enter-full-pathname-to-Java-when-first-starting-Sql-Developer.html
code
How can I resolve an "enter full pathname to Java" prompt when first starting SQL Developer? I am automating the Oracle installation. I want to resolve an "enter full pathname to Java" message that appears when SQL Developer is first started. How can I resolve this - maybe with a batch file giving a path to Java? Not sure how to go about resolving this.
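The prompt appears because SQL Developer has no recorded JDK path yet; it stores that path in a plain-text conf file, so an automated install can pre-seed it and the dialog never shows. The `SetJavaHome` directive is the real mechanism, but the exact file location varies by release (older builds read `sqldeveloper\bin\sqldeveloper.conf` in the install tree; newer ones read `product.conf` under the user's `%APPDATA%\sqldeveloper\<version>` folder), and the JDK path below is only an example - point it at whatever JDK your installer lays down:

```
# Append to sqldeveloper.conf (or product.conf on newer releases).
SetJavaHome C:\Program Files\Java\jdk1.8.0_202
```

A batch file that appends this line before first launch answers the question directly.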
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864191.74/warc/CC-MAIN-20180621153153-20180621173153-00086.warc.gz
CC-MAIN-2018-26
344
2
http://webmasters.stackexchange.com/questions/tagged/isp+static-ip
code
How can I host a website on a dynamically-assigned IP address? I recently upgraded my internet to the point that it is much faster and more reliable than my current webhost. I would like to move my current domain to be hosted at home, but my IP address is ... Mar 21 '12 at 5:20 newest isp static-ip questions feed
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00473-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
2,104
51
https://www.buzzfeed.com/donnad/this-maze-took-7-years-to-draw
code
In Japan, Twitter user Kya7y came across an insanely detailed maze. A whopping 33 × 23 inches, the maze was created by her father over 30 years ago. Over a period of 7 years her father, a janitor at a public university, doodled the intricate loops and dead ends in his free time.
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396027.60/warc/CC-MAIN-20160624154956-00135-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
648
4
https://www.spectroom.com/1021338403-generic-security-service-algorithm-for-secret-key-transaction
code
Generic Security Service Algorithm for Secret Key Transaction GSS-TSIG is an extension to the TSIG DNS authentication protocol for secure key exchange. It is a GSS-API algorithm which uses Kerberos for passing security tokens to provide authentication, integrity and confidentiality.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00067.warc.gz
CC-MAIN-2021-17
486
5
http://open-std.org/JTC1/SC22/open/n3571.html
code
ISO/IEC JTC 1/SC22 Programming languages, their environments and system software interfaces Secretariat: U.S.A. (ANSI) ISO/IEC JTC 1/SC 22 N3571 Canadian National Body Position on Creation of a Linux Study Group Canadian National Body National Body Contribution This contribution will be reviewed at the Linux Study Group meeting, 28-30 May 2003, London, UK. Address reply to: ISO/IEC JTC 1/SC22 Secretariat 25 West 43rd Street New York, NY 10036 Telephone: (212) 642-4992 Fax: (212) 840-2298 ____end of cover page, beginning of document__________ Canadian National Body submission to ISO/IEC/JTC1/SC22 about creation of a Linux Study Group. Canada fully supports the creation of a Study Group to participate in Linux standardization at the ISO level; indeed, Canada would prefer to see a Linux WG formed as soon as an NP for Linux work can be processed. We propose the creation of New Work Items for Linux standardization (if agreement is reached with community-based Linux standards groups). Canada is prepared to participate in Linux standardization through SC22. We support the development of a close working relationship with community-based Linux standards groups (including LSB but possibly including other groups if they exist). We see this close working relationship including category C liaison. We are aware, however, that there is considerable sensitivity on the part of "community-based standards" groups that involvement in JTC1 standardization will eliminate freely-available documents or distort a functioning standards process. The LSG, and indeed SC22 and JTC1 itself, must take steps to assure such groups that they will not be subsumed by JTC1 processes, and that work products originating from community-based groups retain the community-based copyrights, i.e. things that are/were considered freely available continue to be freely available
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232261326.78/warc/CC-MAIN-20190527045622-20190527071622-00052.warc.gz
CC-MAIN-2019-22
1,861
36
http://www.dynamicdrive.com/forums/showthread.php?21530-Z-indexing-flash&p=95968
code
Hey all. On my current page I have a drop down menu done in JS and a scrolling image thing done in Flash. I have z-indexed the div for both elements, yet the drop down menu still goes behind the Flash, but above everything else I have on the page. Is there any way to directly z-index the Flash element besides sticking it inside of a div, which I am doing now? Or is there something with AS I can do that will make it fall underneath the drop down menu?
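The usual fix here is Flash's `wmode` parameter rather than CSS z-index: in the default "window" mode the plugin paints above all HTML content no matter what z-index its container has, while `opaque` or `transparent` hands compositing back to the browser so normal stacking applies. A sketch, with placeholder file name and dimensions:

```html
<object width="500" height="120">
  <param name="movie" value="scroller.swf">
  <!-- "opaque" (or "transparent") lets the menu's z-index win over the plugin -->
  <param name="wmode" value="opaque">
  <!-- duplicate the setting on <embed> for Netscape-style browsers -->
  <embed src="scroller.swf" wmode="opaque" width="500" height="120">
</object>
```

No ActionScript change is needed; the parameter sits entirely on the embedding page.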
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123276.44/warc/CC-MAIN-20170423031203-00193-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
446
4
https://www.houhaswissies.com/best-shared-web-hosting-for-small-business/
code
Best Shared Web Hosting For Small Business Finding a good, affordable web hosting provider isn't easy. Every website has different needs from a host, and you have to compare all the features of a hosting company while looking for the best deal possible. That can be a lot to sort through, especially if this is your first time buying hosting or building a site. Most hosts offer very low introductory prices, only to raise those rates two or three times higher once your initial contract is up. Some hosts offer free incentives when you sign up, such as a free domain name or a free SSL certificate, while others offer better performance and higher levels of security. Below we dive deep into the best cheap web hosting plans out there. You'll learn which core hosting features matter in a host and how to assess your own hosting needs so that you can pick from one of the best cheap hosting providers below. Disclosure: when you buy a hosting package through links on this page, we earn some commission. This helps us keep this site running. There are no extra costs to you whatsoever for using our links. The list below is of the best cheap web hosting plans that I have personally used and tested. What We Consider To Be Cheap Web Hosting When we describe a hosting plan as "cheap" or "budget", we mean hosting that falls in the price bracket between $0.80 and $4 per month. While researching cheap hosting providers for this guide, we looked at over 100 different hosts that fell into that price range. We then assessed the quality of their cheapest hosting plan, value for money and customer service.

In this write-up, I'll be discussing this top-rated web hosting company and include as much relevant information as possible. I'll go over the features, the pricing options, and anything else I can think of that might be of benefit if you're deciding whether to sign up with Bluehost and get your websites up and running. So without further ado, let's check it out. Bluehost is one of the largest web hosting companies in the world, getting large marketing support both from the company itself and from the affiliate marketers who promote it. It really is an enormous company that has been around for a long time, has a huge reputation, and is definitely one of the top choices when it comes to web hosting (certainly within the top 3, at least in my book). But what is it exactly, and should you get its services? Today I will cover everything you need to know, assuming you are a blogger or a business owner who is looking for a web host and doesn't know where to get started, since it's a great service for that audience in general. Let's imagine you want to host your sites and make them visible. Okay? You already have your domain (which is your website's address, or URL) and now you want to "turn the lights on". You need some hosting. To accomplish all of this and make your website visible, you need what is called a "server". A server is a black box, or device, that stores all your website data (files such as photos, text, videos, links, plugins, and other information). Now, this server has to be on at all times, and it has to be connected to the internet 100% of the time (I'll be mentioning something called "downtime" later on).

In addition, it also needs (without getting too fancy or into detail) a file transfer protocol, commonly called FTP, so it can show web browsers your website in its intended form. All these things are either expensive or require a high level of technical skill (or both) to set up and maintain. You could go out there, learn all of this yourself and set it up... but instead of buying and maintaining your own server, why not just rent hosting instead? "This is where Bluehost comes in. You rent their servers (called shared hosting) and you launch a website using those servers." Since Bluehost keeps all your files, the company also lets you set up your content management system (CMS, for short), such as WordPress, for you. WordPress is an extremely popular CMS... so it just makes sense to have that option available (almost every hosting company now has this option too). In other words, you no longer need to set up a server and then separately integrate the software where you create your content; it is all rolled into one package. Why does that matter? Well... imagine if the server were in your house: if anything happened to it at all, all your files would be gone. If something went wrong with its internal workings, you would need a technician to fix it. If something overheated, broke down or got damaged... that's no good! Bluehost takes all these headaches away and handles everything technical: pay your server "rent" and they will take care of everything. And once you buy the service, you can start concentrating on adding content to your website, or put your effort into your marketing campaigns. What Services Do You Get From Bluehost?

Bluehost offers a myriad of different services, but the main one is hosting, of course. The hosting itself comes in different types: you can rent a shared server, have a dedicated server, or a virtual private server. For the purpose of this Bluehost review, we will focus on the hosting services and other services that a blogger or an online business owner would need, rather than going too deep down the rabbit hole into the other services aimed at more experienced users. - WordPress, WordPress Pro, and e-commerce: these hosting services are the packages that let you host a website using WordPress and WooCommerce (the latter of which lets you do e-commerce). After buying any of these packages, you can start building your website with WordPress as your CMS. - Domain marketplace: you can also buy your domain name from Bluehost instead of from other domain registrars. Doing so makes it easier to point your domain to your host's name servers, since you're using the same marketplace. - Email: once you have bought your domain, it makes sense to also get an email address tied to it. As a blogger or online business owner, you should almost never use a free email service like Yahoo! or Gmail; an address like that makes you look amateurish. Fortunately, Bluehost gives you one for free with your domain. Bluehost also offers dedicated servers. And you may be asking... "What is a dedicated server anyway?" Well, the thing is, the standard Bluehost hosting plans can only handle so much traffic to your website, after which you'll need to upgrade your hosting. The reason is that the standard servers are shared: one server can be serving two or more sites at the same time, one of which may be yours.

What Does That Mean For You? It means that the single server's resources are shared, and it is doing multiple jobs at any given time. Once your website starts to hit 100,000 site visits per month, you are going to need a dedicated server, which you can also get from Bluehost for a minimum of $79.99 per month. This is not something you need to worry about when you're starting out, but you should keep it in mind for sure. Bluehost Pricing: How Much Does It Cost? In this Bluehost review, I'll focus mainly on the Bluehost WordPress hosting packages, since that's the most popular one and probably the one you're looking for, which will fit you best (unless you're a major brand, company or site). The three available plans are as follows: - Basic Plan: $2.95 monthly / $7.99 regular price - Plus Plan: $5.45 monthly / $10.99 regular price - Choice Plus Plan: $5.45 monthly / $14.99 regular price The first price you see is the rate you pay upon sign-up, and the second price is what the cost becomes after the first year with the company. So basically, Bluehost bills you on a yearly basis, and you can also choose how many years of hosting to pay for up front. If you choose the Basic plan, you will pay $2.95 x 12 = $35.40 starting today, and by the time you enter your 13th month you will pay $7.99 per month, which is also billed annually, if that makes sense. If you are serious about your site, you should absolutely take the three-year option. This means that for the Basic plan you will pay $2.95 x 36 months = $106.20, and you only start paying $7.99 per month once you hit your fourth year. If you think about it, this approach will save you about $120 over the three years.
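The savings claim for the Basic plan checks out; a quick sketch of the arithmetic, using the prices as quoted in this review:

```python
# Basic plan prices as quoted: $2.95/mo introductory, $7.99/mo regular,
# both billed annually.
intro, regular = 2.95, 7.99

one_year_upfront = intro * 12        # year 1 prepaid at the intro rate
years_2_and_3 = regular * 12 * 2     # renewal rate for the next two years
pay_yearly_total = one_year_upfront + years_2_and_3

three_years_upfront = intro * 36     # lock the intro rate for 3 years

saving = pay_yearly_total - three_years_upfront
print(round(one_year_upfront, 2))     # 35.4
print(round(three_years_upfront, 2))  # 106.2
print(round(saving, 2))               # 120.96 -- the "$120 over 3 years"
```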
It's not much, but it's still something. If you want to host more than one website (which I highly recommend, and if you're serious you'll probably add more at some point), you'll want to take advantage of the Choice Plus plan, which lets you host unlimited websites. What Does Each Plan Offer? So, for the WordPress hosting plans (which are like the shared hosting plans but more tailored towards WordPress, which is what we'll be focusing on), the features are as follows. For the Basic plan, you get: - One website only - Secured website via SSL certificate - Maximum of 50GB of storage - Free domain for a year - $200 marketing credit Remember that domain names are purchased separately from the hosting; you can get a free domain name with Bluehost here. For both the Bluehost Plus and Choice Plus plans, you get the following: - Unlimited number of websites - Free SSL certificate - No storage or bandwidth limit - Free domain name for one year - $200 marketing credit - 1 Office 365 mailbox that is free for one month The Choice Plus plan has the added benefit of the CodeGuard Basic option, a backup system where your files are saved and replicated. If any crash happens and your site data disappears, you can restore it to its original form with this feature. Notice that even though both plans cost the same up front, the Choice Plus plan then defaults to $14.99 per month, regular price, after the number of years you've chosen. What Are The Advantages Of Using Bluehost? So, why choose Bluehost over other web hosting services?

There are thousands of web hosts, many of which are resellers, but Bluehost is one of a select few that have stood the test of time, and it's probably the best known out there (and for good reason). Here are the three main advantages of choosing Bluehost as your web hosting company: - Server uptime: your website will not be visible if your host is down; Bluehost has more than 99% uptime. This is extremely important when it comes to Google SEO and rankings; the higher the better. - Bluehost speed: how fast your server responds determines how quickly your website shows in a browser; Bluehost is lightning fast, which means you will reduce your bounce rate. Albeit not the best when it comes to loading speed, it's still hugely important to be fast, to make the user experience better and improve your ranking. - Unlimited storage: if you get the Plus plan, you need not worry about how many files you store, such as videos; your storage capacity is unlimited. This really matters, because you'll probably run into storage problems later down the track, and you don't want that to ever become an issue. Lastly, customer support is 24/7, which means that no matter where you are in the world, you can contact the support team to fix your website problems. Pretty standard nowadays, but we shouldn't take it for granted; it's also really important. Also, if you got a free domain name with them, there will be a $15.99 charge deducted from the amount you originally paid (I imagine this is because it sort of takes the domain "off the market"; I'm not sure about this, but there is probably a hard cost for registering it).

Finally, any refund requests after thirty days are void (although in all honesty, they probably should be strict here). So as you can see, this isn't necessarily a "no questions asked" policy, like with some of the other hosting options out there, so make sure you're okay with the policies before proceeding with the hosting.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00608.warc.gz
CC-MAIN-2022-49
13,890
81
https://01.org/dleyna/documentation/manager-object
code
There is only ever a single instance of this object. The manager object exposes a single D-Bus interface, com.intel.dLeynaRenderer.Manager. The interface com.intel.dLeynaRenderer.Manager contains four methods. Descriptions of each method, along with their D-Bus signatures, are given below.

- GetRenderers() -> ao: GetRenderers takes no parameters and returns an array of D-Bus object paths. Each of these paths references a D-Bus object that represents a single DMR. Note this method was called GetServers prior to version 0.0.2.
- GetVersion() -> s: Returns the version number of dLeyna-renderer.
- Release() -> void: Indicates to dLeyna-renderer that a client is no longer interested in its services. Internally, dLeyna-renderer maintains a reference count. This reference count is increased when a new client connects. It is decreased when a client quits. When the reference count reaches 0, dLeyna-renderer exits. A call to Release also decreases the reference count. Clients should call this method if they intend to keep running but have no immediate plans to invoke any of dLeyna-renderer's methods. This allows dLeyna-renderer to quit, freeing up system resources.
- Rescan() -> void: Forces a rescan for DMRs on the local area network. This is useful to discard DMRs which have shut down without sending BYE messages, or to discover new DMRs which for some reason were not detected when either they or the device on which dLeyna-server runs was started or joined the network. New in version 0.0.2.

The com.intel.dLeynaRenderer.Manager interface also exposes two signals.

- FoundRenderer(o): Generated whenever a new DMR is detected on the local area network. The signal contains the path of the newly discovered renderer. Note this signal was called FoundServer prior to version 0.0.2.
- LostRenderer(o): Generated whenever a DMR is shut down. The signal contains the path of the renderer which has just been shut down. Note this signal was called LostServer prior to version 0.0.2.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655921988.66/warc/CC-MAIN-20200711032932-20200711062932-00156.warc.gz
CC-MAIN-2020-29
1,997
9
https://cyshih.github.io/
code
I'm a PhD candidate in EECS at UC Berkeley. I'm part of the Berkeley AI Research Lab. I'm interested in machine learning for control. I received my M.S. in Machine Learning from Carnegie Mellon and my B.S. in EECS from UC Berkeley. In the past, I've worked on multi-agent collision avoidance, learning from demonstration, human behavior learning, optimal control, hybrid systems, and hierarchical planning in the CMU Machine Learning Department and the Berkeley AI Research Lab.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154127.53/warc/CC-MAIN-20210731234924-20210801024924-00338.warc.gz
CC-MAIN-2021-31
460
11
https://docs.microsoft.com/en-us/archive/blogs/steve_fox/silverlight-rtm-and-sharepoint-blueprints-ship-on-codeplex
code
Silverlight RTM and SharePoint Blueprints Ship on Codeplex Today, additional SharePoint and Silverlight Blueprints on CodePlex were uploaded to include two more Silverlight RTM samples: custom navigation in SharePoint and the colleague viewer. These blueprints provide samples for you to build and explore using Silverlight as an alternate way to develop and integrate powerful user experiences within SharePoint and add rich Internet application functionality to your SharePoint sites. Included in the overall set of samples are a Hello World sample, a Slider sample, and the recently added Custom Navigation and Colleague Viewer samples added today. You can download the samples along with documentation and screencasts at http://www.codeplex.com/SL4SP. Over the next few weeks, I'll publish a couple of other blog posts on Silverlight and SharePoint. I think there is tremendous opportunity here for building rich Internet applications that can also be thought of as OBAs. For example, think about skinning an integration with SAP with Silverlight and then dropping it into a SharePoint site. This not only brings the LOB system world into SharePoint, but does it in a way that improves the look and feel of the UI, as well as providing additional controls that you can build around the UI to, for example, filter on SAP, Dynamics or other LOB system data. For more resources on how to integrate Silverlight and SharePoint, be sure to visit http://mssharepointdeveloper.com. Keep an eye on this site over the next few weeks, as we're going to publish a new round of content for WCM and MOSS. Some cool SharePoint and Silverlight stuff will be shipping.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657154789.95/warc/CC-MAIN-20200715003838-20200715033838-00131.warc.gz
CC-MAIN-2020-29
1,650
4
https://bitcoinexchangeguide.com/substratum-sub-token-faces-scam-accusations-due-to-faking-github-activity-numbers/
code
Substratum (SUB Token) Faces Scam Accusations Due to Faking GitHub Activity Numbers The cryptocurrency market is full of scams and projects without a clear path, whose main intention is to profit from investors in the absence of a clear regulatory framework. Substratum (SUB) seems to be one of these coins and projects. One of Substratum's developers, B.J. Allmon, reported a few months ago that the project had over 2 million lines of code in the Substratum GitHub repo. Nevertheless, what should have been great news for the community ended up revealing something less flattering. The Substratum community did not take these numbers for granted and started to investigate whether they were real. Using a tool that counts lines of code in GitHub repos, the community discovered that the project had only 32,000 lines of code. According to the Count LOC tool, Substratum's developer had inflated the number 62.5 times. However, having more lines of code does not mean that something is necessarily good. The Reddit user CHAiN76 explained that Substratum's decision to inflate the numbers was a poor one, quoting Bill Gates's remark that measuring programming progress by lines of code is like measuring aircraft building progress by weight. Because of this situation, several users decided to question the project and its viability. Another Reddit user posted an explanation of why Substratum has a minting contract similar to Oyster Pearl's. It is important to remember that Oyster Pearl had problems with its smart contract at the end of October: an insider was able to start minting new PRL tokens. It seems that Substratum is affected by the same issue. A Substratum developer could eventually exploit it and damage the whole network. It is also possible for the team behind this virtual currency to perform an exit scam. Another issue that arose around this project is related to its regular token burn.
When a project decides to burn tokens, it means that they send the coins to an address that nobody has the private keys for it, meaning that it is not possible to have full access to it. Despite that, Substratum apparently owns the private keys of this wallet where they send the burned tokens. At the time of writing, Substratum is the 120th largest virtual currency with a market capitalization of $43 million dollars. Each SUB token can be bought for $0.113 dollars.
https://gust.com/companies/jamscreen
JamScreen is an SME CRM platform that leverages social, local, and mobile. We have an immense opportunity to win market share in 600 cities across North America. JamScreen allows SMEs to leverage social media and social networks to interact with consumers in an ecosystem that removes friction and transacts goods and services. The platform lets SMEs track, manage, and gather intelligence through custom profiles. It lets consumers stay socially and locally connected to SMEs, and lets SMEs develop social-local-mobile marketing campaigns at a fair cost. We generate revenue through research, advertising, and transaction fees.
http://www.meetup.com/Ocean-Beach-Buddies/photos/13118232/204434652/
This group is for people who live near Ocean Beach or in the western part of the City, or who would like to meet up or attend events in this area. It's meant to build community and to provide a place where people who enjoy our unique foggy part of the City can come together. So let's meet up and do groovy things at or near the beach! I'll need help with organizing events, or it won't be a very active group, so please let me know if you're willing to help out. I'd like to have a few organizers in addition to me.
http://cciew.blogspot.com/2011/03/acs-network-access-profiles.html
You can use Network Access Profiles in ACS to either grant or deny access based on various attributes. This example denies users on the SSID "Sec1" and clients with a particular OUI from accessing the network. Many attributes are available, so it's worth learning what the main ones are. You can also use this approach to allow authentication based on certain attributes while denying others.
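To make the attribute-matching idea concrete: in a wireless RADIUS request, the SSID commonly rides in the Called-Station-Id attribute (AP MAC followed by the SSID), and the client's MAC address, whose first three octets are the vendor OUI, arrives in Calling-Station-Id. The sketch below illustrates that matching logic in Python; it is not ACS itself, and the OUI value and attribute formats are assumptions for illustration (the original post names SSID "Sec1" but not the OUI):

```python
def deny_access(radius_attrs,
                blocked_ssid="Sec1",
                blocked_oui="00-1A-2B"):
    """Return True if the request should be denied.

    Illustration of NAP-style attribute matching. The blocked OUI is a
    made-up example; the attribute formats assumed are the common
    wireless conventions, not guaranteed for every vendor.
    """
    # Called-Station-Id for wireless is typically "AP-MAC:SSID",
    # e.g. "AA-BB-CC-DD-EE-FF:Sec1" -- the SSID is the part
    # after the last colon.
    called = radius_attrs.get("Called-Station-Id", "")
    ssid = called.rsplit(":", 1)[-1]

    # Calling-Station-Id is the client MAC ("11-22-33-44-55-66");
    # its first three octets (8 characters) are the vendor OUI.
    calling = radius_attrs.get("Calling-Station-Id", "")
    oui = calling.upper()[:8]

    return ssid == blocked_ssid or oui == blocked_oui.upper()
```

In ACS itself you would express the same conditions as NAP filters on those RADIUS attributes rather than in code, but the evaluation is the same: match on SSID or MAC prefix, then apply the grant or deny action.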