| url (stringlengths 13 to 4.35k) | tag (stringclasses, 1 value) | text (stringlengths 109 to 628k) | file_path (stringlengths 109 to 155) | dump (stringclasses, 96 values) | file_size_in_byte (int64 112 to 630k) | line_count (int64 1 to 3.76k) |
|---|---|---|---|---|---|---|
https://community.spiceworks.com/topic/109863-ftp-with-adhoc
|
code
|
I am trying to find out if anybody knows of any FTP software that will produce or configure links to folders or files I can send to end users. We use WS FTP Server and I know that AD HOC is available for use with WS FTP but the company will not purchase that option.
Most of the end users use Windows Explorer or FileZilla (+5) :-) to connect to the server. As you can imagine or may know first hand, end users are not exactly understanding of the workings of FTP even when I send them complete instructions and links.
So, with that in mind, I was hoping to find a free or open source solution that will fill this gap: something that spares the end users from having to worry about a tool they do not use on a regular basis, and that saves me from having to hold their hands so much.
Any thoughts or information would be greatly appreciated!
net2ftp can do this. It is separate from your FTP server, and can be hosted/branded/modified to fit.
We use it to send out links to files by email, although it isn't obvious how to do it. You need to log in as an anonymous user or the target user, navigate to where you need to be, and click the favorite icon. It gives you a link that you can then send to someone to click on; it drops them to a password prompt which they have to fill in, then drops them into a web UI for the FTP server with their rights in place.
Thanks for the replies. I do not think they will go for the net2ftp idea, as nothing is allowed off of our network (no idea how they expect that to happen when we have worldwide offices and locations and files go to all of them). I did not know about the FTP option in Firefox but will check it out. Is it a plug-in or native to the new release(s)?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105976.13/warc/CC-MAIN-20170820053541-20170820073541-00060.warc.gz
|
CC-MAIN-2017-34
| 1,721
| 7
|
https://articulate.com/support/article/add-or-remove-the-zoom-tool-from-an-image-in-quizmaker-09
|
code
|
You can easily add or remove the zoom tool from an image you have inserted into a Quizmaker '09 question. Here's how:
- Double click the question that contains the image.
- Click on Slide View.
- Double click the image you have inserted to bring up the Format tab.
- Select or de-select Zoom Picture.
- Save and close your question.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400197946.27/warc/CC-MAIN-20200920094130-20200920124130-00407.warc.gz
|
CC-MAIN-2020-40
| 332
| 6
|
https://ithicos.com/documentation/upgrading-from-previous-versions-v3.html
|
code
|
Customers whose support and maintenance is up-to-date are eligible to upgrade from an older version of the software to a newer version. However, during the installation process, the installer programs for Directory Update, Directory Manager, Directory Search, and Directory Password do not provide a direct upgrade path from older versions.
This article applies to:
Almost all configuration and customization work is performed via XML files. The v3.x product family made substantial changes to some of these files. We provide an XML updater but if your current version of the software is more than one or two versions behind, the converter utility may not be able to update the files. In that case, you will need to re-customize your XML files using the new XML files as your template.
Updating to a newer version usually requires a new license key. Contact support @ ithicos.com to determine if you are eligible for updates.
In the v3.x versions of our products, we made a couple of significant changes to the DirectorySettings.XML file. Below is an example of the Office name attribute from earlier versions:
<office label="Office" type="dropdown" visible="true" editable="true">
  <value>Office 1</value>
  <value>Office 2</value>
</office>
Now, let's take a look at the new Office attribute. The opening and closing tag is no longer the field name but the word field. Next is the id property; this is required and used internally by our software. The field id is used in the AppSettings.XML, SubSettings.XML, and AddressSettings.XML files. The value we use is the same as the field name in previous versions. Do not change or localize the id value. Next is the attribute property; this maps to the LDAP attribute name for this field in Active Directory. Finally, the maxLength property specifies the maximum number of characters that the Active Directory attribute can hold.
<field id="office" label="Office" attribute="physicalDeliveryOfficename" visible="true" editable="true" type="dropdown" maxLength="128">
  <value>Office 1</value>
  <value>Office 2</value>
</field>
You cannot retain older versions of the DirectorySettings.XML, AppSettings.XML, or PasswordSettings.XML files, nor can you retain the dll, css, ascx, or aspx files. You must use the new versions of these files or use the XML converter.
The upgrade from an older version to a newer version is essentially an uninstall and reinstall. The following is an example for upgrading Directory Update (v2.6 and later), but these steps can be used for Directory Manager v2.3.
You may have noticed a theme in those steps. Each step of the way, test your update to ensure that the software is still working. This will save you some time trying to figure out where to start solving an update problem.
If you run into problems during the installation, we have troubleshooting guides for each supported operating system on the TechNotes Support page.
We have introduced a new utility called Settings Updater. It reads your old DirectorySettings.XML and AppSettings.XML files and moves your original configuration settings to the new XML files. This utility works best when converting from Directory Update v2.6 / v2.7 and Directory Manager v2.3 / v2.4. It may not work with older versions. The Settings Updater is found in the .\Configuration folder and is named SettingsUpdater.exe. Here is an example of how to update the DirectorySettings.XML file:
The Settings Updater may not work in the following situations:
The XML files from older versions of Directory Update (v2.5 and earlier) and Directory Manager v1.6 and earlier are not compatible with newer versions of Directory Update. You must completely remove the old version and customize the XML files that come with the new version.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371799447.70/warc/CC-MAIN-20200407121105-20200407151605-00455.warc.gz
|
CC-MAIN-2020-16
| 3,748
| 15
|
https://www.physicsforums.com/threads/netbeans-jtable-illegal-forward-reference.883219/
|
code
|
Hi, I have created a table using NetBeans. Then I used the Properties -> Model option for the table to insert rows and give column names using the NetBeans framework. Now I am trying to create an instance variable of DefaultTableModel in the application class: DefaultTableModel model = (DefaultTableModel) figTable.getModel(); But I am getting the following error: "illegal forward reference": Move initializer to constructor. Somebody please guide me. Zulfi.
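This error usually means the field initializer runs before the NetBeans-generated figTable field has been declared and created. A sketch of the usual fix, moving the assignment into the constructor after initComponents() (the class name and the stand-in initComponents() body here are illustrative assumptions, not taken from the original project):
import javax.swing.JTable;
import javax.swing.table.DefaultTableModel;

public class FigureFrame extends javax.swing.JFrame {
    private JTable figTable;          // NetBeans normally generates this field lower in the class
    private DefaultTableModel model;  // declare without an initializer

    public FigureFrame() {
        initComponents();                                  // figTable exists after this call
        model = (DefaultTableModel) figTable.getModel();   // safe to initialize here
    }

    private void initComponents() {
        // stand-in for the NetBeans-generated method that builds the table
        figTable = new JTable(new DefaultTableModel(new Object[]{"Fig", "Value"}, 0));
    }
}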
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593051.79/warc/CC-MAIN-20180722061341-20180722081341-00500.warc.gz
|
CC-MAIN-2018-30
| 451
| 1
|
https://www.i-programmer.info/bookreviews/21-database/5039-sql-and-relational-theory.html
|
code
|
|SQL and Relational Theory|
Author: C.J. Date
Subtitled “How to Write Accurate SQL Code”, this book is part of a “Theory in Practice” series.
Here one database legend writes about another one. This is more than a book review; it's part history, part theory, and a thoroughly interesting read (Ed)
Chris Date has produced a collection of books on RDBMS and SQL from various publishers over several decades. His “An Introduction to Database Systems” (ISBN: 978-0321197849, Addison-Wesley, 2004) is now in its eighth edition and it was the standard college textbook for years. His claim to fame in the RDBMS world is that he worked with and then partnered with Ted Codd to create a consulting company (Codd & Date) for many years.
Going back to the early days of RDBMS, when there was not so much internet, we had these things made out of paper called magazines. In particular, there were newsstand computer magazines devoted to the “new” exciting topic of databases. They were DBMS from M&T Publishing and DATABASE PROGRAMMING & DESIGN from Miller-Freeman. The publishing industry is volatile and there are buyouts and cancellations. Thanks to something happening to a parent company in Germany, both magazines wound up belonging to Miller-Freeman.
So for a few years, Chris Date and I each had columns in a different magazine from the same publisher! Chris would write a piece on topic X and I would respond the next month with an anti-X piece. If you are really old or are a radio nostalgia buff, you will remember the Jack Benny & Fred Allen mock feud. They sniped at each other back and forth on their respective radio shows and boosted the audience for both shows.
We did the same thing; people had to buy both magazines to get the full story. Without that incentive, would you have bought two separate magazines on the same topic? Perhaps the best part of the series was "Dueling Medians"; each of us would offer a solution for finding the median in SQL, the other would reply with another approach, and other people joined in. I gave the various solutions in my SQL for Smarties.
Chris Date collected his columns and some other material into a series of books for Addison-Wesley in 1986, 1990, 1992 and 1994, then a collection from Apress in 2006 of articles written after the magazine columns were gone.
To this day, I still get asked if I hate Chris Date. Of course not! I buy and read every one of his books. But we do disagree on technical issues. The super short version is that I am the great defender of SQL and data standards; Chris is the defender of a Tutorial D and his school of Relational Theory. I am more hands-on and Chris is more theory.
The bad news is that a large amount of the discussion in this book is about how SQL does not subscribe to the Date Relational Model, and much of the code is in Tutorial D. If you are not familiar with Tutorial D, it is a relational programming language that is directly based on the relational calculus. The reader has to learn enough Tutorial D to read the comparisons between SQL and Tutorial D. Date uses his famous Suppliers and Parts database for the examples. He does not spend a lot of time on the DDL and moves to the DML. But 80-95% of the work in SQL is done in the DDL, not the DML. And his examples are done with very simple code at the SQL-92 level. Let me be more specific:
The Parts table P (I will get to the DDL for it shortly) gives the weight of a part in pounds and we want it in grams. The Tutorial D version is:
EXTEND P ADD (weight *454.0 AS gm_wt)
The SQL is:
SELECT P.*, (weight *454.0) AS gm_wt FROM P;
Now let's go to the next step. Write a query to give us all parts with a weight greater than 7000.0 gm. The Tutorial D version Date gives is:
((EXTEND P ADD
The SQL he gives is:
SELECT pno, (weight *454.0) AS gm_wt
His point is that you have to re-use the computation in the WHERE clause. But that is not the case; it can be done with a derived table, or CTE if you want to avoid using a VIEW:
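A sketch of that derived-table form (Gram_Parts is an illustrative name; P is assumed to have pno and weight columns):
SELECT pno, gm_wt
  FROM (SELECT P.*, (weight * 454.0) AS gm_wt
          FROM P) AS Gram_Parts
 WHERE gm_wt > 7000.0;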
This is a direct translation of the first query into SQL. The inner SELECT is a derived table that mimics the function of the EXTEND in Tutorial D. Going further, an SQL programmer would probably say to himself, “I am going to need to do this conversion in a lot of places” and he then does the computation in the DDL with a VIEW or with a computed column:
CREATE VIEW Metric_Parts
Alternatively, the computed column will act like a VIEW; the syntax is just a little different:
CREATE TABLE Metric_Parts
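A sketch of what such a view and computed column might look like (columns trimmed for illustration; computed-column syntax varies by product, and the GENERATED ALWAYS AS spelling shown here is only one of them):
CREATE VIEW Metric_Parts (pno, weight, gm_wt)
AS SELECT pno, weight, (weight * 454.0) AS gm_wt
     FROM P;

CREATE TABLE Metric_Parts
(pno CHAR(6) NOT NULL PRIMARY KEY,
 weight DECIMAL(8,2),
 gm_wt DECIMAL(10,2) GENERATED ALWAYS AS (weight * 454.0));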
In fairness, Date also has complaints about Tutorial D because it lacks a Relational Division, and he introduces his DIVIDEBY operator from his previous books. Unfortunately, Relational Division comes in many flavors: Codd's original division, Todd's division, with and without remainders, and probably others.
Some statements are incorrect. In Chapter 8 on constraints, on page 100, he states that “Transition constraints aren't currently supported in either Tutorial D or SQL (other than procedurally).” His discussion uses a transition from “never married” to “married”, after which you cannot go back to a status of “never married” again.
I published the DDL code for a state transition constraint in an article entitled "Constraint Yourself" in Simple Talk. I used (born, married, divorced, dead) as my status values.
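The general idea is an auxiliary table of legal transitions referenced by a compound FOREIGN KEY; a sketch of that approach with illustrative names (not necessarily the exact DDL from the article):
CREATE TABLE Legal_State_Changes
(previous_state VARCHAR(10) NOT NULL,
 current_state VARCHAR(10) NOT NULL,
 PRIMARY KEY (previous_state, current_state));

INSERT INTO Legal_State_Changes
VALUES ('born', 'born'), ('born', 'married'), ('married', 'divorced'),
       ('divorced', 'married'), ('born', 'dead'), ('married', 'dead'), ('divorced', 'dead');

CREATE TABLE People
(person_id INTEGER NOT NULL PRIMARY KEY,
 previous_state VARCHAR(10) NOT NULL,
 current_state VARCHAR(10) NOT NULL,
 -- a row can only hold a (previous_state, current_state) pair listed above,
 -- so a status can never move back to 'never married'
 FOREIGN KEY (previous_state, current_state)
   REFERENCES Legal_State_Changes (previous_state, current_state));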
In a footnote, Date says:
“The semantics of WITH LOCAL CHECK OPTION are far too baroque to be spelled out in detail here. In any case, it's hard to see why anyone would ever want such semantics; indeed, it's hard to resist the suspicion that this alternative was included in the standard for no other reason than to allow certain flawed implementations, extant at the time, to be able to claim conformance.”
ANSI tries to get all of the membership to agree on a common abstract model of how SQL works. We then create features on that model. By now, you should know that the clauses of a basic SELECT.. FROM.. WHERE.. GROUP BY.. HAVING.. statement are evaluated starting with the FROM clause and ending with the SELECT clause. We had to work out those rules in the committee because not all products did it that way. At one point, the GROUP BY implementations either put the NULLs in one group or put each NULL in its own group. Sybase did the ALL() and ANY() predicates wrong. Oracle had to add VARCHAR2(n) because they got VARCHAR(n) wrong; Microsoft added DATETIME2 to implement the ANSI Standard TIMESTAMP.
Did you notice that *= is long gone from products? The truth is that when the standards change, the vendors change their products, not the other way around. As the standards have progressed, we have fewer and fewer “implementation defined” features.
All that said, yes, the WITH [LOCAL | CASCADE] CHECK OPTION is baroque when you nest VIEWs inside each other. But it can be very powerful and enforce complex relationships that would otherwise have to be done with triggers or worse. It is how you can express multi-table constraints in SQL when you do not have a CREATE ASSERTION statement.
Chapter 11 is "Using Logic to Formulate Expressions", which gives a quick introduction to two-valued predicate logic and quantifiers. The explanations are done with Tutorial D, then translated into SQL. That is confusing. Tutorial D is like classic (i.e. NULL-free) predicate logic, but it is still another language to learn. And SQL does have NULLs, so we need to consider them from the start. This is one reason that minimal Netiquette on SQL Forums requires that you post DDL even for the simplest SQL problems.
For example, Date gives one of his classic tables:
CREATE TABLE Parts
Please notice that the weight is NULL-able, but the first sample data is like this:
INSERT INTO Parts
Given the problem “find the parts that have a weight that is different from the weight of any part in Paris”:
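A NOT IN() formulation of that problem, as a sketch (assuming Date's usual P table with pno, weight, and city columns):
SELECT pno
  FROM P
 WHERE weight NOT IN (SELECT weight
                        FROM P
                       WHERE city = 'Paris');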
produces the same results, but only if there are no NULL weights in the table.
The EXISTS() predicate is always TRUE or FALSE, but the IN() predicate is shorthand for a chain of OR-ed predicates. The IN() predicate can return UNKNOWN if there are NULLs. We would need to get rid of the NULLs:
And we now have the "IS [NOT] DISTINCT FROM" comparison operator, which treats NULLs as if they are equal:
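Sketches of both fixes, again assuming the P table with pno, weight, and city (note that IS [NOT] DISTINCT FROM is not available in every product):
SELECT pno
  FROM P
 WHERE weight NOT IN (SELECT weight
                        FROM P
                       WHERE city = 'Paris'
                         AND weight IS NOT NULL);

SELECT P1.pno
  FROM P AS P1
 WHERE NOT EXISTS (SELECT *
                     FROM P AS P2
                    WHERE P2.city = 'Paris'
                      AND P1.weight IS NOT DISTINCT FROM P2.weight);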
In conclusion, I do not feel the book lived up to its title. Someone trying to improve his SQL or find a systematic approach to constructing a query has to first learn predicate logic and Tutorial D. Date's dislike of SQL shows up everywhere in the book; he was looking for ways to make SQL look bad. Much of his SQL code is dated and fails to use newer features.
While you expect some repetition, the material is a re-arrangement of his older material without adding anything new. If you have not read Date's other books, then this might not be a problem.
I feel that a better approach would have been to show dangerous or shoddy SQL, demonstrate the problems, explain the math, relational algebra and logic that was ignored and then solve the problems with better SQL. The “Paris-weights” example I showed would be the start of such a detailed analysis. The reader would not be confused by Tutorial D code and would have come away with a better understanding for the limits and power of SQL.
|Last Updated ( Monday, 14 January 2013 )|
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646937.1/warc/CC-MAIN-20230531150014-20230531180014-00376.warc.gz
|
CC-MAIN-2023-23
| 9,250
| 45
|
https://feedback.mountaineers.org/forums/273688-general-feedback/suggestions/18604159-make-it-so-you-can-t-close-a-trip-without-trip-res
|
code
|
Make it so you can't close a trip without trip results GH2508
Right now, if you are in the team roster you can't close the trip without giving a status like Successful/Cancelled etc.; however, you can do it via the yellow admin bar. We don't want trips closed without knowing how they went, so it'd be great to get an error notification when you try to do this that says "Trip result must be entered on the roster before activity can be closed."
Completed Aug 2022.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816942.33/warc/CC-MAIN-20240415045222-20240415075222-00314.warc.gz
|
CC-MAIN-2024-18
| 461
| 3
|
https://outhistory.org/exhibits/show/afam-timeline/timeline
|
code
|
African American LGBTQ+ U.S. Timeline: 1912-present
Use this link to access the African American LGBTQ Timeline Bibliography.
Our aim is to create a thorough representation of queer history that pays attention to the many conversations about what sexuality has meant in various communities across time. These timelines are a work in progress: if you would like to contribute to this project as a teacher, a student, or on your own, please fill out this form that requests additional references, including pictures, with as full citations as possible. For any additional questions, please email us at firstname.lastname@example.org and put "Timeline" in the subject line.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711045.18/warc/CC-MAIN-20221205200634-20221205230634-00419.warc.gz
|
CC-MAIN-2022-49
| 669
| 3
|
https://www.biostars.org/p/368217/
|
code
|
Hi, I have a weird question. I'm looking for software which not only merges the paired reads but also writes them to a new file.
When a program matches a pair of reads, it writes them to the output fastq (or fasta), but you don't have the option of knowing which reads it paired. I've checked some of them, but mostly they create a file with the unpaired reads as the set of discarded ones. I've been trying to find one which would give an output like this:
SAMPLE_MATCHED_FORWARD.fastq SAMPLE_MATCHED_REVERSE.fastq SAMPLE_JOINED.fastq
The first two files would contain the reads that are going to be merged and that have already been matched with their mate. Is there a software tool with an option like that? I know it sounds weird, but it could help me separate my sequences of interest from a pool containing a lot of unwanted sequences from different sources.
thanks for your time.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573385.29/warc/CC-MAIN-20190918234431-20190919020431-00516.warc.gz
|
CC-MAIN-2019-39
| 853
| 5
|
https://codematcher.com/questions/php-5-4-call-time-pass-by-reference-easy-fix-available
|
code
|
Is there any way to easily fix this issue or do I really need to rewrite all the legacy code?
PHP Fatal error: Call-time pass-by-reference has been removed in ... on line 30
This happens everywhere as variables are passed into functions as references throughout the code.
You should declare the pass-by-reference in the function definition, not at the call site. Since PHP started showing the deprecation errors in version 5.3, I would say it would be a good idea to rewrite the code.
From the documentation:
For example, instead of using:
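The general shape of the change looks like this (a sketch with illustrative names, not the manual's exact example):
// Removed in PHP 5.4: taking the reference at the call site
// myFunction(&$result);        // Fatal error: Call-time pass-by-reference has been removed

// Declare the reference in the function definition instead:
function myFunction(&$result) {
    $result = 'changed inside the function';
}

$result = 'original';
myFunction($result);            // call without the &
echo $result;                   // prints "changed inside the function"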
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00482.warc.gz
|
CC-MAIN-2023-14
| 542
| 6
|
https://blog.krum.io/replacing-docker-desktop-with-rancher-desktop/
|
code
|
If you haven't already heard, Docker Desktop isn't free anymore for many users. But that's ok! As the container tooling market has matured, many orgs have been removing branded Docker from their container tooling and pipelines in favor of open-source options.
Unfortunately, installing and running many of those options requires some knowledge or confidence in operating system management - whether Linux, Windows, Mac, etc. For beginners, Docker Desktop is still the easiest installation and management option. Until today.
Rancher Desktop is a true open-source offering from SUSE, an organization investing heavily in bringing Operators and Developers together on one toolchain. (Seriously, consider a free registration for the SUSE Community; there is tons of useful content, constantly. Here is an invitation: https://community.suse.com/share/NysvS2xkB8JCK0Jb?utm_source=manual)
- Out of the box Kubernetes
- Kubernetes version switching
- Container Runtimes - switch between container runtimes (containerd or dockerd)
- Docker Compose support (kind-of, we'll cover that)
- Port forwarding UI
- Use the same docker commands you're familiar with
Let's dig in
I'm going to walk through a migration for Windows + WSL environments, from Docker Desktop to Rancher Desktop, while keeping docker-compose. For specific instructions for other operating systems, please refer back to the Rancher Desktop Docs
1. Uninstall Docker Desktop
2. Download Rancher Desktop from https://rancherdesktop.io/
3. Install and run Rancher Desktop
4. Choose a runtime (you can change this later!). For interim compatibility, I chose dockerd.
5. Wait for the Rancher Desktop to initialize for the first time
6. Select your WSL distribution to enable cross-environment support:
It looks like I have a problem! I have already set up kubectl and kubernetes on this system. Rancher Desktop will attempt to merge your kubeconfig, but may have trouble. The shortcut here is to remove or rename the <home>/.kube/config file and merge it yourself after completing setup.
7. Check the box next to your chosen distribution, and Rancher Desktop will install the appropriate links with the WSL instance. Once complete, the checkbox will persist.
8. Test your installation. In WSL, run
docker ps 🚀
9. Update! Since the Rancher Desktop 1.1 release, these steps are no longer necessary for installing Docker Compose; it is now installed automatically.
Docker Desktop comes with a tool called Docker Compose, which is one of the most common container management tools used by developers. This may become bundled in the future, with podman-compose or another tool, but it is easy enough to install docker-compose yourself.
a. In the WSL instance, download the docker-compose binary
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
b. Next, apply permissions
sudo chmod +x /usr/local/bin/docker-compose
c. Pro tip: Install command line completion
sudo curl -L https://raw.githubusercontent.com/docker/compose/1.29.2/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
d. Test version
Test an application
1. Clone the repository
git clone https://github.com/calendso/docker.git calendso-docker && cd calendso-docker
2. Create a .env file from the example
cp .env.example .env
3. Build the images (see the command sketch after this list)
4. Run the application
5. Access the application with localhost, or using the service name as specified in the docker-compose.yml
6. Start developing!
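Assuming the docker-compose.yml that ships in the cloned calendso-docker repository, steps 3 and 4 would typically be the standard Compose invocations:
docker-compose --version   # sanity check from step (d) earlier
docker-compose build       # step 3: build the images
docker-compose up -d       # step 4: run the application in the background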
Rancher Desktop gives us a simplified gateway into application development on Kubernetes. From here, we can begin to transition into more advanced subjects from load balancing and ingress, through workload right-sizing and multi-application deployment. In addition, we now have a sandbox in which we can install Rancher, and move into treating our local machine as a remote cloud environment.
Now that we have kubectl, helm, docker, and docker-compose all in one place and at our disposal, we can now start looking into migrating into a kubernetes workflow using Kompose.
Consider deploying your favorite kubernetes orchestrators locally (ours is SUSE Rancher!). Check out this great article as a starting point: https://itnext.io/kubernetes-rancher-cluster-manager-2-6-on-your-macos-laptop-with-k3d-k3s-in-5-min-8acdb94f3376
Also, please subscribe to us in the field below!
My docker uninstall did not appear to properly remove /var/lib/docker from my WSL. I needed to rm -rf /var/lib/docker and restart Rancher Desktop in order to get past a /usr/bin/docker-compose: Input/output error.
Other feature issues to watch are here:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100535.26/warc/CC-MAIN-20231204214708-20231205004708-00806.warc.gz
|
CC-MAIN-2023-50
| 4,627
| 48
|
https://www.whoishostingthis.com/compare/typo3/resources/
|
code
|
TYPO3 Intro and Resources
TYPO3 is a free and open source content management system (CMS), licensed under the General Public License 2. TYPO3 provides a powerful set of tools designed to manage large amounts of data — making it easy to develop websites and web applications. The software has many useful functions for business, with scalable features for:
- Website deployment
- Website management
One man's passion for programming and sharing is what sparked the development of TYPO3. While it may not be the most well-known CMS on the planet, the project has been slowly growing in the shadows like a dark unicorn on a magic truffle hunt. What started out as a lone geek's pet project became a worldwide phenomenon.
In 1997, a developer from Denmark named Kasper Skårhøj began the TYPO3 project to scratch an "itch." He saw the need for software like TYPO3 to help businesses maintain their websites. Skårhøj's decision in 1999 to keep the software free and available to everyone may have been the most significant factor shaping the future of the CMS. Apparently, hundreds of thousands of other people had that same itch.
Although Skårhøj is the face of TYPO3, it wouldn't have been possible without such a robust community of volunteers and loyal developers. Since 2007, Skårhøj has stepped away from the project to pursue his other interests. The future for the TYPO3 project looks promising. Widespread use of TYPO3 started in Europe, Germany, and Thailand. But over the years, active users throughout the world have dedicated their talents to the TYPO3 project.
- History of TYPO3: checkout this official timeline of milestones, releases, and major events related to the development of TYPO3.
- Kasper's Korner: hear it straight from the unicorn's mouth. Skårhøj gives some insight into the motivation behind his work and gives a personal account of the early days of the project.
- Case Studies: these live examples show the power and flexibility of the platform.
- TYPO3 YouTube: this is TYPO3's official YouTube channel. You will find useful introduction videos and tutorials here.
How Does it Work?
TYPO3 allows you to manage the look of your website independently from the content elements. Separating the presentation layer (layout design, colors, etc.) from content (i.e., text, images, video) may be a standard web development paradigm nowadays, but this wasn't always the case. A good CMS lightens the burden of tasks like updates.
TYPO3 is free and open source, and it requires a LAMP/LEMP software stack. Written in PHP, TYPO3 can run on virtually any modern operating system (Unix, Windows, Mac OS X). It connects to numerous data sources and includes support for Apache, IIS, and Nginx.
The extendible core of TYPO3 is huge, which eliminates the need to download tons of extra software. Features you get out-of-the-box include:
- WYSIWYG editor
- Built-in versioning functionality
- Intuitive user interface
- Built-in support for multi-lingual sites
- Granular admin control
- User management options (roles and permissions)
- Front-end editing
- Well documented APIs.
There are many more features included in the core and through extensions. In the world of TYPO3, "extensions" are programs that extend functionality. Use them to do anything that isn't included in the core.
- Complete Feature List: this is an exhaustive list of all the features available.
- The TYPO3 Demo: take it for a spin. This demo gives you access to a live installation of the CMS. Take a tour of the TYPO3 front-end or backend.
- Introduction to TYPO3: this video is a bit dated, but it still provides a good introduction to the TYPO3 CMS.
- Download TYPO3: download the latest stable release of the TYPO3 CMS for free.
- TYPO3 for Cloud: download one of these installers to run on a cloud based hosting setup.
- TYPO3 Local Installer: this is a complete localhost setup to run on Mac, Windows, or Linux. These are all older versions. A quick and easy way to test offline.
- Older TYPO3 Versions: here you can find legacy releases. TYPO3 has a great track record when it comes to backward compatibility.
Getting Started — TYPO3 Guides and Tutorials
The documentation for TYPO3 is well written and extremely large. At times, it can be hard to sift through documentation, but if you look in the right places you can find answers to your problems. Although the documentation is updated regularly, the best way to understand how TYPO3 works is to get your hands dirty. Below are some useful links to help you get started.
- Tutorials: beginners start here. This is where you can learn the basics and get tips on how to build up your chops.
- TYPO3 Documentation: this is the TYPO3 bible. You will find answers to many problems when you consult the documentation.
- Guides: documentation for installation, workflow, front-end localization, and rendering.
- Extension Repository: here you will find officially approved extensions for TYPO3. Find extensions to help you build templates, fight spam, add shopping carts, and so on.
There are lots of opportunities for those interested in contributing to the TYPO3 project. The global TYPO3 community continues to drive the evolution of the CMS. Participating is a smart way to learn and connect with a true open source community.
- Cool Stuff for Nerds: special templates for advanced REST users.
- TYPO3/Surf: learn about Surf, a tool to automate deployment.
- Fluid: power users can do some heavy lifting with this templating engine for TYPO3.
- Certification: a TYPO3 CMS certification holds some weight in the IT industry. Learn about the official certification programs.
- API Documentation: developers can use this as a reference guide when working with TYPO3 APIs.
- Latest News: stay up to date with happenings in the TYPO3 universe.
- TYPO3 Community: see the community goals and values, and learn how to get involved.
Putting It Into Practice
Now that you have a good idea of what TYPO3 is, you can put that knowledge to good use. So, go on — manage data in a more effective way. And good luck!
Further Reading and Resources
We have more guides, tutorials, and infographics related to website development and management:
- Google Rankings: Understand, Diagnose, and Fix: what good is a website if no one knows about it? Learn all about getting the Google ranking you deserve.
- The Ultimate List of Webmaster Tools A-Z: find all the tools you need to make managing your site easy.
Ultimate Guide to Web Hosting
Check out our Ultimate Guide to Web Hosting. It will explain everything you need to know in order to make an informed choice.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803944.17/warc/CC-MAIN-20171117204606-20171117224606-00153.warc.gz
|
CC-MAIN-2017-47
| 6,603
| 54
|
https://issuetracker.unity3d.com/issues/android-wrong-behavior-with-external-slash-internal-application-dot-persistentdatapath
|
code
|
Fixed in 5.6.0
[Android] Wrong behavior with external/internal Application.persistentDataPath
Steps to reproduce:
1) Download attached project "FilePermissions.zip" and open in Unity
2) In Player settings, make sure Write Permission is set to "External(SD Card)"
3) Build and run project on a device
Expected result: /storage/sdcard0/Android/data/com.ea.gp.fp/files/
Actual result: /storage/emulated/0/Android/data/com.ea.gp.fp/files/
Note: It looks like Application.persistentDataPath returns the wrong path
Reproduced with: 5.3.7p2, 5.4.3p1, 5.5.0f3
Resolution: Both of these paths are symlinks and are the same. They point to the same place in internal storage, so it does not matter which of them is returned.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347423915.42/warc/CC-MAIN-20200602064854-20200602094854-00511.warc.gz
|
CC-MAIN-2020-24
| 1,349
| 19
|
https://embedded.eecs.berkeley.edu/seminar/forum/8.html
|
code
|
Fall 2012 Seminars
Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS)Fall 2012
The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar (DREAMS) occurs weekly on Tuesdays from 4.10-5.00 p.m. in the DOP Center Classroom, 540 Cory Hall or in the open area in the DOP Center.
The Design of Robotics and Embedded systems, Analysis, and Modeling Seminar topics are announced to the DREAMS list, which includes the chessworkshop workgroup, which includes the chesslocal workgroup.
Fast-Lipschitz OptimizationSep 11, 2012, 4.10-5pm, Carlo Fischione, KTH Royal Institute of Technology, Sweden
Abstract: In many optimization problems, decision variables must be computed by algorithms that need to be fast, simple, and robust to errors and noises, both in a centralized and in a distributed set-up. This occurs, for example, in contract-based design, sensor networks, smart grids, water distribution, and vehicular networks. In this seminar, a new simple optimization theory, named Fast-Lipschitz optimization, is presented for a novel class of both convex and non-convex scalar and multi-objective optimization problems that are pervasive in the systems mentioned above. Fast-Lipschitz optimization can be applied to both centralized and distributed optimization. Fast-Lipschitz optimization solvers exhibit a low computational and communication complexity when compared to existing solution methods. In particular, compared to traditional Lagrangian methods, which often converge linearly, the convergence time of centralized Fast-Lipschitz algorithms is superlinear. Distributed Fast-Lipschitz algorithms converge fast, as opposed to traditional Lagrangian decomposition and parallelization methods, which generally converge slowly and at the price of many message passings among the nodes. In both cases, the computational complexity is much lower than that of traditional Lagrangian methods. Fast-Lipschitz optimization is then illustrated by distributed estimation and detection applications in wireless sensor networks.
Bio: Dr. Carlo Fischione is a tenured Associate Professor at KTH Royal Institute of Technology, Electrical Engineering and ACCESS Linnaeus Center, Automatic Control Lab, Stockholm, Sweden. He received the Ph.D. degree in Electrical and Information Engineering in May 2005 from University of L'Aquila, Italy, and the Dr.Eng. degree in Electronic Engineering (Laurea, Summa cum Laude, 5/5 years) in April 2001 from the same University. He held research positions at University of California at Berkeley, Berkeley, CA (2004-2005, Visiting Scholar, and 2007-2008, Research Associate) and Royal Institute of Technology, Stockholm, Sweden (2005-2007, Research Associate). His research interests include optimization and parallel computation with applications to wireless sensor networks, networked control systems, and wireless networks. He has co-authored over 80 publications, including a book, book chapters, international journals and conferences, and an international patent. He received numerous awards, including the best paper award from the IEEE Transactions on Industrial Informatics of 2007, the best paper awards at the IEEE International Conference on Mobile Ad-hoc and Sensor System 05 and 09 (IEEE MASS 2005 and IEEE MASS 2009), the Best Business Idea award from VentureCup East Sweden, 2010, the "Ferdinando Filauro" award from University of L'Aquila, Italy, 2003, the "Higher Education" award from Abruzzo Region Government, Italy, 2004, and the Junior Research award from Swedish Research Council, 2007, the Silver Ear of Wheat award in history from the Municipality of Tornimparte, Italy, 2012. He has chaired or served as a technical member of program committees of several international conferences and is serving as referee for technical journals. Meanwhile, he also has offered his advice as a consultant to numerous technology companies such as Berkeley Wireless Sensor Network Lab, Ericsson Research, Synopsys, and United Technology Research Center. He is Member of IEEE (the Institute of Electrical and Electronic Engineers), SIAM (the Society of Industrial and Applied Mathematics), and Ordinary Member of DASP (the academy of history Deputazione Abruzzese di Storia Patria).
User interface modelling - Model-based UI designSep 18, 2012, 4.10-5pm, Hallvard Traetteberg, Norwegian Univ. of Science and Technology (NTNU), Trondheim, Norway
Abstract: User interface modelling is an established cross-disciplinary field, combining elements from Human-Computer Interaction (HCI) and Software Engineering (SE), Information Systems (IS). This talk will present a conceptual overview of the field based on a classification framework developed in my thesis. Important work will be discussed in the context of this framework. Some time will be devoted to my own dialog modelling language Diamodl, since it is (coincidentally) based on an actor model similar to Ptolemy's (which is why I'm here), and how I believe it can be combined in the context of internet-based systems.
Bio: Hallvard Traetteberg is an Associate Professor at the Norwegian Univ. of Science and Technology (NTNU) in Trondheim, Norway, with a PhD in Information Systems. His research interests are model driven engineering in general, with a focus on user interface modelling and model-based user interface design. He has developed a dialog modelling language called Diamodl and has experience building both graphical and textual syntaxes for it, as well as a runtime, mostly based on Eclipse.
Enclosing Hybrid BehaviorOct 17, 2012, 4.10-5pm, Walid Taha, Halmstad University, Sweden, and Rice University, USA
(Joint work with Michal Konecny, Jan Durac, and Aaron Ames)
Rigorous simulation of hybrid systems relies critically on having a semantics that constructs enclosures. Edalat and Pattinson's work on the domain-theoretic semantics of hybrid systems almost provides what is needed, with two exceptions. First, domain-theoretic methods leave many operational concerns implicit. As a result, the feasibility of practical implementations is not obvious. For example, their semantics appears to rely on repeated interval splitting for state space variables. This can lead to exponential blow up in the cost of the computation. Second, common and even simple hybrid systems exhibit Zeno behaviors. Such behaviors are a practical impediment because they make simulators loop indefinitely. This is in part due to the fact that existing semantics for hybrid systems generally assume that the system is non-Zeno.
Bio: Walid Taha is a Professor of Computer Science at Halmstad University. He is interested in the design, semantics, and implementation of programming and hardware description languages. His current research focus is on modeling, simulation, and verification of cyberphysical systems, and in particular the Acumen modeling language.
Time and Schedulability analysis of Stateflow modelsOct 23, 2012, 4.10-5pm, Marco Di Natale Scuola Superiore Sant'Anna of Pisa, Italy.
Model-based design of embedded systems using Synchronous Reactive (SR) models is among the best practices for software development in the automotive and aeronautics industry.
The correct implementation of an SR model must guarantee the synchronous assumption, that is, all the system reactions complete before the next event.
This assumption can be verified using schedulability analysis, but the analysis can be quite challenging when the system also consists of blocks implementing finite state machines, as in modern modeling tools like Simulink and SCADE.
Bio: Prof. Marco Di Natale is IEEE Senior member and Associate Professor at the Scuola Superiore Sant'Anna of Pisa, Italy, where he was Director of the Real-Time Systems (ReTiS) Lab. He received his PhD from Scuola Superiore Sant'Anna and was a visiting Researcher at the University of California, Berkeley in 2006 and 2008-2009, principal investigator for architecture exploration and selection at General Motors R&D in 2006 and 2007 and is currently visiting fellow for United Technologies Research. He's been a researcher in real-time and embedded systems for more than 15 years, author of more than 130 papers, winner of five best paper awards and two presentation awards. He is also member of the editorial board of the IEEE Transactions on Industrial Informatics and chair for the embedded systems track of the IEEE Industrial Electronics Society.
Beyond the Hill of Multicores lies the Valley of AcceleratorsOct 30, 2012, 4.10-5pm, Aviral Shrivastava, Arizona State University, USA
The power wall has resulted in a sharp turn in processor designs, and they irrevocably went multi-core. Multi-cores are good because they promise higher potential throughput (and never mind the actual performance of your applications). This is because the cores can be made simpler and run at lower voltage resulting in much more power-efficient operation. Even though the performance of single-core is much reduced, the total possible throughput of the system scales with the number of cores. However, the excitement of multi-core architectures will only last so long. This is not only because the benefits of voltage scaling will reduce with decreasing voltage, but also because after some point, making a core simpler will only be detrimental and may actually increase power-efficiency. What next! How do we further improve power-efficiency?
Prof. Aviral Shrivastava is Associate Professor in the School of Computing Informatics and Decision Systems Engineering at the Arizona State University, where he has established and heads the Compiler and Microarchitecture Labs (CML) (http://aviral.lab.asu.edu/). He received his Ph.D. and Masters in Information and Computer Science from University of California, Irvine, and bachelors in Computer Science and Engineering from Indian Institute of Technology, Delhi. He is a 2011 NSF CAREER Award Recipient, and recipient of 2012 Outstanding Junior Researcher in CSE at ASU.
Closing the loop with Medical Cyber-Physical SystemsNov 2, 2012, 2.10-3pm, Rahul Mangharam, University of Pennsylvania, USA
The design of bug-free and safe medical device software is challenging, especially in complex implantable devices that control and actuate organs whose response is not fully understood. Safety recalls of pacemakers and implantable cardioverter defibrillators between 1990 and 2000 affected over 600,000 devices. Of these, 200,000, or 41%, were due to firmware issues (i.e. software) that continue to increase in frequency. There is currently no formal methodology or open experimental platform to test and verify the correct operation of medical device software within the closed-loop context of the patient. In this talk I will describe our efforts to develop the foundations of modeling, synthesis and development of verified medical device software and systems from verified closed-loop models of the device and organs. The research spans both implantable medical devices such as cardiac pacemakers and physiological control systems such as drug infusion pumps which have multiple networked medical systems. In both cases, the devices are physically connected to the body and exert direct control over the physiology and safety of the patient. With the goal of developing a tool-chain for certifiable software for medical devices, I will walk through (a) formal modeling of the heart and pacemaker in timed automata, (b) verification of the closed-loop system, (c) automatic model translation from UPPAAL to Stateflow for simulation-based testing, and (d) automatic code generation for platform-level testing of the heart and real pacemakers.
Rahul Mangharam is the Stephen J Angello Chair and Assistant Professor in the Dept. of Electrical & Systems Engineering and Dept. of Computer & Information Science at the University of Pennsylvania. He directs the Real-Time and Embedded Systems Lab at Penn. His interests are in real-time scheduling algorithms for networked embedded systems with applications in energy-efficient buildings, automotive systems, medical devices and industrial control networks. His group has won several awards in IPSN 2012, RTAS 2102, World Embedded Programming Competition 2010, Honeywell Industrial Wireless Award 2011, Google Zeitgeist Award 2011, Intel Innovators Award 2012, Intel Early Faculty Honor 2012, NAE US Frontiers 2012, Accenture Innovation Jockeys 2012, etc.
Computing without ProcessorsNov 13, 2012, 4.10-5pm, Satnam Singh, Google, USA
Abstract: The duopoly of computing has up until now been delimited by drawing a line in the sand that defines the instruction set architecture as the hard division between software and hardware. On one side of this contract Intel improved the design of processors and on the other side of this line Microsoft developed ever more sophisticated software. This cozy relationship is now over as the distinction between hardware and software is blurred due to relentless pressure for performance and reduction in latency and energy consumption. Increasingly we will be forced to compute with architectures and machines which do not resemble regular processors with a fixed memory hierarchy based on heuristic caching schemes. Other ways to bake all that sand will include the evolution of GPUs and FPGAs to form heterogeneous computing resources which are much better suited to meeting our computing needs than racks of multicore processors. This presentation will highlight some of the programming challenges we face when trying to develop for heterogeneous architectures and a few promising lines of attack are identified.
Bio: Prof. Singh works in the Technical Infrastructure division of Google in Mountain View, California and focuses on the configuration management of Google's data-center services. Previously Prof. Singh worked on the design of heterogeneous systems at Microsoft Research in Cambridge UK and on parallel programming techniques at Microsoft's Developer Division in Redmond USA. He has also worked on re-configurable computing and formal verification at Xilinx in San Jose, California and as an academic at the University of Glasgow. He also currently holds a part-time position as the Chair of Reconfigurable Systems at the University of Birmingham.
T-CREST: Time-predictable Multi-Core Architecture for Embedded SystemsNov 16, 2012, 2.10-3pm, Martin Schoeberl, Technical University of Denmark
Abstract: The T-CREST project is developing a time-predictable system that will simplify the safety argument with respect to maximum execution time while striving to increase the performance with multicore processors. T-CREST looks at time-predictable solutions for processors, the memory hierarchy, the on-chip interconnect, and the compiler. T-CREST is a 3 year project, funded by the EC. It has just passed the first year. In this talk I will give an overview of the T-CREST project, the individual sub-projects, and present some early results on the on-chip interconnect and the processor research.
Bio: Martin Schoeberl is associate professor at the Technical University of Denmark, at the Department of Informatics and Mathematical Modelling. He completed his PhD at the Vienna University of Technology in 2005 and received the Habilitation in 2010. Martin Schoeberl's research focus is on time-predictable computer architectures and on Java for hard real-time systems. During his PhD studies he developed the time-predictable Java processor JOP, which is now in use in academia and in industrial projects. His research on time-predictable computer architectures is currently embedded in the EC funded project T-CREST.
Synchronous Control and State Machines in ModelicaNov 27, 2012, 4.10-5pm, Hilding Elmqvist, Dassault Systemes AB, Sweden
The scope of Modelica has been extended from a language primarily intended for physical systems modeling to modeling of complete systems by allowing the modeling of control systems and by enabling automatic code generation for embedded systems. Much focus has been given to safe constructs and intuitive and well-defined semantics.
Elmqvist's Ph.D. thesis from the Department of Automatic Control, Lund Institute of Technology contains the design of a novel object-oriented and equation based modelling language, Dymola, and algorithms for symbolic model manipulation.
Sensor fusion in dynamical systems - applications and research challengesDec 11, 2012, 4.10-5pm, Thomas Schon, Linkoping University, Sweden.
Abstract: Sensor fusion refers to the problem of computing state estimates using measurements from several different, often complementary, sensors. The strategy is explained and (perhaps more importantly) illustrated using four different industrial/research applications, very briefly introduced below. Guided partly by these applications we will highlight key directions for future research within the area of sensor fusion. Given that the number of available sensors is skyrocketing this technology is likely to become even more important in the future. The four applications are; 1. Real-time pose estimation and autonomous landing of the helicopter (using inertial sensors and a camera). 2. Pose estimation of a helicopter using an already existing map (a processed version of an aerial photograph of the operational area), inertial sensors and a camera. 3. Vehicle motion and road surface estimation (using inertial sensors, steering wheel sensor and an infrared camera). 4. Indoor pose estimation of a human body (using inertial sensors and ultra-wideband).
Bio: Thomas B. Schon is an Associate Professor with the Division of Automatic Control at Linkoping University (Linkoping, Sweden). He received the BSc degree in Business Administration and Economics in Jan. 2001, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001 and the PhD degree in Automatic Control in Feb. 2006, all from Linkoping University. He has held visiting positions with the University of Cambridge (UK) and the University of Newcastle (Australia). He is a Senior member of the IEEE. He received the best teacher award at the Institute of Technology, Linkoping University in 2009. Schon's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods. He is active within the fields of machine learning, signal processing and automatic control. He pursues both basic research and applied research, where the latter is typically carried out in collaboration with industry. More information about his research can be found on his home page: users.isy.liu.se/rt/schon
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891807660.32/warc/CC-MAIN-20180217185905-20180217205905-00211.warc.gz
|
CC-MAIN-2018-09
| 18,640
| 39
|
https://www.itcentralstation.com/users/it_user124704
|
code
|
I have been a PeopleSoft Administrator for over 10 years specifically focusing on application and tools Implementations, Upgrades, Support, Stress Testing, and Integration with other products.
Specialties: Experience with PeopleSoft 7.5, 8.0, 8.9, 9.0, 9.1 PeopleTools 7.x, 8.x.
Wide range of experience working on different database platforms including DB2 (OS390, UDB), SQL Server, and Oracle.
Experience on different OS platforms, including Windows, UNIX, and AIX.
Programming includes Lotus Notes Developer, Batch, Perl, and Shell Scripting. Worked with BEA WebLogic, Tuxedo, Foglight Experience Monitor (FXM), Foglight, Toad for DB2, Spotlight, and Quest Central software applications.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371620338.63/warc/CC-MAIN-20200406070848-20200406101348-00229.warc.gz
|
CC-MAIN-2020-16
| 683
| 5
|
https://www.cqu.edu.au/news/708504/artificial-intelligence-identifies-bird-calls
|
code
|
New results from researchers at CQUniversity Australia promise to improve the speed of bird identification in audio recordings from natural settings using powerful, computer-driven neural networks.
The research was led by PhD student Francisco Bravo Sanchez along with co-authors Professor Steven Moore, Dr Rahat Hossain and Dr Nathan English.
The use of autonomous recordings of animal sounds to detect species is a popular conservation tool, but it usually results in thousands of hours of raw audio that in the past needed to be listened to by a trained human ear.
Advances in hardware, software and signal processing allow computers to do this now with success (around 75 per cent accuracy), but it's still a laborious, processor-intensive and technologically complex process.
Current classification software utilises sound features extracted from the recording rather than the sound itself, with varying degrees of success.
Previously, the raw audio recordings were pre-processed and what were thought to be the important bits were selected out for further examination and identification.
Bravo Sanchez's work leap-frogs the pre-processing step and instead uses a convolutional neural network (CNN) to process the raw sound and decide what performs best to identify the bird species making calls on the recording, and then the CNN gets to it.
"Wildlife and computers have always been two of my passions and this PhD topic combines both of them," Bravo Sanchez explained.
"I have a biology background, but computing has always been an important skill in my profession. And while I'm not a computer expert, the open source software community makes it feasible for people like me to undertake this type of research."
In addition to eliminating any bias the pre-processing might introduce, Bravo Sanchez said the process of letting the 'machine do the heavy lifting' was about twice as fast and yielded results (70 per cent) similar in accuracy to traditional methods.
"It's the difference between watching someone with headphones tap out a beat and lip sync a song, or listening to the headphones yourself -- you'll figure out which song is playing more quickly the more directly connected to the music you are," Dr English further explained.
Bravo Sanchez said the research findings would offer 'a glimpse into a different way of processing animal sounds without relying on tools designed for human speech'.
"Automatically identifying species from autonomous recordings is a very useful conservation tool, but it still requires a lot of expertise. Our research shows that we can facilitate the task by reducing the number of steps and the choices required to process animal sounds."
The research also uses open-source software that is accessible to anyone in the world (with the programming skills to use it), and Bravo Sanchez has uploaded his code to GitHub so that others can use his work.
"We will be trying to improve our results in the future, but we are sharing our code so that others can experiment with their datasets in the hope that we all can come up with better solutions that would help wildlife conservation or the search for rare species."
The team hope this work can be used in medical and industrial settings that also use acoustic monitoring in day-to-day applications.
READ THE RESEARCH PAPER HERE.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.31/warc/CC-MAIN-20231206063543-20231206093543-00375.warc.gz
|
CC-MAIN-2023-50
| 3,313
| 17
|
https://jonathancrabbe.github.io/publication/car/
|
code
|
Concept-based explanations permit to understand the predictions of a deep neural network (DNN) through the lens of concepts specified by users. Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the DNN latent space. When this holds true, the concept can be represented by a concept activation vector (CAV) pointing in that direction. In this work, we propose to relax this assumption by allowing concept examples to be scattered across different clusters in the DNN latent space. Each concept is then represented by a region of the DNN latent space that includes these clusters and that we call concept activation region (CAR). To formalize this idea, we introduce an extension of the CAV formalism that is based on the kernel trick and support vector classifiers. This CAR formalism yields global concept-based explanations and local concept-based feature importance. We prove that CAR explanations built with radial kernels are invariant under latent space isometries. In this way, CAR assigns the same explanations to latent spaces that have the same geometry. We further demonstrate empirically that CARs offer (1) more accurate descriptions of how concepts are scattered in the DNN latent space; (2) global explanations that are closer to human concept annotations and (3) concept-based feature importance that meaningfully relate concepts with each other. Finally, we use CARs to show that DNNs can autonomously rediscover known scientific concepts, such as the prostate cancer grading system.
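The construction is straightforward to prototype: fit a support vector classifier with a radial (RBF) kernel on latent activations of concept-positive and concept-negative examples, and treat its decision region as the CAR. The sketch below uses synthetic latents and scikit-learn rather than the authors' released code; the latent dimension and sample counts are arbitrary.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
d = 32                                         # latent dimension (assumption)
latents_pos = rng.normal(+1.0, 1.0, (200, d))  # activations of concept examples
latents_neg = rng.normal(-1.0, 1.0, (200, d))  # activations of random examples

X = np.vstack([latents_pos, latents_neg])
y = np.array([1] * 200 + [0] * 200)

# The RBF-kernel SVC's positive decision region in latent space plays the
# role of the concept activation region; probabilities give a smooth score.
car = SVC(kernel="rbf", probability=True).fit(X, y)

z = rng.normal(0.8, 1.0, (1, d))               # latent code of a new input
print("concept score:", car.predict_proba(z)[0, 1])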
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00146.warc.gz
|
CC-MAIN-2024-10
| 1,549
| 1
|
https://forums.developer.nvidia.com/t/does-nvinferserver-support-custom-input-order/244328
|
code
|
This tensor order is not supported by nvinferserver directly; please see tensor_order in the nvinferserver documentation.
I suggest implementing this with nvdspreprocess + nvinferserver. Here is a sample: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-3d-action-recognition/deepstream_3d_action_recognition.cpp. Its tensor order is NCDHW; nvdspreprocess is used to generate the tensor data, and nvinferserver infers on this tensor directly.
If you are using DeepStream 6.2, this sample already supports nvinferserver; please refer to the README, especially here:
inference config file path 'triton-infer-config=config_triton_infer_primary_3d_action.txt'.
Starting with DeepStream 6.2, nvinferserver supports tensor meta input; please see input_tensor_from_meta in the plugin documentation.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657169.98/warc/CC-MAIN-20230610095459-20230610125459-00349.warc.gz
|
CC-MAIN-2023-23
| 757
| 5
|
http://www.tomshardware.com/forum/44369-42-need-asap-router-access-point-setup
|
code
|
I have a D-Link DIR-655 wireless router in the basement and want to use a Linksys BEFW11S4 wireless router as an access point upstairs to extend the range. I read a post that said I should connect the ethernet cable from the D-Link (downstairs, main network) to a LAN port of the Linksys (upstairs) and then run an ethernet cable from a Linksys LAN port to the computer upstairs. It also said I should use the same SSID as the D-Link, disable DHCP on the Linksys, and change the channel so that it isn't the same as the D-Link's, which I did.
The problem (one of them) is that if I plug the D-Link into the WAN port, I can still see the D-Link router admin page from the computer upstairs (connected to the Linksys), but if I switch the cable to a LAN port on the Linksys, I can't see the D-Link any more. Neither configuration will allow me to see the D-Link main network downstairs through the Linksys router.
The password on the D-Link is an alphanumeric word that all my devices are set up for, but the Linksys seems to force a computer-generated alphanumeric code based on the same password. Do I have to use the Linksys code for the D-Link and all devices?
They all actually use that passkey to generate the security code.
Leave the link LAN port to LAN port and do the final step that it sounds like you missed: give the AP a static IP address, both in the AP and in the router. Make it in the router's subnet but outside of the DHCP range, so if your router is 192.168.1.1, make the AP 192.168.1.2 and start the DHCP range in the router at 192.168.1.3.
If you ever need to get back into the AP to configure it more, just attach by wire and use the new static address to get to its configuration pages.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657132495.49/warc/CC-MAIN-20140914011212-00268-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
CC-MAIN-2014-41
| 1,711
| 6
|
https://www.737diysim.com/forum/need-help/b737-throttle-ver-3
|
code
|
I just printed the first Throttle knob (part 1) and followed the short guide:
Support: ON (but the instructions do not say how much or where to place it)
Adhesion: Skirt, worthless to me, it prints far away from the knob. Nozzle: 0.2 mm used
Layer height: 0.1 mm
When I printed it on my Geeetech A20 printer, the support for the text on the knob came out rock hard and impossible to remove, see images. It also seems impossible to remove enough material to be able to fasten the push button for the A/T disarm, since the overflow of material can't be cleaned up with these settings.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511106.1/warc/CC-MAIN-20231003124522-20231003154522-00624.warc.gz
|
CC-MAIN-2023-40
| 582
| 5
|
https://community.trivantis.com/forums/topic/scorm-interactins-report-questin-text/
|
code
|
First, are your Lectora test properties checked to "The published course will report Test/Survey Question Interactions to the LMS"?

Next, make sure you're looking in the right place in your LMS reports for this data, because Lectora does generate the test data, known in SCORM-speak as CMI interactions.

Here is what I pulled from a sample report for ONE multiple choice test question (SCORM 1.2). This first question in the sequence generates the 0 shown in the example; other questions would follow suit with 1, 2, 3 and so on, so they all have unique identifiers. Please note that the response data below would normally have shown the entire answer text, but we economize suspend data by making the question distractors a single character (D, in this example). In case you don't know, Lectora by default passes the entire text string from the distractor. We then add the actual distractors that the learner views as separate text boxes, to minimize the "overhead" of the suspend data.

SCORM variables and their associated values:
cmi.interactions.0.description = What is the correct choice?
cmi.interactions.0.id = Question_1_107_1213412529437
cmi.interactions.0.correct_responses_pattern = D
cmi.interactions.0.result = correct
cmi.interactions.0.student_response = D
cmi.interactions.0.time = 23:02:09
cmi.interactions.0.type = choice
cmi.interactions.0.weighting = 1

I think this is self-explanatory, which addresses your issue of "human-legible text". Hope this helps.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313936.42/warc/CC-MAIN-20190818145013-20190818171013-00397.warc.gz
|
CC-MAIN-2019-35
| 1,449
| 1
|
https://www.timebolt.io/blog/officehours
|
code
|
Tech Office Hours, now every Thursday
Feb 07, 2022
Get Live Help!
Join Doug and Quinston every Thursday at 9AM CST live on Zoom for TimeBolt office hours.
We invite you to share your workflows, ask any questions, and learn where to get more out of TimeBolt.
To participate live on Zoom visit here.
Each week we will post the Zoom recording to YouTube, and we guarantee you won't watch dead-air :)
We look forward to meeting!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948217723.97/warc/CC-MAIN-20240305024700-20240305054700-00462.warc.gz
|
CC-MAIN-2024-10
| 423
| 7
|
https://www.preventionweb.net/publication/does-disaster-contribute-armed-conflict-quantitative-analysis-disaster-conflict-co
|
code
|
Does disaster contribute to armed conflict? A quantitative analysis of disaster–conflict co-occurrence between 1990 and 2017
The purpose of this study is to contribute a robust cross-country analysis of the co-occurrence of disaster and conflict, with a particular focus on the potential role played by disasters. Disasters and armed conflict often co-occur, but does that imply that disasters trigger or fuel conflict? In the small but growing body of literature attempting to answer this question, divergent findings indicate the complex and contextual nature of any potential answer.
The main findings indicate that, despite a sharp increase in the co-occurrence of disasters and armed conflict over time, disasters do not appear to have a direct statistically significant relation with the occurrence of armed conflict. This result contributes to the understanding of disasters and conflicts as indirectly related via co-creation mechanisms and other factors.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297295329.99/warc/CC-MAIN-20240425130216-20240425160216-00105.warc.gz
|
CC-MAIN-2024-18
| 979
| 3
|
https://www.experts-exchange.com/questions/26471391/Serialize-using-Subquery.html
|
code
|
Up till now I've been using Lebans' Serialize function to add sequential numbers to my query.
For a particular query it was very slow, so I decided to take a different path: I use a subquery, and in the form it works fine, much quicker than the Serialize function.
The trouble is that when I try to view the query, Access blows up/closes down, with no message.
Any ideas why it works in a form but can't be viewed on its own?
I can also view it in design mode.
Attached please find the query. It's based on many others, and I can't post them all.
I'm using Access 2003 SP3 + latest hotfixes
SELECT o1.ContactID, (SELECT COUNT(ContactID) FROM QryPreSubTotalFacility AS o2 WHERE o2.DueDate <= o1.DueDate) AS InstNo, o1.DueDate
FROM QryPreSubTotalFacility AS o1;
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947328.78/warc/CC-MAIN-20180424202213-20180424222213-00157.warc.gz
|
CC-MAIN-2018-17
| 743
| 9
|
https://hibernate.atlassian.net/browse/HHH-13356
|
code
|
I have been doing flame graph analysis to understand CPU usage for some of my applications. The application I was testing in this instance is a Spring Boot app using Spring data JPA + Hibernate with MySQL jdbc driver for persistence. The entity was saved successfully in the database and the functionality itself was not affected.
From the flame graph (hibernate-exception.png), I saw that a lot of CPU was used in filling stack traces for an IllegalArgumentException that was never causing any issues. I have attached the graph to this issue. Looking at the stack, it appears that for entities with composite primary keys, Spring JPA checks whether the id has to be derived for every property of the composite primary key (IdClass). This is done by calling Hibernate's managedType method in MetamodelImpl. The method throws IllegalArgumentException every time, as the properties of my composite key are literals and id derivation is not required. In the example I tested, my composite key had 4 properties, so 4 exceptions were thrown, one for each, and this happens every time I persist an entity. It easily multiplies in write-heavy workloads, and it consumes a lot of CPU because most of the time goes into creating these exceptions and their stack traces.
Taking into account just Spring JPA's code, I don't think this situation warrants an exception being thrown, as it is silently ignored and adds no extra information. I don't know whether throwing the exception is critical to other parts of Hibernate. Could managedType just return null if the provided class is not a managed type, instead of throwing an exception?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738723.55/warc/CC-MAIN-20200810235513-20200811025513-00526.warc.gz
|
CC-MAIN-2020-34
| 1,613
| 3
|
http://coding.derkeiler.com/Archive/General/comp.programming/2003-12/1870.html
|
code
|
Re: Letter to US Sen. Byron Dorgan re unpaid overtime
From: Randy Howard (randy.howard_at_FOOmegapathdslBAR.net)
Date: Sun, 21 Dec 2003 18:22:39 -0600
In article <email@example.com>, spinoza1111
> No, I want Algol. I shall have to develop a Windows compiler myself
Let us know when you're done.
> I also want an END to a tribalized and regressive culture which is
> based upon C, and a dreamlike inability to get beyond the passive
> aggression of C.
Why do you continue to associate sociological babble with programming
languages and those that use them? Why can't you just say you're
not very good at C and stop interacting with it or other C programmers?
> It appears to me that this is what you do, the only difference happen
> to be that your opinions coincide with those of the tribe.
Yes, 10,000 programmers are wrong, and you are the only correct one
on the face of the earth. That seems likely.
> The left and right braces are useless unless you have a consistent
> standard that dictates their use even around loops that contain one
The above is another perfect example of why you need to star as a
new character in Dilbert. Perhaps a troll should be added to
the cast of characters. The style of bracing even bothers you,
but you hate the language and don't use it?
> for (intLineIndex = 0; intLineIndex < intLineCount; intLineIndex++)
> if (processLine(strLine[intLineIndex]) != SUCCESS) break;
Ugh. Hungarian notation is one of the worst ideas in programming
in the last 40 years. Don't believe me, talk to a MFC programmer
that has had to migrate an app from 32-bit to Itanium or Opteron.
They'd like to kill anyone that even mentions the term.
> > I consider that to be a strength of C, in that it does what I expect. I was
> > surprised to find that VB.NET does things differently.
> Your expectation is malformed by overexposure to C.
But yours was not malformed by overexposure to Algol, Fortran or
radon in the basement where your parents locked you up at night.
> I've clarified this. I wrote a subset compiler for business rules that
> doesn't support typedef.
You wrote a C "compiler" that did not support typedef at all. According
to the standard, it has to in order to actually BE a C compiler. You
wrote something else, C-- perhaps. Why not post the code for this
mythical compiler. I suspect it's a figment of your warped imagination.
> Again, you appear to know nothing about runtime. To find the end of
> the string a single instruction, which scans a string for a character,
> is needed.
Simple test: Get out your favorite X86 assembler, and code up some
implementations of code to search through a block of memory looking
for a 0 byte. See how long it takes on 30 bytes of memory, then
see how long this "single instruction" takes to complete on 512MB
of RAM. Hint: You're not going to like the result.
> > Your technique is still way slower, you see.
> Its a worse workman who makes pronouncements about C without having
> any experience in the development of a C compiler, even a partial one.
Let's see your compiler. That doesn't support typedef. I am curious
how an expert does such a thing.
> The runtime doesn't count the characters in a loop until it finds a
> null character. It executes one instruction to scan for the character
> in nearly all cases.
This is only true on SOME processors that happen to support such an
instruction. Also, even those that do so take a variable amount of time
to complete based upon the amount of scanning required. It's not
about instruction count, it's about clock ticks consumed.
-- Randy Howard _o 2reply remove FOOBAR \<, ______________________()/ ()______________________________________________ SCO Spam-magnet: firstname.lastname@example.org
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711441609/warc/CC-MAIN-20130516133721-00034-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 3,720
| 60
|
https://joshuacrowley.com/study/wriveted
|
code
|
NSW State Library visitor trying the finished Wriveted chat bot.
I supported the Wriveted team through multiple iterations of their product as a UX designer. Together we rapidly prototyped the chat bot experience using Landbot and a custom thermal printer integration. I also helped conduct design workshops with school children to build the ultimate chat bot.
Wriveted finds books that match a child's interests, increasing their desire to read and improving literacy. Wriveted's chat bots are in NSW schools and also in the State Library of NSW Children's Library. The team has worked closely with librarians to create an engaging experience for children to find books that match their interests and general reading ability.
The 1st prototype was perfect for gathering feedback and engaging potential stakeholders.
We used a service called landbot.io to develop the chat bot experience. I helped push Landbot to the limit, using API calls and custom logic to achieve the experience set by the Wriveted team. Landbot's SDK allowed me to embed the chatbot in a web app that could communicate with a thermal printer on the same network. The Wriveted team used Lego to fashion a physical unit for all the components.
The Wriveted team were able to use the prototype to collect feedback and attract keen stakeholders, which encouraged them to iterate the design into a model ready for production. I then helped rework the system to run on a Windows 10 Intel Compute Stick. This system proved to be quite stable and was installed in the NSW State Library to be used and enjoyed by young readers.
School students at our design workshop prototyping and providing input for the 2nd iteration.
A simple chat interface that enables unique recommendations based on the readers attributes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816734.69/warc/CC-MAIN-20240413114018-20240413144018-00201.warc.gz
|
CC-MAIN-2024-18
| 1,791
| 8
|
https://awrcorp.com/download/faq/english/docs/Analyst_User_Guide/Host_Configuration.html
|
code
|
To configure a link to your Linux cluster:
From the Home tab, in the Settings group, choose Job Scheduler Admin from the Environment drop-down menu:
The Job Scheduler Admin dialog box appears. On the General Settings tab of the dialog box, click the Create a new host entry button to create a new host entry. Note that the initial entry in the Verified column for your new host entry is "Not Verified".
Click the line for the host entry, then click the Edit host entry details button to display the Host Details dialog box.
For Entry Name enter a name for your host. From the transport drop-down menu choose .
For Initialization > host, enter the IP address or hostname of the master node on your cluster:
For Initialization > authentication > username, enter the user account name on your cluster. For privatekey, click the ellipses on the right margin of the dialog box. Navigate to the location where the private key matching the public key on your cluster is stored, and choose the key in the Browse dialog box. This should be the Linux private key you created (see "Configuring SSH"); do not use a private key created by means of a third-party tool.
If the Linux cluster has to be accessed through a remote host/gateway, enter the IP address or remote host name in Initialization > tunnel > remotehost. Otherwise, leave remotehost empty. Change the remoteport and forwarding port if they differ from the defaults.
Click the button on the Host Details dialog box. Analyst tries to verify that a communication link can be established with your cluster. On success, a message box is displayed showing the message "Host Verification Succeeded!".
On failure, a message box is displayed showing the message "Host Verification FAILED!" and the error causing the failure.
For a discussion of steps to take to troubleshoot a verification failure, see “Troubleshooting Host Verification”.
Click the button to close the message box, then click the button to close the Host Details dialog box. If the verification succeeded, a green check mark and the word "Verified" are displayed in the Verified column for the new host entry in the Job Scheduler Admin dialog box. The verification process for this host is now complete; you do not need to perform verification again the next time you run a simulation on this host.
If you have additional hosts on which you would like to run remote simulations, add them using the method above. When you have finished adding all of your hosts, make sure the box next to each host's name in the Entry Name column is checked to enable the host. If you prefer to disable the host and make it unavailable for use while storing its values in the system, clear the box.
Click the button on the Job Scheduler Admin dialog box to save the settings for your host entries, and then click the Done button in the lower left corner to close the dialog box.
After you have added and verified your host(s), restart Analyst to load the host definition(s) into the system.
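Before (or instead of) debugging inside Analyst, you can sanity-check the same connection parameters with a short standalone script. The sketch below uses the Python paramiko library and is independent of whatever Analyst does internally; the host name, user name and key path are placeholders for the values you entered in the Host Details dialog box.

import paramiko

HOST = "cluster-head.example.com"  # master node from the Host Details entry
USER = "simuser"                   # cluster account name (placeholder)
KEY = "/home/me/.ssh/id_rsa"       # the Linux private key you created

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    client.connect(HOST, username=USER, key_filename=KEY, timeout=10)
    stdin, stdout, stderr = client.exec_command("uname -a")
    print("Connected:", stdout.read().decode().strip())
finally:
    client.close()

If this script connects but Analyst's verification still fails, the problem is more likely in the tunnel/gateway settings than in the key itself.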
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669546.24/warc/CC-MAIN-20191016190431-20191016213931-00411.warc.gz
|
CC-MAIN-2019-43
| 2,977
| 15
|
https://swishjam.com/changelog/changelog-update-1-30-2024
|
code
|
We've been super busy the last several weeks since our last changelog update. Here's a rundown of what we've shipped:
Revenue Analytics: Swishjam's Revenue Analytics solution pulls all of your core SaaS-related revenue metrics into a single dashboard to enable you to better understand your business and keep track of what matters most: your revenue.
Event Trigger Updates: Part of our automation suite, Event Triggers received an upgrade. You can now easily test triggers before creating them, as well as edit triggers after they've been created. We've added the ability to filter and trigger Slack messages only when you want them.
Infra Work: We've been doing a bunch of work to connect profiles and organizations better. We've put a lot of work in here to make this really robust, and it'll pay off in our next update with lots of features for you all.
- Revenue Analytics is live
- Filters on event triggers
- Rebuilt the Stripe data ingestion
- Added ability to test Slack triggers before creating them
- Added ability to edit the Slack triggers
- Automated summary of Slack events sent to your channels
- Opened up backend events & added to the docs
- UTM parameter free tool
- Added auto-capturing of events to our instrumentation
- Work on profiles and organizations. We've tied events to users, like Stripe & Resend events.
- Filtering on users page
- Github Integration
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475897.53/warc/CC-MAIN-20240302184020-20240302214020-00264.warc.gz
|
CC-MAIN-2024-10
| 1,388
| 16
|
http://www.linuxine.com/story/syslog-better-logging-tutorial
|
code
|
Syslog Better Logging Tutorial
Syslog is a powerful tool, but only if you can actually use it. This guide will go over the basics of syslog and provide you with a much more powerful default configuration.
on 05/06/2010 – Made popular on 05/06/2010
I want to configure logrotate on my syslog server, where there is a directory called /syslog. Inside this directory there are a couple of folders with different hostnames, and in each folder I can see a syslog.log.
My intention is to configure logrotate to keep 90 days of data and remove the rest of the files, and I need to add this to cron to run once weekly.
Hi all, how would I go about logging to syslog (systemd-journald) but for my own user? For example, a simple Python program logs as follows:
import syslog
syslog.syslog('hello, syslog')
But when I run journalctl -xn as a normal user, I don't see the log message. I only see it when I run journalctl -xn as root.
It runs for some time after /etc/init.d/syslog start, and after some time it stops logging messages. Also, ps -eaf | grep sys is not showing the syslog processes...
When i start syslog using /etc/init.d/syslog start, /var/adm/messages reports messages as follows:
krtld: [ID 472681 kern.notice] WARNING: mod_load: cannot load modul
Hi, I have a problem with journalctl and syslog output within a C program. When I use journalctl in follow mode:
$ journalctl -f
and I run the following program in another terminal:
#include <stdio.h>
/* logging made in file /var/log/syslog */
sudo is an essential tool in an environment where there are multiple server and system administrators. By default sudo logs to syslog, and it is very straightforward to isolate that logging to a local file, which can be useful.
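To make the pieces above concrete, here is a small Python example that logs under a dedicated syslog facility; routing that facility to its own file is then a single rule in your syslog daemon's configuration. The rsyslog rule in the comment is an assumption and varies by daemon and distribution.

import syslog

# Log under the LOCAL0 facility with a recognizable identifier.
syslog.openlog(ident="myapp", logoption=syslog.LOG_PID, facility=syslog.LOG_LOCAL0)
syslog.syslog(syslog.LOG_INFO, "hello, syslog")
syslog.syslog(syslog.LOG_ERR, "something went wrong")
syslog.closelog()

# A matching rsyslog rule (assumption; adjust for your daemon) would be:
#   local0.*    /var/log/myapp.log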
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131145.0/warc/CC-MAIN-20140914011211-00333-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
CC-MAIN-2014-41
| 1,723
| 13
|
https://share.flixhouse.com/cat/movie-trailers/
|
code
|
Watch full movie @FlixHouse.com: https://www.flixhouse.com/cat/action/video/slavemen | SUBSCRIBE to Our YouTube Channel: https://bit.ly/2nY4k58
A bullied youth happens upon a mask that gives him superpowers and allows him to change his past but revisiting history brings unforeseen consequences.
Connect with FlixHouse Online:
Visit FlixHouse website: https://bit.ly/2nY4k58
Like FlixHouse on Facebook: https://bit.ly/2Gtr2w4
Follow FlixHouse on Instagram: https://bit.ly/2V96T6B
Follow FlixHouse on Twitter: https://bit.ly/2vdDIAN
Find FlixHouse on Roku, Amazon Fire, Android TV, Tablets and Mobile devices.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986669057.0/warc/CC-MAIN-20191016163146-20191016190646-00281.warc.gz
|
CC-MAIN-2019-43
| 608
| 8
|
http://www.trekbbs.com/showpost.php?p=4572358&postcount=24
|
code
|
Personally I would go with QSS. It seems to fit in well with the Federation look and feel. Transwarp feels a little bit Borg (obviously).
Having dabbled with QSS in the past I should imagine the Federation showing some keen interest in further developing the technology upon the return of Voyager from the Delta Quadrant. I should imagine, like with any new technology, there would be stumbling blocks and the occasional accident but the rewards of succeeding would be well worth the sacrifice of the odd mishap.
Much like Warp Speed it would probably take decades to master, much like the slow but steady improvement in Warp technology from 1 right the way up to 9.975 over the period of a couple of centuries.
As stated in previous posts the main problem appears to be the lack of computing/processing power. The ships themselves seem to be able to cope just fine in the slipstream without disintegrating after a couple of minutes.
This is only my second post. I'm like a fat kid in a sweetshop after finding this site :-)
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398446286.32/warc/CC-MAIN-20151124205406-00214-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 1,023
| 5
|
https://www.stepstone.de/stellenangebote--Thesis-for-Masters-degree-m-w-d-Codebook-design-of-high-resolution-CSI-Type-II-feedback-in-5G-NR-Muenchen-Rohde-Schwarz-GmbH-Co-KG--6174544-inline.html?cid=partner_smets___SP-Ingenieurjobs
|
code
|
Rohde & Schwarz develops, produces and markets innovative products for test and measurement, broadcast and media, cybersecurity, secure communications and monitoring and network testing areas. Founded 85 years ago, the independent company has an extensive sales and service network in more than 70 countries.
Join our Test and Measurement Division in München (Germany) at the earliest possible date as
Thesis (for Master's degree) (m/w/d) Codebook design of high resolution CSI (Type II) feedback in 5G NR
- Analysis of dual stage codebook design used in LTE-A TM9/10, LTE-Pro FD-MIMO and 5G NR CSI Type 1
- Investigation of CSI Type II codebook components (basic set size, frequency granularity, phase shift) and configuration through RRC layer
- Establishment of a MATLAB/Python model of hybrid beamforming using CSI Type II PMI codebooks for MU-MIMO transmission (see the sketch after this list)
- User throughput comparison of CSI Type II feedback with 4G LTE CSI feedback
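A toy illustration of the feedback step behind these tasks: build an oversampled DFT beam codebook in NumPy and pick the PMI that maximizes beamforming gain for a given channel. This is a deliberate simplification of the topic; the actual NR Type II codebook is dual-stage (wideband beam selection plus per-beam amplitude and co-phasing), and the antenna count and oversampling factor below are assumptions.

import numpy as np

n_tx = 8               # transmit antennas (assumption)
oversampling = 4       # DFT oversampling factor (assumption)
n_beams = n_tx * oversampling

# Oversampled DFT codebook: one unit-norm column per candidate beam.
k = np.arange(n_tx)[:, None] * np.arange(n_beams)[None, :]
codebook = np.exp(2j * np.pi * k / n_beams) / np.sqrt(n_tx)

# Toy single-user channel vector.
h = (np.random.randn(n_tx) + 1j * np.random.randn(n_tx)) / np.sqrt(2)

gains = np.abs(h.conj() @ codebook) ** 2  # beamforming gain per codeword
pmi = int(np.argmax(gains))               # index the UE would feed back
print(pmi, gains[pmi])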
- Academic studies in electrical engineering, computer science or comparable field of studies preferably with a focus on signal processing or communication technology
- Good knowledge of MIMO
- Basic know-how in mobile communications
- Strong analytical skills and a conscientious and efficient approach to work
- Good command of written and spoken English and German
You can expect very good payment, excellent terms and outstanding opportunities for growth and development. We will also be happy to help you find a room.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144979.91/warc/CC-MAIN-20200220131529-20200220161529-00485.warc.gz
|
CC-MAIN-2020-10
| 1,468
| 13
|
https://communities.sas.com/t5/Base-SAS-Programming/problem-facing-while-importing-csv-file-by-using-infile/td-p/312700?nobounce
|
code
|
11-18-2016 12:53 PM
Please help me out in solving the error mentioned below. How can I rectify this error? I even tried mentioning RECFM=V and LRECL=256 in the statement, but it still didn't work out. Please help.
11-18-2016 01:14 PM
The only things I can suggest from what you show is to first verify that the file is actually in the folder and then open the file with a text editor such as Notepad and examine the contents.
If there isn't any actual content in the file then that's the problem to fix.
11-18-2016 01:16 PM
How does the file look when viewed with a text editor?
Does it have proper (for Windows) line-ending sequences?
11-18-2016 04:05 PM
Your code contains this line:
infile '...' dlm='09'x dsd firstobs=2;
Are you sure that the delimiter in the file is the tab character ('09'x) ?
Then you have:
input customer $10 month $ type amount;
Do you really need the dsd option? Might any variable be missing, producing two tabs in sequence?
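For reference, the dsd question is about how consecutive delimiters are treated: with dsd, two tabs in a row yield an empty (missing) field instead of being collapsed into one delimiter. Python's csv module behaves the dsd way by default, which makes the difference easy to see (illustration only, not SAS code):

import csv, io

raw = "cust01\t\t5\t100.0\n"   # two tabs in a row: the second field is empty

for row in csv.reader(io.StringIO(raw), delimiter="\t"):
    print(row)                  # ['cust01', '', '5', '100.0']

# A reader that collapses runs of delimiters (non-dsd behavior) would see
# only three fields here and shift every later column left.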
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591216.51/warc/CC-MAIN-20180719183926-20180719203926-00395.warc.gz
|
CC-MAIN-2018-30
| 942
| 15
|
https://roveridx.com/category/best-in-class-idx/
|
code
|
Rover IDX 2.1 allows you to choose whose map you'd like to display on the Property Detail Page. You can choose from Bing, Google, Mapbox, and Mapquest. Setup is simple - just paste your map key.
You can also set the Zoom and Map Type (the style). The different maps have different styles, so it's fun to select a style and see what it looks like.
Rover IDX 2.1.0 is primarily a performance release. It contains many enhancements to improve overall speed. In addition, these features are included:
- Improved ability to display listings from multiple MLS regions in one search result.
- Property Layout slideshow with lazy loading images and swipe gestures.
- New Listing layouts, along with the ability to customize existing layouts.
- Display both active and sold listings together in one search result.
- Added 'Forgot password' to the Login dialog
- Price dropdown is now two simple <select> dropdowns. Much simpler, more mobile friendly.
- Improved search panel responsiveness
Custom (non-MLS) Listings
- Rover has long had the option to create property listings of any type: single-family homes for sale, land for sale, homes for lease, vacation rentals. This new release contains a feature for lease processing: a lease can be created with the click of a button and emailed to the tenant, so the entire transaction can be paperless.
- In addition, we've completed quality assurance testing and are happy to announce that Rover IDX 2.1.0 supports PHP 7.2.
We are thrilled to announce that Rover IDX 2.0 will become available this week. A long time in the making, Rover IDX 2.0 is packed with new features you've been asking for:
- New, more modern looking Search Panels!
- Facebook login
- Ability to combine counties, cities, areas, subdivisions… into one drag and drop organized search control on Search Panel. See it in action here.
- Upon setup, offer to add search pages for specific towns
- Listing layout 'Active', 'Sold', 'New', 'Pool'… banner diagonally across photo. See it in action here.
- Allow multiple MLS regions to be searched in one page (if allowed by MLS)
- Allow multiple search shortcodes on one page, each acting independently. See it in action here.
- Add ability to redirect non-Active crawled pages to the default 404 page, the Home page, or any selected page.
- New listing layout `ritz`. See it in action here
- Add ability to add Login and Register menu items to primary or secondary WordPress menu. See it in action here.
- Copy settings from one domain_id to this domain_id
- Configurable registration dialog
- Configurable wait icons. Choose from five great icons.
Other great features:
- Search button will auto-appear on non-custom search panels when search panel has no listing or map framework on same page.
- New Contact form shortcode [rover_idx_contact] See it in action here.
- Add Captcha to Contact form. See it in action here.
- New Registration form shortcode [rover_idx_register] See it in action here.
- Map drawing tools for visitor. See it in action here.
- Allow sorting of Styling >> Search >> Property Types
- Allow sorting of General >> Office & Agents
- Allow removal of Emoji code (JS and CSS) added by WordPress 4.2
- Agent can now manage agent cities that are used to generate Agent Newsletter
Rover IDX allows one license-holder to use that license on up to 5 domains. Managing those domains is going to get a bit easier with this feature.
Rover IDX 2.0 will allow you to copy your Styling settings from one site to another:
Simple three-step process
- Enter the domain id of the source website.
- Go to that source website, and Approve the copy request.
- Go back to the target website and Finalize the copy.
The Copy Settings feature will copy all Styling settings, including:
- CSS Framework
- Login / Register display
- Search panel preferences
- Listing layout preferences
- Property Detail page preferences
This feature will also copy custom layout templates and Map defined locations that you may have created to the target site.
Rover performs searches in real-time (immediately) as you change selections on the search panel. So you do not need a Search button when listings or a map are displayed on the same page. As you make changes to the search panel, the map and listings update almost instantly.
If you have a page that has just a search panel, then you can do two things:
- Use the Rover Quick Search widget, and check 'Redirect search to new page'. This will allow visitors to click a Search button, and open a new page with the search results.
- Define a custom search panel and place any search fields in any order - see here for an example:
Custom search panels are drag and drop created in Rover IDX >> Styling >> Search Panel. Just click Build Your Own (Drag & Drop). After you are done dragging and dropping, you can save this as your default search panel, or copy the Shortcode Example to use this new Search Panel in one or a few pages.
This map is trying to show available listings in Austin, TX. A few rogue map markers are causing the map to zoom out to include all markers on the map.
Listing addresses are geocoded to get the correct latitude and longitude. Sometimes, especially when the address was not correctly input by the agent during the creation of the MLS listing, that geocoding goes awry:
Detecting that the geocode didn't quite give us what we expected isn't so easy: the geocode succeeded, we just got the wrong results.
Rover IDX can handle this scenario very nicely, using the defined_location feature.
Step 1. In the Rover IDX plugin admin pages, go to Styling >> Map.
Step 2. Select the "Defined Locations" tab.
Step 3. Move the map to include the entire area that you want to cover. For instance, if you want to display Austin, TX, zoom in/out to show all of Austin on the map.
Step 4. Click your mouse on the map to place a marker in a corner. Repeat this for all four corners.
Step 5. Notice that a Save this new Polygon button appears. Press it, and give your four-cornered polygon a name.
This is what your polygon might look like
Step 6. On your search page, add defined_locations="<polygon name>" to your Rover shortcode. For instance:
[rover_idx_full_page defined_locations="austin area" ]
If you do not want the polygon to display, but still have the map respect the boundaries of the polygon:
[rover_idx_full_page defined_locations="austin area" show_defined_locations="false"]
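Under the hood, restricting results to a defined polygon comes down to a point-in-polygon test on each listing's coordinates. The ray-casting sketch below illustrates that general idea in Python; it is not Rover IDX's actual implementation, and the Austin-ish coordinates are made up.

def point_in_polygon(lng, lat, polygon):
    # Ray-casting test: is (lng, lat) inside polygon, a list of (lng, lat) vertices?
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        crosses = (yi > lat) != (yj > lat)
        if crosses and lng < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Four corners roughly boxing an area (made-up coordinates).
austin_area = [(-97.9, 30.1), (-97.5, 30.1), (-97.5, 30.5), (-97.9, 30.5)]
print(point_in_polygon(-97.7, 30.3, austin_area))  # True  -> keep the marker
print(point_in_polygon(-96.8, 32.8, austin_area))  # False -> rogue marker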
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655901509.58/warc/CC-MAIN-20200709193741-20200709223741-00354.warc.gz
|
CC-MAIN-2020-29
| 6,396
| 70
|
http://tamewhale.com/whalespeak/2011/08/aim-higher-not-lower/
|
code
|
Aim higher, not lower
When Apple first brought out the iPad it was much cheaper than the pundits predicted. That has presented a real challenge to their competitors to produce a comparable product for the same cost or cheaper.
With the HP TouchPad selling well now that it is being sold off by most outlets at a very reduced price, the reaction seems to be that HP would have done well to make a loss on each unit sold, which they could then recoup somehow: on apps maybe, or on future versions of the hardware.
What’s striking to me is that no one has suggested that anyone try to out-Apple Apple and aim higher. HP had a real opportunity with WebOS to make a product not just comparable to Apple’s but better. What if another company tried to create a tablet double the price of the iPad, with better hardware, better features and a well-polished OS?
What other tablet makers should be learning from Apple is that you don’t have to worry about recouping anything if your product makes a profit in the first place because it is worth buying. The reaction to Apple’s strategy doesn’t have to be to undercut them, it could always inspire you to aim higher.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711286.17/warc/CC-MAIN-20221208082315-20221208112315-00571.warc.gz
|
CC-MAIN-2022-49
| 1,163
| 5
|
http://forums.carolinaeuros.com/topic/10286-looking-to-split-garage-rental-raleigh-nc-area/
|
code
|
I'm one of those unfortunate folks living in a townhouse with no true garage space.
I've made do for the last few years with my storage building, but with the addition of the Genesis there's no way it's going to work out with the additional space needed.
So, I'm looking to see if there's any interest in a small number of folks splitting a garage space for a longer-term rental.
Something like the space in the link below (although not necessarily that particular space, unless the owners are receptive, they seem to be BMW folks after all, the space is still available, and we find a consensus quickly).
Somewhere in the Raleigh area-ish. I don't mind a little drive, but I'd rather not be an hour to the space.
If interested post here or e-mail me at larryXharperXatXgmailXdotXcom
Thanks in advance!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890991.69/warc/CC-MAIN-20180122034327-20180122054327-00331.warc.gz
|
CC-MAIN-2018-05
| 804
| 7
|
http://selfcoachingcards.eu/privacy-policy
|
code
|
We want our users to be aware of information we collect, how we use it and under what circumstances, if any, we disclose it.
For each visitor to our website, our web server automatically recognizes only the visitor's domain name; not the email address.
The information we collect is never shared with other organizations for commercial purposes.
We have a PayPal order form. We require information from the user on this order form. A user must provide contact information (such as name, phone number, email address and postal address) and financial information (such as credit card number, expiration date). If we have trouble processing an order, we use this information to contact the user. This information is used only for shipping and billing purposes and to fulfil customers' orders.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00406-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 789
| 4
|
https://wordpress.org/plugins/visit-counter/
|
code
|
This is a widget based on Simple Hit Counter
by pjungwirth. All you have to do is drag the widget to your desired widget container, set up the title,
text to display and font size and color for the counter and voilà.
It was developed for (and tested in) WordPress MU 1.3
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172831.37/warc/CC-MAIN-20170219104612-00228-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 271
| 4
|
https://thetechcurmudgeon.blogspot.com/2009/03/quickie-on-safari-4-beta.html
|
code
|
- The top tabs are potentially dangerous. Drag a tab in the wrong place, and you move the whole window. To re-order the tabs, you have to grab the active tab by its little "tread" triangle in the upper right corner of the tab. Annoying. I realize Safari is trying to be a Chrome clone here, but it doesn't work.
- The tab-ordering tread and the little close-this-tab-X-thingie don't appear until you actually move the pointer into the tab. That means you can't simply go grab the tab. You have to go to the tab, stop and look, and then do whatever you're going to do. I've only had the beta for a week or so, and already I've closed tabs inadvertently more times than I can remember.
- Can we please put the damn bookmarks in a bar on the left side, like every other frigging browser in the world? PLEASE? Even with the tabs above the URL box and the bookmarks below it, it's still far too easy to hit a bookmark when reaching for a tab. And they're visually confusing.
I'm not alone in this. Yan Pritzger agrees with me.
Whew. Ok, I'm done now.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148375.36/warc/CC-MAIN-20200229022458-20200229052458-00303.warc.gz
|
CC-MAIN-2020-10
| 1,046
| 5
|
http://fixunix.com/openssh/176861-solaris-8-password-inactivity-openssh-print.html
|
code
|
Solaris 8 password inactivity with openssh
We have recently updated our password aging to include setting inactivity days. We are running OpenSSH 4.1p1 in a Solaris 8 environment. It appears that OpenSSH isn't picking up on inactivity: accounts that have been inactive still prompt to change passwords, while if you telnet to the same servers you get kicked out immediately. On the Solaris 9 servers running Sun's SSH, the inactive accounts are being locked. Any ideas?
openssh-unix-dev mailing list
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982967797.63/warc/CC-MAIN-20160823200927-00204-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 484
| 3
|
http://forums.colts.com/profile/1110-major_adobe/
|
code
|
Lack of interest in DMing a moderator, I suppose. This seemed like a perfectly acceptable route. Long threads always get derailed eventually.
Regardless of the rumor status, it is a hot social media topic. Her response was that a Google search didn't produce any leads. A Twitter search of the Indy/Colts media would show a conversation on this topic.
I'm not advocating that this is the truth. I'm just a fan of an open web. Change the title and let the people discuss if they want.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818685129.23/warc/CC-MAIN-20170919112242-20170919132242-00342.warc.gz
|
CC-MAIN-2017-39
| 485
| 3
|
https://businessintelligence.com/dictionary/cloud-business-intelligence/
|
code
|
Cloud Business Intelligence
Cloud business intelligence (cloud BI) refers to network-based tools that turn raw data into information that businesses can use to cut costs, streamline inefficiencies, increase revenue and generally make better organizational decisions.
Because it doesn’t have to be downloaded from a disc or hard drive, cloud based BI offers many advantages as a business intelligence solution. It is easy to access, relieves the user of many of the administrative tasks associated with data management, comes relatively cheap and is highly scalable.
Cloud-based BI can perform just about any business intelligence function:
Online analytical processing (OLAP)
Business performance management
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890771.63/warc/CC-MAIN-20180121135825-20180121155825-00103.warc.gz
|
CC-MAIN-2018-05
| 1,131
| 7
|
https://remotejobsdesk.com/remote-jobs/remote-senior-backend-software-engineer-job-at-remote-circunomics-gmbh-193813
|
code
|
amazon web services
Are you interested in the circular economy and battery technology? At Circunomics, as a Backend Developer (f/m/d), you will be part of our team responsible for designing and implementing components of our cloud-based data analytics platform. This includes, among other things, the specification and implementation of various APIs, backend microservices, and deployment to our cloud infrastructure.
- Degree in computer science or a similar technical field
- 3+ years of related work experience
- Successfully completed development projects with PHP and Symfony Framework
- A broad understanding of message-based systems, RESTful, API design, microservice architectures, and distributed systems
- Design and operation of container-based applications, including those using Kubernetes
- Experience in planning and maintaining large and high-performance backend systems
- Experience with SQL/NoSQL-based database systems and schema designs
- Clean understanding of HTTP protocol and web technology
- Experience in designing systems for public clouds (AWS)
- Experience with Linux based environments, shell scripting, and infrastructure diagnostics
- Willingness to write and maintain documentation
- A keen sense of knowing when a feature “works” and when it can be improved
- A focus on coding standards and code quality
- Architecture skills (code and infrastructure). Acronyms like SOLID and DDD make you excited
- You regularly follow KPIs and can get the most out of them to make well-reasoned decisions and iterate to improve those through time
- Push for shipping. CI/CD is a must. Putting code live every day is a given.
- Your profile is rounded off by a good command of German and/or English
You get bonus points for:
- Express and/or Django
- With us, you are part of a young and ambitious team that works together on the technology of the future
- Modern cloud technologies, topics related to the circular economy, new energy, electromobility and li-ion batteries, modeling of complex systems and machine learning algorithms, data science
- Your strengths and interests determine your development potential - we place great value on individual personality and skill development
- We consider your individual situation and allow you to work in a family-friendly way
- A true remote culture
- Flexible working hours and benefits such as a travel card
Circunomics is a cloud-based platform for the Circular Battery Economy.
The Circunomics battery data platform will power an ecosystem of OEMs, fleets, recyclers, and remanufacturers.
Circunomics is building a symbiotic system that integrates tracking/tracing information, a neutral cloud environment, and commercial service to enable battery producers and OEMs to build circularity into their batteries – enabling reuse and recycling.
We are a team of experts in the circular economy, data science, battery, and software development. As a spin-off of Next Mobility Labs, we are working with cutting-edge technology to help shape the European battery industry in line with the principles of the circular economy.
Read more at Circunomics.com
Interested? Upload your CV together with a Cover Letter (only a few sentences explaining what motivates you about the position and what your salary expectation is) in one pdf.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154798.45/warc/CC-MAIN-20210804080449-20210804110449-00049.warc.gz
|
CC-MAIN-2021-31
| 3,306
| 33
|
https://enggskills.net/lms_courses/integrated-physics-webinars/
|
code
|
This module focuses on advanced physics concepts for component stress and vibration situations relevant to the aero and auto industries. It is mapped to each physics module of phase 1, so that fundamental learning is consolidated with an appreciation for several real-time aspects.
- Lectures: Pre-recorded webinars on all modules
- Quizzes: Quizzes after each sub-module
- Certificate of Completion: Yes
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.31/warc/CC-MAIN-20200812225607-20200813015607-00343.warc.gz
|
CC-MAIN-2020-34
| 428
| 4
|
https://www.essential-freebies.de/board/viewtopic.php?f=33&t=16439
|
code
|
KWPB is intended to simplify the creation of Windows Vista and Windows 7 pre-install CDs/DVDs.
A complementary add-on tool ... Overview
With the release of WinPE 2.x, Microsoft introduced a new method of distributing WinPE,
namely using pre-built Wim files included with the Windows Automated Installation Kit (WAIK).
In order to build a new WinPE environment with this method the user must typically utilize a set of command line utilities
and scripts from the WAIK, which can often be an arduous or confusing process to manage. KAPE was created to simplify this process,
starting with the origination of the WinPE project through the creation of the final WinPE ISO.
KAPE supports operating under all Windows operating systems starting with Windows XP.
All versions of the Windows AIK are supported, including Windows Vista/2008 and Windows 7 versions.
KAPE provides for the creation of a new WinPE project, enables automatic mounting/unmounting of .wim files,
and includes functionality such as injecting Windows drivers and modifying the installed WinPE packages.
Please see the reference tab for details.
PS: The same vendor also offers MBRWizard (see the EFB search). BootSage Flash Builder: http://firesage.com/bootsage.php
Since the release of Windows 7 and the new Windows AIK, there has been a great deal of interest in booting and installing
these environments from USB devices and flash media. Additionally, with the steady increase in Netbook popularity,
many users have experienced difficulties trying to operate or install software without a CD/DVD drive.
BootSage has been designed to help overcome these problems by creating a bootable Windows 7 installation flash drive,
or a WinPE bootable environment from a single, simple to use interface.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827769.75/warc/CC-MAIN-20181216143418-20181216165418-00248.warc.gz
|
CC-MAIN-2018-51
| 1,818
| 19
|
http://meta.stackoverflow.com/users/1902882/matthew-johnson
|
code
|
I was a web developer for the University of Missouri School of Health Professions, and I currently work at answers.com as a Software Engineer. I received a BA in Computer Science from Mizzou in 2013.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548693.153/warc/CC-MAIN-20141224185908-00052-ip-10-231-17-201.ec2.internal.warc.gz
|
CC-MAIN-2014-52
| 407
| 7
|
http://freecode.com/tags/linux?page=1293&with=&without=16
|
code
|
Nit is a statically typed object-oriented programming language. The goal is to propose a statically typed programming language where structure is not a pain. It has a simple, straightforward style and can usually be picked up quickly, particularly by anyone who has programmed before. While object-oriented, it allows procedural styles. The Nit Compiler (nitc) produces efficient machine language binaries.
BurnerOnFire is a multi-threaded program that can write the same content to multiple CD/DVD burners simultaneously. It is currently developed and tested only on Debian and only supports content in the form of ISO files. It uses D-Bus/HAL specification to interact with hardware. It spawns subprocesses that wrap around the command line program Wodim. BurnerOnFire has both CLI and GUI (GTK+) interfaces.
RayFeedReader is a PHP class to retrieve and display feed content from a given URL. It can read feed content into an array, and supports RSS 0.91, RSS 0.92, RDF, RSS 2.0, and Atom feeds. It can detect the feed type automatically, or it can be set manually. A pluggable HTML widget rendering class is supported: the HTML widget can be rendered through the optional RayFeedWidget class or your own extended class. It is easily configurable and can work without any configuration. It is simple and easy to use from anywhere in your application with a single line of code. It supports the Singleton pattern and is lightweight.
RipTcl is a front-end to various programs that rips audio CDs and encodes/transcodes to FLAC, Ogg Vorbis, MP3, WAV, or AAC under Linux. It provides extended meta tagging features, detects and saves files to USB storage players, burns CDs in DAO mode, plays audio, and supports freedb.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187227.84/warc/CC-MAIN-20170322212947-00067-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 1,719
| 4
|
http://forums.imore.com/itunes/223134-help-itunes-sync.html
|
code
|
help with itunes sync
Bear with me, as I have not used iTunes since the 1st-gen iPod days... but I am having some trouble getting this set up. My goal is to have auto sync between the files on the phone and the files on the PC; that is the goal of syncing, right?
Music I don't seem to have an issue with... I can add files to the library and they go on the phone fine. It is everything else that doesn't seem to fly right. Photos, for instance: I have selected a specific folder on my PC where I wish to keep photos. While I can add photos to the PC and sync them to the phone, I cannot do the reverse. For example, if I take a picture with the phone, shouldn't that picture be synced to my computer after a 'sync'?
Apps are another one. I install some apps on the phone, and the only way to 'save' them on my PC is to use the 'transfer purchases' option... shouldn't sync take care of this so both sides are the same?
Lastly, my contacts. I realize I can right-click in iTunes and use backup... I'm assuming this takes care of my contacts among a few other things, BUT... do I need to manually back up like this every so often? Can it not be automated every time the phone is connected?
I'm coming from a BlackBerry where backups were a breeze and no worries to be found... after the first day with the iPhone I must say I LOVE it, but I would like to get straightened out on the above... or perhaps what I am asking is not possible?
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720845.92/warc/CC-MAIN-20161020183840-00128-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 1,440
| 6
|
https://lists.typo3.org/pipermail/neos/2007-July/000548.html
|
code
|
[TYPO3-50-general] TypoScript syntax: Array indexes
robert at typo3.org
Thu Jul 19 14:24:25 CEST 2007
On 19.07.2007 at 12:40, Elmar Hinz wrote:
> that's much clearer now. The point is that you want to make a clean
> decision already during parsing.
> page.thing = something ???
> Your question is: Is this treated as an object property or is it a
> key of an internal object? You don't want to postpone this decision until
> you know about the internals.
If we can be absolutely sure that we don't need to know the complete
object model at parse time, we can keep things like they are, of course. So, what
speaks against keeping it? My first thoughts are:
- Syntax highlighting can be done without knowing which type the
objects are and what properties they have
- Autocompletion is probably easier to implement
- A developer can more easily guess what the object model behind it looks like
by just reading the TS source, without knowing the object model.
(is that relevant?)
- It's way easier to parse and probably faster to parse (no
Do you have more?
More information about the TYPO3-project-5_0-general
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358953.29/warc/CC-MAIN-20211130050047-20211130080047-00308.warc.gz
|
CC-MAIN-2021-49
| 1,062
| 22
|
https://electronics.stackexchange.com/questions/191805/troubleshooting-16x4-lcd-display-showing-no-output
|
code
|
I have a 16x4 LCD (model DCM16433) that I suspect is dead. It has 14 pins rather than 16, but this should just mean no backlight. I have hooked pin 1 to Vss, pin 2 to +5 V, and pin 3 to a 10k pot, but I can't get anything to display at any contrast setting (IIRC, it should display the black character "outlines" even without any data). I know the pot works, and I know the LCD is getting power (that is, the voltage across pins 1 and 2 is +5 V). Am I missing something, or is the LCD just dead?
The display might need negative panel voltage, especially if the display is large. Your pot now has three pins: the center pin goes to pin 3 and the two other pins go to VCC and GND. Instead of GND, connect that pin to something like -10V. If your board has an RS-232 level shifter (MAX232), you can get the negative voltage from there. Then adjust the potentiometer and see if you get dark squares.
For quick testing, you can also get the negative supply from a lab power supply or a 9 volt battery.
Here are a few things you need to consider:
- make sure that both the backlight power and the contrast pins are attached to definite voltages; check using a multimeter
- to start, you can apply 5 volts to the backlight and connect a potentiometer to the contrast pin
- check the read/write pin of the LCD
- make sure the connections to your microcontroller are correct
- check the code that you used for possible errors
Please also share the schematics and code that you used so that we can understand the situation better.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250615407.46/warc/CC-MAIN-20200124040939-20200124065939-00338.warc.gz
|
CC-MAIN-2020-05
| 1,507
| 10
|
https://www.vn.freelancer.com/projects/php/fannie-mae-format-csv-file/
|
code
|
I require a piece of code that will accept a Fannie Mae 3.2 file as input and produce an output file that that is converted to a CSV file
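(For illustration only, and not the poster's actual deliverable: a Fannie Mae 3.x export is a flat text file of fixed-position, record-typed lines, so the conversion boils down to slicing each line by a field layout and emitting CSV rows. The record IDs and positions below are hypothetical; the real layout has to come from the published file-format specification.)

```python
import csv

# Hypothetical layout: map record ID -> list of (field name, start, end).
FIELD_LAYOUT = {
    "03A": [("record_id", 0, 3), ("borrower_name", 3, 38), ("ssn", 38, 47)],
    # ... one entry per record type you care about ...
}

def fnm_to_csv(in_path, out_path):
    """Read a fixed-width, record-typed flat file and write one CSV row
    per recognised field of each recognised record."""
    with open(in_path) as src, open(out_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["record_type", "field", "value"])
        for line in src:
            rec_id = line[:3]
            for name, start, end in FIELD_LAYOUT.get(rec_id, []):
                writer.writerow([rec_id, name, line[start:end].strip()])

fnm_to_csv("loanfile.fnm", "loanfile.csv")
```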
12 freelancers are bidding an average of $118 for this job
Dear client, I am well experienced in data processing with the appropriate languages. Please share the input sample file and the corresponding output columns. I am sure I will provide the perfect result in a day. Regards.
My typing skill (50 words per minute) and my knowledge of MS Office work. I am also able to make different types of corrections and projects in PowerPoint, and I am a good copywriter from PDF to Word.
I can write a Java program to convert the Fannie Mae file to CSV format. It will run on any type of computer, PCs and Macs. I have 20 years of experience using Java.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736962.52/warc/CC-MAIN-20200806121241-20200806151241-00526.warc.gz
|
CC-MAIN-2020-34
| 782
| 5
|
https://www.experts-exchange.com/questions/22073731/ColdFusion-7-and-cfladp.html
|
code
|
ColdFusion 7 and cfladp
Posted on 2006-11-27
I am trying to create our company phone directory from our Active Directory. In order to access Active Directory, do the username and password have to be the domain administrator account? If so, are there any tips or tricks around that to satisfy the security issue that IT has brought up?
start="cn=users & groups,dc=xxxx,dc=com"
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946807.67/warc/CC-MAIN-20180424154911-20180424174911-00378.warc.gz
|
CC-MAIN-2018-17
| 379
| 4
|
https://wiki.umiacs.umd.edu/clip/index.php?title=CLIP_Colloquium_(Fall_2012)&oldid=667
|
code
|
CLIP Colloquium (Fall 2012)
Computational Linguistics and Information Processing
08/20/2012: TopSig – Signature Files Revisited
Speaker: Shlomo Geva, Queensland University of Technology, Australia
Time: Monday, August 20, 2012, 11:00 AM
Venue: AVW 2120
Abstract: Performance comparisons between File Signatures and Inverted Files for text retrieval have previously shown several significant shortcomings of file signatures relative to inverted files. The inverted file approach underpins most state-of-the-art search engine algorithms, such as Language and Probabilistic models. It has been widely accepted that traditional file signatures are inferior alternatives to inverted files. This paper describes TopSig, a modern approach to the construction of file signatures - many advances in semantic hashing and dimensionality reduction have been made in recent times, but these were not so far linked to general purpose, signature file based, search engines. This paper introduces a different signature file approach that builds upon and extends these recent advances. We are able to demonstrate significant improvements in the performance of signature file based indexing and retrieval, performance that is comparable to that of state of the art inverted file based systems, including Language models and BM25. These findings suggest that file signatures offer a viable alternative to inverted files in suitable settings and position the file signature model in the class of Vector Space retrieval models. TopSig is an open-source search engine from QUT and it can be discussed too if there is an interest.
About the Speaker: Associate Professor Shlomo Geva is the discipline leader for Computational Intelligence and Signal Processing in the Computer Science Department at the Queensland University of Technology in Brisbane, Australia. His research interests include clustering, cross-language information retrieval, focused information retrieval, link discovery, and XML indexing.
Host: Doug Oard, email@example.com
09/05/2012: 5 Minute Madness (Part I)
Internal 5-minute lightning talks.
09/12/2012: 5 Minute Madness (Part II)
Internal 5-minute lightning talks.
09/19/2012: CoB: Pairwise Similarity on Large Text Collections with MapReduce
Speaker: Earl Wagner, University of Maryland
Time: Wednesday, September 19, 2012, 11:00 AM
Venue: AVW 3258
Faced with high-volume information streams, intelligence analysts often rely on standing queries to retrieve materials that they need to see. Results of these queries are currently extended by effective and efficient probabilistic techniques that find similar, non-matching content. We discuss research looking further afield to find additional useful documents via MapReduce techniques performing rapid clustering of documents. This approach is intended to provide an improved “peripheral vision” to overcome some blind spots, yielding both immediate utility (detection of documents that otherwise would not have been found) and the potential for improvements to specific standing queries.
About the Speaker: Earl J. Wagner is a Postdoctoral Research Associate at the University of Maryland, College Park in the College of Information Studies (Maryland's iSchool). He was previously a Research Assistant at Northwestern University where he earned his Ph.D. in Computer Science.
09/26/2012: Better! Faster! Stronger (theorems)! Learning to Balance Accuracy and Efficiency when Predicting Linguistic Structures
Speaker: Hal Daume III, University of Maryland
Time: Wednesday, September 26, 2012, 11:00 AM
Venue: AVW 3258
Viewed abstractly, many classic problems in natural language processing can be cast as trying to map a complex input (e.g., a sequence of words) to a complex output (e.g., a syntax tree or semantic graph). This task is challenging both because language is ambiguous (learning difficulties) and represented with discrete combinatorial structures (computational difficulties). I will describe my multi-pronged research effort to develop learning algorithms that explicitly learn to trade off accuracy and efficiency, applied to a variety of language processing phenomena. Moreover, I will show that in some cases, we can actually obtain a model that is faster and more accurate by exploiting smarter learning algorithms. And yes, those algorithms come with stronger theoretical guarantees too.
The key insight that makes this possible is a connection between the task of predicting structured objects (what I care about) and imitation learning (a subfield in robotics). This insight came about as a result of my work a few years ago, and has formed the backbone of much of my work since then. These connections have led other NLP and robotics researchers to make their own independent advances using many of these ideas.
At the end of the talk, I'll briefly survey some of my other contributions in the areas of domain adaptation and multilingual modeling, both of which also fall under the general rubric of "what goes wrong when I try to apply off-the-shelf machine learning models to real language processing problems?"
10/03/2012: Consistent and Efficient Algorithms for Latent-Variable PCFGs
Speaker: Shay Cohen, Columbia University
Time: Wednesday, October 3, 2012, 11:00 AM
Venue: AVW 3258
In the past few years, there has been an increased interest in the machine learning community in spectral algorithms for estimating models with latent variables. Examples include algorithms for estimating mixture of Gaussians or for estimating the parameters of a hidden Markov model.
The EM algorithm has been the mainstay for estimation with latent variables, but because it is not guaranteed to converge to a global maximum of the likelihood, it is not a consistent estimator. Spectral algorithms, on the other hand, are often shown to be consistent.
In this talk, I am interested in presenting a spectral algorithm for latent-variable PCFGs, a model widely used in the NLP community for parsing. This model, originally introduced by Matsuzaki et al. (2005), augments the nonterminals in an underlying PCFG grammar with a latent state. These latent states refine the nonterminal category in order to capture subtle syntactic nuances in the data. This model has been successfully implemented in state-of-the-art parsers such as the Berkeley parser (Petrov et al., 2006).
Our spectral algorithm for latent-variable PCFGs is based on a novel tensor formulation designed for inference with PCFGs. This tensor formulation yields an "observable operator model" for PCFGs which can be readily used for spectral estimation.
The algorithm we developed is considerably faster than EM, and makes only one pass over the data. Statistics are collected from the data in this pass, and singular value decomposition is performed on matrices containing these statistics. Our algorithm is also provably consistent in the sense that, given enough samples, it will estimate probabilities for test trees close to their true probabilities under the latent-variable PCFG model.
If time permits, I will also present a method to improve the efficiency of parsing with latent-variable PCFGs. This method relies on tensor decomposition of the latent-variable PCFG. The tensor decomposition is approximate, and therefore the new parser is an approximate parser as well. Still, the quality of approximation can be guaranteed theoretically by inspecting how errors from the approximation propagate in the parse trees.
10/10/2012: Beyond MaltParser - Advances in Transition-Based Dependency Parsing
Speaker: Joakim Nivre, Uppsala University / Google
Time: Wednesday, October 10, 2012, 11:00 AM
Venue: AVW 3258
The transition-based approach to dependency parsing has become popular thanks to its simplicity and efficiency. Systems like MaltParser achieve linear-time parsing with projective dependency trees using locally trained classifiers to predict the next parsing action and greedy best-first search to retrieve the optimal parse tree, assuming that the input sentence has been morphologically disambiguated using a part-of-speech tagger. In this talk, I survey recent developments in transition-based dependency parsing that address some of the limitations of the basic transition-based approach. First, I show how globally trained classifiers and beam search can be used to mitigate error propagation and enable richer feature representations. Secondly, I discuss different methods for extending the coverage to non-projective trees, which are required for linguistic adequacy in many languages. Finally, I present a model for joint tagging and parsing that leads to improvements in both tagging and parsing accuracy as compared to the standard pipeline approach.
About the Speaker: Joakim Nivre is Professor of Computational Linguistics at Uppsala University and currently visiting scientist at Google, New York. He holds a Ph.D. in General Linguistics from the University of Gothenburg and a Ph.D. in Computer Science from Växjö University. Joakim's research focuses on data-driven methods for natural language processing, in particular for syntactic and semantic analysis. He is one of the main developers of the transition-based approach to syntactic dependency parsing, described in his 2006 book Inductive Dependency Parsing and implemented in the MaltParser system. Joakim's current research interests include the analysis of mildly non-projective dependency structures, the integration of morphological and syntactic processing for richly inflected languages, and methods for cross-framework parser evaluation. He has produced over 150 scientific publications, including 3 books, and has given nearly 70 invited talks at conferences and institutions around the world. He is the current secretary of the European Chapter of the Association for Computational Linguistics.
Host: Hal Daume III, firstname.lastname@example.org
10/23/2012: Bootstrapping via Graph Propagation
Speaker: Anoop Sarkar, Simon Fraser University
Time: Tuesday, October 23, 2012, 2:00 PM
Venue: AVW 4172
Note special time and place!!!
In natural language processing, the bootstrapping algorithm introduced by David Yarowsky (15 years ago) is a discriminative unsupervised learning algorithm that uses some seed rules to bootstrap a classifier (this is the ordinary sense of bootstrapping which is distinct from the Bootstrap in statistics). The Yarowsky algorithm works remarkably well on a wide variety of NLP classification tasks such as distinguishing between word senses and deciding if a noun phrase is an organization, location, or person.
Extending previous attempts at providing an objective function optimization view of Yarowsky, we show that bootstrapping a classifier from a small set of seed rules can be viewed as the propagation of labels between examples via features shared between them. This paper introduces a novel variant of the Yarowsky algorithm based on this view. It is a bootstrapping learning method which uses a graph propagation algorithm with a well defined per-iteration objective function that incorporates the cautious behaviour of the original Yarowsky algorithm.
The experimental results show that our proposed bootstrapping algorithm achieves state of the art performance or better on several different natural language data sets, outperforming other unsupervised methods such as the EM algorithm. We show that cautious learning is an important principle in unsupervised learning, however we do not understand it well, and we show that the Yarowsky algorithm can outperform or match co-training without any reliance on multiple views.
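(As a rough, self-contained sketch of the generic Yarowsky-style self-training loop the abstract builds on — not the speaker's graph-propagation variant — the cautious behaviour amounts to only promoting features that vote for one label with high enough confidence. The threshold, features, and toy data below are made up.)

```python
from collections import Counter, defaultdict

def yarowsky_bootstrap(examples, seed_rules, threshold=0.8, rounds=5):
    """examples: list of feature sets; seed_rules: dict feature -> label.
    Iteratively label examples whose features vote confidently enough, then
    cautiously re-estimate feature -> label rules from the labelled data."""
    rules = dict(seed_rules)
    labels = {}
    for _ in range(rounds):
        # 1. Label examples using the current rule set.
        for i, feats in enumerate(examples):
            votes = Counter(rules[f] for f in feats if f in rules)
            if votes:
                label, count = votes.most_common(1)[0]
                if count / sum(votes.values()) >= threshold:
                    labels[i] = label
        # 2. Re-estimate rules, keeping only confidently one-sided features.
        feature_counts = defaultdict(Counter)
        for i, label in labels.items():
            for f in examples[i]:
                feature_counts[f][label] += 1
        for f, counts in feature_counts.items():
            label, count = counts.most_common(1)[0]
            if count / sum(counts.values()) >= threshold:
                rules[f] = label
    return labels, rules

# Toy word-sense example: features are context words; seeds disambiguate "plant".
examples = [{"life", "growth"}, {"manufacturing", "growth"}, {"life", "leaf"}]
print(yarowsky_bootstrap(examples, {"life": "living", "manufacturing": "factory"})[0])
```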
About the Speaker: Anoop Sarkar is an Associate Professor at Simon Fraser University in British Columbia, Canada where he co-directs the Natural Language Laboratory. He received his Ph.D. from the Department of Computer and Information Sciences at the University of Pennsylvania under Prof. Aravind Joshi for his work on semi-supervised statistical parsing using tree-adjoining grammars.
His research is focused on statistical parsing and machine translation (exploiting syntax or morphology, semi-supervised learning, and domain adaptation). His interests also include formal language theory and stochastic grammars, in particular tree automata and tree-adjoining grammars.
10/24/2012: Recent Advances in Open Information Extraction
Speaker: Mausam, University of Washington
Time: Wednesday, October 24, 2012, 11:00 AM
Venue: AVW 3258
Open Information Extraction is an attractive paradigm for extracting large amounts of relational facts from natural language text in a domain-independent manner. In this talk I describe our recent progress using this model, including our latest open extractors, ReVerb and OLLIE, which substantially improve on the previous state of the art. I will end with our ongoing work that uses open extractions for various end tasks, including multi-document summarization and unsupervised event extraction.
About the Speaker: Mausam is a Research Assistant Professor at the Turing Center in the Department of Computer Science at the University of Washington, Seattle. His research interests span various sub-fields of artificial intelligence, including sequential decision making under uncertainty, large scale natural language processing, and AI applications to crowd-sourcing. Mausam obtained a PhD from University of Washington in 2007 and a Bachelor of Technology from IIT Delhi in 2001.
11/07/2012: Using Syntactic Head Information in Hierarchical Phrase-Based Translation
Speaker: Junhui Li
Time: Wednesday, November 7, 2012, 11:00 AM
Venue: AVW 3258
The traditional hierarchical phrase-based (HPB) model is prone to overgeneration due to lack of linguistic knowledge: the grammar may suggest more derivations than appropriate, many of which may lead to ungrammatical translations. On the other hand, limitations of glue grammar rules in the HPB model may actually prevent systems from considering some reasonable derivations. This talk presents a simple but effective translation model, called the Head-Driven HPB (HD-HPB) model, which incorporates head information in translation rules to better capture syntax-driven information in a derivation. In addition, unlike the original glue rules, the HD-HPB model allows improved reordering between any two neighboring non-terminals to explore a larger reordering search space. In experiments, we examined different head label sets to refine the non-terminal X, including part-of-speech (POS) tags, coarse POS tags, and dependency labels.
About the Speaker: Junhui Li joined the CLIP lab as a post-doc researcher in Aug 2012. He was previously a post-doc researcher in the Centre for Next Generation Localisation (CNGL) at Dublin City University from Feb 2011 to Jul 2012. Before that, he was a student at the NLP Lab of Soochow University, China.
11/28/2012: New Machine Learning Tools for Structured Prediction
Speaker: Veselin Stoyanov, Johns Hopkins University
Time: Wednesday, November 28, 2012, 11:00 AM
Venue: AVW 3258
I am motivated by structured prediction problems in NLP and social network analysis. Markov Random Fields (MRFs) and other Probabilistic Graphical Models (PGMs) are suitable for representing structured prediction: they can model joint distributions and utilize standard inference procedures. MRFs also provide a principled ways for incorporating background knowledge and combining multiple systems.
Two properties of structured prediction problems make learning challenging. First, structured prediction almost inevitably requires approximation to inference, decoding or model structure. Second, unlike the traditional ML setting that assumes i.i.d. training and test data, structured learning problems often consist of a single example used both for training and prediction.
We address the two issues above. First, we argue that the presence of approximations in MRF-based systems requires a novel perspective on training. Instead of maximizing data likelihood, one should seek the parameters that minimize the empirical risk of the entire imperfect system. We show how to locally optimize this risk using error back-propagation and local optimization. On four NLP problems our approach significantly reduces loss on test data compared to choosing approximate MAP parameters.
Second, we utilize data imputation in the limited data setting. At test time we use sampling to impute data that is a more accurate approximation of the data distribution. We use our risk minimization techniques to train fast discriminative models on the imputed data. Thus we can: (i) train discriminative models given a single training and test example; (ii) train generative/discriminative hybrids that can incorporate useful priors and learn from semi-supervised data.
About the Speaker: Veselin Stoyanov is currently a postdoctoral researcher at the Human Language Technology Center of Excellence (HLT-COE) at Johns Hopkins University (JHU). He will be joining Facebook as a Research Scientist starting in January 2013. Previously he spent two years working with Prof. Jason Eisner at JHU's Center for Language and Speech Processing supported by a Computing Innovation Postdoctoral Fellowship. He received the Ph.D. degree from Cornell University under the supervision of Prof. Claire Cardie in 2009 and the Honors B.Sc. from the University of Delaware in 2002. His research interests reside in the intersection of Machine Learning and Computational Linguistics. More precisely, he is interested in using probabilistic models for complex structured problems with applications to knowledge base population, modeling social networks, extracting information from text and coreference resolution. In addition to the CIFellowship, Ves Stoyanov is the recipient of an NSF Graduate Research Fellowship and other academic honors.
12/05/2012: Combining Statistical Translation Techniques for Cross-Language Information Retrieval
Speaker: Ferhan Ture, University of Maryland
Time: Wednesday, December 5, 2012, 11:00 AM
Venue: AVW 3258
Cross-language information retrieval today is dominated by techniques that rely principally on context-independent token-to-token mappings despite the fact that state-of-the-art statistical machine translation systems now have far richer translation models available in their internal representations. This paper explores combination-of-evidence techniques using three types of statistical translation models: context-independent token translation, token translation using phrase-dependent contexts, and token translation using sentence-dependent contexts. Context-independent translation is performed using statistically-aligned tokens in parallel text, phrase-dependent translation is performed using aligned statistical phrases, and sentence-dependent translation is performed using those same aligned phrases together with an n-gram language model. Experiments on retrieval of Arabic, Chinese, and French documents using English queries show that no one technique is optimal for all queries, but that statistically significant improvements in mean average precision over strong baselines can be achieved by combining translation evidence from all three techniques. The optimal combination is, however, found to be resource-dependent, indicating a need for future work on robust tuning to the characteristics of individual collections.
This is a practice talk for COLING 2012.
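(Purely as an illustration of what "combination of evidence" can mean in practice — not the paper's actual scoring — the simplest way to merge several translation models is to interpolate their term-translation probabilities. The weights and toy distributions below are made up.)

```python
def combine_translation_probs(token_p, phrase_p, sentence_p, weights=(0.4, 0.3, 0.3)):
    """Linearly interpolate translation probabilities from three models.
    Each argument maps a target-language term to P(term | source query term)."""
    terms = set(token_p) | set(phrase_p) | set(sentence_p)
    combined = {t: weights[0] * token_p.get(t, 0.0)
                   + weights[1] * phrase_p.get(t, 0.0)
                   + weights[2] * sentence_p.get(t, 0.0)
                for t in terms}
    total = sum(combined.values()) or 1.0
    return {t: p / total for t, p in combined.items()}  # renormalise

print(combine_translation_probs({"maison": 0.7, "foyer": 0.3},
                                {"maison": 0.9, "domicile": 0.1},
                                {"maison": 0.8, "foyer": 0.2}))
```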
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476592.66/warc/CC-MAIN-20240304232829-20240305022829-00714.warc.gz
|
CC-MAIN-2024-10
| 19,382
| 80
|
https://lists.tahoe-lafs.org/pipermail/tahoe-dev/2013-March/008057.html
|
code
|
[tahoe-dev] Secure OS for running Tahoe?
clashthebunny at gmail.com
Mon Mar 4 13:18:28 UTC 2013
On Mon, Mar 4, 2013 at 2:28 PM, Greg Troxel <gdt at ir.bbn.com> wrote:
> OpenSSL did not come from OpenBSD.
> don't know what you mean by "powerpc is no longer updated enough"; that
> seems to refer to perhaps a particular Linux distribution's practices.
Random things that are becoming more and more prevalent don't work. If there is a way to get node.js or MongoDB running on NetBSD on PowerPC, I would love to know.
> That's interesting that ksplice is repackaging Free updates and
> charging; presumably one can redistribute the updates but the program to
> apply them is non-Free. Still, that's going off the Free Software plan.
I think that everything is openly available. You can read their
academic papers: http://www.ksplice.com/paper We would just have to
follow the same process and hide from the Oracle patent lawyers...
> When I look at all the problems I have, rebooting a machine every few
> months is not a big deal.
If rebooting has burned you even once in the past, you may make the
poor choice of not rebooting when necessary.
> I run tahoe-lafs from pkgsrc on a mac (including python, twisted, and
> everything else needed). I don't worry about /usr/pkg/etc much; the
> behavior on update is sane.
Good to know!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363135.71/warc/CC-MAIN-20211205035505-20211205065505-00514.warc.gz
|
CC-MAIN-2021-49
| 1,333
| 24
|
https://www.javascripting.com/?sort=rating&%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%253%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%3D%2F%3D%3D%3D%2F%3D=&p=7
|
code
|
📦🚀 Fast, disk space efficient package manager
Realtime MVC Framework for Node.js
Mdb Ui Kit
Bootstrap 5 & Material Design 2.0 UI KIT
the last carousel you'll ever need
Budibase is an open-source low-code platform for creating internal apps in minutes. Supports PostgreSQL, MySQL, MSSQL, MongoDB, Rest API, Docker, K8s 🚀
A kickass library to manage your poppers
Material Design Lite
Material Design Components in HTML/CSS/JS
:scissors: Modern copy to clipboard. No Flash. Just 2kb :clipboard:
PouchDB is a pocket-sized database.
A modern, HTML5-ready alternative to CSS resets (as used by Twitter Bootstrap, HTML5 Boilerplate, TweetDeck, Soundcloud, and many others).
Mobile UI Components based on Vue & WeUI
SVGO is a Nodejs-based tool for optimizing SVG vector graphics files.
Application Architecture for Building User Interfaces
Simplified HTTP request client.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00343.warc.gz
|
CC-MAIN-2023-23
| 872
| 16
|
https://devpost.com/software/paper-play
|
code
|
Here at Duke, we have a whole host of world-class amenities, from sports facilities to research laboratories to spaces dedicated solely to the arts. Many people across the United States and the world, however, don't have the time or the money to do things we take for granted, like sitting down at a piano and playing some music. Paper Play aims to address this problem by providing people with a low-cost piano alternative that requires only a smartphone, while still providing the tactile experience of a real piano.
What it does
Paper Play allows the user to mount their smartphone on a simple stand (or a couple of books, or a shoebox), point the camera at a pre-printed sheet of paper with some funny cartoon faces on it, and then, by covering those faces with their fingers, make music. It also allows users to choose from playing notes versus chords, and even has recording capability so users can listen to what they just played!
How we built it
We used React Native and Expo to design our mobile app, and the Expo camera API to process the live image stream.
Challenges we ran into
We originally wanted to use a simpler keyboard-style printout for our paper keyboard, but it turns out to be really hard to design a camera API using React Native that simply detects if a user's fingers are touching a piece of paper in a certain location. Instead, we used the Expo camera API's face-detection feature to detect cartoon faces on the piece of paper, and hacked together a way to determine which note was being pressed by figuring out which faces were visible and which weren't.
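(The app itself is React Native, but the note-detection idea described above is language-agnostic; a hedged sketch of the visibility-diff logic, with a hypothetical face-to-note mapping, looks like this.)

```python
# Hypothetical mapping from printed face index to a note name.
FACE_TO_NOTE = {0: "C4", 1: "D4", 2: "E4", 3: "F4", 4: "G4", 5: "A4", 6: "B4", 7: "C5"}

def pressed_notes(previously_visible, currently_visible):
    """A key is 'pressed' when its face was visible in the last frame
    but is now covered by a finger (i.e. no longer detected)."""
    covered = previously_visible - currently_visible
    return sorted(FACE_TO_NOTE[i] for i in covered if i in FACE_TO_NOTE)

# Frame n saw all eight faces; frame n+1 no longer sees faces 2 and 4.
print(pressed_notes({0, 1, 2, 3, 4, 5, 6, 7}, {0, 1, 3, 5, 6, 7}))  # ['E4', 'G4']
```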
Accomplishments that we're proud of
- Hacking the Expo camera face detection feature to make our keyboard
- Designing an algorithm to associate notes with keys and ensure one sound being made for each key press
- Ability to change modes of playing (e.g. single notes or chords)
- Ability to record certain time frames of playing
What we learned
- How to create a React Native application from scratch
- React Native camera APIs
What's next for Paper Play
- A camera api which works for any key design, not just faces
- Sound-mixer like capability to allow users to play over sounds they just recorded, possibly in a different key
- More keys and faster response to key presses
- Cuter UI
- More possible sounds
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178373241.51/warc/CC-MAIN-20210305183324-20210305213324-00422.warc.gz
|
CC-MAIN-2021-10
| 2,292
| 21
|
https://dailytechvideo.com/video-365-james-adam-why-is-nobody-using-refinements/
|
code
|
Several years ago, a new proposal made the rounds in the Ruby community. “Refinements” were proposed as a way to modify an existing class, but without the unfortunate (and potentially dangerous) side effects of monkey patching. The Ruby community discussed and debated refinements, and when they were finally included in Ruby … well, it seems that almost nothing happened, because no one really used them. In this talk, James Adam asks why no one is using them. He describes refinements, and then points to reasons why they might be interesting and/or useful, and then discusses why they aren’t in common use.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00260.warc.gz
|
CC-MAIN-2022-27
| 617
| 1
|
https://support.philo.com/hc/en-us/articles/115005918088-Philo-Edu-is-not-loading-
|
code
|
If Philo Edu fails to load, please try visiting watch.philo.com on your computer. Does the connection still time out? Server connection timeouts are often accompanied by error messages like the following:
This site can't be reached.
This page isn't working.
The server unexpectedly dropped the connection.
Sorry, we cannot connect to Philo right now.
There was a problem connecting to Philo.
This type of error suggests your IP address is valid but the site is down. Submit a support request letting us know the site is unreachable and watch.philo.com is timing out. We'll look into it as soon as possible.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647614.56/warc/CC-MAIN-20230601042457-20230601072457-00169.warc.gz
|
CC-MAIN-2023-23
| 606
| 7
|
https://www.odbms.org/blog/2012/03/in-memory-database-systems-interview-with-steve-graves-mcobject/
|
code
|
In-memory database systems. Interview with Steve Graves, McObject.
“Application types that benefit from an in-memory database system are those for which eliminating latency is a key design goal, and those that run on systems that simply have no persistent storage, like network routers and low-end set-top boxes” — Steve Graves.
On the topic of in-memory database systems, I interviewed one of our experts, Steve Graves, co-founder and CEO of McObject.
Q1. What is an in-memory database system (IMDS)?
Steve Graves: An in-memory database system (IMDS) is a database management system (DBMS) that uses main memory as its primary storage medium.
A “pure” in-memory database system is one that requires no disk or file I/O, whatsoever.
In contrast, a conventional DBMS is designed around the assumption that records will ultimately be written to persistent storage (usually hard disk or flash memory).
Obviously, disk or flash I/O is expensive, in performance terms, and therefore retrieving data from RAM is faster than fetching it from disk or flash, so IMDSs are very fast.
An IMDS also offers a more streamlined design. Because it is not built around the assumption of storage on hard disk or flash memory, the IMDS can eliminate the various DBMS sub-systems required for persistent storage, including cache management, file management and others. For this reason, an in-memory database is also faster than a conventional database that is either fully-cached or stored on a RAM-disk.
In other areas (not related to persistent storage) an IMDS can offer the same features as a traditional DBMS. These include SQL and/or native language (C/C++, Java, C#, etc.) programming interfaces; formal data definition language (DDL) and database schemas; support for relational, object-oriented, network or combination data designs; transaction logging; database indexes; client/server or in-process system architectures; security features, etc. The list could go on and on. In-memory database systems are a sub-category of DBMSs, and should be able to do everything that entails.
Q2. What are significant differences between an in-memory database versus a database that happens to be in memory (e.g. deployed on a RAM-disk).
Steve Graves: We use the comparison to illustrate IMDSs’ contribution to performance beyond the obvious elimination of disk I/O. If IMDSs’ sole benefit stemmed from getting rid of physical I/O, then we could get the same performance by deploying a traditional DBMS entirely in memory – for example, using a RAM-disk in place of a hard drive.
We tested an application performing the same tasks with three storage scenarios: using an on-disk DBMS with a hard drive; the same on-disk DBMS with a RAM-disk; and an IMDS (McObject’s eXtremeDB). Moving the on-disk database to a RAM drive resulted in nearly 4x improvement in database reads, and more than 3x improvement in writes. But the IMDS (using main memory for storage) outperformed the RAM-disk database by 4x for reads and 420x for writes.
Clearly, factors other than eliminating disk I/O contribute to the IMDS’s performance – otherwise, the DBMS-on-RAM-disk would have matched it. The explanation is that even when using a RAM-disk, the traditional DBMS is still performing many persistent storage-related tasks.
For example, it is still managing a database cache – even though the cache is now entirely redundant, because the data is already in RAM. And the DBMS on a RAM-disk is transferring data to and from various locations, such as a file system, the file system cache, the database cache and the client application, compared to an IMDS, which stores data in main memory and transfers it only to the application. These sources of processing overhead are hard-wired into on-disk DBMS design, and persist even when the DBMS uses a RAM-disk.
An in-memory database system also uses the storage space (memory) more efficiently.
A conventional DBMS can use extra storage space in a trade-off to minimize disk I/O (the assumption being that disk I/O is expensive, and storage space is abundant, so it’s a reasonable trade-off). Conversely, an IMDS needs to maximize storage efficiency because memory is not abundant in the way that disk space is. So a 10 gigabyte traditional database might only be 2 gigabytes when stored in an in-memory database.
Q3. What is in your opinion the current status of the in-memory database technology market?
Steve Graves: The best word for the IMDS market right now is “confusing.” “In-memory database” has become a hot buzzword, with seemingly every DBMS vendor now claiming to have one. Often these purported IMDSs are simply the providers’ existing disk-based DBMS products, which have been tweaked to keep all records in memory – and they more closely resemble a 100% cached database (or a DBMS that is using a RAM-disk for storage) than a true IMDS. The underlying design of these products has not changed, and they are still burdened with DBMS overhead such as caching, data transfer, etc. (McObject has published a white paper, Will the Real IMDS Please Stand Up?, about this proliferation of claims to IMDS status.)
Only a handful of vendors offer IMDSs that are built from scratch as in-memory databases. If you consider these to comprise the in-memory database technology market, then the status of the market is mature. The products are stable, have existed for a decade or more and are deployed in a variety of real-time software applications, ranging from embedded systems to real-time enterprise systems.
Q4. What are the application types that benefit the use of an in-memory database system?
Steve Graves: Application types that benefit from an IMDS are those for which eliminating latency is a key design goal, and those that run on systems that simply have no persistent storage, like network routers and low-end set-top boxes. Sometimes these types overlap, as in the case of a network router that needs to be fast, and has no persistent storage. Embedded systems often fall into the latter category, in fields such as telco and networking gear, avionics, industrial control, consumer electronics, and medical technology. What we call the real-time enterprise sector is represented in the first category, encompassing uses such as analytics, capital markets (algorithmic trading, order matching engines, etc.), real-time cache for e-commerce and other Web-based systems, and more.
Software that must run with minimal hardware resources (RAM and CPU) can also benefit.
As discussed above, IMDSs eliminate sub-systems that are part-and-parcel of on-disk DBMS processing. This streamlined design results in a smaller database system code size and reduced demand for CPU cycles. When it comes to hardware, IMDSs can “do more with less.” This means that the manufacturer of, say, a set-top box that requires a database system for its electronic programming guide, may be able to use a less powerful CPU and/or less memory in each box when it opts for an IMDS instead of an on-disk DBMS. These manufacturing cost savings are particularly desirable in embedded systems products targeting the mass market.
Q5. McObject offers an in-memory database system called eXtremeDB, and an open source embedded DBMS, called Perst. What is the difference between the two? Is there any synergy between the two products?
Steve Graves: Perst is an object-oriented embedded database system.
It is open source and available in Java (including Java ME) and C# (.NET) editions. The design goal for Perst is to provide as nearly transparent persistence for Java and C# objects as practically possible within the normal Java and .NET frameworks. In other words, no special tools, byte codes, or virtual machine are needed. Perst should provide persistence to Java and C# objects while changing the way a programmer uses those objects as little as possible.
eXtremeDB is not an object-oriented database system, though it does have attributes that give it an object-oriented “flavor.” The design goals of eXtremeDB were to provide a full-featured, in-memory DBMS that could be used right across the computing spectrum: from resource-constrained embedded systems to high-end servers used in systems that strive to squeeze out every possible microsecond of latency. McObject’s eXtremeDB in-memory database system product family has features including support for multiple APIs (SQL ODBC/JDBC & native C/C++, Java and C#), varied database indexes (hash, B-tree, R-tree, KD-tree, and Patricia Trie), ACID transactions, multi-user concurrency (via both locking and “optimistic” transaction managers), and more. The core technology is embodied in the eXtremeDB IMDS edition. The product family includes specialized editions, built on this core IMDS, with capabilities including clustering, high availability, transaction logging, hybrid (in-memory and on-disk) storage, 64-bit support, and even kernel mode deployment. eXtremeDB is not open source, although McObject does license the source code.
The two products do not overlap. There is no shared code, and there is no mechanism for them to share or exchange data. Perst for Java is written in Java, Perst for .NET is written in C#, and eXtremeDB is written in C, with optional APIs for Java and .NET. Perst is a candidate for Java and .NET developers that want an object-oriented embedded database system, have no need for the more advanced features of eXtremeDB, do not need to access their database from C/C++ or from multiple programming languages (a Perst database is compatible with Java or C#), and/or prefer the open source model. Perst has been popular for smartphone apps, thanks to its small footprint and smart engineering that enables Perst to run on mobile platforms such as Windows Phone 7 and Java ME.
eXtremeDB will be a candidate when eliminating latency is a key concern (Perst is quite fast, but not positioned for real-time applications), when the target system doesn’t have a JVM (or sufficient resources for one), when the system needs to support multiple programming languages, and/or when any of eXtremeDB’s advanced features are required.
Q6. What are the current main technological developments for in-memory database systems?
Steve Graves: At McObject, we’re excited about the potential of IMDS technology to scale horizontally, across multiple hardware nodes, to deliver greater scalability and fault-tolerance while enabling more cost-effective system expansion through the use of low-cost (i.e. “commodity”) servers. This enthusiasm is embodied in our new eXtremeDB Cluster edition, which manages data stores across distributed nodes. Among eXtremeDB Cluster’s advantages is that it eliminates any performance ceiling from being CPU-bound on a single server.
Scaling across multiple hardware nodes is receiving a lot of attention these days with the emergence of NoSQL solutions. But database system clustering actually has much deeper roots. One of the application areas where it is used most widely is in telecommunications and networking infrastructure, where eXtremeDB has always been a strong player. And many emerging application categories – ranging from software-as-a-service (SaaS) platforms to e-commerce and social networking applications – can benefit from a technology that marries IMDSs' performance and "real" DBMS features with a distributed system model.
Q7. What are the similarities and differences between current various database clustering solutions? In particular, let’s look at dimensions such as scalability, ACID vs. CAP, intended/applicable problem domains, structured vs. unstructured, and complexity of implementation.
Steve Graves: ACID support vs. “eventual consistency” is a good place to start looking at the differences between clustering database solutions (including some cluster-like NoSQL products). ACID-compliant transactions will be Atomic, Consistent, Isolated and Durable; consistency implies the transaction will bring the database from one valid state to another and that every process will have a consistent view of the database. ACID-compliance enables an on-line bookstore to ensure that a purchase transaction updates the Customers, Orders and Inventory tables of its DBMS. All other things being equal, this is desirable: updating Customers and Orders while failing to change Inventory could potentially result in other orders being taken for items that are no longer available.
However, enforcing the ACID properties becomes more of a challenge with distributed solutions, such as database clusters, because the node initiating a transaction has to wait for acknowledgement from the other nodes that the transaction can be successfully committed (i.e. there are no conflicts with concurrent transactions on other nodes). To speed up transactions, some solutions have relaxed their enforcement of these rules in favor of an “eventual consistency” that allows portions of the database (typically on different nodes) to become temporarily out-of-synch (inconsistent).
Systems embracing eventual consistency will be able to scale horizontally better than ACID solutions – it boils down to their asynchronous rather than synchronous nature.
Eventual consistency is, obviously, a weaker consistency model, and implies some process for resolving consistency problems that will arise when multiple asynchronous transactions give rise to conflicts. Resolving such conflicts increases complexity.
Another area where clustering solutions differ is along the lines of shared-nothing vs. shared-everything approaches. In a shared-nothing cluster, each node has its own set of data.
In a shared-everything cluster, each node works on a common copy of database tables and rows, usually stored in a fast storage area network (SAN). Shared-nothing architecture is naturally more complex: if the data in such a system is partitioned (each node has only a subset of the data) and a query requests data that “lives” on another node, there must be code to locate and fetch it. If the data is not partitioned (each node has its own copy) then there must be code to replicate changes to all nodes when any node commits a transaction that modifies data.
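(As a minimal, vendor-neutral illustration of the routing code a partitioned shared-nothing cluster needs: each key is deterministically mapped to the node that owns it, and every lookup for that key must go there. Node names below are hypothetical.)

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # hypothetical cluster members

def node_for_key(key: str) -> str:
    """Deterministically route a record key to the node that owns it."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

def get(key):
    owner = node_for_key(key)
    # In a real cluster this would be a network call to `owner`;
    # here we just report where the lookup would have to go.
    return f"fetch {key!r} from {owner}"

print(get("customer:1001"))
print(get("order:42"))
```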
NoSQL solutions emerged in the past several years to address challenges that occur when scaling the traditional RDBMS. To achieve scale, these solutions generally embrace eventual consistency (thus validating the CAP Theorem, which holds that a system cannot simultaneously provide Consistency, Availability and Partition tolerance). And this choice defines the intended/applicable problem domains. Specifically, it eliminates systems that must have consistency. However, many systems don’t have this strict consistency requirement – an on-line retailer such as the bookstore mentioned above may accept the occasional order for a non-existent inventory item as a small price to pay for being able to meet its scalability goals. Conversely, transaction processing systems typically demand absolute consistency.
NoSQL is often described as a better choice for so-called unstructured data. Whereas RDBMSs have a data definition language that describes a database schema and becomes recorded in a database dictionary, NoSQL databases are often schema-less, storing opaque “documents” that are keyed by one or more attributes for subsequent retrieval. Proponents argue that schema-less solutions free us from the rigidity imposed by the relational model and make it easier to adapt to real-world changes. Opponents argue that schema-less systems are for lazy programmers, create a maintenance nightmare, and that there is no equivalent to relational calculus or the ANSI standard for SQL. But the entire structured or unstructured discussion is tangential to database cluster solutions.
Q8. Are in-memory database systems an alternative to classical disk-based relational database systems?
Steve Graves: In-memory database systems are an ideal alternative to disk-based DBMSs when performance and efficiency are priorities. However, this explanation is a bit fuzzy, because what programmer would not claim speed and efficiency as goals? To nail down the answer, it’s useful to ask, “When is an IMDS not an alternative to a disk-based database system?”
Volatility is pointed to as a weak point for IMDSs. If someone pulls the plug on a system, all the data in memory can be lost. In some cases, this is not a terrible outcome. For example, if a set-top box programming guide database goes down, it will be re-provisioned from the satellite transponder or cable head-end. In cases where volatility is more of a problem, IMDSs can mitigate the risk. For example, an IMDS can incorporate transaction logging to provide recoverability. In fact, transaction logging is unavoidable with some products, such as Oracle’s TimesTen (it is optional in eXtremeDB). Database clustering and other distributed approaches (such as master/slave replication) contribute to database durability, as does use of non-volatile RAM (NVRAM, or battery-backed RAM) as storage instead of standard DRAM. Hybrid IMDS technology enables the developer to specify persistent storage for selected record types (presumably those for which the “pain” of loss is highest) while all other records are managed in memory.
However, all of these strategies require some effort to plan and implement. The easiest way to reduce volatility is to use a database system that implements persistent storage for all records by default – and that’s a traditional DBMS. So, the IMDS use-case occurs when the need to eliminate latency outweighs the risk of data loss or the cost of the effort to mitigate volatility.
It is also the case that flash and, especially, spinning memory are much less expensive than DRAM, which puts an economic lid on very large in-memory databases for all but the richest users. And, riches notwithstanding, it is not yet possible to build a system with hundreds of terabytes, let alone petabytes or exabytes, of memory, whereas spinning memory has no such limitation.
By continuing to use traditional databases for most applications, developers and end-users are signaling that DBMSs’ built-in persistence is worth its cost in latency. But the growing role of IMDSs in real-time technology ranging from financial trading to e-commerce, avionics, telecom/Netcom, analytics, industrial control and more shows that the need for speed and efficiency often outweighs the convenience of a traditional DBMS.
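(As a toy illustration of the trade-off discussed above — and emphatically not eXtremeDB's or any vendor's actual design — here is an in-memory key-value store whose reads and writes only ever touch a dict, with an optional append-only transaction log that restores the state after a restart.)

```python
import json, os

class TinyIMDS:
    """Toy in-memory store: reads and writes hit a dict; durability, when
    wanted, comes from replaying an append-only transaction log."""
    def __init__(self, log_path=None):
        self.data = {}
        self.log_path = log_path
        if log_path and os.path.exists(log_path):
            with open(log_path) as log:          # recover after a restart
                for line in log:
                    key, value = json.loads(line)
                    self.data[key] = value

    def put(self, key, value):
        self.data[key] = value                   # memory is the primary storage
        if self.log_path:                        # optional durability
            with open(self.log_path, "a") as log:
                log.write(json.dumps([key, value]) + "\n")

    def get(self, key):
        return self.data.get(key)                # no cache layer, no file I/O

db = TinyIMDS(log_path="imds.log")
db.put("sensor:42", {"temp": 21.5})
print(db.get("sensor:42"))
```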
Steve Graves is co-founder and CEO of McObject, a company specializing in embedded Database Management System (DBMS) software. Prior to McObject, Steve was president and chairman of Centura Solutions Corporation and vice president of worldwide consulting for Centura Software Corporation.
– A super-set of MySQL for Big Data. Interview with John Busch, Schooner.
– Re-thinking Relational Database Technology. Interview with Barry Morris, Founder & CEO NuoDB.
– On Data Management: Interview with Kristof Kloeckner, GM IBM Rational Software.
– vFabric SQLFire: Better then RDBMS and NoSQL?
ODBMS.ORG: Free Downloads and Links:
NoSQL Data Stores
Graphs and Data Stores
Cloud Data Stores
Entity Framework (EF) Resources
Object-Relational Impedance Mismatch
Databases in general
Big Data and Analytical Data Platforms
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649343.34/warc/CC-MAIN-20230603201228-20230603231228-00501.warc.gz
|
CC-MAIN-2023-23
| 19,360
| 61
|
https://github.com/sideangleside/rho
|
code
|
rho - Tool for discovering RHEL, Linux, and Unix Servers
rho is a tool for scanning a network, logging into systems using SSH, and retrieving information about available Unix and Linux servers.
This README contains information about installing rho, basic usage, known issues, and best practices. For more detailed information about the available commands and command options for rho, see the manpage.
- Intro to rho
- Requirements & Assumptions
- Command Syntax & Usage
- Copyright & License
Intro to rho
rho is an Ansible-based network inventory tool. rho scans a user-defined range of machines and then reports basic information about the operating system and hardware for each server. rho simplifies some basic sysadmin tasks, like managing licensing renewals and new deployments.
rho only has to be installed on a single central server to scan all of the servers on a network or subnet. rho is an agent-less discovery tool built on Ansible, so there is no need to install anything on any server but the one which will run the scans. Ansible uses SSH, which is commonly available for servers, on both the scanning server and the target machines.
The rho tool itself is set up through two configuration items:
- auth entries, which contain the username and password or SSH key to access each server
- profile entries, which contain IP address ranges, and the auth credentials to use.
There can be multiple auth entries in each profile. A profile contains all the hosts and ranges that are to be tested against the auths.
The rho tool configuration is created using rho itself. There are subcommands to create and edit auth and profile items in the configuration. For example:
rho auth add --name server1auth --username rho-user --sshkeyfile
This creates a new auth item named server1auth, which uses the SSH user rho-user with a key stored in the key file. The password is input as a CLI prompt.
(The different rho commands are covered more in the Command Syntax & Usage section.)
All the information that rho needs is stored in the $XDG_CONFIG_HOME/rho and $XDG_DATA_HOME/rho folders. All the auths are stored in a credentials file, and all the profiles are stored in a profiles file. The Ansible playbook, rho_playbook.yml, is stored in the installed directory. The roles created during the scan are stored in the roles folder; these roles are used by rho_playbook.yml to perform the scan.
Running the scan is simple. Just point the rho tool to the profile to use and the facts to collect, and print the results to a CSV output file. Optional parameters are the number of processes Ansible should use and whether or not to process the profile using --cache. A newly created or freshly edited profile cannot be processed using the cache, because the program must first create an Ansible inventory called <profile name>_hosts.yml that includes the working hosts, each matched with an auth (the auths are chosen in the order passed in to the profile add or edit command, as will be explained later).
rho scan --profile big_test --facts facts_file --ansible-forks 100 --reportfile rep.csv
The output is in simple CSV format. If 'default' is the argument for --facts, the CSV output contains the following information:
OS,kernel,processor,platform,release name,release version,release number,system ID,username,instnum,release,CPU count,CPU vendor,CPU model,BIOS vendor,virtual guest/host,virtual type
jsmith,da3122afdb7edd23,Red Hat Enterprise Linux Client release 5.3 (Tikanga),2,GenuineIntel,Intel(R) Core(TM)2 Duo CPU,Award Software, Inc.,host,
As implied by the report output, rho differentiates between baremetal machines, virtual hosts, and virtual guests, and identifies several major virtual types (Xen, Qemu, KVM, and VMWare). It can be very important for inventorying machines and maintaining software licenses to separate virtual hosts from guests; rho returns that information with every scan, by default.
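(As an aside, and not part of rho itself: the CSV report is easy to post-process. A hedged Python sketch that tallies virtual hosts versus guests, assuming the 'virtual guest/host' column from the default header shown above:)

```python
import csv
from collections import Counter

def summarize_virtualization(report_path):
    """Count how many scanned systems report as virtual host, guest, or
    bare metal, using the 'virtual guest/host' column of the rho report."""
    counts = Counter()
    with open(report_path, newline="") as report:
        for row in csv.DictReader(report):
            counts[row.get("virtual guest/host") or "baremetal"] += 1
    return counts

print(summarize_virtualization("rep.csv"))
```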
Requirements & Assumptions
- Before installing rho, there are some guidelines about which machine it should be installed on:
- rho is written to run on RHEL or Fedora servers.
- The machine that rho is installed on must be able to access the machines to be scanned, so it must be on the network and the machines must be running.
- The target machines must be running SSH.
- The user account that rho uses to SSH into the machine must have adequate permissions to run commands and read certain files.
- The user account rho uses for a machine should have an sh-like shell. For example, it cannot be a /sbin/nologin or /bin/false shell.
- These python packages are required for the rho install machine to run rho:
- The following python packages are required to build & test rho from source:
Building the man page from source requires pandoc to be installed.
rho is available for download from Fedora COPR.
1. First, make sure that the EPEL repo is enabled for the server. You can find the appropriate architecture and version on the EPEL wiki:
rpm -Uvh http://fedora-epel.mirrors.tds.net/fedora-epel/7/x86_64/e/epel-release-7-10.noarch.rpm
2. Next, add the COPR repo to your server. You can find the appropriate architecture and version on the COPR rho page:
wget -O /etc/yum.repos.d/chambridge-rho-epel-7.repo https://copr.fedorainfracloud.org/coprs/chambridge/rho/repo/epel-7/chambridge-rho-epel-7.repo
3. Then, install the rho package:
yum install rho
Command Syntax & Usage
The basic syntax is:
rho command subcommand [options]
There are four rho commands:
- auth - for managing auth entries
- profile - for managing profile entries
- scan - for running scans
- fact - to show information about the facts rho can collect
auth and profile both have five subcommands:
- add - to create a new entry
- edit - to modify an existing entry
- clear - to remove any or all entries
- show - to display a specific entry
- list - to display one or more entries
fact has two subcommands:
- list - to display the list of facts that can be scanned
- hash - to hash sensitive facts within a report
The complete list of options for each command and subcommand is given in the rho manpage, along with other usage examples. The common options are listed with the examples in this document.
For expanded information on auth entries, profiles, scanning, and output read the syntax and usage document.
Begin by cloning the repository:
git clone git@github.com:quipucords/rho.git
rho currently supports Python 2.7, 3.5, 3.6. If you don't have Python on your system follow these instructions. Based on your system you may be using either pip or pip3 to install modules, for simplicity the instructions below will specify pip.
From within the local clone root directory run the following command to install dependencies needed for development and testing purposes:
pip install -r requirements.txt
In order to build rho run the following command:
In order to lint changes made to the source code execute the following command:
To run the unit tests with the interpreter available as
Continuous testing runs on travis: https://travis-ci.org/quipucords/rho
To run end-to-end functional tests against local virtual machines follow the information in functional test document.
Frequently Asked Questions
For expanded troubleshooting information read the FAQ document.
To report bugs for rho open issues against this repository in Github. Please complete the issue template when opening a new bug to improve investigation and resolution time.
Track & find changes to the tool in CHANGES.
Authorship and current maintainer information can be found in AUTHORS.
Reference the CONTRIBUTING guide for information to the project.
Copyright & License
Copyright 2009-2017, Red Hat, Inc.
rho is released under the GNU Public License version 2.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662863.53/warc/CC-MAIN-20190119074836-20190119100836-00020.warc.gz
|
CC-MAIN-2019-04
| 7,651
| 99
|
https://www.ernstrenner.com/my-artwork/
|
code
|
2D / 3D graphics design
I am completing more online programming courses right now, but as soon as I am finished, I will post more of my designs.
I have been drawing for as long as I can remember. Eventually, my crayons and colored pencils would be replaced by powerful opensource software running on high-end graphics workstations. Most of my work is done with these free applications:
- Blender 3D
- FreeCAD & CURA
Right now, I am working on giving the Little Composers Piano app a makeover. Here is a WiP screenshot.
I have created web pages since the mid 90’s and am amazed at how much has changed since the early days. I spend hours a day creating or updating online content and am proud of the fact that my work evaluates favorably when checked with GT Metrix or similar sites.
If your company website does not evaluate in the high 90’s and you want to fix the problems, feel free to contact me and I will take a look and follow up with improvement suggestions.
I also specialize in virtual private server (VPS) administration and Linux server security.
Content management systems
In addition to HTML/CSS-driven websites, I also work with Drupal and WordPress content management systems. Depending on the requirements, I find existing solutions or develop custom themes or plugins as needed.
My favorite 3D render
For as long as I can remember, drawing was my “thing”. Sadly, my grade 3/4 teacher did not like kids who sketch and scribble all the time but in the end, thankfully, it all worked out. 😉
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358973.70/warc/CC-MAIN-20211130110936-20211130140936-00059.warc.gz
|
CC-MAIN-2021-49
| 1,505
| 13
|
https://zoegarden.wordpress.com/2011/07/07/what-are-we-missing/
|
code
|
I think it is time to check in with everyone who reads the blog for their share information. We’re about a third of the way through the season and we’re getting ready to change over into the summertime vegetables. What do you, as members, need more or less of in terms of information?
- Do you need more recipes that try to use several ingredients at once?
- Do you need more recipes for a particular veg?
- Do you need more cookbook / website reviews?
- Do you need more general use tips?
- Do you need _________ ?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944742.25/warc/CC-MAIN-20180420213743-20180420233743-00379.warc.gz
|
CC-MAIN-2018-17
| 519
| 6
|
https://www.oreilly.com/library/view/active-directory-with/9781782175995/ch06s03.html
|
code
|
Obtaining an Active Directory replication status
In Active Directory, replication is crucial. Any replication failures can cause inconsistency and might provide different results to different sets of users. For this reason, organizations keep tight controls around monitoring Active Directory replication failures and consistency. Windows administrators rely on the well-known utility repadmin.exe to query the replication data. It provides very detailed information about the replication status between domain controllers and metadata details of DCs, objects, and so on. However, developing automation around the repadmin.exe utility is not so easy because the output is in text format and you need to parse it to get the required portion of ...
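For example, two common repadmin invocations (the DC name here is hypothetical):

:: summarize replication health across all domain controllers
repadmin /replsummary
:: show inbound replication partners and the last result for one DC
repadmin /showrepl DC01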
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00196.warc.gz
|
CC-MAIN-2022-49
| 756
| 4
|
https://gr.pinterest.com/delbinioti/
|
code
|
White Lipped Python
Gorgeous White Lipped Python
20+ Ways to Shake Up Your Look in the Bedroom | Apartment Therapy
Throw some paint on those walls and this would be complete coziness
Add a soft throw to cuddle up with like the OFELIA throw £20
Light & bright
What you can do with Nutella!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105187.53/warc/CC-MAIN-20170818213959-20170818233959-00179.warc.gz
|
CC-MAIN-2017-34
| 289
| 7
|
https://www.tenpennynyc.com/2242-how-to-prepare-perfect-summer-fruit-salad/
|
code
|
Summer Fruit Salad🍓.
You can have Summer Fruit Salad🍓 using 9 ingredients and 5 steps. Here is how you achieve that.
Ingredients of Summer Fruit Salad🍓
- Prepare 1 Cups of Sliced Strawberries.
- You need 1 Cup of Blackberries.
- It’s 1 Cup of Raspberries.
- It’s 1 Cup of Blueberries.
- You need 1 Cup of Sliced Kiwi.
- You need 1 of Peeled Apple chopped.
- Prepare 1/8 Cup of White Cane Sugar.
- You need 1/2 of Lemon.
- You need of 1 Mixing bowl.
Summer Fruit Salad🍓 step by step
- Peel and chop apple into chunks of desired size and add to Mixing bowl..
- Slice strawberries and kiwi and add to bowl..
- Add Blueberries, Raspberries, and Blackberries to bowl..
- Add the juice of the 1/2 of lemon to the fruit mixture..
- Fold in 1/8th cup of sugar to the fruit mixture until well combined. Cover and set in fridge for 30min. (Lasts two-three days in fridge).
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141753148.92/warc/CC-MAIN-20201206002041-20201206032041-00343.warc.gz
|
CC-MAIN-2020-50
| 879
| 18
|
http://www.easynn.com/application/network.htm
|
code
|
The Network view shows how the nodes in an EasyNN-plus neural network are interconnected. When this view is opened the Network Editor starts. Any source node can be selected with the left mouse button and any destination node can be selected with right mouse button. The network nodes and connections can be edited to produce any configuration.
How to create a new neural network
A new neural network can be created from theGrid by pressing the New Network toolbar button or selecting Action > New Network. This will produce the New Network dialog. This dialog allows the neural network configuration to be specified. The dialog will already contain the necessary information to generate a neural network that will be capable of learning the information in the Grid. However, the generated network may take a long time to learn and it may give poor results when tested. A better neural network can be generated by checking Grow hidden layer 1 and allowing EasyNN-plus to determine the optimum number of nodes and connections.
It is rarely necessary to have more than one layer of hidden nodes but EasyNN-plus will generate two or three hidden layers if Grow hidden layer 2 and Grow hidden layer 3 are checked.
The time that EasyNN-plus will spend looking for the optimum network can be controlled by setting the Growth rate variables. Every time that the period expires EasyNN-plus will generate a new neural network slightly different from the previous one. The best network is saved.
How to use the Network editor
To create a network manually using the Network Editor start with a suitable Grid and then use New Network but do not check Connect layers. This will create the optimum number of nodes but the weights between the layers will not be connected.
To connect and disconnect nodes the source and the destination of the weight connections need to be selected. This is done using the mouse. The left button selects the source and the right button selects the destination. First left click on a node to select the source and then right click on a node to select the destination. The selected source node will now have a wide red border. The other nodes in the source layer will have a narrow red border. The destination node will now have a wide blue border. The other nodes in the destination layer will have a narrow blue border. The right click will also open a menu of functions that are used to change the network connections or add and delete nodes.
Any nodes or layers can be connected to any other nodes or layers. Feed forward, feedback and skip layer connections are possible.
The number of hidden layer nodes can be increased or decreased and hidden layers can be added if needed. The nodes will be reconnected using the connections that are held in the slave memory. These connections will be as close as possible to its previous state. It can then be edited further as required.
Input nodes are connected to the input columns in the grid.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190183.80/warc/CC-MAIN-20170322212950-00105-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 2,957
| 11
|
http://ibmmainframes.com/about16056.html
|
code
|
Joined: 23 Feb 2006 Posts: 305 Location: Hyderabad,India
I am generating a sequence number in a COBOL program. I want to append the sequence number to a dataset created in a subsequent step.
I can't write a dynamic JCL to modify a skeleton JCL since my step takes a backup of a GDG generation.
While the dynamic JCL executes, new GDGs may be created, and that would create a problem when referring to the original GDG in the dynamic JCL.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863489.85/warc/CC-MAIN-20180620065936-20180620085936-00096.warc.gz
|
CC-MAIN-2018-26
| 433
| 4
|
https://forum.serverless.com/t/how-to-run-two-schedules-to-the-same-function-at-the-same-time/1361
|
code
|
Hello guys, Is there anyone having issues to run a lambda function with multiple schedules?
For example, Im runnning a function with the following crons
cron(*/15 * * * ? *)
cron(*/30 * * * ? *)
And this is causing my lambda function to go haywire
I’m getting callback was already called errors and I’m pretty sure that my code is ok…
For example when it is running at 20h15 and 20h45 my code is running great, but when it gets called at 20h30 and 21h00 due to the clash of the functions It’s doing some crazy stuff
Hmm, haven’t tried it myself. I would’ve assumed it worked, but errors like that suggest that there’s something more going on. Are you sure there’s no global variable in your code (i.e. everything is scoped correctly)? If the warm container is being reused, there might be some “bleed” between the two.
If you just want it to work, why not just create two separate functions? The only downside I can see is that you’ll have to pay for two CWE Rules, but you won’t incur any additional Lambda costs - you only pay for the GB Seconds you use.
Yep, just figured out that sometimes the container is kept for the lambda function, so when there are variables that are not inside the handler scope, their state is kept.
I didn’t know about this behaviour. =(
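For anyone hitting the same thing, a minimal sketch of the fix (the handler body is hypothetical): state declared at module scope survives warm invocations, while state declared inside the handler is per-invocation.

// BAD: module-scope state is shared across warm invocations
// let alreadyCalled = false;

module.exports.handler = async (event) => {
  // GOOD: per-invocation state lives inside the handler, so overlapping
  // schedules reusing the same warm container no longer interfere
  let alreadyCalled = false;
  if (!alreadyCalled) {
    alreadyCalled = true;
    // ... do the scheduled work here ...
  }
  return { statusCode: 200 };
};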
From the meetup last night, check out:
Slides 10 and 11 discuss some of the limit testing that the IOpipe team has done. Super interesting stuff.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104209449.64/warc/CC-MAIN-20220703013155-20220703043155-00701.warc.gz
|
CC-MAIN-2022-27
| 1,447
| 13
|
https://virtfusion.com/2023/07/version-2-3-0-testing-build-3-released/
|
code
|
- Added the ability to use Redis as the task queue driver. See https://docs.virtfusion.com/guides/task-queue for setup instructions.
- Added support for Libvirt domain power management statuses. (If a VM operating system sleeps, it will show in the UI as
- It’s now possible to view the QEMU, Libvirt, CPU and PHP versions from the hypervisor settings page in the admin UI.
- Improved task queue error reporting.
- Improved task status checks based on VM states.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511002.91/warc/CC-MAIN-20231002164819-20231002194819-00502.warc.gz
|
CC-MAIN-2023-40
| 464
| 5
|
https://anindyamukherjee.wordpress.com/
|
code
|
In spring mvc, the view is backed by a command (form backing) object. Everything is fine when there is a one-to-one mapping between the command object and the view fields. But when there is functionality like adding rows via js in an html table, where the command object can grow dynamically, … More Dynamic binding in spring mvc
There is a very nice but badly documented object in js which can be used to copy clipboard contents to and from your webpages. The js object is called clipboardData. I tried this in ie 6 and mozilla 3.5 browsers. It works fine with both of them. Here is a code snippet of how to … More Clipboard object in js
In the project that I am working in currently we are having some serious performance issues.. There is a page that does about 1000 inserts under a single transaction. On looking into that we found that there is one insert that is taking about 94 ms to insert one record. Ok so for 1000 records … More Clustered and non clustered indexes
Ok so finally I have my own place in the net apart from my orkut account 🙂 Well to start with I am a software engineer by profession and I love it 🙂 Apart from that my hobbies include listening to music and playing games on my psp… Finally, those wondering what is the image … More Hey ….
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864461.53/warc/CC-MAIN-20180521161639-20180521181639-00579.warc.gz
|
CC-MAIN-2018-22
| 1,286
| 4
|
http://forums.zimbra.com/administrators/18658-bouncing-messages-filter.html
|
code
|
I have a need to bounce messages with a custom bounce message.
For example, if body contains blah, bounce it back to the sender with:
Sorry, your message contains the word blah. It has not been delivered to the recipient.
Is there a way to achieve this within zimbra?
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276780.5/warc/CC-MAIN-20160524002116-00128-ip-10-185-217-139.ec2.internal.warc.gz
|
CC-MAIN-2016-22
| 267
| 4
|
https://dev.arvados.org/projects/arvados/wiki/Controller_architecture
|
code
|
sdk/go/arvados (Apache2) provides http endpoints (method/path), request/response structs (Collection, CreateOptions, UpdateOptions).
lib/controller/federation provides an Interface with a method for each Arvados API action, e.g., CollectionList(context.Context, ListOptions) (CollectionListResponse, error).
lib/controller/federation provides Conn, which implements federation.Interface by fanning out to multiple backends (typically one local and several remotes, to suit cluster config). Federation-unaware APIs just call through to the default (local) backend.
lib/controller/rpc provides Conn, implements federation.Interface by calling an Arvados controller's http server.
lib/controller/railsproxy implements federation.Interface using an rpc.Conn whose target is the local RailsAPI server.
lib/controller/router provides an http.Handler that maps each HTTP request to a backend (federation.Interface) method: deserialize the request to a Provider call signature, check auth scope, call the backend method, and serialize the return values as an HTTP response.
lib/controller provides an http service consisting of a router with a federation.Conn backend.
The rpc, federation, and (future) localdb packages offer backends with a common interface, so any given program can switch easily between using the federation and model logic built into its own binary and calling out to a different process or host.
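A sketch of the shape of that interface in Go, using the signature quoted above (abridged and illustrative; the exact method set and type names may differ from the upstream source):

package federation

import (
	"context"

	"git.arvados.org/arvados.git/sdk/go/arvados"
)

// Interface has one method per Arvados API action; the request/response
// structs come from sdk/go/arvados.
type Interface interface {
	CollectionList(ctx context.Context, options arvados.ListOptions) (arvados.CollectionList, error)
	CollectionCreate(ctx context.Context, options arvados.CreateOptions) (arvados.Collection, error)
	CollectionUpdate(ctx context.Context, options arvados.UpdateOptions) (arvados.Collection, error)
	// ... one method per remaining API action
}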
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474669.36/warc/CC-MAIN-20240226225941-20240227015941-00805.warc.gz
|
CC-MAIN-2024-10
| 1,409
| 8
|
https://www.teamlocofit.com/podcast/episode-224-why-we-seek-to-redefine-healthy-in-our-client-lives-team-locofit-rebrand/
|
code
|
Apply for coaching at www.teamlocofit.com
Have specific questions for us you’d like answered on the podcast? (Your questions are completely confidential!) Submit them here: https://docs.google.com/forms/d/1BImRC65AWeb9C_TKi7dV6lJ9yyvEakTeBIgrSp_B6o8/viewform?edit_requested=true
Join our Facebook group! https://www.facebook.com/groups/1054619585077141
Subscribe to our newsletter where we share weekly, exclusive content https://www.teamlocofit.com/subscribe/
Follow us on Instagram: @laurinconlin @ryanconleypsa @karinanoboa @sammyfitsleeves @danni_aguilar @teamlocofit
#TeamLoCoFit #RedefineHealthy #Onlinefitnesscoach
DISCLAIMER: Links included in this description might be affiliate links. If you purchase a product or service with the links that we provide we may receive a small commission. Thank you for supporting our channel so we can continue to provide you with free content each week!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653071.58/warc/CC-MAIN-20230606182640-20230606212640-00299.warc.gz
|
CC-MAIN-2023-23
| 899
| 7
|
https://discuss.px4.io/t/position-trajectory-to-mc-position-control/7376
|
code
|
I have a few questions about the position control.
My goal is to send a real-time reference trajectory from the companion computer to the position control. My trajectory consists of position, velocity, acceleration and jerk.
Are velocity, acceleration and jerk used in the position/attitude control for the feedforward?
Is the SET_POSITION_TARGET_LOCAL_NED message supported, i.e. if I send that message with a certain frequency, is like a trajectory is passed directly to the position/attitude control? Or is it used more like a waypoint?
@tuloski By dynamic trajectories, I mean trajectories with high velocities / accelerations.
If you send a position setpoint, the position controller assumes that your target position is static. Once your target position starts to have relatively large accelerations, your position controller will not be able to properly track the setpoint.
@Jaeyoung-Lim Is it possible to implement a geometric controller in mc_att_control_main.cpp? If it is impossible, can you tell me why we need a separate controller? Actually, I want to change the control algorithm of the firmware from PID to a geometric one.
However, if you are testing out controllers or in development, it is better to have it running OFFBOARD as you can always shut down the setpoints published from the flight controller and regain control of the vehicle.
I still didn’t understand how the output of att_control is mixed into pwm output.
I think the normalized wrench is bullsh!t. Every serious controller outputs force (thrust) and torques. IMO the output of the controller should be a not-normalized wrench and then the mixer should be in charge of transforming the desired wrench in actuators output (for example knowing the pwm to thrust curve).
@chmi0611 which controller are you trying to implement? If needed I should have somewhere the Simulink code and the C code generated from simulink about the controller I developed and linked a few posts before.
Could you please explain a little how we can use torque as att-control? Actually, I am trying to control a drone with the thrust force and torques directly instead of the normalised thrust value and the body rate.
I have not done this. But just a suggestion. You can compute angular acceleration from torque (I don’t know how you will compute inertia of your drone), and from angular acceleration you can compute the desired body rates. I may be wrong.
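A rough numeric sketch of that suggestion (the inertia values are hypothetical; this is standard rigid-body dynamics, not PX4 code):

import numpy as np

# Hypothetical vehicle inertia [kg*m^2] and control period [s]
I = np.diag([0.02, 0.02, 0.04])
I_inv = np.linalg.inv(I)
dt = 0.004

def desired_body_rate(omega, torque):
    """Turn a desired torque into a desired body rate for the next step."""
    # Rigid-body dynamics: I * omega_dot = torque - omega x (I * omega)
    omega_dot = I_inv @ (torque - np.cross(omega, I @ omega))
    # Integrate over one control period to get the rate setpoint
    return omega + omega_dot * dt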
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00134.warc.gz
|
CC-MAIN-2022-33
| 2,395
| 13
|
https://atkelar.livejournal.com/58440.html
|
code
|
As I posted earlier, the mechanical problem was already fixed
and the camera was essentially working...
...but I decided to go all the way and fix that stupid flash too.
I had a suspect in mind and behold when I opened up the camera one
"last time", there was in fact a loose wire on the flash electronics.
Soldered it back to where it belongs, screwed everything together for
I don't know how many times it's been... and yes, camera fires up the
flash and there's the distinct buzz of a charging capacitor. Yay!
But... the release wouldn't work at all. The half pressed button would
cause the metering and autofocus to kick in but no reaction to the pressed
release. It turns out that somehow one of the flexprints finally gave in
and decided to break. Three of about ten lines on the PCB are snapped.
The next best option was to find out what these points are connecting and
add some thin wires to bypass the broken lines. Which I did. But now
there is no reaction at all... put in the battery, nothing.
Therefore: either there's another loose wire which I can't see even after
one night of searching, or there's a short somewhere in my soldering which
I also couldn't find after hours of measuring. Or the camera's main
controller chip is gone. Or there's another snapped line on these stupid
flexprints. For none of these do I have the resources or the spare parts to fix.
To summarize: I'm highly disappointed. Partly in Canon for causing the
root of the problem with bad plastics in the first place, partly in my
bad luck with flexprints. And I'm angry because I wasted almost an entire
week trying to get this thing to work and almost had it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585561.4/warc/CC-MAIN-20211023033857-20211023063857-00107.warc.gz
|
CC-MAIN-2021-43
| 1,643
| 24
|
https://addons.opera.com/en-gb/extensions/details/league-of-legends-events/
|
code
|
Notifications are finally supported.
- List of the upcoming 20 events (not cutting off at a day)
- With click on Event getting more details (Stream etc.)
->It automatically loads information about the stream
- Auto-timezone detection
- Auto-update Events
-> Turn off Notifications
-> Use 12 hour format
-> mm/dd/YYYY format
- Link to the calendar
- For Twitch streams you have the opportunity to open as popup
- This extension can access your data on some websites.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348509972.80/warc/CC-MAIN-20200606031557-20200606061557-00027.warc.gz
|
CC-MAIN-2020-24
| 462
| 12
|
https://watershedlrs.zendesk.com/hc/en-us/articles/360028698572
|
code
|
As an xAPI conformant LRS, Watershed can receive data from any xAPI conformant Activity Provider. Many Watershed reports are flexible and you can configure Watershed to display useful visualizations and metrics from almost any xAPI data set. To help you get the most out of your data, we’re working with a number of product vendors to ensure that the data they send is optimized to produce the best possible results in Watershed.
gomo is a collaborative, cloud-based responsive eLearning authoring tool enabling users to create multi-device learning. Completions, results, slide views and question responses are all tracked via xAPI.
- User Types
- Any user with access to the report builder (Global Admins, Area Admins, and some Users) can create reports looking at gomo data. Only Global Admins can set up the connection.
- Available on paid plans (Analyst, CLO, and Enterprise).
- Anybody can use this feature.
Building gomo courses
When creating your gomo courses, be sure to give courses, topics, slides and assets clear names so that you will know what they are when it comes to reporting. In particular, ensure that you populate the alt text for images when using graphical question types.
Connecting gomo to Watershed
gomo courses can be published in a number of different ways, three of which support xAPI tracking:
- As an xAPI package
- Via gomo hosting
- As a stand alone course
For gomo hosting, select gomo hosting as the Publish destination. Then speak with your gomo Account Manager to enable and configure the xAPI connection between your gomo hosting account and Watershed. They will need to know your Watershed endpoint, key and secret. You should create a new key and secret for use with gomo.
For an xAPI package, select SCORM / xAPI ZIP File (DOWNLOAD) as the Publish destination, then select Track via xAPI (undefined endpoint). In this mode, the LRS details will be passed to the gomo course from the platform launching that package. Details of how to configure that launching platform to work with Watershed will vary from platform to platform.
Publishing as a stand alone course uses the Track via xAPI (defined endpoint) option. Both publishing as a stand alone course and publishing for gomo hosting with a connection out to Watershed embed the LRS credentials into the package. For data security reasons, Watershed does not recommended these options.
Please note: If you do use one of the defined endpoint options, tracking to Watershed will only work if you access the content via HTTPS. For gomo hosting, this means you need to ensure your learners access the course via https://your-sub-domain.gomocentral.com and not the plain http:// address.
gomo xAPI data
gomo data uses the following xAPI verbs and activity types:
Activity type id
Statements about answered questions include data about available options, the response given, success, and time taken.
gomo data works best with Watershed's Activity report, which provides a detailed overview of individual courses, including detailed information about question responses. To configure, simply select the Activity report type and filter by the course or assessment you want to look at.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00513.warc.gz
|
CC-MAIN-2022-05
| 3,146
| 23
|
https://docs.microsoft.com/en-us/archive/blogs/seanearp/mcp-mentor
|
code
|
Just saw the following on Trika's blog. GREAT opportunity to share your technical/certification knowledge with others! Signing up right now...
Check this out: MCP mentors is a cool new program some people around here are cooking up, through which our certified community helps others get started in IT. To participate in this pilot, it makes most sense if you are in eastern U.S. or Canada, so you're in the same time zone. But more on that later...
Share your stories and help build the MCP community! Become an MCP Mentor.
As an experienced MCP, you have many stories--of both successes and failures. There was the time when the new guy brought down the entire system. The time you spent an entire week troubleshooting that elusive problem. And the time you completed your task in record time. Microsoft Learning invites you to share your stories, skills, and experience to make a difference in someone's life as an MCP Mentor.
HOW IT WORKS
The MCP Mentor Program matches an experienced Microsoft Certified Professional (you) with someone new to IT and studying to pass their first Microsoft Certification exam (your mentee). Through the program, your real-world perspective, technical skills, and community connections can help others overcome the experience gap to complete a path to proven skills, new career opportunities, and confidence.
As an MCP Mentor, you will share real-world experience with your mentee about objectives that are covered by the Microsoft Certification exam, helping to build self-confidence about his or her technical skills and preparedness. You will meet regularly with your mentee by phone or e-mail. In addition, you and your mentee will have access to an online community to share best practices, tips, and study tools. The online community also helps you and your mentee connect with other IT professionals of diverse experience, perspectives, and backgrounds.
VOLUNTEER FOR THE PILOT AT WALTER REED ARMY MEDICAL CENTER, UNITED STATES
Microsoft Learning is running a pilot of this program in conjunction with the IT Academy that is associated with Walter Reed Army Medical Center. The soldiers in this IT Academy program are recovering from recent injuries and waiting to find out if they will be discharged from the military. Most of these soldiers enter the IT Academy program with little or no experience in IT.
The MCP Mentor Program is not intended to be a replacement for training, but instead to supplement training. An experienced MCT supports these soldiers in their preparation. But these soldiers would benefit greatly from the kind of 1:1 mentoring relationship that you can provide. We need volunteer mentors to help these soldiers build the skills that are validated on their target exam(s), enter the IT profession, and join the MCP community. We are specifically looking for:
- IT professionals (vs. developers) with experience on MCDST exams 70-271 and 70-272 <corrected typo immediately after pressing publish. argh>
- Commitment of 2-4 hours per month over a 3-6 month period
- Local to mid-Atlantic United States preferred <edited 4/16...SEE NOTE FROM PROG. MNGR IN COMMENTS: YOU CAN PARTICIPATE REMOTELY!!!!>
HOW TO PARTICIPATE
If you are interested in volunteering, go to http://connect.microsoft.com and enter this Invitation ID to fill out an application survey: MNTR-CGPX-QQKK. If you have questions about this program, you can contact the MCP Mentor Program administrators at email@example.com.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348504341.78/warc/CC-MAIN-20200605205507-20200605235507-00113.warc.gz
|
CC-MAIN-2020-24
| 3,456
| 15
|
http://www.lucyturnbull.co.uk/2015/02/09/lucy-and-rich-a-taunton-pre-wedding-shoot/
|
code
|
02 • 09 • 15 Lucy and Rich – a Taunton pre-wedding shoot. Posted in Engagement shoot. Tags: reportage pre-wedding photography, Somerset engagement photographer, Somerset pre-wedding photography. 1 comment: JW Blooms - Love these. We’ll be doing Lucy and Rich’s wedding flowers, and am already excited to see the photos! Jan
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323808.56/warc/CC-MAIN-20170629000723-20170629020723-00676.warc.gz
|
CC-MAIN-2017-26
| 849
| 1
|
https://www.accenture.com/us-en/blogs/blogs-why-technology-internship-best-decision
|
code
|
Last summer, I was trying to decide between a software engineering internship and an Accenture Technology internship in Chicago. As someone equally interested in the intersection of technology and business, I can say that choosing Accenture was one of the best decisions I’ve ever made.
First, on the engineering experience I received: At Accenture, we got to work along the full stack. I personally had the opportunity to:
Build three web apps from scratch using Angular2/RxJs (a live-chat app, a data records dashboard, and an API search interface).
Build backends with Node.js and Amazon Web Services (DynamoDB, Lambda functions, EC2).
Work on a Java microservice backend using the Netflix OSS stack.
Set up a full DevOps pipeline from scratch, from continuous integration to continuous deployment, using Jenkins.
It was super intense, but I learned so much. I was working with a team of about eight senior developers, and they pushed me every single day. The engineering practices are stellar, because that’s what we were selling. To top it off, our work was demoed to Fortune 500 CEOs—I got to listen in on the meetings, too!
This brings me to the business-side experience I received. At a pure tech company, you focus narrowly on building software. At Accenture, you see how that software is applied in the real world. Over the summer, I was able to have one-on-one meetings with many managing directors, each of whom was responsible for a different industry—financial services, health care, technology, etc. It was fascinating hearing their perspectives on where they thought their industries were going. Also, the mentorship culture at Accenture is phenomenal!
If you’re interested in the intersection of business and technology, the Accenture Technology internship is one of the most exciting ones out there!
Get a head-start on your career and apply for an opportunity today.
Copyright © 2017 Accenture. All rights reserved. Accenture, its logo, and High performance. Delivered. are trademarks of Accenture.
This document makes descriptive reference to trademarks that may be owned by others. The use of such trademarks herein is not an assertion of ownership of such trademarks by Accenture and is not intended to represent or imply the existence of an association between Accenture and the lawful owners of such trademarks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154796.71/warc/CC-MAIN-20210804045226-20210804075226-00060.warc.gz
|
CC-MAIN-2021-31
| 2,348
| 12
|
http://javaassignmenthelp74589.affiliatblogger.com/11020958/details-fiction-and-java-assignment-help
|
code
|
— A zip archive containing source code for the many end-of-chapter exercises. These are extracted from the web pages that contain the solutions, as a convenience. They are not part of the Web site download. See the README file. Size: 322 Kilobytes.
Yes! I am here to help you, and I am not merely going to assist you with Java project development; I will also share a hundred unique ideas.
The entire explanation was terrific, and I got my application running successfully in a single shot by following this content.
These should be considered experimental. Depending on the specific ebook reader you use, there may be problems with the rendering of long lines in code samples. You may see that lines which are too long to fit across your screen are incorrectly split into multiple lines, or that the part that extends beyond the right margin is simply dropped.
Beautiful coding provides exceptional assistance in projects connected to programming. Anyway, thanks to him for getting my project done in a shorter span than we expected.
The install_jar procedure in the SQL schema adds a JAR file to the database. The first argument of this procedure is the full path name of the JAR file on the computer from which this procedure is run.
The first kind of equality generally implies the second (apart from things such as not-a-number (NaN) values, which are unequal to themselves), but the converse is not necessarily true.
The significance of this type-checking lies in the operator's most common use: in conditional assignment statements. In this usage it appears as an expression on the right side of the assignment statement, as follows:
MD5 hash is not as expected. Expected: a8e4d4ede43c5da5ea1355e3a465872b and found 08b66fce7afe1fc3c48d295fc0c219f6.
I am a mechanical student from Hong Kong, China. I am passionate about machines, but in our second semester I took a programming subject. Programming is a rather trying activity for me.
The DROP PROCEDURE statement deletes the procedure SHOW_SUPPLIERS if it exists. In MySQL, statements in a stored procedure are separated by semicolons. However, a different delimiter is required to end the CREATE PROCEDURE statement.
I removed and reinstalled the JRE via Build Path and then removed and re-imported my project, which solved this issue automatically. Many thanks, gyro.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826856.55/warc/CC-MAIN-20181215105142-20181215131142-00153.warc.gz
|
CC-MAIN-2018-51
| 2,597
| 13
|
https://coderanch.com/t/64372/application-servers/jsp-compilation
|
code
|
the weblogic jsp compiler takes many args, with one of them being -commentary. What exactly does this do? The documentation just says something to the effect of: Causes the JSP compiler to emit commentary. thanks in advance paul
This is from jsp.pdf you can download from WL: -commentary Causes the JSP compiler to include comments from the JSP in the generated HTML page. If this option is ommitted, comments do not appear in the generated HTML page. Stuff I've read talks about not including sensitive comments in the HTML that the user can view. [This message has been edited by Michael Hildner (edited January 18, 2001).]
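For example, assuming weblogic.jar is on the classpath (the JSP filename is made up):

java weblogic.jspc -commentary myPage.jsp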
thanks... we are using ant to precompile all of our JSPs using weblogic.jspc, and occasionally a jsp will generate an error, but all that is displayed in the console is "error: Result 1". However, the jsp compiles and works fine when you visit the page. paul Also I can't seem to find jsp.pdf on the bea site, do you know where it is? thanks [This message has been edited by Paul Wetzel (edited January 18, 2001).]
I don't know about your error, as I haven't even tried to compile a .jsp - I just deploy it. For the book, go to http://e-docs.bea.com/wls/docs60/index.html , click on search and type in jsp.book. You can open or download it from there.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00082.warc.gz
|
CC-MAIN-2021-49
| 1,255
| 4
|
https://www.cle.fr/test-your-level/
|
code
|
Assess your level of French by trying out our tests. They consist of a series of multiple choice questions, and only your knowledge of vocabulary and grammar are tested. If you get 50% of the questions at a particular level right, then that’s most probably the level you are at.
Nevertheless, if you enroll at CLÉ, you will sit a full test on your first day assessing all of your oral and written abilities. Don’t be stressed at the thought of taking a test, it will simply be a matter of carefully assessing your level to place you in the right class.
- False beginner level: vocabulary (40 questions)
- False beginner level: grammar (10 questions)
- Vocabulary A1 (25 questions)
- Grammar A1 (25 questions)
- Vocabulary A2 (30 questions)
- Grammar A2 (30 questions)
- Vocabulary B1 (30 questions)
- Grammar B1 (20 questions)
- Vocabulary B2 (40 questions)
- Grammar B2 (10 questions)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510326.82/warc/CC-MAIN-20230927203115-20230927233115-00116.warc.gz
|
CC-MAIN-2023-40
| 890
| 12
|
https://slashdot.org/~Jerry/
|
code
|
"Major Banks and Parts of Federal Gov't Still Rely On Cagey Programmers Who Never Write Decent Comments To Support Programs Instead Of Hiring People To Write Decent Comments."
It's not so much "cagey programmers" as it is over-worked programmers, especially at the State level, where computer-illiterate legislators continue to dream up new legislation that puts pressure on coders to modify existing software to meet the legal demands. Except for management, most of whom are computer illiterates as well, State programmers are underpaid and overworked. Many States are having severe financial tax shortfalls, so there won't be new programmers added to their teams any time soon. I wrote extensive documentation INSIDE my code to explain to any coder who took on my projects after I retired what I did and why I did it that way. Documentation for the users was rarely written, because it was the users (clerks) whose functions I was computerizing who dictated what the GUI interface looked like and what the underlying software did. If they weren't happy I wasn't happy. So, I didn't need to write documentation for them. They usually trained their replacements, and the newbie clerks could ask their fellow clerks if they had questions.
The State Dept of Revenue in the midwest state where I worked has been using a mainframe running COBOL for almost 50 years. About a dozen years ago the suits decided to deploy Oracle as a "replacement". Now they have two database systems, and Ellison lies awake nights thinking how to charge more for existing installations. Oracle has ended up costing more in the last decade than the COBOL system has in the last half century. Now they are stuck with Oracle and the taxpayers are stuck with the bill.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00107-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,743
| 3
|
https://www.digitalocean.com/community/questions/app-platform-build-missing-last-db-migration-file
|
code
|
I’ve done about 8 successful automatic GitHub builds. The first db migration (single table) was successful. Adding the second migration (new table) worked. However, my 3rd migration, which added new columns to the first table, was never run. I checked in GH and the 3 migration files are there, and the build references the latest commit, but the build logs only show 2 migration files. I’ve tried doing a whole rebuild and still nothing. My DB only reflects the first 2 migration files.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00712.warc.gz
|
CC-MAIN-2023-50
| 882
| 4
|
https://community.deeplearning.ai/t/question-on-the-benefit-of-cnn-sparsity/241006
|
code
|
There is a question in the quiz asking about the benefit of using convolutional neural nets regarding “parameter sharing”. I chose “It allows gradient descent to set many of the parameters to zero, thus making the connections sparse”. I thought this was true because the gradient of the max pooling layer is quite sparse, and also the filters only apply to one small patch at a time.
However, the system said that this answer was wrong. Could anyone help me understand? Is my understanding on sparsity wrong/incomplete, or just that this sparsity is not related to “parameter sharing”?
Thank you in advance!
As part of the programming assignment in week 1, you’ll implement both forward and backward passes of conv and pooling layers. It should help you see the difference between fewer parameters and sparsity in weights. A conv layer will have fewer parameters than replacing it with a Dense layer.
As far as the pooling layer is concerned, it influences the gradient calculation and not the actual weights of the conv layer by setting them to zero. Hope the computational graph from 1st course helps where the pooling layer (having 0 learnable parameters) follows a conv layer.
It’s important to realize that even though pooling layers do not have learnable (trainable) parameters, they do still pass gradients through during back propagation as Balaji described. In a max pooling layer, the gradient will only affect one of the weights in each segment covered by one step, but the other weights are not set to zero: they are simply not modified by the gradient, so that does not encourage sparsity. A zero gradient does not imply a zero weight, right? In the case of average pooling, the gradients will be even distributed over all the inputs in a given “step” of pooling.
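A tiny numpy sketch of that routing for one 2x2 max-pool window (illustrative only): the upstream gradient flows to the max input, the other inputs get zero gradient, and no weight is set to zero.

import numpy as np

window = np.array([[1.0, 3.0],
                   [2.0, 0.5]])   # one 2x2 pooling window of activations
dout = 5.0                        # gradient arriving from the layer above

# Route the gradient only to the position that held the maximum
mask = (window == window.max())
dwindow = mask * dout
print(dwindow)
# [[0. 5.]
#  [0. 0.]] -> the other inputs receive zero gradient this step;
# any upstream weights are simply left unchanged, not set to zero.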
Thank you so much both @balaji.ambresh and @paulinpaloalto! I learned a lot from your comments. I am now clear that 1) fewer parameters != sparsity in weights, and 2) not modifying a weight via a zero gradient is absolutely not the same as setting it to zero!
Thank you again for your clarification and avoiding many hours that could potentially be wasted assuming wrong things
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297290384.96/warc/CC-MAIN-20240425063334-20240425093334-00545.warc.gz
|
CC-MAIN-2024-18
| 2,163
| 8
|
http://www.ducatisportingclub.com/showthread.php?t=86270
|
code
|
I've managed to get the RSS feeds working, so articles are dumped into the News Feeds forum for you to browse.
Most feeds from Crash.Net just contain a couple of lines and a link to the story, others may have the full story.
The News forum needs to be locked (ie, you cannot create threads or reply to them) to ensure it works correctly, but that doesn't stop you copy'n'pasting or referencing threads in other sections.
Finally, if there's another RSS feed that you use which you think would be useful to include, please let me know!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827998.66/warc/CC-MAIN-20181216213120-20181216235120-00354.warc.gz
|
CC-MAIN-2018-51
| 534
| 5
|
https://magenaut.com/tag/hdf5/
|
code
|
Dask emphasizes the following virtues:
I have a reasonable size (18GB compressed) HDF5 dataset and am looking to optimize reading rows for speed. Shape is (639038, 10000). I will be reading a selection of rows (say ~1000 rows) many times, located across the dataset. So I can’t use x:(x+1000) to slice rows.
I am trying to read data from an hdf5 file in Python. I can read the hdf5 file using h5py, but I cannot figure out how to access data within the file.
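A minimal sketch of poking around an HDF5 file with h5py (the file and dataset names are made up):

import h5py

with h5py.File("data.h5", "r") as f:
    print(list(f.keys()))          # names of top-level groups/datasets
    dset = f["my_dataset"]         # open a dataset by name
    print(dset.shape, dset.dtype)  # metadata without reading the data
    rows = dset[0:10]              # slicing reads into a numpy array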
I’m trying to save bottleneck values to a newly created hdf5 file.
The bottleneck values come in batches of shape
Saving one batch alone is taking up more than 16 gigs, and python seems to be freezing at that one batch. Based on recent findings (see update), it seems hdf5 taking up large memory is okay, but the freezing part seems to be a glitch.
I have a struct array created by matlab and stored in v7.3 format mat file:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817699.6/warc/CC-MAIN-20240421005612-20240421035612-00543.warc.gz
|
CC-MAIN-2024-18
| 882
| 8
|
https://docs.meshcloud.io/blog/2020/03/20/release-0
|
code
|
Release period: 2020-03-12 to 2020-03-20
This release includes the following issues:
- Seller information on chargeback statement
- meshTenant Fees
- Visualization of Multiple Currencies
- Tenant list searching
- Tenant information with replication details
- Deletion of OpenStack projects with images
Seller information on chargeback statement
Audience: Partner, Customer, User
The chargeback statement will show a breakdown of the costs per seller and seller product group. The seller and seller product group will also be included in the chargeback statements CSV export.
Audience: User, Operator
Fees per meshTenant for AWS and Azure can now be configured in the meshStack Price Catalog. They get applied on a daily basis. This enables operators to apply a certain management fee for e.g. an AWS Account or an Azure subscription.
Visualization of Multiple Currencies
Audience: User, Customer, Partner, Operator
Multiple currencies are visible in tenant usage reports and chargeback statements. If different line items in a tenant usage report are charged in different currencies, the tenant usage report will show the totals for each currency. These totals are visible in the tenant usage report overview list as well as in individual tenant usage reports. The same feature is implemented for the chargeback statement overview list and individual chargeback statements. In the project dashboard, the "project cost per payment period" chart will show a bar per currency per reporting period.
Tenant list searching
The list of tenants can now be searched/filtered for all columns
How to use
Open the administration area and click on 'Tenants' in the bottom-left. Type into any of the search boxes at the top and filter tenants on either customer, project, location, platform or tenant status.
Tenant information with replication details
The list of tenants in the administration area now has a detailed view of a tenant.
How to use
Open the administration area and click on 'Tenants' in the bottom-left. Click on 'View more' to view more information about the tenant like replication details including system remarks and user remarks.
Deletion of OpenStack projects with images
When trying to delete a project with an OpenStack meshTenant which includes images the panel will now list the images instead of showing an error message.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00479.warc.gz
|
CC-MAIN-2022-33
| 2,334
| 26
|
https://permies.com/t/75775/raising
|
code
|
Jonathan Ward wrote: I've gotta ask. Where did you learn about the comfrey/turmeric poultice? Are there specific places on these forums I should be looking? Sorry, still new to the forums.
Jonathan Ward wrote: I think I'm an information hoarder lol.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912206016.98/warc/CC-MAIN-20190326200359-20190326222359-00289.warc.gz
|
CC-MAIN-2019-13
| 462
| 4
|
https://emacs.stackexchange.com/questions/2545/how-do-i-search-in-search-results
|
code
|
If you use library Icicles then you can easily do this kind of thing. What you are asking for (if I understand correctly), is to search only within certain search contexts.
For example, as in this case, you might want to search only within function definitions - the search contexts are function definitions. In Lisp, these would be things like defun forms.
Icicles has several predefined Icicles search commands for searching definitions like this. These are collectively called Icicles Imenu commands.
To search only command definitions, you can use command icicle-imenu-command-full. To search only non-interactive function definitions, use the corresponding non-interactive-function command.
Beyond searching definitions, you can easily define any kind of contexts to be searched. The simplest way is by providing a regexp. Command icicle-search prompts you for the search context-defining regexp. You can alternatively use a function to define the search contexts.
Other possibilities include:
Searching the text of different kinds of THINGs (e.g., sexps, sentences, lists,
strings, comments, XML elements,...), i.e., ignoring other text outside the THINGs.
Searching zones of text that have given text or overlay properties, i.e., ignoring other text.
Other answers here that mention occur and similar commands (helm-occur) provide a limited kind of context searching: the search contexts are just the lines of a buffer. That is much more limited than, say, searching within whole function definitions, which is what I think you are asking for. With Icicles, command icicle-occur (bound to C-c ') lets you search within lines as search contexts.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00474.warc.gz
|
CC-MAIN-2022-21
| 1,585
| 17
|
https://appstream.debian.org/sid/main/issues/heaptrack-gui.html
|
code
|
Last updated on: 2023-03-20 14:21 [UTC]
Hints for heaptrack-gui in main
org.kde.heaptrack.desktop ⚙ amd64
:49 - "Profiler pamäťovej haldy pre Linux." ("Heap memory profiler for Linux.")
The component summary should not end with a dot (`.`).
This `desktop-application` component has no `desktop-id` launchable tag, however it contains all the necessary information to display the application. The omission of the launchable entry means that this application can not be launched directly from installers or software centers. If this is intended, this information can be ignored, otherwise it is strongly recommended to add a launchable tag as well.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00595.warc.gz
|
CC-MAIN-2023-14
| 612
| 6
|
https://community.oracle.com/customerconnect/discussion/485027/unable-to-connect-to-atp-database
|
code
|
Unable to connect to ATP Database
Summary: Am unable to connect to newly provisioned ATP Database
I have provisioned 2 ATP and one ADW Databases and have downloaded the Wallet files for each of them. However, I am not able to connect to any of the databases using SQL Developer. I am using SQL Developer version 18.3 and I know it supports connections using Cloud wallet. I have connected to ATP databases in the past.
The attached images show that the DB is up and running and the connection error in SQL Developer.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476399.55/warc/CC-MAIN-20240303210414-20240304000414-00622.warc.gz
|
CC-MAIN-2024-10
| 514
| 4
|
https://djinni.co/developers/?exp_years=1y&title=.NET
|
code
|
Kharkiv · $4500 · 8 years of experience · Advanced/Fluent
8 years of experience with .NET stack. 4 years of experience as a technical leader. 1.5 years of experience with python/machine learning working on personal projects and participating in competitions. Technical leader with 8 years of full-cycle project development experience, communication with third parties, delivering on time within budget. Constantly learning new technologies and approaches, currently interested in machine learning, microservice design, highload, functional programming. Experienced in delivering a project from the requirements stage to a fully functioning project. Excellent problem-solving skills, known for the ability to find solutions for complex problems. Independent worker with remote work experience. Core skills: - Technical leadership - Full-cycle project development - .NET Core, ASP.NET, ASP.NET Core, Entity Framework, T-SQL - Python, Flask, Dramatiq, RabbitMQ, Machine Learning, Keras, Tensorflow, NumPy, Pandas, Scikit-Learn - ES6, TypeScript, React - Microservice Design, Event-Driven Design, Docker Containers, Message Queues - Test-Driven Development, Multi-threading, Amazon AWS - Linux
Built a gaming platform from scratch, started as a junior developer and eventually ended up leading the project. Helped a startup to be merged with a larger company. Developing a personal project which involves various stock statistical analysis indicators and reinforcement learning techniques.
Growing as a technical leader. Working on highload data-heavy projects. Building machine learning pipelines. Considering part-time offers. Can lower salary expectations for a machine learning project. Not interested in projects in gambling industry.
Moscow, St. Petersburg · $1400 · 1 year of experience · Intermediate
I have one year experience in commercial software development using blockchain technology. My stack of technologies includes C#, .Net core, Solidity, MS SQL Server, Аngular 5+. I have knowledge of OOP, OOD, SOLID principles; of algorithms and data structures; of SQL and ability to work with databases, also have a good understanding of cryptocurrences. The biggest projects in which I participated are the cryptocurrency exchange and investment index fund. I was working as a full stack developer. Also, I have an experience in machine learning: I know the AI algorithms and have implemented several of them for text and shapes recognition.
C#, .NET, CSS, HTML, Angular.js, GIT, Bootstrap, SQL, Solidity, Machine learning, Entity Framework, LINQ, SOLID
I want to work on interesting projects with professional and friendly teammates. Always ready for new challenges. Opportnity of professional grouth is important for me.
Remote work, Ukraine · $4200 · 7 years of experience · Upper Intermediate
Various solutions, mostly for enterprise customers. Ready to start a new project, but don't mind supporting a decent-looking legacy product.
C#, .NET, Entity Framework, LINQ, OOP, SQL, Git, Design Patterns, REST API, MVC, SOLID, WinForms, WPF, WCF, MSSQL
Delivered a lot of complete solutions that still work
Flexible schedule. Paid overtime (or, better, no overtime). Possibility to work outside the office at least sometimes
Kyiv, Lviv, Kharkiv · $1700 · 1.5 years of experience · Upper Intermediate
Participated in three commercial and 2 internal projects
Remote work, Ukraine · $4000 · 10 years of experience · Intermediate
I am Microsoft Certified professional developer with over 10 years of experience in software development I have very extensive experience in .Net (WPF, WinForms, MVC), Java, SQL, Oracle.
C#, mvvmlight, WPF, Angular, .Net Core 2, C# winforms, PL/SQL, T-SQL, MSSQL, TeamCity, NUnit, OOP/OOD/SOLID, ASP.NET WEB API, Azure DevOps, Entity Framework, Hibernate, Dapper, REST, SOAP
Kyiv · $3200 · 3 years of experience · Upper Intermediate
I am a Middle .NET Developer. I am highly motivated, always improving my skills and open to new domains and technologies. I develop new services and extend existing ones at my current work. Our technologies: .Net Core, ASP.Net core Web Api, ElasticSearch, Kafka, Git, Redis, NUnit, Rx.NET, Nomad, Docker, Team City, Micro-service architecture. I would like to find an interesting high-loaded project.
ASP.NET CORE, .NET Core, C#, REST API, OOP, SOLID
Kharkiv · $300 · 1 year of experience · Pre-Intermediate
https://youtu.be/frUFtqik8lY, http://foto-plus.atwebpages.com, https://moko.com.ua
Raising my professional level
Kyiv · $1600 · 2 years of experience · Upper Intermediate
Based: -Designing, developing business applications and services in the company's divisions (.NET MVC + SQL). -Support of corporate project teams to implement / change corporate systems. Improve processes and solutions: -Based on the knowledge of existing business processes, systems and IT technologies, looks for ways to improve the business, shares information with business users and provides recommendations for implementing the appropriate solution in the department. -Participates in business requirements analysis, design and implementation of relevant solutions, projects and services in the unit.
C#, .NET, Entity Framework, OOP, SQL, MVC, asp.net mvc, WPF, HTML, LINQ, Git / TFS, WinForms, XML, ADO.NET, ASP.NET
Added new functionality (MVC + SQL) to existing enterprise programs and services, making users' daily work easier.
Remote work, Ukraine · $200 · 1.5 years of experience · Beginner/Elementary
Worked on coursework projects at university. Also completed many online courses on web development.
Fast self-development. I have been studying at university for only 1.5 years, yet almost all my programming grades are above 95.
I want to find a new team with which I can keep developing myself.
Kharkiv, Kyiv · $1600 · 4 years of experience · Pre-Intermediate
c#, .net, js, react, git. See the "Expectations from work" section.
I can work in a team. Client-server principle, SOLID, multi-tier applications, mobile applications.
At the moment I work as a back-end developer. Remote vacancies are very welcome.
https://bitspark.de/blog/what-is-visual-programming
Learning a text-based programming language can be as difficult as learning a spoken language. It requires us to wrap our minds around a very different way of solving problems, one which is far from intuitive. Also, programmers cannot simply replace one programming language with another or mix them together to create a slang - yes, pun intended.
A visual programming language (VPL) allows users to create illustrations that describe different types of processes. It is a technique designed to work with our ability to explain concepts through visual means. The inclusion of graphical elements makes it accessible to new learners; these elements can be manipulated to construct programs. Visual programming does not confront the user with abstract information like classes, instances, and I/O. Instead, it allows a person to construct a solution to a problem in a way that can be easily understood by other humans.
While commonly used to assist in the creation of computing processes, VPLs have been a part of much more. There are hundreds of available visual languages, covering topics including education, multimedia, simulation, video games, automation, and data warehousing. Very few limiting factors hold visual programming back. Drawing out a workflow plan on a sheet of paper is not as intricate as a computer program, but it is a starting form for a VPL.
The Beginning of visual programming
This process has been used for almost a century and predates the first computer. In the 1920s, a new form of planning became paramount as the world progressed at a staggering rate. This led to an increased interest in documenting the processes of industrial construction. It was only natural that this documentation was done visually, in what we know today as a flowchart. The method grew in popularity and became commonplace for breaking down complex problems and increasing automation.
It wasn't until 1949 that this form of visual programming was used in conjunction with computer programs. John von Neumann and Herman Goldstine needed to correctly set multiple ring-counter switches to control input and output. With thousands of switches to set up, they adopted a flowchart system to help them. Neumann and Goldstine were successful and proved that VPLs could be used in the rapidly growing world of computers.
Advancements in the 1960s and 1970s
Computer scientists continued to test the range of VPLs as the graphical abilities of computers increased. One such test was done in 1963 by Ivan Sutherland. For his thesis, he created Sketchpad, the first complete graphical user interface. It allowed a user to draw on the screen and have the drawing manipulated based on information input via switches. These drawings were able to interact with each other and respond to multiple user commands, including changing a shape's proportions, moving the shape, adding and removing parts of a shape, and dragging pieces of a shape. Incredibly, the computer was thinking in real time.
Computers still lacked the hardware that made significant real-time interaction possible, so some time would pass before visual programming saw another major advancement. It wasn't until 1975 that David Canfield Smith published his own thesis: a visual programming language named Pygmalion.
Pygmalion was stated to be a language that could take the natural creativity of the human mind and translate it into the abstract language a computer could understand. No longer would someone have to try to envision how a computer understood a program; they would be able to see it as a graphical snapshot. All users had to do was change things based on the snapshot they were given. Pygmalion would present an opening snapshot describing its state, and programmers could then describe how they wanted that snapshot to change. This thesis would be echoed throughout several future computing languages.
The Introduction of Personal Computers
It wasn't until the 1980s that computers became mainstream. Companies across the world ran on computers, and they were being bought for home use as well. Finally, computers were being built with enough power to process visual programs without taking up too much space. This led to more and more people working on VPLs.
Between 1982 and 1985, Prograph was envisioned and designed for Apple's Macintosh computers. Acadia University's staff and students agreed that diagrams were much more useful for denoting workflow, and they created a new VPL to prove it. It used objects as a form of communication: hexagons placed on a graph. Users could interact with them by clicking on either of the hexagon's sides. One side would produce data, and the other side would give the user information on the object's methods. Graphs could be formed between the shapes to show how data was flowing and how certain actions would affect that flow.
For the most part, visual programming wasn't used on Windows computers at this time. They simply didn't have a graphics operating system that could handle modern VPLs. On the other hand, computers such as the Commodore Amiga, Acorn Archimedes, and Apple Mac could easily run these programs.
Current Uses of visual programming
In the beginning, VPLs benefited from computer hardware growing more powerful. Yet, as programming problems became more complicated, even VPLs had trouble; there were just some things that were too difficult to depict visually. In the late 1990s, this programming paradigm found itself in an odd place. It was still useful, and it was still able to handle some tasks extremely well. However, most modern computers took a path that many VPLs could not follow. This led to a lull in the use of visual programming until three new paths were created: multimedia, gaming, and business systems.
Multimedia and visual programming
In the past two decades, the number of multimedia products has increased exponentially. This could include anything from music to games to encyclopedias on CDs. Companies such as Philips, Commodore, 3DO, and Microsoft blazed the way with their multimedia players. Instead of using only text-based code, these used interactive tools, which worked well with VPLs. This was especially true for musical media.
Blue Ribbon SoundWorks created Bars and Pipes, a MIDI sequencer. It allowed users to make music by interacting with rectangular bars that represented different notes on a scale or different instruments. Synthesizer programs like this are still used today.
Gaming and visual programming
When playing a video game, it may be hard to think of it as a flowchart. However, that is exactly what it is. Almost every interaction in a game can be boiled down to an "if-then" statement. If I click this button, then my character will attack. If I touch the enemy, then I will lose health. If I move in that direction, then the environment will change. These if-then statements are just the foundations of a very intricate flowchart (a minimal sketch follows below). What a game could be was limited only by the imagination of its creator. With years of software development, video games can now be split into several different categories, each of which uses its own form of VPL. These languages thus became fairly specific; examples include Unreal Engine's Blueprint and Unity with Bolt.
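To make the flowchart analogy concrete, here is a minimal sketch in Python — not taken from any real engine; the `Node` and `run_graph` names are hypothetical illustrations — of how a visual scripting graph reduces game interactions to "if-then" nodes:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GameState:
    health: int = 100
    attacking: bool = False

@dataclass
class Node:
    """One box in the flowchart: a condition paired with the action it triggers."""
    condition: Callable[["GameState", str], bool]
    action: Callable[["GameState"], None]

def run_graph(nodes: list, state: GameState, event: str) -> None:
    # Each node is literally "if <this event happened> then <do this>".
    for node in nodes:
        if node.condition(state, event):
            node.action(state)

graph = [
    Node(lambda s, e: e == "button_clicked", lambda s: setattr(s, "attacking", True)),
    Node(lambda s, e: e == "touched_enemy", lambda s: setattr(s, "health", s.health - 10)),
]

state = GameState()
run_graph(graph, state, "touched_enemy")
print(state.health)  # 90 - the "if I touch the enemy then I lose health" branch fired
```

Tools like Blueprint let designers wire such nodes together graphically; the underlying model is the same condition-action graph.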
Business Systems and visual programming
Every business has a database and processes that drive gains in efficiency and, ultimately, profit. A business user can navigate a system's database through the use of procedural languages or client software. This process of moving through information, or telling the software to move through the information, is also similar to a flowchart. There are numerous tools and applications that help businesses model databases or processes more effectively than a pure programming language would. Prominent categories in this sector are ETL tools and workflow engines.
Visual programming in the Future
As computers spread and their hardware capabilities grow, so too do the applications of VPLs. While computers are receiving software developments that can handle these programming languages, people are often too specialized in a specific programming language to use visual programming successfully. Humans are great at drawing things out to solve problems, but they are bad at thinking on the scale of a computer.
A niche of semi-code or low-code programming platforms has lately been gaining popularity - even Amazon wants to join the game. These approaches take the complexity of text-based languages and meld it with visual graphics, allowing users to interact with code through graphics and opening the relationship between user and code to a whole new level. The combination of upcoming software developments and VPLs provides the best of both worlds.
https://criticalzone.org/catalina-jemez/publications/pub/mcdowell-et-al-2011-the-interdependence-of-mechanisms-underlying-climate-dr/
Climate-driven vegetation mortality is occurring globally and is predicted to increase in the near future. The expected climate feedbacks of regional-scale mortality events have intensified the need to improve the simple mortality algorithms used for future predictions, but uncertainty regarding mortality processes precludes mechanistic modeling. By integrating new evidence from a wide range of fields, we conclude that hydraulic function and carbohydrate and defense metabolism have numerous potential failure points, and that these processes are strongly interdependent, both with each other and with destructive pathogen and insect populations. Crucially, most of these mechanisms and their interdependencies are likely to become amplified under a warmer, drier climate. Here, we outline the observations and experiments needed to test this interdependence and to improve simulations of this emergent global phenomenon.
McDowell N.G., Beerling D.J., Breshears D.D., Fisher R.A., Raffa K.F., and Stitt M. (2011): The interdependence of mechanisms underlying climate-driven vegetation mortality. Trends in Ecology & Evolution, 26(10): 523-532. DOI: 10.1016/j.tree.2011.06.003
This Paper/Book acknowledges NSF CZO grant support.
http://ghill.customer.netspace.net.au/researchblog/2001_11_01_blogarchive.html
Research Blog - Customer Intelligence
Once again, it's been a couple of weeks since I've blogged. I'll quickly highlight - in reverse chronological order - the people, seminars and texts before going into a lengthy ramble about ... stuff.
People: I met with my supervisor Graeme this morning, and had a quick discussion about the spectrum of formality surrounding business decision making. See the below ramble. Last Monday I had lunch with Dr. Bob Warfield - former manager from Telstra and now something of a role model or mentor for me - and Dr. Peter Sember, data miner and machine learning colleague from Telstra's Research Labs. We discussed my research, industry news and gossip and collaboration prospects.
The Friday before, I re-introduced myself to Dr. Tim van Gelder, a lecturer I had in a cognitive philosophy subject a few years ago. We discussed Tim's projects to do with critical thinking, his consultancy, and possible synergies with my own research and practice in business intelligence. While there are similarities - the goal is a "good decision" - there are differences: I'm looking at the relationships between inputs to a decision (information and decision rules) and outcomes; he's looking at the process itself and ensuring that groups of people don't make reasoning "mistakes".
Seminars: I've attended two since last blog. The first one was on a cognitive engineering framework, and its application to the operational workflow analysis of the Australian Defence Force's AWACS service. (This is where I bumped into Tim.)
The second one was on the "Soft-Systems Methodology" being used as an extension to an existing methodology ("Whole of Chain") for improving supply chains. SSM looked to me like de-rigoured UML or similar. I'm not sure what value it was contributing to the existing method (I asked what their measures of success were, and they didn't have any), but they had quotes from a couple of workshop participants who thought it was helpful. So I figure that's their criterion: people accept it. They didn't report on whether or not some people thought it unhelpful. They didn't talk about proportions of people who responded favourably and unfavourably, and then compare with people who participated in the "reference" scheme (ie without SSM). In short, since I wasn't bowled over by the obvious and self-evident benefits of their scheme, and they gave me no reason to think that it meets other people's needs better than existing schemes, I'm not buying it.
I have to confess I'm still getting my head around IS research.
Book: I read half of, but then lost (dammit!), a text on Decision Support Systems. It was about 10 years old, but had papers going back to the 60s in it! I don't have the title at hand, but Graeme's going to try and score another copy.
I've also discovered a promising text by Stuart MacDonald entitled Information for Innovation. This is the first text I've read that talks about the economics of INFORMATION as opposed to IT. (I read some lecture notes and readings on "information economics", but found them to be an argument for why organisations shouldn't apply traditional cost/benefit analyses to IT capex.) It's quite clear that information is unlike anything else we deal with and is extremely important in determining our quality of life, yet it is surprisingly poorly understood. I would like to make a contribution in this area, and I'm starting to think that Shannon's insights have yet to be fully appreciated.
Ramble: I've been thinking that to drill down on a topic, I'm going to have to purge areas of interest. For example, some months ago I realised that I was only going to look at "intelligence" (as opposed to "content" - see below). Now, I'm thinking I need to focus on formal decision processes. Allow me to explain ...
There's a spectrum of formality with respect to decision-making. Up one end, the informal end, we have the massively complex strategic decisions which are made by groups of people, using a limitless range of information, with an implied set of priorities and unspoken methods. Example: the board's weekend workshop to decide whether or not to spin-off a business unit.
Up the other - formal - end, we have extremely simple decisions which are made by machines, using a defined set of information, with explicit goals and rules to achieve them. Example: the system won't let you use your phone because you didn't pay your bill.
The idea is that decisions can be delegated to other people - or even machines - if they are characterised sufficiently well for the delegator to be comfortable with the level of discretion the delegatee may have to employ. The question of what becomes formalised, and what doesn't, is probably tied up with many things (eg politics), but I think a key one is "repeatability". At some point, organisations will "hard-code" their decisions as organisational processes. At other times, decision-makers will step in and resume decision-making authority from the organisational process (for example, celebrities don't get treated like you or me).
I'm thinking that for each process, you could imagine a "slider" control that sets how much decision-making is formalised and how much is informal. This "slider" might have half a dozen states relating to process functions.
The more informal the decision, the more you'd need to look at group-think phenomena, cognitive biases, tacit knowledge and other fuzzy issues best left to the psychologists. I'm thinking that the formal or explicit processes are going to lend themselves best to my style of positivist analysis.
So in that sense, I'm inclined to look at metrics and their role in decision-making for business processes (customer), service level agreements (supplier), and key performance indicators (staff). Typically, these things are parameterised models, in that the actual specific numbers used are not "built into" them. For example, a sales person can have a KPI as part of their contract, and the structure and administration of this KPI is separate from the target of "5 sales per day": it would be just as valid with "3" or "7" instead (see the sketch below). Why, then, "5"? That is obviously a design aspect of the process.
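A minimal sketch of this separation, assuming nothing about any real KPI system — the `KPI` class and its fields are hypothetical illustrations of structure versus parameter:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """The structure and administration of the metric, separate from its target."""
    name: str
    target: float  # the parameter - "5" here is a design choice, not part of the structure

    def met(self, measured: float) -> bool:
        return measured >= self.target

# The same machinery is "just as valid with 3 or 7 instead":
sales_kpi = KPI(name="sales per day", target=5)
print(sales_kpi.met(4))  # False - this instance-measure falls short of the threshold
print(KPI(name="sales per day", target=3).met(4))  # True - same structure, different parameter
```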
Perhaps if these processes are measurably adding value (eg the credit-assessment process stops the organisation losing money on bad debtors), then it is reasonable to talk about the value of the metrics (both general thresholds and instance measures) in light of how they affect the performance of the process? If the process is optimised by the selection and use of appropriate metrics, then those metrics have value.
While I'm not sure about this, I think it's easier than performing a similar analysis on the value of an executive's decisions.
https://www.aicg.com/topic/analytics/dbt-labs/page/2/
At AICG, we get asked how to build dashboards in Looker quite a bit, so we created this post to get you started. Before you start building in
create a .m2/settings.xml file on a mac for a github package
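A minimal sketch of what such a settings.xml might contain for authenticating to GitHub Packages — the `github` server id and the placeholder credentials are illustrative assumptions, not taken from the post itself:

```xml
<!-- Hypothetical ~/.m2/settings.xml sketch; the server id "github" must match
     the repository id referenced in your project's pom.xml, and the token
     needs the read:packages scope. Verify against GitHub's own documentation. -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <servers>
    <server>
      <id>github</id>
      <username>YOUR_GITHUB_USERNAME</username>
      <password>YOUR_PERSONAL_ACCESS_TOKEN</password>
    </server>
  </servers>
</settings>
```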
Private equity is an investment industry that has rapidly grown in recent years, and with this growth comes a need for effective leadership. The success of a private
https://www.cliftonsystems.co.uk/useful-event-windows-ids/
The physical disk is not certified.
Cause: The physical disk does not comply with the standards set by Dell and is not supported.
Resolution: Replace the physical disk with a supported physical disk.
The Windows Resource Exhaustion Detector experienced a memory allocation failure.
Cause: A storage component such as a physical disk or an enclosure has failed. The failed component may have been identified by the controller while performing a task such as a rescan or a check consistency.
Resolution: Replace the failed component. You can identify which disk has failed by locating the disk that has a red "X" for its status. Perform a rescan after replacing the disk.
Physical disk removed.
Cause: A physical disk has been removed from the disk group. This alert can also be caused by loose or defective cables or by problems with the enclosure.
Resolution: If a physical disk was removed from the disk group, either replace the disk or restore the original disk. On some controllers, a removed disk has a red "X" for its status. On other controllers, a removed disk may have an Offline status or may not be displayed on the user interface. Perform a rescan after replacing or restoring the disk. If a disk has not been removed from the disk group, then check for problems with the cables.
Cause: A virtual disk or an enclosure has lost data redundancy. In the case of a virtual disk, one or more physical disks included in the virtual disk have failed. Due to the failed physical disk or disks, the virtual disk is no longer maintaining redundant (mirrored or parity) data. The failure of an additional physical disk will result in lost data. In the case of an enclosure, more than one enclosure component has failed; for example, the enclosure may have suffered the loss of all fans or all power supplies.
Resolution: Identify and replace the failed components. To identify the failed component, select the Storage object and click the Health subtab. The controller status displayed on the Health subtab indicates whether a controller has a failed or degraded component. Click the controller that displays a Warning or Failed status. This action displays the controller Health subtab, which shows the status of the individual controller components. Continue clicking the components with a Warning or Failed status until you identify the failed component.
Virtual disk degraded.
Cause: This event is logged when a physical disk included in a redundant virtual disk fails. Because the virtual disk is redundant (it uses mirrored or parity information) and only one physical disk has failed, the virtual disk can be rebuilt.
Resolution: Configure a hot spare for the virtual disk if one is not already configured, then rebuild the virtual disk. This applies when using a PowerEdge Expandable RAID Controller (PERC) 3/SC, 3/DCL, 3/DC, 3/QC, 4/SC, 4/DC, 4e/DC, 4/Di, CERC ATA100/4ch, PERC 5/E, PERC 5/i, or Serial Attached SCSI (SAS) 5/iR controller.
Cause: A physical disk in the disk group has been removed.
Resolution: If a physical disk was removed from the disk group, either replace the disk or restore the original disk. You can identify which disk has been removed by locating the disk that has a red "X" for its status. Perform a rescan after replacing the disk.
Physical disk inserted.
This event is logged when a physical disk is inserted. This is an informational event.
Device returned to normal.
This alert is for informational purposes. A device that was previously in an error state has returned to a normal state. For example, if an enclosure became too hot and subsequently cooled down, you may receive this alert.
Physical disk rebuild started.
This event is logged when a physical disk rebuild starts. This is an informational event.
Reboot required: To complete the installation of the following updates, the computer must be restarted. Until this computer has been restarted, Windows cannot search for or download new updates: %1
According to Microsoft:
This event is logged when the computer must be restarted to complete the installation of updates.
Resolution: Restart the system. If updates are available but are not automatically downloaded, restart the system.
To confirm that the Windows Update Agent has installed updates:
1. Open an elevated Command Prompt window: click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
2. At the command prompt, type eventvwr.msc and press ENTER to open Event Viewer.
3. To check for events in Event Viewer:
a. In the left panel of Event Viewer, click Application and Service Logs.
b. Expand Microsoft, and then expand Windows.
c. Click WindowsUpdateClient, and then click Operational.
d. Check whether Event ID 19 is present in the event list to confirm that the Windows Update Agent has successfully downloaded the updates (a scripted alternative follows below).
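For scripted checks, a hedged sketch (not from the original article) that shells out to the built-in wevtutil tool to run the same Event ID 19 query:

```python
import subprocess

LOG = "Microsoft-Windows-WindowsUpdateClient/Operational"
result = subprocess.run(
    ["wevtutil", "qe", LOG,
     "/q:*[System[EventID=19]]",      # Event ID 19 = update installed successfully
     "/c:5", "/rd:true", "/f:text"],  # newest 5 entries, plain-text output
    capture_output=True, text=True, check=True,
)
print(result.stdout or "No Event ID 19 entries found.")
```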
The previous system shutdown at %1 on %2 was unexpected.
According to Microsoft:
This event is written during startup following an unexpected restart or shutdown. An unexpected restart or shutdown is one that the system cannot anticipate, such as when the user pushes the computer's reset button or unplugs the power cord.
If the Persistent Time Stamp group policy setting is either enabled or not configured, system information is written to the data section of this event. This information includes a timestamp that indicates the computer's uptime in seconds before the unexpected shutdown occurred.
One or more of the following options might help to determine the cause of the unexpected shutdown:
1. Check the system event log for other events that occurred around the same time as the unexpected shutdown (a query sketch follows below).
2. Find out whether the computer's reset button was pressed, the power cord was unplugged, or a general power failure occurred.
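A hedged sketch (assumed, not from the original article) that lists recent unexpected-shutdown records (Event ID 6008) from the System log so they can be correlated with surrounding events:

```python
import subprocess

result = subprocess.run(
    ["wevtutil", "qe", "System",
     "/q:*[System[EventID=6008]]",    # Event ID 6008 = previous shutdown was unexpected
     "/c:3", "/rd:true", "/f:text"],  # three most recent, newest first
    capture_output=True, text=True, check=True,
)
print(result.stdout or "No unexpected shutdowns recorded.")
```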
https://www.autismbehaviorservices.com/uncategorized/legal-planning-for-children-with-special-needs-free-workshop/
When: Thursday August 8th, at 7:00 pm.
Where: Autism Behavior Services, Inc (Santa Ana) – 2080 N. Tustin Ave, Suite B. Santa Ana, CA 92705.
Please RSVP if possible, so we can plan for adequate seating. RSVP to (855) 581 – 0100. Walk-ins are still welcome as well.
We are pleased to have attorney Joseph M. Geis provide a free educational workshop for all of our clients and friends on the very important topic of special needs planning. Mr. Geis will speak on the following areas:
- Special Needs Trust – What is it, and how can it allow a special needs child to receive an inheritance and still qualify for important public benefits?
- Guardianship – What role does a guardianship play should I pass away while my child is still young?
- Limited Conservatorship – What is a limited conservatorship and when do I need to begin planning for establishing one for my special needs child?
Mr. Geis has focused his legal practice on Estate Planning (including Special Needs Trusts) and Conservatorship for over 11 years, and brings a breadth of knowledge and experience in helping families plan for special needs issues.