url
stringlengths 13
4.35k
| tag
stringclasses 1
value | text
stringlengths 109
628k
| file_path
stringlengths 109
155
| dump
stringclasses 96
values | file_size_in_byte
int64 112
630k
| line_count
int64 1
3.76k
|
|---|---|---|---|---|---|---|
http://javaforbeauty.com/product/bags/mason-messenger/
|
code
|
Done in distressed supple faux leather. Its vintage look and stylish design give it a timeless look. A great unisex bag for everybody. The front flap is secured by dual magnetic snap closures. Under the flap there is a vertical zip pocket for your tablet. The main compartment has a zip pocket for small essentials, pen loops, and open pockets for keys or phones. The gusset also has an open pocket and additional pen loops for easy access. The back of the messenger has a large open pocket that is secured by a magnetic snap. It is finished with a padded, adjustable shoulder strap.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511889.43/warc/CC-MAIN-20181018152212-20181018173712-00213.warc.gz
|
CC-MAIN-2018-43
| 583
| 1
|
https://www.ibm.com/developerworks/community/blogs/cgaix/entry/AIX_7_2_running_on_my_Macbook?lang=en
|
code
|
AIX 7.2 running on my Macbook?
cggibbo 270000TMUJ Comment (1) Visits (12977)
After reading this http
Well, the answer my friends, is yes...sort of.
Many thanks to Rob McNelly who originally tweeted this link, http
Also, thanks to Liang Guo for his assistance. Your guidance was greatly appreciated.
Note: What I describe here is NOT supported by IBM. It is purely a lab experiment to see what was possible with qemu-system-ppc64.
If you want to follow along at home, please follow and test the steps outlined here, http
I've never used QEMU until now. So all of this is very new to me. I'm still learning, so if you see something wrong with my instructions. Sorry. I'll do better next time.
The first thing you need to do, is install AIX 7.2 in a Logical Partition (LPAR VM), on a Power system, somewhere. If you don't have an IBM Power System of your own, you could try using the http
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573827.2/warc/CC-MAIN-20190920030357-20190920052357-00030.warc.gz
|
CC-MAIN-2019-39
| 885
| 10
|
https://www.telerik.com/forums/export-to-excel-of-grid-not-working-in-a-sharepoint-online-page
|
code
|
I'm using Kendo UI Grid to present SharePoint list data inside a TabStrip in a standard page in Office 365 (so SharePoint online) and everything works nicely except Export to Excel does absolutely nothing. I've tried using the toolbar "Excel" feature and it does nothing and if I add a separate button as the below:
the page refreshes on click but nothing else happens. I've tried this in Firefox, Chrome and IE11 all on Windows 10 and no joy. Anyone got any suggestions? Am I missing some kind of pre-requisite?
Thanks in advance for any assistance.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00615.warc.gz
|
CC-MAIN-2022-40
| 550
| 3
|
https://vbn.aau.dk/da/publications/get-realistic-ucd-course-design-and-evaluation
|
code
|
There is an increasing demand for software, suitable for large segments of users with different needs and competences. User-Centred Design (UCD) methods have been used in the software industry and taught to software developers to meet the various needs of users. The field of UCD covers a broad set of topics that can be covered in a range of courses with various content. In this paper we describe the design of a two-week course focusing on teaching UCD methods to students with various backgrounds that are useful for the students in the future. The course schedule included lectures and workshop activities where the lecturers taught UCD topics and coached the students in developing skills for using the selected UCD methods during the course to design and evaluate an interactive system. Additionally, we describe two types of course evaluations that we conducted: qualitative weekly evaluations and a post-course survey. The results show that students were in general positive about the course content and the combination of lectures and workshop activities. Hi-fi prototyping was the UCD method that the students rated as being most useful for the course and their future. They particularly liked how realistic these were for the users. The least useful method in the course and in the future was “Walking the Wall”, where students read an affinity diagram and make design suggestions. Finally, we suggest changes for a prospective course, based on the results of the evaluations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500140.36/warc/CC-MAIN-20230204142302-20230204172302-00733.warc.gz
|
CC-MAIN-2023-06
| 1,492
| 1
|
https://www.pass-the-idea.com/effective.html
|
code
|
Pass The Idea™ is effective
- whilst Pass The Idea™ requires only limited time from each participant, as a crowd sourcing exercise it generates a large number of ideas in order of priority
- each participant spends on average 60 minutes going through the 4-step online process with each step taking around 15 minutes
- participants contribute on average 25 ideas per Challenge
- depending on the overall timeline, participants choose when it suits them best to engage. Location is not an issue as long as they can be online. This flexibility ensures that Pass The Idea™ can run as part of a normal working day without interfering with normal work commitments.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474361.75/warc/CC-MAIN-20240223053503-20240223083503-00526.warc.gz
|
CC-MAIN-2024-10
| 665
| 5
|
https://forum.uipath.com/t/difference-between-throw-and-re-throw/11201
|
code
|
Can you tell the difference between Throw and Re-throw.
Throw activity is to be used to explicitely throw an exception that you define
When you use a Try-Catch, and there is an exception thrown, you enter the Catch Block.
Here you can perform certain activities like take screenshot and log the error. But, here the exception that is thrown will get consumed. If you need that the exception gets propagated you should use Re-Throw.
Hi @akshi_s27, i don’t get that re-throw properly.
Rethrow - throws a previously thrown exception from within a TryCatch activity(as @akhi_s27 explained). The error is rethrown retaining the original source of the exception. Rethrow can only be used within a Catch block of a TryCatch activity.
OK … but how do I get the actual exception info in the catch block - so I can log the error without stopping the robot?
and how do I create a custom exception using Throw??
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703565541.79/warc/CC-MAIN-20210125092143-20210125122143-00289.warc.gz
|
CC-MAIN-2021-04
| 903
| 8
|
https://jaruiz.io/about/
|
code
|
Hello. This is my name (José Alberto Ruiz).
I’m here because I’m, mainly, a Software Developer and I would like to share with you some topics that I think they could be interesting.
I started in this world when I was a little child, 6-7 year old, more or less. My father bought a personal computer,INVES 640 X Turbo and for buying it they gave him an awesome Spectrum 128K as a present.
Finally, I studied Computer Engineering at the University and started working professionally in Software Development. Since then I’ve done everything, programming in several languages and in several layers of the architecture.
I’ve participated in a lot of projects of different size and with different roles. As I said before, I consider myself a software developer or technical architect but I like methodologies or team managing so I’ve also had the project manager role along my career.
I’m not very theoretical because I like much more the practical way but I recognize that a solid base of concepts is necessary because Software Development is much more than throwing lines of code. It’s necessary to understand why coding in one way or in other, why we choose one technology or another, which patterns we can apply,…,etc.
I hope you enjoy this blog and it’s useful for you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057142.4/warc/CC-MAIN-20210410134715-20210410164715-00088.warc.gz
|
CC-MAIN-2021-17
| 1,286
| 7
|
https://carrierlist.17track.net/tr/yq/190338-zt
|
code
|
Resmi internet sitesi: http://www.zt.hailei2018.com/
Z&T Founded in 2017, Through the management of network process and modern logistics, The main lines of international EMS, DHL, UPS, FedEx express and the special line are 'China-US', 'China-Europe', 'China-Japan' .Z&T has a strong customs clearance team at home and abroad to provide one-stop cross-border e-commerce logistics services.
(# -> Harf, * -> Numara, ! -> Harf ya da Numara)
- ZNT## *** *** *** * YQ
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747887.95/warc/CC-MAIN-20201205135106-20201205165106-00718.warc.gz
|
CC-MAIN-2020-50
| 469
| 4
|
https://www.lesswrong.com/posts/Eu9swHBLzBBcc3kDM/predicting-virus-relative-abundance-in-wastewater
|
code
|
At the Nucleic Acid Observatory (NAO) we're evaluating pathogen-agnostic surveillance. A key question is whether metagenomic sequencing of wastewater can be a cost-effective method to detect and mitigate future pandemics. In this report we investigate one piece of this question: at a given stage of a viral pandemic, what fraction of wastewater metagenomic sequencing reads would that virus represent?
To make this concrete, we define RA(1%). If 1% of people are infected with some virus (prevalence) or have become infected with it during a given week (incidence), RA(1%) is the fraction of sequencing reads (relative abundance) generated by a given method that would match that virus. To estimate RA(1%) we collected public health data on sixteen human-infecting viruses, re-analyzed sequencing data from four municipal wastewater metagenomic studies, and linked them with a hierarchical Bayesian model.
Three of the viruses were not present in the sequencing data, and we could only generate an upper bound on RA(1%). Four viruses had a handful of reads, for which we were able to generate rough estimates. For the remaining nine viruses we were able to narrow down RA(1%) for a specific virus-method combination to approximately an order of magnitude. We found RA(1%) for these nine viruses varied dramatically, over approximately six orders of magnitude. It also varied by study, with some viruses seeing an RA(1%) three orders of magnitude higher in one study than another.
The NAO plans to use the estimates from this study as inputs into a modeling framework to assess the cost effectiveness of wastewater MGS detection under different pandemic scenarios, and we include an outline of such a framework with some rough estimates of the costs of different monitoring approaches.
Read the full report: Predicting Virus Relative Abundance in Wastewater.
>If you're paying $8k per billion reads
>This will likely go down: Illumina has recently released the more cost effective NovaSeq X, and as Illumina's patents expire there are various cheaper competitors.
Indeed it did go down. Recently I paid $13,000 for 10 billion reads (NovaSeq X, Broad Institute; this was for my meiosis project). So sequencing costs can be much lower than $8K/billion.
Illumina is planning to start offering a 25 billion read flowcell for the NovaSeq X in October; I don't know how much this will cost but I'd guess around $20,000.
ALSO: if you're trying to detect truly novel viruses, using a Kraken database made from existing viral sequences is not going to work! However, many important threats are variants of existing viruses, so those could be detected (although possibly with lower efficiency).
Thanks! Responded there.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473690.28/warc/CC-MAIN-20240222030017-20240222060017-00212.warc.gz
|
CC-MAIN-2024-10
| 2,709
| 11
|
http://mathhelpforum.com/calculus/165404-mean-value-theorem.html
|
code
|
Verify Mean Value Theorem for the function on the interval [1,2]
by finding (all) the appropriate point(s) c where the derivative equals the slope of the
secant line between the end points of the interval.
I think I know how to do this. But I have got to solve it without a calculator apparently!!
slope of secant line is
let m = ln2 + 3
now I'm stuck here. How do I solve this without a calculator?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121355.9/warc/CC-MAIN-20170423031201-00349-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 399
| 7
|
https://dirtydozenraces.com/master-class-menu/
|
code
|
When I first started teaching, I thought my job was to teach people technique and help people get stronger but 3 years down the line, I worked out that what I am really teaching people is how to work with them to build their confidence. I love seeing this happen. It really drives me.
I teach people to believe in themselves.
My goal is not to make you stronger, my goal is to help you become more confident!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104204.40/warc/CC-MAIN-20170818005345-20170818025345-00208.warc.gz
|
CC-MAIN-2017-34
| 408
| 3
|
http://forums.thebehemoth.com/index.php?/topic/12429-plushy-love-for-the-behemoth-creations/?pid=248251
|
code
|
I'm AnnaTheRed. I make plushy stuff at the Behemoth.
I also make plushy stuff as a hobby. But I like the characters from the Behemoth games so much that I often find myself making plushy version their games at home. Scary!
Anyway, here are the latest fan plushes I made for my friend at Penny Arcade. I think they turned out pretty good, so I thought I'd post them on here too.
**The fact that I'm posting them here DOES NOT GUARANTEE that these will be official products released by
The Behemoth. I just like to make them for fun, usually for my friends.**
Edited by AnnaTheRed, 25 May 2012 - 01:36 AM.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163052034/warc/CC-MAIN-20131204131732-00035-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 603
| 6
|
http://www.blackjackinfo.com/bb/showthread.php?p=234184&mode=threaded
|
code
|
CTR - Cash Transaction Reports
I was wondering if anyone has had any experience dealing with Cage/Gaming staff handling CTR's (Cash Transaction Reports)? Trouble with identification, anxiety about giving out your AP information or any experience with potential suspicions over your cash transactions?
I've just been reading some information regarding all of this from the Australian Transaction Reports and Analysis Centre (AUSTRAC) and state and federal sites.
Anyhow, yeah - if you have any stories or experiences to share from within Australia that would be great to hear!
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368709135115/warc/CC-MAIN-20130516125855-00023-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 575
| 4
|
https://community.spiceworks.com/topic/412695-new-to-vpro
|
code
|
I am attempting to setup vPro for the first time in my test lab. I have a Dell 990 with a static ipv4 address. I have go through the steps here http:/
The video you watched shows the procedure to create a One-time only configuration key for OS DHCP enabled clients. So what has happened is you have mismatched IP address between the OS (Static) and AMT (Dynamic).
You can see this by accessing the client locally and using http:/
I do not recommend leaving it in this configuration, you will need to un-configure the system from within the MEBx and then perform a re-configuration using information found on the SpiceWorks vPro Navigator Page: https:/
Edited Nov 25, 2013 at 16:49 UTC
I could not get to it by http:/
Depending on how you configured the client, the WebUI can be enabled/disabled. So not all is lost.
So lets step back and take a look at IMSS (Intel® Management and Security Status) which is located in the start menu under the Intel folder. Note: Some OEM builds do not include this tool.
- Within IMSS, select the advanced tab and check out the Status and confirm it is "Configured"
If it is configured, the method of configuration disabled the WebUI - nothing to worry about, just makes debugging more a challenge
Regardless if the system is configured or not use one of the two methods below to gather more data. Please send the resultant .nfo file to my email address that I provided in a pm to you.
- Please download the SCS SDK. Within this zip folder is a tool called ACUconfig, To use this tool, just extract the zip file, then from the command line run acuconfig from the configurator folder. the command line string to use is "acuconfig.exe systemdiscovery". This will generate a <fqdn>.xml file that has a lot of great info
Client ultimately didn't have all the drivers installed and an error during the provisioning process were the main issues. After updating the MEI drivers from the Dell website and re-configuring the client, the Web UI was activated and vPro seemed to be working correctly.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583705091.62/warc/CC-MAIN-20190120082608-20190120104608-00063.warc.gz
|
CC-MAIN-2019-04
| 2,024
| 13
|
https://www.wildrose-inn.com/fat-daddy-web-hosting/
|
code
|
Fat Daddy Web Hosting – You must have stumbled upon the term “Web Hosting ” lot of times however you are uncertain what it symbolizes? Simply speaking, Web Hosting is a sort of Internet hosting service that hosts your sites without your need to develop or install your own sites. There are numerous low-cost website hosting companies available on the web today and they use different strategies, which appropriate for various sort of companies. If you are interested in going with a Web Hosting service, you can contact one of the Cheap Website Hosting companies and go over with them concerning your requirements and about your desired website style. They would offer you with a personalized Web Hosting strategy, which fulfils all your needs and provides you with the wanted outcomes.
The very best cheap web hosting services supply endless e-mail accounts, domain names, website home builders, unlimited webhosting, unlimited area and bandwidth, impressive monitoring and loads of other features in simply a single bundle. With such an amazing service, you can host different sort of sites without the requirement to produce different accounts for each. Additionally, there are numerous other benefits that feature making use of a Cheap Web Hosting service provider.
It conserves time and money: For long duration of time, it is not practical to watch on the activities of your website. In case of some concerns, it is challenging to locate the precise source from where the issue has actually emanated. Nevertheless, with Cheap webhosting service providers, you can get appropriate help through their client service group. The group is comprised of experts and they help you in carrying out your daily service operations with the utmost efficiency. Additionally, if any problems develop with the site, the client service team of the Cheap website hosting suppliers always pertains to your aid.
Manage a a great deal of accounts: Cheap Web Hosting service provider hosts large numbers of accounts. This offers you the opportunity to look after the problems referring to the website at a single location. You can always have several accounts handled by a single administrator if you choose to go for the Best low-cost web hosting service. If you own a little organization, then you can also handle numerous little accounts hosted by the Best low-cost web hosting service.
Get committed server support: When you choose Cheap web hosting services, then the administrator of the Cheap site hosting services will give you devoted server support. This suggests that the administrator will be providing you with the devoted server so that you can have maximum benefits when it comes to web hosting. If you wish to host several websites, then you need to get dedicated servers for that function. Nevertheless, in case of small businesses, getting a Dedicated server is not possible as it is cost reliable and likewise not a viable option.
Handle the site yourself: Some inexpensive web hosting business do offer Managed website hosting services. This option is good for the person who does not desire to rely on the assistance team provided by the web hosting companies.
Get endless space and bandwidth: Cheap web hosting plans provide with endless space and bandwidth. In case of a shared web hosting strategy, the owner of the site shares the exact same server with other sites.
Host your site in the nation where you are most comfortable: Greengeeks is one of the couple of hosting fastest cheap web hosting provider that enables its clients to host their site in the nation where they are most comfy. The site can be hosted in the country where the consumer has strong roots.
Simply speaking, Web Hosting is a kind of Internet hosting service that hosts your sites without your requirement to develop or install your own websites. The finest inexpensive web hosting services provide endless email accounts, domain names, website contractors, limitless web hosting, unrestricted space and bandwidth, outstanding monitoring and loads of other functions in simply a single bundle. Get dedicated server support: When you choose for Cheap web hosting services, then the administrator of the Cheap website hosting services will offer you dedicated server assistance. Manage the website yourself: Some low-cost web hosting companies do offer Managed site hosting services. Host your site in the country where you are most comfortable: Greengeeks is one of the couple of hosting fastest cheap web hosting supplier that enables its consumers to host their website in the country where they are most comfortable. Fat Daddy Web Hosting
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488534413.81/warc/CC-MAIN-20210623042426-20210623072426-00531.warc.gz
|
CC-MAIN-2021-25
| 4,629
| 9
|
http://csnlinux.genesee.edu/opsys/homework.php
|
code
|
CSN 115 - Operating Systems
I'll try to keep this page updated through the semester.
Homework and Classwork
Class # 9 (9/20)
* TEST # 1
*** TEST # 1 is Wednesday, September 20th.
*** Remember, please bring your ONE PAGE 8 1/2 by 11 inch "cheat sheet" to exam.
The cheat-sheet must be handwritten (in your handwriting) and on one side only!
The test will consist of two parts, a multiple-choice, short-answer written exam of
approximately 50 questions, and then a practical, hands-on exam. The written exam
will be worth 100 points and the practical exam will be worth approximately 75
points (with the remaining 25 points coming from homeworks
and class participation for the first third of the semester).
Your homework and attendance since the beginning of the semester will account for the
remaining 20 or so points on the practical exam.
Remember, you've had 5 homework assignments so far: madeit, man page, intro tutorial, libreoffice, and shell tutorial.
Check your grades and make sure each has been completed. After Wednesday, you will no longer have an opportunity
to get credit for these.
Sample exam questions can be found in the Documents->Test Samples
of this website.
Class # 8 (9/18)
Review for Test 1. Good luck!
Class # 7 (9/13)
* Note packet # 6: The linux shell
* Do the "Shell" tutorial in Documents -> Tutorials
* Do the first tutorial of the Unix Tutorial in Documents -> Tutorials
* Lots of practice using putty on the hands-on components.
* Make sure you have all your required homeworks completed before the test day:
putty (madeit), libre office, man page, intro tutorial, shell tutorial
* Read through chapters 1-3 and skim chapter 5 of text. Note: the "type" command is
not installed on csnlinux. Use the "file" command instead.
Class # 6 (9/11)
Post-install information, discussion of Virtual Machines, RAID, LVM.
Added the Synaptic Package Manager in Ubuntu.
* Note packet # 6: Intro to the Shell
Homework: Read the RAID intro handout and Hardware vs Software RAID handout.
Class # 5 (9/6)
Continued with installing Linux.
Looked at Partitioning Software (gparted).
Looked at Note Packet 5
Class # 4 (8/30)
Installing Linux with discussion of hardware.
Notes: Installing Linux (#4)
Discussed partitioning of hard disks to accommodate linux, including resizing a partition.
Other options discussed included using Virtual Machines and Live CDs
Homework: research definitions of terms you aren't familiar with (e.g. CMOS,
hexadecimal, ext4 ...)
Homework: send me your LibreOffice document (see the end of note packet 3) ... due by 9/8/17.
a.s.a.p. Make sure you follow directions -- especially the subject line of the email!
Third Class (8/28)
Exploring the Desktop
Notes: Exploring the Desktop (#3)
Top 10 Linux Desktop Environments
HW: Do intro1-tutorial.pdf and intro2-tutorial.pdf in the Tutorials section of this website.
HW: Send your LibreOffice document to your professor, via email, per instructions posted in note packet 3.
HW: read chapters 1-3 of textbook (The Linux Command Line - William Shotts, Jr.)
Second Class (8/23)
Intro part 2.
Logging In, Navigating the File System
Started Notes: CSN115-INTRO-part 2 (#2)
Documents: Exploring the Filesystem
Documents: Filesystem Explained - just skim ... pretty in depth.
Documents: Guide to Open-Source Licenses
HW: Read the Wikipedia Article on Operating Systems
HW: Read Docs above.
First Class (8/21)
Introduction to the Course.
Introduction to Operating Systems.
Introduction to Linux.
Logging into the CSN Linux Network
Notes: Introduction (#1)
HW: Log into csnlinux.genesee.edu via putty (outside the lab) and issue
the command: madeit (see YouTube video)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689823.92/warc/CC-MAIN-20170924010628-20170924030628-00237.warc.gz
|
CC-MAIN-2017-39
| 3,645
| 73
|
https://web.northeastern.edu/iloseu/
|
code
|
I've moved to the University of Toronto and this page will not be updated anymore. My new page is here.
CV (updated February 22, 2018)
Videos of lectures.
A picture of me with the Big tilting object (courtesy of Evgeny Smirnov).
Seminars that I am organizing or have organized
I'm in the editorial boards of Journal of Combinatorial algebra, Selecta Mathematica and Transformation groups.
A new version of my paper with Roma Bezrukavnikov (different from arXiv).
I'm a member of the Northeastern RTG group. This site describes various activities of our group.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474853.43/warc/CC-MAIN-20240229202522-20240229232522-00250.warc.gz
|
CC-MAIN-2024-10
| 559
| 8
|
http://jeromyanglim.blogspot.com/2010/07/how-to-process-inquisit-raw-data-in.html
|
code
|
OverviewInquisit is a tool for conducting computerised psychological experiments. It is particularly useful when timing is important. The Inquisit website provides a trial download, sample scripts, and useful documentation. I've previously posted about the benefits of Inquisit. Ron Dotsch also has an introductory tutorial
This post sets out how to process the raw data that is generated by Inquisit. Specifically, my intended audience are researchers in psychology who are intending to import raw Inquisit data into SPSS and process it in SPSS. This is a common scenario in many psychology departments, although I apply the same general logic when I import and process the data in R.
Before presenting my own tutorial, it's worth noting the resources already available. Ron Dotsch has a tutorial on processing Inquisit Data. Also, a lot can be learnt by inspecting existing SPSS data processing scripts such as that on processing the IAT.
Overview of the ProcessIn summary, I divide the process into the following steps:
- Import raw data
- Remove unwanted rows
- Remove incomplete participant data and duplicate logins
- Further processing in long format
- Restructure data file from long to wide format
- Merge additional wide format data into existing wide format data
1. Import raw dataThe raw Inquisit data file is usually a tab delimited text file. Each row is one observation (i.e., a trial) on one individual. Variables names are in the first row. Raw data files typically include data for multiple participants. SPSS has the Read Text Wizard. It's fairly self explanatory, but here is a tutorial.
2. Remove unwanted rowsThe raw data file is likely to have many rows that need to be removed. There may be items that you do not want to analyse. Also, when you have multiple cases in the data file, the variable names will be printed throughout the data file when the data for each new participant commences. For example, the following SPSS syntax retains rows where the variable
timedoes not equal the text 'time'. See the menu
Data - Select Cases: if condition is satisfied.
USE ALL. COMPUTE filter_$= (time ~= 'time'). FILTER BY filter_$. EXECUTE.Once you're satisfied that the filter has worked. You can adjust it to delete the unselected data instead of filtering:
FILTER OFF. USE ALL. SELECT IF (time ~= 'time'). EXECUTE.The above logic could be extended to particular trialcodes, blockcodes, and so on.
3. Remove incomplete participant data and duplicate loginsIt sometimes happens with Inquisit that participants log on to the experiment a second time. Participants sometimes click the start button more than once. In online settings participants sometimes do the experiment a second time.
There are several ways to check for duplicate logins. You can select the first trial and then get a frequency count on the
subjectID. In SPSS syntax this might look like this:
USE ALL. COMPUTE filter_$=(blocknum = 1 & trialnum = 1). FILTER BY filter_$. EXECUTE. FREQUENCIES VARIABLES=subject /FORMAT=DFREQ.If you have multiple logins for the one subject ID, you need to determine the valid login. In general, the valid login is the first login that involved completion of the full experiment.
Create a new variable called
subjectand the start time of the experiment. You can use the Inquisit variable
timefor this. However, I typically tell Inquisit to save an additional variable called
script.starttime. It has the advantage of being accurate to the second as opposed to the minute. Thus, if a participant logs in more than once in a minute, a unique login can still be readily determined. This more precise start time can be saved to your Inquisit raw data file by adding
script.starttimeto your Inquisit script as seen in the following example:
/columns=[date, time, build, subject, trialcode, blockcode, blocknum, trialnum, latency, response, pretrialpause, posttrialpause, trialtimeout, blocktimeout, correct, stimulusitem, stimulusnumber, display.height, display.width, computer.cpuspeed, computer.os, script.starttime, script.elapsedtime] /format=tabYou can create a variable to represent a unique login with SPSS syntax like the following:
STRING login (A50). COMPUTE login=CONCAT(ltrim(string(subject, F12.0)),".", script.starttime). EXECUTE.The above code declares a string variable of maximum width equal to 50. It then computes the value of
loginto be the concatenated string of
subject, a full stop, and the script start time. In order to concatenate in SPSS, subject needs to be converted to a character variable (assuming it is numeric).
F12.0means a number of width 12 with no decimal places. The
ltrimtrims white space of the left of the resulting string. You might need to tweak the above to meet your needs.
You can now go through your data file and determine which logins are unwanted. A table of frequencies of login usually clarifies which logins are incomplete.
FREQUENCIES VARIABLES=login /format=AFREQ.Sometimes you'll have to determine which of two logins from the same participant occurred earlier in time or is otherwise the valid login.
This should result in a list of logins that you wish to exclude. You can use the previously mentioned selection code to do this. For example, the following could be used to remove the specified logins.
SELECT IF (login ~= '12.14:21:08'). SELECT IF (login ~= '13.15:22:23'). EXECUTE.
4. Further processing in long formatThe Inquisit raw data file is in long format, which is to say that each row is a participant by trial combination. Often the aim is to convert the data file into wide format, where each row is a single participant. Many of the following steps can be performed either while the data file is in long format or after it has been transformed to wide format.
The details of subsequent steps vary substantially between studies. I'll just discuss a couple of common tasks.
4.1 Recoding responsesYou might apply a filter based on trial code or stimulus property and then use transform - recode to convert responses. For example, you could:
- Reverse code responses to selected self-report items
- Remove or adjust certain latencies (e.g., dealing with outliers)
4.2 AggregationYou might apply a filter and use
Data - Aggregateto get a summary of a set of items. The break variable is typically
subjectID. Common examples include:
- Mean reaction time over a set of items
- Sum of errors
- Mean for a set of items on a scale.
5. Restructure data file from long to wide formatAs mentioned earlier, the aim is often to get the data into wide format with one row per participant. This can be achieved by using the
Data - Restructuretool in SPSS. It is often necessary to do this in several steps.
The general process is as follows:
- Prepare long format data. You typically want only three variables: ID, VARIABLE, and RESPONSE.
You may also need to filter out various rows that are not part of the current
export. You'll also need to temporarily delete the many variables in the raw Inquisit data file that are not needed.
- ID will typically be called
subjectin the Inquisit data file. It represents the participant ID.
- VARIABLE is a string variable that uniquely identifies what will become a new variable in wide format. It is often necessary to create this by concatenating strings from variables such as
stimulusnumber1. Go to
Transform - Computeand see the various string functions particularly (
CONCAT). Also see the example earlier on using
CONCAT. For example, I might concatenate trialcode and stimulusnumber1 for a personality test where stimulusnumber1 records the item number. It's essential that each value of VARIABLE has only one value for each participant ID. It's also best if the values of VARIABLE do not include spaces.
- RESPONSE is the actual value of the variable that you want to extract. This might be the actual response or it might be the latency.
- ID will typically be called
Data - Restructure: Restructure selected cases into variables. The identifier variable is ID and the index variable is VARIABLE.
6. Merge additional wide format data into existing wide format dataIf you have existing wide format data on participants or if you created a wide format through aggregation, or if you have multiple restructured files from the previous step, you'll probably want to merge the files together. This is straight forward in SPSS using
Data - Merge - Add variables. UCLA has a tutorial on merging in SPSS, but here are a few basic tips:
- Ensure that ID variables are named the same in the two data files.
- Sort the ID variables in both datasets before merging.
- Ensure that the formatting of ID variables are the same (e.g., make them both numeric or if they are string ensure that they have the same width).
- Ensure that the variable names other than ID have distinct names across the data files.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120001.0/warc/CC-MAIN-20170423031200-00101-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 8,814
| 68
|
https://docs.microsoft.com/en-us/previous-versions/bb643127(v=msdn.10)?redirectedfrom=MSDN
|
code
|
Event Review: Microsoft Virtual PC Overview (Session TNT1-103)
This session looks at Microsoft Virtual PC 2004—its key features, benefits, requirements, and some usage scenarios. The session explores how virtualization works and covers Virtual PC Console use, as well as integration between host operating system (OS) and guest OS. Disks will also be covered, including how to upgrade from the Connectix version of the product, as well as how to create differencing disks.
- Microsoft Virtual PC 2004 Features
- Using Microsoft Virtual PC 2004
- Microsoft Virtual PC 2004 Advanced Features
- Virtual Machine Additions
- Upgrading Connectix Hard Disks
This session consists of a Windows Media presentation and demonstrations:
Download the full session
Download the full multimedia recording of this session.
Demo: Microsoft Virtual PC 2004 Features
This demonstration shows how to create a virtual machine using Microsoft Virtual PC 2004, how to review virtual machine settings, and how to run a virtual machine on the host computer.
Demo: Using Microsoft Virtual PC 2004
This demonstration shows how to install Virtual Machine Additions and illustrates how they improve virtual machine integration. Advanced features, such as network settings, undo disks, and shutdown options, are also covered.
Demo: Upgrading Connectix Virtual Hard Disks
This demonstration shows how to upgrade a Connectix virtual hard disk to Microsoft Virtual PC 2004.
Demo: Differencing Hard Disks
In this demonstration, differencing hard disks is explained, and you’ll see how to create a differencing hard disk from a parent virtual hard disk.
Use the following resources to learn more about topics covered in this briefing.
- 824509 - Virtual Switch Networking Options in Virtual PC for Windows
- 824505 - How to Print From a Virtual PC Guest PC
- Microsoft Virtual PC
- Virtual PC 2004 Support Center
- 824963 - Where to Find the Documentation That Is Included with Virtual PC for Windows
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00487.warc.gz
|
CC-MAIN-2022-33
| 1,969
| 24
|
https://ko.ifixit.com/Answers/History/483390
|
code
|
원문 게시자: Pet Natividad (CyberNerdz) ,
I believed I maxed out the available upgradability of my 530s. 1. CPU: Intel Core 2 Duo E8600 3.33Ghz $24 2. Memory: 8GB (DDR2 PC2-5300) $30 3. Hard drive: C: 530GB SSD, D: 4TB Hybrid $250 4. 3D Video: MSI GTX 1050 TI 4GT 4GB GDDR5 $200 5. PCI: 4 Port PCI SuperSpeed USB 3.0 $30 6. PSU: 300w Second* external PSU (Slimline Power Supply Upgrade for SFF ) $22 *Yes, I wired a external PSU to add to the primary internal power supply. There were no available high power PSU that will fit the slimline case. There are claims that they are 300w but I highly doubt it. I bought one of this and mounted it externally. It supplied the CPU 12v P4 header to the motherboard, the 2 hard drives, and the DVD burner. The internal PSU basically supplied the video card and the motherboard. The 2 most expensive upgrades are the video card and the SSD drives, which Im still going to have to spend upgrading for a refurb or mid-level $400 dell. The upgrade, in my opinion, was worth it since I get to keep everything I had (OS Windows 7 Ultimate and apps) and didnt have to deal with transferring and setting it up. I also slowly did the upgrade in a span of 2 months. This is my second /backup workstation. I can now play all my Blizz games and 4K video editing with PowerDirector 16.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300573.3/warc/CC-MAIN-20220129062503-20220129092503-00635.warc.gz
|
CC-MAIN-2022-05
| 1,317
| 2
|
http://www.gotickets.com/sports/college_basketball/acc/virginia_cavailers/college_park_1341002_TT.php
|
code
|
Virginia Cavaliers vs Maryland Terrapins TicketsVirginia Cavaliers at Maryland Terrapins
October 12, 2013
College Park, MD
Important Maryland Terrapins Ticket Information
When buying your tickets to see Maryland Terrapins keep in mind:
- Seats are together, side by side, unless the Maryland Terrapins ticket NOTES state otherwise.
- Time permitting, ALL tickets are shipped via FedEx.
- Ticket prices are set by the sellers and may be above OR below the face value of the ticket.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707435344/warc/CC-MAIN-20130516123035-00013-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 480
| 8
|
http://www.google.es/patents/US7418657?dq=flatulence
|
code
|
US 7418657 B2
A methodology through which a host site may automatically insert relevant links into a set of text. In this methodology, the contents of the text are compared against a database containing character strings, and the character strings from the database contained in the text are identified. Each of the character strings in the database has an associated link that connects to other webpages on the same website or other websites. For each character string of the database found in the contents of the text, the associated link is inserted into the text. In this way, only relevant links are inserted into the text.
1. A method for automatically inserting hyperlinks into a webpage containing text, the method comprising:
comparing the text to at least one character string contained in a database to identify specific character strings from the database that appear in the text, wherein each of the character strings has an associated hyperlink that is also contained in the database;
for each of the identified character strings contained in the text, inserting the associated hyperlink into the webpage;
designating a name for a product;
storing the name of the product as one of the character strings in the database; and
communicating the name of the product to a producer of the text, wherein the name of the product is designated from a plurality of names of the product that are utilized by the producer of the text.
2. The method of
3. The method of
4. The method of
5. The method of
The present invention relates to a method for automatically inserting relevant hyperlinks into a webpage that is transmitted and displayed on the Internet.
It is well known for a user to access textual information through a host on a communication network. This process 10 is summarized in
As part of the connection, the host includes a Web site, a computer system that serves informational content over a network using standard protocols. Typically, a site corresponds to a particular Internet domain name, such as “www.Deja.com,” and includes the content associated with a particular organization. As used in the present invention, the term website is generally intended to encompass both (i) the hardware/software server components that serve the informational content over the network, and (ii) the “back end” hardware/software components, including any nonstandard or specialized components, that interact with the server components to perform services for users of the Web site.
Once the connection is established in step 20, the host and the user interact over a distributed network, such as the Internet. The Internet is a collection of interconnected (public and/or private) networks that are linked together over various communication mediums by a set of standard protocols, such as TCP/IP and HTTP (discussed below), to form a global, distributed network. It should be appreciated that, while the term Internet generally used to refer to what is now commonly known as the World Wide Web, it also encompasses other forms of data transfer and is intended herein to equally apply equally to variations that may be made in the future, including changes and additions to existing standard protocols.
An important segment of the Internet is the World Wide Web (“Web”). The Web is used herein to refer generally to both (i) a distributed collection of interlinked, user-viewable hypertext documents (commonly referred to as Web documents or Webpages) that are accessible via the Internet, and (ii) the client and server software components which provide user access to such documents using standardized Internet protocols. Currently, the primary standard protocol for allowing applications to locate and acquire Web documents is Hypertext Transfer Protocol (“HTTP”), and the Webpages are encoded using Hyper-Text Markup Language (“HTML”). However, the terms, Web and “World Wide Web,” are intended to encompass future markup languages and transfer protocols that may be used in place of (or in addition to) Extended Markup Language (“XML”), HTML, and HTTP.
HTTP is the standard World Wide Web client-server protocol used for the exchange of information, such as HTML documents and client requests for such documents, between a browser and a Web server. HTTP includes a number of different types of messages which can be sent from the client to the server to request different types of server actions. For example, a “GET” message, which has the format GET:Uniform Resource Locator (“URL”), causes the server to return the document or file located at the specified URL.
HTML is a standard coding convention and set of codes for attaching presentation and linking attributes to informational content within documents. HTML 4.0 is currently the primary standard used for generating Web documents. During a document authoring stage, the HTML codes (referred to as “tags”) are embedded within the informational content of the document.
In particular, after establishing a connection, the user forwards to the host a request for information, step 30. Using HTTP, this request is usually in the form of getting a document located at a URL. A URL is a unique address which fully specifies the location of a file or other resource on the Internet. The general format of a URL is “Protocol://machine_address:port/path/filename.” The port specification is optional, and if none is entered by the user, the browser defaults to the standard port for the service that is specified as the protocol. For example, if HTTP is specified as the protocol, the browser will use the HTTP default port of 80.
After receiving the request from the host, the host serves to the customer's computer the requested text, step 40. The text files are generally written in HTML, and when the documents are transferred from the host server to the user client, the codes are interpreted by the browser and used to parse and display the text. In addition to specifying how the Web browser is to display the document, HTML tags can be used to create links to other Web documents or sites (the tags are commonly referred to as “hyperlinks”). A hyperlink is a navigational link from one document to another, or from one portion (or component) of a document to another. Typically, a hyperlink is displayed as a highlighted word or phrase that can be selected by clicking on it using a pointing device or a mouse to jump to the associated document or document portion. A set of hyperlinks is combined to form a hypertext system, a computer-based informational system in which documents (and possibly other types of data entities) are linked together via hyperlinks to form a user-navigable web.
A system for implementing the method 10 is illustrated in
There are several known methods for placing the webpage 125 onto the host server 120. For example, host personnel may manually program the contents of the webpage. However, this process is time consuming and relatively expensive because of the cost for the programmers. It is therefore desirable for the host to automatically find the contents of the webpage from a secondary source.
For example, it is well known for a host to load a document from a second server. In effect, the host acts as a client and requests information from the secondary source. For example, as illustrated in
In an alternative method to easily form webpages, the host uses online news messages (“articles”) to provide content. Online articles are public communications and, thus, available for viewing by any user in a network. This feature allows a sender of the article to reach numerous other users. For example, the sender can request information without knowing a specific source for the information. In particular, the contents of articles are placed at locations called newsgroups for public viewing.
The Usenet news system supports thousands of different newsgroups. Each newsgroup is identified by a newsgroup name that identifies the topic of discussion carried on the newsgroup. Newsgroups are available for a vast array of different topics ranging from business technology to cooking. A user may simultaneously post an article to one or more newsgroups. The article is then distributed to news servers throughout the Internet so it can be accessed by other users. An article is a text message often with attachments such as pictures, audio segments or some other binary data. A group of computers that exchange news articles is called a news network. The largest and best known news network is the Usenet, which is carried through the Internet. The Usenet is not a physical network, but a logical network implemented on top of many different types of physical networks, such as the Internet, as illustrated in
As illustrated in
News servers make arrangements among themselves to specify which newsgroups they exchange. The “receiving” server tells the “sending” server which newsgroups it wants to receive, and the sending server is configured to send only the specified newsgroups. Servers typically send articles to other servers more or less in the order of arrival. However, this sequence can become scrambled for various reasons, and as a result, a server commonly receives follow-on articles before the original article.
There are two known techniques for preventing the article from being redelivered to the same news server, and servers usually use both of these methods in sequence. In the first technique, the transferred articles contain a “Path:” header line that records the news servers that the article has traveled through between the originating server and the current server. If the receiving server already appears in the “Path:” line, the sending server does not try to send the article because the article has already passed through the receiving server. In the second technique for preventing the resending of an article, the servers use a “Message-ID:” header line in the article that contains an identifying code that is unique for each article. In particular, before transmitting the article, the sending server asks the receiving server, in effect, “Has the article with the Message-ID already been received.” The receiving server responds either “No, please send a copy,” or “Yes, already received so do not send it,” whereby the sending server only sends the article if it is not already received by the receiving server.
Eventually, most servers that carry the newsgroup have a copy of the article, and ideally, an article to a newsgroup travels to all sites (news servers) that carry the newsgroup. The final result is that tens or hundreds of thousands of copies of the article will be present on news servers scattered all over the globe.
“News clients” or “newsreaders” communicate with the news server, via NNTP. Many news clients, such as Microsoft Internet News®, Microsoft Outlook Express® and Netscape Communicator's Collabra® application are commercially available.
By accessing the Usenet, the host 120 may act as a news client to subscribe and collect news articles. The news articles are public-domain and may be freely used and modified. The host 120 displays the article to users throughout a distributed information network, such as the Internet. The host may then become a portal through which a user may access the Usenet without the use of a news server.
In particular, the host adapts the contents of the articles for use over the Internet. In this process, the host converts the news to HTML format for transfer via HTTP to the client. This procedure is relatively simple because the articles are in text format and can be readily used in an HTML document. Typically, the host serves to a user an HTML page with an open area or box reserved for the contents of the article. The HTML page further contains a command to access and display the contents of the article. The HTML command “HFER” (hypertext reference) allows the webpage to access a specified document. For example, the command, “HFER=/www.site.com/id=x,” allows the HTML page to access the contents of the document number x stored at the server at the URL, www.site.com.
Thus, a host may employ several techniques to create or obtain text to display to users. Once the host has the text, it is known to automatically insert hyperlinks with the text. For example, it is common to provide advertisements around the text that link users to sponsors of the host site. Similarly, the webpage generally contains links around the text that direct the user to other parts of the website. However, these automatically inserted links have little relevance to the specific contents of the text and are displayed regardless of the contents of the text. The disadvantage of the unrelated links is that they are of little interest to the user and can be easily ignored. For example, a website could simultaneously display a criticism of a product adjacent to an advertisement for the same product.
Furthermore, by providing relevant links, the host encourages users to access information and features because the user will naturally wish to access the linked page if the page is related to a subject of interest to the user. If irrelevant links are provided, the user may become frustrated and avoid using the links, even if some of the links direct the user to highly helpful sites.
While the host personnel may manually insert hyperlinks into a webpage according to the contents of the text, this process is time consuming and relatively expensive because of the cost of labor for the programmers.
Thus there exists a current need for a method to identify the subject of the text in a webpage and to automatically insert relevant links into the text without requiring extensive reprogramming of the page. In this way, the host integrates the text with the other contents of the host site by inserting relevant hyperlinks that interconnect the related contents of the site. This design allows a user to more easily identify and access the relevant contents of the host site by selecting links, thus facilitating a user's access to other information and features contained on the host site. Similarly, the host site may alert a user of newly available features or products, by linking to them from popularly accessed webpages of relevant text.
The host may also wish to modify the text to promote other relevant websites. In particular, the host may wish to direct users by linking to the site of a relevant sponsor, such as a manufacturer or a vendor of products of interest to the user. A link should only connect to sponsors of interest to the user. By better targeting users, the host site may increase advertising revenues.
Furthermore, by linking to relevant webpages, the host may create associations with certain topics or products. For example, a host that provides information on music products and links to related music vendors sites may become a primary portal through which buyers access music related information and products.
In view of the identified current need, it is an object of the present invention to provide a methodology through which a host site may automatically insert relevant links into a set of text. In this methodology, the contents of the text are compared against a database containing character strings, and character strings from the database contained in the text are identified. Each of the character strings in the database has an associated hyperlink that allows users to connect to other pages on the same website or other websites. For each character string of the database found in the contents of the text, the associated link is inserted into the text. In this way, only relevant links are inserted into the text.
These and other features and advantages of the invention will now be described with reference to the drawings in which like number refer to like elements and in which:
The present invention provides a method 200 for automatically inserting hyperlinks into text contained in a webpage, as illustrated in
During the step 210, the text may come from various sources, as described above. In one embodiment, the text is manually entered by host personnel. However, as previously described, this method is time consuming and relatively expensive because of the labor involved. Therefore, in a preferred embodiment, the text is loaded automatically from a second website, as described above in
In an alternative preferred embodiment, the text may be loaded from newsgroups articles during step 210. This process is described above and illustrated in
Once the text is loaded, the contents of the text are compared to character strings contained in a database, step 220. Each of the character strings has an associated hyperlink also contained in the database. An exemplary database is illustrated in the following table:
While Table 1 shows a hierarchical database, it should be appreciated that many other forms of for databases are known and may be used. For example, a relational database may be used to store the character strings and the associated links.
In order to meet the needs of electronic commerce, the database should contain product names that may appear in the text. In particular, the database may include (1) common product identifiers (“CPIDs”), (2) a name defined by the host to identify a product (“shortname”); (3) full, formal name for a product; and/or (4) categories of products.
A shortname should be the most common name used to reference the product, while being as unique as possible. The shortname is often a subset of the full product name, and the same product may have more than one shortname. The use of the shortnames is advantageous because it allows an easy-to-use standard terminology for the same product that can be applied regardless of the language or format of the text. The host may coordinate with producers of text documents so that the producers of text consistently use the shortname for a product. This process helps increase the relevancy of the hyperlinks by reliably indicating a relation of the text to a product.
By using only the unique shortname to identify the products discussed in a text document, the number of character strings contained in the database may be reduced because the database would not need to contain every possible name for a product. By reducing the number of character strings, the amount of computations and the computational time required for step 220 may be greatly reduced.
In one embodiment, the links point to other webpages contained on the same website as the webpage displaying the text. In particular, the links may connect the users to webpages on the site related to products mentioned to the text. In this way, the website could direct the user from text related to a product to a webpage containing further information on the same product. Alternatively, a website may allow the user to purchase the mentioned item by linking the user to a webpage for placing an order.
In another embodiment, the relevancy of the links is improved by using secondary indicators of the subject matter of the text. For example, the host may look to the topic of a newsgroup or source site and use this information in the selection of relevant links. For example, the subject matter of the newsgroup may be used to limit the number of character strings. For example, when providing links to an article from a newsgroup related to cars, the host may search only character strings related to cars. Again, by limiting the numbers of character strings to be searched, the number of computations and the time for the computations in step 220 is reduced.
Then, in step 230, the relevant links are inserted into the contents of the webpage. For example, the hyperlink may appear as a symbol or banner adjacent to the text. The user then may select and activate the link by providing an input, such as a mouse click on the link.
In one preferred implementation, the hyperlinks appear in the contents of the text rather than at the periphery. The user therefore is exposed to the hyperlink while reading. The appearance of the identified character string is altered to indicate to the user that the character string is a hyperlink. Typically, after the character string is converted into a hyperlink, the character string is underlined. The character string may additionally be displayed in a different color to further differentiate the hyperlink from the remainder of the text. For example, if a user reads an article in a news forum about cars and the article contains the word “Acme,” the present invention causes the word “Acme” as displayed within the text as a hyperlink to the Acme page. Because the links embedded in contextually relevant text, users are more likely to click to view the linked destination.
As illustrated in
In a preferred implementation, only the first occurrence of a character string in the text is converted to a hyperlink. This method helps preserve the original appearance of the text and helps avoid the clutter caused by simultaneously displaying numerous links to the same location. Overall, the present invention seeks to avoid significantly reducing the appeal of the host site. For example, there should be a maximum 1% reduction in pageviews per session and maximum 1% increase in the abandonment rate (or “frustration rate”).
Multiple insertions of the same link in a single text file may be avoided using any of several known techniques. For example, the site may be programmed to store a record of the character strings identified in the text and to add links only at the first instance of each character string. Alternatively, the database may be modified by removing a character string after the string is located in the text. In this way, only a single instance of the character string is identified.
In addition, the insertion of the hyperlink into the text should not disturb any existing HTML codes. Therefore, if the insertion of hyperlink at the initial location would disturb the HTML code used to form the webpage, the hyperlink should be added later in the text at a subsequent occurrence of the character string. Alternatively, the hyperlink may be positioned in the periphery of the text.
As described above, it is desirable to make the links as relevant as possible. With common product names, it is possible to mislink a string of text (i.e., provide a link leading to an unrelated product or concept). One way to decrease the likelihood of mislinking is to make the database search case sensitive (e.g., only match Windows®, not window). The database may be further adapted to allow for a list of stopwords (i.e. common words that should not be automatically linked) because the risk of mislinking the stopwords is too high. In addition, the database may be designed such that certain character strings would not be linked even if portions of the character strings would normally be linked. For example, in the hypothetical example of Table 1, “car” may link to the Acme Car company site, but “Beta car” of a hypothetical rival Beta Car company should not link to the Acme site.
One concern with modifying text received from a third party is the risk of the possibility of copyright infringement. In particular, the links may be perceived as adding to an author's copyrighted work without the author's permission. This use for the text may fall outside of the host's implied license to use the text. One way to avoid such a possibility is to not change text to insert links when the author has indicated that modification is not permitted. For example, the contents of the text may contain an explicit prohibition against modification of the text contents. Similarly, Internet documents may contain a header that indicates the author does allow modification of the text. This is generally in the format of a “X-no-modify” header.
In another embodiment, the user may opt to receive only text and not the hyperlinks. This may be accomplished by displaying the original text document to the user.
A user looks at the host site regularly to keep up with his newsgroup reading. While browsing the rec.arts.movies forums for anything on musicals, he notices that some of the movie titles are linked by being displayed in hypertext. He clicks one link, and he is taken to a webpage containing information about the musicals. As he continues to browse, he discovers that information on many other products are linked through the newsgroup articles.
An author writes for a text-based, third-party site. The third-party site signs up with the host to commerce-enable all their text documents. The author sets up a feed that enables the host site to download the text from the third-party site. The host site inserts relevant hyperlinks into the text and provides the third-party host a list of the hyperlinks contained within each text document. If the third-party site indicates that it does wish to modify the appearance of the text by placing the links in-line with the contents of text, the links may be added to the periphery of the text.
The invention having been described, it will be apparent to those skilled in the art that the same may be varied in many ways without departing from the spirit and scope of the invention. Any and all such modifications are intended to be included within the scope of the following claims.
Citas de patentes
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042986806.32/warc/CC-MAIN-20150728002306-00122-ip-10-236-191-2.ec2.internal.warc.gz
|
CC-MAIN-2015-32
| 25,462
| 64
|
https://jwplayer-support-archive.netlify.app/questions/16970719-could-not-load-file-can-not-play-file-
|
code
|
could not load file; can not play file????????????
I have my server computer files shared with my other computer, and when i go to the inetpub on win web server 2008 from my other computer, I can open the iisstart.htm and click on the icon to open jwplayer file to play. and it does so just fine
But when I open it from the web page that is served, it says error loading media, file could not be played.
What am i doing wrong???
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00272.warc.gz
|
CC-MAIN-2023-14
| 428
| 4
|
https://www.groundup.news/target/358
|
code
|
Clue: Troy's unbelievable visionary
Make words of at least four letters using the grid letters at most once.
The centre letter must be in every word.
There's one nine-letter word.
There are no plurals or proper nouns, except possibly for the nine-letter word.
Words are drawn from our dictionary which has about 100,000 words.
You can either type the letters or click on them. To delete a letter use the backspace key or click it again.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474581.68/warc/CC-MAIN-20240225035809-20240225065809-00535.warc.gz
|
CC-MAIN-2024-10
| 436
| 7
|
https://lists.qt-project.org/pipermail/interest/2018-December/031989.html
|
code
|
[Interest] Segmentation fault on exiting Qt event loop
kshegunov at gmail.com
Mon Dec 17 13:04:45 CET 2018
On Mon, Dec 17, 2018 at 1:39 PM Andrew Ialacci <andrew at dkai.dk> wrote:
> Assuming each threads quit() is called and all operations are stopped in
> each thread correctly is using a loop and sleep still ok?
Ok's a relative term, but I wouldn't do (or recommend) it. That's the whole
reason you have QThread::wait (and pthread_join, std::thread::join and so
on) to begin with. Just wait for the threads the usual and recommended way
instead of polling them for no obvious reason. :)
-------------- next part --------------
An HTML attachment was scrubbed...
More information about the Interest
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510983.45/warc/CC-MAIN-20231002064957-20231002094957-00747.warc.gz
|
CC-MAIN-2023-40
| 701
| 13
|
http://www.ruby.mn/projects
|
code
|
These projects are the work of Ruby.MN members. Some are Ruby related. Some are not.
In-progress Rails project to help the SCA community (http://www.sca.org) catalog venues for holding events.
The SCA (Society for Creative Anachronism) is a non-profit 501(c)(3) group that promotes the study of Europe in the Middle Ages and Renaissance.
Ruby-appscript that lives in the Scripts Menu of your Mac & sends the selected iPhotos to your WordPress blog.
Synop.it is a wiki for summaries of popular articles on and off the web. With the Synop.it bookmarklet, when you've found a lengthy article you can click and see if a summary has been created, or you can be the first to create one. We built this app in response to the daunting amount of interesting articles that one runs across on any given day on the web.
Twitter + Flickr = LOLs?
This site is a research-based system for obtaining parent's reports of their child's present functioning.
The Child Development Chart at the center of the system helps determine "What" and "How Well" the child is doing, and to anticipate future development up to age five.
The report helps both parents and professionals focus on any parent concerns and provides a profile of the child's development in five areas: Social, Self Help, Gross motor, Fine Motor and Language.
Social network for BBQ fanatics. Uses CommunityEngine to provide social networking features and adds on data that is specific to BBQ'ing.
Making Coffee Tweets Suck Less... or More
This micro-app is a coffee tweet builder for Twitter. It was a fun toy app built to experiment with Twitter's OAuth system, jQuery, and Rails 2.3 templates.
RaceDay makes registering for your racing events and managing them insanely simple. Accept registrations, collect fees, manage all the aspects of your race event, manage membership and membership dues and more with this easy to use, very affordable software.
Track time, log expenses, invoice clients, keep track of account receivables and revenue.
Co-op makes it easy to stay connected with your co-workers without disrupting them. Your team can use it to post updates, ask questions, share links, and track time.
In stadium mobile phone photo sharing. See your photos up on the big screen!
This software is used locally by the Twins, Wild, and Gophers.
Find out how annoying it will be to follow someone on Twitter before you follow them!
Built by Luke Francl and Barry Hess using Sinatra and the Twitter APIs.
Carpool +100 people? No problem.
Google Maps UI. Driver/Rider selection menu.
CURRENTLY UNDER INTENSE REFACTOR.
Twitterless started out as a Twitter app that notifies people when they lose followers, but it has grown into a general sandbox for different Twitter enhancement ideas. Currently we are working on a smart link culling service.
view140 is a simple pic view of Twitter. search by hashtag or keywords and get visual results. keyboard friendly.
E-Commerce Site: Red Stamp offers distinctive, high quality, paper cards and other personalized paper products.
A Social Media Productivity tool that connects with multiple socia media networks and blogging platforms to allow the scheduling and posting to multiple sinks at once. Includes a complete media library supporting Images, Audio, and Video. Completely written in RoR and utilizing Heroku.
JRuby is an implementation of the Ruby programming language atop the Java Virtual Machine. JRuby aims to have both Ruby 1.8.6 and Ruby 1.9.1 compatibility with excellent performance for both. JRuby also provides access to Java libraries from Ruby code, making the whole of Java's ecosystem accessible to Ruby programmers.
Subscription membership website with videos, forums, blog and more for those who are interested in growing a successful Straw Bale Garden. Straw Bale Gardening is a special kind "container" gardening with the following benefits: 1. Get high yields - especially anything that grows on vines like tomatoes, cucumbers, zucchini, squash, pumpkins etc 2. Doesn't need weeding 3. Does not require soil and can even be grown on parking lots, balconies and tainted soil 4. Extends the growing season by weeks by being able to start early as the bale conditioning process generates heat 5. Can be conventional or 100% Organic: your choice 5. The garden is raised above ground 16 inches so you are not on your knees
Website for Hudson Hospital & Clinics. Full-blown CMS. The client has created hundreds of pages, doctor profiles, press items with simple tools built in rails. CMS features full control over sidebar graphics, thumbnails, header graphics, graphics links and more. The app also provides nifty role-based authorization and an inward-facing set of pages viewable by hospital staff only.
Fun project to say the least. Thinking about putting it on heroku.
Classroom Agenda lets teachers post assignments, exams, field trips and general announcements. Parents and students can see what's going on in class from home.
Collaborative feed aggregator and reader, designed for easily finding and sharing the most important items from your favorite sites.
My first "real" Rails app. You can resize or crop pictures.
The premier web based solution for managing community education.
A website which has Company management inside along with stripe payments
Ruby and Rails based wiki engine using sessions for authentication and HAML for formatting.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948532873.43/warc/CC-MAIN-20171214000736-20171214020736-00604.warc.gz
|
CC-MAIN-2017-51
| 5,367
| 36
|
https://man.emadelsaid.com/Event-ExecFlow-Job-Group.3pm/
|
code
|
Event::ExecFlow::Job::Group − Build a group of jobs
jobs => List of job group members,
fail_with_members => Boolean whether group should fail with its members,
stop_on_failure => Boolean whether execuction should stop on failure,
parallel => Boolean whether members may be executed in parallel,
scheduler => Scheduler object for add. control of par. execution,
Use this module to group together jobs of any type, including groups, which results in arbitrary complex nested job plans.
Attributes can by accessed at runtime using the common get_ATTR(), set_ATTR() style accessors.
[ FIXME: describe all attributes in detail ]
[ FIXME: describe all methods in detail ]
Jörn Reder <joern at zyn dot de>
Copyright 2005−2006 by Jörn Reder.
This library is free software; you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY ; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public License for more details.
You should have received a copy of the GNU Library General Public License along with this library; if not, write to the Free Software Foundation, Inc., 59 Temple Place − Suite 330, Boston, MA 02111−1307 USA.
above document had some coding errors, which are explained
Around line 691:
Non-ASCII character seen before =encoding in ’Jörn’. Assuming CP1252
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00412.warc.gz
|
CC-MAIN-2022-21
| 1,599
| 18
|
https://medium.com/@wdziemia/responses
|
code
|
Senior Software Engineer @nytimes
Hi Jaron Wong! Can you post a screen shot so I can see?
Hey Kevin Cronly! I submitted the APK to APKMirror, will ping once its up there!
In the mean time: https://drive.google.com/open?id=1nvPFRsHXuDW7-GuNjquWBAUwR-VkxWv1
Hey TouGe! Does long pressing home with Google as the Assist app work?
Long pressing home with Google as the Assist app does work?
Roman Zavarnitsyn The VoiceInteractionSession gives you that information! There is a great sample project by Commonsware that I recommend!https://github.com/commonsguy/cw-omnibus/tree/master/Assist/TapOffNow
Hey all, by popular demand I’ve created a very basic example project that you can take a look at here: https://github.com/wdziemia/Nightmode
AndroidDeveloperLB There can be, will write one up during my upcoming holiday break!
I think you can just use %s with boolean and it will print “true” or “false”
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738746.41/warc/CC-MAIN-20200811090050-20200811120050-00520.warc.gz
|
CC-MAIN-2020-34
| 907
| 10
|
https://community.adobe.com/t5/premiere-pro-discussions/premiere-pro-freezing-and-stops-to-work/td-p/13016148
|
code
|
I'm at my wit's end with this one. Shortly after starting the work, Premiere simply freezes with a cursor of the current tool and completely stops to work, even when I'm just trying to select some elements, and the only solution is to close it down with Task Manager. Btw, Task Manager also shows that its CPU drops to 1% and lower. I've tried everything: updating GPU drivers, reinstall, update the software itself, tweaking something inside the app. Nothing seems to work. Please, help
Error or problem
Freeze or hang
Thanks for responding! So, I did mean everything, including preferences reset, new project import, and media cache clear. It just happened on one unlucky day, still is going and the app is impossible to work in.
Premiere Pro ver. 14.5.0, build 51, Windows 10 64-bit;
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573399.40/warc/CC-MAIN-20220818185216-20220818215216-00079.warc.gz
|
CC-MAIN-2022-33
| 786
| 5
|
https://stackapps.com/questions/8027/can-we-change-the-stack-apps-homepage-tabs-to-not-show-questions-tagged-obsolet/8029
|
code
|
Following What should we do about dead listings?, we've started to add obsolete to questions that are 'deprecated, no longer available/supported, or no longer relevant'.
However, on the front page, there are a few obsolete question ('apps' tab):
There are also lots of placeholder questions shown:
Could the system please take obsolete and placeholder into account?
It would help to make the active and working apps more visible to casual users.
Also, as you can see from the right of the above screenshot, post author names are cut off -- could that be fixed too?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649293.44/warc/CC-MAIN-20230603133129-20230603163129-00718.warc.gz
|
CC-MAIN-2023-23
| 564
| 6
|
http://lists.ardour.org/pipermail/ardour-users-ardour.org/2009-May/022513.html
|
code
|
[Ardour-Users] Any ardour users/studios in/near Halifax, Canada?
ralf.mardorf at alice-dsl.net
Wed May 27 00:47:59 PDT 2009
David Taht wrote:
thank you, you made me starting the day with singing Penny Lane :).
Excepted to The Beatles I don't have any of those songs in my vinyl
collection, it's not really the music I'm listening to, but I like your
Interpretations a lot, IMO they have a touch of melodic Punk Rock, even
if it sounds more like folk/singer-songwriter music, it feels a bit like
Hüsker Dü and similar music.
> bootstrapping his machine up to ubuntu studio
Are there any troubles? The last 14 month my new mobo wasn't fine with
rt-audio on Linux (I'm not dissing Linux audio), but now it seems to be
fine. If you prefer Ubuntu as distro and Ubuntu Studio cause troubles,
you might give 64 Studio 3.0-beta3 (based on Ubuntu Hardy) a chance.
It's the first time that rt-audio is fine on my machine and this without
doing any settings myself. (For my troubles I guess using jackdmp
instead of jackd solved the troubles.)
I don't know if there is a user-mailing list for Ubuntu Studio, but
there is one for 64 Studio, so your people in Halifax can get help by
the list and I guess there are people from Canada subscribed to the list.
I'm not running Ardour, but I never heard about problems with Ardour on
64 Studio 3.0-beta3. The only problems you might get, is that you need
to compile some stuff, while there are no jack.h (libjack dev) and
kernel headers in the repository, because its still a beta version.
I often read about problems with clean installations of Ubuntu Studio. I
don't know if what I read is representative for Ubuntu Studio. Maybe
it's easier to use 64 Studio 3.0-beta3.
More information about the Ardour-Users
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00719.warc.gz
|
CC-MAIN-2023-50
| 1,746
| 29
|
https://bcgsoft.com/FeatureTour/Feature?id=254
|
code
|
BCGControlBar Pro (MFC)
BCGControlBar for .NET
The library has a built-in "Carbon" application look. All basic GUI elements such as menus, toolbars and docking panes are drawn with a dark "carbon-style" theme, so the user can concentrate his look on the application view. You can change the look on the fly.
// Enable "Carbon" look:
CBCGPVisualManager::SetDefaultManager (RUNTIME_CLASS (CBCGPVisualManagerCarbon));
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100909.82/warc/CC-MAIN-20231209103523-20231209133523-00009.warc.gz
|
CC-MAIN-2023-50
| 414
| 5
|
http://www.millerwelds.com/resources/communities/mboard/showthread.php?29334-Miller-Maxtron-450-cc-cv-s-amp-Syncrowave-351-s-for-sale&p=293098&mode=threaded
|
code
|
Recovered these cleaning out a warehouse. Have 2 each. I assume they are not working. No cables, etc. Make offer. Located in Louisville, KY. Can send photos, etc.
Results 1 to 1 of 1
09-13-2012, 02:15 PM #1Junior Member
- Join Date
- Sep 2012
Miller Maxtron 450 cc/cv's & Syncrowave 351's for sale
Last edited by Mellwood2010; 09-13-2012 at 02:27 PM.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164022163/warc/CC-MAIN-20131204133342-00012-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 350
| 7
|
http://www.elrossa.ru/truth+or+dare+dating/1393.html
|
code
|
Truth or dare dating dating anal girl
Enjoy the read and make a note of the ones you like. Then he/she must step out of the house, walk up to the nearest lamppost, touch it, yodel for 5 seconds, and then return to base.
To play the game just turn on the music and bop the balloons around the room while dancing.
Whenever the music stops everyone must grab a balloon.
Truth Questions for Friends : Many teens don’t know how to play this Game, so here are the game rules to be followed.
Initially, the first player starts the game by asking another player to choose “Truth Or Dare”? If that particular player chooses “Truth”, then opponent player asks a question which can be funny, embarrassing, dirty, good and simple one which player 2 must answer truthfully without fiction.
We share each and every moment with our best friends.
So when we play truth or dare with your best friends asking some interesting questions, the game will be of more fun which leads to better Friendship.
When you have kids that are “tweens” it is such an awesome age!
Truth or Dare is a great way to break the ice with someone new!
If a person loses all their pennies they must complete a dare.
I really encourage you to head on over and check it out to see if it is something you could use in your own life.
Everyone who HAS done this before must give up one of their pennies.Tags: Adult Dating, affair dating, sex dating
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866932.69/warc/CC-MAIN-20180624102433-20180624122433-00319.warc.gz
|
CC-MAIN-2018-26
| 1,414
| 13
|
http://spinrewriter9.com/spin1/the-five-secrets-about-spin-rewriter-9-only-a-handful-of-people-know-five-things-you-need-to-know-about-spin-rewriter-9-today.html
|
code
|
Spin Rewriter 9spin rewriter article rewriter tool rewriter tool article spinner article rewriter
Spin Rewriter Free
Spin Rewriter Reviews
Spin Rewriter Free Download
article spinningSpin Rewriter 9 Using Spinbot you can instantly spin (or rewrite) a chunk of textual content up to 10,000 characters in length (or about 1000 words), which is much longer than an average website or freely-distributed article. With a single click you can turn your old blog post or website article into a completely new one, thereby doubling the payoff you get in return for the time and energy you have already invested into creating quality website content. Spinbot is lightning fast as well as free, so there is potentially no limit to the amount of free web content that you can create using this tool. PartnershipsMembersAccount UpgradesAdvertiseMarketplace Clickopia Review and Bonus Hi Etai, Image Compression 45 Views · View Upvoters Audiobooks Book Depository Fun & Lifestyle Paraphrasing is very simple: it is basically putting what you read in your own words, and in a scholarly context, properly attributing the original author, etc. What do you mean by Article Rewriter tool? Recapitulation Of Spin Rewriter Review Say a lot with a little ten - two = English is a Global language and is considered to be the bonding language between people of different ethnicities and regions. It connects the whole world. Being one of the most famous languages, it.... Because content is what makes your business run. Featured Products Joined:Sep 20, 2017 SEO, AdWords Management, Social Media Marketing, and more. Answered Mar 12 2016 Ease of Use – One of the best things about Spin Rewriter is that this is very easy to use and as long as you can read, you will not have any issues in churning out lots of content. DA40+ backlinks from $0.15 per link. Sign up NOW - with our 3 Day FREE Trial: View all posts by Zac Johnson Click to share on Google+ (Opens in new window) Which is the best online spinning article tool to spin the content for SEO? kathywiley 6 years ago Single Seo Tools ← Video Wave Pro Review & Product Tour Online Sales Pro Review & Product Tour → 0 replies Top Quality REAL Facebook Fans. Get your page noticed! I have a great software called Speed Rewriter, but it is not a spinner. It simply breaks down the article sentence by sentence and makes it easier for you to rewrite it. However, you still have to do the rewrite manually. Spin Rewriter is a powerful tool. It uses algorithms to completely revise the articles into a copy that no one has ever posted before. PARAGRAPH SPINNING You Might Also Like... Affiliate Programs The most widely supported API in the SEO industry: huge numbers of tools let you plug Spin Rewriter straight in. 0 replies 1 retweet 0 likes So whether you want to use an article spinner to create fresh content for your blog or your website, this is the best option you have. Apps & Games › Lifestyle 1.7k Views · View Upvoters Digital worth Academy Review 2018:The 6 Figure Amazon Niche Site Revealed WP FAQ Best Article Rewriter tool by CoderDuck our Article Rewriter tool will rewrite your content, Rewrite unlimited articles or text via copy & pasting, Change sentence with same meaning. Article Rewriter Spinner tool is a one-click article rewriter tool that is not requires login,signup or registration if you want to use this free version. All you need to do is enter human readable text and you will get human readable text out. A few years ago we were wondering - is there a good paraphrasing website with an automatic paraphrasing tool online? We searched the Internet for a good sentence rephraser, and altought we found many, none of it could rephrase paragraphs correctly. Decision was made - create the best English paraphrasing tool to rewrite my or your text. Only our "paraphrase maker" has a built-in reword generator which will help rephrase any text automatically and accordingly. Paraphrase Online is free tool that can be used for automatic text processing: our paraphrasing tool (or article rewriter, article spinner, text rewriter etc.) is an online tool that automaticaly rewrites any provided content into an unique one by changing and mixing specific words and phrases with suitable synonyms. Our article rewriter is an advanced automated paraphrasing tool that allows instant online paraphrasing of any article into a unique content. Rewriting of content can help you greatly in avoiding the penalties you may suffer due to plagiarism. This free paraphrase tool does not require any registration or sign up, all you need is to enter any human readable written content, and you will get human readable rewritten content in the results. The primary goal of this software is to help compose fresh content completely for free and in no time. Mastering an online article rewriter tool can prove quite tricky. When paraphrasing or rewriting any text or paragraph, many writers and students tend to get hold of the technology that enables them to rewriter any written piece of content into a fresh piece, yet keeping the original meaning of the text same. This approach is usually used to simplify a piece of writing, minimize the use of quotes or target an alternative audience. When article spinner tool is used correctly, paraphrasing turns out to be much more concise than the original text, covering all the main points while preventing the risk of plagiarism. Whether you are a student or writer, you can use this free article rewriter online to rewrite any text to save time and get a different version. If your life revolves around writing, perhaps this is what you do for a living then this free article spinner tool is the answer to all your problems. It is the best academic and most SEO friendly paraphrasing tool that enables you to get a rewritten article with great flexibility. This competent instant article spinner helps you make a better attractive and comprehensive article in seconds. You can use tool to get rid of plagiarism or speed up your SEO performance. This is the definitive and complete rephrasing tool for rewrite and reword any of your sentences! Vector Images Spin Rewriter is the go-to spinner of choice for half the internet marketing industry in 2018. There is also the option to spin capitalized words (assumed to be proper nouns) as well as leave any number of words unchanged, depending on whatever you enter into the "ignore" field, separated by commas. You also have the option to only keep the sentences that were altered a minimum percentage, as indicated by the "Keep Sentences that Changed" option. Bonuses YES, HUGE BONUS Essay Bot You can improve your text quality in w ways: 7 Mind Numbing Facts About Spin Rewriter 9. | 15 Mind Numbing Facts About Spin Rewriter 9. 7 Mind Numbing Facts About Spin Rewriter 9. | 7 Preparations You Should Make Before Using Spin Rewriter 9. 7 Mind Numbing Facts About Spin Rewriter 9. | You Should Experience Spin Rewriter 9 At Least Once In Your Lifetime And Here's Why.
Legal | Sitemap
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512679.76/warc/CC-MAIN-20181020080138-20181020101638-00505.warc.gz
|
CC-MAIN-2018-43
| 7,031
| 6
|
https://danielmiessler.com/p/how-attack-and-defense-can-leverage-supervised-and-unsupervised-learning
|
code
|
- Unsupervised Learning
- How Cyber Attack and Defense Can Leverage Supervised and Unsupervised Learning
How Cyber Attack and Defense Can Leverage Supervised and Unsupervised Learning
A lot of people are starting to talk about how Machine Learning can help attackers and defenders in cybersecurity.
It’s an interesting topic, and I want to break down the difference between four types of cases: Supervised and Unsupervised, and Attack and Defense.
First, Supervised vs. Unsupervised.
Supervised Learning is where you are looking for the answer for whether X is a Y thing or not. Is this a dog? Is this a real attack? Is that user malicious? Those are Supervised types of questions.
You feed Supervised ML algorithms by giving them two things:
Tons of examples of situations where X was Y, and where X was not Y.
Tons of data where we don’t know which it is.
The algorithm then decides which are Y and which are not.
With Unsupervised Learning you aren’t telling the algorithm that you have Y’s and not Y’s. You’re not asking for a yes or no answer back. What you’re doing is asking the algorithm to identify patterns in the data, which you can then explore.
So it might be that you give it a whole bunch of data about shopper behavior, and you find some weird pattern that you don’t understand. And after researching it you find out those shoppers were the ones who had recently become engaged.
Unsupervised Learning, in other words, shows you new things about data that you didn’t even know to ask. Whereas Supervised Learning answers whether new X’s are Y’s or not, where you already taught it what a Y was.
Great, now let’s do InfoSec
So the way this applies to infosec is like this: When attackers or defenders need to confirm a known thing, they might use Supervised Learning. And where they want to search for new ways to find attackers (or new victims) they will use Unsupervised Learning.
Supervised Learning (Attacker)
Question: Does this fuzzing attack yield RCE?
Question: Is this target a qualified victim?
Question: Is this a honeytrap or a real system?
Supervised Learning (Defender)
Question: Is this a pcap of attack traffic?
Question: Will this user go rogue within 12 months?
Question: Are these logs generated by a legitimate user or an attacker?
Unsupervised Learning (Attacker)
Question: Which of these fuzzing attempts should I investigate?
Question: Find patterns in my internet scans.
Question: Find patterns in these spam responses that might indicate who’s a more likely victim.
Unsupervised Learning (Defender)
Question: Show me patterns in outbound DNS requests.
Question: Look at the frequency of outbound file uploads.
Question: Show me user activity in our flagship web app.
Just as in other disciplines, the breakdown is clear: Supervised Learning gives you a yes/no to a question you already know to ask, and Unsupervised Learning gives you patterns and hints about possible new questions you should be asking.
Expect both attackers and defenders to be using both with increasing frequency in the coming years.
No related posts.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00547.warc.gz
|
CC-MAIN-2024-18
| 3,087
| 35
|
https://www.phpbb.com/community/search.php?author_id=128405&sr=posts
|
code
|
You can edit the template file - but this disables it all together, the options per article won't have any effect anymoredjdurant wrote:...
What edits can I do to stop this from showing?
Install it (see instructions in the package), go to the ACP (as told in the instructions), use it (as stated in the instructions and the explanations included in the ACP pages added by the MOD). Any questions?ebuzz wrote:how to use CMS
No option to do so, for now - sorry!djdurant wrote:On the category list view how can I remove the tags/categories/calendar boxes on the right?
No comments feature implemented as of now...however, it's on the long list for future features.danswano wrote:When i publish a new post on my cms or blog i've created, can someone reply or there is no comments system ?
Sorry, but I can't tell...this sounds like a *very* customized setup. You might need to try it out, but right upfront...support won't be able as the setup seems to be unique to your very own server.network23 wrote:Please let me know if your mod might solve my problem.
Would this mod make all this possible?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703536556.58/warc/CC-MAIN-20210123063713-20210123093713-00239.warc.gz
|
CC-MAIN-2021-04
| 1,092
| 7
|
https://heutagogicarchive.wordpress.com/2012/06/26/discussing-co-creating-open-scholarship/
|
code
|
With Escola de Comunicações e Artes (ECA-USP) Sao Paulo Brasil
Co-creating Open Scholarship; was a paper Nigel Ecclesfield and I wrote a year ago for ALT-C. There was a lot of interest in reflecting on what we had learnt about learning technology since ALT was founded in 1993, and this was what we addressed. We were asked to expand our original submission into a journal article which is now freely available in ALT’s open repository. There was some debate about using Boyer’s model of scholarship as a baseline but, unlike Martin Weller in Digital Scholar, we felt that Boyer’s model itself needed updating. This was because what we had learnt most from using learning technology was about the pedagogy of learning itself. Inspired by Terry Anderson’s excellent keynote at ALT-C on Open Learning and his early scoping of Open Scholarship we felt that we should provide a synthesis and propose a new model, derived from Boyer, upon which we could debate the future of scholarship. What we are attempting to do in this post is provide some supporting arguments for such a debate with the Escola de Comunicações e Artes in Sao Paulo.
Framing the debate; In 2012 there has been a lot of discussion on what has been called open learning. However this is perhaps more about the massification of learning, or rethinking mass education, and seems to be focussed on scaling up traditional learning models, and addressing the opportunities and threats of globalisation using technology, whilst keeping the same institutional and policy frameworks. I’m thinking of Udacity, Coursera and MITx amongst others, as well as MOOCs. As I discussed on my blog on Open Academic Practice I had been a teacher for 15 years before I designed technology-enhanced (blended) learning for the first time in 1997, and I immediately designed for collaboration and discussion; which are core features of learning that do not scale and so don’t interest the biggest institutions. I have been working on pedagogically related issues concerning the use of technology ever since, mostly with an informal group of researchers known as the Learner-generated Contexts Research Group. This post outlines from where our ideas about co-creating open scholarship emerged.
Moving to networked society; for me rethinking learning, or rather unpicking how learning works when we design new educational systems using technology (or not), has to be tied into the purpose of learning. Learning is what education systems are set up to deliver and education systems are built by societies to reproduce themselves. The problem we face in designing learning in the 21st Century is that in many ways we are poised to move to a network society whilst, to use Ben Hammersley’s phrase, those who grew up in hierarchical society are in charge. Particularly since the advent of the architecture of participation provided by Web 2.0 tools, especially social networks, which are perhaps discussion platforms, we have the opportunity to rethink learning given the access to information that the internet now provides.
Here’s one we made earlier; So the idea of co-creating open scholarship emerged from a combination of practice, research, collaborations, reflections, influences, debate and design that went through many years of development. This is a short list of some of the underpinning ideas.
1. Brokering Learning; My first, pre-digital, insight into learning was that skilled educationalists, by which I mean people who have been working inside the education system long enough to meet Richard Sennett’s 10,000 hour rule, should use their skill, expertise and knowledge to broker the desire of learners to learn with the need of the education system to accredit them formally. This is best captured in this interview with me on learning by David Jennings
2. Collaborative digital learning literacies; When I first designed a blended-learning course (Information Systems in Society) using the internet I realised I needed to design part of the course to introduce learners to the new collaborative affordances of the tools, especially search and evaluation and discussion and moderation. More in this post on an Internet Model of Learning and Teaching.
3. Informal e-learning; Having built some learning resources in the 20th century – courses, intranets and a Community Grid for Learning, I was involved, some years later, in a research project to model Informal e-learning for community technology (UK online) centres. Research by LTRI found that centres had an evolving “life-cycle” which brought in and engaged learners by having “hooks” and being welcoming collaborative environments that used technology for learning. A workshop with ALT developed a model of informal e-learning which itemised possible new responsibilities for people involved in supporting learning. We exemplified this with an interactive training centre called Silwood Cyber-Centre.
4. Community Development Model of Learning; A key dimension of the recommendations we made about modelling informal e-learning was the idea of creating a community-responsive curriculum. We found this process of designing learning curricula to meet local needs to be a key element of social inclusion and the German Digital Integration team, for whom we prepared a presentation picked up on this.
5. Learner-generated Contexts; because of this work on informal e-learning we became part of a project to develop a web resource to solve the digital divide; Cybrarian. Whilst we recommend a Facebook for Learning back in 2002 it was rejected by the UK government and eventually key people from that team formed the Learner-generated Contexts Research Group. Our belief was that web 2.0 was going to change learning with user-generated content becoming a given. We concluded that for learning to remain meaningful in the digital future it need to anticipate this and enable us to design for a coincidence of motivations leading to agile configurations, using Rose Luckin’s “Ecology of resources” as a key design element.
5. Open Context Model of Learning; The first time we managed to synthesise our ideas into a useable resource came with the presentation/paper we prepared for the launch of the OU’s Open Learn; we called it the Open Context Model of Learning which we wrote collaboratively, John Seeley Brown called it the “most exciting thing happening in England”. This blog’s mission is to promote this concept. Our two key ideas were to, firstly rethink learning with technology without using technological terms; We did this by focussing on the related processes of cognition, meta-cognition and epistemic cognition. The second idea concerned how to design for these differing states of cognition and so we proposed the concept of the Pedagogy Andragogy Heutagogy Continuum. Thomas Cochrane used this in the redesign of the BA Product Design at Unitec, Auckland New Zealand.
6. Architecture of Participation; somewhat to the side of this ongoing development of a new post web 2.0 pedagogy we also recognised the need to redesign the institutions of learning and Nigel Ecclesfield and I have discussed this at length on the Architecture of Participation blog. We were also involved in the University Project in London from which the WikiQuals project emerged. We think that we need to design “agile institutions working across collaborative networks“.
7. Emergent Learning Model; Most recently in line with the post Bologna process desire in the EU for harmonisation of formal, non-formal and informal learning we took the opportunity to develop the Emergent Learning Model, on which the Ambient Learning City project Mosi-Along in Manchester was based. Our thinking was that the proposed harmonisation was about integrating institutions whereas we should be building on what we had learnt about learning and redesigning the processes. Consequently this argues for learner “coincidence of motivations” to come first, with content creation as important as text books, whilst accreditation becomes agile, negotiable and post-hoc (which is what WikiQuals is investigating)
Co-creating Open Scholarship; So the authors have been through a long process of designing, then re-conceptualising learning and locating it in a post-web 2.0 context and try to pick up on the best, emergent, ideas. At the core is the notion of a shared intellectual purpose that is both collaborative and socially useful. As we now have the tools to move away from hierarchical conceptions of institutions to, say, DIY models, we can deliberately design for co-creation and we can rethink the roles of those involved and the processes in which they engage. We think the purpose of co-creating open scholarship is to create open students who can themselves becomes open scholars but also be more responsive to real world problems and social needs.
Co-creating Open Scholarship; Participating in the perpetual beta of knowledge creation through the co-creation of learning creation through the co-creation of learning.
Please add comments and questions below and I will answer them.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121267.21/warc/CC-MAIN-20170423031201-00291-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 9,118
| 16
|
http://www.cinedeck.com/codec-wrapper-resolutions-nle/
|
code
|
File-based Insert Edit:
Supported Codecs, Wrappers and Resolutions & Supported NLEs
Cinedeck supports a wide variety of codec, wrappers, and resolutions for File-based Insert Edit in the most popular delivery standards. Additionally, Cinedeck’s ability to emulate an SRW VTR means that any number of NLEs can be used with Cinedeck to achieve a seamless file-based workflow.
Resolutions (supported for framerates to 60p)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191405.12/warc/CC-MAIN-20170322212951-00032-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 421
| 4
|
https://forums.steinberg.net/t/tokens-2-2-10/122576
|
code
|
Is this PDF the most current that contains token information for our project
I have just created a new master page and wanting to draw upon information from Project INFO and wrote in these tokens
and they are not drawing upon my project info?
I managed to achieve this in the previous version. Can anyone advise where I may be going wrong?
If you have manually adjusted any frames or added text or images via frames, you would have a “page override”. If, in engrave mode, you see a red triangle on the affected page, that means an override is in effect. “Remove page overrides” will fix it and then the page will adjust, but you will lose whatever edits cause the overrides to begin with.
Wonderful - thank you. I will revert!
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683708.93/warc/CC-MAIN-20220707063442-20220707093442-00240.warc.gz
|
CC-MAIN-2022-27
| 734
| 6
|
https://avesis.metu.edu.tr/yayin/739dbe9f-579a-4e50-839f-aaac943a2189/a-framework-for-machine-vision-based-on-neuro-mimetic-front-end-processing-and-clustering
|
code
|
Convolutional deep neural nets have emerged as a highly effective approach for machine vision, but there are a number of open issues regarding training (e.g., a large number of model parameters to be learned, and a number of manually tuned algorithm parameters) and interpretation (e.g., geometric interpretations of neurons at various levels of the hierarchy). In this paper, our goal is to explore alternative convolutional architectures which are easier to interpret and simpler to implement. In particular, we investigate a framework that combines a front end based on the known neuroscientific findings about the visual pathway, together with unsupervised feature extraction based on clustering. Supervised classification, using a generic radial basis function (RBF) support vector machine (SVM), is applied at the end. We obtain competitive classification results on standard image databases, beating the state of the art for NORB (uniform-normalized) and approaching it for MNIST.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00568.warc.gz
|
CC-MAIN-2023-50
| 987
| 1
|
https://www.kirupa.com/developer/mx/loading.htm
|
code
|
Techniques: Flash 5 & Flash MX
written by ilyas usal a.k.a. pom
There are two ways of loading movies with
loading into target
loading into level
Loading into target was explained by Kirupa in
this tutorial, that's why I won't talk at all about it
here. We'll see how to load into target with Flash (5 or MX,
it's the same), and then how to load dynamically with Flash
enough, but you get the idea ]
Use the following links to navigate through this tutorial:
What we need is simple : a button to
launch the load, an empty clip to load the movie in, and a
.swf file (called loaded.swf), in the same folder
as the present movie.
I suppose you know how to create a button. To create
rapidly an empty movie clip, press Ctrl+F8, name it
'container', then click the Scene 1 vignette above
the timeline. Now open the library with Ctrl+L and drag
and drop the movie clip on a new layer of your scene.
u need to give this movie clip the
instance name 'container' (without the '). On your scene,
you should have nothing but a button and an empty movie
clip. The rest is code. Select the button, and open the
Actions Panel. Click the '+', select Basic Actions
and then Load Movie. The scene should look like
[ For the
moment, we're loading into level ]
Select the line that says
loadMovieNum, and enter this : URL : loaded.swf Location : Target container Variables : Don't send
How does this work ?
URL is the movie you want to load. Here, it's loaded.swf.
When you go online, you'll have to put the absolute path
here, or it won't work.
Location : target tells Flash that you're loading into
target. The following field is the target. When you load
into level, you put the level here, here we put the target
movie clip (which is not an expression) container.
The line should read :
loadMovie ("loaded.swf", "container");
Put a button on your
scene, and give it the instance name but in the
Property Panel. Create a new layer in your movie and name
Now add this code to the
first frame of the code layer:
Explanation of the Code:
but.onPress = function
We define the behaviour of the button as a callback function
in the _root. rather than in the button itself here. We
could have done it normally, of course, with a simple on
Instead of creating an empty movie and putting it on the
scene, we do it with Actionscript. Here, we create an empty
movie clip in the _root., named container and located
Exact same line as in Flash 5.
container._x = 150 ;
We set the position of the clip.
As you can see, it's much quicker here, thanks especially to
the new createEmptyMovieClip function. AND the movie clip is
very easy to handle afterwards, as we'll see in the next
This is one of the most awaited new feature of Flash MX. So
far, it was impossible to load dynamically images and sounds
without application such as Generator or Ming. Well, it's
now possible (and easy), which makes Flash a wonderful tool
to build dynamic sites.
First of all, be careful : Flash MX allows you to load
dynamically images that are non-progressive JPEG. Save the
images you want to load under this format, in RVB mode and
everything will be just fine.
picture has been loaded dynamically ]
Actually, it's exactly the same as previously. All you
have to do is replace loaded.swf by any .jpg you want.
As I said, the other advantage of this method is that you
can very easily manipulate what you've just loaded. For
instance, if we want to make draggable the image we loaded,
all we have to do is to write in the first frame of the
dynamically with Flash MX:
It's the same method. The commands
change a bit, but the idea is similar : you create an
instance of a sound object, in which you load the sound.
This instance is what we will use to change the properties
of the sound (volume for instance).
Here again, you can load dynamically sounds under a certain
format, which happens to be MP3.
You create a new Sound object.
You load music.mp3 into mySound. The second parameter is
call isStreaming. If it's set to true, the
sound will play while loading, whereas it will wait until
it's fully loaded before playing if isStreaming is set to
There ! You have just completed this tutorial. I have
provided the source code to the animations so that you can
compare your fla to mine. Click the Download FLA link below
to download the source file:
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578626296.62/warc/CC-MAIN-20190424034609-20190424060609-00467.warc.gz
|
CC-MAIN-2019-18
| 4,303
| 92
|
https://b-ventures.net/what-is-access-control-and-why-is-it-essential/
|
code
|
Access control is a security technique that controls who in a computing system can access or use information, or what. It is a fundamental security principle that minimizes the risk to the company or organization.
What are Access Control components?
Access control at a high level is about restricting access to a resource. Whether physical or logical, every access control system has five main components:
- Authentication: The act of proving a claim, such as a person’s identity or a computer user’s. These include reviewing self-identification documents, verification a website’s authenticity with a digital certificate, or verifying login credentials against stored details.
- Authorization: The role of defining resource access rights or privileges, e.g., human resource personnel, is generally authorized to access employee records. This policy is typically formalized as guidelines for access controls in a computer system.
- Access: Once authenticated and authorized, the resource can be accessed by the person or computer.
- Manage: Managing an access control program requires the implementation and removal of a user or system authentication and authorization.
- Audit: Frequently used to uphold the principle of least privilege as part of the access control. Over time, users can end up getting access that they no longer need, e.g., when changing roles. Regular audits minimize this risk.
How does control of the access work?
Once you know what is access control, then you must also know about its working. Access control may be categorized into two types for enhancing physical security or cyber-security:
- Physical Access Control: It limits access to campuses, buildings, rooms, and physical IT properties, e.g., Proximity card for unlocking a door.
- Logical Access Control: Logical access control restricts computer network links, device files, and data connections, e.g., a username and password.
For example, a company can use an electronic control system that relies on user credentials, access card readers, intercom, audit, and reporting to track which employees have access to and have accessed a restricted data center.
Access control minimizes the risk of authorized access to physical and computer systems, forming a foundation of information security, data security, and network security.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819847.83/warc/CC-MAIN-20240424174709-20240424204709-00705.warc.gz
|
CC-MAIN-2024-18
| 2,321
| 14
|
https://neoshare.net/artificial-intelligence/prediction-of-stock-prices-and-forecasting-using-deep-learning/
|
code
|
At some point of time in our lives, we all come across this term ‘Stock Market.’ Well, so what really is it?
Stock Market, also known as the equity market or share market, is the aggregation of buyers and sellers of stocks, which represent ownership claims on businesses. Buying a share or stock i.e. investing, is basically you buying a portion of the company, meaning you own a certain percentage of that enterprise, and this if done the right way can prove to be very lucrative.
So then, how do we know how exactly to invest, in order to get good returns on your investment?
Various techniques have been used for decades such as observing the Stock’s momentum, Regression techniques to estimate a mathematical function of the stock, Sentiment Analysis, etc.
Recently, Artificial Intelligence and Machine Learning have taken the world by storm. Here, we apply Deep Learning techniques in order to make predictions of stock prices using the concept of Long Short Term Memory, in other words, an LSTM Model.
LSTM is type of Recurrent Neural Network (RNN), which was made as a solution to the Long-Term Dependencies of the classical RNN model.
Long Short Term Memory networks are capable of learning long-term dependencies. The main difference between a classical RNN and a LSTM model is that an RNN will have a single activation function such as the sigmoid() or the tanh() functions, and when the recurrences get too high, the correlation between the output and an input which was considered at a very early stage becomes minuscule.
The LSTM model however, has its repeating module consisting of various layers and usually comprise of a variety of activations at play, which helps in Long-Term Dependencies, which is observed during practical applications.
Hence, we will use a LSTM Model to compute the closing prices of the stock.
We will use the Amazon stock data from a website which provides such data in a csv form and feed that as an array input. The array will contain numbers, will be one dimensional, the numbers will be the stock closing prizes daily of AMZN stock.
This data will be input into a LSTM, the LSTM has memory holding capability as it is an implementation of an RNN (Recurring Neural Network) and takes its previous output as the next input (ideally with unity weight, or whatever it has been initialized to) in order to process information which works on data of the past and has impact on the future!
So basically, the input vector will be the training set as well as the test set (excluding 30% of the training set to check for independent functioning). The LSTM will take a 30 days’ cut-off price as input, from the starting 0ᵗʰ day, using the n inputs it will obtain the training output i.e. the closing price at the (n+1)ᵗʰ day.
The next loop will start from the 1ˢᵗ day and take n inputs from the new reference point, will obtain the training output i.e. the closing price at the (n+2)ᵗʰ day and so on.
In this process, all the variables at the n positions get assigned a weight by the LSTM’s weight update algorithm and hence tries to find the pattern i.e. the effect of the previous n days on the stock price of (n+1)ᵗʰ day. After the training set the weights and biases obtained from the algorithm, the LSTM model is tried on the very same dataset having only first given n days and try to predict the remaining days by shifting by steps of 1.
So, we first start off by importing the necessary modules and the functions which will be used in the program later.
Then we import AMZN stock data from tiingo, convert it to a csv format so that it can be easily operated on.
We now use the head keyword of the Pandas module in order to access stock data via their heading.
As we have access to it title-wise, now we can easily select the ‘Close’ title and take values from it to operate on the model.
Now if we plot that, we should be able to plot it directly with the matplotlib.pyplot’s plot function, as it is a 2D Array.
The data we obtained is the daily cut-off of AMZN’s stock price, but Amazon being a huge MNC has a massive stock market, and huge cut-off prices, hence it would be tough to train the model directly on it, hence we should scale it down to a value between (0,1). We will do this using the MinMaxScaler function.
Quickly checking our array, we see:
Now that we’ve scaled it down to an easier to operate range, the preprocessing is done and now we can focus on the train-test-split. We split it in the ratio: Train: Test::0.65:0.35
Train Data :
Test Data :
Now comes the most important part, we need to make the dataset in such a way it considers the ‘time_step’ number of days before it to make the prediction of the cut-off price.
Hence, we create a function which takes an input having the data from train_data and the test_data, in order to train the model accordingly.
The train data will help set up the parameters or weights, the test data will be used to check the implementation of the set weights to see the accuracy.
We obtain the X and Y train and test datasets.
To just make sure the dimensions of the output and input are correct.
Now it’s time to finally make the Neural Network layers, we implement a model as an object of Sequential class in order to start a Sequential NN.
We add 4 layers to it, the first 3 being LSTM’s, then the Last layer is a Dense Layer, and for the cost function, the error function used is Mean Square Error, along with optimizer as ADAM.
model.summary() gives us the summary of our sequential model.
Now all that’s left is to train the model, we set epochs to 100 to try and get the error as low as possible
We used scaling to normalize the values, now we have to get them back to the original scale, hence we use scaler inverse.
Now we can plot the train, which was the original dataset we had, and the test next to it, to see if the weights were able to get anything close the trends.
Well it looks like as the differentiation of the curve at every point would give us an idea that it is at least able to predict whether the stock price is going up or down, but it isn’t giving as a very good idea about the scale of bullish or bearish.
Hence it poses a risk for the stock market investor to trust his/her money on just the basis of static weights made from such a simple LSTM model.
Nevertheless, the predictions made by the model seem to be much better than a purely random guess, and hence can be considered as a success, so cheers!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00483.warc.gz
|
CC-MAIN-2021-21
| 6,478
| 38
|
https://student.essayapple.org/response-of-a-movie/
|
code
|
You may write on any topic you wish having to do with one of the two films:
2.Takita Yōjirō, “Departures” (2008, Japan)
Do not write a plot summary. Focus on an issue, theme, or aspect to analyze.
please write 350 words at college level, i dont want any plot summary, just analyze.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00334.warc.gz
|
CC-MAIN-2022-27
| 287
| 4
|
https://experts.mcmaster.ca/display/publication1434864
|
code
|
The cultural and technological achievements of the human species depend on complex social interactions. Nonverbal interpersonal coordination, or joint action, is a crucial element of social interaction, but the dynamics of nonverbal information flow among people are not well understood. We used joint music making in string quartets, a complex, naturalistic nonverbal behavior, as a model system. Using motion capture, we recorded body sway simultaneously in four musicians, which reflected real-time interpersonal information sharing. We used Granger causality to analyze predictive relationships among the motion time series of the players to determine the magnitude and direction of information flow among the players. We experimentally manipulated which musician was the leader (followers were not informed who was leading) and whether they could see each other, to investigate how these variables affect information flow. We found that assigned leaders exerted significantly greater influence on others and were less influenced by others compared with followers. This effect was present, whether or not they could see each other, but was enhanced with visual information, indicating that visual as well as auditory information is used in musical coordination. Importantly, performers’ ratings of the “goodness” of their performances were positively correlated with the overall degree of body sway coupling, indicating that communication through body sway reflects perceived performance success. These results confirm that information sharing in a nonverbal joint action task occurs through both auditory and visual cues and that the dynamics of information flow are affected by changing group relationships.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514046.20/warc/CC-MAIN-20210117235743-20210118025743-00547.warc.gz
|
CC-MAIN-2021-04
| 1,719
| 1
|
https://sol.sbc.org.br/index.php/sbie/article/view/22464
|
code
|
A Technological Monitoring Architecture for Academics' Mental and Physical Health
Educational institutions are moving to a hybrid model that allows onsite and online classes. Students and teachers must adapt to these changes in the teaching and learning routine, leading them to stress and anxiety moments. This work proposes an architecture to assist academics in detecting these stressful moments during daily activities. The proposal uses smart bands, machine learning algorithms, and a smartphone app for environment monitoring. The evaluation was conducted by collecting real data from heart rate spikes and enriching it using the location information to send recommendations. The results show that it is possible to identify stressful moments by respecting the academics environment by monitoring their routine.
Bamber, M. D. and Morpeth, E. (2018). Effects of mindfulness meditation on college student anxiety: a meta-analysis. Mindfulness, 10(2):203–214.
Bülow, M. W. (2022). Designing synchronous hybrid learning spaces: Challenges and opportunities. In Understanding Teaching-Learning Practice, pages 135–163. Springer International Publishing
Carroll, N. and Conboy, K. (2020). Normalising the “new normal”: Changing tech-driven work practices under pandemic time pressure. International Journal of Information Management, 55:102186
Carter, T., Pascoe, M., Bastounis, A., Morres, I. D., Callaghan, P., and Parker, A. G. (2021). The effect of physical activity on anxiety in children and young people: a systematic review and meta-analysis. Journal of Affective Disorders, 285:10–21.
Chaturvedi, K., Vishwakarma, D. K., and Singh, N. (2021). COVID-19 and its impact on education, social life and mental health of students: A survey. Children and Youth Services Review, 121:105866.
Di iorio Silva, G., Sergio, W. L., Ströele, V., and Dantas, M. A. (2021). Asap-academic support aid proposal for student recommendations. In International Conference on Advanced Information Networking and Applications (AINA-2021), pages 40–53.
Gorman, J. M. and Sloan, R. P. (2000). Heart rate variability in depressive and anxiety disorders. American heart journal, 140(4):S77–S83
Gustems-Carnicer, J., Calderón, C., and Calderón-Garrido, D. (2019). Stress, coping strategies and academic achievement in teacher education students. European Journal of Teacher Education, 42(3):375–390
Hall, G., Laddu, D. R., Phillips, S. A., Lavie, C. J., and Arena, R. (2021). A tale of two pandemics: How will covid-19 and global trends in physical inactivity and sedentary behavior affect one another? Progress in cardiovascular diseases, 64:108
Hamdan, K. M., Al-Bashaireh, A. M., Zahran, Z., Al-Daghestani, A., Samira, A.-H., and Shaheen, A. M. (2021). University students’ interaction, internet self-efficacy, self-regulation and satisfaction with online education during pandemic crises of covid-19 (sars-cov-2). International Journal of Educational Management
Hasanbasic, A., Spahic, M., Bosnjic, D., Mesic, V., Jahic, O., et al. (2019). Recognition of stress levels among students with wearable sensors. In 2019 18th International Symposium INFOTEH-JAHORINA (INFOTEH), pages 1–4. IEEE.
Kastornova, V. A. and Gerova, N. V. (2021). Use of hybrid learning in school education in france. In 2021 1st International Conference on Technology Enhanced Learning in Higher Education (TELE). IEEE
Kiran, M., Murphy, P., Monga, I., Dugan, J., and Baveja, S. S. (2015). Lambda architecture for cost-effective batch and speed big data processing. In 2015 IEEE International Conference on Big Data (Big Data), pages 2785–2792. IEEE
Klein, A. and Lehner, W. (2009). Representing data quality in sensor data streaming environments. Journal of Data and Information Quality (JDIQ), 1(2):1–28
Leite, D., Santos, H., Rodrigues, A., Monteiro, C., and Maciel, A. (2021). A hybrid learning approach for subjects on software development of automation systems, combining PBL, gamification and virtual reality. In Anais do XXXII Simpósio Brasileiro de Informática na Educação (SBIE 2021). Sociedade Brasileira de Computação - SBC
Li, Q., Li, Z., and Han, J. (2021). A hybrid learning pedagogy for surmounting the challenges of the COVID-19 pandemic in the performing arts education. Education and Information Technologies, 26(6):7635–7655
Lima, M. D. S. and Maciel, R. S. P. (2021). Practices and digital technological resources for remote education: an investigation of brazilian professor's profile. In Anais do XXXII Simpósio Brasileiro de Informática na Educação (SBIE 2021). Sociedade Brasileira de Computação - SBC
Melillo, P., Bracale, M., and Pecchia, L. (2011). Nonlinear heart rate variability features for real-life stress detection. case study: students under stress due to university examination. BioMedical Engineering OnLine, 10(1):96
Misra, R., McKean, M., West, S., and Russo, T. (2000). Academic stress of college students: Comparison of student and faculty perceptions. College Student Journal, 34(2)
Munir, A., Kansakar, P., and Khan, S. U. (2017). Ifciot: Integrated fog cloud iot: A novel architectural paradigm for the future internet of things. IEEE Consumer Electronics Magazine, 6(3):74–82
Pakhomova, T. O., Komova, O. S., Belia, V. V., Yivzhenko, Y. V., and Demidko, E. V. (2021). Transformation of the pedagogical process in higher education during the quarantine. Linguistics and Culture Review, 5(S2):215–230
Pascoe, M. C., Hetrick, S. E., and Parker, A. G. (2019). The impact of stress on students in secondary school and higher education. International Journal of Adolescence and Youth, 25(1):104–112.
Pitanga, F. J. G., Beck, C. C., and Pitanga, C. P. S. (2020). Physical activity and reducing sedentary behavior during the coronavirus pandemic. Arquivos brasileiros de cardiologia, 114:1058–1060.
Priya, A., Garg, S., and Tigga, N. P. (2020). Predicting anxiety, depression and stress in modern life using machine learning algorithms. Procedia Computer Science, 167:1258–1267
REIS, H. M., ALVARES, D., JAQUES, P. A., and ISOTANI, S. (2021). A proposal of model of emotional regulation in intelligent learning environments. Informatics in Education
Silva, G., Stroele, V., Dantas, M., and Campos, F. (2019). Hold up: Modelo de detecção e controle de emoções em ambientes acadêmicos. In Brazilian Symposium on Computers in Education (Simpósio Brasileiro de Informática na Educação-SBIE), volume 30, page 139
Sohail, N. (2013). Stress and academic performance among medical students. J Coll Physicians Surg Pak, 23(1):67–71.
Souza, A. P. d. S., Silva, M. R. M., Silva, A., Lira, P., Silva, J., Silva, M., et al. (2020). Anxiety symptoms in university professors during the covid-19 pandemic. Health Sci J, 14
Vaziri, H., Casper, W. J., Wayne, J. H., and Matthews, R. A. (2020). Changes to the work–family interface during the covid-19 pandemic: Examining predictors and implications using latent transition analysis. Journal of Applied Psychology, 105(10):1073
Verma, P. and Sood, S. K. (2018). A comprehensive framework for student stress monitoring in fog-cloud IoT environment: m-health perspective. Medical & Biological Engineering & Computing, 57(1):231–244
Vitasari, P., Wahab, M. N. A., Othman, A., Herawan, T., and Sinnadurai, S. K. (2010). The relationship between study anxiety and academic performance among engineering students. Procedia-Social and Behavioral Sciences, 8:490–497
Zaccaro, A., Piarulli, A., Laurino, M., Garbella, E., Menicucci, D., Neri, B., and Gemignani, A. (2018). How breath-control can change your life: A systematic review on psycho-physiological correlates of slow breathing. Frontiers in Human Neuroscience, 12.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00297.warc.gz
|
CC-MAIN-2023-40
| 7,706
| 34
|
http://sourceforge.net/directory/natlanguage%3Ajapanese/audience%3Aother/?sort=rating&page=4
|
code
|
OSI-Approved Open Source (95)
- GNU General Public License version 2.0 (56)
- BSD License (12)
- GNU Library or Lesser General Public License version 2.0 (9)
- Affero GNU Public License (7)
- GNU General Public License version 3.0 (7)
- Artistic License (6)
- Common Public License 1.0 (3)
- Academic Free License (2)
- Apache License V2.0 (2)
- MIT License (2)
- Mozilla Public License 1.1 (2)
- PHP License (2)
- Apple Public Source License (1)
- Common Development and Distribution License (1)
- Fair License (1)
- Creative Commons Attribution License (13)
- Public Domain (8)
- Other License (5)
- Windows (106)
- Linux (94)
- Grouping and Descriptive Categories (93)
- Mac (81)
- Modern (41)
- BSD (35)
- Android (22)
- Other Operating Systems (17)
- Audio & Video
- Business & Enterprise
- Home & Education
- Science & Engineering
- Security & Utilities
- System Administration
Values-based Document Analysis: I want to take some rudimentary Document Analysis work that I have done and make it more sophisticated and to use it to analyze (at least) all of the docuemnts of the web for (human) values priorities. The project woul
J3's mainstay is a mutlilingual dictionary program with some cool utilities - and maybe games - for an international milieu. It is written in Java and is localizable (l10n) to work in any natural language, with minimal mucking about.1 weekly downloads
PHPTrans is PHP based library that allow translation of your project output to another languages. This will open your project to be understandable for the whole world. PHPTrans can be integrated in existing projects as well as used for new projects.9 weekly downloads
Protect PC against viruses, malware and other threats43 weekly downloads
各種整数関数の計算。11 weekly downloads
international glyph-based auxlang
Great Ant Colony Maxium. Be an Ant form birth to solder. Hunt, protect, work like a team. Use 3d Game Engine Plus.
a minecraft server
This is Sharp X680x0's Human-68k command-line emulator. It's run on Windows and BSD/Linux without X680x0's ROM Image and Human68k Operating System Files. It can run a X680x0's character based program without sounds(except Beeps).6 weekly downloads
Test Studio is the suit for testers. It manages test schedule, test cases, and track the bugs. And one of the big purpose of this suit is to automate software tesing. Main focus of automation tool is web site testing at this moment.
Seguro Distributor aims the design of an efficient algorithm that allows the safe distribution of public files, i.g., executables and source code. Files are decryptable with a public key, which is generated by a private key.
mBuddy is a project with the initial goal of allowing users of mobile devices (Java, J2ME) the option to find others based on interests, diet & other factors. Objectives are to allow intelligent, conditional, secure exchange of contact info, A/V & more.
online 2d interactive world for learning languages. Users are placed within a virtual world, Users are placed there where the other users dont speak the same language. users need communicate to achieve objectives and organise life within the game1 weekly downloads
You can 'wear' any clothes on the internet 'virtually' on your body (image). And this application stores the URL where you can buy the clothes, you can share your information like your good looking shot or your ratingson the clothe over internet.
this is going to be a mmorpg player world game hopefully people will like it its going to be fantasy based
((( bonzeye, bonZeye, bonsai, bonsi, BonPsi ))) a Volumetric Botanical Visualization for File Hierarchy Interfacing Goals: platform independence and consistency across devices, personalized and familiar tree-like branching structure, custom 2D overlays
Self Service Business Intelligence, Analytics & Performance Management8 weekly downloads
You are trapped on an island. What are you going to do? You choose. It's your adventure.
Centaros CMS is a easy to use user friendly CMS that will make things easier for you when creating your site or just a plain old community. Join the revolution and try Centaros CMS today.
Hyzenthlay, named after the Efrafan Doe from Watership Down rather than the plant, is a vector-based graphics rendering, manipulation and machinima program, distributed in Open Source fromat programmed using Python. This project is currently in plan...
An English, German, Japanese, Portuguese, Russian, Chinese, Hindi & Bengali desktop OS loading from miniCD or USB key with applications and personal config files on floppy or USB key.
Screen Grabber 2.0 is an application for grabbing screen shots at even timeintervals from a movie. Useful for creating previews, and thumbnails.
Nseer Open Source ERP Software integrates the customer relationship, product design, management of production, storage, outsourcing, purchase, financing control, finance system, human resources, cooperative work and system security for the company into..
This is my Otakon 2008 AMV submission
An internationalization project
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447769.81/warc/CC-MAIN-20151124205407-00175-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 5,050
| 58
|
https://www.reddit.com/r/ReadItOnReddit/
|
code
|
I can't believe people actually don't post on the sub dedicated to recording moments when people had already read it on reddit. Like that terrorist attack that was handled by a passanger or something like that.
Also, I'm hi af, lol
Do you need facts? We got em'! The point of ReadItOnReddit is to have a large amount of facts for other redditors. REMEMBER: It's about facts, not fantasies or opinions.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864848.47/warc/CC-MAIN-20180623000334-20180623020334-00573.warc.gz
|
CC-MAIN-2018-26
| 401
| 3
|
https://community.cisco.com/t5/switching/when-was-dbd-database-descriptor-packets-sent/m-p/4561435
|
code
|
I know DBD packets would be exchanged when establishing adjacency after booting the router .
However after that ,Im not sure how DBD works .So
DBD packets was sent every 30 minutes ? I mean .
When Router exchange LSDB every 30 minutes , First Do OSPF routers exchange DBD(database descriptor ) packets ?
thank you .
My understanding is when the routers go into an exchange state, the dbd packets are sent to describe the networks that the router knows about to its neighbor. The dbd packet has the lsa headers of its lsdb and it's sent to the neighbor. I believe the master of the master/slave relationship is the one that starts the exchange. After the exchange is done and the neighbors settle into a FULL state, and it would not be sent again unless a topology change happens (a change in the lsdb) or the 30 minute window lapses.
*** Please rate all useful posts ***
The DBD packets are exchanged only during the initial database synchronization between routers. After the databases have been synchronized, new LSAs are flooded and acknowledged without the need of more DBD packets. That also goes for refreshing and reflooding LSAs.
John, the DBD packets do not describe networks. What they describe are the contents of the sender's link-state database. You have stated correctly that DBD packets contain headers of individual LSAs in the sender's database. Basically, if you think in terms of database systems, every record in a database can be uniquely identified by its primary key. If you want to compare two databases, you do not need to transfer the entire database - rather, you just take the keys and check whether both databases have the same set of keys. If any key is missi
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103344783.24/warc/CC-MAIN-20220627225823-20220628015823-00793.warc.gz
|
CC-MAIN-2022-27
| 1,689
| 9
|
https://forums.comodo.com/t/pum-hijack-startmenu/279416
|
code
|
I have two laptops: gateway NV59 and HP HDX16 that have windows 7 64 bit sp1, CIS version 5.10, Avast 7.01426, Malwarebytes 1.61.01400, Spysweeper 18.104.22.168 installed on them.
On both of these laptops I ran Malware bytes and it found the following on both:
HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Advanced|Start_ShowSearch (PUM.Hijack.StartMenu) → Bad: (0) Good: (1) → No action taken.
I have enclosed the log file.
I then did the following scans on the laptops:
Spysweeper full scan - nothing found
Avast - Full scan - nothing found
Panda Active Scan - scan finished without detection
CIS - nothing found
I would like to know why CIS did not find it and what is this? Is this a false malware. I have post to malwarebytes but to date no response. Therefore I would like to know from you if you are aware of such malware.
Your assistance is appreciated
[attachment deleted by admin]
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647459.8/warc/CC-MAIN-20230531214247-20230601004247-00073.warc.gz
|
CC-MAIN-2023-23
| 905
| 12
|
https://www.blackhatworld.com/seo/p-heres-my-forgotten-intro.78933/
|
code
|
I've been here too long for an intro probably, but hey why not... still new to black hatting, but not internet marketing. I've found this forum very useful so far. My minds a mess though, I think I need to learn to stay focused on one thing... I'm trying. Hopefully the whole outsource thing will allow me to stay focused more.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657151.48/warc/CC-MAIN-20190116093643-20190116115643-00007.warc.gz
|
CC-MAIN-2019-04
| 327
| 1
|
https://developer.x-plane.com/2014/09/linux-and-libs-how-to-get-10-30-working-again-on-linux/
|
code
|
Users with newer Ubuntu versions have reported they can’t get X-Plane to start after the update to 10.30, while it worked fine with 10.25.
Since 10.30, X-Plane links to libudev to discover devices like the Oculus Rift on Linux, and that has caused a few hiccups with some of your Linux installations out there.
No, this post is NOT about the Oculus Rift on Linux!! If you want to know the current state of Oculus Rift development, go and read this one. Though there’s a little update: At OC1, Oculus confirmed they still want to support Linux. They didn’t say when, though.
Back to libudev. X-Plane for Linux is built on a very old Linux distro, Ubuntu 10.04LTS server, which is horrendously outdated by now. But it has the advantage that binaries built on that an old version, will work with basically ANY distro out there today. Basically, the older the distro is we choose for building, the more distros users can run the binary on.
The problem with libudev0 though is, it is so old, that modern distros just don’t ship it anymore! You can only get the newer libudev1. As a work-around, you can simply sym-link libudev.so.0 to libudev.so.1 to make X-Plane find the newer version.
Starting with X-Plane 10.31, we will remove the load-time dependency on libudev again so everything is back to working like it was on 10.25.
In the future, we will load libudev dynamically based on the version the Oculus SDK requires (This is when an Oculus runtime is available for Linux, which currently isn’t).
- X-Plane 10.30: you need to create a symlink if it doesn’t work
- X-Plane 10.31: no need for a symlink because we won’t depend on libudev at all
- X-Plane 10.x: X-Plane will ONLY require libudev when you are using the Oculus Rift
As for the sym-link work-around, avid Linux user and plugin developer Bill “Sparker” has created a thread on X-Plane.org where the appropriate paths for the symlinks are posted for a variety of different Linux distros.
UPDATE: The method described here works just as well and has the benefit of limiting the change to one application only.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00513.warc.gz
|
CC-MAIN-2022-33
| 2,085
| 12
|
https://robosavvy.com/store/hackrf-one-tiny-tcxo-10mhz-0-5ppm-tcxo-module.html
|
code
|
This ultra-slim 10MHz TCXO module add-on for HackRF measures just 0.58" x 0.4"!
Custom headers were manufactured to keep the profile as low as possible for ease of installation in nearly any HackRF enclosure.
The custom 0.5PPM TCXO has similar performance to the TCXO utilized in the NESDR series of SDRs, which means ultra-low phase noise and rock-solid frequency stability in nearly any condition. Perfect for any high-accuracy experimentation with the HackRF, such as GPS-related projects.
The module could not be simpler to install. Just plug into the specified section of P22 on the HackRF circuit board and you are ready to go! There is even a simple installation diagram on the back of the module for reference.
NOTE: This item is NOT directly compatible with the standard plastic HackRF enclosure from Great Scott Gadgets!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991921.61/warc/CC-MAIN-20210516232554-20210517022554-00017.warc.gz
|
CC-MAIN-2021-21
| 830
| 5
|
http://hightechnology.in/oracle-jre-7-update-51-64-bit-higher-required-polybase/
|
code
|
Oracle JRE 7 Update 51 (64-bit) or higher is required for Polybase
In this post, I will post about an error: Oracle JRE 7 Update 51 (64-bit) or higher is required for Polybase. This error comes when i am trying to install SQL Server 2017 on Windows 10. I heard about lot of new features in SQL Server 2017. So, I downloaded developer edition media from Microsoft site and tried installing. I have selected all the features and moved forward. And, during the installation process, I faced an error and I was not able to proceed.
Note: You can skip this error, by not selecting Polybase Feature, on Feature selection window.
To solve the above error, go to following link: http://www.oracle.com/technetwork/java/javase/downloads/jre8-downloads-2133155.html and accept license agreement and download windows installer of JRE. I have downloaded the one which says Windows x64 because my operating system is windows 64 bit.
After installation now i clicked on Re-run and now error is gone.
Note : I have also installed Java SE Runtime Environment 9.0.1 and Re-run the Feature Rules, but error still persists. So i have installed Java SE Runtime Environment 8u151 and it solves my error.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203493.88/warc/CC-MAIN-20190324210143-20190324232143-00218.warc.gz
|
CC-MAIN-2019-13
| 1,181
| 6
|
https://github.com/alblue
|
code
|
Alex Blewitt alblue
- mac-zfs 33 Original/Obsolete ZFS for Mac OS X - see https://openzfsonosx.org for an up-to-date version
- com.packtpub.e4 21 Code samples for the "Eclipse Plugin Development by Example: Beginners Guide" book 978-1782160328
- com.packtpub.swift.essentials 14 Code repository for the Swift Essentials book
- com.packtpub.e4.advanced 11 Code repository for the "Advanced Eclipse plug-in development" book 978-1783287796
- objectiveclipse 8 Objective C support for Eclipse CDT
Repositories contributed to
- swagger-api/swagger-codegen 1,104 swagger-codegen contains a template-driven engine to generate client code in different languages by parsing your Swagger Resource Declaration.
- szarnekow/Java8Tutorial 0 EclipseCon 2015 Tutorial: "Embrace Java8: Functional Programming with Eclipse"
- samaaron/sonic-pi 691 The Live Coding Synth for Everyone
Contributions in the last year 223 total Sep 1, 2014 – Sep 1, 2015
Longest streak 9 days September 20 – September 28
Current streak 0 days Last contributed
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645171365.48/warc/CC-MAIN-20150827031251-00270-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 1,026
| 13
|
https://devin.org/how-to-enable-php-5-3-on-hostgator/
|
code
|
Please note that the information in this post may no longer be accurate or up to date. I recommend checking more recent posts or official documentation for the most current information on this topic. This post has not been updated and is being kept for archival purposes.
HostGator is a pretty ok host. I’ve had a hosting account with them for awhile and recently had to upgrade one of my sites to use PHP version 5.3+.
Enable PHP 5.3+ with via .htaccess
Open the site’s .htaccess file that you want to upgrade the PHP version in your favorite editor and add the following to top of the file:
# Use PHP 5.3 AddHandler application/x-httpd-php53 .php suPHP_ConfigPath /opt/php53/lib
Refresh your website and you should see in your PHP info that your are indeed now using PHP version 5.3+. Hope this helps my fellow HostGatorers!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474569.64/warc/CC-MAIN-20240224212113-20240225002113-00113.warc.gz
|
CC-MAIN-2024-10
| 830
| 6
|
https://nesler.dev/
|
code
|
Hey! I'm Zak, a full-stack engineer from Philadelphia with a passion for clean design and tidy code.
I'm addicted learning new skills, languages, and technologies; these are some that I love working with:
I have recently completed my B.S. degree in Computer Science, graduating cum laude, and I am now starting a career as a full-stack engineer.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573029.81/warc/CC-MAIN-20220817153027-20220817183027-00731.warc.gz
|
CC-MAIN-2022-33
| 345
| 3
|
https://blog.chron.com/techblog/2006/11/another-day-another-zero-day-windows-exploit/
|
code
|
Break out the exploit-targets-fully-patched-Windows-systems blog entry template.
Activating template . . .
Microsoft is investigating public reports of a vulnerability in the XMLHTTP 4.0 ActiveX Control, part of Microsoft XML Core Services 4.0 on Windows. We are aware of limited attacks that are attempting to use the reported vulnerability.
Customers who are running Windows Server 2003 and Windows Server 2003 Service Pack 1 in their default configurations, with the Enhanced Security Configuration turned on, are not affected. Customers would need to visit an attacker’s Web site to be at risk. We will continue to investigate these public reports.
Upon completion of this investigation, Microsoft will take the appropriate action to help protect our customers. A security update will be released through our monthly release process or an out-of-cycle security update will be provided, depending on customer needs.
The above link has a set of workarounds, including editing the Registry to prevent ActiveX from running in Internet Explorer; configuring IE to prompt when a site tries to run Active Scripting; and setting Internet and intranet security zones to “high”.
Secunia lists the flavors of Windows involved, which includes Windows XP Home and Professional (and, though not mentioned, presumably Windows XP Media Center Edition). You Windows 2000 stragglers are also affected.
Note that Windows Vista is not included in the advisory.
Insert don’t-surf-in-dangerous-places verbiage here.
Insert gratuitous get-a-Mac-switch-to-Linux comment here. Oh, wait . . .
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494852.95/warc/CC-MAIN-20230127001911-20230127031911-00176.warc.gz
|
CC-MAIN-2023-06
| 1,578
| 10
|
https://identi.ca/madamezou/note/HO4fij44RwiiMUi-LjiS3Q
|
code
|
Speaker: Enrico Zini
If you are interested in how the Debian project - or any project at all - can recognize all types of contribution (not only the coding ones), take a look at this talk! It will take place today at 13:30 PDT (20:30 UTC), here the list of links for the streaming:
https://wiki.debconf.org/wiki/DebConf14/Videostream/Room327 #debconf14 #debian
ghostdancer, Scorpio20 likes this.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00746.warc.gz
|
CC-MAIN-2023-14
| 395
| 4
|
https://www.freelancer.gr/projects/php/Sms-functionality-for-website/
|
code
|
I have a sms gateway api (bought package) need to implement that in website once someone registers on the website.
15 freelancers κάνουν προσφορές με μέσο όρο ₹2930 γι' αυτή τη δουλειά
hello, hope you are doing great. i have checked your requirements. i am ready to integrate now. I have integrated sms in may sites i can start work now please reply me thanks komal
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820930.11/warc/CC-MAIN-20171017072323-20171017092323-00514.warc.gz
|
CC-MAIN-2017-43
| 406
| 3
|
https://informatics-support.perkinelmer.com/hc/en-us/articles/4408235822100-Shape-shp-file-for-China-country-is-showing-garbled-characters-in-TIBCO-Spotfire-Analyst-
|
code
|
Product: TIBCO Spotfire®
Shape(.shp) file for China country is showing garbled characters in TIBCO Spotfire Analyst.
While loading the shapefile for China country map from a third party vendor i.e. GADM(https://gadm.org/download_country_v3.html) inside Spotfire Analyst, Chinese characters are displayed as garbled characters even after applying the Chinese language pack. Below is the screenshot for the reference:
To resolve this behavior, you need to set the System locale to Chinese.
To set it, follow the below steps in Windows:
1. Open the Control Panel -> Clock and Region/Region -> Administrative tab -> Change system locale... button -> set Current system locale to Chinese (Simplified, China).
2. Restart the machine and launch the Spotfire Analyst.
3. Now the Chinese characters are displayed correctly, below is the corresponding screenshot:
Note: After the computer restarts, we need to recreate the analysis file so that the Chinese characters are not garbled anymore. The existing DXP/analysis file will still have the garbled characters because that's how they were saved before setting the system locale properly. https://gadm.org/download_country_v3.html
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00352.warc.gz
|
CC-MAIN-2022-33
| 1,173
| 9
|
https://theludwigs.com/2016/08/nat-nails-one-of-the-core-premises-of-surround-io/
|
code
|
Nat, sharp as ever. One of the reasons we started Surround.io was to take advantage of the Moore’s Law driven wave of sensing technology. Since sensors are just carved out of silicon now, using the same process technology as digital electronics, sensors (cameras, accelerometers, mics, etc) are increasingly ubiquitous, cheap, and powerful. The challenge is software to process the flood of data.
tweets to reboot your thinking around the "camera" in your "phone." all these sensors are riding the silicon curve. https://t.co/LdQ96RiHiH
— Nat Brown (@natbro) August 6, 2016
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.25/warc/CC-MAIN-20210507060253-20210507090253-00099.warc.gz
|
CC-MAIN-2021-21
| 577
| 3
|
http://jobs.monster.com/v-it-q-senior-software-engineer-jobs-l-grapevine,-tx.aspx?page=2
|
code
|
Senior Software Engineer Jobs in Grapevine, Texas - Page 2
35 Grapevine, TX Senior Software Engineer jobs found on Monster.Jobs 21 to 35 of 35
Description Senior Software Engineer in Test (QA) Do you believe that the right information in the right hands leads to amazing things? Do you believe that there is always a better and simpler way of doing things and the key is finding it? Do you want to provide value to your customers and inspire their loyalty by delivering products that makes them successful? Do you love working in diverse ...
Description Thomson Reuters Tax & Accounting is one of the largest commercial software development companies in the DFW area. Our GoSystem/ONESOURCE Development group is seeking a Senior Software Engineer for its Tax Runtime Platform development team. This team provides the core compute engine used by the tax applications in the GoSystem and ONESOURCE products. We work in C++ and C# on highly mul...
Description The Sr. Software Engineer is part of a team responsible for the analysis, definition, design, construction, testing, installation, modification, and maintenance of properly engineered information systems, containing software as the major component to meet agreed business needs. Major responsibilities of this position: Writes new software, makes modifications to existing software app...
Description Position Summary: This position designs and codes software modules, supports the Project Leader and Development Manager on projects, works with clients on advanced development support issues and provides development expertise and mentoring to project team. Major Responsibilities and Percent of Work Time: Analyze, design, code and test program code for specific functionality includin...
MCG - Midwest Consulting Group Irving, TX
Description: We are looking for someone with deep design, coding, and delivery experience of multi-tier SaaS products with smart-client, web-based, plugin-based, mobile-based clients on the Microsoft C#/.NET/WPF/SQL Server technology stack. Experience building and leveraging SOAP and REST-based Web Services using Windows Communication Foundation (WCF). Experience building system modules and testin...
We are looking for a Senior Software Engineer with strong experience in the following: - Very strong experience in Java, Scala - Experience with Play framework - Experience with netty, nginx or similar opensource servers - Basic knowledge of noSQL technologies – Mongo, Cassandra...
GameStop Corp Grapevine, TX
Job Description SUMMARY Under general supervision,the Senior Software Engineer develops and supports IT retail systems and assumes responsibility for administration, documentation, support and troubleshooting of these systems. This position participates in all phases of the software implementation process including both package and custom development. Software development processes include requi...
PDS Tech Irving, TX, 75061
PDS Tech is seeking a Software Engineer for an open position. Overview: PDS is looking for a Sr Software Engineer who is interested in joining a team of highly competent software engineers focused implementing and deploying cloud infrastructure in support of data center automation of highly scalable internet services. You will have the skills and experience to create and deliver production level...
Description Senior Software Engineer in Test (QA) Do you believe that the right information in the right hands leads to amazing things? Do you believe there is always a better and simpler way of doing things and the key is finding it? Do you want to provide value to your customers and inspire their loyalty by delivering products that make them successful? Do you love the buzz and energy of wo...
Description Thomson Reuters Tax & Accounting is searching for a Senior Software Engineer to join our Corporate Software development team. This team develops robust web-based corporate tax software solutions used by Fortune 1000 corporations.Tools and methodologies used include ASP, VBScript, C#, .NET, Silverlight, XML, Infragistics, MS SQL, TFS, continuous integration, and test-driven development...
Description The Senior Software Engineer will be responsible for the design, development and testing of web applications for the ONESOURCE Workflow Tools products. Workflow Tools are a suite of products developed and managed centrally by the Shared Platform Group (SPG) within the Tax & Accounting business unit of Thomson Reuters and is an integral part of products sold to the corporate tax market...
Newt Global Irving, TX
Primary Skills: Clous,open Stack and Python Overview: Telecommunication is looking for a Sr Software Engineer who is interested in joining a team of highly competent software engineers focused implementing and deploying cloud infrastructure in support of data center automation of highly scalable internet services. You will have the skills and experience to create and deliver production level cod...
Syslogic Irving, TX
Dice Company Profile Rate Report Job Overview Company: Syslogic Title: Cloud Engineer Skills: cloud openstack python cloudstack amqp nosql lxc docker zookeeper sqlalchemy rest Date Posted: 7-18-2014 Location: Irving, TX Area Code: 214 Employ. Type: CON_W2 Pay Rate: $90 + hr Job Length: 12+ months Position ID: 671710 Dice ID: 10125386 Travel Required: none Telecommute: ...
Get new jobs by email for this search
We'll keep looking and send you new jobs that match this search.
Upload your resume and let employers find you!
It's that simple!
IT Career Tools
Sr. Software Engineer
$73,000.00 - $125,000.00
Typical Salary for Sr. Software Engineer in Grapevine
Source: Monster.com Careerbenchmarking Tool
Education / Training
Some College Coursework Completed
Source: Monster.com Careerbenchmarking Tool
Sr. Software Engineer
Develops information systems by studying operations; designing, developing, and installing software solutions; supports and develops software team.
Rate of Growth
Size of Industry in 2006:
Source: Bureau of Labor Statistics, May 2006
General Programming Skills
Determines operational feasibility by evaluating analysis, problem definition, requirements, solution development, and proposed solutions.
Improves operations by conducting systems analysis; recommending changes in policies and procedures.
Popular Senior Software Engineer Articles
Computer and Information Technology Jobs
Computer and IT workers are in high demand now and are expected to be among the most sought-after workers in the coming years. Explore IT careers now.
Best-Paying and Worst-Paying Master’s Degrees
Which master's degrees will pay you back with a big salary? Here's a look at the 10 highest-paying and five lowest-paying master's degrees.
Jobs with High Lifetime Earnings
Want to earn the most money possible over the course of your working life with just a bachelor’s degree? Consider one of these 10 lucrative jobs.
$100K Jobs with High Flexibility
Want to make $100,000 a year or more and still have some control over your work schedule? Check out one of these five jobs.
Six High-Paying Jobs for Gen Y Workers
Which jobs give Gen Y workers the best chance of earning a high salary? Online salary database PayScale took a look at its Gen Y data to find out.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657134511.2/warc/CC-MAIN-20140914011214-00250-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 7,258
| 51
|
https://www.teachoo.com/20655/4285/Question-11--Fill-in-the-blanks-/category/CBSE-Class-12-Sample-Paper-for-2024-Boards/
|
code
|
Fill in the blank:
The modem at the sender’s computer end acts as a ____________.
Answer by student:
Detailed answer by teachoo:
Let’s go through each of the options and see why they are correct or incorrect:
- a. Model: This is incorrect because a model is not related to a modem or its function. A model is a representation or imitation of something, such as a physical object, a system, or a process.
- b. Modulator: This is correct because a modulator is one of the functions of a modem. A modem acts as a modulator when it converts the digital signal from the sender’s computer into an analog signal that can be transmitted over the internet.
- c. Demodulator: This is incorrect because a demodulator is not the function of a modem at the sender’s computer end. A modem acts as a demodulator when it converts the analog signal from the internet into a digital signal that can be received by the receiver’s computer.
- d. Convertor: This is incorrect because a convertor is too vague and general to describe the function of a modem. A convertor is any device that changes one form of energy or data into another. A modem is a specific type of convertor that modulates and demodulates signals between digital and analog forms.
So, the correct answer is b. Modulator.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506329.15/warc/CC-MAIN-20230922034112-20230922064112-00864.warc.gz
|
CC-MAIN-2023-40
| 1,279
| 10
|
https://journal.alt.ac.uk/index.php/rlt/article/view/1019?articlesBySameAuthorPage=4
|
code
|
A rising number of individuals and institutions are now developing multimedia courseware, or interactive multimedia (IMM) as Rob Phillips, calls it. This book sets out to offer practical advice in projects focusing on general issues of design, development and project management. Although it includes an appendix that describes the characteristics of a number of authoring tools, it is largely intended to be independent of any particular software.
Authors contributing to Research in Learning Technology retain the copyright of their article and at the same time agree to publish their articles under the terms of the Creative Commons CC-BY 4.0 License (http://creativecommons.org/licenses/by/4.0/) allowing third parties to copy and redistribute the material in any medium or format, and to remix, transform, and build upon the material, for any purpose, even commercially, under the condition that appropriate credit is given, that a link to the license is provided, and that you indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818293.64/warc/CC-MAIN-20240422113340-20240422143340-00051.warc.gz
|
CC-MAIN-2024-18
| 1,125
| 2
|
https://play.google.com/store/apps/details?id=com.citadel.app
|
code
|
Where can I get my client number?
Good I downloded and when a window appeared to register it keeps telling error. Can someone tell me if they experienced something similar like me. I m saying someone who downloaded the app the firstctime like me. Or the problem has to be fixed
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805023.14/warc/CC-MAIN-20171118190229-20171118210229-00111.warc.gz
|
CC-MAIN-2017-47
| 277
| 2
|
https://www.gamefront.com/forums/sw-jk3-modding-mapping-and-editing/mod-for-weapon-on-his-back?page=1
|
code
|
where can I find a download for weapons on the move can contribute to how ever said my english is bad
If your meaning holstered on the playermodels back, as far as I know Moviebattles II and OJP have it, not sure if any other mods do.
I have searched too for this mod, but as far as i learned, it cannot be done for sp only mp since they have the source code.
Well, i made one a while ago using JKA Unlimited, I only got the files out. But you can make one from OJP and if you use it in MP only YOU will see what weapons YOU have Holstred. Same with SP you cant see enemy weapons holstred. Sadly I lost my mod in a hard drive crash.
i once saw a mod that had a minor effect for weapon holstering in sp, but it was awhile ago, sorry i couldnt be much more helpful then that it happens that i am looking for the same mod , good luck:thumbsup::saber:
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627997801.20/warc/CC-MAIN-20190616062650-20190616084650-00232.warc.gz
|
CC-MAIN-2019-26
| 847
| 5
|
https://cenit.io/cross_shared_collection?utf8=%E2%9C%93&query=square
|
code
|
Description of Instagram RESTful API. Current limitations: * Instagram service does not support [cross origin headers](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS) for security reasons, therefore it is not possible to use Swagger UI and make API calls directly from browser. * Modification API requests (`POST`, `DELETE`) require additional security [scopes](https://instagram.com/developer/authorization/) that are available for Apps [created on or after Nov 17, 2015](http://instagram.com/developer/review/) and started in [Sandbox Mode](http://instagram.com/developer/sandbox/). * Consider the [Instagram limitations](https://instagram.com/developer/limits/) for API calls that depends on App Mode. **Warning:** For Apps [created on or after Nov 17, 2015](http://instagram.com/developer/changelog/) API responses containing media objects no longer return the `data` field in `comments` and `likes` nodes. Last update: 2015-11-28
The SimplyRETS API is an exciting step towards making it easier for developers and real estate agents to build something awesome with real estate data! The documentation below makes live requests to our API using the trial data. To get set up with the API using live MLS data, you must have RETS credentials from your MLS, which you can then use to create an app with SimplyRETS. For more information on that process, please see our [FAQ](https://simplyrets.com/faq), [Getting Started](https://simplyrets.com/blog/getting-set-up.html) page, or [contact us](https://simplyrets.com/\#home-contact). Below you'll find the API endpoints, query parameters, response bodies, and other information about using the SimplyRETS API. You can run queries by clicking the 'Try it Out' button at the bottom of each section. ### Authentication The SimplyRETS API uses Basic Authentication. When you create an app, you'll get a set of API credentials to access your listings. If you're trying out the test data, you can use `simplyrets:simplyrets` for connecting to the API. ### Media Types The SimplyRETS API uses the `Accept` header to allow clients to control media types (content versions). We maintain backwards compatibility with API clients by allowing them to specify a content version. We highly recommend setting and explicity media type when your application reaches production. Both the structure and content of our API response bodies is subject to change so we can add new features while respecting the stability of applications which have already been developed. To always use the latest SimplyRETS content version, simply use `application/json` in your application `Accept` header. If you want to pin your clients media type to a specific version, you can use the vendor-specific SimplyRETS media type, e.g. `application/vnd.simplyrets-v0.1+json"` To view all valid content-types for making an `OPTIONS`, make a request to the SimplyRETS api root `curl -XOPTIONS -u simplyrets:simplyrets https://api.simplyrets.com/` The default media types used in our API responses may change in the future. If you're building an application and care about the stability of the API, be sure to request a specific media type in the Accept header as shown in the examples below. The wordpress plugin automatically sets the `Accept` header for the compatible SimplyRETS media types. ### Pagination There a few pieces of useful information about each request stored in the HTTP Headers: - `X-Total-Count` shows you the total amount of listings that match your current query. - `Link` contains pre-built pagination links for accessing the next 'page' of listings that match your query. Read more about that [here](https://simplyrets.com/blog/api-pagination.html).
Manages your Stackdriver monitoring data and configurations. Projects must be associated with a Stackdriver account, except for the following methods: [monitoredResourceDescriptors.list](v3/projects.monitoredResourceDescriptors/list), [monitoredResourceDescriptors.get](v3/projects.monitoredResourceDescriptors/get), [metricDescriptors.list](v3/projects.metricDescriptors/list), [metricDescriptors.get](v3/projects.metricDescriptors/get), and [timeSeries.list](v3/projects.timeSeries/list).
#Documentation This is the documentation for the partner endpoint of the BigOven Recipe and Grocery List API. The update brings with it Swagger-based documentation. [Swagger](http://swagger.io) is an emerging standard for describing REST-based APIs, and with this Swagger-compliant endpoint (above), you can make ready-to-go interface libraries for your code via [swagger-codegen](https://github.com/swagger-api/swagger-codegen). For instance, it's easy to generate libraries for Node.js, Java, Ruby, ASP.NET MVC, jQuery, php and more! You can also try out the endpoint calls with your own api_key right here on this page. Be sure to enter your api_key above to use the "Try it out!" buttons on this page. ##Start Here Developers new to the BigOven API should start with this version, not with the legacy API. We'll be making improvements to this API over time, and doing only bug fixes on the v1 API. To pretend you're a BigOven user (for instance, to get your recently viewed recipes or your grocery list), you need to pass in Basic Authentication information in the header, just as with the v1 API. We do now require that you make all calls via https. You need to pass your api_key in with every call, though this can now be done on the header (send a request header "X-BigOven-API-Key" set to your api_key value, e.g., Request["X-BigOven-API-Key"]="your-key-here".) ##Migration Notes For existing partners, we encourage you to [migrate](http://api2.bigoven.com), and while at this writing we have no hard-and-fast termination date for the v1 API, we strongly prefer that you migrate by January 1, 2017. While the changes aren't overly complex, there are several breaking changes, including refactoring of recipe search and results and removal of support for XML. This is not a simply plug-and-play replacement to the v1 API. With respect to an exclusive focus on JSON, the world has spoken, and it prefers JSON for REST-based API's. We've taken numerous steps to refactor the API to make it more REST-compliant. Note that this v2 API will be the preferred API from this point onward, so we encourage developers to migrate to this new format. We have put together some [migration notes](/web/documentation/migration-to-v2) that we encourage you to read carefully. ##Photos See our [photos documentation](http://api2.bigoven.com/web/documentation/recipe-images). For more information on usage of this API, including features, pricing, rate limits, terms and conditions, please visit the [BigOven API website](http://api2.bigoven.com).
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491998.11/warc/CC-MAIN-20200328134227-20200328164227-00266.warc.gz
|
CC-MAIN-2020-16
| 6,729
| 4
|
https://www.vigienature-ecole.fr/en/node/310
|
code
|
Our seaweed and mollusc quizzes!
Practice with these quizzes to be ready when you have to identify the insects in the protocol!
To help you, remember to use the identification key which is available here.
Train yourself to recognize brown algae:
And become a mollusc identification expert with these quizzes:
A pdf version of this quiz is available:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475757.50/warc/CC-MAIN-20240302052634-20240302082634-00789.warc.gz
|
CC-MAIN-2024-10
| 349
| 6
|
https://blog.geekulcha.com/rhok-pretoria-2013/
|
code
|
For those of you who still don’t know what RHoK is all about, Random Hacks of Kindness (RHoK) is a global initiative that is primarily aimed at merging a wide community of innovators to collectively to make the world a better place via socially relevant computing.
During the past weekend, two RHoK hackathons took place in Johannesburg and Pretoria simultaneously, and I was privileged to be one of the RHoK Pretoria student participants.
So what exactly did we do?
Shortly after arrival on Saturday, students, interns and a few IT experts in the industry were seated and presented with 5 challenges. The audience was then encouraged to take part in any one of the challenges at their own discretion with the aid of the experts present.
The top three challenges in summary
Challenge 1 : ”Yehla”, Presented by Tiyani Nghonyama from Geekulcha
The task here was to create a mobile app that will alert taxi users of their point of departure, to avoid missing stops and getting lost.
Challenge 2 : “Ajira”, Presented by Dr Jabu
Hackers were to design and develop a backend and client(s) app that would allow a micro employer to post jobs. The backend system should also be able to rate uWorkers after each submission.
Challenge 3 : School Library System, Presented by Jay from the African school of excellence.
The task entailed upgrading an already existing library system to keep track of books going in and out of the school’s library.
The rest of the challenges
Challenge 4 : @RobohandSA, Presented by Quentin Harley from House4Hack
Interested parties were to create a system that will be used by doctors to enter hand measurements of patients who lose their hands in fatalities and then use a 3D Printer to print the robot hand..
Challenge 5: Medical stock out, Presented by Dr Jabu from UNISA
This system should be able to alert hospitals/ clinics of soon coming medication shortages.
Challenge 6 : Donate-My-Stuff, Presented by Ishmael Makitla from the CSIR
Attendees were presented with a task to create a web based application that will act as a mediator between donors and people in need. Donors should be able to make offers and view requests made by potential beneficiaries.
We were given about 36 hours (meaning NO SLEEP!) to code the solutions. Sounds fun right? Not only do students get the opportunity to platform their coding skills, but non-coders, like me, get to draw from fellow group members. For one, I was introduced to JSON for the first time (Side note: I feel I need to broadcast how good I turned out to be at it). Another interesting aspect of this event is getting to take part in uplifting, thought shifting conversations with like-minded people during “down time”. A great way to connect, gather inspiration and make friends that will yield good fruit in the future.
Apart from all of the above, the idea of utilizing what you know to be a small part of something bigger than you to help and advance humanity is in excess of humbling. So, I’d like to urge all IT enthusiasts; students, employees and employers to look out for the next RHoK event and avail themselves. Be part of the revolution. Hack for humanity!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039568689.89/warc/CC-MAIN-20210423070953-20210423100953-00242.warc.gz
|
CC-MAIN-2021-17
| 3,159
| 20
|
https://wade.be/2014/07/23/remote-desktop-connection-logon-to-black-screen/
|
code
|
Remote Desktop Connection Logon to Black Screen
This is a problem that has plagued me for a while now, and only two things seem to fix it.
The issue is that when using a Windows 7 RDP client (mstsc) to connect into another Windows 7 system, you sometimes find you’re left with nothing but a black screen.
From this black screen you can’t do anything. The cursor seems to respond, but you can’t seem to bring up anything useful, nothing works.
So what’s the solution?
I found that if you hit CTRL + ALT + END inside the RDP connection this will bring up the same menu as when you hit CTRL + ALT + DELETE on a normal system.
When this menu appears, you can press cancel or “Start Task Manager”, which should bring it back to life.
If that doesn’t work, try disconnecting and reconnecting.
Failing that, I’ve found that sometimes, in “Options”, if you untick the “Persistent bitmap caching” under the “Experience” tab you’ll find yourself being able to connect without a blank screen.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653631.71/warc/CC-MAIN-20230607074914-20230607104914-00159.warc.gz
|
CC-MAIN-2023-23
| 1,011
| 9
|
http://superuser.com/questions/225253/transfer-of-ownership-of-windows-7
|
code
|
I am thinking of purchasing a copy of Windows 7 via either ebay or GumTree. I am unsure as to how the product key works.
A close friend of mine is warning me against buying it from ebay as he is suggesting that once it has been used, the operating system registers itself on microsoft servers using the serial number of the motherboard of the system where it has been installed. This means once installed on one machine you wont be able to install it on another machine.
Now i am struggling to believe that an operating system can only be installed on one machine. Can someone please explain exactly how this works. I can see a lot of copies being sold on Ebay which are used. I used the 'Ask a question' option and the majority of the users are saying that i should be able to use it.
If someone buys Windows 7 from the shop, installs it on his PC but then decides that he wants to sell it can he not sell it? Will the person buying it not be able to use it? Does the person selling it have to somehow unregister it first? What do i need to look out for if buying it from Ebay?
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827077.13/warc/CC-MAIN-20160723071027-00040-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 1,078
| 4
|
http://foob.ar/home
|
code
|
Your today's Wiki-Acronym definition:
At home we are proud to be using:
Ruby on Rails
Ruby and its outstanding framework, Rails, is the predilect language I've used on my projects and which I prefer to work on.
Git and Github
The Version Control System is maybe one of the more important tools that any Developer should learn.
I like to test and deploy my apps on Heroku. Its free tier is always enough for any hobbyist project and can scale to production-sized apps easily.
Some of my projects:
This fancy bot just remind you to clean the air conditioner filter each month. It is written in Ruby and deployed on Heroku.
This is a colaborative and Open Source project to teach and learn the Git basis. It is deployed on Netlify.
RapaNui App consists in a back office system for on-site office work and an Api to check packages statuses. Work on an already in production Ruby on Rails project, some fixes and new functions added.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816465.91/warc/CC-MAIN-20240412225756-20240413015756-00280.warc.gz
|
CC-MAIN-2024-18
| 928
| 11
|
https://onlycode.nz/example-page-frame-structure
|
code
|
A TPageFrame defines the basic band structure for a printed page, including fixed and dynamic bands (see the Band Structure example for the basic layout). It provides a single "code loop" to cycle through the top-level "rows" of a report.
Execute a page frame by calling TPageFrame.Execute(ReportWriter).
An additional sub-frame can be linked to it via the TPageFrame.DetailFrame property (either a TMasterFrame or TDetailFrame component) to provide nested code loops and bands. You can also independently execute any number of nested frames in code.
Studying this diagramme and understanding it well can be the key to effectively using a TPageFrame.
Control Flow for a TPageFrame
When executed, a TPageFrame component cycles through a sequence of band output events: OnReportHeader, OnLetterhead, OnPageHeader, OnBodyTitle, OnBodyHeader, OnGroupHeader, OnRow, OnGroupFooter, OnBodyFooter, OnPageFooter, OnLetterfoot, OnRemittance and OnReportFooter. Each output event is preceded by a corresponding automatic band setup/definition process where band font and tab settings are applied. It includes a repeated loop which continues to cycle until invalidated in code by setting event parameter Valid := False within the loop.
Appropriate before/after events allow you to manipulate the process and add data access or other report infrastructure code.
The flow of events is represented in this diagramme:
- The "Report" output events (OnReportHeader and OnReportFooter) are outside the fixed band page structure. Anything output in these events is always separated from the rest of the report by a page break. The pages so used are considered to be entirely outside the "report proper", and do not bear the page headers or page footers etc that every other page does. These events are intended for a report "cover sheet" or "follow-up sheet" (or anything else you wish to include in this logical position). Naturally, you could execute another frame component within these events if you wish...
- The "Page" output events (OnLetterhead, OnPageHeader, OnBodyTitle, OnPageFooter, OnLetterfoot) fire for every new page in the report (except those included in the ReportHeader and ReportFooter bands). The OnBodyTitle band is a special case in that it is a dynamic band (it does not have a fixed line count or height), but is included with the fixed bands because it is locked between these bands and the following dynamic body bands. BodyTitles are, for example, column headers that you want to appear at a fixed point (just below the fixed bands) at the top of each page. The OnRemittance band is also a special case band which only fires on demand (by explicitly enabling the band with a call to EnableRemittance).
- The "Body" output events (OnBodyHeader and OnBodyFooter) start and conclude output of the "dynamic body bands" of the report (the repeated loop part of the frame).
- The "Group loop" output events cycle in a repeated loop until Valid := False. A group loosely constitutes a single report row and its associated row details (if any). The latter triggers a sub-frame (TMasterFrame or TDetailFrame) linked via the property TPageFrame.DetailFrame. However, you are not bound by this underlying frame structure and can execute any other frame(s) you wish at any point.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038065903.7/warc/CC-MAIN-20210411233715-20210412023715-00310.warc.gz
|
CC-MAIN-2021-17
| 3,276
| 12
|
http://ieeexplore.ieee.org/xpl/abstractKeywords.jsp?reload=true&arnumber=5230512&contentType=Conference+Publications
|
code
|
Spatio-temporal salient features are widely being used for compact representation of objects and motions in video, especially for event and action recognition. The existing feature extraction methods have two main problems: First, they work in batch mode and mostly use Gaussian (linear) scale-space filtering for multi-scale feature extraction. This linear filtering causes the blurring of the edges and salient motions which should be preserved for robust feature extraction. Second, the environmental motion and ego disturbances (e.g., camera shake) are not usually differentiated. These problems result in the detection of false features no matter which saliency criteria is used. To address these problems, we developed a non-linear (scale-space) filtering approach which prevents both spatial and temporal dislocations. This model can provide a non-linear counterpart of the Laplacian of Gaussian to form the conceptual structure maps from which multi-scale spatio-temporal salient features are extracted. Preliminary evaluation shows promising result with false detection being removed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396875.58/warc/CC-MAIN-20160624154956-00152-ip-10-164-35-72.ec2.internal.warc.gz
|
CC-MAIN-2016-26
| 1,093
| 1
|
http://www.webassist.com/forums/post.php?pid=24861
|
code
|
I'm not saying it's not right for your situation. I just pointed out the other side because you are quick to point to competitor's products. I think it's only fair to put things in perspective and let everyone choose what works best for them.
The only time I use a company's support forum to point to another company's product is when they don't directly compete. That's just me. There are plenty of mailing list out there to get recommendations.
Also I disagree with your premise that PHP/mySQL is only for large sites like Amazon. Just because it's capable of doing large sites doesn't mean you shouldn't take advantage of it for small sites.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547584203540.82/warc/CC-MAIN-20190123064911-20190123090911-00320.warc.gz
|
CC-MAIN-2019-04
| 644
| 3
|
https://cloudyr.github.io/contributing/
|
code
|
what the cloudyr project does
The cloudyr project aims to provide cloud computing functionality for R. To achieve this, the primary goal is to create new, easy-to-use packages in the following areas:
- Clients to manage cloud computing infrastructure. Current examples of this in the cloudyr suite include existing packages for Amazon Web Services resources. Potential contributions could create analogous packages for Google Cloud Compute, Microsoft Azure, etc.
- Clients to manage cloud storage. Current examples of this in the cloudyr suite include aws.s3.
- Clients for cloud-based package development and testing. Current examples in the cloudyr suite include travisci.
- Packages to create and retrieve data from online survey and experimental platforms. Current examples include Rmonkey.
- Packages to manage human crowdsourcing tasks. Current examples include MTurkR.
- Any other package that aims to used cloud-based resources to perform data analysis and other computational tasks from R.
Some areas that are generally off-topic for the cloudyr project include those that only retrieve open data, such as scientific data sources or government data. These types of packages would be more appropriate for rOpenSci or rOpenGov, respectively. Similarly, packages for managing cloud services (e.g., social media accounts) are probably outside the scope of cloudyr. Finally, packages implementing purely local procedures (e.g., statistical algorithms, etc.) without an obvious cloud-based connection are probably not a good fit for cloudyr.
contributing to the cloudyr project
Contributions to the cloudyr project are welcome from everyone anywhere in the world. Contributions can be made to existing packages and in the form of new packages that fit within the scope defined above.
Contributions to existing packages are best made in the form of GitHub issues and pull requests on the respective package pages. In lieu of that, emails to the appropriate package maintainer is a fallback.
Contributions in the form of new packages are also very welcome! There is currently no formal onboarding process for packages. If you would like to contribute a package, please do the following:
- Look over the parameters at the top of this page to ensure the package is within the project’s scope
- Check the packages page to make sure a package doesn’t already exist (and you may also want to check the webservices task view to see if anyone else has developed a similar package.
- Propose the package idea via a GitHub issue, with links to an existing repo if you have already drafted the package.
- Look over the cloudyr style guide to ensure the package is generally compliant. These are not strict rules. A template package is available, which provides a skeleton that complies with this guide if you are starting from scratch.
If your package is accepted, you can host it under the cloudyr project GitHub organization, while retaining full control over the code, with administration rights to a GitHub “team” for the package, so you can invite others to contribute to the package, as well. The transfer process requires adding a cloudyr organization “owner” (leeper) to the repo so it can be transferred to the cloudyr GitHub organization. Some modifications may be made to enable continuous integration testing, branding on the package’s README, and deployment to the cloudyr project drat. The package will then listed on the cloudyr project package page.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00343.warc.gz
|
CC-MAIN-2022-49
| 3,470
| 18
|
https://photosynth.net/discussion.aspx?cat=01b6f15f-42eb-49cb-a221-ed56615e1c47&dis=ad4774b7-5aa6-4d6b-853e-9034e9398dfd
|
code
|
Do you have an idea for an awesome feature we should add… or hate the way we’re currently doing something? Share your ideas and suggestions here.
How cool would it be to add audio to each image. I have a pair of binaural microphones and it would be cool to be able to hear sounds from a particular place.
That would make for an interesting experience... although if exploring quickly through a synth could make for quite a crazy experience. Adding audio to a synth has been mentioned before in different ways, hadn't thought much of auido per image though.
Maybe a binaural setup isn't neccessary. If you could attach audio to a few photos, then in the viewing experience Photosynth could reconstruct the audio as if the sound is coming from that photo, so as you move closer to it it gets louder. It could also do some binaural positioning of the sounds.
Is that the sort of thing you are thinking of? How about narration?
Loading ambient noise from each photo and playing them all back simultaneously with dynamic spatial arrangement is an incredibly interesting take on the audio dream, although that would require that they all loop well to make for a decent experience.
My thoughts had always been more along the line of a single audio comment for specific photos... an extension of the current Feature system or, on the other hand, a guided audio tour, more akin to Worldwide Telescope or Photo Story 3, but I like the above suggestion very much as well.
Darius, being that it still takes Seadragon a bit of time on the average broadband connection to load in the photo, I don't know that the audio experience would be all that disjointed when whipping through a synth. Presubably all the image tiles for the current view would need to be downloaded before pulling down the audio, in the case of one audio clip at a time.
Some sort of control over the timing of when a narration on a photo began would make for a better authoring story as well. You could then have people choose to delay commentary a given amount of time after the current view loads. In that sense, it would be much more organic and you could determine a natural rhythm and sense of personality, rather than having someone blurt out commentary the instant that you arrive at a new photo.
This whole topic sparks something that's been on my mind for a while as well. It's been mentioned long ago that it would be interesting to provide audience members to construct their own tour of our synths. A subset of this is feedback on specific photos or even specific parts of photos, whether that be in text or audio form. A common explanation for Photosynth is '3D Photo Album'. It seems to me, then, that the community comments should follow the photos into 3D, rather than being pinned far away in a list below the group of photos.
*...provide audience members A WAY to construct their own...*
Also, on the delayed audio cue train-of-thought, you would enable curious creations akin to hidden tracks on the end of a CD which have great potential for surprising audence members. :)
Hi this is DublinGal, having problems signing in......anyway, I think the use of audio in any way would really enhance the experience. Like some of you mentioned, even some narration. I'm thinking of how I could include some audio so I might just make several synths and embed seperate audio files into each html page so there can be different audio from different views etc. Don't know if I'm making sense!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701999715.75/warc/CC-MAIN-20160205195319-00074-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 3,463
| 13
|
https://www.computerforums.org/forums/classifieds-buy-sell-trade/little-parts-120217.html
|
code
|
Just wondering if anyone's interested? Selling these as a whole or individual...
80mm air duct - Neat little thing I got recently, but just wouldn't fit in my case without some major modding I didn't care to do. Brand new...just tinkered with it for a while than got fed up and threw it in my closet for a few months ;P Been in box, etc etc...
SD Card Reader - Brand New, USB 2.0 and 1.1 compatable. I got a pair of these, works great. Extention cable too.
I might be getting a fan controller too. I'll post that too if I have no need for it.
Name a price and I might go with it... Just that and shipping. If you want pics, just ask and I'll post them.
Ah, and if anyone gets anything from me and wants some little parts, I got some I'll give away. Older stuff. I got tons of floppy drives even a 2.8M, if there's a way to use it on a PC(from an IBM PS/2.) I also got some old P1 systems if anyone wants to pay to have them shipped.
Desktop: Athlon 64 3700, 1024M RAM, GF 6200 TC 256M, 2x 80G, 2x DVD-RW
Laptop: Sempron 2800, 512M RAM, Unichrome, 60G, DVD-RW
Macintosh:G4 1.33GHz, 512M RAM, Radeon 9200 32M, 40G, DVD/CDRW
They get me from 0 to 1...
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00422.warc.gz
|
CC-MAIN-2020-16
| 1,148
| 10
|
https://evostream.com/dwqa-answer/answer-for-hls-streaming-using-evostream-8/
|
code
|
As per your suggestion, we have tried BitDash Player to play DASH files. But here it throwing exception saying you must have "Access-Control-Allow-Origin" header. How can i enable CORS for evostream ?
This information box about the author only appears if the author has biographical information. Otherwise there is not author box shown. Follow YOOtheme on Twitter or read the blog.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00338.warc.gz
|
CC-MAIN-2023-50
| 381
| 2
|
https://www.arxiv-vanity.com/papers/2005.09120/
|
code
|
Domain Adaptive Relational Reasoning for 3D Multi-Organ Segmentation
In this paper, we present a novel unsupervised domain adaptation (UDA) method, named Domain Adaptive Relational Reasoning (DARR), to generalize 3D multi-organ segmentation models to medical data collected from different scanners and/or protocols (domains). Our method is inspired by the fact that the spatial relationship between internal structures in medical images is relatively fixed, e.g., a spleen is always located at the tail of a pancreas, which serves as a latent variable to transfer the knowledge shared across multiple domains. We formulate the spatial relationship by solving a jigsaw puzzle task, i.e., recovering a CT scan from its shuffled patches, and jointly train it with the organ segmentation task. To guarantee the transferability of the learned spatial relationship to multiple domains, we additionally introduce two schemes: 1) Employing a super-resolution network also jointly trained with the segmentation model to standardize medical images from different domain to a certain spatial resolution; 2) Adapting the spatial relationship for a test image by test-time jigsaw puzzle training. Experimental results show that our method improves the performance by DSC on target datasets on average without using any data from the target domain during training.
Keywords:Unsupervised domain adaptation Relational reasoning Multi-organ segmentation.
Multi-organ segmentation in medical images, e.g., CT scans, is a crucially important step for many clinical applications such as computer-aided diagnosis of abdominal disease. With the surge of deep convolutional neural networks (CNN), intensive studies of automatic segmentation methods have been proposed. But more evidence pointed out the problem of performance degradation when transferring domains e.g., testing and training data come from different CT scanners or suffer from a high deviation of scanning protocols between clinical sites. For example, training a well-known V-Net on our in-house dataset and directly testing it on a public MSD spleen dataset yields 43.12% performance drop in terms of DSC. The reason is that their reconstruction and acquisition parameters are different, e.g., pitch/table speeds are 0.55-0.65/25.0-32.1 for the in-house dataset, and 0.984–1.375/39.37–27.50 for MSD spleen dataset. In the context of large-scale applications, generalization capability to deal with scans acquired with different scanners or protocols (i.e., different domains) as compared to the training data is desirable for machine learning models when deploying to real-world conditions.
In this paper, we focus on unsupervised domain adaptation (UDA) for deviating acquisition scanners/protocols in 3D abdominal multi-organ segmentation on CT scans. We propose a domain adaptive relational reasoning (DARR) by fully leveraging the organ location information. More concretely, the relative locations of organs remain stable in medical images . As an example shown in Fig. 1, we calculate the Jensen–Shannon divergence matrix of the location probability distribution of the 8 organs between Synapse dataset and our dataset. The co-occurrence of the same organ appearing in the same location is high. Such relational configuration is deemed as weak cues for segmentation task, which is easier to learn, and thus better in transfer . We aim at learning the spatial relationship of organs via recovering a CT scan from its shuffled patches, a.k.a, solving jigsaw puzzles. But, unlike previous methods which simply treated solving jigsaw puzzles as a regularizer in main tasks to mitigate the spatial correlation issue , we also solve the jigsaw puzzle problem at test-time, based on one single test case presented. This can help us learn to adapt to a new target domain since the unlabeled test case provides us a hint about the distribution where it was drawn. It is worthwhile mentioning that this test-time relational reasoning process enables one model to adapt all.
To better learn the correlation of organs, we must guarantee that data from different domains have the same spatial resolution. Towards this end, we further propose a super-resolution network to jointly train with the segmentation network and the jigsaw puzzles, which can obtain high-resolution output from its low-resolution version. Since there exists a multiplicity of solutions for a given low-resolution voxel, we will show in the supplementary material that our super-resolution network has the capacity to learn better low-level features, i.e., the deviation of voxels’ Hounsfield Units within an organ is reduced, and that of inter-organ is enlarged.
Our proposed DARR performs test-time relative position training, which enjoys the following benefits: (1) establishing a naturally existed common constraint in medical images, so that it can easily adapt to unknown domains; (2) mapping data from different domain sites to the same spatial resolution and encouraging a more robust low-level feature for segmenting organs and learning organ relation; (3) free of re-training the network on source domain when adapting to new domains; and (4) outperforming baseline methods by a large margin, e.g., with even over improvement in terms of mean DSC when adapting our model to multiple target datasets.
2 Related Work
(Unsupervised) Domain adaptation (UDA) has recently gained considerable interests in computer vision primarily for classification , detection and semantic segmentation . A key principle of unsupervised domain adaptation is to learn domain invariant features by minimizing cross-domain differences either in feature-level or image-level . Inspired by the success of CycleGAN in unpaired image-to-image translation, many recent image adaptation methods are built upon modified CycleGAN frameworks to mitigate the impact of domain gap . CyCADA poses unsupervised domain adaptation as style transfer with adversarial learning to close the gap in appearance between the source and target domains. Similar adversarial learning techniques are applied in cross-modality medical data . SIFA is among the latest GAN-based methods dedicated to adapt MR/CT cardiac and multi-organ segmentation networks, which conducts both image-level and feature-level adaptations with a shared encoder structure.
More recently, there have been multiple self-training/pseudo-label based methods for unsupervised domain adaptation . proposes a semi-supervised 3d abdominal multi-organ segmentation by first training a teacher model in source dataset in a fully-supervised manner and compute the pseudo-labels on the target dataset. Then a student model is trained on the union of both datasets. However, domain shift is not delicately addressed in this method, thus it hampers its usage on domain adaptation tasks. Another important class for unsupervised domain adaptation is based on self-supervised learning . The key challenge for self-supervised learning is identifying a suitable self supervision task. Patch relative positions , local context , color , jigsaw puzzles and even recognizing scans of the same patient have been used in self-supervised learning. In this paper, we aim at learning the spatial relationship of organs via recovering a CT scan from its shuffled patches.
3.0.1 Problem definition.
Our goal is to develop a framework that enables machine learning trained on one source domain to adapt to multiple target domains during testing. An overview of our architecture is shown in Fig. 2. Our framework consists of three components, a super-resolution network that upsamples low-resolution images to high resolution, a standard V-Net that performs the segmentation task and a puzzle module to learn the spatial relations among patches. We adopt the generator network from with the subpixel upsampling method as our super-resolution module and we will show the details of the puzzle module in the following section.
We first define some notations in the paper. We parametrize the super-resolution network as , the encoder part of V-Net as - which is shared by the puzzle module, the decoder part of V-Net as and the puzzle module as . Suppose we are partitioning an image I into patches where each patch can be denoted as . The index , and indicate the original relative location from which the patches are cropped. Then each patch can be associated with a unique label following the row-major policy that serves as the ground truth in the jigsaw puzzle task. We use to indicate a random permutation of the patch set , and the label for each patch is also permutated the same way, denoted as , where .
3.0.2 Training stage.
Our network can be trained end-to-end, with one loss from each module. To train the super-resolution network, we squeeze the image patch to a smaller size and minimize a mean square loss which makes the output patch as close as possible to the original patch. The segmentation network produces a cross-entropy loss , where is the ground truth segmentation mask. The third loss, , is given by the puzzle task that classifies the correct location of the patches. Note that the former two losses and are only trained on the training dataset, while the puzzle loss can be utilized on both training and testing set because it does not require any manually labeled data.
Overall, we can obtain the optimal model through
where and are loss weights.
3.0.3 Adaptive testing.
During testing, our goal is to adapt our feature extractor to the target domain, or a target image, through optimizing the self-supervised learning task. By minimizing the puzzle loss on the testing data for a few iterations, the feature extractor is able to reason about the spatial relations among organs, and thus improving the performance on the unseen target domain.
3.0.4 Jigsaw puzzle solver.
Medical images share a strong spatial relationship that organs are organized in specific locations inside the body with similar relative scales. With this prior knowledge, it is natural to investigate a self-supervised learning task that solves for the relative locations given arbitrarily cropped 3D patches. We select a Jigsaw Puzzle Solver in our case, as it has been proven to be helpful in initializing 3D segmentation models . During training, the permuted set of patches are passed through the super-resolution network and the shared feature extractor (the encoder part) of V-Net to generate corresponding features, denoted as . Following the previous work , all features are then flattened into 1D vectors and concatenated together according to the permuted order, forming a long vector. After two fully-connected layers, the puzzle module outputs a vector in size , which can be reshaped into a matrix of size . We apply a softmax function on each row so that each row of the matrix indicates the probability of patch belonging to the locations. We use negative log-likelihood loss as the puzzle loss in our model.
We train the proposed DARR model on our high-resolution multi-organ dataset with 90 cases and adapt it to five different public medical datasets, including 1) multi-organ dataset: Synapse dataset111https://www.synapse.org/#!Synapse:syn3193805/wiki/217789 (30 cases); and 2) 4 single-organ datasets : Spleen (41 cases), Liver (131 cases), Pancreas (282 cases) and NIH Pancreas dataset222https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT (82 cases). For the Synapse dataset, we evaluate on 8 abdominal organs, including Aorta, Gallbladder, Kidney (L), Kidney (R), Liver, Pancreas, Spleen and Stomach, which are also annotated in our multi-organ dataset. For all other datasets, we directly evaluate on the target organ. Each dataset is randomly split into 80% of training data and 20% of testing data. Note that unlike other domain generalization methods which use data from the target domain for training, DARR only sees target domain data during testing. We use Dice-Sørensen coefficient (DSC) as the evaluation metric. For each target dataset, we report the average DSC of all cases.
4.0.2 Implementation details.
We set puzzle-related hyperparameters in all experiments, which leads to a puzzle composed of 27 patches. The loss-related weights are set as and , which are consistent across all experiments.
We use 3D V-Net as our backbone architecture, which is initialized with a standard V-Net pre-trained on our in-house dataset. The puzzle module shares the same encoder with the segmentation branch with an additional classification head. We use two fully-connected layers to generate puzzle prediction. The whole network is then finetuned with Adam solver for another 40000 iterations with the batch size of 1 and a learning rate of 0.0003. Each patch has size , and is squeezed into size before feeding into DARR.
For each target dataset, we further train a supervised V-Net model with their ground-truth labels and test directly on the same target dataset. These results serve as our upper bound performance and can be used to calculate the performance degradation for source-to-target adaptation.
During testing, the DARR is first finetuned with a puzzle module only from each target image for 30 iterations with a learning rate of 1e-5 and SGD solver. Then we fix the network parameters and output the segmentation results via a forward pass through the segmentation branch. After predicting one target image, the model is rolled back to the original model, and the above test-time jigsaw puzzle training is repeated for the next target image. No further post-processing strategies are applied.
4.0.3 Results and discussions.
We compare our DARR with state-of-the-art methods, i.e., GAN-based methods , self-learning-based methods , and meta-learning-based methods . The performance comparison on different datasets is shown in Table 1. To measure the performance gain after adaptation, we also provide results trained on our in-house dataset and tested directly on the target datasets without DARR (denoted as “Lower Bound” in Table 1). We observe that our method improves Lower Bound results by 29.60% on average and outperforms all other methods by a large margin. It is worth noting that our method even outperforms Upper Bound results on Synapse dataset, MSD Liver dataset, and MSD Spleen dataset, without using any target domain data in training. This result indicates that our method, which captures spatial relations among organs, is able to bridge the domain gap between multi-site data.
Comparison with self-learning.
Following , we first train a teacher model on our multi-organ dataset in a fully-supervised manner and compute the pseudo-labels on the Synapse dataset. Then a student model is trained on the union of both datasets. By evaluating the Synapse dataset, we find that the student model yields a lower segmentation performance than that of the teacher model. This indicates that simply using self-learning may not effectively distill information from data of a different source site.
Comparison with Meta-learning model-agnostic learning methods.
The MASF splits the source domain into multiple non-overlapping training and testing sets and trains a meta-learning model-agnostic model viewing the smaller set as different tasks. It also utilizes delicately designed losses to align intra-class features and separate inter-class features. Nevertheless, MASF does not transfer well from the source domain to the target domains. It is only able to transfer large organs like the liver and stomach while performs poorly in detecting the other small organs. This further confirms that the domain gaps among datasets are substantial, especially in multi-organ segmentation and cannot be easily solved by Meta-learning methods.
Comparison with GAN-based methods.
The SIFA is dedicated to adapt MR/CT cardiac and multi-organ segmentation networks. It conducts both image-level and feature-level adaptations based on a modifed CycleGAN . We use the generated target domain images and their corresponding ground truth in the source domain to train a target segmentation network. Here we apply DeepLab-v2 for training the segmentation network after image adaptation of SIFA. From Table 1 we can see that our VNET-SR already outperforms SIFA and achieves inspiring results with average Dice increased to 62.74%. Our full DARR recovers the performance degradation further by an average of compared with lower bound, and outperforms SIFA by a significant margin (only by SIFA), which shows the superior performance of DARR.
4.0.4 Ablation Study.
In this section, we evaluate how each component contributes to our model. We compare different variants of our method (using V-Net as the backbone model): 1) VNET-Puzzle, which integrates an additional puzzle module to adaptively learn the spatial relations among image patches; 2) VNET-SR, which employs a super-resolution module before the segmentation network; and 3) our proposed DARR with both the puzzle module and the super-resolution module applied. As can be seen from Table 1, compared with Lower Bound (which simply uses bilinear upsampling strategies to overcome the resolution divergence among datasets), VNET-SR consistently achieves performance gains on all 5 different target datasets. Especially, for more challenging datasets, the performance improvement can be significant and substantial, e.g., on the Synapse dataset, on the MSD Pancreas dataset and on the NIH Pancreas dataset. This finding indicates the efficacy of our super-resolution module for handling the resolution differences among multi-site data. In addition, VNET-Puzzle also consistently outperforms Lower Bound by a large margin, e.g., vs. for the Synapse dataset, vs. for the MSD Pancreas dataset, and vs. for the NIH Pancreas dataset. Equipped with both the puzzle module and the super-resolution module, our DARR can even lead to additional performance gains compared with VNET-Puzzle and VNET-SR. For instance, we observe an improvement of on the Synapse dataset, on thee MSD Pancreas dataset, and over on both the MSD Spleen dataset and the NIH Pancreas dataset. We also provide component comparison results in box plots (see Fig. 3) for the Synapse dataset, which suggests a general statistical improvement among all tested organs. To further demonstrate the efficacy of the proposed DARR, a qualitative comparison is illustrated in Fig. 4, where the spatial location of both kidneys is successfully identified by DARR.
We proposed an unsupervised domain adaptation method to generalize 3D multi-organ segmentation models to medical images collected from different scanners and/or protocols (domains). This method, named Domain Adaptive Relational Reasoning, is inspired by the the fact that the spatial relationship between internal structures in medical images are relatively fixed. We formulated the spatial relationship by solving jigsaw puzzles and utilized two schemes, i.e., spatial resolution standardisation and test-time jigsaw puzzle training, to guarantee its transferability to multiple domains. Experimental results on five public datasets demonstrate the superiority of our method.
We especially Chen Wei for her valuable discussions and ideas. This work was supported by the Lustgarten Foundation for Pancreatic Cancer Research.
- Almahairi, A., Rajeswar, S., Sordoni, A., Bachman, P., Courville, A.: Augmented cyclegan: Learning many-to-many mappings from unpaired data. In: Proc. ICML (June 2018)
- Bolte, J.A., Kamp, M., Breuer, A., Homoceanu, S., Schlicht, P., Huger, F., Lipinski, D., Fingscheidt, T.: Unsupervised domain adaptation to improve image segmentation quality both in the source and target domain. In: Proc. CVPR Workshops (2019)
- Busto, P.P., Iqbal, A., Gall, J.: Open set domain adaptation for image and action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (2018)
- Carlucci, F.M., D’Innocente, A., Bucci, S., Caputo, B., Tommasi, T.: Domain generalization by solving jigsaw puzzles. In: Proc. CVPR (2019)
- Chang, H., Lu, J., Yu, F., Finkelstein, A.: Pairedcyclegan: Asymmetric style transfer for applying and removing makeup. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 40–48 (2018)
- Chen, C., Dou, Q., Chen, H., Qin, J., Heng, P.A.: Unsupervised bidirectional cross-modality adaptation via deeply synergistic image and feature alignment for medical image segmentation. IEEE Transactions on Medical Imaging (2020)
- Chen, L., Bentley, P., Mori, K., Misawa, K., Fujiwara, M., Rueckert, D.: Self-supervised learning for medical image analysis using image context restoration. Medical image analysis 58, 101539 (2019)
- Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence 40(4), 834–848 (2017)
- Doersch, C., Gupta, A., Efros, A.A.: Unsupervised visual representation learning by context prediction. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1422–1430 (2015)
- Dou, Q., Ouyang, C., Chen, C., Chen, H., Glocker, B., Zhuang, X., Heng, P.: Pnp-adanet: Plug-and-play adversarial domain adaptation network at unpaired cross-modality cardiac segmentation. IEEE Access 7, 99065–99076 (2019)
- Dou, Q., Castro, D.C., Kamnitsas, K., Glocker, B.: Domain generalization via model-agnostic learning of semantic features. In: Advances in Neural Information Processing Systems (NeurIPS) (2019)
- Dou, Q., Ouyang, C., Chen, C., Chen, H., Heng, P.A.: Unsupervised cross-modality domain adaptation of convnets for biomedical image segmentations with adversarial loss. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI). pp. 691–697 (2018)
- Ganin, Y., Lempitsky, V.S.: Unsupervised domain adaptation by backpropagation. In: Bach, F.R., Blei, D.M. (eds.) Proc. ICML (2015)
- Hoffman, J., Tzeng, E., Park, T., Zhu, J.Y., Isola, P., Saenko, K., Efros, A.A., Darrell, T.: Cycada: Cycle-consistent adversarial domain adaptation. arXiv preprint arXiv:1711.03213 (2017)
- Inoue, N., Furuta, R., Yamasaki, T., Aizawa, K.: Cross-domain weakly-supervised object detection through progressive domain adaptation. In: Proc. CVPR (2018)
- Jamaludin, A., Kadir, T., Zisserman, A.: Self-supervised learning for spinal mris. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 294–302. Springer (2017)
- Joyce, T., Chartsias, A., Tsaftaris, S.A.: Deep multi-class segmentation without ground-truth labels (2018)
- Kamnitsas, K., Baumgartner, C., Ledig, C., Newcombe, V., Simpson, J., Kane, A., Menon, D., Nori, A., Criminisi, A., Rueckert, D., et al.: Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. In: International conference on information processing in medical imaging. Springer (2017)
- Liu, M.Y., Breuel, T., Kautz, J.: Unsupervised image-to-image translation networks. In: Advances in neural information processing systems. pp. 700–708 (2017)
- Milletari, F., Navab, N., Ahmadi, S.: V-net: Fully convolutional neural networks for volumetric medical image segmentation. In: Proc. 3DV (2016)
- Murez, Z., Kolouri, S., Kriegman, D., Ramamoorthi, R., Kim, K.: Image to image translation for domain adaptation. In: Proc. CVPR (2018)
- Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., Efros, A.A.: Context encoders: Feature learning by inpainting. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2536–2544 (2016)
- Sánchez, I., Vilaplana, V.: Brain MRI super-resolution using 3d generative adversarial networks. CoRR abs/1812.11440 (2018)
- Sankaranarayanan, S., Balaji, Y., Jain, A., Nam Lim, S., Chellappa, R.: Learning from synthetic data: Addressing domain shift for semantic segmentation. In: Proc. CVPR (2018)
- Simpson, A.L., Antonelli, M., Bakas, S., Bilello, M., Farahani, K., van Ginneken, B., Kopp-Schneider, A., Landman, B.A., Litjens, G.J.S., Menze, B.H., Ronneberger, O., Summers, R.M., Bilic, P., Christ, P.F., Do, R.K.G., Gollub, M., Golia-Pernicka, J., Heckers, S., Jarnagin, W.R., McHugo, M., Napel, S., Vorontsov, E., Maier-Hein, L., Cardoso, M.J.: A large annotated medical image dataset for the development and evaluation of segmentation algorithms. CoRR abs/1902.09063 (2019)
- Sun, B., Saenko, K.: Deep coral: Correlation alignment for deep domain adaptation. In: Proc. ECCV (2016)
- Tzeng, E., Hoffman, J., Zhang, N., Saenko, K., Darrell, T.: Deep domain confusion: Maximizing for domain invariance. arXiv preprint arXiv:1412.3474 (2014)
- Wei, C., Xie, L., Ren, X., Xia, Y., Su, C., Liu, J., Tian, Q., Yuille, A.L.: Iterative reorganization with weak spatial constraints: Solving arbitrary jigsaw puzzles for unsupervised representation learning. In: Proc. CVPR (2019)
- Yao, J., Summers, R.M.: Statistical location model for abdominal organ localization. In: Proc. MICCAI (2009)
- Zhang, R., Isola, P., Efros, A.A.: Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1058–1067 (2017)
- Zhou, Y., Wang, Y., Tang, P., Bai, S., Shen, W., Fishman, E., Yuille, A.: Semi-supervised 3d abdominal multi-organ segmentation via deep multi-planar co-training. In: Proc. WACV (2019)
- Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proc. ICCV (2017)
- Zhu, X., Pang, J., Yang, C., Shi, J., Lin, D.: Adapting object detectors via selective cross-domain alignment. In: Proc. CVPR (2019)
- Zou, Y., Yu, Z., Kumar, B.V., Wang, J.: Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In: Proc. ECCV. pp. 289–305 (2018)
- Zou, Y., Yu, Z., Liu, X., Kumar, B.V., Wang, J.: Confidence regularized self-training. In: Proc. ICCV (2019)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506320.28/warc/CC-MAIN-20230922002008-20230922032008-00029.warc.gz
|
CC-MAIN-2023-40
| 26,055
| 74
|
https://www.simscale.com/forum/t/structural-analysis/91235
|
code
|
Hi, this is osama. Actually I am trying to perform CFD on a structure but I am getting an error. I am new to simscale. Please can someone help me out with the CFD analysis. Here is the link to my project:
Hi Osama, welcome to the forum!
Your issue is most surely related to mesh quality, as explained in this article:
I can see you are currently running a new mesh, but here go my suggestions to improve the model:
- Reduce the enclosure size. This will allow you to reduce mesh cells and spend them where they are needed.
- Optimize the close region refinement. It should include the ground and it doesn’t need to go all the way to the top of the enclosure.
- Use automatic layers on the first runs, only change to manual when your simulation is working, you are confident in the results, and only want to perform a mesh independence analysis.
I copied over your project and performed the setup, you can find it here:
Hi @ggiraldof. I was able to get the mesh after almost 3 hours from the setup which you put. However I am getting an error while running the simulation. Please have a look and do help me out please.
What is the error you are getting? Any messages?
Also, I can see that the mesh is too big to be run with a community account, so maybe you should back down the refinement level on the close region.
ok so what should I take the minimum edge length as
I think you can augment the current value by a factor of 5 or 10.
Your first goal is to perform a draft simulation, but bear in mind that the results are highly sensible to the cell size, thus at the end you should end up with a size similar to what you have right now to get accurate results.
Another suggestion to to cut on mesh size is to make use of the symmetry of the structure if possible, i.e., reduce the enclosure in half :
@ggiraldof I have performed CAD cleaning operations to help me out with the simulation. Please can you help me out as to how can I proceed further from here to perform mesh in order to get exactly accurate results.
The general procedure is to perform a mesh independence analysis.
This consists of starting with a mesh you consider reasonable to get results, and then iterate with a refined mesh. This is repeated until the results do not change too much between mesh refinement iterations (setting a target variation, for example 5% or less)
If you start with a very coarse/bad mesh, this could take a lot of iterations. If done right, it should happen in three or four iterations.
My simulation is giving an error. Please can anyone help me with this. I have performed the mesh but it is giving me an error before starting the simulation. Please someone help me resolve this simulation
Why did you change the meshing algorithm?
I changed the algorithm because my simulation was failing
Hi, I successfully managed to perform CFD analysis. However the force Z plot which I am getting is very much unsteady. By theoratical calculations, I must be getting a pressure force of around 23000 N. However the pressure value plot in my CFD analysis is not steady alot. Please anyone can help me out with this ? @Retsam
Well, you will reduce initial wake in simulation by setting Potential flow initialization to Yes.
After 400 steps you have stable forces already (mind that the flow will be turbulent around your raft), but due to the simulation domain (enormous) size, so called Residuals are still on convergence slope.
But main problem in your simulation is your structure mesh:
In my opinion, you should not try any simulation with such a inappropriate mesh. So please focus on creating correct mesh.
oh okay. so @Retsam please can you help me out as to how can I improve my mesh by looking at the mesh settings that I have made ?
@osamarauf: If I tell your professor that from 2019 on, your ‘study time’ at SimScale was 28 minutes, he would not believe:
Joined Aug 1, '19 Read 26m
Please let me explain that you came here to learn and play with 3000 core-hours. CFD is not for everybody: it is for guys willing to learn. If I make simulation in your place, you will learn nothing.
So please start to study tutorials on meshes first:
Once done, go to different documentations:
I hope you should be ready in couple of weeks to restart your project. If you have more than 20 hours of Read time and you still fail with your mesh, please be back. I will be happy to help you with fine-tuning of your simulation.
ok @Retsam I will definitely read that. I just wanted to ask that is there a way that I can perform this simulation with a community plan account ? or do I have to go to the proffesional account.
Secondly I wanted to ask that is it the size of the bounding box and cartesian box that I have selected wrongly?
In my opinion, that kind of simulation can be performed without any problem with community plan. But without correctly crafting the mesh and simulation, you need to pay for more resources.
For aircraft / wing simulation your BMB may be correctly sized. For simple drag force calculation on blunt body it is too big.
(post withdrawn by author, will be automatically deleted in 24 hours unless flagged)
Hi @Retsam. I finally managed to run the simulation. The mesh seems to be quite better than before to me. The results also seem sensible. I just wanted you to see my simulation setup and please let me know if this one is correctly done. It would be really kind of you. Here is the link
Hi @BenLewis. Basically I have translated the results from CFD into FEA . I have used wind force and the structure consists of 144 solar panels. Each solar panel weights 24.5 kg. The size of each solar panel is 2180 mm x 996 mm x 40 mm. Furthermore, the structure is made up of mild steel. The top part of the structure consists of I beams . while the bottom rectangular part consists of C-channels.
Basically I want to find out if this material is safe to design or not. I have used fixed support at the bottom side. I performed FEA and the maximum value of von mises stress that I am getting is around 244 MPA. And we know that the tensile strength of mild steel is 250 MPA. So the factor of safety is coming around 1.03 which is less than 2. So I am assuming that the structure is unsafe.
I just wanted to ask that have I set up the simulation correctly. I have given the link to my simulation below. Secondly I wanted to ask that is this the correct approach to find out wheather the structure is safe or not.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514796.13/warc/CC-MAIN-20210118123320-20210118153320-00174.warc.gz
|
CC-MAIN-2021-04
| 6,422
| 43
|
https://coderanch.com/t/397983/java/URL-Object
|
code
|
I have one more... I am ATTEMPTING to write an application that retrieves from a weather website the temperature of a specific location (based on input from the user). Everything I have says once I have created a URL object, I can create the URL connection. I have some documentation on creating the connection but cannot find anything on creating the URL object. Again, thanks!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645824.5/warc/CC-MAIN-20180318145821-20180318165821-00622.warc.gz
|
CC-MAIN-2018-13
| 378
| 1
|
https://talentsingames.com/en/job/full-time-technical-designer-2/
|
code
|
With multiple awards to our name, Larian Studios has proven that we’re dedicated to delivering high-quality role-playing games. We’re now looking for a Technical Designer who can support and enable our design teams create amazing RPGs.
Study a powerful code gameplay system and apply your insights to extend and improve existing feature designs
Consult content designers on the capabilities of the system
Write code feature requests for the gameplay programming team based on existing design documents.
Implement new gameplay systems and content using script and a data-driven stats system
Extend their capabilities in C++ when necessary
Maintain existing features and improve them with polishing touches
Work with audiovisual departments to add flavour to existing mechanics
Improve UI feedback for gameplay features – tooltips, combat log, notifications, in-world indicators
Support and improve existing economy balancing tools
C++ skills a must
Minimum 1 year of experience as a technical designer, gameplay programmer, systems designer, combat designer or content designer
Or alternatively: personal projects that demonstrate awareness of game design fundamentals
Nice to Haves
Familiarity with RPG systems of popular tabletop and computer RPGs
Knowledge of basic calculus, combinatorics, statistics
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389309.17/warc/CC-MAIN-20200525161346-20200525191346-00476.warc.gz
|
CC-MAIN-2020-24
| 1,309
| 16
|
https://github.com/okfn/opendatasurvey/issues/844
|
code
|
Join GitHub today
Localization for Survey #844
During implementation of the new template design and survey strategy for the 2016 survey, localization wasn't explicitly supported. Much of the supporting code is in place, but new strings need to be marked for translation, and Transifex updated.
The strategy for localizing loaded content (Questions, Datasets, etc) has changed. Site admins are encouraged to manage Questions and Datasets within tabs within a single Google Spreadsheet doc (rather than extending a global Questions doc with translated columns).
For local sites where more than one language is supported, the multi-column translation strategy can still be used.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513844.48/warc/CC-MAIN-20181021094247-20181021115747-00274.warc.gz
|
CC-MAIN-2018-43
| 675
| 5
|
https://libwebsockets.org/pipermail/libwebsockets/2019-April/007925.html
|
code
|
[Libwebsockets] HS: UPGRADE malformed
andy at warmcat.com
Sat Apr 27 04:16:11 CEST 2019
On April 27, 2019 2:26:55 AM GMT+01:00, Paolo Denti <paolo.denti at gmail.com> wrote:
>what's wrong with you? I tried to help anyone that might have the same
>issue as soon as I found a potential solution.
*shrug* This is FOSS. Nobody owes you anything.
If you want to provide usable info to try to understand where your problem is coming from, let's do that as I described. Maybe there's something to do in lws.
You're talking to > 200 people + search engines when you post on this list, while a handful might care about your specific server, nobody cares to have 3 x drama emails from you (so far) in their inbox.
>I am not a maintainer of the library, it is my first time using it, I
I am the maintainer.
>struggled with a potential issue, I found a way to sort out a potential
No... a 'fix' is something different that requires definitively understanding the problem and a 'solution' would be figuring out some action that changes things so the root cause can no longer apply. For example, is this really actually caused by lws, or by your specific server and the other clients being liberal in accepting its deviations? Your original rant doesn't contain anything useful to tell, because you jumped to a conclusion already without finding out either the mechanism for, or the source of, the problem.
and I am posting what I found hoping someone could ask me how to
>further to help everyone.
>At least, that is what I do on my side when I get such requests on what
>I did not know how to get detailed information to supply in order to
>to fix the problem, otherwise I would have done it myself.
>Now that I see how to supply info, I will do that as soon as possible.
>Thank you for you help
>On Fri, Apr 26, 2019 at 5:47 PM Andy Green <andy at warmcat.com> wrote:
>> On April 26, 2019 11:14:01 PM GMT+01:00, Paolo Denti <
>> paolo.denti at gmail.com> wrote:
>> >I have been struggling for a very long time with error HS: UPGRADE
>> >malformed, even with very basic examples.
>> >I just was not able to write even a single super basic example, not
>> >starting from the existing examples.
>> >Whatever I tried to do, I just got this error, on my server (Java,
>> >based) fully working, with several clients, node.js. python, perl,
>> >But I was not able to have it working with libwebsockets.
>> >I was about to give up when, today, I browsed a little bit and I
>> >someone was having the same issue while upgrading to 3.1.0
>> >I tried to build 3.0.0 and everything magically worked, without
>> >changing a
>> >single bit from my last experiment, out of the 200K zillions I
>> I hope that was useful for you as therapy, because as an actionable
>> report I don't see what to do with it.
>> Can you show me some verbose logs from lws when it decides the
>> malformed, build with cmake .. -DCMAKE_BUILD_TYPE=DEBUG and -d 1151
More information about the Libwebsockets
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00206.warc.gz
|
CC-MAIN-2021-43
| 2,957
| 40
|
https://www.informit.com/articles/article.aspx?p=31696&seqNum=3
|
code
|
Pass the MCAD/MCSD: Learning to Access and Manipulate XML Data
This chapter covers the following Microsoft-specified objective for the "Consuming and Manipulating Data" section of the "Developing XML Web Services and Server Components with Microsoft Visual C# .NET and the Microsoft .NET Framework" exam:
Access and manipulate XML Data.
Access an XML file by using the Document Object Model (DOM) and an XmlReader.
Transform DataSet data into XML data.
Use XPath to query XML data.
Generate and use an XSD schema.
Write a SQL statement that retrieves XML data from a SQL Server database.
Update a SQL Server database by using XML.
Validate an XML document.
Extensible Markup Language (far better known as XML) is pervasive in .NET. It's used as the format for configuration files, as the transmission format for SOAP messages, and in many other places. It's also rapidly becoming the most widespread common language for many development platforms.
This objective tests your ability to perform many XML development tasks. To pass this section of the exam, you need to know how to read an XML file from disk, and how to create your own XML from a DataSet object in your application. You also need to be familiar with the XPath query language, and with the creation and use of XSD schema files.
You'll also need to understand the connections that Microsoft SQL Server has with the XML universe. You need to be able to extract SQL Server data in XML format, and to be able to update a SQL Server database by sending it properly formatted XML.
Finally, the exam tests your ability to validate XML to confirm that it conforms to a proper format. The .NET Framework includes several means of validating XML that you should be familiar with.
Accessing an XML File
- Understanding the DOM
- Using an XMLReader Object
- The XmlNode Class
- The XmlDocument Class
Synchronizing DataSet Objects with XML
- The XmlDataDocument Class
- Synchronizing a DataSet Object with an XmlDataDocument Object
- Starting with an XmlDataDocument Object
- Starting with a Full DataSet Object
- Starting with an XML Schema
- The XPath Language
- Using the XPathNavigator Class
- Selecting Nodes with XPath
- Navigating Nodes with XPath
Generating and Using XSD Schemas
- Generating an XSD Schema
- Using an XSD Schema
- Validating Against XSD
- Validating Against a DTD
Using XML with SQL Server
- Generating XML with SQL Statements
- Understanding the FOR XML Clause
- Using ExecuteXmlReader() Method
- Updating SQL Data by Using XML
- Installing SQLXML
- Using DiffGrams
Apply Your Knowledge
Use the XmlDocument and XmlNode objects to navigate through some XML files. Inspect the node types that you find and understand how they relate to the original XML.
Use the XmlDataDocument class to synchronize a DataSet object with an XML file. Save the XML file to disk and inspect its contents. Understand how the generated XML relates to the original DataSet object.
Use an XPath processor to run XPath queries against an XML file. Make sure you know the XPath syntax to select portions of the XML.
Use the methods of the DataSet object to create XSD files. Inspect the generated XSD and understand how it relates to the original objects.
Use XML to read and write SQL Server data. You can install the MSDE version of SQL Server from your Visual Studio .NET CD-ROMs if you don't have a full SQL Server to work with.
Use the XmlValidatingReader class to validate an XML file. Make a change to the file that makes it invalid and examine the results when you try to validate the file.
Review the XML Data section of the Common Tasks QuickStart Tutorials that ship as part of the .NET Framework SDK.
You can't use the .NET Framework effectively unless you're familiar with XML. That's true even if you're working only with desktop applications, but if you want to write XML Web Services and other distributed applications, XML knowledge is even more important. The .NET Framework uses XML for many purposes itself, but it also makes it very easy for you to use XML in your own applications.
The FCL's support for XML is mainly contained in the System.Xml namespace. This namespace contains objects to parse, validate, and manipulate XML. You can read and write XML, use XPath to navigate through an XML document, or check to see whether a particular document is valid XML by using the objects in this namespace.
XML Basics In this chapter, I've assumed that you're already familiar with the basics of XML, such as elements and attributes. If you need a refresher course on XML Basics, refer to Appendix B, "XML Standards and Syntax."
As you're learning about XML, you'll become familiar with some other standards as well. These include the XPath query language and the XSD schema language. You'll see in this chapter how these other standards are integrated into the .NET Framework's XML support.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817158.8/warc/CC-MAIN-20240417142102-20240417172102-00078.warc.gz
|
CC-MAIN-2024-18
| 4,859
| 53
|
https://geobabble.wordpress.com/2009/09/09/arcgis-9-3-1-sp1/
|
code
|
So the upcoming release of ArcGIS 9.3.1 SP1 addresses a lot of issues. I am somewhat curious why it’s not being called 9.3.2. Calling it a service pack seems to be mixing metaphors. Hopefully, ESRI will reconsider before it hits the street.
- There is a NAICS code (532230) for "Video Tape and Disc Rental." 31 minutes ago
- Full text of the Geospatial Data Act of 2017. Bypass all summaries and just read the bill. congress.gov/bill/115th-con… #gistribe 23 hours ago
- RT @Spatial_Punk: Hey #ESRI,I know we don't agree on much,but your @EsriStartups are @ risk because of Geospatial Data Act (GDA) of 2017 (S… 1 day ago
The information in this weblog is provided "AS IS" with no warranties, and confers no rights.
This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my opinion and probably incorrect.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320763.95/warc/CC-MAIN-20170626133830-20170626153830-00403.warc.gz
|
CC-MAIN-2017-26
| 861
| 6
|
https://cslanet.com/old-forum/9041.html
|
code
|
Dear ladies and sirs.
In CSLA 3.8 one could invoke the AuthorizationRules.AllowGet method passing in the type of some object. This could be done at any level and the object does not have to be a BO.
For instance, we have something called EntityManager. In its static constructor we have a line like this:
AuthorizationRules.AllowGet(typeof(SomeNonBOType), Roles.Authenticated, Roles.Guest);
Is it possilbe in CSLA 4? Or Csla 4 demands that authorization rules be applied to BO only?
The same thing should work. The per-type authorization rules in CSLA 4 don't restrict the object type to any CSLA type, so you should be able to define and check authorization rules for any type.
Thanks Rocky for the prompt reply.
I get it. I was confused by the fact that there is BusinessRules static class and BusinessRules property in a BO.
So, I just replace the old code with this:
BusinessRules.AddRule(typeof(SomeNonBOType), new IsInRole(AuthorizationActions.GetObject, Roles.Authenticated, Roles.Guest));
Copyright (c) Marimer LLC
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00897.warc.gz
|
CC-MAIN-2023-50
| 1,022
| 11
|
http://photoshopzilla.com/mysql-odbc/mysql-odbc-driver-downloads.php
|
code
|
Buying the bundle saves you money on both tools. Buy Download Query Builder for SQL Server Query Builder for SQL Server Tool for visual creation of any queries without code typing. You are logged in as . Buy Download Data Compare for Oracle Data Compare for Oracle Tool for safe and efficient comparison and synchronization of Oracle data, convenient data differences management in a well-designed interface. navigate here
Direct Mode gives your applications an unrivaled advantage - connection to MySQL databases directly via TCP/IP . LiteDAC provides an opportunity to work with SQLite directly by static linking of SQLite library in an application. Buy Download dbExpress Driver for SQLite dbExpress Driver for SQLite dbExpress Driver for SQLite is a database-independent layer that defines common interface to provide direct access to SQLite from Delphi and Contact MySQL Sales USA/Canada: +1-866-221-0634 (More Countries ») © 2017, Oracle Corporation and/or its affiliates Products Oracle MySQL Cloud Service MySQL Enterprise Edition MySQL Standard Edition MySQL Classic Edition
Get the basic functionality of the MySQL and MariaDB front end management tool for free. Connector/Node.js Standardized database driver for Node.js platforms and development. Once the files are copied to their final locations and the drivers registered with the Windows ODBC manager, the installation is complete. Mysql Odbc Connection String Now that the installation is complete, configure your ODBC connections using Chapter 5, Configuring Connector/ODBC.
License Trial version OS Windows 2000 MySQL ODBC driver is also compatible with: Windows 98 Windows 98 SE Windows ME Windows 2000 Windows NT Windows XP Windows 2003 Windows Vista Windows We do not encourage or condone the use of this program if it is in violation of these laws. New! Download TMetric TMetric TMetric is a time tracking web application for IT-professionals and companies.
Buy Download dbExpress Driver for InterBase and Firebird dbExpress Driver for InterBase and Firebird dbExpress is a database-independent layer that defines common interface to provide fast access to InterBase and Firebird Mysql Connector Jar Download MySQL open source software is provided under the GPL License. Learn more Developer Bundle for SQL Server Developer Bundle for SQL Server Take a full control over database development and management with dbForge Developer Bundle for SQL Server with sensational discount dbExpress Driver for SQLite also has support for SQLite database encryption to protect your data from unauthorized access.
Learn more Excel Add-in for Oracle Excel Add-in for Oracle Excel add-in that allows you to connect Microsoft Excel to Oracle, quickly and easily load data from Oracle to Excel, instantly check over here OEMs, ISVs and VARs can purchase commercial licenses. Buy Download SDAC SDAC Being a feature-rich and high-performance library of components that provides direct and native connectivity to SQL Server from Delphi, C++Builder, Lazarus (and Free Pascal) under Windows, MacOS, Buy Download Code Review Bundle Code Review Bundle Code Compare adds value to Review Assistant when tools are used together. Mysql Connector C#
Learn more SSIS Components for Zoho CRM SSIS Components for Zoho CRM A set of SSIS Data Flow components for SQL Server Integration Services (SSIS) packages that includes Source component with Buy Download Schema Compare for SQL Server Schema Compare for SQL Server Tool for quick and safe schema comparison and synchronization, easy analysis of database structure differences, and deployment of changes Excel Add-ins for Databases Excel Add-ins for Databases Excel add-ins that allow you to work with database data in Microsoft Excel as with usual Excel spreadsheets. his comment is here Cons: (10 characters minimum)Count: 0 of 1,000 characters 5.
Buy Download Data Compare for SQL Server Data Compare for SQL Server Tune your SQL database comparison, quickly analyze differences in a well-designed user interface and effortlessly synchronize data via a Mysql Driver Maven That's why they are sold in code review bundle. Download SQL Decryptor SQL Decryptor A free tool for restoring lost definitions and decrypting SQL procedures, functions, triggers, and views in SQL Server databases.
SSMS Add-ins SSMS Add-ins Plugins that add missing features to SQL Server Management Studio and improve your productivity while working with Microsoft SQL Server. Please submit your review for MySQL Connector/ODBC (64-Bit) 1. Buy Download dotConnect for MySQL dotConnect for MySQL An enhanced enhanced ORM enabled data provider MySQL built over ADO.NET architecture. Mysql Connector Download Buy Download dotConnect for Salesforce dotConnect for Salesforce An ADO.NET provider for accessing Salesforce data through the standard ADO.NET or Entity Framework interfaces.
IBDAC-based applications connect to the server directly using the InterBase or Firebird client. The tool allows you to capture data about each server event. The tool includes a huge collection of predefined generators with sensible configuration options. weblink Get dbForge Schema Compare for MySQL and Maria DB, and dbForge Data Compare for MySQL products and save about 25%.
Install the package before you click Retry and continue. ADO.NET Providers for Clouds ADO.NET Providers for Clouds The fastest way to create .NET applications, working with cloud data. Learn more SSIS Components for Dynamics CRM SSIS Components for Dynamics CRM A set of SSIS Data Flow components for SQL Server Integration Services (SSIS) packages that includes Source component with Download dbMonitor dbMonitor The tool provides visual monitoring of your database applications.
Buy Download Data Compare for SQL Server Data Compare for SQL Server Tune your SQL database comparison, quickly analyze differences in a well-designed user interface and effortlessly synchronize data via a Buy Download Query Builder for MySQL Query Builder for MySQL Visual tool for creating queries of any complexity without code typing. Buy Download Data Generator for SQL Server Data Generator for SQL Server Convenient and easy-to-use GUI tool for a fast generation of large volumes of SQL Server test table data. Direct Mode allows to avoid using MySQL client library, that increases developed application performance and simplyfies deployment process.
Buy Download Source Control Source Control A powerful SSMS add-in for managing SQL Server database changes in popular source control systems, delivering smooth workflow in a familiar interface. For a list of changes see the revision history.WindowsODBC Driver for MySQL 2.111.56 MbLinuxODBC Driver for MySQL 2.118.77 MbMac OS XODBC Driver for MySQL 2.124.57 MbDocumentationCHM documentation3.33 MbPDF documentation2.65 Mb 50% Connector/C++ Standardized database driver for C++ development. This includes to personalise ads, to provide social media features and to analyse our traffic.
Get the basic functionality for free. Buy Download dbExpress Driver for Oracle dbExpress Driver for Oracle dbExpress is a database-independent layer that defines common interface to provide direct access to Oracle from Delphi and C++Builder on Windows SQLite library is statically linked into developed applications, that increases application performance and simplyfies deployment process. Full support for standard ODBC API functions and data types implemented in our driver makes interaction of your database applications with MySQL fast, easy and extremely handy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590329.25/warc/CC-MAIN-20180718193656-20180718213656-00271.warc.gz
|
CC-MAIN-2018-30
| 7,522
| 14
|
https://www.cybrhome.com/website/jobs.hasgeek.com
|
code
|
Job hunting portal by GitHub.
Search 80000+ tech jobs. Dice.com has business analyst, software engineer, QA jobs and many more. Manage your tech job search and IT career on Dice.
Hasjob is India’s best job board for tech startups
Here's a collection of best tools, resources, blogs and software for the modern entrepreneur.
Some interesting and helpful sites for teenagers and young adults.
A list of social networks and communities with more than 100 million active users.
Exclusively for people who love to travel and explore the world.
Online learning can take you places. Here are some sites that offer some of the best courses, resources and tutorials.
Fashion and modeling related sites on the web.
Resources, tutorials, tools and blogs for software professionals, developers and engineers.
This list is for people who live to eat and not eat to live.
Explore best tools, resources, sites and blogs for designers at one place.
UI/UX & Design
Free Stock Photos
Graphic Design Tools
Graphic Design Tutorials
Explore best tools, resources, sites and blogs required to build and scale a startup
Startup Accelerators (official)
VC Firms (official)
Explore best sites, blogs and portals related to entertainment
Torrent Search Engines
Explore best tools, resources, sites and blogs required to master programming and technology
Engineering Blogs of Companies
Data Science Blogs
Explore best tools, resources, sites and blogs required for your career and growth
Resume & CV (tools)
Learn a Language
Explore best sites, blogs and portals related to sports and games
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583826240.93/warc/CC-MAIN-20190122034213-20190122060213-00413.warc.gz
|
CC-MAIN-2019-04
| 1,565
| 28
|
http://download.cnet.com/ShowMeTheStats/3000-2066_4-65414.html
|
code
|
ShowMeTheStats! is a tool for easy currency rates checking and balance checking on your accounts. Do you know how much time you waste on balance checking on your accounts? Do you become irritated when you see a lot of annoying browser windows containing complex web interfaces? Do you need to perform multiple actions only to check the balances on your accounts? If you are a shareware vendor, webmaster, or an ordinary PC user and you need to check your balance or currency rates periodically, you may save your time effectively using ShowMeTheStats!
Forget about all web interfaces! Supported services are: shareware vendors and affiliates, Google, Banking accounts, PPC sponsors, adult sponsors. ShowMeTheStats! allows you to check all your balances directly from the program: ShowMeTheStats! connects to the HTTP or HTTPS web pages, gathers the desired information and delivers it to you. It is safe and reliable method of checking your balances. ShowMeTheStats! doesn't deliver or share any information with third parties. Moreover, your privacy is strictly kept thanks to the password protected access system. Nobody can access the program without your permission. In addition, all data in the program (login/password information, balances and related data) is encrypted with a password too. Once you have created a password you guarantee safety for all your information.
Using this tool you may check all your balances simultaneously. Built-in scheduler allows setting time intervals between balance checks. Built-in notifications (balloon tips, tray icon blinking etc.) are created to inform a user about balance changes on ones accounts. Furthermore, you can view the some specific parameters for every account type (e.g.: "Number of orders: 100"). If you need a tool that is always at hand you must choose ShowMeTheStats!.
More Products to Consider
- Take an open-source approach to office productivity.
- Convert PDF files to Microsoft Word format.
- Open, edit, and save files using the new file formats in 2007 v...
- View, navigate, and print PDF files.
- Convert JPG, BMP, TIFF, PNG images to PDF files.
- Create and share content with the help of a comprehensive set o...
- Transform PDF documents to Excel documents.
- Create, encrypt, and merge PDF files.
- Keep track of how you use your PC.
- Render PDF files within applications that support the print fun...
- Develop your own custom solutions visualizing various data.
- Read, create, and save files seamlessly in Office 2007/2010 and...
- Recognize text from images using the open source Tesseract OCR ...
- Capture screenshots to make annotations and manuals.
- Protect yourself while surfing the Web.
- Manage your passwords, fill your forms, and store your data.
- Analyzing web site log files.
- Convert Word, RTF, and TXT files into PDF documents.
- Allow complete user permissions for specific applications.
- Select color combination you need from the schemes.
- Create and publish Web pages without knowing HTML.
- Log in Web sites automatically and manage your privacy.
- Capture Web screenshots automatically.
- Create, manage, and collaborate with others on presentations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010701848/warc/CC-MAIN-20140305091141-00082-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 3,157
| 28
|
https://jobs.crelate.com/portal/buckinghamsearch/search?locationState=il&location=des%20plaines
|
code
|
Buckingham Search is excited to partner with a leading, privately-held manufacturing company to hire an Internal Audit Manager.
Things to get excited about:
Excellent benefits like:
5% 401k match, 4 weeks PTO + federal holidays, phone reimbursement, tuition reimbursement.
Sweet perks like:
Onsite gym, FREE trainer, and bar in the office.
Opportunity to advance across multiple...
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371806302.78/warc/CC-MAIN-20200407214925-20200408005425-00057.warc.gz
|
CC-MAIN-2020-16
| 381
| 7
|
https://mailman.nanog.org/pipermail/nanog/2018-July/096352.html
|
code
|
Quickstart Guide to IRR/RPSL
job at ntt.net
Thu Jul 19 18:30:17 UTC 2018
On Thu, Jul 19, 2018 at 11:19:12AM -0700, Kenneth Finnegan wrote:
> As for ARIN-WHOIS, I think I had gotten confused whether it was
> additive or exclusive of IRR objects for allowing prefixes.
Indeed, in arouteserver it is 'additive'. Documentation from ARIN is
> > 2/ I'd delete the "Step 2: Document Your Autonomous System’s Routing
> > Policy" step, nobody uses this.
> Is the expectation that the only source of a network's as-set is
> PeeringDB then?
Yes, or the IX/transit operator can ask what AS-SET to use during the
turn-up of the circuit.
> I have reason to believe there are IRR consumers who do parse
> export/mp-export statements. I think at least documenting an mp-export
> to AS-ANY policy is reasonable, but I'll reconsider that.
Globally I think there are only 2 or 3 organisations left that parse
this information. The vast majority either autodiscovers via peeringdb,
or just explicitly asks for it during provisioning.
> > Can I update http://peering.exposed/ and add FCIX with a 'yes' to
> > both secure route servers & BCP 214? :-)
> Please do. :-) $0 for 10G, N/A for 100G.
> The next IRR puzzle for us is converting a CSV of member ASNs to their
> as-sets to generate the requested AS33495:AS-MEMBERS as-set so our
> members can also generate filters against the route servers. It seems
> like there's probably a tool like bgpq3 that can turn a list of ASNs
> into an as-set of their exports, but I'm not seeing it.
bgpq3 can only go from IRR sources (using the RADB IRRd protocol) to
outputs such as Cisco, Juniper, BIRD, JSON - not the other way around.
> Anyone have something at hand, or am I breaking out the python soon?
More information about the NANOG
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487582767.0/warc/CC-MAIN-20210612103920-20210612133920-00278.warc.gz
|
CC-MAIN-2021-25
| 1,760
| 31
|
https://www.influxdata.com/comparison/azure-data-explorer-vs-postgres/
|
code
|
Choosing the right database is a critical choice when building any software application. All databases have different strengths and weaknesses when it comes to performance, so deciding which database has the most benefits and the most minor downsides for your specific use case and data model is an important decision. Below you will find an overview of the key concepts, architecture, features, use cases, and pricing models of Azure Data Explorer and PostgreSQL so you can quickly see how they compare against each other.
The primary purpose of this article is to compare how Azure Data Explorer and PostgreSQL perform for workloads involving time series data, not for all possible use cases. Time series data typically presents a unique challenge in terms of database performance. This is due to the high volume of data being written and the query patterns to access that data. This article doesn’t intend to make the case for which database is better; it simply provides an overview of each database so you can make an informed decision.
Azure Data Explorer vs PostgreSQL Breakdown
ADX can be deployed in the Azure cloud as a managed service and is easily integrated with other Azure services and tools for seamless data processing and analytics.
PostgreSQL can be deployed on various platforms, such as on-premises, in virtual machines, or as a managed cloud service like Amazon RDS, Google Cloud SQL, or Azure Database for PostgreSQL.
PostgreSQL license (similar to MIT or BSD)
Log and telemetry data analysis, real-time analytics, security and compliance analysis, IoT data processing
Web applications, geospatial data, business intelligence, analytics, content management systems, financial applications, scientific applications
Highly scalable with support for horizontal scaling, sharding, and partitioning
Supports vertical scaling, horizontal scaling through partitioning, sharding, and replication using available tools
Azure Data Explorer Overview
Azure Data Explorer is a cloud-based, fully managed, big data analytics platform offered as part of the Microsoft Azure platform. It was announced by Microsoft in 2018 and is available as a PaaS offering. Azure Data Explorer provides high-performance capabilities for ingesting and querying telemetry, logs, and time series data.
PostgreSQL, also known as Postgres, is an open-source relational database management system that was first released in 1996. It has a long history of being a robust, reliable, and feature-rich database system, widely used in various industries and applications. PostgreSQL is known for its adherence to the SQL standard and extensibility, which allows users to define their own data types, operators, and functions. It is developed and maintained by a dedicated community of contributors and is available on multiple platforms, including Windows, Linux, and macOS.
Azure Data Explorer for Time Series Data
Azure Data Explorer is well-suited for handling time series data. Its high-performance capabilities and ability to ingest large volumes of data make it suitable for analyzing and querying time series data in near real-time. With its advanced query operators, such as calculated columns, searching and filtering on rows, group by-aggregates, and joins, Azure Data Explorer enables efficient analysis of time series data. Its scalable architecture and distributed nature ensure that it can handle the velocity and volume requirements of time series data effectively.
PostgreSQL for Time Series Data
PostgreSQL can be used for time series data storage and analysis, although it was not specifically designed for this use case. With its rich set of data types, indexing options, and window function support, PostgreSQL can handle time series data. However, Postgres will not be as optimized for time series data as specialized time series databases when it comes to things like data compression, write throughput, and query speed. PostgreSQL also lacks a number of features that are useful for working with time series data like downsampling, retention policies, and custom SQL functions for time series data analysis.
Azure Data Explorer Key Concepts
- Relational Data Model: Azure Data Explorer is a distributed database based on relational database management systems. It supports entities such as databases, tables, functions, and columns. Unlike traditional RDBMS, Azure Data Explorer does not enforce constraints like key uniqueness, primary keys, or foreign keys. Instead, the necessary relationships are established at query time.
- Kusto Query Language (KQL): Azure Data Explorer uses KQL, a powerful and expressive query language, to enable users to explore and analyze their data with ease.
- Extents: In Azure Data Explorer, data is organized into units called extents, which are immutable, compressed sets of records that can be efficiently stored and queried.
PostgreSQL Key Concepts
- MVCC: Multi-Version Concurrency Control is a technique used by PostgreSQL to allow multiple transactions to be executed concurrently without conflicts or locking.
- WAL: Write-Ahead Logging is a method used to ensure data durability by logging changes to a journal before they are written to the main data files.
- TOAST: The Oversized-Attribute Storage Technique is a mechanism for storing large data values in a separate table to reduce the main table’s disk space consumption.
Azure Data Explorer Architecture
Azure Data Explorer is built on a cloud-native, distributed architecture that supports both NoSQL and SQL-like querying capabilities. It is a columnar storage-based database that leverages compressed, immutable data extents for efficient storage and retrieval. The core components of Azure Data Explorer’s architecture include the Control Plane, Data Management, and Query Processing. The Control Plane is responsible for managing resources and metadata, while the Data Management component handles data ingestion and organization. Query Processing is responsible for executing queries and returning results to users.
PostgreSQL is a client-server relational database system that uses the SQL language for querying and manipulation. It employs a process-based architecture, with each connection to the database being handled by a separate server process. This architecture provides isolation between different users and sessions. PostgreSQL supports ACID transactions and uses a combination of MVCC, WAL, and other techniques to ensure data consistency, durability, and performance. It also supports various extensions and external modules to enhance its functionality.
Free Time-Series Database Guide
Get a comprehensive review of alternatives and critical requirements for selecting yours.
Azure Data Explorer Features
High-performance data ingestion
Azure Data Explorer can ingest data at a rate of 200 MB per second per node, offering fast and efficient data ingestion capabilities.
Azure Data Explorer integrates seamlessly with popular data visualization tools like Power BI, Grafana, and Jupyter Notebooks, allowing users to easily visualize and analyze their data.
The Kusto Query Language (KQL) supports advanced analytics features such as time series analysis, pattern recognition, and anomaly detection, enabling users to gain deeper insights from their data.
Unlike traditional relational databases, Azure Data Explorer does not enforce constraints like key uniqueness, primary keys, or foreign keys. This flexibility allows for dynamic schema changes and the ability to handle semi-structured and unstructured data.
PostgreSQL allows users to define custom data types, operators, and functions, making it highly adaptable to specific application requirements.
PostgreSQL has built-in support for full-text search, enabling users to perform complex text-based queries and analyses.
With the PostGIS extension, PostgreSQL can store and manipulate geospatial data, making it suitable for GIS applications.
Azure Data Explorer Use Cases
Azure Data Explorer is commonly used for log analytics, where it can ingest, store, and analyze large volumes of log data generated by applications, servers, and infrastructure. Organizations can use Azure Data Explorer to monitor application performance, troubleshoot issues, detect anomalies, and gain insights into user behavior. The ability to analyze log data in near real-time enables proactive issue resolution and improved operational efficiency.
Azure Data Explorer is well-suited for telemetry analytics, where it can process and analyze data generated by IoT devices, sensors, and applications. Organizations can use Azure Data Explorer to monitor device health, optimize resource utilization, and detect anomalies in telemetry data. The platform’s scalability and high-performance capabilities make it ideal for handling the large volumes of data generated by IoT devices.
Time series analysis
Azure Data Explorer is used for time series analysis, where it can ingest and analyze time-stamped data points collected over time. This use case is applicable in various industries, including finance, healthcare, manufacturing, and energy. Organizations can use Azure Data Explorer to analyze trends, detect patterns, and forecast future events based on historical time series data. The platform’s advanced query operators and real-time analysis capabilities enable organizations to derive valuable insights from time series data.
PostgreSQL Use Cases
PostgreSQL is a popular choice for large-scale enterprise applications due to its reliability, performance, and feature set.
With the PostGIS extension, PostgreSQL can be used for storing and analyzing geospatial data in applications like mapping, routing, and geocoding.
As a relational database, PostgreSQL is a good fit for pretty much any application that involves transactional workloads.
Azure Data Explorer Pricing Model
Azure Data Explorer’s pricing model is based on a pay-as-you-go approach, where customers are billed based on their usage of the service. The pricing is determined by factors such as the amount of data ingested, the amount of data stored, and the number of queries executed. Additionally, customers can choose between different pricing tiers that offer varying levels of performance and features. Azure Data Explorer also provides options for reserved capacity, which allows customers to reserve resources for a fixed period of time at a discounted rate.
PostgreSQL Pricing Model
PostgreSQL is open source software, and there are no licensing fees associated with its use. However, costs can arise from hardware, hosting, and operational expenses when deploying a self-managed PostgreSQL server. Several cloud-based managed PostgreSQL services, such as Amazon RDS, Google Cloud SQL, and Azure Database for PostgreSQL, offer different pricing models based on factors like storage, computing resources, and support.
Get started with InfluxDB for free
InfluxDB Cloud is the fastest way to start storing and analyzing your time series data.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473738.92/warc/CC-MAIN-20240222093910-20240222123910-00588.warc.gz
|
CC-MAIN-2024-10
| 10,988
| 54
|
http://mathhelpforum.com/statistics/140497-how-do-i-find-mean-categorical-data-print.html
|
code
|
How do I find the mean of categorical data?
here is my problem:
I am doing this lab on streaks in basketball, like 0=miss, 1=hit once, 2= hit two in a row.. so on.
I need to find the mean of this data in terms of its streak, but fathom is not giving me an answer cause my table is all representative, here is my data:
For some reason I am just hitting a mental block at trying to find this...
the answer should be between 1 and 2, but I cant see how to do this.
thanks for your help :)
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118519.29/warc/CC-MAIN-20170423031158-00371-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 485
| 7
|
https://forum.videohelp.com/threads/226253-Create-a-divx-container-with-multiple-AC3-audio-and-subtitle?s=aa65bf33a5d556194454a9b38a17c191
|
code
|
This is just an overview of the programs and processes used to create a single divx file for Divx Certified players. It allows you to create up to 8 audio streams and 8 subtitle tracks, selectable with your remote. Note that these files have been tested and confirmed on an LG LRA-536 DVD Recorder and a Panasonic S49 DVD Player.
This guide assumes you have VOB files with AC3 audio and subtitles.
Well, there doesn't seem to be a program that can combine everything easily to create a single video file for playing on your home DVD player (by this I mean Divx Certified players). The problem has always been with subtitles. Trying to convert them manually is a pain, especially with the special characters (The four languages I use are English, French, Russian and German). Using the new Divx Create Bundle is good except that the audio is converted to MP3 (no surround for you!) and the quality is limited to a few non-editable profiles. The key was using Fuse to combine the two main programs (Dr Divx and Divx Creator).
Tools (chosen for ease of use):
DVD Decrypter (or some program to get the VOB files)
Dr Divx (for converting the main video)
Divx Create Bundle (for creating the subtitles)
DVD2AVI (for extracting the audio streams)
VirtualDubMod (for combining the audio streams and video)
Fuse (for creating the final divx file with subtitles)
Note that the resulting files will have a .divx extension, unless you opt to have no subtitles, in which case it will be an AVI file. For actual details on using the programs, please look at specific guides (and there are many good ones) for each one.
Step 1) DVD Decrypter:
Extract your movie/video to the hard drive, if you have not already done so. I use IFO mode (each title is individual), as Dr Divx sometimes has problems with FILE mode.
Step 2) Dr Divx:
Be aware that you do not need to apply the "Divx6 for Dr Divx" hack/patch unless you prefer to use that codec. I myself am quite happy with the Divx5 that comes with Dr Divx.
Encode your movie to an AVI file with Dr Divx and your favorite settings. Personally, I use 1400kbps one-pass and original AC3 audio. All other settings I keep on default. You can save a profile if you want with your settings. The divx file you just created can be used by itself, if all you want is video and one audio stream, with no subtitles.
Step 3) DVD2AVI:
If you want multiple audio, then you need to extract the audio streams from the VOB files. Use DVD2AVI to open the VOBs and save the project; This will extract ALL audio streams at once.
Step 4) VirtualDubMod:
If you need multiple audio, then you now have to add the extra streams to the video you created with Dr Divx. In VirtualDubMod, open the video file, set the video to "Direct Stream Output" and add the streams you want to the "Streams List" (i.e. the audio from DVD2AVI you extracted). You may have to listen to the streams to find the one you want, as the names are sometimes a bit confusing.
Save the final file as AVI. If you don't need subtitles, you are finished.
This file is NOT a .divx file, however, and should be tested for compatability. One way to create .divx file with multiple audio is to encode the video multiple times with a different audio stream, and then Fuse them together at the end (see the steps below). I've never tried this myself - it's only an idea and is not guaranteed to work. It may look something like this:
"Fuse -v video.avi -a audio2.avi -a audio3.avi -o final_output.divx
Step 5) Divx Create Bundle
Launch the Divx Creator and add your original VOB files. Deselect all the audio (you won't need it) and select the subtitles you want. Change the profile to "Handheld" as this encodes the fastest and we don't need the video portion. The resulting file be a .divx file with your subtitles embedded. The good thing about this is that all special characters are retained, and you don't have to do any processing yourself!
Step 6) Fuse
Now the fun part. Gather all your audio/video pieces in one folder and at the command line, Fuse them together. The command should look something like this (no quotes):
"Fuse -v video_from_drdivx_or_vdub.avi -s subtitles.divx -o final_output.divx"
Well. I hope that's as clear as mud. Again, I can't guarantee that this will all work perfectly for every situation, but I personally have success joining a DrDivx video file with AC3 audio to a subtitle file from Divx Creator. Using these methods, it takes me approximately 2hrs to encode a 2hr movie (15min for DVD Decrypter, 60min for DrDivx, 30min for Divx Creator, and the rest for other extracting/fusing, etc)
If you have other methods/programs that work (or work better), good for you! You can post them as you please.
Addendum: This whole process may be a moot point as word is the next version of Dr Divx is on it's way, and will support all kinds of good stuff.
+ Reply to Thread
Results 1 to 3 of 3
When I try "IFO mode", I could convert DVD to divx format with Divx Convertor (subtitles and audio streams selectable).
With "FILE mode" the subtitles did not show up in Divx Convertor.
I also was able to load subtitles only after choosing IFO mode -> enable stream processing. Then I selected video stream, subtitle streams no audio streams and decrypted it to one 5GB VOB file. DivX converter encoded a 3 hour movie in handheld quality in 5 min. but it took about an hour to encode 3 subtitle streams. The tricky part was to discover the "View List" button on "Drop files here" screen.
Cann't tell it's possible don't tell anything.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00283.warc.gz
|
CC-MAIN-2019-43
| 5,525
| 37
|
https://advancedexamples.com/lotus-development-corporation-reviews-show-the-level-of-product/
|
code
|
RAID technology. The total cost of ownership is perfectly justified. Unfortunately, there is no standard method of undeniable measurements, which often causes controversy. A common IBM / Lotus claim is that the cost of the platform is prohibitively high compared to the Lotus platform. However, IBM / Lotus is usually based on a study by the Radicati Group, about which Microsoft made many comments later, and the most recent version yielded results in favor of Microsoft; This new study shows that Exchange 2003 offers a 41% TCO advantage over Lotus Notes / Domino 6 for a number of things, including downtime and training.
Lotus Development Corporation. As mentioned in the section, many customer reviews show the level of product maturity in terms of consistent architecture consolidation and direct consequence in terms of reliability and availability. From a technological point of view, projects of geographical and / or homogeneous consolidation are returned for customers to “put more eggs in fewer baskets”, which places a special emphasis on the concept of architecture “without a single point”. weaknesses (there is no single point of failure). In this regard, Exchange Server 2003 provides encouraging answers: Better cluster support: split failover time in two, switchover strategies for monitored nodes (thanks to the API).
Better SAN Support. In terms of service availability, the Exchange Server 2003 platform brings some very interesting innovations compared to previous versions. In practice, the achievement of a high level of availability of the Exchange Server 2003 solution is the result of achievements at several levels: 1. The effect of minimized downtime on client computers: some inaccessibility can be “absorbed” by the client post. For example, managing the local cache in Outlook allows users to hide most of their response time.
Network delays make failover clusters transparent or even smoothly switch to offline mode for a purely local mode of operation in the event of a complete disruption of the infrastructure infrastructure, which allows users to continue processing (locally) previously received messages. 2. Exchange Server 2003 supports the hardware and software architectures of the “single point of failure”: usually active / passive clusters, possibly a geocluster, SAN and redundancy of critical software and hardware components (for example, DNS, domain controllers, global directories).
In particular, Server 2003 provides encouraging answers:
- Better cluster support: Failover time divided by 2, tilt strategies for monitored nodes (thanks to an API called affinity).
- Better SAN support.Reducing server downtime by introducing new tools and procedures: quick recovery scenarios, possibly gradual ones — for example, a “tone” scenario — using advanced functions, significantly reduce backup / restore times. These features make it possible to recover messaging scripts in minutes (except in the event of a serious hardware failure requiring the restoration of a new physical server).
- Provide working groups with appropriate procedures and oversight: Microsoft Operations Manager allows, for example: a. Configure proactive, consistent monitoring of Active Directory, DNS ,, Exchange, and SharePoint performance dynamics. b. Using management packs (there is one for Exchange), MOM allows you to apply control rules that combine the knowledge of Microsoft development teams on various components.
When discussing the security of messaging and collaboration, various areas / specific threats should be considered: – Combating spam. – The fight against viruses. – Confidentiality and data security: digital rights management, message signing and encryption. – Management of security breaches. Being a symbolic actor in the computer world, and especially with respect to hackers.
The fight against viruses. The problem of viruses affects the entire industry. Although Microsoft was especially targeted, especially because the offer is widespread, this problem also affects IBM.
The conclusion that everyone can make is, first of all, the publisher’s “loyalty” to his strategy regarding workstations. In particular, the company’s position on the client workstation is stable for a long period of time:
– Office, when an advanced client is required, for example, in the context of enhanced performance, mobility, offline mode.
– Access to platform services through a web browser for a thin client approach.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396495.25/warc/CC-MAIN-20200528030851-20200528060851-00569.warc.gz
|
CC-MAIN-2020-24
| 4,495
| 13
|
https://impactconnect.com.ng/apply-now-microsoft-free-ai-training-with-professional-certificate/
|
code
|
AI skills represent the third-highest priority for companies’ training strategies, right alongside analytical and creative thinking. That’s why, alongside data.org, Microsoft’s AI for Good Lab, and GitHub, have launched the Generative AI Skills Grant Challenge, an open grant program to explore, develop, and implement how nonprofit, social enterprise, and research or academic institutions can train and empower the workforce to use generative AI.
The Microsoft AI Skills Initiative includes new, free coursework developed with LinkedIn, including the first Professional Certificate on Generative AI in the online learning market; a new open global grant challenge in coordination with data.org to uncover new ways of training workers on generative AI; and greater access to free digital learning events and resources for everyone to improve their AI fluency.
Eligible Country: International
Application Closing Date: On goung
- You do not need to have backgroud in coding. The course is designed for those with little or no knowledge about data.
- Access to Laptop and stong internet connection will be a great advantage
- You must be able to read and write
- You just need to be willing to learn.
- Recognized certificate of Completion
How to Apply
Interested applicants must read the full details about the program before proceeding to the application page. Check the official announcement page for more details about the application process…
Click Here to Take the Course
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474649.44/warc/CC-MAIN-20240225234904-20240226024904-00786.warc.gz
|
CC-MAIN-2024-10
| 1,484
| 12
|