| url (string, 13-4.35k chars) | tag (string, 1 class) | text (string, 109-628k chars) | file_path (string, 109-155 chars) | dump (string, 96 classes) | file_size_in_byte (int64, 112-630k) | line_count (int64, 1-3.76k) |
|---|---|---|---|---|---|---|
http://enablingwebchat.com/contact
|
code
|
Please email us if you are interested or have any questions about our product, and we will have someone respond to your request within 2 business days. Thank you for your interest in Enabling WebChat.
Please prove you are human by selecting the Plane.
Enabling WebChat is just one puzzle piece of the full picture. We are part of a larger company called Enabling Technologies Corp, which has been around for 20+ years. Our WebChat practice is a differentiator in all that we do as an organization. As a Microsoft Gold Partner, we have implemented, migrated, and supported over 1,400 Office 365, Skype for Business, Lync, and SharePoint projects.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00639.warc.gz
|
CC-MAIN-2022-40
| 640
| 3
|
http://srichand.blogspot.com/2006/03/free-from-mukthi.html
|
code
|
Mukthi 6.03 was a grand success!
I've captured it in pictures at http://srichand.net.in/
Over 250 participants from in and around Bangalore and Mysore took part.
The overwhelming response should have finally silenced some critics...
My talk went pretty well, or so I think. I gave a very short talk (we were behind schedule by over an hour!), and gave away CDs to the crowd at the end. I asked 10 questions, and those who could answer them got the CDs.
It's a lot of fun talking to an interactive audience!
I've also added some resources useful for Computer Networks and UNIX system programming at the same site.
Posted by Srichand Pendyala Thursday, March 23, 2006 at 11:30 AM
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676599291.24/warc/CC-MAIN-20180723164955-20180723184955-00225.warc.gz
|
CC-MAIN-2018-30
| 680
| 8
|
https://trustednewsletters.com/avoid-newsletter-spam/
|
code
|
Avoid newsletter spam: Look out for common mistakes
These are the most common mistakes that trigger accidental spam filtering.
Using spammy phrases, like “Click here!” or “Once in a lifetime opportunity!”
Going crazy with exclamation points!!!!!!
USING ALL CAPS, WHICH IS LIKE YELLING IN EMAIL
(especially in the subject)
Colouring different fonts bright red or green
Coding HTML by converting a Microsoft Word file to HTML
Creating an HTML email that’s nothing but one big image, with little or no text. Spam filters can’t read images, so they assume you’re a spammer trying to trick them.
Using the word “Test” in the subject line; agencies run into this all the time when sending drafts to clients for their final approval.
Sending newsletters to multiple recipients within the same company because its email firewall might assume it’s a spam attack.
Designing HTML email in Microsoft Word and exporting the code to HTML
We are, of course, here to help you keep it clean.
Introductory text and a small embedded picture (such as a logo) work great. Adding a YouTube link, if you have one, is even better. Add all the media to the video and not the newsletter. Change the video often. Make clients want to get your newsletter.
Spamming people is never a good idea, and accidentally spamming people is even worse. That is why these guidelines are worth noting. Do not get carried away, and make the newsletter easy to read.
We will help you do all this. You will be the first to receive your newsletter. When you are happy, we then send it to all your subscribers.
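The triggers listed above can be illustrated as a small heuristic checker. This is only a sketch of the ideas in the list; real spam filters (e.g. SpamAssassin) use far richer, weighted scoring, and the phrase list and function name here are hypothetical.

```python
import re

# Hypothetical phrase list drawn from the examples above.
SPAMMY_PHRASES = ["click here", "once in a lifetime"]

def spam_warnings(subject, body):
    """Return a list of warnings for the accidental-spam triggers above."""
    warnings = []
    text = f"{subject} {body}".lower()
    for phrase in SPAMMY_PHRASES:
        if phrase in text:
            warnings.append(f"spammy phrase: {phrase!r}")
    if re.search(r"!{3,}", subject + " " + body):
        warnings.append("excessive exclamation points")
    letters = [c for c in subject if c.isalpha()]
    if letters and all(c.isupper() for c in letters):
        warnings.append("all-caps subject")
    if "test" in subject.lower():
        warnings.append("'test' in the subject line")
    return warnings
```

A clean subject and body produce no warnings; a shouty subject with “Click here!!!” trips three of the checks at once.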
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233509023.57/warc/CC-MAIN-20230925151539-20230925181539-00859.warc.gz
|
CC-MAIN-2023-40
| 1,600
| 16
|
http://www.karelia.com/forum/viewtopic.php?pid=3870
|
code
|
I've updated my Blueball Qubit designs for Sandvox 2. After testing them out here extensively, I am looking for 3-4 Sandvox 2 users to test them out some more in Sandvox 2. Already having a site done using a Blueball Qubit design is not required but would be a bonus.
Drop me an email at themesupport (at) blueballdesign (dot) com or DM me on Twitter (@blueballdesign).
Sandvox sites that get noticed use Blueball Sandvox Designs!
Follow us on Twitter: twitter.com/blueballdesign
Count me in.
I am also interested in this. If you need some more testers, you can contact me.
Didn't you already send me that one?? If not, please do, as I have at least one site with it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698289937/warc/CC-MAIN-20130516095809-00030-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 656
| 7
|
https://help.risevision.com/hc/en-us/articles/115002762803-Create-a-folder
|
code
|
You can create folders in Storage to organize the files you upload, and you can upload files in two ways: individually or as an entire folder.
Create a folder in Storage
You can create folders and subfolders so you can quickly find what you’re looking for.
To create a subfolder, first double click on the folder you want to create it in to open it.
- Click Create Folder.
- In the Enter Folder Name field, type a name for the folder.
- Click Save.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371624083.66/warc/CC-MAIN-20200406102322-20200406132822-00226.warc.gz
|
CC-MAIN-2020-16
| 453
| 7
|
https://askubuntu.com/questions/1106067/how-do-i-copy-cd-to-my-android-cellphone-using-lubuntu-18-10
|
code
|
How do I copy an audio CD to my Android 8.1.0 cellphone using Lubuntu 18.10?
Here's what I did to get it working:
- Connect the phone using the provided USB cable.
- Verify it opens in PCManFM and check it's listed via 'lsusb'.
- sudo apt install sound-juicer
- sudo apt install lubuntu-restricted-extras
- Reboot the computer (steps 3-5 are #ubuntu ioria's instructions).
- Copy tracks from the CD to the computer with Sound Juicer, adding CD information to MusicBrainz as needed.
- Search 'USB' in the phone settings, tap to open the settings page, and change the option from 'charge' to 'transfer files' to enable write permissions.
- Create a directory on your phone from within PCManFM to store the files (I chose the SD card).
- Copy-paste files between PCManFM folders to copy the tracks to your phone.
- Check whether there's a 'safely remove' eject-external-drive feature on the taskbar; if not, disconnect the cable from the phone after all files have copied.
- Open the default 'Files' app on the phone, navigate to the folder, change the listing setting to see filenames, and tap on a file to play it.
- Consider installing better software to play the files.
You need to extract the files from your CD and copy them to your cellphone. For extracting, I personally use Asunder, which is available in the Lubuntu repository and pretty simple to use. For copying, you'll need to connect your phone to your computer with Bluetooth or a USB cable.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247503844.68/warc/CC-MAIN-20190221091728-20190221113728-00553.warc.gz
|
CC-MAIN-2019-09
| 1,390
| 15
|
http://maxwellfunk.com/work/eden/
|
code
|
Eden is a mobile application concept that helps citizens discover and use their local urban parks.
The app opens with a map of Vancouver and its 21 neighbourhoods. Users can click on any neighbourhood to go to that neighbourhood’s page, or click the search button to look up neighbourhoods by name.
The neighbourhood map shows all the parks that are in that neighbourhood. Clicking on the search button will also show a list of parks. Users can click the activity button to search for activities to do.
When users click on the activity button, the activity list comes up. This is a predetermined list that would be augmented over time. It helps to inspire and activate park users.
Clicking on an activity from the activity list will show a data visualization of where that activity has been done in that neighbourhood. This is useful for citizens and City parks boards, as it shows areas of use and areas of neglect. Users can click on any park icon to go to that park’s page.
The tags on the park’s page are activities that other users have placed there. These are what create the neighbourhood data visualizations. Users can touch a tag to view it or can press the add button to add their own tag.
When users add a tag they pick an activity from the activity list and then have the option of writing a short text for their tag. Pressing the done button would go back to the park page. They would then click anywhere on the park to place the tag.
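The tag-to-visualization flow described above can be sketched as a tiny data structure. All names and records here are hypothetical; each user-placed tag is just a (neighbourhood, park, activity) triple, and the visualization needs per-park counts:

```python
from collections import Counter

# Hypothetical tag records: (neighbourhood, park, activity).
tags = [
    ("Kitsilano", "Kits Beach Park", "frisbee"),
    ("Kitsilano", "Kits Beach Park", "picnic"),
    ("Kitsilano", "Connaught Park", "frisbee"),
    ("Mount Pleasant", "Dude Chilling Park", "picnic"),
]

def activity_counts(tags, neighbourhood, activity):
    """Count tags per park for one activity in one neighbourhood:
    the raw data behind the use-vs-neglect visualization."""
    return Counter(park for n, park, a in tags
                   if n == neighbourhood and a == activity)
```

Aggregating the triples this way is what lets the same tag data serve both citizens (where to do an activity) and parks boards (which parks go unused).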
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375096.65/warc/CC-MAIN-20210306131539-20210306161539-00280.warc.gz
|
CC-MAIN-2021-10
| 1,458
| 7
|
https://www.informit.com/articles/article.aspx?p=31729&seqNum=7
|
code
|
If Web services have all these problems, should I use them for production-level systems?
Although Web services do have some issues to be resolved, many of the alternatives do as well. Each technology has strengths and weaknesses. Now that you know what they are for Web services, you can make an informed decision on whether to use them for your project or not. If you're building a system to be used by many other people, Web services are probably your best bet.
If I go with Web services, how can I minimize the impact of the changes to the standards?
The easiest way to reduce impact is to limit yourself to using only standards-based options from the toolkit that you choose. That way, when the standards bodies agree on a solution for, let's say, security, you can easily add it to your code once your tool vendor updates its tools. Otherwise, you might be locked into a proprietary solution that might not be supported at a later date.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304798.1/warc/CC-MAIN-20220125070039-20220125100039-00112.warc.gz
|
CC-MAIN-2022-05
| 941
| 4
|
https://www.loroparque-fundacion.org/en/los-parametros-de-incubacion-de-huevos-y-el-comportamiento-de-incubacion-en-psitacidas/
|
code
|
Artificial incubation of parrots is a common practice in aviculture; however, the parameters used are extrapolated from the poultry industry, and there is insufficient detailed documentation concerning parrots' specific natural incubation patterns.
The immediate aim of the project is to gain knowledge about the natural incubation parameters and patterns of psittacines.
One strategic goal is to improve the artificial propagation techniques of this family of birds. Another is to lead to an investigation of the potential relationship between egg size, laying order (degree of hatching asynchrony), offspring quality (and sex) and selective maternal investment during incubation (in terms of egg rotation frequency and temperature) in different parrot species in an experimentally well-controlled and standardized setting.
Egg loggers provide an excellent technology to measure environmental conditions and rotation patterns accurately. In parallel, nest cameras provide the opportunity to observe and describe the behaviour during egg laying and incubation. These non-invasive tools will be used to provide fundamental data for improving artificial incubation in psittacines.
Max Planck Institute for Ornithology
Funding since 2016: $3,000
Data logger egg
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539049.32/warc/CC-MAIN-20220521080921-20220521110921-00387.warc.gz
|
CC-MAIN-2022-21
| 1,254
| 7
|
https://fantashit.com/editor-contextual-help-for-classic-block-after-editor-deprecation/
|
code
|
As a user with a preference set for the WordPress.com/Calypso editor,
I want to use the classic block to edit my content
So that I can use an editor that is more familiar to me.
Given a user who
- Is in the editor deprecation group
- Has an editor preference of classic for the site they are currently editing
- Is creating a new post for the first time after the editor deprecation
- Automatically insert a classic block into the post that is being created.
- Show a custom welcome guide that introduces the classic block and instructs how to insert it in the future.
- If the block editor Welcome Guide would normally show, hide it and show the classic block modal instead. Toggle the welcome guide off, so it isn’t shown the next time the user loads the editor.
This will load in the iframed block editor in WordPress.com only, so code will live in the wpcom-block-editor Calypso app.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487612537.23/warc/CC-MAIN-20210614135913-20210614165913-00519.warc.gz
|
CC-MAIN-2021-25
| 890
| 12
|
http://www.duckbrand.com/duck-tape-club/ducktivities/duckorate/ducktape-basket
|
code
|
Ducktivity provided by sockmonkeylova
1.) Make 2 by 2 inch squares. 2.) Put them on the basket edge to edge in a checkerboard pattern; you can do as many colors as you want! It is easier to use an X-Acto knife, but you can use scissors too! The time it takes to make this depends on how big your basket is.
Approximate Crafting Time: 1:15
Supplies and Tools:
- duck tape
- exacto knife or scissors
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737913815.59/warc/CC-MAIN-20151001221833-00248-ip-10-137-6-227.ec2.internal.warc.gz
|
CC-MAIN-2015-40
| 394
| 6
|
http://www.nvnews.net/vbulletin/showpost.php?p=1219251&postcount=1
|
code
|
status of hdtv overscan via component outputs?
Seems this has been a problem since at least 2005. I've read that the Windows drivers have had some form of overscan compensation for quite some time, but Linux users don't get much in the way of options. I'd be perfectly content to run with custom modelines, but the drivers don't accept them when running with a TVStandard of 1080i. The next highest resolution to 1920x1080 that works is 1280x1024, but obviously this mode doesn't fill the entire screen.
A small amount of overscan is desirable, but the amount of overscan the nvidia drivers give in hd modes is too much.
Any chance we will see overscan shift or underscanning in the linux drivers in the next year?
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988308.23/warc/CC-MAIN-20150728002308-00199-ip-10-236-191-2.ec2.internal.warc.gz
|
CC-MAIN-2015-32
| 709
| 4
|
https://ionic-bootstrap-theme.herokuapp.com/
|
code
|
Bootstrap to Ionic
Bootstrap to Ionic takes a standard Bootstrap 3 theme and generates an Ionic theme (1.x/2.x) to match. Works on most themes, unless you've really butchered your bootstrap CSS (naughty programmer!)
Add or replace your app/theme/app.variables.css with the below SCSS:
Add the SCSS below to your scss/ionic.app.scss file. Make sure to enable Sass first.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991514.63/warc/CC-MAIN-20210518191530-20210518221530-00638.warc.gz
|
CC-MAIN-2021-21
| 369
| 6
|
https://diydrones.com/profiles/blogs/this-week-in-unemployment-1
|
code
|
Flight test photos. It sux, but renaming a ship is bad luck. We still fly ships named after Russian heroines. Scorpio is very prominent this time of year, providing an ironic background.
So in using a model to aid Marcy-1 navigation, the velocity due to
cyclic is always a sine wave in all 3 dimensions resulting from all 3
controls over time. Your goal is to integrate the sine wave over enough
time to predict future position.
The neural network implementation ended up a lot simpler than it was in 2007,
but can't be applied to Vika 1 without serious magic to work with a 3
DOF IMU. Leaning towards finishing the back propagation through time we
started in 2008. It has long been a dream to have a lightweight,
recurrent neural network library.
Well, that actually showed the neural network managed to stabilize the X
direction. Not getting the weather needed to give it control of the Y
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00129.warc.gz
|
CC-MAIN-2019-43
| 889
| 14
|
http://www.wiihacks.com/showthread.php?t=34607
|
code
|
[burned] New Super Mario Bros stop working
I burned NSMB and ran it with NeoGammaR8beta15; it was working fine last night. Today I tried to play it again and it started sending me back to the Wii menu page. Any suggestions?
is this the only game that stopped working? did someone run a system update?
so far I only have the NSMB game ):
and I don't think anyone ran an update, because the Wii was in my room all night
ok, it is a lot harder to tell you what the problem could be if you do not have other ways of testing to rule things out. That particular game took additional steps to get working, so the problem could be limited to the game itself. If you had other backups to try, it would be easier to determine whether the problem was the Wii or the game.
now i'm burning Wii_Sports_Resort , will try with it
ok, i've tried both games and it's still sending me back to Wii menu page. any idea?
ok, so we're back to: did anyone update the system? What was it before? What is it now? Go to the Wii settings and look at the top right side.
I'm pretty sure no one touched my Wii but me
and my system before softmod was 4.1u
before the problem it was 3.2u
right now it is 3.2u
Last edited by luisa; 12-10-2009 at 09:31 AM.
ok, both of those games you tried take special steps to run. Have a look in our games section for a guide on how to do this.
yes I have, and I also followed it step by step.
Could it be a problem with the way I softmodded it?
Especially NSMB, because it was working ):
Last edited by luisa; 12-10-2009 at 09:46 AM.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120881.99/warc/CC-MAIN-20170423031200-00403-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,480
| 19
|
https://www.mycplus.com/forums/reply/3354/
|
code
|
March 12, 2008 at 10:17 am #3354
Assalamualaykum Mohammed Saqib….
This is not the problem, I think. The Bhoot and I work together in the same institute. It's also the same case with a char-type choice input.
So i think there must be something else behind this cumbersome problem.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100448.65/warc/CC-MAIN-20231202172159-20231202202159-00624.warc.gz
|
CC-MAIN-2023-50
| 281
| 4
|
https://forum.peplink.com/t/blocking-ports-related-to-cve-2017-5689-intel-management-engine/10647
|
code
|
I am using a Pepwave Surf SOHO with firmware 7.0.0, and I am trying to block the ports related to the Intel Management Engine vulnerability (CVE-2017-5689). While trying to block both external and internal network access to ports 16992, 16993, 16994, 16995, 623, and 664, the firewall rules I created errored with an invalid IP range, so I must be doing it wrong. What would be the correct way to block all access to these ports, both external from the WAN and between computers internal to the LAN?
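As a sketch of the intent, the ports from the question can be expanded into one single-port deny rule each. The rule syntax below is hypothetical and generic, not Surf SOHO syntax; the point is that many firewall UIs reject a comma-separated port list or a port typed into an IP-range field, so entering one rule per port is the safe form:

```python
# Ports associated with Intel AMT (CVE-2017-5689), as listed in the post.
AMT_PORTS = [16992, 16993, 16994, 16995, 623, 664]

def block_rules(ports):
    """Emit one deny rule per port (hypothetical generic syntax)."""
    return [f"deny tcp any -> any port {p}" for p in sorted(ports)]

for rule in block_rules(AMT_PORTS):
    print(rule)
```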
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107900200.97/warc/CC-MAIN-20201028162226-20201028192226-00503.warc.gz
|
CC-MAIN-2020-45
| 491
| 1
|
http://www.futuremanjobs.com/index.php/2022/10/14/meeting-coworkers-in-person-but-coworker-doesnt-acknowledge-you-how-do-you-feel/
|
code
|
So I recently met my coworkers for the first time in person. We all work remotely and I’m not in the same city as anyone. I hugged and chatted with pretty much everyone, but this one coworker who I’ve met online just like didn’t acknowledge me at all. We were even in a circle together with a few other coworkers and they didn’t say hi to me. I tried saying hi but I got interrupted. Anyway, I feel a bit weird about this situation. Am I overreacting? How would you feel or what would you do in this situation?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710972.37/warc/CC-MAIN-20221204104311-20221204134311-00517.warc.gz
|
CC-MAIN-2022-49
| 518
| 1
|
https://blogs.msmvps.com/connectedhome/2013/07/07/fixing-broken-or-uninstallable-apps-in-windows-8-1-preview/
|
code
|
On both my test x86 desktop and my Surface RT, I’ve experienced some app issues after upgrading to the Windows 8.1 Preview.
Symptoms are any of the following:
1. App can’t be updated even though the Windows Store shows an update available
2. App is shown as owned in the Store, but does not appear on the Start Screen main app list or on the All Apps list, and cannot be reinstalled from the Store because of this
3. App opens and crashes immediately
Before you do anything else:
Close the Windows Store App from Task Manager. To do this, type taskmgr on the Start Screen. Run Task Manager and End Task on the Windows Store
The first thing to try before bringing out the heavy artillery:
Try the WSRESET tool:
On the Start Screen, type WSRESET and press Enter, then run the tool when it appears
Try the App Trouble Shooter
You can download it from
If these don’t work, and you can’t uninstall/update/run an app, it’s time to try PowerShell to uninstall the app so you can reinstall it again and hopefully get past the issue.
All that follows is entirely at your own risk. There is no guarantee that it will work for your Windows 8.1 Preview installation. For any help with PowerShell, try the TechNet Scripting Forum.
1. Close the Windows Store if it is open (again)
2. Download the zip file at http://gallery.technet.microsoft.com/scriptcenter/Remove-Windows-Store-Apps-a00ef4a4
3. Acknowledge the license
4. Extract to a new folder like C:\scripts
5. Open the RemoveWindowsStoreApp.ps1 file with Notepad or another text editor and add the cmdlet Remove-OSCAppxPackage to the end of the file.
6. Run the file (you may need to right click and select Run/Open with PowerShell, or press and hold and select Run/Open with PowerShell on a Surface tablet). Type Y to any prompts.
7. You will see a PowerShell window like the one below; select the IDs (like 4,17 shown in the example) and press Enter. Type Y to confirm each time it appears and follow the prompts.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738609.73/warc/CC-MAIN-20200810042140-20200810072140-00143.warc.gz
|
CC-MAIN-2020-34
| 1,960
| 22
|
https://careers.caia.org/jobs/11346849/quantitative-analyst-111589
|
code
|
CAIA's Career Center is an easy-to-use, comprehensive resource connecting job seekers with employers in the growing AI field. Use your knowledge and credibility to advance your career or build a talented team for your organization. Opportunities targeted to CAIA Charterholders are prioritized.
In order to search for jobs specifically for CAIA Charterholders or those pursuing the CAIA Charter please enter “CAIA” in the search panel.
This will enable you to search for CAIA specific roles globally.
We Offer
The Risk Division is a highly visible, dynamic area of the firm where you can be an integral part of the decision making that supports the bank's business. Our responsibilities range from enterprise risk management to risk and finance reporting, and include regional risk teams covering the risk management for our entities. The Risk Division's long-term success depends on our ability to achieve our vision and fulfill our mandate. Ultimately, this depends on the skills, experience and engagement of our employees. We offer a collaborative and entrepreneurial environment that offers direct contact with senior management and encourages leadership at all levels.
The Exposure Analytics team is responsible globally for methodology, in-house development and production implementation of the Monte Carlo exposure models used by Credit Suisse to determine capital requirements for CVA and Default risk and to compute internal risk measures of Counterparty Credit Risk on derivatives positions.
The models developed in the team are the core components at the heart of the multi-year "Strategic-EPE" program of the Bank, which is aimed at re-defining exposure methodology and infrastructure across all regions.
We offer a challenging modelling role that includes:
Modelling the joint evolution of the stochastic risk factors underlying derivative trades across asset classes (e.g. equity, credit, FX, interest rates, commodities).
Pricing models for financial derivatives.
Prototyping, backtesting and benchmarking of model candidates.
Development of production implementation of models within the C++ library owned by the team, which forms part of the Front-to-Back analytics infrastructure of CS.
Addressing requests from model validation, internal/external auditors and various regulators (e.g. in the context of Basel framework) including analysis of the assumptions and limitations of the models.
Interaction with internal business partners such as Front Office, Credit Officers, Pre-trade Analysis, IT.
Open to discussing flexible/agile working.
You will hold an M.Sc. or Ph.D. in Financial Mathematics or Quantitative Finance, or have equivalent working experience.
You will demonstrate work experience in derivatives pricing or risk modeling.
You will have a solid understanding of financial markets and derivative products.
Programming experience, particularly in C++. Other programming experience also desirable (e.g. Python, C#, Matlab, VBA, F#, Mathematica).
Suitability to work well within a team, engendering good team ethics.
Ability to write rigorous and clear model documentation.
Fluency in English as well as good communication skills.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213693.23/warc/CC-MAIN-20180818173743-20180818193743-00094.warc.gz
|
CC-MAIN-2018-34
| 3,150
| 21
|
http://compositewpf.codeplex.com/discussions/36581
|
code
|
Lately I’ve been working on porting my existing application to Prism. It has been a painful experience, mostly because, I believe, I don’t understand how to use Prism.
My existing application is heavily based on the use of generics to avoid repeating code, Unity for dependency injection, and other stuff I read about (NHibernate).
Prism would give me a way to modularize my application, keeping the memory footprint low, and apply a better testing strategy. I also hoped to be able to rewrite some parts of the UI to achieve better isolation between logic and look and feel.
So far this has been a complete failure, and I have reached the state where I wonder... why bother. The reason is that my code seems to be more and more cluttered with dependencies instead of modularized, I keep repeating my code, etc. Sad to say: “ugly”.
Could anyone give me a hint how to use Prism? The samples I have seen on the internet seem to be “too simple”, and I am unable to “interpolate” the examples into something useful for me.
My application, then...
The base is a collection of Persons. I have two views of persons, first PersonCollection and then PersonCRUD. Then I have two other collections of objects, both with two views: ObjectACollection, ObjectACRUD, ObjectBCollection and ObjectBCrud. Both ObjectA and ObjectB contain references to Person, and ObjectA contains a reference to ObjectB. ObjectA is a quite complex object. (The real application has a lot more views.)
Given this specification, how would you set up a Prism project for best modularity, avoiding repeated code, with good testability and extensibility?
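Prism itself is C#/WPF, but the repetition problem in the question is language-neutral: one Collection view and one CRUD view per entity quickly turns into near-identical classes. A minimal Python sketch (all names hypothetical) of a single generic collection/CRUD view-model pair that Person, ObjectA and ObjectB could all share:

```python
class CollectionViewModel:
    """Generic list-plus-selection view-model, reusable for any entity."""
    def __init__(self, repository):
        self.repository = repository  # any list-like backing store
        self.selected = None

    def select(self, item):
        self.selected = item

class CrudViewModel:
    """Generic create/delete view-model over the same shared repository."""
    def __init__(self, repository):
        self.repository = repository

    def create(self, item):
        self.repository.append(item)

    def delete(self, item):
        self.repository.remove(item)

# One shared store per entity type; the view-model classes are written once.
persons = []
person_list = CollectionViewModel(persons)
person_crud = CrudViewModel(persons)
```

The same two classes can then be instantiated for ObjectA and ObjectB with their own stores, which is the generic-reuse pattern the question is trying to preserve while modularizing.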
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125849.25/warc/CC-MAIN-20170423031205-00602-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,643
| 11
|
http://www.linuxquestions.org/questions/linux-newbie-8/problem-with-icq2go-x-java-vm-90331/
|
code
|
So, as the search function has no matches for x-java-vm, here is my problem:
I tried all the ICQ equivalents I found (gaim, licq, centericq, ...) but nothing worked... so I tried http://go.icq.com , but it asks for a plug-in of a type called "application/x-java-vm" and I don't know what to do... I installed Sun Java by following these instructions
but that didn't work either
OK, I'm doing a little guessing here, but you probably need to set up your Java parameters correctly. Check out this thread where I've explained how to set up the PATH and JAVA_HOME variables.
What you may have to do in addition to that is set up a link to the Java plugin for each program that needs to use it. You'll have to read the documentation for those programs to find out what directory to put the link in, but the link itself is usually something like:
Hm, it's similar to what I did, with one exception: I did not use the RPM package, but the self-extracting file:
First, create (as root) a "java" subdirectory in the /usr directory. Download the self-extracting (not RPM) file of j2re-1_4_2 from Sun. Move (as root) the file to /usr/java. Cd (as root) to the /usr/java and issue the following 2 commands:
Then I performed these commands in the terminal as instructed in Tuvok's post:
Well, I looked at this thread but I do not understand... I downloaded Java again as RPM, but it also was a bin file... I am really confused and do not know what to do now... I also do not understand this thing with the /etc/profile file... it has entries I don't understand and I am afraid of killing this file by adding something wrong at the wrong place... this is the third time I have tried to get Java working, but it does not want to
First, are you sure you installed Java correctly? You should be able to find the java directory in your /usr directory, although it may be in /usr/local if you installed from binary.
Second, in a console, type java -version. If you get anything that suggests the computer can't find java, then you need to set up your PATH and JAVA_HOME.
Third, if you don't feel comfortable messing with /etc/profile, you can put the same commands in your .bashrc file (you can create one if you don't have one; just be sure to have #!/bin/bash as your first line).
Lastly, what browser are you using? You need to find out where to put the plugin link for your browser.
[root@localhost root]# java -version
java version "1.3.1"
jdkgcj 0.2.3 (http://www.arklinux.org/projects/jdkgcj)
gcj (GCC) 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
3. I'll have a look for it when I wake up today
Lastly, I'm using Mozilla 1.2.1, and I already tried the commands as Tuvok says in his thread to link to my plug-in dir, but I think it didn't work.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189244.95/warc/CC-MAIN-20170322212949-00326-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 4,524
| 34
|
http://answers.unity3d.com/questions/17288/can-you-stop-the-jitter-of-a-character-controller.html
|
code
|
I was keeping my Character Controller's Step Offset at zero because I didn't need it. Then, I added in some teleporting functionality. I had to increase the Step Offset then, because sometimes the Character Controller would fall through the ground after teleporting, otherwise. While that seems to have solved the big problem, it introduced another. When I try moving directly into a sphere collider, there's jittery vertical movement now. I can get rid of that with Min Move Distance, but then I can't walk around slowly, so I can't go that route. Is there some solution? I can't imagine that many people are keeping the Step Offset at zero; it just happened to work for a while.
asked May 14 '10 at 12:32 AM
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705300740/warc/CC-MAIN-20130516115500-00033-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 709
| 2
|
https://inprogrammer.com/comments-and-print-function/
|
code
|
“Every success story requires a first step”
In this article, we will discuss the basics needed to get started in Python: comments and the print function, in detail.
When we start to write code in Python, it is important to help readers understand the code. Comments make this simple: whatever we write as a comment is ignored by the interpreter. Comments exist only for human understanding.
Imagine that, after becoming a great coder, you have to write thousands of lines of code; writing comments then becomes a great practice for precisely explaining the code to everyone.
In python, there are two types of comments,
- Single line comment
- Multi-line comment / Doc String
Single line comment
In Python, single-line comments are written with the hash symbol (#). A line to be commented out is prefixed with #, and the # comments out that line alone: everything after the # on the same line is ignored.
# this whole line is a comment
statement  # only the part after the hash is a comment; the statement itself runs
Multi-line comment / DocString
Python has no official syntax for multi-line comments, but a string in triple double quotes (""") serves the purpose. Note the difference: the interpreter still evaluates a triple-quoted string (it is simply an unused expression), whereas lines starting with # are ignored entirely.
Officially, a triple-quoted string at the start of a module, class, or function is called a docstring in Python. We will look at docstrings and their uses in detail later.
""" Docstring can also
be written as multi-line comments """
We can also prefix every line we need to comment out with a hash, which serves the same purpose.
To comment or uncomment multiple lines simply select the desired code and press Ctrl+/ on windows or Cmd+/ on Mac.
The print function is used to print/display the output to the screen.
print("statements") # whatever we write inside double quotes will be printed
We can use either single quotes or double quotes to wrap the text that we need to print, but mixing the two (opening with one and closing with the other) is not allowed.
If we need to print something as-is, without processing, we wrap it in single or double quotes.
print("5+6") #print 5+6 without any addition
print(5+6) # print 11 after adding the numbers
print("5"+"6") # prints 56 as this addition does concatenation
In the above code snippet,
- Line 1 prints 5+6: since the whole expression is inside double quotes, it is treated as a string and not evaluated.
- Line 2 prints 11: with no quotes, 5 and 6 are numbers, so the + operator adds them and the result is printed.
- Line 3 prints 56: because "5" and "6" are inside double quotes they are strings, and + between two strings performs concatenation (joining) instead of addition.
Empty print statement
An empty print statement prints just a newline by default; that is, it creates an empty line in the output.
print() # this will print the newline
The print function in Python lets us specify what should happen after the line is printed. By default it prints a newline, meaning the cursor moves to the next line after the text; we can change this explicitly with the end parameter.
print("Hi", end=",")
print("How are you?")
In the above code snippet,
- Line 1 prints Hi and, since we set end to a comma, the cursor prints the comma and stays on the same line.
- In line 2, the print statement prints How are you? on the same line, and since no end parameter is given it automatically moves to the next line afterwards.
Hi,How are you?
To perform special operations inside a string we use escape sequences. The backslash (\) is the escape character in Python.
For example, if I want to print a new line inside my print statement itself, I can use \n: whenever the interpreter encounters \n, it makes the cursor go to the next line.
Some of the frequently used escape characters are,
| Escape sequence | Description |
| \' | Used to put a single quote in a string surrounded by single quotes |
| \" | Used to put a double quote in a string surrounded by double quotes |
| \n | Used to insert a new line |
| \t | Used to insert a tab space |
| \b | Used to insert a backspace |
| \\ | Used to insert a backslash |
| \ooo | Used to insert a character by its octal value |
| \xhh | Used to insert a character by its hexadecimal value |
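A short runnable sketch of the escape sequences above (the strings here are only illustrative):

```python
# \n moves to a new line, \t inserts a tab, \\ prints a literal backslash
print("Line one\nLine two")
print("Name:\tAlice")
print("A backslash: \\")
# \" lets a double quote live inside a double-quoted string
print("She said \"hi\"")
```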
- Write a program to print five different lines about yourself, and add a comment above them describing what the program does.
- Write a single print statement to print the following pattern,
You can always use the comment box to discuss the answers for your practice sheets; doubts can be asked there as well.
I hope this article gives a clear idea of comments and the print function in Python.
“Practice makes a man perfect”
So do not forget to practice the practice sheets for cent percent learning.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00833.warc.gz
|
CC-MAIN-2024-18
| 4,780
| 55
|
https://lifehacker.com/speed-up-ubuntu-updates-with-a-mirror-server-384325
|
code
|
As anyone trying to download or upgrade the latest version of Ubuntu likely found, the servers at Ubuntu can get pretty overwhelmed, especially on new release days. The (unofficial) Ubuntu Blog points out a list of mirror sites you can use to speed up your software updates and avoid strained servers. Look through the "Mirror-Mirrors" list for a location near you, copy the "http://" or "ftp://" line, and then head to your system's sources list, found in /etc/apt/sources.list. Make a backup copy, and then replace all instances of
http://us.archive.ubuntu.com/ubuntu with your mirror server line, and you should notice faster response times when updating or downloading new packages.
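As a sketch, the replacement can be done with a single sed command. Here mirror.example.com is a placeholder for whichever mirror you picked from the list, and the demo operates on a throwaway file so nothing breaks while you experiment; point sed at /etc/apt/sources.list only after making a backup.

```shell
# Stand-in for /etc/apt/sources.list (a real file would have many such lines)
printf 'deb http://us.archive.ubuntu.com/ubuntu feisty main restricted\n' > sources.list.demo
# Swap every occurrence of the default archive for the chosen mirror
sed -i 's|http://us.archive.ubuntu.com/ubuntu|http://mirror.example.com/ubuntu|g' sources.list.demo
```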
22x Faster Upgrade [Ubuntu Blog]
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00087.warc.gz
|
CC-MAIN-2022-33
| 719
| 3
|
https://ask.libreoffice.org/t/how-do-i-reference-between-ole-embedded-calc-worksheets-in-same-document/7374
|
code
|
E.g, I create a new document with 2 embedded OLE Calc worksheets. I fill a field in one, and wish to reference this field in the other. Can I do this, and what is the syntax?
In what application have you embedded the Calc worksheets, and what LibreOffice version are you using?
Libre Office Writer
Libre Office 18.104.22.168
Sorry, when I said “field” I meant “cell”. Tried to do this in MS Office XP as well, and found nothing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00395.warc.gz
|
CC-MAIN-2022-27
| 426
| 5
|
https://biology.anu.edu.au/news-events/events/climate-driven-range-dynamics-mountain-plant-species
|
code
|
Mountains harbour disproportionately high biodiversity, including rare and endangered species, but are in general poorly protected. Yet, climate is currently warming at a rapid pace, especially in mountain environments, forcing species to shift their distributions to higher elevations to track the conditions they are adapted to. These shifts to higher elevations have become a global phenomenon with important consequences for ecosystems and human well-being. However, our knowledge about responses of mountain biota to environmental change is still lacking in many respects. The majority of research has so far focussed on species’ responses at their upper, expanding elevational limits and little is known about the dynamics at the lower, retracting limits of species’ distributions or about changes of species’ abundances. Yet, the balance between the two opposing range limits determines whether species ranges expand or contract, and therefore co-determine species’ extinction risk in the future. Furthermore, delayed responses at both lower and upper range limits might cause disequilibria between species’ distributions and climatic conditions that will have to be paid off in the future. In my talk, I will give a global overview of elevational range dynamics of mountain biota with a special emphasis on lower elevational range limits, plant species and case studies from the European Alps.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103941562.52/warc/CC-MAIN-20220701125452-20220701155452-00288.warc.gz
|
CC-MAIN-2022-27
| 1,411
| 1
|
http://jonudell.net/udell/2006-01-09-scarcity-versus-abundance-of-talent.html
|
code
|
Over the weekend I listened to the ITConversations podcast of Barry Diller's appearance at Web 2.0. He's a thoughtful and articulate guy and I recommend the entire interview, but here's the crucial quote from the soundbite I bookmarked1:
There's not that much talent in the world, and talent almost always outs. There's very few people, in very few closets, that are really talented and can't find their way out. Somehow they get out.
Then I read Doc Searls' gloomy assessment of the DRM landscape in response this comment by Lloyd Shepherd:
At some level, there has to be an appropriate level of control over content to make it economically feasible for people to produce it at anything like an industrial level. And on the other side of things, it's clear that the people who make the consumer technology that ordinary people actually use -- the Microsofts and Apples of the world -- have already accepted and embraced this. The argument has already moved on. [Lloyd Shepherd: How can DRM be good?]
Doc's not yet willing to concede, and neither am I, but it wasn't until I heard Diller's remarkable statement that I finally got to the crux of the issue. Is talent scarce or abundant? If you believe that talent is scarce, as Diller does, then it's going to have to be metered, and we're headed down the DRM path for sure. If you believe that talent is relatively abundant, as Doc and I do, then you imagine a very different future where technology favors use over control.
The scarcity argument can't be dismissed out of hand. Maybe Diller's right. But what if he's wrong? If the DRM train has already left the station, we'll never do the experiment and we'll never know the answer.
Content owners should be able to protect their property. But outside that realm of scarcity our technologies should support and encourage the abundance experiment.
Update: This question of control versus use is not, by the way, merely a DRM issue. Another audio program I listened to over the weekend, on a long hike, was a talk by Marsh McCall, a classics professor at Stanford. It's at itunes.stanford.edu, an Apple/Stanford joint project that's making selected talks available for download.
I'd like to link you directly to that freely-available talk, and also provide a link-addressable soundbite, but I can't. These audio programs aren't part of the web, they belong to a parallel mini-universe in which the only acceptable client is iTunes and the only acceptable player device is the iPod.
I recalled Tim Bray's foray into that universe, and I took a crack at navigating XML-over-ITMS (i.e., the iTunes Music Store HTTP-based protocol) as though it were XML-over-HTTP, but no joy. It seems that all paths lead even more inexorably into the closed world of iTunes than was true when Tim Bray ran his experiment almost two years ago.
The closure doesn't stop there. You're also expected to listen to these talks on an iPod. Well I've got one of those, but I also use a non-Apple gizmo. It plays MP3s (and WMAs) but not M4As. There are M4A-to-MP3 converters, of course, but finding and using them isn't something that most people will be able or willing to do.
The availability of these Stanford talks is precisely the kind of thing I advocate for here. And it's true that the iTunes/iPod combo is a majority platform at the moment, which makes it sensible to target this offering at that platform. So kudos to Apple and Stanford.
That said, it feels wrong not to be able to form links to individual talks, cite them in blog entries, categorize them on shared bookmarking services, remix them on webjay, and quote soundbites from them. Stanford clearly intends to engage with the outside world, and that's fantastic. But because the design center for Apple's service is scarcity, not abundance, it doesn't do nearly as much as it could to facilitate that engagement.
Further update: Speaking of Webjay, this just in: Yahoo acquires Webjay. Updating the chart from here gives us:
1 I noticed two problems with my collection of soundbites. First, a lot of the older ITConversations links have gone stale. Fair enough, Doug Kaye warned us not to depend on them, but in general I would like to encourage the cool URLs don't change ethic for media URLs too. Second, I found and fixed a bug that broke the service when URLs are both parameterized and redirected. If you run into a problem I'd like to hear about it. To form a soundbite URL, use this pattern: http://udell.infoworld.com:8003/?url=URL-OF-MP3&beg=mm:ss&end=mm:ss.
Former URL: http://weblog.infoworld.com/udell/2006/01/09.html#a1366
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887423.43/warc/CC-MAIN-20180118151122-20180118171122-00591.warc.gz
|
CC-MAIN-2018-05
| 4,581
| 15
|
https://getkirby.com/docs/developer-guide/configuration/options
|
code
|
All available options can be set in your config file in
/site/config like this:

c::set('option', 'value');
There are a number of available options in Kirby, which help you to customize the system just the way you want it. Check out the full list of options in the cheat sheet.
Your own options
If you need a simple way to store options throughout the system and access them in any plugin, template or snippet, you can define your own options in the same way:

c::set('yourOption', 'yourValue');
You can fetch your options later with…
echo c::get('yourOption') // will output 'yourValue'
If you want to make sure to get a proper default value if the option is not set you can define a fallback:
echo c::get('undefinedOption', 'fallbackValue') // will output 'fallbackValue' if undefinedOption is undefined
Kirby has a built-in way to set different options based on the domain by adding additional config files containing the domain.
/site/config/config.localhost.php
/site/config/config.staging.yourdomain.com.php
/site/config/config.yourdomain.com.php
/site/config/config.www.yourdomain.com.php
By setting different options in those config files you get a very flexible system, which can be deployed to different servers and react to the current environment accordingly.
Note that the settings in the standard
config.php file are always used. If you need different settings in another environment, you will have to overwrite those settings in the domain specific configuration file (or only set those options in your domain specific config file).
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160754.91/warc/CC-MAIN-20180924205029-20180924225429-00530.warc.gz
|
CC-MAIN-2018-39
| 1,485
| 14
|
https://mytrendingstories.com/lauren-emi/your-choice
|
code
|
Hey guys! Sorry it has been a while things have been a little crazy for me right now. I want to know what you guys want me to write about next. Anything. Really anything. Leave a topic below in the comments and I will get writing right away!
FOLLOW ME ON IG/SC: @LAURENNAKASHIMA
Published by Lauren Emi
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362619.23/warc/CC-MAIN-20211203091120-20211203121120-00389.warc.gz
|
CC-MAIN-2021-49
| 302
| 3
|
https://se.mathworks.com/matlabcentral/fileexchange/71902-uniform-manifold-approximation-and-projection-umap
|
code
|
Given a set of high-dimensional data, run_umap.m produces a lower-dimensional representation of the data for purposes of data visualization and exploration. See the comments at the top of the file run_umap.m for documentation and many examples of how to use this code.
This MATLAB implementation follows a very similar structure to the Python implementation, and many of the function descriptions are nearly identical.
Here are some major differences in this MATLAB implementation:
1) The MATLAB function eigs.m does not appear to be as fast as the function "eigsh" in the Python package Scipy. For large data sets, we initialize a low-dimensional transform by binning the data using an algorithm known as probability binning. If the user downloads and installs the function lobpcg.m, made available here (https://www.mathworks.com/matlabcentral/fileexchange/48-locally-optimal-block-preconditioned-conjugate-gradient) by Andrew Knyazev, this can be used to find exact eigenvectors for medium-sized data sets. We also give you the option of downloading our slightly altered version of lobpcg.m, which has equivalent results.
2) We have built in the optional ability to detect clusters in the low-dimensional output of UMAP. The clustering method we invoke is either DBM (described at https://www.hindawi.com/journals/abi/2009/686759/) for 2D reductions or DBSCAN (built in to MATLAB R2019a and later) for any sized reduction. This produces cluster ID output and visualizations as explained in the code examples.
3) We also have built in visual and computational tools for data group comparisons. Data groups (AKA subsets) can be defined either by running clustering (described above) on the data islands formed by UMAP's reduction, or by external classification labels provided for every row of the high-dimensional input data given to UMAP. Our UMAP implementation uses external labels for supervised reductions and supervised template reductions, as well as for comparing any reduction's data islands directly to the external classification. We use a change quantification metric (QFMatch) which detects similarity in both mass and distance (described at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5818510/), as well as a score for measuring overlap when the groups are different classifications for the same data (described at https://en.wikipedia.org/wiki/F-score).
For visualizing data groups we provide a dendrogram (described as QF-tree at https://www.nature.com/articles/s42003-019-0467-6) and sortable tables which show each data group’s similarity, overlap, false positive % and false negative %. In version 2.2 we added “UMAP dimension explorer” (UDE). UDE is a sortable table that shows characteristics of a data group’s unreduced data in each input dimension. These characteristics include the Kullback-Leibler divergence (KLD); the distribution as a density bar (colored using MATLAB’s jet colormap); and median, mean, SD and MAD. UDE supports data groups drawn by a MATLAB ROI tool (region of interest) on the UMAP output plot.
Without the aid of any compression this MATLAB UMAP implementation tends to be faster than the current Python implementation (version 0.5.2 of umap-learn). All UMAP reductions are made faster with C++ MEX implementations. Due to File Exchange requirements, we only supply the C++ source code for the MEX modules. Users must download or build the .MEX binary files themselves separately (the option to download or build the files is provided upon calling "run_umap"). As examples 13 to 15 show, you can test the speed difference between the implementations for yourself on your computer by setting the 'python' argument to true.
Additionally, users of supervised templates may request the post reduction services of supervisor matching, QF-tree, and QFMatch. The function run_umap.m returns the results of these services via the new 4th output argument: extras. The properties of extras are documented in the file umap/UMAP_extra_results.m.
The major improvements in our version 3.01 release are
- Significant acceleration of both dimension reduction as well as matching of resultant classifications. This is done by compressing the input data into probability bins. We invented probability binning some 2 decades ago as an early attempt at an open cover like UMAP’s fuzzy simplicial complexes. Hence the compression operation specializes in retaining high dimensional characteristics while reducing size significantly. In our testing with flow cytometry data sets, we see negligible loss of classification accuracy for up to 40 dimensions when running QFMatch on clusters from UMAP reductions with prior trusted classifications. However, we do notice some loss of global structure on the UMAP plots. See the fast_approximation argument comments in the run_umap.m file. To understand probability bins see https://onlinelibrary.wiley.com/doi/full/10.1002/1097-0320(20010901)45:1%3C37::AID-CYTO1142%3E3.0.CO%3B2-E
- A PredictionAdjudicator (PA) feature that helps determine how well one classification’s subsets predict another’s. PA reorganizes the predicting classifier’s subsets into predicting subsets: true positive, false positive and false negative subsets. PA determines whether the false positives or false negatives have more QFMatch based similarity to the predicted subset. PA guides UMAP dimension explorers into showing the measurement distributions and Kullback-Leibler divergence of predicting subsets stacked together with the predicted subset. Selections in the PA table are highlighted in the UMAP and EPP plots.
- A complementary independent classifier that generates labels both for supervising UMAP as well as for classification comparison research. This classifier is named “exhaustive projection pursuit” (EPP). EPP takes a more conservative approach to grappling with the curse of dimensionality than that taken by UMAP’s algebraic topology. For more background on the curse of dimensionality see https://www.nature.com/articles/nri.2017.150 . EPP scans all dimension pairs for the best split and then repeats on each split part until no further are found. Its key benefit for the flow cytometry community is that the decisions are more familiar and reviewable to the biologist than decisions made by most classifiers. EPP shows its decisions in a tree that resembles the biologists’ age-old manual gating trees that have been driving immunology research for decades! EPP is described at https://onedrive.live.com/?authkey=%21ALyGEpe8AqP2sMQ&cid=FFEEA79AC523CD46&id=FFEEA79AC523CD46%21209192&parId=FFEEA79AC523CD46%21204865&o=OneUp
Optional toolbox dependencies:
-The Bioinformatics Toolbox is required to change the 'qf_tree' argument.
-The Curve Fitting Toolbox is required to change the 'min_dist' argument.
This implementation is a work in progress. It has been looked over by Leland McInnes, who considers it "a fairly faithful direct translation of the original Python code". We hope to continue improving it in the future.
Provided by the Herzenberg Lab at Stanford University.
We appreciate all and any help in finding bugs. Our priority has been determining the suitability of our concepts for research publications in flow cytometry for the use of UMAP supervised templates and exhaustive projection pursuit.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300010.26/warc/CC-MAIN-20220116180715-20220116210715-00476.warc.gz
|
CC-MAIN-2022-05
| 7,311
| 16
|
https://www.semanticscholar.org/author/Jinsung-Jeon/2106414808
|
code
|
Large-Scale Flight Frequency Optimization with Global Convergence in the US Domestic Air Passenger Markets
ACE-NODE: Attentive Co-Evolving Neural Ordinary Differential Equations
This work presents a novel method of attentive dual co-evolving NODE (ACE-NODE): one main NODE for a downstream machine learning task and the other for providing attention to the main Node, which outperforms existing NODE-based and non-Node-based baselines in almost all cases by non-trivial margins.
OCT-GAN: Neural ODE-based Conditional Tabular GANs
- Jayoung Kim, Jinsung Jeon, Jaehoon Lee, Jihyeon Hyeong, Noseong Park
- Computer ScienceWWW
- 19 April 2021
This work significantly improves the utility of state-of-the-art tabular data synthesis methods by designing a generator and discriminator based on neural ordinary differential equations (NODEs) and conducting experiments with 13 datasets.
Linear, or Non-Linear, That is the Question!
This work is the first who design a hybrid method and report the correlation between the graph centrality and the linearity/non-linearity of nodes, which results in a hybrid model of the linear and non-linear GCN-based collaborative filtering (CF).
LT-OCF: Learnable-Time ODE-based Collaborative Filtering
The main novelty in the method is that after redesigning linear GCNs on top of the NODE regime, the optimal architecture is learned rather than relying on manually designed ones, and it consistently outperforms existing methods in terms of various evaluation metrics.
LightMove: A Lightweight Next-POI Recommendation forTaxicab Rooftop Advertising
This work presents a lightweight yet accurate deep learning-based method to predict taxicabs' next locations to better prepare for targeted advertising based on demographic information of locations.
Large-Scale Data-Driven Airline Market Influence Maximization
A prediction-driven optimization framework to maximize the market influence in the US domestic air passenger transportation market by adjusting flight frequencies and presents a novel adaptive gradient ascent (AGA) method.
Scalable Graph Synthesis with Adj and 1 - Adj
SPI-GAN: Distilling Score-based Generative Models with Straight-Path Interpolations
An enhanced distillation method, called straight-path interpolation GAN (SPI-GAN), which can be compared to the state-of-the-art shortcut-based distillation methods, and is one of the best models in terms of the sampling quality/diversity/time for CIFAR-10, CelebA-HQ-256, and LSUN-Church-256.
EXIT: Extrapolation and Interpolation-based Neural Controlled Differential Equations for Time-series Classification and Forecasting
This work redesigns NCDEs by redesigning their core part, i.e., generating a continuous path from a discrete time-series input, and proposes to generate another latent continuous path using an encoder-decoder architecture, which corresponds to the interpolation process ofNCDEs.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00508.warc.gz
|
CC-MAIN-2022-40
| 2,914
| 22
|
https://www.perfectlancer.com/freelance-jobs/data-scraping
|
code
|
Freelance Data Scraping Jobs
Get your job done with professional freelancers
Are you looking for a new freelance opportunity? Do you have experience in data scraping? If so, you may be interested in finding freelance data scraping jobs. There are many opportunities available, and it can be a great way to earn some extra money. Check out the suggestions below to get started.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506423.70/warc/CC-MAIN-20230922202444-20230922232444-00316.warc.gz
|
CC-MAIN-2023-40
| 376
| 3
|
http://www.meetup.com/PDX-PHP/members/76151652/
|
code
|
April 24, 2013
Security and implementation of MySQL. I also like to learn different kinds of PHP frameworks.
Come from a "2D/3D game art" background. I recently finished up a web design degree and I would like to learn more back-end programming and networking with other web programmers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736673632.3/warc/CC-MAIN-20151001215753-00216-ip-10-137-6-227.ec2.internal.warc.gz
|
CC-MAIN-2015-40
| 286
| 3
|
https://discourse.igniterealtime.org/t/how-to-connect-two-servers-openfire/59274
|
code
|
I need to know how I can have two Openfire servers in different places that can see each other: users on server "A" should see users on server "B", and users on server "B" should see users on server "A". I followed to the letter the instructions shown in various forums: I added the IP address of each server in Server > Server Settings, with the server-to-server route enabled, and left the service configured as it should be. But if I go to Sessions > Server Sessions, it says "No sessions initiated". Help...!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401600771.78/warc/CC-MAIN-20200928104328-20200928134328-00296.warc.gz
|
CC-MAIN-2020-40
| 488
| 1
|
https://www.gostserver.xyz/category/
|
code
|
GOST presentation at OSGeo.nl day
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321553.70/warc/CC-MAIN-20170627203405-20170627223405-00624.warc.gz
|
CC-MAIN-2017-26
| 309
| 5
|
https://simplypianostudio.com/2017/03/
|
code
|
I just need to get on my podium for just a moment!
It seems to me that in Vancouver and the outlying areas, people think that piano is all scales, arpeggios, chords, and a bunch of other technical skills needed to be an incredible player. That is simply not true!
What about Clementi, Czerny, Hanon, to name a few, even Chopin Etudes.
Greatness often does not come until we are older. There are exceptions, but generally it comes later, because we develop as students of ourselves and come to understand who we are!
Simply Piano’s mission is to have people of all ages, from children to adults, uncover the greatness of who they are while studying something they truly love, not just practicing scales over and over and over again! YUK!
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998369.29/warc/CC-MAIN-20190617022938-20190617044938-00260.warc.gz
|
CC-MAIN-2019-26
| 747
| 5
|
https://www.lukas-stappen.com/short-cv
|
code
|
Lukas Stappen received his M.Sc. in Data Science with distinction from King's College London, UK, in 2017 and joined Prof. Dobson’s group. His doctorate (Dr.-Ing./ engl. Doctor of Engineering) in multimodal deep learning was supervised by Prof. Schuller and defended with summa cum laude (engl. with the highest honours) in 2022. Recently, he left BMW Group Research to start his own venture aiming at improving human-to-human interaction analysis.
He (co-)authored 30+ publications (600+ citations, h-index=13+) in the research fields of multimodal deep learning, affective computing, multimodal sentiment analysis, linguistic, and multimodal/cross-modal representation learning. Further interests are denoising in-the-wild signals and machine learning in healthcare-related domains and social media.
He was the lead organiser of The Multimodal Sentiment Analysis Challenge - MuSe 2020 and 2021 at ACM Multimedia (A*), program committee member at the 4th (IJCAI 2019) and 5th (ECAI 2020) International Workshop on Knowledge Discovery in Healthcare Data, data chair at the Interspeech Computational Paralinguistics ChallengE (ComParE) 2020, reviewer for IJCAI, IJCNN, ICASSP, ACL, AACL (IJCNLP), IEEE Transactions on Cybernetics, IEEE Transactions on Multimedia, IEEE Transactions on Affective Computing and many more. Furthermore, he received several student academic excellence scholarships as well as the best paper award of the IEEE MMSP 2020.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643388.45/warc/CC-MAIN-20230527223515-20230528013515-00311.warc.gz
|
CC-MAIN-2023-23
| 1,449
| 3
|
https://www.jetbrains.com/help/idea/2016.3/adding-modules-to-a-project.html
|
code
|
Adding Modules to a Project
When necessary, you can add modules to your project. When doing so, you can create new modules from scratch, or by importing existing sources. These may be the sources originating from Eclipse, Flash Builder, Gradle, or Maven, or a collection of sources of "unspecified origin".
You can also add existing modules to a project.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104512702.80/warc/CC-MAIN-20220705022909-20220705052909-00636.warc.gz
|
CC-MAIN-2022-27
| 354
| 3
|
https://seattlecentral.edu/qelp/sets/017/017.html
|
code
|
ABOUT BODY WEIGHT VERSUS BRAIN WEIGHT OF MAMMALS
Biometrics is the quantitative analysis of ontogenetic ("of or relating to life cycle or development")
parameters such as height, weight, shape, morphology, age, etc. Biometrics is one key to understanding growth of
individuals with time. Biometrics can also be applied to groups of organisms (such as a population of Douglas fir
trees, or all butterflies, or all mammals) to gain insights into fundamental principles of growth or behavior.
Allison and Cicchetti (1976) provide data on body weight (in kilograms) and corresponding brain weight (in grams)
for 62 different terrestrial mammals (no whales). Students should question the meaning of these pairs of numbers
immediately. For example, there is only one pair of numbers for humans. Is this a single datum of a single human?
Is this the mean of many measurements? Old or young, male or female, well fed or malnourished? The "human"
in the table weighs 62 kilograms or 136 pounds; is this representative? In addition, these data were not collected
by a single investigator, nor were they collected in the same manner, perhaps adding complexity.
The values of body weight range over 6 orders of magnitude. To represent data with such a wide range of values
on a single graph requires logarithms. The plot of log body weight versus log brain weight shows a strong positive
correlation, as to be expected.
A linear regression through the log-log data has a fairly high correlation coefficient, suggesting a good fit
of a power law to the original ("unlogged") data. The slope of this line is less than 1, indicating a
variable ratio of body to brain weight as a function of size of the mammal. The exponent less than 1 indicates
that small mammals have relatively large brains compared to body size.
Despite taking the log of both variables, a large amount of scatter remains. Thus the best fit power law to
these data will not have much predictive value; one would not use this empirical relationship to predict brain
weight of a mammal for which only body weight was known. Your prediction might be off by an order of magnitude.
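The fitting procedure described above (linear regression on the log-transformed data, then converting slope and intercept back into a power law) can be sketched as follows. The data points below are synthetic, not the Allison and Cicchetti values:

```python
import math

def fit_power_law(body_kg, brain_g):
    """Fit brain = a * body**b by least-squares regression on log10 data."""
    xs = [math.log10(x) for x in body_kg]
    ys = [math.log10(y) for y in brain_g]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))      # slope of the log-log line
    log_a = my - b * mx                          # intercept of the log-log line
    return 10 ** log_a, b                        # back-transform: brain ≈ a * body**b

# Synthetic points lying exactly on brain = 10 * body**0.75:
body = [0.01, 0.1, 1.0, 10.0, 100.0]
brain = [10 * w ** 0.75 for w in body]
a, b = fit_power_law(body, brain)
print(round(a, 3), round(b, 3))  # slope b < 1, as the text observes for real mammals
```

With real, scattered data the same code works; only the correlation (and hence the predictive value) drops, exactly as discussed above.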
Mammals that plot above the best fit line have relatively large brains for their body size; monkeys, chimps,
baboons and humans all plot well above the line. And in case you're feeling smug, so do ground squirrels. The
opossum falls well below the line.
It would be interesting to see data for a single species, such as coyotes or rabbits.
Reference: Allison, T. and Cicchetti, D. V. (1976), Sleep in mammals: ecological and constitutional correlates;
Science, v. 194, pp. 732-734.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601040.47/warc/CC-MAIN-20200120224950-20200121013950-00029.warc.gz
|
CC-MAIN-2020-05
| 2,660
| 27
|
http://docs.endian.com/6.5/sb/preface.html
|
code
|
The Connect Switchboard is the heart of the Secure Digital Platform, the Endian solution that links IT Security with the Internet of Things. The latest updates and corrections to this manual, referring to the latest release of the Connect Switchboard, will be available online at http://docs.endian.com/6.5/sb/. If you think that you have found any errors, either simple typos or even content errors, feel free to provide us feedback using Endian's bug tracker.
The Connect Switchboard Reference Manual 6.5 (“this document”) is copyright 2011-2023, Endian S.r.l. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the GNU Free Documentation License.
This document has been written by the Endian Team and its layout resembles the GUI of the 6.0 product line.
Reference manuals written for older Endian products can be found in the Documentation archive.
The information contained within this document may change from one version to the next and may also change over time without notice to improve the content, to correct any error or mistake, or to describe new or changed features. The date of the last update is always present at the bottom of every HTML page or on the cover of the PDF version.
All programs and details contained within this document have been created to the best of our knowledge and tested carefully. However, errors cannot be completely ruled out. Therefore Endian does not express or imply any guarantees for errors within this document or a consequent damage arising from the availability, performance, or use of this or related material.
Endian and the Endian logo are trademarks of Endian S.r.l., Italy.
The use of names in general use, names of firms, trade names, etc. in this document, even without special notation, does not imply that such names can be considered as free in terms of trademark legislation and that they can be used by anyone. All trade names are used without a guarantee of free usage and might be registered trademarks. As a general rule, Endian adheres to the notation of the manufacturer. Other products mentioned here could be trademarks owned by the respective manufacturer.
Security Certifications Awarded¶
New in version 6.1.0: BSI, OWASP Top 10, and IEC 62443 certifications.
In November 2020, the following security certifications have been awarded to Endian for its products Switchboard and Edge X:
BSI-Grundschutzkatalog, granted by Germany's Federal Office for Information Security. Official documentation is available (in German) on the BSI web site.
OWASP Top 10, the list of the 10 most exploited vulnerabilities in the wild is also available on the OWASP web site
IEC 62443-4-2 SL2 for Switchboard and 4i Edge X as single products
IEC 62443-3-3 SL2 for the combination of Switchboard and 4i Edge X as a complete solution
IEC 62443 was initially defined to reduce the threats and attacks against the security of Industrial Automation and Control Systems (IACS), and has later evolved into the industrial cybersecurity standards for all the industrial networks. More information about the IEC 62443 certification can be found in the IEC’s official publication (PDF Table of content available).
In order to comply with the certifications, a few improvements have been developed and included in release 6.1.0; all of them affect the Switchboard, all the clients connecting to it, and all the managed devices, be they Gateways (i.e., 4i Edge X) or Endpoints.
The new functionalities can be configured on the Switchboard by an Administrator.
Two new options have been introduced to lock sessions after a period of inactivity by the user (soft lockout and hard lockout, see the box below).
The first option is called Session lock timeout; it can be configured from the Switchboard's web interface and defaults to five minutes.
In other words, after five minutes of inactivity, the user is required to log in again to continue their activities. This option concerns HTTP/HTTPS connections only.
The second option is available on CLI only and defines the hard lockout for all connection besides HTTP/HTTPS, including for example SSH, VNC, RDP, and so on. The option is called SESSION_TERMINATION_TIMEOUT and its value can be controlled with the following commands.
1 root@switchboard:~ # datasource emi.settings.SESSION_TERMINATION_TIMEOUT
2 Value EMI.SETTINGS.SESSION_TERMINATION_TIMEOUT
3
4 5
5 root@switchboard:~ # datasource emi.settings.SESSION_TERMINATION_TIMEOUT=10
6 Value EMI.SETTINGS.SESSION_TERMINATION_TIMEOUT
7
8 10
The command on line 1 returns the current value of the variable (5 minutes, which is the default), while the command on line 5 sets the value to 10 minutes.
Soft and hard lockout
There is a slight but important difference between soft and hard lockout in network connections. They both concern a period of inactivity by a (client) user and define how the server reacts to it.
- Soft lockout
After the inactivity period, the user is logged out and their next HTTP request will require a new login.
- Hard lockout
After the inactivity period, the user is logged out and the connection/socket is terminated as well.
In terms of Endian devices, soft lockout only implies that the user will need to provide username and password to continue the access to the appliance, while the hard lockout also triggers a disconnection event, i.e., the user’s connection to the Gateway or Endpoint is forcibly terminated.
To prevent hard lockouts, the client sends routinely a ping to the Switchboard: a hard lockout is triggered from the Switchboard only after the session timeout is reached and the pings from the client are not received anymore.
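The interplay between inactivity timeouts and client keepalive pings described above can be sketched roughly as follows. This is a hypothetical illustration, not Endian's actual code; the function and variable names are invented, and only the timeout values mirror the documented defaults:

```python
# Timeouts in seconds (both default to five minutes per the text above).
SESSION_LOCK_TIMEOUT = 5 * 60          # soft lockout (HTTP/HTTPS only)
SESSION_TERMINATION_TIMEOUT = 5 * 60   # hard lockout (SSH, VNC, RDP, ...)

def check_session(last_activity, last_ping, now, protocol):
    """Return the action the server takes for an idle session (sketch)."""
    idle = now - last_activity
    if protocol in ("http", "https"):
        # Soft lockout: the user must log in again; the socket stays usable.
        return "relogin" if idle >= SESSION_LOCK_TIMEOUT else "ok"
    # Hard lockout: fires only once the client's keepalive pings stop
    # arriving AND the session timeout has been reached.
    pings_stopped = now - last_ping >= SESSION_TERMINATION_TIMEOUT
    if idle >= SESSION_TERMINATION_TIMEOUT and pings_stopped:
        return "terminate"   # connection/socket is torn down as well
    return "ok"
```

Note how an SSH session that is idle but still pinging is left alone, matching the description that the Switchboard only triggers a hard lockout after the pings stop.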
To mitigate the effects of brute force attacks, an account lockout policy has been implemented; it can be configured from the Switchboard settings.
System use notification
The default value of the existing Welcome message option now begins:
Welcome to the Switchboard, access to the system is
Limit access for web crawlers
Access by web crawlers is prevented by configuring the Switchboard's web server with the directive
X-Robots-Tag "noindex, nofollow". This is a much more robust approach
than using a robots.txt file in the web server root directory.
Among the new improvements described in this section, this functionality is the only one that can not be configured by the user.
Endian web sites¶
For more information about Endian S.r.l., Italy and its products, please visit Endian web site at https://www.endian.com/.
Many resources (tutorials, how-tos, examples) in this manual are taken from those web sites:
https://help.endian.com/hc/en-us/ The new support center for the Endian products, that should become the reference site to support customers and users. Several links to howtos on this site are provided on this documentation at the end of the various subsections.
Within the support center, Section https://help.endian.com/hc/en-us/categories/202695877-Switchboard contains useful resources (tutorials and how-tos) for the Switchboard.
https://jira.endian.com/ Endian’s bug tracker, the place in which to search for existing bugs and their resolution or workarounds and to report new issues. It replaces the older bug tracker located at http://bugs.endian.com/, which is still accessible but not maintained anymore.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510387.77/warc/CC-MAIN-20230928095004-20230928125004-00792.warc.gz
|
CC-MAIN-2023-40
| 7,515
| 49
|
https://www.arisbe.net/beyonce-super-bowl-4k-quality-2160p/
|
code
|
Beyoncé – Super Bowl [4K Quality 2160p]
Excellence must be pursued, it must be wooed with all of one’s might and every bit of effort that we have; each day there’s a new encounter, each week is a new challenge. All of the noise and all of the glamour, all of the color all of the excitement, all of the rings and all of the money. These are the things that linger only in the memory. But the spirit, the will to excel, the will to win, these are the things that endure.
Take the opportunity to connect and share this video with your friends and family if you find it useful.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473824.45/warc/CC-MAIN-20240222193722-20240222223722-00332.warc.gz
|
CC-MAIN-2024-10
| 581
| 3
|
https://www.vice.com/en_us/topic/flightless-birds
|
code
|
This Bird Went Extinct and Then Evolved Into Existence Again
"We know of no other example in rails, or of birds in general, that demonstrates this phenomenon so evidently.”
New Zealand Students Can Buy Beers With Rats
For centuries New Zealand flightless birds and slow-moving reptiles lived without fear of native predators. This golden era ended when the British showed up on rat-infested ships. Now rats are the key player in the destruction of the country's forestry.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572744.7/warc/CC-MAIN-20190916135948-20190916161948-00193.warc.gz
|
CC-MAIN-2019-39
| 473
| 4
|
https://www.reasonstudios.com/shop/product/europa-obsidian/
|
code
|
Dark & hypnotizing new sounds…. The Europa Obsidian Refill contains extremely high-quality patches built on 12 included custom sample-based wavetables designed to express the pure strength of the Europa Synthesizer. This sound bank has a healthy palette of unique new sounds ranging from filter-submerged choirs, horrific textured pads, pulsing leads, gated synths, resonant bells, and evil mono synths, all the way to heavily modulated poly synths. This refill has 102 patches. The Europa Obsidian Refill will definitely inspire the creativity in any producer or artist!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948217723.97/warc/CC-MAIN-20240305024700-20240305054700-00316.warc.gz
|
CC-MAIN-2024-10
| 588
| 1
|
https://community.rsa.com/docs/DOC-47263
|
code
|
|Applies To||RSA Product Set: SecurID|
RSA Product/Service Type: Authentication Manager
RSA Version/Condition: 8.1
|Issue||Customer gets this error when he tries to import new Security Questions file to Security console. |
"The security questions XML file you provided is not encoded in UTF-8 with BOM"
|Cause||The imported xml file is not encoded in UTF-8 with BOM.|
|Resolution||To change the file encoding, follow the steps below:|
1. Open the security questions XML file with Notepad++.
2. Navigate to the Encoding menu and choose "Encode in UTF-8-BOM" (plain "Encode in UTF-8" saves the file without the BOM, which triggers the same error).
3. Save the file.
4. Re-import the XML file.
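At the byte level, "UTF-8 with BOM" simply means the file starts with the three bytes EF BB BF. A small sketch (a hypothetical helper, not part of any RSA tooling) that checks for and adds the BOM:

```python
BOM = b"\xef\xbb\xbf"  # the UTF-8 byte order mark

def ensure_utf8_bom(data: bytes) -> bytes:
    """Return `data` with a leading UTF-8 BOM, adding one if missing.

    Assumes the payload is already valid UTF-8; this mirrors what
    Notepad++'s "Encode in UTF-8-BOM" does when saving the file.
    """
    if data.startswith(BOM):
        return data        # already UTF-8 with BOM; nothing to do
    return BOM + data

xml = '<?xml version="1.0" encoding="UTF-8"?><questions/>'.encode("utf-8")
fixed = ensure_utf8_bom(xml)
print(fixed.startswith(BOM))  # True
```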
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202628.42/warc/CC-MAIN-20190322034516-20190322060516-00036.warc.gz
|
CC-MAIN-2019-13
| 572
| 10
|
http://practical-scheme.net/wiliki/schemexref.cgi?open-output-string
|
code
|
SRFI-6: Returns an output port that will accumulate characters for retrieval by get-output-string. The port can be closed by the procedure close-output-port, though its storage will be reclaimed by the garbage collector if it becomes inaccessible.
Some implementations call this make-string-output-port.
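As a rough analogy for readers more familiar with Python (this is not part of the SRFI itself), io.StringIO provides the same accumulate-then-retrieve behavior:

```python
import io

port = io.StringIO()      # analogous to (open-output-string)
port.write("hello, ")     # characters accumulate in the port
port.write("world")
result = port.getvalue()  # analogous to (get-output-string port)
port.close()              # analogous to (close-output-port port)
print(result)             # hello, world
```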
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481994.42/warc/CC-MAIN-20190217132048-20190217154048-00402.warc.gz
|
CC-MAIN-2019-09
| 522
| 4
|
http://athomepets.weebly.com/at-home-pets-blog/shaya-x-manny-august-20-2013
|
code
|
Looked to be blue.
But it was dead, so it matters not.
Made a good nest though.
um... oh.. She is fostering two of Lena's for now. Just to test her mothering abilities.
I have been breeding rabbits for quite a few years. I thoroughly enjoy them as animals and think they make great pets. I also like to take some of them to rabbit shows to see how they measure up to the standards.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320362.97/warc/CC-MAIN-20170624221310-20170625001310-00688.warc.gz
|
CC-MAIN-2017-26
| 383
| 5
|
http://aclouda.com/blog/soft/building-a-cluster-based-on-vmware-part-iii/
|
code
|
In the first article, Building a Cluster based on VMware (part I), we looked at the types of cluster technologies and how to install VMware ESXi; in the second article, Building a Cluster based on VMware (part II), we looked at how to deploy the vCenter Server Appliance (VCSA) 6.5. In this final part we will look at how to configure a VMware HA Cluster.
High Availability (HA) – the clustering technology is designed to increase the availability of the system, and in case of failure of one of the ESXi nodes, it is possible to start its virtual machines on other ESXi nodes automatically, without the participation of maintenance personnel.
To create a cluster, all the virtual machines need to reside on the same shared storage; this can be not only a hardware disk array but also software-defined storage. VMware implements this with vSAN.
Isolation Response
Isolation Response determines the actions of an ESXi node when it no longer receives the Heartbeat signal. This happens if the ESXi host is isolated from the cluster, for example after a network card failure.
All events can be determined by two scenarios:
In the first case, we use the value of Isolation Response – “Leave powered on”, then all VMs will continue to work.
In the second case, it is necessary to select Power off or Shutdown (used by default) if the node stops receiving signals, HA will allocate the VM to the nodes remaining in the network, and the failed node must finish the work, so there would not be a conflict between the VMs.
Reservation of Resources (Reservation)
When calculating the Failover Capacity, the cluster creates slots defined by the Reservation parameter. It is calculated according to the maximum size of VMs on all working nodes.
Failover Capacity parameter
After Failover Capacity is determined, it determines the maximum number of nodes in the cluster that can fail. In this case, all VMs will work.
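The slot-based reasoning behind Reservation and Failover Capacity can be sketched with a toy model. This is a hypothetical simplification: real vSphere HA admission control computes separate CPU and memory slots and is considerably more involved.

```python
def failover_capacity(host_capacities, vm_reservations):
    """Toy model of HA slot-based admission control.

    The slot size is the largest VM reservation; each host provides
    capacity // slot_size slots. Failover capacity is how many of the
    largest hosts can fail while the remaining slots still hold all VMs.
    """
    slot = max(vm_reservations)
    slots_per_host = sorted((c // slot for c in host_capacities), reverse=True)
    needed = len(vm_reservations)
    failed = 0
    # Worst case: remove the hosts with the most slots first,
    # as long as all VMs still fit on what remains.
    while (failed < len(slots_per_host) - 1
           and sum(slots_per_host[failed + 1:]) >= needed):
        failed += 1
    return failed

# Three hosts with 16 GB each, four VMs reserving at most 4 GB:
print(failover_capacity([16, 16, 16], [4, 4, 2, 2]))  # → 2
```

Here the slot size is 4 GB, each host holds 4 slots, so even a single surviving host can run all four VMs: two hosts may fail.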
1 Open vCentre and on the main page click “Create Datacenter”:
2 Enter the Name of the Datacenter:
3 Create Cluster:
4 Set the Name, leave the remaining options unchanged, and click OK:
5 Next, after descending on the tree to the cluster “Add a host”:
6 Set IP address or Network node Name:
7 Set Username and Password for ESXi nodes:
8 Answer Yes to Security Alert:
9 Let’s check the information on the added node:
10 Next, we see the license management window on the node, we can add the necessary ones and click Next:
11 Determine the level of blocking the console node – the description is clear:
12 Location VM and click next:
13 And that is all, we have done:
Repeat steps 5-13 for each node in the Cluster.
At this stage, we get a non-configured cluster, without a shared storage and any add-ons.
Right-click on our Datacenter, go to the Distributed switch and click New:
Set the Name for the list:
Choose the generation of the switch. The generation is determined by the oldest generation in the network:
Then we set the number of ports, if necessary, specify the name of the port group:
Next, go to the settings created by the switch, and add the Hosts participating in the cluster to it:
Click Add host:
Add a New hosts:
Choose hosts that need to be added:
And click Next, after checking information:
Determine where we want to add ports (you can leave by default):
Now assign uplinks for the port group:
Create new adapters. To create, just click New adapter. Select the port group associated with the switch, and click Next:
Then specify the services for which the switch and the IP address of this virtual adapter are created. Click “Next” to complete the creation of the VMKernel adapter:
Check all information and if its correct – click Finish:
Re-create the VMKernel adapters for each ESXi host. In the end, you should have something like below:
Before configuring, the wizard will analyze the impact. Once everything is in order, click “Next”.
Go to the cluster settings. vSAN – General – Configure.
Determine the appropriate scenario and click on it further:
The system will check whether vSAN is allowed on all hosts on the network:
We mark disks for the cache and for storage in the manager vSAN:
Then finish the settings.
So we got the VMware based Cluster with the vSAN storage.
This procedure is not very simple and involves a lot of configuration parameters, which is why this series of articles is not a complete instruction for setting up a cluster in production, nor an exact explanation of how everything works, but it should give a general understanding of the configuration.
We wish you a greater experience in Clusters technology and you always may learn more about it on official VMware site.
HowTo, Software, VMware by Veniamin Kireev
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517745.15/warc/CC-MAIN-20190418161426-20190418183426-00507.warc.gz
|
CC-MAIN-2019-18
| 4,798
| 53
|
https://te.wikipedia.org/wiki/%E0%B0%B5%E0%B0%BE%E0%B0%A1%E0%B1%81%E0%B0%95%E0%B0%B0%E0%B0%BF:KCVelaga
|
code
|
I am Krishna Chaitanya (KC) Velaga, from India. When I first got involved with Wikimedia projects in late 2014, I was extensively involved in military history content on English Wikipedia and a bit of other India-related content. Later, my interest shifted to Wikidata for a short while, and eventually Wikimedia Commons, where I am currently active. My interest on Commons is in vector graphics and filling the gaps where a subject had no image/media before. Along the way, somewhere in 2017, I got into organizing outreach activities for Wikimedia projects, including establishing a Wiki-club at my college, VVIT WikiConnect. After 2020, I started to get increasingly involved with the technical areas of Wikimedia projects, especially things that have to do with Python (bots, tools, etc.), statistics, data analytics and visualizations, and outreach related to that. Currently, I spend the majority of my volunteer time on Wiki Loves Monuments, the Indic MediaWiki Developers User Group, and the Small wiki toolkits initiative.
Stuff that I can help with ...
Projects, campaigns, outreach activities, and events where I have been significantly involved in organizing, in my volunteer capacity.
Disclaimer: I also work with the Wikimedia Foundation as staff. But the edits, the contributions, and my involvement using this account are purely as a volunteer, and do not in any way represent the views of the Foundation. If you would like to contact me about anything related to my work in my staff capacity, please use the talk page or email of my staff account.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00420.warc.gz
|
CC-MAIN-2022-33
| 1,549
| 4
|
http://slazebni.cs.illinois.edu/spring16/project.html
|
code
|
Progress report: due April 11, 11:59:59PM
The progress report should be two-four pages and should contain the following:
- An updated statement of the project definition and goals. If there are any changes, discuss them specifically, together with the reasons for these changes.
- Current member roles and collaboration strategy. Who is responsible for what? How are the code and data shared/maintained? How does the group interact -- e.g., are there regularly scheduled meetings?
- Proposed approach in the form of a detailed outline, pseudocode, or prose description. Be specific about how you plan to implement each step with references, pointers to external code, etc. One or more references are required at this stage.
- Data: specific description, including number of images, type of images and annotations, URL (if applicable), etc. Examples of your actual data are required.
- Initial results: brief description of which steps (if any) from #2 above you have already implemented. As a minimum, you should have collected your data (or have the collection process well underway) and compiled and tested any external code.
- Current reservations and questions (if any).
The progress report should be uploaded on Compass (Project Progress Reports under Course Content) by one designated group member (but make sure that everybody's names are on the document). The report will not receive a separate grade, but its content and quality will contribute holistically to the overall project grade at the end of the semester. Late submission will incur a 10% per day penalty on the final project grade.
Presentations: in class, May 3 (8-11AM) and May 9
- Presentations will be 4 minutes long, with a possibility for one audience question afterwards. The time limit will be strictly enforced. Any videos or demos are counted in the 4 minute limit.
- All team members must be there for the presentation.
- We will compile all the slides on the same computer to ensure fast transitions, and make sure all the videos play properly. Please email the slides in PowerPoint format, including any videos or supplementary materials, to Liwei Wang (firstname.lastname@example.org). If the slides are large, email a link at which they can be downloaded.
- If you are presenting on May 3rd, you must email the slides by noon on Monday, May 2nd. If you are presenting on May 9th, you must email the slides by the end of Friday, May 6th. IMPORTANT: If you do not send us your slides by your deadline, your project grade will be reduced by 50% and you will not get a chance to present.
Final report: due date May 11, 11:59:59PM
The final report should be submitted in PDF format by one designated group member on Compass.
It should be (the equivalent of) at least six pages (single-spaced, 10 point font) and mimic
the style of a research paper. It is not necessary to submit code. Here is a rough outline to follow for the report:
- Introduction: Define and motivate the problem, discuss background material or related work, and
briefly summarize your approach.
- Details of the approach: Include any formulas, pseudocode, diagrams -- anything
that is necessary to clearly explain your system and what you have done. If possible, illustrate
the intermediate stages of your approach with results images.
- Results: Clearly describe your experimental protocols. If you are using training and
test data, report the numbers of training and test images. Be sure to include example output figures.
Quantitative evaluation is always a big plus (if applicable). If you are working with
videos, put example output on YouTube or some other external repository and include links in your report.
- Discussion and conclusions: Summarize the main insights drawn from your analysis and
experiments. You can get a good project grade with mostly negative results, as long as you show evidence of extensive
exploration, thoughtfully analyze the causes of your negative results, and discuss potential improvements.
- Statement of individual contribution: Required if there is more than one group member.
- References: including URLs for any external code or data used.
Grades will be based on the quality of the project (originality, thoroughness, extent of analysis,
etc.) and the clarity of the written report and presentation. Ideally, you will try
something new or apply ideas from class to your domain or research. More will be expected of
larger groups. You can still get a good grade if your ideas do not work out, as long as your presentation and
report show evidence of extensive analysis and exploration, and provides thoughtful explanations
of the observed outcomes.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590074.12/warc/CC-MAIN-20180718080513-20180718100513-00140.warc.gz
|
CC-MAIN-2018-30
| 4,613
| 38
|
https://www.clevx.com/why-is-it-taking-a-long-time-for-cloudkey-to-back-up-my-files/
|
code
|
Why is it taking a long time for USBtoCloud to back up my files?
Usually Internet upload speed is much slower than download. But if upload is really slow then it’s likely due to your Internet connection. Also, if you have a lot of files on your USB drive the first-time backup will take longer. Once the initial backup has completed future backups will be much faster.
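As a back-of-envelope illustration of why the first backup can take a while (the numbers below are hypothetical examples, not measurements of USBtoCloud):

```python
def backup_hours(data_gb, upload_mbps):
    """Rough first-backup estimate: data_gb of data over upload_mbps link.

    Uses decimal units: 1 GB = 8000 megabits; ignores protocol overhead.
    """
    megabits = data_gb * 8 * 1000
    return megabits / upload_mbps / 3600  # seconds -> hours

# 50 GB of files over a typical 10 Mbit/s upload link:
print(round(backup_hours(50, 10), 1))  # → 11.1 hours
```

Subsequent backups only transfer new or changed files, which is why they finish much faster than the initial one.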
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648695.4/warc/CC-MAIN-20230602140602-20230602170602-00255.warc.gz
|
CC-MAIN-2023-23
| 369
| 2
|
https://boinc.berkeley.edu/dev/forum_thread.php?id=14799
|
code
|
Dr Who Fan
Joined: 10 May 07
MLC@Home shutting down for now, and thank you!
MLC@Home Is shutting down
After over two years, some bumpy moments, and the tremendous support from our volunteers,
I, as MLC admin, am making the decision to shut down MLC@Home as a BOINC project for the foreseeable future.
We've achieved the goals I set out to accomplish (and more!) with 4 complete datasets comprising
dozens of terabytes of data to analyze. Now we need to focus on analyzing the results and writing papers.
As a researcher, at some point you have to stop generating data and write; and my family, work, and
school commitments have limited the amount of time I can spend generating new experiments. This
should be evident as I've been less and less responsive to the community over the past 6 months,
for which I apologize. While we can always want more from any endeavor, I think we've accomplished
a lot for now, and want to put the project on indefinite hiatus until something new comes along.
This is a time to celebrate all that our volunteers have achieved together! This
community has been amazing between the forums and Discord. We're shutting down not because of any
problem, but because we've achieved the goals we set out to accomplish. For that, I couldn't be
The only bittersweet aspect to shutting the project down is that I hoped to grow MLC@Home beyond MLDS,
to become a platform for democratized machine learning research. I failed to gain traction with other
researchers and as such MLDS was the only project on MLC@Home. COVID is partly to blame, but there are
a number of other factors ranging from how research is funded in a hot field like ML to my own limited time
commitments. If other researchers express an interest we can revive the project in the future, but for now
I can not justify running the project without a real path to meaningful new work. That wouldn't be fair to the volunteers.
What happens now?
First, as promised, the datasets will remain available (DS4 will require some thought and time to release, see
below), and the main MLC@Home website (https://www.mlcathome.org) and twitter feed will remain active so I
can post updates on any papers and how to access DS4 when available. For now, there are no changes
to the BOINC server portions of the website. I'll need to read up on how to properly archive the forums,
project pages, and stats so that they can remain available (read only) without becoming a magnet for spam
and the (currently hourly...) hacking attempts (sigh...). I will also be winding down the Discord community
over the next month or so.
For me personally, I will continue my research and work on publishing meaningful results. I'll also continue
to support other BOINC projects (I've been contributing to BOINC since the SETI@Home classic days)
and support the idea of volunteer computing. At some point, I'll write up my experience as a researcher
starting a new project and running it from the beginning to end; and hope that will be a resource for other
projects wanting to start out. It's generally been a positive experience, but there are some definite areas for improvement.
For you, I encourage you to continue to support other great BOINC projects with your computing time. The
official list is here https://boinc.berkeley.edu/projects.php.
DS1/2/3 are up for download now, what about DS4?
DS4 is large, over 12 TB in size for just the Dense portion. So it's going to require even more time to copy,
package, analyze, and upload. I intend to do this after my analysis and thesis is complete, which should
be in the next 6 months. If you are a researcher and want access to the dataset sooner, please contact
me directly and we can work something out.
The original idea for DS4 was to compute neural networks for each type of data using dense, LeCun-style
CNNs, and AlexNet CNNs. It turns out LeCun networks are so small and easy to compute that I can compute
50,0000 of them them locally on my won workstation in a day or two, so I didn't bother sending those out
as BOINC workunits (also because the current client crashes when computing LeNet5 on some platforms,
and it was faster to compute it locally than track down the bug). Since it's debatable what scientific
benefit having AlexNet (another CNN) brings over LeCun networks, I'll likely drop those from the dataset.
Even if nothing else happens, MLC@Home has been a major success. We produced scientifically
interesting and unique datasets, introduced a whole new type of science (machine learning) to the BOINC
community, and showed that machine learning research can be conducted by a group of volunteers over the internet.
There are a few groups and individuals I'd like to specifically thank for making this project such a success.
These include, but aren't limited to: the BOINC developers, especially Vitalii Koshura and the other
developers on the BOINC Discord server, for helping me develop the project from the very
beginning, Marcus (Delta on the BOINC Discord servers) for contributing directly to MLC@Home's
server backend processing software, and who, along with JRingo run the BOINC Radio podcast that
promoted and supported MLC@Home from very beginning. Mike from the PrimeGrid project for
providing some crucial early advice for running a new project. I'm sure I'm forgetting many others, just
know that we, as a community have many to thank for the success of this project.
I'd like to extend an extra thanks to the early volunteers on the project who helped make the forum a
helpful and welcoming place.
Thanks also to the CoRaL Labs and my advisor at UMBC for supporting the research and providing
funding for the new server after we quickly out-grew our original 2015-era ThinkPad laptop.
Finally, thanks to our 4200+ volunteers, who crunched over 12.5 million work units using more than
17,000 hosts. I am truly humbled by your contributions and what we've achieved together. None of this
would have been possible without you. Thank you for giving a small unknown researcher a chance, and
I encourage you to seek out smaller projects in the future, as their success will help determine
whether BOINC continues to grow and thrive.
I leave you with one last, satisfying website screenshot:
Thanks again to everyone,
-- MLC@Home primary researcher and admin:
ID: 109958 ·
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948217723.97/warc/CC-MAIN-20240305024700-20240305054700-00354.warc.gz
|
CC-MAIN-2024-10
| 6,242
| 72
|
https://theaudioprogrammer.com/a-basic-understanding-of-digital-audio/
|
code
|
We get a lot of questions about DSP every day. In this article I am going to illustrate how Digital Audio works, and how your computer works with audio data. This is a high level overview that will introduce you to fundamental aspects of digital audio.
A Familiar Scene
Let us assume you are about to record yourself speaking into a microphone. Your microphone is plugged in to your computer via an Audio Interface. You have armed your track in Logic and press record. You speak in to your computer and you can see your audio waveform displayed on the screen as you record.
Now you want to listen back to your recording (because you conform to best practice of course…). So you put on your headphones which are connected to the headphone port of your audio interface. You press play and you can hear your recording.
For the user, this is a simple process. However, there is a lot of fundamental stuff that is happening that you need to understand for Digital Audio and DSP.
From Sound Waves to Data…
Sound is nothing more than a pressure wave that propagates in one direction through a medium such as air. When a sound wave hits your microphone’s diaphragm, that microphone creates a voltage potential across the cable terminating at the audio interface. That voltage is an analog signal. This is to say that it is infinitely precise and can be expressed over time like in the following figure. Infinitely precise, means that we can make infinite steps between two amplitude values or between two points in time.
That voltage is then processed by an Analog to Digital Converter (ADC) to create a digital version of that signal. Digital signals are different as they are quantized in both time and amplitude. This means that a value (sample) is taken at fixed intervals (sampling frequency). That value is rounded to the nearest value that can be expressed in bits (bit depth). This is shown in the illustration below. This means that our digital signal is less true to the original signal. However, if we take more samples and increase our bit depth, then we can have a more accurate representation of the analog signal.
You may be wondering: “How high does my Sampling Frequency need to be?”
Our minimum required sampling frequency is dictated by Nyquist-Shannon Sampling Theorem. The main idea of the theory is as follows:
At least two samples per period of a wave are required in order to express the positive and negative portions of a signal.
In reality more samples per period may be required. But as a rule of thumb, this means that our Nyquist Frequency (the maximum resolvable frequency) is half of the sample rate. So at 48000 Hz sample rate, the Nyquist Frequency is 24000 Hz which encompasses all of human hearing. Sometimes we do not need to sample as high (human speech). Sometimes we need higher sample rates (impulse response measurements and time warped samples). The plot below shows the values that will be replicated when a 500 Hz wave is sampled at 12 kHz. If you would like to experiment with this, please refer to the Python Script on my Github.
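The sampling arithmetic above can be checked numerically. This is a minimal sketch (not the author's actual GitHub script), sampling the same 500 Hz tone at 12 kHz as in the plot described:

```python
import numpy as np

fs = 12_000                      # sampling frequency in Hz
f = 500                          # tone frequency in Hz, far below the 6 kHz Nyquist limit
n = np.arange(48)                # sample indices covering two full periods
samples = np.sin(2 * np.pi * f * n / fs)

# fs / f = 24 samples per period, well above the minimum of 2 required
# by the Nyquist-Shannon theorem to resolve the wave.
print(fs / f)                    # 24.0
```

Raising `f` above `fs / 2` here would fold (alias) the tone down to a lower frequency, which is why anti-aliasing filters are placed before the ADC.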
Quantization is a method by which we map input values (which are infinite precision in our case) to a smaller data set (which is finite in range and precision). Quantizers replace the sampled voltage values with a value represented in bits (combinations of 1’s and 0’s). Common word lengths in audio are 8 bits, 16 bit and 24 bit. The following picture shows the representation for 2 bits. Notice the significant rounding errors.
These rounding errors are called quantization error. Too much error can be audible and is quantifiable as the signal-to-quantization-noise ratio. This can be calculated as SQNR ≈ 6.02 · N + 1.76 dB, where N is the bit depth in bits.
This means that a pristine analog recording, through a 16 bit ADC, will have a maximum signal-to-quantization-noise ratio of approximately 98.09 dB. If you would like to experiment with this, please refer to the Python Script on my Github.
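The 98.09 dB figure can be reproduced empirically. The sketch below (an illustration, not the author's script) quantizes a full-scale sine with a simple uniform quantizer and compares the measured signal-to-quantization-noise ratio to the formula:

```python
import numpy as np

def quantize(x, bits):
    """Uniform mid-tread quantizer mapping [-1, 1) onto 2**bits levels."""
    step = 2.0 / (2 ** bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

bits = 16
t = np.arange(48_000) / 48_000          # one second at 48 kHz
x = np.sin(2 * np.pi * 997 * t)         # full-scale test tone
noise = x - quantize(x, bits)

sqnr_measured = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
sqnr_theory = 20 * np.log10(2) * bits + 10 * np.log10(1.5)   # ~ 6.02*N + 1.76 dB

print(round(sqnr_theory, 2))   # 98.09
```

The measured value lands within a fraction of a dB of the theoretical maximum; a quieter (non-full-scale) signal would measure correspondingly worse, since the quantization noise floor stays fixed.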
How does your computer use this?
After the ADC does this sampling and quantization, the audio data is now a bit stream. It is a constant flow of bits to your CPU. The ADC is constantly feeding data to the CPU at a rate of 44.1 kHz, 48 kHz or some other rate. The audio software is doing whatever operations need to be done. This could be writing audio to disk or processing through an effect. Likewise, your DAC is constantly requesting and pushing audio data to your headphones and speakers in the reverse process while you are listening to playback.
However, a CPU or other processor does not typically do this sample by sample. It is more common for a processor to take a chunk of audio data called a buffer. The buffer has a fixed number of samples (128, 256, 512, 1024 samples). This buffer can either be processed all at once or on a sample by sample basis.
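Buffer-based processing can be sketched in a few lines. The "effect" here is a hypothetical gain stage, chosen only to keep the example small; real callbacks receive buffers from the audio driver rather than slicing a list:

```python
BUFFER_SIZE = 256  # samples per buffer, one of the typical sizes mentioned above

def process_buffer(buffer, gain=0.5):
    """A trivial 'effect': apply a gain to one buffer of samples."""
    return [s * gain for s in buffer]

def process_stream(signal, buffer_size=BUFFER_SIZE):
    """Feed a long signal to the effect one fixed-size buffer at a time,
    mimicking how an audio callback receives blocks from the driver."""
    out = []
    for start in range(0, len(signal), buffer_size):
        out.extend(process_buffer(signal[start:start + buffer_size]))
    return out

samples = [1.0] * 1000        # 1000 samples: three full buffers plus a partial one
processed = process_stream(samples)
print(len(processed), processed[0])   # 1000 0.5
```

Smaller buffers lower latency but raise per-buffer overhead; larger buffers do the opposite, which is the usual latency/efficiency trade-off in audio software.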
Why do you need to know this?
There are many topics I did not cover today, including aliasing, dithering and jitter. However, this concept of sampling and quantization is fundamental to Digital Signal Processing and the creation of audio effects. This limited precision will dictate how you handle data and calculations in your algorithms. It will be a limiting constraint for filter design, reverb design, distortion and other audio effects.
I hope this article helped others understand how audio works on a computer. In short, a computer takes an input voltage, samples it, quantizes it, and processes it in sets of samples called buffers. This core idea has many implications in more advanced applications of DSP. I hope this helps guide your understanding so that you can understand the later articles that I will be publishing.
Be good to each other and take it easy..
Will Fehlhaber is an Acoustics Engineer and Audio Programmer from the UK and Bay Area.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178361510.12/warc/CC-MAIN-20210228145113-20210228175113-00031.warc.gz
|
CC-MAIN-2021-10
| 5,750
| 23
|
http://www.chessninja.com/boards/ubbthreads.php?ubb=showflat&Number=130352
|
code
|
Just tossing it up. I hope the change hasn't freaked you all out. Almost as much as I hope we didn't lose too much data or too many settings! It's not terribly clear what is supposed to have come over and what not, other than the users, forums, and posts. Sorry if you lost some settings, etc.
Coming features will include a shoutbox (live chat). Other aspects of this "bull**** service" will also be back on track and under reform.
Mig Greengard Is the Leningrad Dutch now the St.Petersburg Dutch?
It is going to take a while for me to figure out what button is where. Initial learning curve. That's ok.
P.S. funny emoticons
Suggestion 1. How about a different theme color for every day of the week. Should be pretty easy to implement?? In other words, by the color you can identify which day of the week it is. Btw, I don't want to freak you out with my suggestions, Mig.
Suggestion 2. Could the information message be this post is modified instead of this thread is modified when you edit it??
Edited by PircAlert (10/10/08 08:09 PM)
Men make counterfeit money; in many more cases, money makes counterfeit men (read world champions!)
Looks like a great upgrade! A couple of my PMs seem to be missing, but they were both created today. But that's not a big deal. Also, one PM that was created earlier, but that was updated today seems fine.
I don't remember previous version having colors.
Or an ability to embed videos:
Edited by Russianbear (10/10/08 08:15 PM)
Congratulations to Magnus Carlsen on his victory in the Anand-Topalov 2010 World Championship match!
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368703298047/warc/CC-MAIN-20130516112138-00063-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,554
| 14
|
https://www.artima.com/forums/flat.jsp?forum=106&thread=121406&start=15&msRange=15
|
code
|
Re: When it comes to Computer Science, don't reference Wikipedia
Posted: Aug 1, 2005 7:15 PM
I think what you forget here is that the Oxford dictionary is over 100 years old, while Wikipedia is not even 5. Meanwhile, wikipedia has over 500,000 articles and, if you do a search for a random topic, it will probably be better researched (have more diverse scholarly sources) than if you spent an hour googling the information. This being said, to use Oxford as "the bar" is just plain crazy. You want something right, get a dictionary or encyclopedia. You want something quickly or something unique/hard to find (particularly pop-culture or geek-culture), go to wikipedia or c2.
The problem as I see it isn't with wikipedia. It is just in the way people search for information. In social science, I learned the following:
- Peer reviewed sources are better than non-reviewed sources.
- Original data is better than an interpretation of that data.
- Quantitative data is (generally) better than qualitative data.
There are other rules, but what I want to get at is, in terms of finding sources for online arguments (or discussions), the following:
- Avoid making assertions without 3 sources of greatly differing backgrounds.
I don't take anyone seriously who does not present me with enough unbiased information. Particularly if I do a quick google and the first 3 matches disprove them right off the bat. And I won't be fooled by three articles that have the same sentence word for word, as they've obviously tainted each other.
As for cdiggins vs. wikipedia... in my opinion, it is a base and primitive battle on both sides. It is always the same thing. One side says the other side sucks, they fight it out, and it ends with Wikipedia creating anti-Christopher articles and Christopher creating anti-Wikipedia articles.
So to sum up:
1) You have good sources if I agree with you, and pathetic sources if I don't.
2) Oxford isn't the bar. I am the bar.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652184.68/warc/CC-MAIN-20230605221713-20230606011713-00466.warc.gz
|
CC-MAIN-2023-23
| 1,949
| 14
|
https://blogs.vmware.com/euc/2015/11/identity-manager-cloud-whats-new-october-2015.html
|
code
|
This overview of new technology represents no commitment from VMware to deliver these features in any other generally available product or service.
Welcome to the first of our “What’s New in Identity Manager” blog series. In this blog, we will provide an update about new features in VMware Identity Manager and address some of the most commonly asked questions.
Before we begin, we encourage you to watch the VMware Identity Manager video for an overview of the VMware Identity Manager solution.
In October, we were excited to announce the release of the following features in our cloud version:
- Inbound SAML just-in-time (JIT) provisioning
- Authentication-method chaining
- Single sign-out from third-party identity providers
- New data centers in the Europe and Asia Pacific regions
Now, we will go into more detail about the new features introduced in October.
Inbound SAML Just-In-Time Provisioning
With JIT provisioning, you can use a SAML assertion to create users on demand the first time they try to log in to VMware Identity Manager. This eliminates the need to create user accounts in advance. For example, if you recently added a partner user or an employee, you do not need to manually create the user in VMware Identity Manager. When they log in with single sign-on using a third-party SAML identity provider, their user account is automatically created for them, eliminating the time and effort with on-boarding the user. You can both create and modify user accounts this way. Because JIT provisioning uses SAML to communicate, your tenant must be configured with a third-party SAML identity provider such as ADFS, Ping Federate, or Google Apps.
Q. When using the SAML JIT provisioning feature, do you need to also deploy the VMware Identity Manager connector to connect to Active Directory?
A. No. The JIT provisioning feature can be used with or without the VMware Identity Manager connector.
Q. When should you use the VMware Identity Manager connector versus the SAML JIT provisioning feature?
A. The VMware Identity Manager connector synchronizes user information from the Active Directory into the VMware Identity Manager service at regular intervals.
Use VMware Identity Manager connector when you want to
- Set up a user in VMware Identity Manager before the user logs in the first time
- Disable or delete the user in VMware Identity Manager when the user is disabled or deleted in Active Directory
Use SAML JIT provisioning when you
- Already use a third-party identity provider (IdP) connected to Active Directory, and do not want to deploy another connector for Active Directory
- Want to integrate with user repositories other than Active Directory, such as Google Apps
- Do not want users to have to wait to log in until the connector synchronization job is complete; the job runs every 24 hours
Q. Can the JIT provisioning feature be used with other cloud directories?
A. The JIT provisioning feature can be used to connect to other cloud directories that act as a SAML IdP, such as Google for Work or Azure AD. If you are using 100% cloud deployment of these directories (not synchronized from on-premises AD), you can use this feature to log users in to VMware Identity Manager using Google or Azure AD credentials, and create or update users on demand at login time.
Q. How do you configure the JIT provisioning feature of VMware Identity Manager?
A. The SAML JIT provisioning feature is accessible through the Identity Providers tab in the VMware Identity Manager administration console. When creating or editing a third-party IdP, an administrator can enable JIT provisioning in VMware Identity Manager and define the user directory and domains where users will be provisioned and authenticated. For more details, refer to the VMware Identity Manager Administration Guide.
Authentication-method chaining allows you to mix and match authentication methods to create your own authentication chain. For example, you can set up an authentication policy to first authenticate using an AD username and password, and then pass the authenticated username to a second authentication method, such as RADIUS. You can even apply the second authentication method from another IdP. For example, you can use the VMware Identity Manager connector to authenticate using AD, and then use the SafeNet IdP to authenticate, for two-factor authentication (2FA). The authentication fallback feature continues to work with authentication chaining.
Previously in Identity Manager, you could use only one primary authentication method (such as username and password, Kerberos, Certificate, RADIUS, RSA SecurID, or others). If one authentication method failed, you could revert (or fall back) to a secondary authentication method to complete the login. But, you could not apply two authentication methods in sequence. For two-factor authentication, it was required that the primary authentication method perform the two-factor authentication.
Single Sign-Out from Third-Party Identity Providers
When using a third-party IdP to authenticate users into VMware Identity Manager, now you can sign out users from the third-party IdP upon user sign-out (logout) from VMware Identity Manager. This can be configured in two ways:
- If the third-party IdP supports the SAML single sign-out profile, then VMware Identity Manager can send the SAML message to sign out the user from the IdP.
- If the third-party IdP does not support SAML single sign-out, then you can redirect the user to the IdP’s sign-out endpoint or page, and if that endpoint supports redirect, you can redirect the user to the VMware Identity Manager login page.
To configure this feature, navigate to Identity Provider configuration, and enable the single sign-out check box, as shown in the following figure.
New Data Centers in Europe and Asia Pacific Regions
The VMware Identity Manager service is now available in the European Union (EU) and Asia Pacific regions. For the European region, the primary data center is in Germany, with a failover site in the United Kingdom. For Asia Pacific, the primary data center is in Australia, with a failover site in Japan.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00017.warc.gz
|
CC-MAIN-2023-06
| 6,146
| 35
|
https://gamedev.stackexchange.com/questions/136489/breakout-angle-calculation
|
code
|
If you think in terms of the horizontal and vertical velocities of the ball then this may make the problem more tractable.
So I assume that when a collision is detected, this is handled by adjusting the vertical velocity. The most trivial way to do this would be to invert it so that the ball will start travelling up instead of down. You could for example do this by multiplying the y-velocity by -1. So this gives you the basic bounce, but you want it to bounce differently based on where it hits the paddle, so you want to change the x and y velocity by different amounts depending on where the paddle collides with the ball.
In the case you describe for example - assuming that you don't want to alter the speed that the ball is travelling - you will need to create a function that uses Pythagoras's theorem to proportionately split the x and y velocities to a new ratio as a function of where the ball collides with the bat.
In your example it looks like you want to transfer more of the y velocity to x velocity in your right-hand picture.
In your left hand picture it looks like you want to invert the x velocity as well as the y velocity but also redistribute the combined x+y velocity so that more is transferred to the inverted x than the inverted y.
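One way to realize this speed-preserving redistribution is sketched below. It assumes the paddle reports a normalized hit offset in [-1, 1] (a convention invented for this example, with 0 at the centre) and maps that offset to an outgoing angle, then rebuilds the velocity components from the preserved total speed:

```python
import math

def bounce(vx, vy, hit_offset):
    """Reflect the ball off the paddle, redistributing speed by hit position.

    hit_offset: -1.0 at the paddle's left edge, 0.0 at the centre,
    +1.0 at the right edge (hypothetical convention for this sketch).
    The total speed sqrt(vx^2 + vy^2) is preserved; the further from
    the centre the ball hits, the more of that speed goes into x.
    """
    speed = math.hypot(vx, vy)
    max_angle = math.radians(60)            # steepest outgoing angle; an assumption
    angle = hit_offset * max_angle          # 0 at the centre -> straight up
    new_vx = speed * math.sin(angle)
    new_vy = -abs(speed * math.cos(angle))  # always send the ball back upward
    return new_vx, new_vy

# Centre hit: straight up at unchanged speed.
print(bounce(3.0, 4.0, 0.0))   # (0.0, -5.0)
```

Tuning `max_angle` controls how aggressively edge hits transfer vertical speed into horizontal speed, which covers both of the cases in the question's pictures.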
Perhaps a few more examples to demonstrate the desired behaviour across the bat and we could explore some functions?
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474737.17/warc/CC-MAIN-20240228143955-20240228173955-00705.warc.gz
|
CC-MAIN-2024-10
| 1,375
| 6
|
http://i-need-closures.blogspot.com/2005/11/cocoa-lisp-google-maps-mashup.html
|
code
|
Saturday, November 05, 2005
A Cocoa-Lisp-Google Maps Mashup
Just your run-of-the-mill Google Maps page, except for a couple of things; the server part is in Portable AllegroServe (no big deal), and the addresses came out of my Mac's Address Book via OpenMCL (a bigger deal).
I got interested in this after hearing an O'Reilly "Distributing the Future" podcast where someone mentioned that not everyone will want all their information on the web and there will be applications where part of the data is on the web and part is kept in the user's local storage.
The basic understanding of the Mac's AddressBook library came from this Mac Dev Center article, and help from Gary Byers got OpenMCL loading and searching the Address Book in OpenMCL. The rest is basic Portable AllegroServe.
The geocoder I'm using is Ontok, which has a REST api that takes up to 10 addresses at one time and returns CSVs that start with latitute and longitude co-ordinates.
Wil Shipley mentions that instead of using a database for storing customer orders he keeps them in XML files and uses Spotlight for searching. Something similar could be done with Address Book, just keep addresses in the built-in library instead of a database.
Code is here if you'd like to see.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591296.46/warc/CC-MAIN-20180719203515-20180719223515-00107.warc.gz
|
CC-MAIN-2018-30
| 1,245
| 8
|
https://www.volunteermatch.org/search/org82305.jsp
|
code
|
Red Butte Garden
- Arts & Culture
- Health & Medicine
Location300 Wakara WaySalt Lake City, UT 84108 United States
Red Butte Garden cultivates the human connection with the beauty of living landscapes. We do this through plant displays and collections, education, conservation, and as a setting for cultural enrichment and events.
Red Butte Garden is a botanic garden providing a lovely place to enjoy beautiful landscapes, entertainment and private events.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00739.warc.gz
|
CC-MAIN-2023-06
| 457
| 6
|
http://reignofdragons.wikia.com/wiki/Evolution_Strategy_Guide
|
code
|
(If you find this page accepting and helpful please feel free to go to Menu -> Status -> Referral Code and enter my code: 8NKM6KM to get a rare card!)
Hello everyone, my name is Chaotix!
Alright let's cut the chit-chat and get right to the chase!
Evolution, to most of you guys/girls out there, is just getting 4 copies of a rare card (or above), evolving, then maxing it out to the max, right?
Well there's a lot more to it than that!
According to the RoD Team, if you were to take double the amount of the card you wish to fully max out, max out each and every copy you get, then evolve those, and then repeat the process until you get the final form, then the card would gain an additional 5K ATK!
Now I know what some of you guys/girls are thinking, 5,000 ATTACK, that's insane! How in the name of God can that be true?!
Well I'm going to explain it to you! I'm going to use my personal favorite card: Demonsword as an example.
Say for instance you obtained 8 copies of the card Demonsword.
Then you would get each and every last one of those 8 cards to LV50 (MAX)
Then after that you would evolve them so that you would then have 4 cards total all of them Stage 2 Demonswords.
Then you would repeat the Enhancing Process with those cards.
Then you would evolve them and have 2 copies of Demonsword at Stage 3.
Then you would repeat the Enhancing Process with those 2 cards.
Then you would Evolve them to get:
Demonsword (Bloodlust) LV25 out of LV60
Then you would do the Enhancing Process once more to get it to LV60 and it would then evolve into it's final form!
That final form would then have approximately 5,000 more ATK than it would if you just evolved 4 copies into the final one and then enhanced it!
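The copy arithmetic in the walkthrough above is just repeated halving: each evolution consumes two fully maxed copies, so a card with four stages (like Demonsword in this example) needs 2^(4-1) = 8 base copies. A quick illustrative sketch (not actual game code; the four-stage count is taken from the steps above):

```python
def copies_needed(stages):
    """Base copies required when every intermediate form is fully maxed
    before evolving (each evolution merges two maxed copies into one)."""
    return 2 ** (stages - 1)

def evolve_chain(copies, stages):
    """Simulate the max-then-evolve loop described in the guide."""
    for _ in range(stages - 1):   # one halving per evolution step
        copies //= 2
    return copies                 # 1 final-form card if you started with enough

print(copies_needed(4), evolve_chain(8, 4))   # 8 1
```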
I will add more as I dig it up so keep a keen watch on this page guys/girls!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210463.30/warc/CC-MAIN-20180816054453-20180816074453-00119.warc.gz
|
CC-MAIN-2018-34
| 1,792
| 19
|
https://artsci.calendar.utoronto.ca/course/mat137y1
|
code
|
A conceptual approach for students with a serious interest in mathematics. Attention is given to computational aspects as well as theoretical foundations and problem solving techniques. Review of Trigonometry. Limits and continuity, mean value theorem, inverse function theorem, differentiation, integration, fundamental theorem of calculus, elementary transcendental functions, Taylor's theorem, sequence and series, power series. Applications.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504969.64/warc/CC-MAIN-20210622002655-20210622032655-00186.warc.gz
|
CC-MAIN-2021-25
| 445
| 1
|
https://www.guru.com/freelancers/rama-syaliandra
|
code
|
To provide quality web on time to meet your requirements and to fit into your budget.
I'm a freelance full stack developer, from graphic design on down to systems administration. I have founded two companies and led product development and engineering work at two others. I love data-driven design, continuous deployment, and customer deployment. I fully believe in applying the scientific method to everything I do.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141182794.28/warc/CC-MAIN-20201125125427-20201125155427-00021.warc.gz
|
CC-MAIN-2020-50
| 416
| 2
|
https://addons.mozilla.org/en-US/firefox/addon/urlbarext/reviews/78679/
|
code
|
Rated 3 out of 5 stars
At first I was really impressed! But then I realized that most operations are slower than with the keyboard: Ctrl+L and Ctrl+C is much quicker than pressing the button to copy. I don't need "Up", and even if I did: I'm faster with double-clicking a part in the Urlbar, deleting it and pressing Enter. I don't need to surf anonymously, I can do the site search via the g keyword, and I don't need the tags... that leaves me with the tinyurl, which is really nice! And I was actually looking for something like that. But all that just for the tiny?
Nonetheless: I think this is a cool addon for mouse-addicts. I'd really really suggest to change the dialogs: The green, blue and the script-font are quite ugly!
Another Idea for more buttons: Ok there are already extensions: But arrows for forward and backward would be nice too: They could filter the last number in the URL and in-/de-crease it by one. So you could browse numbered pages or pictures.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865250.0/warc/CC-MAIN-20180623210406-20180623230406-00452.warc.gz
|
CC-MAIN-2018-26
| 955
| 4
|
https://www.lugot.org/how-do-i-restart-my-internet-on-ubuntu-2/
|
code
|
How do I restart my Internet on Ubuntu?
- Use the following command to restart the server networking service. # sudo /etc/init.d/networking restart or # sudo /etc/init.d/networking stop # sudo /etc/init.d/networking start else # sudo systemctl restart networking.
- Once this is done, use the following command to check the server network status.
How do I restart netplan?
- Graphical User Interface. Bring up network management window by right-click on the top right corner network icon and locate the network connection you wish to restart then click on Turn Off .
- Command Line.
- System V init.
What is the command to restart network service in Linux? Use ifconfig eth0 down && ifconfig eth0 up (or whatever your network interface is called) to restart the network. When using sudo: sudo ifconfig eth0 down && sudo ifconfig eth0 up. Otherwise, if you are connected over ssh, you will have to reboot the machine.
Which command is used to restart a network? Restart network using ifup and ifdown
It’s one of the most basic networking commands on Linux. The ifdown command turns off all network interfaces and the ifup command turns them on.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506027.39/warc/CC-MAIN-20230921105806-20230921135806-00838.warc.gz
|
CC-MAIN-2023-40
| 1,097
| 10
|
http://rusarticlesjournal.com/1/16086/
|
code
|
How to keep an exotic animal at home? The python. We can read
in brief encyclopedic entries that pythons belong to a subfamily of snakes in the boa family. Body length reaches 10 - 12 m, the maximum in the reticulated python. 22 species of pythons live in the tropics and subtropics of the Eastern hemisphere. Large individuals living in the wild can swallow jackals, young boars and the like whole. They seldom attack humans. The tiger python is included in the Red List.
Snakes in general are considered very beautiful animals; not without reason, among many peoples the serpent is a symbol of beauty and wisdom. And the opinion that they are vile, wet and slippery is absolutely wrong. All who get themselves snakes become convinced of this.
If you touch and stroke them, you feel with surprise and pleasure what pleasant beings they are, and all the negative things usually said about snakes are mere fiction and slander: they are in fact not slippery and not wet, but very, very smooth, gentle and shiny, and certainly not vile - very endearing, even. And at the same time a quite noticeable positive energy radiates from them!
The most widespread kinds of boas that can be kept at home are the royal and tiger pythons. They are beautiful and elegant. But even more beautiful is their fellow, the albino tiger python. Albinos are individuals who from birth have no protective pigmentation.
This phenomenon, the absence of coloring in the skin, hair and iris of the eyes, occurs among all animals, and also in humans. But while in humans it does not look entirely aesthetic, the albino python can boast an extraordinary beauty: a dandyish, bright-yellow, shiny skin.
So such an exceptionally handsome python, the albino, comes about as a freak of nature. In the wild such albino boas could not survive: lacking protective coloring, they would be doomed to die in early childhood, falling prey to predators or birds. Safe existence is possible for them only in an artificial habitat: in nurseries, zoos and home terrariums. Because of their extraordinarily beautiful skin, people have even learned to breed them artificially, and such individuals are in great demand among reptile lovers.
It is better to keep these exotic animals in special terrariums fitted with additional heating. Water must always be available to the boas; pools or drinking bowls of any shape and size will do. In the terrarium it is also necessary to place enough horizontally positioned branches to bring the habitat closer to the natural one.
Keeping any boa requires high air humidity, up to 90%. To maintain it, the terrarium should be sprayed twice a day and, if possible, fitted with a pool, as far as the terrarium's size allows. The temperature should range from 26 - 28 °C during the day down to 23 °C at night. The heat source is best placed over one of the branches on which the boas can warm themselves. The basking temperature should be up to 35 °C.
Snakes are sluggish and seemingly phlegmatic, but in this lies their strength, wisdom and ... goodwill. That's right, goodwill. They never lunge all of a sudden; they seem to consider their actions, unhurriedly. And some angry person invented the comparison "stares like a snake," meaning their stony gaze. The secret here is simply that snakes have no eyelids: their eyes do not blink and do not close, so they have to sleep with open eyes.
What is interesting, small children especially love snakes, and, as they say, truth speaks through the mouths of babes. When I worked as a consulting physician in a nursery of exotic animals, I often asked the children visiting the nursery together with their parents: "Well, who did you like more?" And most of the children answered with enthusiasm: "The snakes!", despite the fact that the place was full of the most varied and interesting animals.
And eyes at them at the same time and burned. Their mothers and fathers not always decided to touch a snake. Cautiously, and sometimes and with disgust they looked at snakes. Children delayed hands pleasure, without being afraid at all. What is it? Probably, all the matter is that the head of children is not stuffed by ordinary stereotypes therefore they perceive the world such what it is, without biases yet, and a dragon - such what they are - beautiful, graceful, flexible, beautiful creatures.
Sluggish in their movements, the nonpoisonous pythons and boas can nevertheless be unexpectedly fast and decisive. Their hunting is worth seeing. When prey – say, a rat or a rabbit – is dropped into the terrarium, the python first freezes in place. Then it slowly turns toward the victim; a sharp, lightning-fast strike follows, and the victim is already struggling in the tight coils wound around its body.
Having struck, the python immediately seizes the victim by the head so that it cannot escape and, wrapping two or three elastic coils around it, begins to squeeze. With each exhalation of the victim, the killing embrace tightens further, until paralysis of the respiratory center sets in and the animal dies. Rabbits and rats are the main food of boas and pythons in captivity.
Once convinced that the prey is dead, the python begins to swallow it slowly and gradually, head first. Its mouth opens extremely wide, the throat and neck stretch like a rubber tube, and the snake literally pulls itself over the prey. The whole process takes no more than twenty minutes.
Giving the reptiles dead prey, to spare the victim suffocation, is useless: snakes show no interest in a cooled body. A snake will not take an already-killed animal, since a corpse gives off no infrared (thermal) radiation, and it is precisely that radiation that signals the weak-sighted reptile to attack. Having eaten its fill, the python coils into a tight spiral and settles down comfortably to digest.
In the wild, the intervals between meals can vary greatly: after a large kill, three weeks or even a month may pass before the next meal. In home conditions it is recommended to feed snakes once every two weeks. Remember that the rate of digestion depends on ambient temperature: at higher temperatures snakes eat more and digest faster. For example, a python two and a half meters long digests its food in 4-5 days at 28 °C, in seven days at 22 °C, and in more than fifteen days at 18 °C.
All reptiles periodically molt, renewing their "clothes" from time to time. Young, growing snakes shed roughly once a month; understandably so, since, like growing children outgrowing their dresses and pants, they must keep updating their outfit. Older snakes shed less often, depending on age, size and surroundings: some change their skin once every two or three months, others once in half a year.
And did you know that the skin a snake sheds during a molt is said to bring good luck, and even money? Among the eastern and southern peoples who deal with snakes, the small scales that make up the shed skin are thought, on close inspection, to resemble coins.
Before a molt, the snake becomes very apathetic. It lies coiled, as if listening to its inner sensations. Its appearance changes noticeably: the skin becomes faded and dull, folds and wrinkles appear, it dries out. Then comes the moment when the snake simply crawls out of its skin, as out of a stocking.
In place of the old skin is a shining, bright new dress, while the transparent top layer – the epidermis, or the shed skin – is left lying behind like a dry stocking. It is believed that if you cut or tear off a piece of this "stocking" and put it in your purse, it will attract luck and money, and you should carry it with you everywhere. I have such a pretty, pleasant little rag; do you?
Biological and Biomedical Modeling Using CompuCell3D: Tutorial. Indiana University, Bloomington, Indiana. Maciej Swat, James Glazier
Tutorial Goals • Introduce the Glazier-Graner-Hogeweg (GGH) model, aka the Cellular Potts Model (CPM), and its potential applications • Introduce CompuCell3D, a GGH-based modeling environment • Teach how to design, build and run GGH models using CompuCell3D. Recommended but not required: a laptop computer with MS Windows, OS X or a recent Ubuntu/Debian Linux
Timeline • GGH model introduction – 30 minutes • Introduction to CompuCell3D – 40 minutes • Demo of CompuCell models – 30 minutes • Hands-on tutorials – 120 minutes
Somitogenesis In most animal species, the anteroposterior (AP) body axis is generated by the formation of repeated structures: segments. The brain, thorax and limbs are formed through segmentation. In vertebrates segmentation, mesodermal structures called somites gives rises to the skeletal muscles, occipital bone, vertebrae, ribs, some dermis and vascular endothelium.
Dictyostelium morphogenesis Slug Formation (from Nick Savill)
Contact Inhibition of Motility: a "context-dependent" effect of VEGF-A (vascular endothelial growth factor A, which stimulates vasculogenesis) • VE-cadherin clusters at adherens junctions between endothelial cells • VE-cadherin binding → dephosphorylation of VEGFR-2 • VEGF-A signaling: in the presence of VE-cadherin, AKT/PKB ↑ (cell survival); in the absence of VE-cadherin, ERK/MAPK ↑ (actin polymerization: cell motility / filopodia). In the model: suppress chemotaxis at cell interfaces
Vasculogenesis Roeland Merks, Abbas Shirinifard
Vascular System Development in 3 D Abbas Shirinifard
GGH Model - an Introduction Context • How does the pattern of gene expression act through physical and chemical mechanisms to result in the structures we observe? Genetics is just the beginning. • Same mechanisms occur repeatedly in different developmental examples. • Begin by using phenomenological descriptions. In many cases very complex pathways have fairly simple effects under conditions of interest.
Main Processes in Development • • • Cell Differentiation Cell Polarization Cell Movement Cell Proliferation and Death Cellular Secretion and Absorption
Key Questions Concerning Differentiation • What are the types of cells in a given process? • What signals cause cells to change types? Due to diffusible substances? Due to cell-cell contacts? Due to cell history? Due to cell-extracellular matrix contact? • What are the thresholds for these transitions? • How do these signals interact? • What are the rates or probabilities of these transitions?
Cell Movement and Adhesion • Cells move long distances during development. • They move by protruding and retracting filopodia or lamellipodia (the leading edge). • Shape changes during movement may be random or directed. • Cells move by sticking selectively to other cells (differential adhesion), by sticking to extracellular material (haptotaxis), and by following external chemical gradients (chemotaxis). • Bulk movement is also possible, via secretion of ECM, differential cell division, or oriented cell division. Chemotaxis: play movies
Cells Adhesion • Cells of a given type have characteristic adhesion strengths to cells of the same or different types. • The cells comprising an aggregate are motile. • The equilibrium configuration of cells minimizes their interfacial energy summed over all the adhesive contacts in the aggregate.
Key Questions • How strongly do cells of one type adhere to cells of another type? • How strongly do cells of a given type adhere to ECM? • How does cell adhesion change in time?
Cells Send and Respond to Signals—Chemotaxis (Haptotaxis) • Cell moves up (down) a gradient of a diffusible (non-diffusible) chemical. • Cell senses diffusible chemicals through their receptors on surface. • Intracellular signal transduction and cytoskeleton rearrangement.
Regular Chemotaxis Gunther Gerisch (JHU)
Key Questions • How do cells move in response to chemical signals in their environment? • How do cells change type in response to these signals?
Cell Growth and Death • What signals cause cells to grow? • What signals cause cells to die? (In many cases very little cell growth or death during a given developmental phase)
Secretion and Absorption • What chemicals do cells secrete and absorb? • If they diffuse, how rapidly do these chemicals diffuse? • If they do not diffuse, what are their mechanical properties? • How stable are they (what is their decay rate)?
Feedback Loops • Not Simply: Signal Differentiation Pattern (Known as Prepatterning). • Cells Create Their Own Environment, by Moving and Secreting New Signals, so Signaling Feeds Back on Itself. • Hence Self-Organization and Robustness.
Cell-Centered Modeling • Genetics primarily drives the individual cell – Response to extracellular signals; secretion of signaling agents and extracellular matrix proteins. • To understand how genetics drive multicellular patterning, distinguish two questions: – How does genetics drive cell phenomenology? – How does cell phenomenology drive multicellular patterning?
Why a Cell-Level Model? • Most mammalian cells are fairly limited in their behaviors. They can: grow; divide; change shape; move spontaneously; move in response to external cues (chemotaxis, haptotaxis); stick (cell adhesion); absorb external chemicals (fields); secrete external chemicals; exert forces; change their local surface properties; (send electrical signals). A long list, but not compared to ~10^10 gene-product interactions. Many cells have relatively simple phenomenological behaviors most of the time.
Physical and Mathematical Background • The Glazier-Graner-Hogeweg Model (GGH) is a Metropolis-Type Lattice-Based Pseudo. Hamiltonian Model • Monte Carlo Methods – Metropolis Algorithm (Statistical Kinetic) • Pseudo-Hamiltonian Lattice-Based Methods – Ising Model
• Monte Carlo Methods Use Statistical Physics Techniques to Solve Problems that are Difficult or Inconvenient to Solve Deterministically. • Two Basic Applications: – Evaluation of Complex Multidimensional Integrals (e. g. in Statistical Mechanics) [1950 s] – Optimization of Problems where the Deterministic Problem is Algorithmically Hard (NP Complete—e. g. the Traveling Salesman Problem) [1970 s]. • Both Applications Important in Biology.
GGH Model Basics. A lattice-based model in which cells are represented as spatially extended objects occupying several lattice sites. (Figure: a 20x micrograph of an experiment alongside its mathematical/computer representation.)
Cell Id=20 Type Id=1 Cell Id=21 Type Id=2 s(x) –denotes id of the cell occupying position x. All pixels pointed by arrow have same cell id , thus they belong to the same cell Cell Id=25 Type Id=4 Cell Id=23 Type Id=3 t(s(x)) denotes cell type of cell with id s(x). In the picture above blue and yellow cells have different cell types and different cell id. Arrows mark different cell types
Cell motility – GGH dynamics. The GGH is a Monte Carlo algorithm in which cells randomly attempt to extend their boundaries by overwriting neighboring pixels. Each successful copy increases the volume of the expanding cell and decreases the volume of the cell whose pixel is overwritten. (Figure: a pixel copy, in which the "blue" pixel (newCell) replaces the "yellow" pixel (oldCell).)
Not All Pixel Copy Attempts Are Created Equal – Energy of Cellular System GGH Model is based on energy minimization using Metropolis algorithm. Most biological interactions between cells are encapsulated in the Effective Energy, E. • H is generally the sum of many separate terms. • Each term in H encapsulates a single biological mechanism. • Additional Cell Properties described as Constraints. • Metropolis algorithm: probability of configuration change
• The key to the GGH is its use of an Effective Energy or Hamiltonian, H, and Modified Metropolis Dynamics to provide the Cell Lattice Dynamics. • This Dynamics means that cells fluctuate, with an Intrinsic Motility T, representing their cytoskeletally-induced motility. • The Cell Lattice evolves at any time to gradually reduce the Effective Energy with a velocity proportional to the gradient of the Energy (Perfect Damping). For a given DH, the Acceptance Probability is: Y is a Dissipation Threshold. Also introduce concept of Copy or Protrusion Direction , which May Affect the Acceptance Probability.
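The modified Metropolis acceptance rule described above can be sketched in a few lines of plain Python. This is an illustrative sketch only: the function name and the exact way the dissipation threshold Y enters are assumptions, and CompuCell3D implements the rule internally.

```python
import math

def acceptance_probability(delta_h, temperature, threshold=0.0):
    """Modified Metropolis rule for one pixel-copy attempt.

    Attempts whose energy change is at or below the dissipation threshold
    are always accepted; the rest are accepted with Boltzmann probability,
    so a higher 'temperature' T means more membrane fluctuation
    (higher intrinsic cell motility).
    """
    if delta_h <= threshold:
        return 1.0
    return math.exp(-(delta_h - threshold) / temperature)
```

With threshold=0 this reduces to the textbook Metropolis rule: energy-lowering copies are always taken, energy-raising ones with probability exp(-dH/T).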
(Figure: an invalid pixel-copy attempt is rejected; a valid attempt may be accepted.)
Constraints • Most Important Constraints: – Cell Volume – Cell Surface Area • Additional Examples: – Cell Elongation – Viscous Drag
Volume Constraints • Most cells (except generalized cells representing fluid media) have defined volumes, enforced by a quadratic energy penalty around a target volume. • This provides an easy way to implement cell growth (gradually raise the target volume) and cell death (drive the target volume to zero).
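The volume constraint is a quadratic penalty around a target volume. The sketch below is an illustrative stand-in for the formula shown as an image on the original slide; the function name is an assumption.

```python
def volume_energy(volume, target_volume, lambda_volume):
    """Volume-constraint energy term: E_vol = lambda_v * (v - V_target)**2.

    Cell growth is implemented by gradually raising target_volume;
    cell death by driving target_volume toward zero so the cell shrinks away.
    """
    return lambda_volume * (volume - target_volume) ** 2
```

The penalty is zero exactly at the target volume and grows symmetrically on either side, so fluctuations around the target stay cheap while large deviations are strongly suppressed.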
Surface Constraints • Many Cells also have defined membrane areas. • The ratio: (d=dimension) controls the Cell’s general shape: • Small R means the Cell is floppy (underinflated basketball) • Large R means the Cell is spherical and rigid.
Field Equations • Most fields evolve via diffusion, decay, and secretion and absorption by cells: dc/dt = D * laplacian(c) - k*c + s, where s combines secretion and absorption at cell locations. • Sometimes we couple two or more fields via reaction-diffusion equations.
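A minimal explicit finite-difference update for such a field, dc/dt = D * laplacian(c) - k*c + s, might look as follows. This is a 1-D sketch with zero-flux boundaries, for intuition only; CompuCell3D ships its own PDE solvers, and all names here are illustrative.

```python
def diffusion_step(c, diffusion_const, decay_const, secretion, dt=0.1, dx=1.0):
    """One explicit Euler step of dc/dt = D*laplacian(c) - k*c + s
    on a 1-D field with zero-flux (reflecting) boundaries."""
    n = len(c)
    updated = []
    for i in range(n):
        left = c[i - 1] if i > 0 else c[i]        # reflecting boundary
        right = c[i + 1] if i < n - 1 else c[i]
        laplacian = (left - 2.0 * c[i] + right) / dx ** 2
        updated.append(c[i] + dt * (diffusion_const * laplacian
                                    - decay_const * c[i] + secretion[i]))
    return updated
```

With no decay and no secretion, zero-flux boundaries conserve the total amount of chemical, which is a quick sanity check for any such solver.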
In GGH we can couple evolving fields to cell properties/behaviors • Chemotaxis/Haptotaxis • Chemical Concentration Dependent Cell Growth rate - mitosis • Chemical Concentration Dependent Cell Differentiation
Chemotaxis Term – Most Basic Form. dE_chem = -lambda * (c(x_destination) - c(x_source)). If the concentration at the pixel-copy destination, c(x_destination), is higher than at the source, c(x_source), AND lambda is positive, then dE is negative and the copy is favored: the cell chemotaxes up the concentration gradient. Chemorepulsion can be obtained by making lambda negative.
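In code, the basic chemotaxis contribution to the pixel-copy energy change is a single line. The function name is illustrative; in CompuCell3D the Chemotaxis plugin computes this term for the chemotacting cell types.

```python
def chemotaxis_delta_e(c_source, c_destination, lam):
    """dE_chem = -lambda * (c(x_destination) - c(x_source)).

    With lam > 0, a copy toward higher concentration gives dE < 0 and is
    energetically favored (chemoattraction); a negative lam yields
    chemorepulsion.
    """
    return -lam * (c_destination - c_source)
```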
Chemotaxis – Example Compu. Cell 3 D simulation
What Is CompuCell3D? 1. CompuCell3D is a modeling environment used to build, test, run and visualize GGH-based simulations. 2. CompuCell3D has a built-in scripting language (Python) that allows users to quite easily write the extension modules that are essential for building sophisticated biological models. 3. CompuCell3D is thus NOT specialized software. 4. Running CompuCell3D simulations DOES NOT require recompilation. 5. A CompuCell3D model is described using CompuCell3D XML syntax and, when Python is used, one or more Python scripts. 6. The CompuCell3D platform is distributed with a GUI front end, CompuCell Player (or simply the Player). The Player provides 2- and 3-D visualization capabilities. 7. Models developed by one CompuCell3D user can be "replayed" by another user regardless of the operating system/hardware on which CompuCell is installed. 8. CompuCell3D is a cross-platform application that runs on Linux/Unix, Windows and Mac OS X.
Why Use CompuCell3D? What Are the Alternatives? 1. CompuCell3D allows users to set up and run their simulations within minutes, maybe hours. Developing a typical specialized GGH code takes orders of magnitude longer. 2. CompuCell3D simulations DO NOT need to be recompiled. To change parameters (in XML or Python scripts) or logic (in Python scripts), you just make the changes and re-run the simulation. With hand-compiled simulations there is much more to do; recompiling every simulation is also error-prone and often limits users to those with a significant programming background. 3. CompuCell3D is actively developed, maintained and supported. From the www.compucell3d.org website users can download manuals, tutorials and developer documentation. CompuCell3D has approximately 10 releases each year, some of which are bug-fix releases and some major. 4. CompuCell3D has many users around the world. This makes it easier to collaborate or exchange modules and results, saving time otherwise spent developing a new model. 5. The Biocomplexity Institute organizes training workshops and mentorship programs. These are great opportunities to visit Bloomington and learn biological modeling using CompuCell3D. For more info see www.compucell3d.org
Compu. Cell 3 D Architecture Object oriented implementation in C++ and Python Visualization, Steering, User Interface Python Interpreter Biologo Code Generator Kernel Runs Metropolis Algorithm Plugins Calculate change in energy PDE Solvers Lattice monitoring
Typical “Run-Time” Architecture of Compu. Cell. Player Compu. Cell can be run in a variety of ways: • Through the Player with or without Python interpreter Python • As a Python script Compu. Cell 3 D Kernel Plugins • As a stand alone computational kernel+plugins
CompuCell3D terminology 1. A pixel-copy attempt is an event in which the program randomly picks a lattice site in an attempt to copy its value to a neighboring lattice site. 2. A Monte Carlo Step (MCS) consists of a series of pixel-copy attempts. Usually the number of pixel-copy attempts in a single MCS equals the number of lattice sites, but this can be customized. 3. A CompuCell3D Plugin is a software module that either calculates an energy term in the Hamiltonian or implements an action in response to a pixel copy (a lattice monitor). Note that not all pixel-copy attempts will trigger lattice monitors to run. 4. Steppables are CompuCell3D modules run every MCS, after all pixel-copy attempts for that MCS have been exhausted. Most steppables are implemented in Python, and most alterations of cell behavior are done in steppables. 5. Steppers are modules run for those pixel-copy attempts that actually resulted in an energy calculation; they run regardless of whether the pixel copy occurred. Cell mitosis, for example, is implemented as a stepper. 6. Fixed steppers are modules run on every pixel-copy attempt.
CompuCell3D Terminology – Visual Guide. (Figure: a pixel copy, in which the "blue" pixel (newCell) replaces the "yellow" pixel (oldCell). On a 100 x 100 square lattice there are 10000 lattice sites (pixels), so each MCS (21, 22, 23, 24, ...) consists of 10000 pixel-copy attempts, after which the steppables run.)
Nearest neighbors in 2D and their Euclidean distances from the central pixel:

Neighbor order | 2D square lattice: neighbors / distance | 2D hexagonal lattice: neighbors / distance
1              | 4 / 1                                   | 6 / 1
2              | 4 / sqrt(2)                             | 6 / sqrt(3)
3              | 4 / 2                                   | 6 / 2
4              | 8 / sqrt(5)                             | 12 / sqrt(7)
Your First CompuCell3D Simulation – Cell Sorting • Users can describe their simulations using XML, Python, or both XML and Python • The most recent (development) version of CompuCell3D has a Java interface, giving support for many scripting languages through the Java scripting engine
def configureSimulation(sim):
    import CompuCellSetup

    # Configure lattice and general simulation parameters
    ppd = CompuCell.PottsParseData()
    ppd.Steps(20000)
    ppd.Temperature(5)
    ppd.NeighborOrder(2)
    ppd.Dimensions(CompuCell.Dim3D(100, 100, 1))

    # Tell CompuCell3D what cell types you will use (type name, type id);
    # remember to list Medium with type id 0
    ctpd = CompuCell.TypeParseData()
    ctpd.CellType("Medium", 0)
    ctpd.CellType("Condensing", 1)
    ctpd.CellType("NonCondensing", 2)

    # Pairwise contact energies
    cpd = CompuCell.ContactParseData()
    cpd.Energy("Medium", "Medium", 0)
    cpd.Energy("NonCondensing", "NonCondensing", 16)
    cpd.Energy("Condensing", "Condensing", 2)
    cpd.Energy("NonCondensing", "Condensing", 11)
    cpd.Energy("NonCondensing", "Medium", 16)
    cpd.Energy("Condensing", "Medium", 16)

    vpd = CompuCell.VolumeParseData()
    vpd.TargetVolume(25.0)
    vpd.LambdaVolume(1.0)

    # Specifying the initial configuration of cells
    bipd = CompuCell.BlobInitializerParseData()
    region = bipd.Region()
    region.Center(CompuCell.Point3D(50, 50, 0))
    region.Radius(40)
    region.Types("Condensing")      # cell types used to fill the region
    region.Types("NonCondensing")
    region.Width(5)                 # width of a single cell

    # Remember to register the ParseData objects
    CompuCellSetup.registerPotts(sim, ppd)         # lattice configuration
    CompuCellSetup.registerPlugin(sim, ctpd)       # cell type specification
    CompuCellSetup.registerPlugin(sim, cpd)        # energy functions
    CompuCellSetup.registerPlugin(sim, vpd)
    CompuCellSetup.registerSteppable(sim, bipd)    # initial-configuration steppable
Complete listing: the configureSimulation function shown on the two previous slides. 35 lines of straightforward code vs. at least 1000 lines of C++/Java/Fortran code.
To finish the simulation, reuse boiler-plate code from the CompuCell3D examples:

import sys
from os import environ
import string
sys.path.append(environ["PYTHON_MODULE_PATH"])

import CompuCellSetup
sim, simthread = CompuCellSetup.getCoreSimulationObjects()
configureSimulation(sim)
CompuCellSetup.initializeSimulationObjects(sim, simthread)

from PySteppables import SteppableRegistry
steppableRegistry = SteppableRegistry()

CompuCellSetup.mainLoop(sim, simthread, steppableRegistry)
Opening a Python-based simulation in the Player: go to File -> Open Simulation; click the Python script "Browse…" button to select the Python script. Do not forget to check the "Run Python script" checkbox!
Cell-sorting in XML – cellsort_2D.xml
Exercise 1 • Modify simulation so that cells produce checkerboard pattern
Crawling Neutrophil Chasing Bacterium Richard Firtel (UCSD)
Simulation Building Blocks in Compu. Cell 3 D • Four Cell Types: Bacterium, Macrophage, Wall, Red Blood Cells • Assumption 1: Bacterium secretes chemoattractant (call it ATTR) which diffuses and Macrophage responds to the ATTR gradient • Assumption 2: Macrophage secretes chemorepellant (REPL) which affects Bacterium
ppd = CompuCell.PottsParseData()
ppd.Steps(20000)
ppd.Temperature(15)
ppd.Flip2DimRatio(1.0)
ppd.Dimensions(CompuCell.Dim3D(100, 100, 1))

ctpd = CompuCell.TypeParseData()
ctpd.CellType("Medium", 0)
ctpd.CellType("Bacterium", 1)
ctpd.CellType("Macrophage", 2)
ctpd.CellType("Wall", 3, True)    # make Wall cells frozen

# Pairwise contact energies
cpd = CompuCell.ContactParseData()
cpd.Energy("Medium", "Medium", 0)
cpd.Energy("Macrophage", "Macrophage", 15)
cpd.Energy("Macrophage", "Medium", 8)
cpd.Energy("Bacterium", "Bacterium", 15)
cpd.Energy("Bacterium", "Macrophage", 15)
cpd.Energy("Bacterium", "Medium", 8)
cpd.Energy("Wall", "Wall", 0)
cpd.Energy("Wall", "Medium", 0)
cpd.Energy("Wall", "Bacterium", 50)
cpd.Energy("Wall", "Macrophage", 50)
cpd.NeighborOrder(2)

vpd = CompuCell.VolumeParseData()
vpd.LambdaVolume(15.0)
vpd.TargetVolume(25.0)

spd = CompuCell.SurfaceParseData()
spd.LambdaSurface(4.0)
spd.TargetSurface(20.0)

# Chemotaxis: choose the PDE solver and chemical field name,
# then set the chemotacting type and chemotaxis strength
chpd = CompuCell.ChemotaxisParseData()
chfield = chpd.ChemicalField()
chfield.Source("FastDiffusionSolver2DFE")
chfield.Name("ATTR")
chbt = chfield.ChemotaxisByType()
chbt.Type("Macrophage")
chbt.Lambda(2.0)

# Diffusing field ATTR: diffusion and decay constants,
# prevent ATTR from entering Wall cells, Bacterium secretion constant
fdspd = CompuCell.FastDiffusionSolver2DFEParseData()
df = fdspd.DiffusionField()
diffData = df.DiffusionData()
secrData = df.SecretionData()
diffData.DiffusionConstant(0.1)
diffData.DecayConstant(0.001)
diffData.FieldName("ATTR")
diffData.DoNotDiffuseTo("Wall")
secrData.Secretion("Bacterium", 200)

pifpd = CompuCell.PIFInitializerParseData()
pifpd.PIFName("bacterium_macrophage_2D_wall.pif")

CompuCellSetup.registerPotts(sim, ppd)
CompuCellSetup.registerPlugin(sim, ctpd)
CompuCellSetup.registerPlugin(sim, vpd)
CompuCellSetup.registerPlugin(sim, spd)
CompuCellSetup.registerPlugin(sim, chpd)
CompuCellSetup.registerSteppable(sim, pifpd)
CompuCellSetup.registerSteppable(sim, fdspd)
Exercise 2 - Making the simulation look more realistic • Introduce moving red blood cells instead of rigid walls • Make Bacterium small and Macrophage large • Introduce a few Macrophages and Bacteria • Introduce a new chemorepellant (REPL) secreted by Macrophage and affecting Bacterium (Exercise 2a)
Using PIFInitializer. Use PIFInitializer to create sophisticated initial conditions. A PIF file allows you to compose cells from single pixels or from larger rectangular blocks. The syntax of each PIF line is:

Cell_id Cell_type x_low x_high y_low y_high z_low z_high

Example (file: amoebae_2D_workshop.pif):

0 Amoeba 10 15 10 15 0 0

This creates a rectangular cell with x-coordinates ranging from 10 to 15 (inclusive), y-coordinates from 10 to 15 (inclusive), and z-coordinates from 0 to 0 (inclusive).

Python syntax:

pifpd = CompuCell.PIFInitializerParseData()
pifpd.PIFName("amoebae_2D_workshop.pif")
Let's add another cell. Example (file: amoebae_2D_workshop.pif):

0 Amoeba 10 15 10 15 0 0
1 Bacteria 35 40 35 40 0 0

Notice that the new cell has a different cell_id (1) and a different type (Bacteria).

Let's add pixels and blocks to the two cells from the previous example:

0 Amoeba 10 15 10 15 0 0
1 Bacteria 35 40 35 40 0 0
0 Amoeba 16 16 15 15 0 0
1 Bacteria 35 37 41 45 0 0

To add pixels, start a new PIF line with an existing cell_id (0 or 1 here) and specify the pixels.

This is what happens when you do not reuse a cell_id. Example (file: amoebae_2D_workshop.pif):

0 Amoeba 10 15 10 15 0 0
1 Bacteria 35 40 35 40 0 0
0 Amoeba 16 16 15 15 0 0
2 Bacteria 35 37 41 45 0 0

Introducing a new cell_id (2) creates a new cell. PIF files allow users to specify arbitrarily complex cell shapes and arrangements. The drawback is that typing a PIF file by hand is quite tedious and not recommended; typically PIF files are created using scripts. In a future release of CompuCell3D, users will be able to draw cells, or regions filled with cells, on the screen using GUI tools. Such graphical initialization tools will greatly simplify the process of setting up new simulations; this project has high priority on our TO DO list.
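Since typing PIF files by hand is tedious, they are typically generated with small scripts. A minimal generator for a grid of square cells might look like this (an illustrative sketch; the field order follows the PIF syntax described above, and the function names are assumptions):

```python
def pif_grid_lines(cell_type, nx, ny, width):
    """Build PIF lines for an nx-by-ny grid of square cells of side `width`.
    Field order per line:
    cell_id cell_type x_low x_high y_low y_high z_low z_high
    """
    lines = []
    cell_id = 0
    for j in range(ny):
        for i in range(nx):
            x0, y0 = i * width, j * width
            lines.append("%d %s %d %d %d %d 0 0" % (
                cell_id, cell_type, x0, x0 + width - 1, y0, y0 + width - 1))
            cell_id += 1
    return lines

def write_pif(filename, lines):
    """Write the generated lines to a PIF file."""
    with open(filename, "w") as f:
        f.write("\n".join(lines) + "\n")
```

For example, pif_grid_lines("Amoeba", 20, 20, 5) fills a 100 x 100 lattice with 400 square cells, which can then be "rounded up" by running the simulation.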
PIFDumper - yet another way to create an initial condition. PIFDumper is typically used to output the cell lattice every predefined number of MCS. It is useful because you may start with rectangular cells, "round them up" by running CompuCell3D, output the cell lattice using PIFDumper, and reload the newly created PIF file using PIFInitializer.

pifpd = CompuCell.PIFDumperParseData()
pifpd.PIFName("amoebae")
pifpd.frequency = 100

The above syntax tells CompuCell3D to store the cell lattice as a PIF file every 100 MCS. The files will be named amoebae.100.pif, amoebae.200.pif, etc. To reload a file, say amoebae.100.pif, use the already familiar syntax:

pifpd = CompuCell.PIFInitializerParseData()
pifpd.PIFName("amoebae.100.pif")
Writing Python Extension Modules for CompuCell3D • Most CompuCell3D simulations will require a certain level of customization. • With the "traditional" approach, this would be done in C++/Java/Fortran and would require recompilation. • CompuCell3D allows users to conveniently develop their own extension modules in Python, which DO NOT need to be recompiled. • Typically users develop steppable modules (called every MCS) that alter cellular behavior as the simulation progresses.
Printing information about all the cells present in the simulation:

class InfoPrinterSteppable(SteppablePy):
    # Class constructor, used to initialize the steppable object.
    # This boiler-plate code is typically reused without alteration.
    def __init__(self, _simulator, _frequency=10):
        SteppablePy.__init__(self, _frequency)
        self.simulator = _simulator
        # Create an iterable cell inventory
        self.inventory = self.simulator.getPotts().getCellInventory()
        self.cellList = CellList(self.inventory)

    def step(self, mcs):
        # Iterate through the cell inventory and print basic cell information
        for cell in self.cellList:
            print "CELL ID=", cell.id, " CELL TYPE=", cell.type, " volume=", cell.volume
class InfoPrinterSteppable(SteppablePy): ...   # include earlier code
def configureSimulation(sim): ...              # include earlier code

# import useful modules
import sys
from os import environ
from os import getcwd
import string

# set up search paths
sys.path.append(environ["PYTHON_MODULE_PATH"])
sys.path.append(getcwd() + "/demo")   # add search path

import CompuCellSetup
sim, simthread = CompuCellSetup.getCoreSimulationObjects()
configureSimulation(sim)
CompuCellSetup.initializeSimulationObjects(sim, simthread)

# Add Python steppables here
steppableRegistry = CompuCellSetup.getSteppableRegistry()
infoPrinterSteppable = InfoPrinterSteppable(_simulator=sim, _frequency=10)
steppableRegistry.registerSteppable(infoPrinterSteppable)

CompuCellSetup.mainLoop(sim, simthread, steppableRegistry)
Info Printer results
Exercise 3 • Enhance the cell-sorting simulation by writing a Python steppable that, at the beginning of the simulation, assigns Type ID=1 to cells in the upper half of the lattice and Type ID=2 to cells in the lower half. Hint: you have to include

compd = CompuCell.CenterOfMassParseData()

to ensure that CompuCell3D updates the center-of-mass position for each cell:

centerOfMassX = cell.xCM / float(cell.volume)
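The decision logic the hint points at can be sketched as a plain function. Per the hint, CompuCell3D stores the unnormalized coordinate sum in cell.xCM / cell.yCM, hence the division by cell.volume; the function name and the convention that "upper half" means larger y are illustrative assumptions, not the CompuCell3D API.

```python
def initial_type_for_cell(y_cm_sum, volume, lattice_height):
    """Exercise 3 sketch: return type id 1 for a cell whose center of mass
    lies in the upper half of the lattice, and 2 otherwise."""
    y_com = y_cm_sum / float(volume)   # normalize the stored coordinate sum
    return 1 if y_com >= lattice_height / 2.0 else 2
```

In a real steppable you would loop over the cell list at MCS 0 and set cell.type from this rule.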
Exercise 4 • For the cell-sorting simulation, write a Python steppable that every 100 MCS switches cell types 2 -> 1 and 1 -> 2
Summary • CompuCell3D is indeed an environment rather than a specialized program • It can be extended by writing modules in Python, C++ or Java • Actively developed and supported • Annual training workshops in Bloomington, Indiana • www.compucell3d.org
What is Haddock?
Haddock is a platform that specializes in generative AI tools for gaming engines. It enables users to access an extensive library of code created using state-of-the-art AI models like GPT-4, Copilot, and others. By utilizing text-based input, users can generate code specifically tailored for popular gaming platforms like Roblox, Unity, Minecraft, and Unreal. Haddock's primary objective is to establish the most extensive collection of AI-generated code accessible at no cost, while also delivering top-notch code generation tools to expedite the game development process.
Pros VS Cons
- Haddock offers personalized travel recommendations, collaborative trip planning, unique event generation, and over 12,000 travel inspirations, making it ideal for various trip types and inspiring off-the-grid experiences.
- However, it lacks offline access, is limited to popular cities, doesn't support flight or accommodation booking, lacks cross-platform compatibility and multilingual support, and doesn't have integrations with other apps or a user community for reviews, thus lacking travel emergency features and depending on an active internet connection.
- Monthly visits19.49K
- Avg visit duration00:05:46
- Bounce rate34.37%
- Unique users9.14K
- Total pages views97.71K
- What is ScripterAI?
- What are the pricing tiers for ScripterAI?
- What is Haddock's mission?
- What do customers say about Haddock's products?
- What are some useful links related to Haddock?
Haddock Use Cases
Access a library of code generated with paid AI tools like GPT-4, Copilot, etc. for free. Generate code for Roblox, Minecraft (Forge 1.19.X), Unity, and Unreal using text. Expanding soon to Blender.
Haddock's mission is to create the largest library of AI-generated code available at no cost, while also providing the highest-quality code generation tools accessible to gamers.
AI-powered code generation tool. Use specialized models for creating scripts from text. Free access to GPT-4, Copilot, and a code-engineered version of ChatGPT (plus) included.
Access a library of AI-Generated Roblox Scripts. Generate Roblox Code with ScripterAI (and GPT-4, ChatGPT, etc.)
Access a library of AI-Generated Unity Scripts. Generate Unity Code with ScripterAI (and GPT-4, ChatGPT, etc.)
There are many modes available for the AI tools we provide. Examples include Simulator, Module, NPC, GUI, etc.
Use advanced AI models like GPT-4 to get step-by-step instructions on creating gaming components (like 'Loading' GUIs).
There are many modes available for the AI tools we provide. Examples include RPG, NPC, Simulator, GUI, etc.
Here's what some of our customers are saying about our products.
Our Search Library will be forever free to use. For our code generation tool ScripterAI, we have three pricing tiers:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476180.67/warc/CC-MAIN-20240303011622-20240303041622-00077.warc.gz
|
CC-MAIN-2024-10
| 2,963
| 28
|
https://pro.arcgis.com/en/pro-app/latest/tool-reference/cartography/how-smooth-line-and-smooth-polygon-work.htm
|
code
|
Smoothing is a generalization operation that removes sharp angles in a line or outline. Smoothing is often used to improve aesthetic appearance in cartographic output. The Smooth Line, Smooth Polygon and Smooth Shared Edges geoprocessing tools offer two different smoothing algorithms.
Polynomial Approximation with Exponential Kernel (PAEK)
The Polynomial Approximation with Exponential Kernel (PAEK) option calculates smoothed lines using a parametric continuous averaging technique. The current point coordinates are calculated by the weighted averaging of the coordinates of all points of the source line. The weights of each point decrease with the distance along the line to the current point. In addition to averaging, approximation with polynomials of the second degree is used. The smoothed line doesn't necessarily contain all or any vertices of the source line except the end points. The result depends on one parameter. The method is stable—a minor change to the parameter causes a minor change in the result. In general, this algorithm gives better results than the Bezier Interpolation option in terms of the smoothed shapes. This option is based on the algorithm defined by Bodansky, et al, (2002).
The Smoothing Tolerance parameter is used by the (PAEK) algorithm only. This tolerance specifies the length of a moving path along an input line used to calculate the smoothed coordinates by the (PAEK) algorithm. The longer the path, the more smoothed the resulting lines. Each new location is calculated using the information within the specified length of the path that is centered at the location. In this way, the tolerance defines the region within which all coordinates are taken into account.
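To make the weighted-averaging idea concrete, here is a rough, framework-free sketch of a PAEK-style smoother. This is illustrative only, not Esri's actual implementation: the function name and the exponential weighting kernel are my own choices, and the real algorithm restricts the average to a moving path of the tolerance length rather than weighting the whole line.

```python
import math

def paek_like_smooth(points, tolerance):
    """Each interior vertex becomes a weighted average of all input
    vertices, with weights decaying exponentially with along-line
    distance; endpoints are kept fixed, as in the tool's output."""
    # cumulative along-line distance to each vertex
    dist = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist.append(dist[-1] + math.hypot(x1 - x0, y1 - y0))
    smoothed = [points[0]]
    for i in range(1, len(points) - 1):
        wsum = xsum = ysum = 0.0
        for (x, y), d in zip(points, dist):
            w = math.exp(-abs(d - dist[i]) / tolerance)  # larger tolerance -> smoother
            wsum += w
            xsum += w * x
            ysum += w * y
        smoothed.append((xsum / wsum, ysum / wsum))
    smoothed.append(points[-1])
    return smoothed
```

Note the behavior the documentation describes: the output need not pass through the interior input vertices, and a small change in tolerance produces a small change in the result.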
The Bezier Interpolation option fits Bezier curves through every line segment along an input line. The Bessel Tangent is used to connect the curves smoothly at vertices (Farin, 1997). The resulting lines pass through input vertices. This option is based on the algorithm defined by Farin, (1997).
Bodansky, Eugene; Gribov, Alexander; and Pilouk, Morakot (2002) "Smoothing and Compression of Lines Obtained by Raster-to-Vector Conversion", LNCS 2390, Springer, p. 256-265.
Farin, Gerald (1997) Curves and Surfaces for CAGD, a Practical Guide, 4th Edition, Academic Press, USA.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00031.warc.gz
|
CC-MAIN-2024-18
| 2,291
| 7
|
https://www.shroudoftheavatar.com/forum/index.php?threads/making-water-from-fountains-wells-gather-faster.149787/
|
code
|
The current gathering of water from wells and fountains is prohibitively slow. Is there any way we could get it to be faster? Currently people just buy water from NPC vendors because the time to draw it is not worth the 2 gold a bucket. There are several possible ways to fix this:
1: Draw multiple buckets at a time.
2: Decrease draw delay.
3: Make drawing water rate skill dependent.
4: Make the number of buckets of water drawn at a time skill dependent.
5: All of 1-4
6: Any of 1-4
I personally like options 3 and 4, but they are a little harder to code, so options 1 and 2 might be a better choice.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347402457.55/warc/CC-MAIN-20200529054758-20200529084758-00515.warc.gz
|
CC-MAIN-2020-24
| 604
| 1
|
https://sonichours.com/when-ethics-and-law-overlap-it-is-called/
|
code
|
The relationship between law and ethics is shaped by our values and value systems, and it is most clearly visible in our society's rules. In other words, unethical behavior is not always forbidden under the law, and what the law permits is not always ethical. Ethics alone cannot justify killing a human being or robbing him of his property, yet it is the law that attaches consequences to such actions, and those consequences are often severe while the punishment is not always fair.
The relationship between law and ethics is not always clear. Just because a certain action is legal does not necessarily make it ethically correct. For example, an act of violence can be deemed right by law without being ethically justified. The Overlap Thesis refers to this problem. There are many examples of overlaps between law and ethics. The video prompts viewers to think of specific examples in which the overlap between the two can be most obvious.
In the Western legal tradition, equity was the foundation of law. Thus, in most legal theories, law and ethics go hand in hand. The term “equity” is derived from the Latin root aequus. The word aequus has two different meanings, which led to opposing political theories on law and justice. However, the overlap theory has since been criticized. This article explores how the overlap between law and ethics is manifested in various scenarios.
When ethics and law overlap, it is called the overlap thesis. The Overlap Thesis holds that there is a necessary connection between law and morality: on this view, a rule that cannot be morally justified is defective as law. The Separability Thesis denies this connection, arguing that what the law is and what morality demands are conceptually distinct, so the law need not always be consistent with the demands of ethics.
Regardless of the legal and ethical principles, there are often conflicts between the two. In many cases, the legal aspect of a situation may overrule the ethical one. Moreover, a person might not have the same moral obligation as a law-aggressor. While it is possible to find an example of an ethics-based action, it is not always possible to determine whether a law and ethics overlap.
Sometimes, the overlap between law and ethics is not very clear. When a legal action is justified, it is not ethically justifiable. If a particular action is not ethically justifiable, it may not be ethical. A corresponding law, on the other hand, could be morally justifiable, but it is not always right. Therefore, a good example of an overlap between law and ethics is not legal-justifiable.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100550.40/warc/CC-MAIN-20231205073336-20231205103336-00035.warc.gz
|
CC-MAIN-2023-50
| 2,533
| 6
|
https://intellijobs.ai/job/Thoughtworks-Embedded%20C++%20Developer-lt27dd3BP4qOjX5ONCZc
|
code
|
Experience- 4 - 9 years of experience
Skills Required- Embedded C++, Object Oriented Programming (OOPs), Test driven development (TDD), linux
As a C++ developer at Thoughtworks, you will develop software that runs cutting-edge systems from consumer devices to industrial robots. You will use agile engineering practices to deliver high-quality software.
Found this job inappropriate? Report to us
- Expertise in C++ (v11, v14 or newer)
- Object-Oriented Programming and Design
- TDD in C++, ideally using GoogleTest
- Systems thinking (Build scalable, fault-tolerant systems)
- Knowledge of tools used to manage code, dependencies, build, package and test C++ systems (eg: make, Automake, Autoconf, GCC, Bazel, git, etc)
- Candidates from any domain, e.g. automotive, autonomous driving, or electric vehicles
- Experience in memory management
- Experience in debugging using debuggers like GDB
- Knowledge of operating systems, especially Linux
- Exposure to agile engineering and Extreme Programming (XP) practices such as Test-Driven Development (TDD), Continuous Integration (CI), and Pair Programming.
- Ability to work in a global, distributed team environment
- Excellent communication skills in written and verbal English
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362297.22/warc/CC-MAIN-20211202205828-20211202235828-00512.warc.gz
|
CC-MAIN-2021-49
| 1,215
| 16
|
http://www.guido.rincon.dial.pipex.com/livesite/lineup/kevin/kevin.htm
|
code
|
Yamaha Maple Custom Drums,
Zildjan, Paiste & Sabian Cymbals
and Taylor Acoustic Guitar.
Kevin served his apprenticeship as a sporty* fresh faced teenager
playing in dance and big bands - learning to read drum music to
the tunes of Woody Herman and Glenn Miller.
Influenced by Barriemore Barlow (Jethro Tull), Andy Ward (Camel)
and Pierre Van Der Linden (Focus) he later toured with various
rock bands before guesting on Shave the Monkey's second album 'Dragonfly'.
John Cleese taught him to play
the flute during one episode of Monty Python Flying Circus and
unable to practice drums
in the comfort of his own home decided to learn guitar - a sample
of which can be heard on "Tune for a mop Fair" on the
album 'Mad Arthur'
The only single member of the band, Kevin dreams
of falling in love with a blue eyed, leggy, long blonde haired,
slim cellist. So if you are female and fit the bill, please contact
the band. Short dark haired bombard players also considered.
*Ed's note - sporty should read spotty.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164943590/warc/CC-MAIN-20131204134903-00037-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 1,004
| 20
|
https://community.canvaslms.com/thread/35928-roll-call-attendance-report-including-sis-student-id
|
code
|
How do I access the report described below (please see link and description)? I've downloaded the roll call report, but it doesn't include the SIS Student ID column, which is essential for my project. I believe the report I am looking for is called a "Roll Call Attendance Report for an Account" and it is an Administrator-level report.
The report is described as containing the following information:
The downloaded CSV report displays all content in a list. Reports always include the following data fields: Course ID, SIS Course ID, Course Code, Course Name, Teacher ID, SIS Teacher ID, Teacher Name, Student ID, SIS Student ID, Student Name, Class Date, Attendance, and Timestamp.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540531917.10/warc/CC-MAIN-20191211131640-20191211155640-00263.warc.gz
|
CC-MAIN-2019-51
| 685
| 3
|
https://gumroad.com/l/learnistio
|
code
|
This book guides you through setting up your environment, deploying services, using different Istio service mesh patterns, and observing your released services. You will learn and understand how Istio service mesh works and how to use it with your services.
You need this book if you have wondered:
- How do I set up my development environment with Kubernetes and Istio?
- How do I expose my Kubernetes services through a gateway and associate domains with them?
- What is the most efficient way to run multiple versions of a service in production?
- What are the different ways to split traffic between services?
- Are my services behaving properly?
- How do I test my services using a service mesh?
- The 199-page book in PDF, Mobi, and ePub formats
- Istio and Kubernetes YAML files with explanations
- Free future updates and releases
Learn more at https://learnistio.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525821.56/warc/CC-MAIN-20191210041836-20191210065836-00028.warc.gz
|
CC-MAIN-2019-51
| 867
| 12
|
http://ixda.org/category/tags/interaction-design-course
|
code
|
I'm so interested in interaction/UX design that I'd like to do it full time. After reading books like "About Face", "What Makes Them Click", "Why We Love or Hate Everyday Things", etc., I started developing solutions for issues I had in mind, with the purpose of practicing design in general.
To work in this field full time, I was advised to enroll in a program. However, I'd like to keep my full time job in Dublin while studying.
I am in a dilemma and hoping to get some help here. I have graduated as an industrial designer in June 2011 and had applied to Masters in interaction design program at Ivrea (which is under Domus Academy, Milan now) and Copenhagen Institute of Interaction Design (CIID). I have been accepted in both the institutes for masters course starting in January 2012 and I chose to attend the course at Domus Academy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982954852.80/warc/CC-MAIN-20160823200914-00094-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 844
| 3
|
http://www.tomshardware.com/forum/233217-45-should-slowness-system
|
code
|
My Windows XP Pro SP2 boot process takes a long time, although all the icons for networking, audio, the Logitech mouse, and Trend Micro antivirus have already appeared on the task bar.
My system has 1GB DDR RAM dual channel 400MHZ 184 Pin OCZ brand.
Maxtor HD 200GB
Maxtor HD 120GB win 2003 on second HD
western digital 80GB strictly storage.
I have already defragged the HD using Diskeeper 10 and optimized the PC with TuneUp Utilities 2006.
Should I add 1GB more RAM? The system performance seems to degrade after long use. I notice the PC uses more of the paging file, although the system isn't running any applications other than the Firefox browser for surfing the net to get email.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663637.20/warc/CC-MAIN-20140930004103-00212-ip-10-234-18-248.ec2.internal.warc.gz
|
CC-MAIN-2014-41
| 659
| 7
|
https://github.com/spindlelabs
|
code
|
You got your platform in my application management! You got your application in my platform management!
forked from spray/spray
A suite of scala libraries for building and consuming RESTful web services on top of Akka: lightweight, asynchronous, non-blocking, actor-based, testable
forked from kif-framework/KIF
Keep It Functional - An iOS Functional Testing Framework
forked from elastic/elasticsearch
Open Source, Distributed, RESTful Search Engine
forked from spray/twirl
The Play framework Scala template engine, stand-alone and packaged as an SBT plugin
Adjusts TCP_USER_TIMEOUT for some destinations by intercepting calls to connect(2).
forked from enormego/EGOTableViewPullRefresh
A similar control to the pull down to refresh control created by atebits in Tweetie 2.
forked from Cocoanetics/DTCoreText
Methods to allow using HTML code with CoreText
forked from Cocoanetics/DTFoundation
Standard toolset classes and categories
forked from timkay/aws
Easy command line access to Amazon EC2, S3, SQS, ELB, and SDB (new!)
forked from arashpayan/appirater
A utility that reminds your iPhone app's users to review the app.
forked from square/WaxSim
Hack to get the iPhone Simulator to run on the command line
forked from steipete/SDURLCache
URLCache subclass with on-disk cache support on iPhone/iPad. Forked for speed!
forked from bengottlieb/Twitter-OAuth-iPhone
An easy way to get Twitter authenticating with OAuth on iPhone
forked from dropwizard/metrics
Capturing JVM- and application-level metrics. So you know what's going on.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049270527.3/warc/CC-MAIN-20160524002110-00113-ip-10-185-217-139.ec2.internal.warc.gz
|
CC-MAIN-2016-22
| 1,534
| 28
|
https://clearify.com/forums/qqube/excel/1625/creating-date-range-buckets-for-sales-summary-like-aging-buckets
|
code
|
I want my inventory sales by item summary to include columns of multiple date ranges to reveal sales trends, we use the following: Last 7 Days, Last 30 Days, Last 60 Days, Last 90 Days.
We currently generate each of these reports in QB, export each and combine in a single Excel spreadsheet and use lookups to create the actual report. Very time consuming.
In QQube, I understand how to use CalYr Txn Date to create a Report Filter for the entire report. But I need those four date ranges in a single pivot table. The A/R and A/P reports have Aging Buckets with date ranges. Can something similar be done in the Inventory module?
You are on the right track... Using a QQube dynamic list, add a 'calculated column' for your 'buckets'. (It has to be right next to your dynamic list, with no blank columns in between.) You would need to set up an anchor date from which to measure the 30-60-90 days. I usually insert 1-2 rows at row 1 (so my 'list' starts at row 3). This allows me to add a title, last update, or other information to my list. I usually set my anchor dates somewhere in the first row.
Your formula in the calculated column would be something like: =IF(anchordate-txndate<=7,"Last 7 days",IF(anchordate-txndate<=30,"Last 30 days",IF(anchordate-txndate<=60,"Last 60 days","")))
Then in your pivot, you make your "bucket" column a "column" and filter it so the "" (blanks) do not show...
Hope that helps,
Again, thank you for the response!
I have created an Inventory List with the data I need. I then added a calculated column that fills in the field with the text phrase, as noted above (only a slight change needed in the formula!). Once the data all looked good, I converted the List to a Pivot Table and began recreating the layout. The problem is, I don't see my custom column (simply labeled Aging) anywhere in the fields available and so I cannot add it to the Pivot Table.
Where is my custom column hiding?
For the heck of it I tried converting back to a List and my original List layout and data was completely gone. So I guess that it was a mistake to take my data-ready List and convert it to a Pivot Table. The wiki doesn't help with this. What step am I missing?
Thank you, Fran!
Glad to help... You may have to "change data source" under pivot table options to expand selection to cover new column.
So, I have rebuilt my Inventory List with the data I need including the custom bucket column as we have been discussing. I have saved a back-up of this file. Upon clicking Convert to Pivot Table, my Inventory List is gone and I am presented with a blank Pivot Table. Converting to a Table is not preserving anything I did in the List. My custom bucket column is not available for use. The "change data source" option under Pivot Tables only allows me to change external data source. There must be something I am doing wrong. The wiki describes how to create calculated columns in a List or in a Table, but doesn't mention being able to pull it from one to the other.
Fran is giving you good assistance here.
Even though QQube has the data, it requires some Excel manipulation to get this done.
What you need to do in Excel:
You should have your fields in a blank pivot table - including the ones you created with the formulas. There are two keys here:
You will have both a list on one sheet, and a pivot table on the other.
Please also remember that you can't filter out a pivot table using dates, without filtering out the inventory history. In order to sum up the proper quantities, all transactions from day one are encompassed in the pivot table. You need to create formulas, as outlined here, to put your transactions into buckets.
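If you want to prototype the same exclusive-bucket logic outside Excel before building the formulas, here is a minimal Python sketch mirroring the IF-based bucket approach discussed in this thread (the function name, labels, and the anchor date below are illustrative, not QQube fields):

```python
from datetime import date

def bucket(anchor, txn_date):
    """Assign a transaction to its first matching (exclusive) age bucket,
    like a nested-IF calculated column next to the dynamic list."""
    age = (anchor - txn_date).days
    if age <= 7:
        return "Last 7 Days"
    if age <= 30:
        return "Last 30 Days"
    if age <= 60:
        return "Last 60 Days"
    if age <= 90:
        return "Last 90 Days"
    return ""  # blanks are filtered out of the pivot

anchor = date(2024, 1, 28)  # hypothetical anchor date
```

As in the Excel version, each transaction lands in exactly one bucket (the first condition that matches), and the empty string plays the role of the blank value you filter out of the pivot table.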
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00729.warc.gz
|
CC-MAIN-2023-50
| 3,698
| 22
|
https://www.minecraftforum.net/forums/mapping-and-modding-java-edition/minecraft-mods/2980739-twitch-vs-minecraft
|
code
|
Twitch Vs Minecraft (aka Streamer Vs Chat) lets your Twitch viewers interact with your game while you play, by typing commands in chat.
This is a mod I've been working on since June, and I really want to get more streamers to try it out. Interactive streaming has finally come to Minecraft. The mod is currently available on CurseForge for Minecraft 1.12.2. All information is available on the CurseForge page, and the source code can be found on GitHub. I hope you enjoy playing my mod, and I would love to get some feedback as well as ideas for new features and commands!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584913.24/warc/CC-MAIN-20211016170013-20211016200013-00316.warc.gz
|
CC-MAIN-2021-43
| 573
| 2
|
http://www.i3dthemes.com/blog/changing-a-hyperlink-in-html/
|
code
|
One of the most important basics in HTML code is the “Hyperlink”. A typical hyperlink uses the “A” tag (A is short for Anchor) and is typically found in the following format:
<a href="http://www.somewebsite.com/">your link text</a>
The bit between the quotes, also called a URL (Universal Resource Locator), is where the link will take your visitor when they click on the words “your link text”.
There are also a couple of different types of URLS: absolute and relative.
An absolute URL is a location that is absolutely defined, so that no matter where you are putting the link, it will always find the item that you are linking to (web page, image, pdf, etc). An example of an absolute URL would be:
A relative URL has a location that is relative to the location of the page that you place it on. Examples of a relative URL would be:
As you can see, there are a couple of ways of referring to a relative document.
../ means “back up” the directory structure. So if the web page that you had the link on was in a folder such as:
And you wanted to link to
The relative version of that would be:
Using a / before the relative URL will shortcut all that .. stuff.
If you don’t use
then it simply uses your current location as a base. So, if you are on http://www.mywebsite.com/myfolder/mywebpage.htm and you specify images/myimage.jpg, you would be linking to http://www.mywebsite.com/myfolder/images/myimage.jpg
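Python's standard library resolves relative URLs with exactly these rules, which makes it handy for sanity-checking your links. A small sketch using the article's example paths:

```python
from urllib.parse import urljoin

# base page, as in the article's example
base = "http://www.mywebsite.com/myfolder/mywebpage.htm"

rel = urljoin(base, "images/myimage.jpg")      # relative to the current folder
rooted = urljoin(base, "/images/myimage.jpg")  # leading / resolves from the site root
up = urljoin(base, "../images/myimage.jpg")    # ../ backs up one folder
```

Here the plain relative URL stays inside /myfolder/, while both the leading / and the ../ forms resolve to the site root's images folder.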
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247512461.73/warc/CC-MAIN-20190222013546-20190222035546-00194.warc.gz
|
CC-MAIN-2019-09
| 1,427
| 13
|
https://learn.adafruit.com/circuitpython-motorized-camera-slider/3d-printing
|
code
|
The parts for this project are designed to be 3D printed with FDM based machines. STL files are oriented to print "as is". Parts require tight tolerances that might need adjustments to the slice settings. Reference the suggested settings below. Parts do not require any support material.
The parts can further be separated into small pieces for fitting on printers with smaller build volumes. Note: a STEP file is included for other 3D surface modeling programs such as Solidworks, Maya and Rhino.
The parts were printed and tested in PLA filament. For parts with more strength or usage outdoors, we suggest using PETG filament. The parts were sliced using Ultimaker CURA 4.x software on a Creality CR-10S 3D printer.
List of all the 3D printed parts with filenames. Note the slider-rail-mount needs to be printed twice.
- 2x slider-rail-mount.stl
- 2x slider-feet.stl
Design Source Files
The project assembly was designed in Fusion 360. This can be downloaded in different formats like STEP, SAT and more. Electronic components like Adafruit's board, displays, connectors and more can be downloaded from our Adafruit CAD parts GitHub Repo.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817144.49/warc/CC-MAIN-20240417044411-20240417074411-00235.warc.gz
|
CC-MAIN-2024-18
| 1,137
| 8
|
https://soicon.de/en/home-2/about-us-2/
|
code
|
Working at SOICON is more than consulting.
One of the core values of SOICON is that every consultant will increase his or her market value. This means more than project experience and certificates.
At SOICON, every consultant works on three development goals: producing analyses for top management that are consistent, compact and concise; enduring long periods of complex, detailed work in order to draft reliable white papers; and communicating actively with the client organization to ensure that the implementation is always on track.
We develop skills that are useful to all of a consultant's stakeholders: the clients, SOICON, and the consultants themselves.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703517559.41/warc/CC-MAIN-20210119011203-20210119041203-00322.warc.gz
|
CC-MAIN-2021-04
| 670
| 4
|
https://www.cubebrush.co/pzuh/products/fmvla/the-dungeon-top-down-tileset
|
code
|
The Dungeon - Top Down Tileset
Set of tiles that can be used to create a level/map for top-down games.
With medieval dungeon or castle theme. Very suitable to create fantasy RPG games, or other top-down games with similar theme.
Features:
- 80++ tiles
- 40++ objects and decorations, from banners, crates and barrels to animated torches, chests, and many more.
- Original files are in full vector. You can export them to any size, change colors, or customize them as you want.
- Supported file formats: Adobe Illustrator, CorelDraw, EPS, SVG, PNG
- 256px PNG files, ready to be used in your game projects. Or build your own tileset using your preferred tools, e.g. ShoeBox, TexturePacker, etc.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506481.17/warc/CC-MAIN-20230923130827-20230923160827-00039.warc.gz
|
CC-MAIN-2023-40
| 691
| 4
|
https://stackoverflow.com/questions/35858735/apply-multiple-functions-with-map/36050674
|
code
|
I have 2D data that I want to apply multiple functions to. The actual code uses xlrd and an .xlsx file, but I'll provide the following boilerplate so the output is easy to reproduce.
class Data:
    def __init__(self, value):
        self.value = value

class Sheet:
    def __init__(self, data):
        self.data = [[Data(value) for value in row.split(',')]
                     for row in data.split('\n')]
        self.ncols = max(len(row) for row in self.data)

    def col(self, index):
        return [row[index] for row in self.data]
Creating a Sheet:
fake_data = '''a, b, c,
1, 2, 3, 4
e, f, g,
5, 6, i,
, 6, ,
, , , '''
sheet = Sheet(fake_data)
In this object, data contains a 2D array of strings (per the input format), and I want to perform operations on the columns of this object. Nothing up to this point is in my control.
I want to do three things to this structure: transpose the rows into columns, extract value from each Data object, and try to convert the value to a float. If the value isn't a float, it should be converted to a str with stripped white-space.
from operator import attrgetter

# helper function
def parse_value(value):
    try:
        return float(value)
    except ValueError:
        return str(value).strip()

# transpose
raw_cols = map(sheet.col, range(sheet.ncols))
# extract values
value_cols = (map(attrgetter('value'), col) for col in raw_cols)
# convert values
typed_cols = (map(parse_value, col) for col in value_cols)

# ['a', 1.0, 'e', 5.0, '', '']
# ['b', 2.0, 'f', 6.0, 6.0, '']
# ['c', 3.0, 'g', 'i', '', '']
# ['', 4.0, '', '', '', '']
It can be seen that map is applied to each column twice. In other circumstances, I want to apply a function to each column more than two times.
Is there a better way to map multiple functions over the entries of an iterable? Moreover, is there a way to avoid the generator comprehension and directly apply the mapping to each inner iterable? Or is there a better and extensible way to approach this altogether?
Note that this question is not specific to xlrd; it is only the current use-case.
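For illustration (not part of the original question): one common way to avoid mapping each column several times is to compose the per-element steps into a single function first and then map once. A minimal self-contained sketch:

```python
from functools import reduce

def compose(*funcs):
    """compose(f, g)(x) == f(g(x)) -- apply right-to-left."""
    return reduce(lambda f, g: (lambda x: f(g(x))), funcs)

def parse_value(value):
    """Same helper idea as in the question: float if possible, else
    stripped string."""
    try:
        return float(value)
    except ValueError:
        return str(value).strip()

# one pass over the data instead of two chained map() calls
cells = [' 1', ' b ', '3']
typed = list(map(compose(parse_value, str.lower), cells))
```

With this, a column pipeline like map(parse_value, map(attrgetter('value'), col)) collapses to map(compose(parse_value, attrgetter('value')), col), and adding a third per-element step is just one more argument to compose.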
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00128.warc.gz
|
CC-MAIN-2021-25
| 1,992
| 20
|
http://think2cents.blogspot.com/2011/02/my-own-energy-drinks.html
|
code
|
The debate over whether energy drinks are healthy or risky is not new and may keep getting hotter. Before any conclusions are drawn, I prefer not to drink them, and neither does my family. Sometimes it is hard to see the effect immediately; when you finally see it, it is too late. Each of us has only one life, and there are already enough health risk factors around us that are hard to escape. If we can, why not avoid as many as possible?
If I really need an energy drink, I will make it myself from natural products. The easiest way is to make ginseng tea, jujube tea, or Chinese wolfberry tea, or a combination of the two or three. The way to make it is simple: put them into boiled, cooled water and let them soak for at least half an hour, drink the tea, and eat the ginseng or the fruit at the end.
Note: This is my personal experience and not a recommendation for anybody.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864940.31/warc/CC-MAIN-20180623035301-20180623055301-00545.warc.gz
|
CC-MAIN-2018-26
| 868
| 3
|
https://www.acmicpc.net/problem/18459
|
code
|
|Time limit||Memory limit||Submissions||Accepted||Solvers||Acceptance rate|
|1 second||512 MB||19||10||10||62.500%|
You are given the degree sequence of a tree (degrees of all its vertices, in arbitrary order).
Among all trees with the given degree sequence, find a tree with the largest maximum matching.
The first line of input contains one integer t (1 ≤ t ≤ 100 000): the number of testcases.
Next lines contain t descriptions of a test case.
The first line of each test case contains one integer n (2 ≤ n ≤ 200 000): the number of vertices.
The next line contains n integers d1, d2, . . . , dn (1 ≤ di ≤ n − 1), the degree sequence of a tree.
It is guaranteed that Σdi = 2(n − 1) and that there is at least one tree with the given degree sequence.
Also, it is guaranteed that the total sum of n in all test cases is at most 200 000.
For each test case, print one integer: the largest maximum matching among all trees with the given degree sequence.
2
10
1 1 2 2 2 2 2 2 2 2
5
4 1 1 1 1
In the first test case, you can construct a path with 10 vertices, it will have the same degree sequence and the largest possible maximum matching.
In the second test case, the only possible tree is a star (one vertex connected with all others), and the largest matching for it is 1.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738855.80/warc/CC-MAIN-20200811205740-20200811235740-00306.warc.gz
|
CC-MAIN-2020-34
| 1,292
| 14
|
http://archive.gamedev.net/archive/reference/programming/features/dx81shader1/index.html
|
code
|
We have seen ever-escalating graphics performance in PCs since the release of the first 3dfx Voodoo cards in 1995. Although this increase in horsepower has allowed PCs to run graphics faster, it arguably has not allowed graphics to run much better. The fundamental limitation thus far in PC graphics accelerators has been that they are mostly fixed-function. Fixed-function means that the silicon designers have hard-coded specific graphics algorithms into the graphics chips, and as a result the game and application developers have been limited to using these specific fixed algorithms.
For over a decade, a graphics language known as Pixar Animation Studio's RenderMan has withstood the test of time and has been the choice of professionals for high-quality photo-realistic rendering.
Pixar's use of RenderMan in its development of feature films such as "Toy Story" and "Bug's Life" has resulted in a level of photo-realistic graphics which have amazed audiences worldwide. RenderMan's programmability has allowed it to evolve as major new rendering techniques were invented. By not imposing strict limits on computations, RenderMan allows programmers the utmost in flexibility and creativity. However, this programmability has limited RenderMan to only software implementations.
Now, for the first time, low-cost consumer hardware has reached the point where it can begin implementing the basics of programmable shading similar to the RenderMan graphics language with real-time performance.
The main 3D APIs have evolved alongside graphics hardware. One of the most important new features in DirectX Graphics is the addition of a programmable pipeline that gives you an assembly-language interface to the transformation and lighting hardware (vertex shaders) and to the pixel pipeline (pixel shaders). Such a programmable pipeline gives you far more freedom to do things that have never been done before.
Shader programming is the new and real challenge for game coders. Face it...
What You Are Going To Learn
This course covers two key aspects of the new Direct3D 8.1 API and teaches you how to use them to produce stunning effects: Vertex Shader and Pixel Shader Programming. From now on, you are going to learn all the stuff necessary to program vertex and pixel shaders for the Windows-family of operating systems from scratch.
We will deal with
and much more ...
As with all my other online tutorials in the past, this tutorial will change, and perhaps grow a lot, over the next couple of weeks. I always update, change or clarify things, sometimes with e-mails from readers in mind. So don't stop writing me e-mails :-). Just watch out for the "Last modification: ..." date at the beginning to get the newest version. You will always find the newest version at www.direct3d.net.
What You Need to Know/Equipment
You need a basic understanding of the math typically used in a game engine and you need a basic to intermediate understanding of the DirectX Graphics API. It helps if you know how to use the Transform & Lighting (T&L) pipeline and the SetTextureStage() calls.
I recommend working through an introductory level text first. For example "Beginning Direct3D Game Programming" might help :-).
Your development system should consist of the following hardware and software:
If you are not the lucky owner of a GeForce3, RADEON 8500 or another new graphics card that supports vertex shaders in hardware, the standardized assembly interface will provide you with highly tuned software vertex shaders that AMD and Intel have optimized for their CPUs. Software vertex shaders are therefore the easiest way to get portable SIMD code for the main CPU vendors.
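For a first taste of the assembly interface mentioned above, here is a minimal DirectX 8 vertex shader. It assumes the combined world-view-projection matrix has been loaded into constant registers c0–c3 and that the vertex declaration maps position to v0 and diffuse color to v5:

```
vs.1.1
m4x4 oPos, v0, c0   ; transform the vertex position by the matrix in c0..c3
mov  oD0, v5        ; pass the vertex diffuse color straight through
```

This is the "hello world" of vertex shading: a fixed-function transform re-expressed as two programmable instructions, which you can then extend freely.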
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00456.warc.gz
|
CC-MAIN-2023-14
| 3,632
| 16
|
https://www.paratope.co/blog/cross-platform-development
|
code
|
From the very beginning, we decided that targeting as many platforms as possible would provide long term success for Skyclimbers.
That included emerging platforms such as cloud gaming, which has been pioneered by companies such as Google and Microsoft. In particular Google Stadia came to the forefront as a valuable platform with a true “Library in the Clouds” business model, and one of the largest companies in the world behind it.
To support platforms like Stadia, Steam Deck, Switch, and next-generation consoles, we needed a versatile engine and an adaptive design philosophy. This led us to choose Unity as the baseline for what would become our Ion Engine today. Technologies such as Unity's Scriptable Render Pipeline and the Vulkan graphics API mean greater support for mobile/Switch, Linux/Stadia, and low-end Windows PC architecture.
Since the release of our Alpha on Steam, we have had over 1,000 participants and the opportunity to test on dozens of hardware profiles. This has been an eye-opening experience, despite our best efforts to cover all performance profiles prior to release. We have dedicated resources to building benchmark systems of varying specifications in our offices, in addition to gathering groups of dedicated testers, in order to cover as many test cases as possible.
Ultimately, we need more time than initially projected to cover all hardware profiles on Steam, ranging from high-end discrete GPUs and dedicated CPUs from AMD, Intel, and NVIDIA all the way down to APUs such as the one found in the Steam Deck. This will affect our Stadia release timeline, and we therefore have to push the Alpha testing duration into a TBA state so as not to mislead our community.
In order to maintain cross-platform support, and eventually cross-play, we need to consider the porting requirements for each platform in question. We have already forked a version of our codebase specific to Linux-based platforms such as Stadia and Steam Deck, but we are also keeping our options open for automatic porting layers such as Proton for Linux-based Steam devices. Google has also confirmed that a similar feature is in development with select partners for running unmodified Windows games on Stadia. Unfortunately this technology is still a ways out, but should it arrive sooner, our release plans will be expedited.
A commitment to our players: since our priority has shifted to focus primarily on the Steam ecosystem in the short term, we are offering every Stadia backer the chance to also redeem a Steam key for the Alpha, and we are working with Valve to provide the same opportunity to backers from console platforms. As we get closer to release on new platforms, players will have access to closed Alpha and Beta testing outside of Steam.
We couldn't be happier with the response from each gaming platform, and in particular we would like to say thank you to #TeamStadia for their outspoken support.
Until next time...
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474377.60/warc/CC-MAIN-20240223085439-20240223115439-00419.warc.gz
|
CC-MAIN-2024-10
| 2,943
| 14
|
https://communities.intel.com/thread/52278
|
code
|
I got much further by switching to Vagrant precise32 and installing the dependencies from the BSP guide. Now the build stops here:
ERROR: Function failed: do_validate_branches (see /home/vagrant/galileo_build/meta-clanton_v1.0.0/yocto_build/tmp/work/clanton-poky-linux-uclibc/linux-yocto-clanton/3.8-r0/temp/log.do_validate_branches.27492 for further information)
ERROR: Logfile of failure stored in: /home/vagrant/galileo_build/meta-clanton_v1.0.0/yocto_build/tmp/work/clanton-poky-linux-uclibc/linux-yocto-clanton/3.8-r0/temp/log.do_validate_branches.27492
Log data follows:
| DEBUG: Executing shell function do_validate_branches
| ERROR: Function failed: do_validate_branches (see /home/vagrant/galileo_build/meta-clanton_v1.0.0/yocto_build/tmp/work/clanton-poky-linux-uclibc/linux-yocto-clanton/3.8-r0/temp/log.do_validate_branches.27492 for further information)
ERROR: Task 236 (/home/vagrant/galileo_build/meta-clanton_v1.0.0/meta-clanton-bsp/recipes-kernel/linux/linux-yocto-clanton_3.8.bb, do_validate_branches) failed with exit code '1'
NOTE: Tasks Summary: Attempted 919 tasks of which 725 didn't need to be rerun and 1 failed.
This log message:
suggests to me that it may be resource exhaustion on your VM; I have seen such things in the past, when builds were failing with mysterious errors for exactly that reason. Adding resources to the VM or decreasing the number of threads and tasks in local.conf always helped with such errors.
Default compilation settings in the yocto_build/conf/local.conf are rather aggressive and require something like at least 2 vCPU VM with 4GB vRAM (at least that's the config which worked for me and e.g. 2GB vRAM were failing). The easiest way to fix this type of errors is to tweak two variables in local.conf:
BB_NUMBER_THREADS = "12"
PARALLEL_MAKE = "-j 14"
The above are the defaults; the rule of thumb is to set 2 threads and make processes per vCPU, which is a safe, conservative setting. With such settings, 2GB vRAM worked fine for me, AFAIR.
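With that rule of thumb, a 2 vCPU VM would get the following conservative settings in yocto_build/conf/local.conf (values illustrative; scale them with your vCPU count):

```
BB_NUMBER_THREADS = "4"
PARALLEL_MAKE = "-j 4"
```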
As far as dependencies are concerned - the BSP Build Guide has them all, I didn't need anything else after carefully installing all what's mentioned there.
BTW, a while ago I've created a VM image, which has all the necessary prerequisites installed and checked to compile fine, check this thread out, maybe it will be of use for you: Linux VM pre-configured for Galileo BSP building - released
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511175.9/warc/CC-MAIN-20181017132258-20181017153758-00078.warc.gz
|
CC-MAIN-2018-43
| 2,372
| 16
|
http://joyofexcellence.com/Home/Blog
|
code
|
I've had a web site for years now, starting with a self-hosted (yes, I had my own web servers at home) Microsoft FrontPage website. Through the years I've used it for many things, such as sharing vacation pictures, my library, wine, motorcycles, and technology. What I've realized now is that a blog meets many of those needs. The only needs of my personal site not met by a blog are my wine inventory and my library.
So with this version I am learning to use Microsoft's ASP.NET MVC (Model-View-Controller) 2 web capabilities. I've been impressed with it for quite some time but realized that my timeframe was such that I should wait until the release of Visual Studio 2010 and MVC 3.
So going forward, there really won't be much on this site other than books and wines. The rest will be on my blog site.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00480.warc.gz
|
CC-MAIN-2021-43
| 812
| 3
|
https://kronosnxs.dev/installing-cloudpanel-on-ubuntu-22-04/
|
code
|
CloudPanel is a simple hosting control panel that lets you set up WordPress, PHP applications, Node.js, Python and static websites on a single server instance.
To install CloudPanel you need a dedicated server or a VPS with a clean installation of Ubuntu 22.04 or Debian 11 with root access. In this guide we will be using Ubuntu 22.04.
If you don’t have a server or VPS see our guide on how to deploy a server with Vultr.com: Deploying a VPS with Vultr.com
Log in to your server via SSH. Windows users can use PuTTY (https://putty.org/); Linux and Mac users can use their favorite SSH client.
sudo apt update
sudo apt upgrade
Answer with y to upgrade the available packages.
Also make sure the following packages are installed with the following commands:
sudo apt install curl
sudo apt install wget
When all of the above is done, it's time to run the CloudPanel installer. You can choose between 4 different database flavors. We will list the commands for all 4 of them, so you can pick the one you like most. (MariaDB 10.6 would be our personal choice.)
MySQL 8.0 (the installer default):
sudo curl -sSL https://installer.cloudpanel.io/ce/v2/install.sh | sudo bash
MariaDB 10.9:
sudo curl -sSL https://installer.cloudpanel.io/ce/v2/install.sh | sudo DB_ENGINE=MARIADB_10.9 bash
MariaDB 10.8:
sudo curl -sSL https://installer.cloudpanel.io/ce/v2/install.sh | sudo DB_ENGINE=MARIADB_10.8 bash
MariaDB 10.6: (our choice)
sudo curl -sSL https://installer.cloudpanel.io/ce/v2/install.sh | sudo DB_ENGINE=MARIADB_10.6 bash
When the installation is completed you will be told to visit: https://yourserverip:8443
In most browsers you will see a warning about a self-signed certificate, you can ignore this warning and continue to your panel and create your administrator account.
CloudPanel Custom Domain
If you want to access your control panel through its own domain e.g.: cp.example.com you will need to do the following.
First make sure that your cp.example.com domain points to your server IP address. (if you are using Cloudflare for your DNS make sure to turn off proxy for this subdomain)
Now in CloudPanel click on Admin Area
Followed by clicking on Settings:
On the General tab enter the subdomain you want to use for your CloudPanel, followed by pressing on Save. When you click on save a Let’s Encrypt Certificate will be issued for the custom domain.
Now you can visit your CloudPanel with your own custom domain. (https://cp.example.com without :8443)
This tutorial has been last tested on 12 October 2022 on a Vultr.com Cloud Compute instance. See: Deploying a VPS with Vultr.com on how to deploy a VPS on Vultr.com capable of running CloudPanel.
Problems during installation
If some packages are held back during your update and upgrade process, you can install them manually with the following command (replace <package_name> with the real package name):
sudo apt install <package_name>
In case multiple packages were not upgraded, use the same command but list all the packages one after another, separated by spaces (replace <package_name>, <package_name-2>, etc. with the real package names):
sudo apt install <package_name> <package_name-2>
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816875.61/warc/CC-MAIN-20240414064633-20240414094633-00803.warc.gz
|
CC-MAIN-2024-18
| 3,099
| 31
|
http://www.wmtw.com/blitz-8/Fitzpatrick-Trophy-Semifinalists-named/17580678?view=print
|
code
|
Fitzpatrick Trophy Semifinalists named
The award is given to the top senior high school football player in the state.
FITZPATRICK AWARD SEMIFINALISTS
Bobby Begin-Thornton Academy
Copyright 2012 by WMTW.com All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983077957.84/warc/CC-MAIN-20160823201117-00047-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 289
| 5
|
https://www.phpflow.com/php/php-cache-how-to-cache-xml-file-in-php/
|
code
|
Caching is an important concept for website performance; the cache plays a very important role in improving the performance of a web application.
For example, suppose we access a web service method very frequently, but its response does not change often.
In that case, calling the web service every time is very costly in terms of website performance.
Instead, we keep an XML copy of the response in a cache folder and reuse it until the response changes. The question now is how to identify that the response or file has changed.
You Can also checkout other tutorials of PHP Cache,
There are two methods to control cache expiration:
- Set a fixed expiry time for the file in the cache folder.
- File-dependent caching (compare the modification time of the cache file with that of the source file).
Below is the code to create a cache file with a fixed expiry time.
$path = 'cache/phpflow.xml';
if ((!file_exists($path) || time() - filemtime($path) > 60) && ($cache = fopen($path, 'w+'))) {
    fwrite($cache, $freshXml); // $freshXml: the newly fetched response (placeholder variable)
    fclose($cache);
}
$cache = fopen($path, 'r'); // the cache is fresh; open it for reading
How To Use PHP Cache Method in HTML file
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643380.40/warc/CC-MAIN-20210619020602-20210619050602-00080.warc.gz
|
CC-MAIN-2021-25
| 1,107
| 13
|
https://docs.mcafee.com/bundle/advanced-threat-defense-4.4.0-installation-guide/page/GUID-DA7FF8E2-3F9F-4E51-BD4F-342432915EE2.html
|
code
|
Plan your deployment: Before you install Advanced Threat Defense, verify that you have everything you need, and that your environment meets the minimum system requirements.
Requirements: To ensure that your deployment is successful, your environment must meet the minimum requirements.
Hardware specifications: Before you set up the Advanced Threat Defense Appliance, review the hardware specifications.
System environmental limits: These are the system environmental limits for the Advanced Threat Defense Appliance.
Default ports used in Advanced Threat Defense communication: The Advanced Threat Defense Appliance uses many ports for network communications.
Warnings and cautions: Read and follow these safety warnings when you install the Advanced Threat Defense Appliance. Failure to observe these safety warnings could result in serious physical injury.
Deployment checklist: To make sure that your network is ready to set up Advanced Threat Defense, review the deployment checklist.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360745.35/warc/CC-MAIN-20210228084740-20210228114740-00169.warc.gz
|
CC-MAIN-2021-10
| 979
| 1
|
https://community.jamf.com/t5/jamf-pro/ot-apple-s-ast-vs-asd/td-p/70137
|
code
|
I was poking around our GSX account looking for AST updates when I came across ASD. Thinking they were the same thing, just renamed (I'm about as n00bish at the GSX site as you can get), I downloaded the latest version. Upon discovering this ASD thing needed to be on a USB stick / HD, I proceeded to install it on there.
Since I had just gotten a machine that needed a hardware test run, I ran it on there, but it said it wasn't compatible. Do I need to load a specific version for each machine? I ended up downloading one for that specific machine, which worked, but that seems like a bit of a PITA to do for every hardware model we have.
What's the point of AST (which said that the machine passed) if the OS version of ASD then found a failure?
They are both suites of tests. Short answer: Use AST (by which I mean MRI) to verify everything is plugged in. Save ASD for when you're really stumped about why something is busted or to completely rule out hardware failure in troubleshooting.
Long answer: AST most notably includes MRI, which is basically a roll call and quick test for available hardware, essentially "is everything plugged in?" It also includes things like interactive keyboard and trackpad tests, battery tests, and cooling-system diagnostics. AST also hooks into GSX, as it is a net-booted test environment, so it may need to be run to qualify an in-warranty repair (which is why you see the people at the Genius Bar run it at Apple Stores during appointments).

ASD, on the other hand, runs in two variants, EFI or OS, and yes, it needs to be loaded on a hard drive or thumb drive, though you can partition a drive many times and pre-load all the model-specific tests onto different partitions. EFI tests lower-level hardware; OS can test all kinds of things, like sleep/wake, graphics, etc. Keep in mind that no test is perfect, and both AST and ASD can have false positives.

For the ASD version you need, you can search "ASD" in the big search bar on GSX and look for the marketing name, like MacBook Pro (15-inch, Late 2011), of the model you're looking for, and it'll give you a download link for the appropriate version. There's extensive documentation on AST too if you wanna know what MRI checks.
AST was designed to be a customer-facing utility for AASPs. Techs are required to use it and like said, it does a quick check of a given component.
ASD is what I tend to trust more. I can set it in looping mode and deal with intermittent issues and it thoroughly can check an individual component (say the logic board) for individual smaller failures.
I use them in tandem myself...I check in with AST, do my work, make sure it passes ASD and then use AST again for any reason Apple calls for it (say OS tests or required specific tests for an issue.) Once it passes both or is within Apple spec, only then do I consider a repair done.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336880.89/warc/CC-MAIN-20221001163826-20221001193826-00181.warc.gz
|
CC-MAIN-2022-40
| 2,881
| 8
|
http://lists.mplayerhq.hu/pipermail/mplayer-docs/2006-November/008434.html
|
code
|
[MPlayer-DOCS] [matrix-encoded audio] - other files
Dominik 'Rathann' Mierzejewski
dominik at rangers.eu.org
Thu Nov 2 01:05:52 CET 2006
On Thursday, 02 November 2006 at 01:02, Corey Hickey wrote:
> Yeshua Watson wrote:
> >I don't have a high end system to test so all of this is useless to me.
> Neither do I. I just use -af channels, -af pan, and, if worst comes to
> worst, I crawl under the table and plug my headphones into different
> places on the sound card. :)
I have a 4.0 setup, so if you need any surround testing, just throw
an URL at me.
MPlayer developer and RPMs maintainer: http://rpm.greysector.net/mplayer/
There should be a science of discontent. People need hard times and
oppression to develop psychic muscles.
-- from "Collected Sayings of Muad'Dib" by the Princess Irulan
More information about the MPlayer-DOCS mailing list
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644574.15/warc/CC-MAIN-20230529010218-20230529040218-00168.warc.gz
|
CC-MAIN-2023-23
| 835
| 17
|
https://sourceforge.net/directory/license:ibm/
|
code
|
Extra tools for OpenOffice under weak copyleft or other licenses
A space to store classic OOo dependencies that cannot be easily redistributed in Apache OpenOffice's SVN tree, Initially this was meant for copyleft tarballs only but it is also pretty handy to mirror other file dependencies.
The IBM Toolbox for Java / JTOpen is a library of Java classes supporting the client/server and internet programming models to a system running OS/400, i5/OS, or IBM i. JTOpenLite is a set of lightweight Java classes suitable for use on mobile devices. It provides Java application access to IBM i: DDM access, basic JDBC access, command call, program call access, etc. Packages are delivered by the following PTFs on the IBM i platform: V7R3: SI66703, V7R2: SI66702, V7R1: SI66701. The latest version is JTOpen 9.5, released 08 May 2018.
Jikes is a compiler for the Java language. The Jikes project strives for strict adherence to the Java Language and Java Virtual Machine Specifications. Jikes' most popular feature is its extremely fast compile speed.
Epydoc is a tool for generating API documentation for Python modules, based on their docstrings. Epydoc supports two output formats (HTML and PDF), and four markup languages for docstrings (Epytext, Javadoc, ReStructuredText, and plaintext).
The Sleuth Kit is a C++ library and collection of open source file system forensics tools that allow you to, among other things, view allocated and deleted data from NTFS, FAT, FFS, EXT2, Ext3, HFS+, and ISO9660 images.
The latest versions of OpenSSH for AIX are available on https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp . The latest version of OpenSSH for AIX 5.8 has been released on 25-Oct-2011. If you have any questions for OpenSSH development on AIX you can now send email to: firstname.lastname@example.org.
The Aglets Software Development Kit (ASDK) is a framework and environment for developing and running mobile agents. Mobile Agents are a type of software agents that have the unique ability to transport themselves from one system to another. Doing so, an
Fully packaged linux distribution to provide internet access and resource management to small and medium companies.
The librtas package can now be found on github: https://github.com/nfont/librtas The librtas package provides an interface for Run-Time Abstraction Services (RTAS) calls on PAPR platforms. The libraries allow users to examine and manipulate hardware, and parse RTAS events.
Driver for the ApplePro keyboard for all Windows 32 Bit Versions
Java Console is a Java command-prompt tool intended for software developers and system administrators, and as a plug-in to other applications. It is a very powerful console client; just run it and have fun.
RPGUnit is a regression testing framework similar to JUnit (http://www.junit.org/). Developers use it to implement unit tests in RPG ILE, a language found mainly on the iSeries (a.k.a. AS/400) platform.
NT command line tool for automating FTP transfers
This is a client side XML Editing and XML Message generation framework which is flexible/powerful yet standard compliant and easy to implement. The idea is based on SGML Architectural Form Processing and XHTML Modularization.
The objective of this project is to provide nice look and feel for Java Applets running on current browsers without the support of Java 2. All components are based on AWT, but they are written by taking the Swing as a reference.
This application is a minimal Cocoon 2 application. Its goal is to be a good starting point for any new Cocoon 2 project.
This project provides a couple of Log4j appenders to log into Domino databases from various contexts like servlets, Java clients and, of course Domino Java agents.
Dynamic Probe Class Library (DPCL) is an object based C++ class library that provides the necessary infrastructure to allow tool developers and sophisticated tool users to build parallel and serial tools through technology called dynamic instrumentation.
EML50Combine merges two or more XML schema files defined by OASIS (http://www.oasis-open.org) in the Election Markup Language 5.0 (EML). The merged schema file can be inputted into language binding generators like JAXB or Apache Xmlbeans.
An umbrella project for a number of small eclipse enhancements
ForeignDesk - Integrated Translation Environment
A modular and distributable performance harness for load testing application servers. Developed in java and designed in the same spirit as JUnit (www.junit.org).
Klang is a stack-based, post-fix calculator language.
The Open SystemC Initiative (OSCI) is a collaborative effort to support and advance SystemC as a de facto standard for system-level design. SystemC is an interoperable, C++ SoC/IP modeling platform for fast system-level design and verification
This will be a server-client MMORPG similar to a mud (ASCII based). However, it will include sounds and graphic, but still enough ascii to leave the user enough freedom of imagination. Written in Java with full OOD, this will be multi-platform. Both Cl
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863811.3/warc/CC-MAIN-20180520224904-20180521004904-00570.warc.gz
|
CC-MAIN-2018-22
| 5,034
| 26
|
https://ukblacktech.com/job/software-engineer-sensor-platform/
|
code
|
Vivacity Labs is a start-up that is growing exponentially in one of the fastest growing sectors, with a talented team and the very brightest of futures. We make cities smarter. We gather real-time data from our sensors to reduce congestion, spot dangerous manoeuvres on the road to improve safety, and support autonomous vehicles.
Using Reinforcement Learning techniques at the forefront of academic and research thinking, our award winning teams optimise traffic lights to prioritise cyclists and improve air quality. To support this, we gather anonymised data through simple, efficient, and affordable Computer Vision based sensors.
Our work makes a real difference to real people. All our solutions are community-centric, using ‘privacy by design’ principles. Our ultimate goal is to make the European vision of a Smart City – one which makes the city work effectively, for the community.
We are innovators in our field and have a strong, open and friendly culture that supports those looking for opportunity, challenge and autonomy.
Salary range: £45,000-65,000
The production engineering teams at Vivacity produce tools to administer our advanced sensor network, implement pipelines processing huge volumes of traffic data, build beautiful dashboards, and manage our world-leading AI systems controlling critical traffic infrastructure.
You will join one of our agile, 5-7 person teams, working with modern languages including Golang, and building on top of cutting-edge infrastructure such as Kubernetes, Kafka and TimescaleDB.
You will participate in extensive code reviews, pair programming, and contribute towards comprehensive testing. You will also have the opportunity to take ownership of products and shape their future.
You are an experienced software engineer with strong technical and teamwork skills. You are familiar with modern software development methodologies and comfortable working in an agile environment. You have:
– 3 years of professional software development experience
– Experience of Kubernetes, Docker, Prometheus and Kafka
– Enjoy learning and picking up new technologies, frameworks and skills
– Like brainstorming and collaboratively solving open problems
Vivacity welcomes applications from candidates of all backgrounds and embraces diversity within our teams. If you are in any doubt as to whether you would be a good fit, please get in touch, or apply anyway and we will get back to you. We look forward to hearing from you!
We offer flexible working policies and a benefits package that includes a personal development budget, an annual company away trip, and regular, varied events. This is an exciting opportunity to take an active part in shaping the future of an energetic company dedicated to revolutionising the way our cities work.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506028.36/warc/CC-MAIN-20230921141907-20230921171907-00652.warc.gz
|
CC-MAIN-2023-40
| 2,794
| 15
|
https://www.internations.org/mumbai-expats/forum/where-to-live-432170
|
code
|
Where to live (Mumbai)
I will be moving to Mumbai shortly (within 1 week) and I'm looking for suggestions on where to look for an apartment. The address of my office will be:
HDIL Kaledonia Commercial Complex,
2nd Floor, Koldongri Road,
Vijay Nagar, Andheri East
Mumbai - Protected content
Where should I mainly be looking for apartments? It's just me so Protected content will be enough and the budget is not my main concern.
Thankful for any help.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864546.30/warc/CC-MAIN-20180622143142-20180622163142-00448.warc.gz
|
CC-MAIN-2018-26
| 448
| 8
|
https://www.rdocumentation.org/packages/spatstat/versions/1.3-4/topics/mpl
|
code
|
Fit Point Process Model by Maximum Pseudolikelihood
Fits a point process model to an observed point pattern by the method of maximum pseudolikelihood.
mpl(Q, trend=~1, interaction=NULL, data, correction="border", rbord=0, use.gam=FALSE)
- Q: A data point pattern (of class "ppp") to which the model will be fitted, or preferably a quadrature scheme (of class "quad") containing this pattern.
- trend: An R formula object specifying the spatial trend to be fitted. The default formula, ~1, indicates the model is stationary and no trend is to be fitted.
- interaction: An object of class "interact" describing the point process interaction structure, or NULL indicating that a Poisson process (stationary or nonstationary) should be fitted.
- data: An optional data frame of spatial covariates, evaluated at the locations given in the quadrature scheme Q.
- correction: The name of the edge correction to be used. The default is "border", indicating the border correction. Other possibilities may be available.
- rbord: If correction = "border", this argument specifies the distance by which the window should be eroded for the border correction.
- use.gam: Logical flag; if TRUE then computations are performed using gam instead of glm.
This function fits a point process model to an observed point pattern by the method of maximum pseudolikelihood. The model may include spatial trend, interpoint interaction, and dependence on covariates. The algorithm is an implementation of the method of Baddeley and Turner (2000), which approximates the pseudolikelihood by a special type of quadrature sum generalising the Berman-Turner (1992) device.
The argument Q should be either a point pattern or a quadrature scheme. If it is a point pattern, it is converted into a quadrature scheme.
A quadrature scheme is an object of class "quad" which specifies both the data point pattern and the dummy points for the quadrature scheme, as well as the quadrature weights associated with these points. If Q is simply a point pattern, then it is interpreted as specifying the data points only; a set of dummy points specified by default.dummy() is added, and the default weighting rule is invoked to compute the quadrature weights.
The usage of mpl() is closely analogous to that of the S-Plus/R function glm(). The analogy is: the point process model to be fitted is specified by the arguments trend and interaction, which are respectively analogous to the formula and family arguments of glm().
Systematic effects (spatial trend and/or dependence on spatial covariates) are specified by the argument trend. This is an S-PLUS/R formula object, which may be expressed in terms of the Cartesian coordinates x and y, the variables in the data frame data (if supplied), or both. It specifies the logarithm of the first order potential of the process. The formula should not use names that are reserved for internal use. If trend is absent or equal to the default, ~1, the model to be fitted is stationary (or at least, its first order potential is constant).
Stochastic interactions between the random points of the point process are defined by the argument interaction. This is an object of class "interact", which is initialised in a very similar way to the usage of family objects in glm() and gam(). See the examples below. If interaction is missing or NULL, then the model to be fitted has no interpoint interactions; that is, it is a Poisson process (stationary or nonstationary according to trend). In this case the method of maximum pseudolikelihood coincides with maximum likelihood.
The argument data, if supplied, must be a data frame with as many rows as there are points in Q. The i-th row of data should contain the values of spatial variables which have been observed at the i-th point of Q. In this case Q must be a quadrature scheme, not merely a point pattern. Thus, it is not sufficient to have observed a spatial variable only at the points of the data point pattern; the variable must also have been observed at certain other locations in the window. The variable names x, y and marks are reserved for the Cartesian coordinates and the mark values, and should not be used for variables in data.
The argument correction is the name of an edge correction method. The default, correction="border", specifies the border correction, in which the quadrature window (the domain of integration of the pseudolikelihood) is obtained by trimming off a margin of width rbord from the observation window of the data pattern. Not all edge corrections are implemented (or implementable) for arbitrary windows. Other options depend on the argument interaction, but generally include correction="periodic" (the periodic or toroidal edge correction, in which opposite edges of a rectangular window are identified) and correction="translate" (the translation correction; see Baddeley 1998 and Baddeley and Turner 2000). For pairwise interaction models there is also Ripley's isotropic correction, correction="isotropic".
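The border correction can be sketched numerically (in Python; the unit-square window and rbord = 0.1 are illustrative assumptions): quadrature points in the margin of width rbord are simply excluded from the domain of integration.

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random((200, 2))   # quadrature points in the unit-square window
rbord = 0.1

# Border correction: keep only points at distance >= rbord from the
# window boundary, i.e. inside the window eroded by a margin of width rbord.
inside = np.all((pts >= rbord) & (pts <= 1 - rbord), axis=1)

# The pseudolikelihood sum would then run over pts[inside] only.
eroded_area = (1 - 2 * rbord) ** 2
print(round(eroded_area, 2), int(inside.sum()) < len(pts))  # -> 0.64 True
```

Roughly a fraction 0.64 of uniformly placed points survive the trimming here, matching the eroded window's area.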
The fitted point process model returned by this function can be printed (by the print method print.ppm) to inspect the fitted parameter values. If a nonparametric spatial trend was fitted, it can be extracted using the predict method predict.ppm.
This algorithm approximates the log pseudolikelihood by a sum over a finite set of quadrature points. Finer quadrature schemes (i.e. those with more quadrature points) generally yield a better approximation, at the expense of higher computational load. Complete control over the quadrature scheme is possible; see quadscheme for an overview.
Note that the method of maximum pseudolikelihood is believed to be inefficient and biased for point processes with strong interpoint interactions. In such cases, it is advisable to use iterative maximum likelihood methods such as Monte Carlo Maximum Likelihood (Geyer, 1999) provided the appropriate simulation algorithm exists. The maximum pseudolikelihood parameter estimate often serves as a good initial starting point for these iterative methods. Maximum pseudolikelihood may also be used profitably for model selection in the initial phases of modelling.
See the comments above about the possible inefficiency and bias of the maximum pseudolikelihood estimator. The accuracy of the Berman-Turner-Baddeley approximation to the pseudolikelihood depends on the number of dummy points used in the quadrature scheme. The number of dummy points should at least equal the number of data points.
The parameter values of the fitted model do not necessarily determine a valid point process. Some of the point process models are only defined when the parameter values lie in a certain subset. For example, the Strauss process only exists when the interaction parameter $\gamma$ is less than or equal to $1$, corresponding to a value of $\log\gamma$ less than or equal to $0$. The current version of mpl maximises the pseudolikelihood without constraining the parameters, and does not apply any checks for sanity after fitting the model.
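To make the Strauss example concrete, here is a small Python sketch (illustrative only, not spatstat code) of the Strauss conditional intensity lambda(u | x) = beta * gamma^t(u, x), together with the validity check on log(gamma) that mpl itself does not perform:

```python
import math

def strauss_cif(u, points, beta, gamma, r):
    """Strauss conditional intensity lambda(u | x) = beta * gamma^t(u, x),
    where t(u, x) counts the points of x lying within distance r of u."""
    t = sum(1 for p in points if math.dist(u, p) < r)
    return beta * gamma ** t

def is_valid_strauss(log_gamma):
    # The Strauss process exists only for gamma <= 1, i.e. log(gamma) <= 0;
    # mpl() performs no such check after fitting.
    return log_gamma <= 0

pts = [(0.2, 0.2), (0.25, 0.22), (0.8, 0.8)]
print(strauss_cif((0.21, 0.21), pts, beta=100, gamma=0.5, r=0.1))  # -> 25.0
print(is_valid_strauss(math.log(0.5)), is_valid_strauss(math.log(1.5)))  # -> True False
```

In a fitted model the coefficient of the interaction term estimates log(gamma), so a positive fitted value signals an invalid (non-existent) Strauss process.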
The trend formula should not use names that are reserved for internal use. The data frame data should have as many rows as there are points in Q. It should not contain variables named x, y or marks, as these names are reserved for the Cartesian coordinates and the marks.
If the model formula involves one of the functions poly(), bs() or ns() (e.g. applied to the spatial coordinates x and y), the fitted coefficients can be misleading. The resulting fit is not to the raw spatial variates but to a transformation of those variates. The transformation is implemented by poly() in order to achieve better numerical stability; the resulting coefficients are appropriate for use with the transformed variates, not with the raw variates. This affects the interpretation of the constant term in the fitted model. Conventionally, $\beta$ is the background intensity, i.e. the value taken by the conditional intensity function when all predictors (including spatial or "trend" predictors) are set equal to $0$. However, the coefficient actually produced is the value that the log conditional intensity takes when all the predictors, including the transformed spatial predictors, are set equal to $0$, which is not the same thing.
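The effect of orthonormalisation on coefficients can be demonstrated outside R. In this Python sketch (an analogy to R's poly() built from a QR factorisation; the data are made up), the fitted curves from the raw and orthonormalised bases agree exactly, yet their constant terms differ:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = 2 + 3 * x + 4 * x**2 + rng.normal(0, 0.01, 50)

# Raw basis [1, x, x^2] (the analogue of polynom() in the text).
X_raw = np.vander(x, 3, increasing=True)
b_raw, *_ = np.linalg.lstsq(X_raw, y, rcond=None)

# Orthonormalised basis via QR (the analogue of what R's poly() does).
Qmat, _ = np.linalg.qr(X_raw)
b_orth, *_ = np.linalg.lstsq(Qmat, y, rcond=None)

# Fitted values agree (same column space) ...
same_fit = np.allclose(X_raw @ b_raw, Qmat @ b_orth)
# ... but the "intercepts" do not: b_orth[0] is not the value of the
# fitted curve at x = 0.
print(bool(same_fit), bool(np.isclose(b_raw[0], b_orth[0])))  # -> True False
```

This is exactly why the constant term of a poly()-based fit cannot be read as the background intensity at the origin.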
If you wish to fit a polynomial trend, we offer an alternative to poly(), namely polynom(), which avoids the difficulty induced by the transformation. It is completely analogous to poly() except that it does not orthonormalise. The resulting coefficient estimates then have their natural interpretation and can be predicted correctly; however, numerical stability may be compromised.
Values of the maximised pseudolikelihood are not comparable if they have been obtained with different values of rbord.
Baddeley, A. and Turner, R. Practical maximum pseudolikelihood for spatial point patterns. Australian and New Zealand Journal of Statistics 42 (2000) 283--322.

Berman, M. and Turner, T.R. Approximating point process likelihoods with GLIM. Applied Statistics 41 (1992) 31--38.

Besag, J. Statistical analysis of non-lattice data. The Statistician 24 (1975) 179--195.

Diggle, P.J., Fiksel, T., Grabarnik, P., Ogata, Y., Stoyan, D. and Tanemura, M. On parameter estimation for pairwise interaction processes. International Statistical Review 62 (1994) 99--117.

Jensen, J.L. and Moeller, M. Pseudolikelihood for exponential family models of spatial point processes. Annals of Applied Probability 1 (1991) 445--461.

Jensen, J.L. and Kuensch, H.R. On asymptotic normality of pseudo likelihood estimates for pairwise interaction processes. Annals of the Institute of Statistical Mathematics 46 (1994) 475--486.
```r
library(spatstat)
data(nztrees)
Q <- quadscheme(nztrees)   # default quadrature scheme

mpl(Q)
# fit the stationary Poisson process
# to point pattern or data/dummy quadrature scheme Q

mpl(Q, ~ x)
# fit the nonstationary Poisson process
# with intensity function lambda(x,y) = exp(a + bx)
# where x,y are the Cartesian coordinates
# and a,b are parameters to be estimated

mpl(Q, ~ polynom(x,2))
# fit the nonstationary Poisson process
# with intensity function lambda(x,y) = exp(a + bx + cx^2)

library(splines)
mpl(Q, ~ bs(x,df=3))
# WARNING: do not use predict.ppm() on this result
# fits the nonstationary Poisson process
# with intensity function lambda(x,y) = exp(B(x))
# where B is a B-spline with df = 3

mpl(Q, ~1, Strauss(r=0.1), rbord=0.1)
# fit the stationary Strauss process with interaction range 0.1
# using the border method with margin rbord=0.1

mpl(Q, ~ x, Strauss(0.1), correction="periodic")
# fit the nonstationary Strauss process with interaction range 0.1
# and exp(first order potential) = activity = beta(x,y) = exp(a+bx)
# using the periodic correction

data(soilsurvey)
mpl(soilsurvey, ~ bs(pH,3), Strauss(0.1), rbord=0.1, data=soilchem)
# WARNING: do not use predict.ppm() on this result
# fit the nonstationary Strauss process
# with intensity modelled as a third order spline function of the
# spatial variable "pH" in data frame 'soilchem'

## MULTITYPE POINT PROCESSES ##
data(lansing)
# multitype point pattern --- trees marked by species

mpl(lansing, ~ marks, Poisson())
# fit stationary marked Poisson process
# with different intensity for each species

mpl(lansing, ~ marks * polynom(x,y,3), Poisson())
# fit nonstationary marked Poisson process
# with different log-cubic trend for each species

# equivalent functionality -- smaller dataset
data(ganglia)
mpl(ganglia, ~ marks * polynom(x,y,2), Poisson())
```