| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
https://physics.stackexchange.com/questions/572565/why-are-aerodynamic-streamlined-shapes-always-stumpy-at-the-front/572571
|
code
|
I'm building an autonomous boat, to which I now add a keel below it with a weight at the bottom. I was wondering about the shape that weight should get. Most of the time aerodynamic shapes take some shape like this:
The usual explanation is that the long pointy tail prevents turbulence. I understand that, but I haven't found a reason why the front of the shape is so stumpy. I would expect a shape such as this to be way more aerodynamic:
Why then, are shapes that have good reason to be aero-/hydrodynamic/streamlined (wings/submarines/etc) always more or less shaped like a drop with a stumpy front?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100047.66/warc/CC-MAIN-20231129010302-20231129040302-00338.warc.gz
|
CC-MAIN-2023-50
| 603
| 3
|
https://www.vn.freelancer.com/projects/php-website-design/optimize-wordpress-site/
|
code
|
I need some cleanup work done on my WordPress site:
1. I want to install a premium theme to take care of the design.
2. I need it updated to the latest version of WordPress.
3. I need all of the plugins that are not being used, or that are slowing it down, removed.
4. I want the PHP and web server settings checked to make sure they are set properly.
5. Any maintenance to the database is an option as long as backup protocols are followed.
I am looking for a provider with a lot of successful WordPress projects to help me with this.
Ongoing maintenance may be an option, and SEO services are a bonus. The next step will be to purchase and install the cms-submit plugin for a community news program.
If you can provide premium themes, that would be a benefit. I am interested in Sniper at ThemeForest. I like the black header and the slider; similar options will be considered.
What I want to achieve is faster-loading pages and a quicker response when using the admin.
Awarded to:
7 freelancers are bidding an average of $79 for this job
Expert in doing this sort of stuff... No upfront needed, all payments through Milestone Payment (Escrow).. Online 16 Hours a day, Can start right away.. Thanks
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867666.97/warc/CC-MAIN-20180625111632-20180625131632-00323.warc.gz
|
CC-MAIN-2018-26
| 1,186
| 13
|
http://lists.squeak.org/archives/list/vm-dev@lists.squeakfoundation.org/message/BUCOP5ZTVJ5MHYESLQ6NQ4PPI3R4BZLF/
|
code
|
Andreas Raab wrote:
I'm wondering whether TCP_NODELAY should be default on or off for Squeak. The default should be the same on ALL platforms, in any case, and documented. For "real-time" stuff like Croquet and VoIP, TCP_NODELAY should be on (as it was for Windows). For batch file transfer, TCP_NODELAY should be off, because it helps by packing packets full of data.
Since the option can be set/reset for each socket, the default setting should be the one which satisfies most naive socket uses, and IMHO this would be the batch file transfer kind of stuff. TCP_NODELAY only makes sense for realtime applications where interactive response is critical, such as Croquet, and these applications should know what they're doing and should set TCP_NODELAY...
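Since the option is per-socket, an application that cares can always override the default explicitly. A minimal sketch using Python's standard socket API (any TCP-capable language works the same way):

```python
import socket

# Create a TCP socket; whether TCP_NODELAY defaults to on or off is
# platform-dependent, which is exactly the inconsistency discussed above.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# A latency-sensitive application (Croquet, VoIP) disables Nagle's
# algorithm explicitly rather than relying on the default:
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# A bulk-transfer application would instead leave the option at 0 so
# small writes get coalesced into full packets:
# sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 0)

print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # nonzero after enabling
sock.close()
```

Setting the option per-socket like this is cheap, which supports the argument that the default should favor the naive (bulk-transfer) case.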
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817171.53/warc/CC-MAIN-20240417173445-20240417203445-00542.warc.gz
|
CC-MAIN-2024-18
| 755
| 3
|
https://www.tek-tips.com/faqs.cfm?fid=4716
|
code
|
dbExpress is a relatively new database access technology from Borland. It is designed to replace the old Borland Database Engine (BDE).
dbExpress is cross platform. It operates under Windows and Linux whereas the BDE and Microsoft's ADO only operate under Windows. This may be an important consideration for organisations trying to reduce costs.
dbExpress is fast. This is a result of dbExpress being a thin piece of code over the underlying DBMS's API.
dbExpress is very small. Again as a result of being a thin layer of code, dbExpress is about 4% of the size of the BDE. This can be a considerable advantage when distributing applications.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00560.warc.gz
|
CC-MAIN-2021-43
| 642
| 4
|
https://community.phones.nokia.com/support/discussions/topics/7000020962
|
code
|
I've been using the Nokia 6 for 3 weeks now, but I really miss WP8.1 (not Microsoft, the OS itself; thanks to you guys it was pretty much perfect).
It had all the basic and useful apps that I need.
Can you please develop some apps for Android? The current version is nice, yes, it's so pure, but that's the problem: it's so empty.
I know you guys prefer fast updates, but please add some original apps. Don't go overboard; just add a gallery app that can manage both Google Drive and OneDrive photos (with a black background, of course). Actually, you could make something like the original WP8 gallery app; it was pretty useful. You could get to all of the photo and picture apps from one app: when you opened it, it first showed you the pictures you had taken and downloaded, then when you swiped right it showed the folders, and one more swipe and voila, all the camera apps.
And we should get a Nokia video & music hub too, similar to the WP8 video/music app (damn you, MS, what kind of idiot would split them into 2 useless apps).
And please add a physical camera button to the Nokia 9; it's hard to use the camera without the button.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513760.4/warc/CC-MAIN-20181021052235-20181021073735-00137.warc.gz
|
CC-MAIN-2018-43
| 1,134
| 6
|
https://ssl.redditgifts.com/gallery/simpsons-2015/gift/cant-somebody-else-do-it/
|
code
|
I've been putting off posting this mostly because I'm super lazy. So I let the garbage pile of shame just build up and up, and then did nothing about it.
Anyway, I got an id holder and a mug. The mug is actually really cool, but it's hard to photograph mugs, so I had to try twice. It's got monstrous versions of several characters.
I'm actually a little confused about where my gifter is from. The tag is from Australia, but the mug seems to be from England. I don't know whether to thank them for saving my ass in WWIII or for sharing Yahoo Serious with the world. More importantly, which way does their toilet flow?
A cool id holder with Simpsons comics on it.
The coin is there by accident, not for scale.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107892062.70/warc/CC-MAIN-20201026204531-20201026234531-00489.warc.gz
|
CC-MAIN-2020-45
| 709
| 5
|
https://pastebin.com/0y5nrYUP
|
code
|
- Thank you for choosing PyLight.
- Please read the information below carefully.
- 1) You will need Python already installed on your PC.
- 2) Python version 2.7.6 is advised.
- 3) When the software opens for the first time, it will prompt you to specify the installation path for Python. Please do so.
- 4) If you have Python 2.7.6 installed and are running a 64-bit Windows PC, please download 'py2exe'. If you don't know what that is, please search for it and download it from the internet.
- 5) Install py2exe. (Don't worry, you don't need to know how to use it or what it is.)
- 6) If all the requirements above are fulfilled, then all the features of PyLight will be successfully unlocked.
- 7) Any missing requirement from the list above (except point 1) won't disturb the basic functioning of PyLight.
- 8) Code and enjoy!
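For context on step 5: the classic py2exe workflow is driven by a small setup script run with `python setup.py py2exe`. A sketch, assuming a hypothetical entry-point file `pylight.py` (the real script name is not given above):

```python
# setup.py -- minimal classic py2exe build configuration.
# "pylight.py" is a placeholder; substitute the actual entry-point script.
from distutils.core import setup
import py2exe  # importing registers the "py2exe" command with distutils

setup(
    console=["pylight.py"],  # build a console .exe from this script
)
```

Running `python setup.py py2exe` then produces a `dist/` folder containing the standalone executable.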
a guest May 4th, 2014 182 Never
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525004.24/warc/CC-MAIN-20190717001433-20190717023433-00028.warc.gz
|
CC-MAIN-2019-30
| 942
| 13
|
https://applewatchbands.store/products/apple-watch-band-travel-organizer-carrying-case
|
code
|
*Actual Apple Watch bands NOT included. We are not an Apple Inc. reseller.
- ⌚Large Capacity: The watch band case is designed with 4 flaps and 5 elastic loops per section to hold 40 watch bands perfectly, plus 2 zippered accessory pockets for keeping watches, earphones, cables, pens, etc.
- ⌚Strong Compatibility: The watch band organizer's folded dimensions are 9.06'' * 7.09'' * 1.97'', and it can hold 40 watch straps and watch accessories easily. The elastic loops are 1.8 inches wide, suitable for various sizes of watch bands, making it universally applicable and practical.
- ⌚Portable Carry: The watch band storage case is made of durable, lightweight material. Its compact design makes it easy to carry in a backpack, luggage, briefcase, or handbag during travel or business trips. You can also carry it as a handbag by its handle; fashionable and stylish.
- ⌚Great Protection: The water-resistant material of the watch band bag protects watch bands from water damage. A thick sponge layer on both sides prevents scratches and external crushing damage to your watch straps. The double-zipper design on the outside of the bag prevents straps from accidentally falling out or getting lost.
- ⌚Specially Designed: This watch band carrying case has two elastic slots especially for holding watch band removal pins, making it easy and convenient to change watch band styles for different occasions straight from this storage bag.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656869.87/warc/CC-MAIN-20230609233952-20230610023952-00580.warc.gz
|
CC-MAIN-2023-23
| 1,432
| 6
|
https://climatemodeling.science.energy.gov/presentations/creation-synthetic-surface-temperature-and-precipitation-ensembles-through
|
code
|
Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that isn’t explained by the emulator and decompose it into non-physically based structures through use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields’ temporal autocorrelation so that our new emulated patterns reproduce the proper timescales of climate response and “memory” in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
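The 3-step process (EOF decomposition of the unexplained variability, Fourier treatment of the projection coefficients, reconstruction) can be sketched on synthetic data. Everything below, including the field sizes and the random residual, is invented for illustration and is not the study's actual code; the phase-randomization step is one common way to preserve an amplitude spectrum while generating new realizations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_space = 240, 50          # e.g. monthly fields on a small grid
# Stand-in for the variability not explained by the pattern emulator:
residual = rng.standard_normal((n_time, n_space))

# 1) EOF decomposition of the residual field (EOFs via SVD).
U, s, Vt = np.linalg.svd(residual, full_matrices=False)
pcs = U * s                        # projection (principal component) time series
eofs = Vt                          # spatial patterns

# 2) Fourier step: keep each PC's amplitude spectrum (hence its temporal
# autocorrelation / "memory") but randomize the phases to get a new draw.
spectrum = np.fft.rfft(pcs, axis=0)
phases = np.exp(2j * np.pi * rng.random(spectrum.shape))
phases[0] = 1.0                    # leave the mean (zero-frequency) component untouched
surrogate_pcs = np.fft.irfft(np.abs(spectrum) * phases, n=n_time, axis=0)

# 3) Reconstruct one new ensemble member with the same modes of variability.
new_member = surrogate_pcs @ eofs
```

Repeating step 2 with fresh random phases yields an arbitrarily large ensemble whose members share the forced response and the spatial/temporal covariance structure of the input.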
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499842.81/warc/CC-MAIN-20230131023947-20230131053947-00838.warc.gz
|
CC-MAIN-2023-06
| 1,445
| 1
|
http://stackoverflow.com/questions/11887622/how-i-can-change-mailenables-hourly-sending-limit
|
code
|
It seems MailEnable limits users to 100 emails per hour by default. I want to change that limit but couldn't find any option for this.
I dug through the manuals and configuration files, but no luck.
Do you have any idea?
I believe this is doable in MailEnable Standard (v6).
Open the MailEnable interface and then expand
Here you can set a per-hour send limit.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929272.10/warc/CC-MAIN-20150521113209-00199-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 351
| 6
|
https://www.brandmotion.io/downloads/male-scientist-examins-holo-orb/
|
code
|
Template designed to work best with single words before final logo reveal. Avoid using sentences. There is not enough time to read them, and text auto-scale will make text too small. Logo is a hologram, and darker colors will appear transparent and not very visible. Use this template for logos which are well defined by bright colors.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947759.42/warc/CC-MAIN-20180425080837-20180425100837-00085.warc.gz
|
CC-MAIN-2018-17
| 335
| 1
|
http://electricbeach.org/?p=148
|
code
|
This post is the second in a series of posts about the Behaviors model in Blend 3. In this post, we’ll talk about some of the important concepts of Behaviors in Blend 3. The next posts in the series will bring some samples.
For introductory information on Behaviors and the motivation for them, please see my previous post here.
Triggers, Actions, Behaviors
The Behaviors mechanism in Blend 3 is built around three important related concepts for building interactivity: Triggers, Actions and Behaviors.
If you are familiar with WPF, Triggers and Actions are not really new to you: We use exactly the same model, with a few important additions:
- Triggers and Actions in Blend 3 are extensible. That is, you can add your own.
- We provide an implementation of Triggers and Actions for Silverlight
- We add Behaviors, the concept that really is the namesake for the whole feature area in Blend: Behaviors build on Triggers and Actions and allow you to encapsulate interactivity patterns that are far richer than those supported by Triggers and Actions alone.
So, what exactly are Triggers, Actions and Behaviors?
Triggers & Actions
Let’s begin with Triggers and Actions. Think of the following sentence:
When I press the button, the door opens.
This sentence includes an action, the opening of the door, and a trigger that causes the action to happen, pressing the button.
And that really is all an Action is: An activity in the most general sense. We will have built-in Actions that do common things such as playing storyboards, setting properties, setting state and many more, but really, an Action can do anything someone decides to write. Your imagination is the limit.
Just like Actions, Blend 3 will supply built-in Triggers, for example for common events. And again, Triggers are extensible so the community can create new ones. Here are a few examples of possible Triggers: TimerTrigger fires when a timer expires. One of our developers, Jeff Kelly, wrote a trigger right before MIX09 that fires when you draw a particular mouse gesture. You could have triggers that fire when a database element changes. Or when the network connection on your machine goes down. Again, only your imagination is the limit.
There are many bits of interactivity that cannot easily be encapsulated with Triggers and Actions. For example, if you want to make something draggable on a canvas, you need to deal with at least three events: you need to begin a drag when the mouse is pressed, update when the mouse is moved during a drag, and terminate the drag when the mouse is released. Also, you need to preserve state. And this is exactly what Behaviors allow you to do. Behaviors let you encapsulate multiple related or dependent activities plus state in a single reusable unit.
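The Trigger/Action split is a general interactivity pattern, not something specific to Blend. A deliberately language-neutral sketch in Python; every class and attribute name below is invented for illustration and is NOT Blend 3's actual API:

```python
# Illustrative-only sketch of the Trigger -> Action pattern described above.
class Action:
    """An activity in the most general sense, applied to a Target."""
    def __init__(self, target, activity):
        self.target = target          # the element the action is applied to
        self.activity = activity      # callable performing the activity

    def invoke(self):
        self.activity(self.target)

class Trigger:
    """Listens on a Source and fires its attached Actions."""
    def __init__(self, source):
        self.source = source          # the element the trigger listens on
        self.actions = []

    def fire(self):                   # called when the source event occurs
        for action in self.actions:
            action.invoke()

# "When I press the 'open exit door' button, the exit door opens."
opened = []
open_door = Action(target="exit door", activity=lambda door: opened.append(door))
press_button = Trigger(source="open exit door button")
press_button.actions.append(open_door)
press_button.fire()
print(opened)  # ['exit door']
```

A Behavior, in this framing, would be a single object bundling several such trigger/action pairs plus the shared state they need (e.g. the in-progress drag position).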
We will explain more details in future posts.
Target & Source
As we talk about Triggers and Actions, there are a couple of other important pieces that play into this. Let’s repeat the sentence from above, slightly modified:
When I press the “open exit door” button, the exit door opens.
This is getting rather colorful. Let's look at the magenta part of the sentence first. Imagine we have a room with multiple doors that can be opened from a control panel. "The door opens", as in the original phrase, is not specific enough; we need to know which door the "open" action should be applied to. Many actions therefore have a Target property that points to the element that the action should be applied to.
As you apply Actions by dragging & dropping them on a UI element on the artboard, we make the object you are dropping the Action on the default target for your Action. We also give you a property editor in the property inspector that lets you choose any other object in the scene as your target.
The first part of the original sentence also is not specific enough: “When I press the button” does not really tell me which button. The revised sentence therefore has a clarification, in orange, that clearly states which button we mean. Many Triggers therefore have a Source property that allows you to configure on which UI element your trigger is supposed to listen.
Next in this series, we will discuss an example Action. Stay tuned…
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00296-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 4,226
| 24
|
https://xrmdynamicscrm.wordpress.com/2020/06/09/dynamics-crm-365-custom-activity-entity-not-appearing-on-timeline/
|
code
|
While I was working on one of my business requirements, I created a new Custom Activity Entity, and the requirement was to show it on the Timeline of the Lead entity as well.
I thought it would be visible as soon as we created and published the entity. However, the result was different in the UCI app. So I decided to explore, and found out that in order to show a Custom Activity Entity on the Timeline we may have to look at the points below.
The Custom Activity Entity should be added to the respective model-driven App. Otherwise it will not be visible on the Timeline.
Open App Designer -> Select Entity -> Save and Publish
We have to select the Entity in the Timeline configuration of the Form Editor. Say you want this entity to be visible on the Lead Timeline.
Go to Customize the System -> Entity -> Lead -> open Lead Form Editor -> select Social Pane -> click on Properties -> validate that the new Custom Activity Entity is selected in the Activity Tab
Enable the Quick Create and Enable for Unified Client check boxes in the Entity configuration.
After configuring the above steps I was able to see records on the Lead Timeline.
Hope this helps!
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00293.warc.gz
|
CC-MAIN-2022-40
| 1,047
| 9
|
https://community.cloudera.com/t5/Support-Questions/HDP-Search-Configuration-and-Data-Directories/m-p/216578
|
code
|
Within the various Solr config settings on Ambari, I am a bit confused on the role of "solr_config_conf_dir" parameter. At the moment, it only contains log4j.properties file. As HDPSearch is mainly meant to be used with SolrCloud, wondering what is the significance of this directory as the configurations are always maintained on ZooKeeper.
Another question is when the indexes for SolrCloud collections are stored on HDFS, what is the significance of "solr_config_data_dir"? Is the solr_config_data_dir directory used ONLY for collections for which directory factory settings are set to local? If so, is it safe to assume that this is not needed when HDFS is being used?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00050.warc.gz
|
CC-MAIN-2021-21
| 672
| 2
|
http://crossplatform.net/linux-is-burning-my-laptop/
|
code
|
Update: I ended up buying a new Intel laptop with on-board graphics. I exclusively use Ubuntu Linux there. It has been quite a while now, and I’m pretty much used to the workflow in Linux.
It’s not that I don’t like Linux. My brother did manage to install Ubuntu on my other laptop. But you had to be very careful while installing updates. Installing any kernel updates would break the system and you’ll be stuck in console.
I don’t say one cannot recover from that, but hell I don’t consider that to be my job. If there are a bunch of commands I can run to fix it, then why hasn’t someone created an automated method for rectifying it? I love Linux, but the same laws of software apply to it as well. If it doesn’t work, it’s broken.
Every year, I get a weird feeling that I have to switch to Linux. And this has been going on since about 2004. But my current attempt almost burned my Laptop to ashes.
A little history
Back then, the internet connections in my country were not that fast. So, I used to purchase discs from a local website, which used to download new images as they came up. A little later, we started getting DVDs, which meant we could use a lot of applications without needing a fast internet connection to download them first.
I’m a programmer. And I understand how powerful Linux can be. However, whenever I used to install Linux along with Windows, it’d always mess up my Windows installation a few days later. Last time, fiddling with Linux made me lose my Master File Table. I had all the files, but just didn’t know where they begun and ended.
My current burning attempt at Linux
Last weekend, I decided to wipe out my freshly bought Windows 8 and replace it with Linux. I’m always leaning towards Ubuntu, since it’s the simplest to get started with.
I love my Windows 8, mind you. The boot time of 15 seconds is a life-saver.
However, my Linux installation had a problem. My laptop would start heating up rapidly. The safety mechanism would shut down my laptop to prevent damage. This wouldn’t even let my installation complete.
It didn’t make sense. But the half-installed Linux had already wiped my Windows installation.
It’s happening to a lot of people
Researching a bit on the internet, I found that Linux did result in overheating laptops. Some power management gone wrong?
In the end, I somehow managed to get Ubuntu installed, maybe because the weather turned cool in the evening. But, running Ubuntu was also resulting in sluggishness and overheating. Eventually, my session would shut down automatically.
Back to square one
It was almost Sunday evening, and I had to get my Windows installation up and running with all the tools to prepare for the week.
My attempt at switching to Linux completely failed due to a problem I did not expect. My earlier attempts were all on desktops.
This problem seems to be there with a lot of distributions. And if this helps, I have an HP G2 2005ax (AMD-based). Right now, I'm learning to live with Windows.
Can this be solved?
I know that there are a lot of hackers out there who might be able to diagnose and fix the problem. Can you guys figure out something? Switching to Linux can itself be a big learning curve (though not as much as Vim), but this makes it even worse.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145751.1/warc/CC-MAIN-20160205193905-00048-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 3,281
| 21
|
https://espei.org/en/latest/contributing.html
|
code
|
Contributing to ESPEI#
This is the place to start as a new ESPEI contributor. This guide assumes you have installed a development version of ESPEI.
The next sections lay out the basics of getting an ESPEI development set up and the development standards. Then the Software design sections walk through the key parts of the codebase.
Even though much of ESPEI is devoted to being a multi-core, stochastic user tool, we strive to test all logic and functionality. We are continuously maintaining tests and writing tests for previously untested code. As a general rule, any time you write a new function or modify an existing function you should write or maintain a test for that function.
Some tips for testing:
Ideally you would practice test-driven development by writing tests of your intended results before you write the function.
If possible, keep the tests small and fast.
See the NumPy/SciPy testing guidelines for more tips.
ESPEI uses pytest as a test runner. The tests can be run from the root directory of the cloned repository:
For most naming and style, follow PEP8. One exception to PEP8 is regarding the line length, which we suggest a 120 character maximum, but may be longer within reason.
ESPEI uses the NumPy documentation style. All functions and classes should be documented with at least a description, parameters, and return values, if applicable.
Examples in the documentation are especially encouraged for utilities that are likely to be run by users. See
espei.analysis.truncate_arrays() for an example.
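As a reminder of what the NumPy documentation style looks like, here is a docstring on a hypothetical function (not part of ESPEI), with the description, parameters, return values, and an example:

```python
def scale(values, factor=1.0):
    """Scale a sequence of numbers by a constant factor.

    (Hypothetical function, shown only to illustrate the NumPy
    documentation style.)

    Parameters
    ----------
    values : list of float
        The numbers to scale.
    factor : float, optional
        Multiplier applied to each value (default is 1.0).

    Returns
    -------
    list of float
        The scaled values.

    Examples
    --------
    >>> scale([1.0, 2.0], factor=3.0)
    [3.0, 6.0]
    """
    return [v * factor for v in values]
```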
Documentation on ESPEI is split into user tutorials, reference and developer documentation.
Tutorials are resources for users new to ESPEI or new to certain features of ESPEI to be guided through typical actions.
Reference pages should be concise articles that explain how to complete specific goals for users who know what they want to accomplish.
Developer documentation should describe what should be considered when contributing source code back to ESPEI.
You can check changes you make to the documentation by going to the documentation folder in the root repository.
Running the command
make html && cd build/html && python3 -m http.server && cd ../.. && make clean from that folder will build the docs and serve them on a local HTTP server.
You can see the documentation while the server is running by visiting the URL at the end of the output, usually http://localhost:8000.
When you are finished, type Ctrl-C to stop the server and the command will clean up the build for you.
Make sure to fix any warnings that come up if you are adding documentation.
The docs can be built by running the docs/Makefile (or docs/make.bat on
Windows). Then Python can be used to serve the HTML files in the _build
directory, and you can visit
http://localhost:8000 in your browser to
see the built documentation.
For Unix systems:
cd docs
make html
cd _build/html
python -m http.server
For Windows:
cd docs
make.bat html
cd _build\html
python -m http.server
Since ESPEI is intended to be run by users, we must provide useful feedback on how their runs are progressing. ESPEI uses the logging module to allow control over verbosity of the output.
There are 5 different logging levels provided by Python. They should be used as follows:
- Critical or Error:
Never use these. These log levels would only be used when there is an unrecoverable error that requires the run to be stopped. In that case, it is better to raise an appropriate error instead.
- Warning:
Warnings are best used when we are able to recover from something bad that has happened. The warning should inform the user about potentially incorrect results or let them know about something they have the potential to fix. Again, anything unrecoverable should not be logged and should instead be raised with a good error message.
- Info:
Info logging should report on the progress of the program. Usually info should give feedback on milestones of a run or on actions that were taken as a result of a user setting. An example of a milestone is starting and finishing parameter generation. An example of an action taken as a result of a user setting is the logging of the number of chains in an mcmc run.
- Debug:
Debugging is the lowest level of logging we provide in ESPEI. Debug messages should consist of possibly useful information that is beyond the user’s direct control. Examples are the values of initial parameters, progress of checking datasets and building phase models, and the acceptance ratios of MCMC iterations.
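The level guidelines above can be illustrated with the standard logging module; the logger name and all messages here are invented examples, not ESPEI's actual output:

```python
import logging

# Arbitrary logger name, for illustration only.
logger = logging.getLogger("espei_example")
logging.basicConfig(level=logging.DEBUG)

# Unrecoverable problems: raise an error rather than logging critical/error.
# raise ValueError("dataset is malformed")

# Recoverable problem the user may want to fix:
logger.warning("dataset has no weight specified; assuming weight = 1")

# Milestones and consequences of user settings:
logger.info("Started parameter generation")
logger.info("MCMC run will use 20 chains")

# Detail beyond the user's direct control:
logger.debug("Initial parameters: [1.2, 3.4]")
```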
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00170.warc.gz
|
CC-MAIN-2022-33
| 4,518
| 44
|
https://jobs.craftventures.com/companies/returnly/jobs/33720397-senior-analyst-revenue-intelligence
|
code
|
Senior Analyst, Revenue Intelligence
Affirm is reinventing credit to make it more honest and friendly, giving consumers the flexibility to buy now and pay later without any hidden fees or compounding interest.
We’re looking for a hard-working, driven data analyst to join our Revenue Analytics team. Revenue Analytics serves as the analytics backbone of the Revenue organization at Affirm. We follow a data-driven approach that combines elements of both strategy and analytics to drive decisions around both GTM strategy and revenue management with the goal of simultaneously scaling and strengthening Affirm’s commercial offerings.
As a Senior Analyst on the Revenue Analytics team, you will build robust data products that enable Affirm’s sales teams, revenue leadership, and other revenue analysts. Your work will shape the direction of revenue strategy and provide teams with a better understanding of the health of our business. Your work will include building end-to-end data solutions spanning problem formation, ingestion, data modeling, metric definition, dashboard development, analysis, and enablement. The ideal candidate will have deep technical and analytical problem solving skills and be comfortable working closely with Revenue and Data Engineering teams to develop reporting infrastructure and perform analysis.
What You’ll Do:
- Develop use cases for data sources, dashboards, metrics and automation processes for sales/client success, revenue leadership, and other revenue analysts
- Identify growth opportunities and drive strategy for existing portfolio of merchant accounts through data analysis and insights
- Collaborate with Business Systems and Data Engineering teams to define data infrastructure specific to Revenue, maintain a solid understanding of our evolving data warehouse, and build data models in dbt and Looker
- Work with enablement teams to develop data products (self-serve dashboards, BI tools, reports), scale usage, and shape training content to improve adoption
- Develop processes and foundations to scale the impact of analytics within Revenue
What We Look For:
- 3+ years work experience in a business intelligence or a data analyst role
- Strong working knowledge of SQL, Python, data modeling, and data visualization
- A passion for finding insights in data and motivating change based on those insights
- Hands-on experience with BI tools (Looker/Tableau), Databricks, and cloud data warehouse/lake technologies (Snowflake, s3). Experience with dbt preferred
- Familiarity with Salesforce and supporting revenue generating areas of the business
- Ability to identify user needs and translate them into robust, scalable data products
- Ability to start with an ambiguous problem, deconstruct it into tangible steps, and work towards an impactful solution
- Ability to communicate findings and recommendations clearly to both technical and non-technical audiences
- Experience working with multi-functional teams and collaborating with business engineering partners in data management and analytics initiatives
If you got to this point, we hope you’re feeling excited about the job description you just read! Even if you don’t feel that you meet every single requirement, we still encourage you to apply! We’re eager to meet people that believe in Affirm’s mission and can contribute to our team in a variety of ways - not just candidates who check all the boxes.
Compensation & Benefits
We offer a competitive package, with some highlights listed below. However, the given figures are not guaranteed compensation ranges; rather, they are non-binding, approximate indications of what the salary may be, for your awareness. The actual salary may be less than the lower range or greater than the upper range, depending on skills and experience. No employee is guaranteed a salary at the amount of the lower range.
- Flexible Spending Wallets for tech, food and lifestyle
- Generous time off policies
- Away Days - wellness days to take off work and recharge
- Learning & Development programs
- Parental leave
- Robust health benefits
- Employee Resource & Community Groups
Pay Grade - ESP29
Employees new to Affirm or promoted into a new role, typically begin in the min to mid range.
ESP base pay range per year:
We are able to offer visa sponsorship for this role, but do require that someone is based in Spain for the role.
Location - Remote Spain
Affirm is proud to be a remote-first company! The majority of our roles are remote and you can work almost anywhere within the country of employment. Affirmers in proximal roles have the flexibility to work remotely, but will occasionally be required to work out of their assigned Affirm office. A limited number of roles remain office-based due to the nature of their job responsibilities.
We’re extremely proud to offer competitive benefits that are anchored to our core value of people come first. Some key highlights of our benefits package include:
- Health care coverage - Affirm covers all premiums for all levels of coverage for you and your dependents
- Flexible Spending Wallets - generous stipends for spending on Technology, Food, various Lifestyle needs, and family forming expenses
- Time off - competitive vacation and holiday schedules allowing you to take time off to rest and recharge
- ESPP - An employee stock purchase plan enabling you to buy shares of Affirm at a discount
We believe It’s On Us to provide an inclusive interview experience for all, including people with disabilities. We are happy to provide reasonable accommodations to candidates in need of individualized support during the hiring process.
[For U.S. positions that could be performed in Los Angeles or San Francisco] Pursuant to the San Francisco Fair Chance Ordinance and Los Angeles Fair Chance Initiative for Hiring Ordinance, Affirm will consider for employment qualified applicants with arrest and conviction records.
http://solution-dailybrainteaser.blogspot.com/2014/06/thinking-fast-question.html
Thinking Fast Question Solution - 19 June
Ten frogs are sitting on a log floating on the surface of a river. Two of them decide to jump off into the water.
How many frogs are there on the log at this moment?
Still ten frogs. The two have only decided to jump; they have not actually jumped yet.
https://discourse.nodered.org/t/move-from-using-request-to-alternative-for-http/55892
On the basis that `request` is deprecated, as part of looking at the fitbit node (Fitbit nodes - working? - #4 by borpin), I thought the first step might be to move the module away from `request` and to `node-fetch` (looking at the various docs, this seems the best alternative, but I'm not wedded to it).
The current module uses `request` to generate the HTTP requests.
Before I dive in and reinvent the wheel, does anyone have a ready-made update to this that will work? Looking through the code, this seems to be the only point where `request` is actually used; however, I do appreciate that the overall structure might be inappropriate when not using `request`, so it may need a more fundamental rewrite (which is probably beyond my skills).
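Not a ready-made port, but a minimal sketch of the mechanical part of the migration (all names here are illustrative, not taken from the fitbit node's actual code): translating a `request`-style options object into the `(url, init)` pair that `node-fetch` expects.

```javascript
// Sketch only: maps the common `request` options (url, method, headers,
// json) to node-fetch arguments. A real port would need to cover whatever
// options the module actually uses (auth, forms, etc.).
function toFetchArgs(opts) {
  const init = {
    method: opts.method || 'GET',
    headers: Object.assign({}, opts.headers),
  };
  if (opts.json && typeof opts.json === 'object') {
    // request's `json: {...}` both serialized the body and set the header
    init.body = JSON.stringify(opts.json);
    init.headers['Content-Type'] = 'application/json';
  }
  return [opts.url, init];
}

// Usage (assuming node-fetch v2, CommonJS):
//   const fetch = require('node-fetch');
//   const [url, init] = toFetchArgs({ url: 'https://api.example.com/x',
//                                     method: 'POST', json: { a: 1 } });
//   const res = await fetch(url, init);
//   // Unlike request's callback error, fetch resolves on HTTP errors too,
//   // so status checking must be explicit:
//   if (!res.ok) throw new Error(`HTTP ${res.status}`);
//   const body = await res.json(); // like `json: true` did on responses
```

The biggest structural change is that `request` is callback-based while `node-fetch` is promise-based, so the calling code around each request needs converting to `async`/`await` (or `.then`) as well.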
https://forum.xda-developers.com/t/read-1st-post-off-topic-no-question-is-noobish-here.2089279/page-115
There is an issue with the full-screen version of the app... it doesn't fit the screen properly.
The refresh button is below the screen.
Sent From You know what!
Yes, there is a problem with refresh... I generally work with auto-refresh.
There is also a black-on-black bug in the city list.
I'm searching for more widgets; I'll tell you if I find something interesting!
Try searching on Google Play; there are many apps for this!
https://community.filemaker.com/thread/99736
Field name "labels" don't appear if there is no data to merge in
I'm working in FileMaker Pro 11. I have a form that lists student information. The trouble is, in some of the boxes (all except the top one in the example attached), the labels (name, phone number, etc.) do not appear if there is no actual data to merge into them. I still want the labels and blank lines to appear even if there is no data. Thanks!
https://ccm.net/forum/affich-654214-my-pc-won-t-display-enything-after-bsod
It's probably a hardware error, with overheating the most likely cause. If it doesn't start after it cools, then you will need to take it to a qualified repair shop for a diagnostic.
Yesterday I forgot to mention that when I push the power button, the PC starts for a few seconds, then turns itself off; after a few seconds it starts by itself, but then nothing happens except that the PC is powered on. This morning the PC started and worked correctly, but when I came home later it started showing the same problem again. What does this mean? (Sorry for my English.) Thank you for your advice.
https://www.mathworks.com/matlabcentral/fileexchange/8998-surface-fitting-using-gridfit
Those wishing to model a surface in the form z(x,y) from scattered or semi-scattered data have had few options in MATLAB - mainly griddata.
Griddata is a valuable tool for interpolation of scattered data. However, it fails when there are replicates or when the data has many collinear points. Griddata is also unable to extrapolate beyond the convex hull of the data unless the 'v4' option is used, which is slow.
Gridfit solves all of these problems, although it is not an interpolant. It builds a surface over a complete lattice, extrapolating smoothly into the corners. You have control of the amount of smoothing done, as well as interpolation methods, which solver to use, etc.
This release allows the user to solve much larger problems using a new tiling option. There is essentially no limit on the size of the surface one builds now, as long as you have dense enough data and enough memory to store the final gridded surface.
Example uses are found in the file gridfit_demo.m, as well as comparisons to griddata on the same data.
John D'Errico (2021). Surface Fitting using gridfit (https://www.mathworks.com/matlabcentral/fileexchange/8998-surface-fitting-using-gridfit), MATLAB Central File Exchange. Retrieved .
Great! I was so stressed looking at my scattered data; it seemed impossible to sort it, grid it, and make it usable as input for interpolation methods.
Amazing. Spent hours looking for a solution.
This product seems to perform well relative to your CFC. Thanks again.
Thanks a lot! Robust code with all the options you could dream of. I just used it on a 50 GB set of .xyz data split into 50 files; depending on the discretization of the output you want, it runs as fast as, or even faster than, any Geographical Information System (GIS) software on the market. I added a time estimate to completion to help the user visualize the time needed for very large data sets. It would be great to have code that estimates the best tile size for a given xyz set. Right now I am fixing the tile to 10% of the larger of the x or y extents, with a maximum of 1000.
To be added at line 1019:
time_estimate_x = (cputime-tx)* (nx-xtind(end))/tilesize;
disp(['Time Estimate :' num2str(time_estimate_x/3600) 'Hr /' num2str(time_estimate_x/60) 'Min']);
Excellent function! it solved many troubles I had. Thanks a lot!
This is more than just a useful function. The code is extremely well documented, and includes a separate documentation file explaining the idea behind the function! I love that this function is both useful for my application, but also that I can learn from the author by reading the notes and looking at the code!
Masoom Kumar, when I get a chance this summer, I will upload more about how to do this with all the various types of constraints.
Is there any possibility of applying bounds to the fit? In other words, can we restrict the fit between some upper and lower bounds?
A relevant toolbox for numerical differentiation called MaxPol has been recently published and made available here
MaxPol provides a framework to design a variety of numerical differentiation kernels with properties like:
(1) Cutoff (lowpass) design with no side-lobe artifacts (for the noise-robust case)
(2) Arbitrary order of differentiation
(3) Arbitrary polynomial accuracy
(4) Derivative matrix design
(5) 2D Derivative Kernels with Steering moments
(6) Intuitive examples in Signal and Image processing
However, I don't understand the difference between this one and scatteredInterpolant in MATLAB's function library.
If you like this function, you should try regularizeNd. It is nD version of gridfit with some bug fixes and improvements.
There is a bug related to the number of 2nd-derivative equations added to the fidelity equations, A. The effect is that it adds equations to the system that look like 0*x = 0 when solving for x. Because it is a least squares problem, these equations don't destroy the solution and end up causing only minor issues, but it is a bug.
Also, gridfit doesn't scale the 2nd-derivative equations. This is important when the units of one dimension are different from the other, or when the scales are very different. The smoothness parameters needed in each dimension should be similar in value, but in gridfit they are not when the dimensional scaling is different.
John! This was very helpful. My huge vectors imported from abaqus results were struggling to get fit through griddata. gridfit helped solve the issue.
Dear John! What a great tool. It helps me a lot to fit a gridded surface to my scattered data. I wonder if there is any way to create an sfit object with gridfit? What I actually want to do is differentiate the fitted surface at some spot and later integrate again. So is it somehow possible to get the surface as an sfit object?
John, as usual a wonderfully useful utility applicable to so many problems.
I thought I might ask - can you see any way that periodic boundary conditions may be implemented into the workings of gridfit?
I have encountered this problem on a few occasions and so far I've just gone with some hacks to approximate periodicity. For example, if in my data I expect periodic output along the x direction (such that the first and last columns of gridfit's output surface should ideally match each other in position and slope), I've just taken the regular gridfit output, padded that output with mirrored copies of the opposite few columns, then applied some gaussian-like smoothing to those edge columns before removing my padded additions.
As I said, it's always been a hack and is never perfect - but something like this has often been "good enough" for my purposes at the time.
So, can you comment on the feasibility of adding constraints within gridfit to match the position/slope of the first and last columns (or rows)?
Thanks for this wonderful function. I make use of it a lot.
It appears there is a 'hidden' input argument, called mask. What is the purpose of it?
Kind regards, Boudewijn Verhaar
Hi, I'm a MATLAB learner. Can anyone tell me where I should put this file so I can use it? Thanks a million!
RegularizeData3D is a fork by Jamal where he has:
1) fixed normalisation issues, such that changing the grid resolution does not change the shape of the fit.
2) added bicubic interpolation which can make a small difference in some cases.
Sadly, neither version works on 2D data. Some hacking is required to create a fake 3rd dimension with something like [0 1 0 1...]
I have XYZ values in a .mat file and need to reconstruct a surface from them. I am using this code but getting errors; can anyone suggest the proper procedure for using it?
It does not work for non-monotonically increasing vectors.
I've just come across this and it looks really great!
I was wondering if this also works for segmented surfaces, like two surfaces intersecting each other at one point.
Would that be possible?
I could not figure it out how to use it in that case.
Thanks for any help!
This is a great tool - much more efficient than griddata.
I was wondering if there is a way to extract the formula of the fitted surface using this code?
Does this create a formula of the fit?
After using this very robust function for years, I finally stumbled upon a question I can't answer myself: how to use weighted points. Since this is not implemented, I wonder if anyone here could point me to an alternative.
I see Matthias was working on something in March?
I use it often; thanks for your work.
When using the gradient regularizer, why does the doc file say gridfit attempts to force the first partial derivatives to be equal around a node? When I look at the code, it seems to me that it is forcing the second partial derivatives to be equal to zero around every cell. Not exactly like the Laplacian method, indeed, but still dealing with the second derivative.
Code where I see a second derivative under the gradient regularizer:
I tried implementing a weighted version myself. I did it by replacing the Normal equation in the "normal" solver (gridfit line 631) by the weighted equivalent
zgrid = reshape((A'*W*A)\(A'*W*rhs),ny,nx);
where W is a diagonal matrix with the point weights on the first nPoints entries of the diagonal and ones on the rest. The weights are normalized so that they sum to nPoints. Is that a reasonable approach?
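For reference, the weighted normal equation used above is the standard weighted least squares result (nothing gridfit-specific), derived in one line:

```latex
\min_{x}\;(Ax-b)^{\top} W (Ax-b),
\qquad W=\operatorname{diag}(w_1,\dots,w_n),\; w_i>0 .
% Setting the gradient with respect to x to zero:
2A^{\top}W(Ax-b)=0
\;\Longrightarrow\;
A^{\top}WA\,x = A^{\top}Wb .
```

So replacing the unweighted solve with `(A'*W*A)\(A'*W*rhs)` is the textbook move; the normalization to nPoints presumably keeps the data-versus-smoothness balance comparable to the unweighted case, since the regularizer rows keep unit weight.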
Thank you very much for making this great function available!
I'd like to be able to apply different weights to different data points and am wondering if there is a rigorous way to do this. I guess I could create copies of data points according to their weights but that seems like an inelegant solution.
Can someone help with this error?
Undefined function 'sparse' for input arguments of type 'single'. Thanks.
Great submission as always John. Thanks.
Already asked below, but can you add bi-cubic interpolation to this submission?
Hey, thanks very much for the code.
I have a question: my 2D data are already formatted as a matrix; do I have to convert them into columns to use the function?
Okay, thank you.
Think of the result from gridfit as essentially a spline. This is really what it is, a piecewise function. Piecewise functions have no simple representation of the form f(x,y). At best, you could write that evaluation in the form of a call to interp2, because what gridfit returns is your function over a rectangular array of points.
Finding some general function of the form f(x,y), in a nice form that you could write down for a paper, etc., is a difficult problem to do (automatically) by computer. Such a function would in general be a nonlinear function of x and y, and it would involve parameters that must be estimated. To do this, you first need to decide on a nonlinear model, then use a nonlinear estimation tool to determine the parameters. None of that is in gridfit, nor can it be.
Hey, :) Thank you very much for this code.
I have another question:
Is it somehow possible to express the generated surface as a function z=f(x,y)?
I can't find information on this (or I don't understand the information ;) ).
No other implementations that I know of, at least that are available for free. I have heard of others asking if it was ok to write it in some other language, but I've not seen any implementations pop up. This does not say nobody has written one, only that they may have chosen to keep it out of the public domain. Given that it may take some effort to get it working, I would not blame someone who wrote such a code but kept it to themselves. For example, I wrote an n-dimensional version of this long ago, but that code belongs to my past employer.
This code is wonderful.I was wondering, is there a C++ or VTK implementation to this?
Think of the surface produced by gridfit as a moderately flexible plate, where we can control the overall flexibility. Now, attach springs that attach to the plate, connecting the plate to each data point. Springs have the property that the potential energy stored in the spring is proportional to the square of the extension, so a least squares kind of thing. Double the extension, and you multiply by 4 the energy stored in the spring. The springs are designed to store energy only in the z-direction.
The basic idea to the smoothing parameter is that we can adjust the flexibility of the plate relative to the spring constant for those connecting springs. So if we make the springs stronger, then they will distort the shape of the plate more. Make the springs very weak, then the plate will approach a planar surface.
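The plate-and-springs picture can be written down loosely as an energy minimization (a sketch of the idea, not gridfit's exact discretization):

```latex
E(s)=\underbrace{\sum_{i=1}^{n}\bigl(z_i - s(x_i,y_i)\bigr)^{2}}_{\text{spring energy (data fidelity)}}
\;+\;\lambda\,\underbrace{\mathcal{R}(s)}_{\text{plate bending energy (regularizer)}}
```

Increasing the smoothing parameter plays the role of $\lambda$: a stiffer plate relative to the springs, so the surface flattens toward a plane. Decreasing it strengthens the springs, so the surface chases the data points more closely.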
Excellent code - speedy and works straight from the box! What more could you ask for.
One question, Could you explain a little more about how the smoothing parameter works. I have tweaked to get the desired result, but want to understand, and be able to explain to others a little more about how this works.
Thanks for the great code.
I have the same request as Jia Le Ngai.
It expands over the data and fills the whole rectangular area. Is there a way to stop it extending?
I have used this program for years and it works well! Has anyone tried converting gridfit to C using MATLAB Coder?
Gridfit is not an interpolation tool. If you want that, use tools like griddata, triscatteredInterp, scatteredInterpolant, etc.
Thank you, John, for the code. However, is there a way to make it interpolate only the coordinates that I input, without the 'extend' property?
The code is a great tool and the discussion is very helpful. I have run the demos; the results are very impressive.
I notice you have another toolbox which is able to fit surfaces in spherical coordinates. Could you please send me a copy of it?
Thank you very much!
Thanks so much. I am very appreciative.
Whatever value of z0 that seems to work should be fine there.
You can think of it as a truncated Taylor expansion if you like, which it essentially is. I simply chose a straight line that matched the slope and value of the log function at the break point, to create a continuous, differentiable function that will extrapolate linearly. So sort of a spline too.
Thank you, John; your suggestion worked. I chose Z0 to be 0.5% of max(Z) of 2E12 in the data I'm fitting. Now, using gridfit to fit the conditionally log transformed data yields a positive surface which does not overshoot.
A question: how did you arrive at the second part of the transform, i.e. ((z - z0)/z0 + log(z0)).*(z>z0)? It works but it's not immediately obvious to me why. (Some sort of Taylor expansion of the log?)
Of course, my response should be - picky, picky, picky!
This is a consequence of a log transform I guess. You could use the 'springs' method as a regularizer to avoid that, but it might be too extreme.
So an alternative is to use a piecewise transformation. Thus, choose some breakpoint, above which the transformation will be linear, and below which, it will behave as a log transform. I'll call that value z0. So we can write the transform function W(z) as
W = @(z,z0) log(z).*(z<=z0) + ((z - z0)/z0 + log(z0)).*(z>z0);
You can plot it, picking perhaps z0=3 as a breakpoint for an example.
Of course, the transformation is invertible. This should work:
Winv = @(w,z0) exp(w).*(w<=log(z0)) + z0*(w - log(z0) + 1).*(w>log(z0));
Again, we can plot it, here for z0 = 3 again, as:
Something like that should give you the best of all worlds, and all you need to do is choose some value for z0 that makes sense to you. I might start with the mean of z as a good choice for z0.
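For what it's worth, the linear piece in W is just the first-order Taylor expansion of the log about the breakpoint, which is why the two pieces join smoothly:

```latex
\log z \;\approx\; \log z_0 + \frac{z - z_0}{z_0}
\quad (z \to z_0),
\qquad
W(z)=
\begin{cases}
\log z, & z \le z_0,\\[4pt]
\dfrac{z-z_0}{z_0} + \log z_0, & z > z_0 .
\end{cases}
```

Both pieces take the value $\log z_0$ and slope $1/z_0$ at $z=z_0$, so $W$ is continuous and differentiable there, and it extrapolates linearly for large $z$, which is what tames the overshoot after inverting the transform.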
Hi John. Thanks for the tip regarding using a log transform. It seems to be a sensible approach and I tried it. The fitted surface is now all positive, as expected. However, it also overshoots: the highest peak (Z) in the fitted surface is now 3 times the largest data point. Previously the fit closely followed the data.
I should add that the data did have 0's in it and I didn't want to completely remove them because they represent boundary conditions for the somewhat sparse data I am fitting to; for example, the spectrum vanishes at a certain energy (e.g. E=0) or at some angles. Instead, as you suggested, I replaced the 0's with a small number; I tried 1E-10, 1E-5, 1E-1, and 1. The overshooting appeared in all cases and is probably unrelated to the presence of 0's or small numbers in the data.
I tried to attach pictures of the fitted surface before and after the log transform, but there doesn't seem to be a way to do that on this forum.
Neil - the answer is definitely yes, and no. Or maybe it is the other way around.
No, it would be difficult to make gridfit work as you would like to see via internal changes, since that would be a bound constraint on a fairly massively large scale linear algebra problem. So while I could in theory use a tool like lsqnonneg or lsqlin instead of backslash for the solve, it would take forever to terminate on any problem of reasonable size.
Regardless, there is a solution that will work for you, and is quite simple. Typically when there is a bound constraint at zero because of physical issues like this it is because the problem really should be transformed to make it more linear. The logical transformation is a log transform. Thus instead of fitting Z as a function of X and Y, fit the surface in the form of log(Z) as a function of X and Y. Clearly there will no longer be any negativity issues, because to recover the surface you really want, you will simply exponentiate.
If you actually had any data points that were an exact zero, it would be best to drop them completely. Alternatively you could replace them with some small number, but that would cause the problem to be less smooth and might introduce bugs in the surface. Of course, the non-smooth region will be in areas where the log will be very negative, so after exponentiation it will be essentially zero anyway. So either way will work on those exact zeros.
I hope the log transform idea solves your problem. It is an approach I have used often with good success.
Thank you for making this code available. Is there a way to tweak it to add the additional constraint that the fitted surface should also always be positive? The X,Y,Z data I'm fitting represent spectrum (Z) vs. energy (Y) and angle (X). Therefore, negative Z is unphysical.
Not only a great tool, the source code and the discussions here are a great education. Thanks, John, for extending the docs and examples
It usually does everything I need. When really off-the-wall needs have come up, it's been a great base to start from. Clear and well structured code.
Of course, the simple answer for Chad is to put a basic wrapper around gridfit, one that tests the data, and compares it to the nodes in advance. If the nodes were not chosen to contain the data, then fix them so they are. No warning need ever be generated then. Then just have the wrapper code pass all arguments into gridfit.
I (or you) could change the code as Chad has suggested. Personally, I don't think this is a good idea, since gridfit appends new nodes, expanding the grid when it finds this condition. That means the size of the grid as an array is no longer as you might have expected it, so a warning seems merited. And even if the nodes were just moved without telling you, I think this is a bad idea to say nothing.
As it is now, if you really positively don't want a warning, then you can turn that warning off. That is a documented option for the 'extend' parameter. Essentially, as I have it written now, if the code does something to the grid that is not as you expect it, it generates a warning unless you tell it not to generate a warning. I'm just not comfortable with the idea of a wishy-washy warning, that may or may not generate a warning when it encounters something surprising.
My xnodes and ynodes are typically ~1 km apart, but if gridfit needs to extend the domain by a nanometer in any direction it'll give a warning message. I recommend adding a clause which will limit warnings to only cases where the domain is extended by more than half of a dx or dy. In the switch params.extend statement there are four possible warning cases. The warning clause I'm suggesting would look like this:
warning('GRIDFIT:extend',['xnodes(1) was decreased by: ',num2str(xnodes(1)-xmin),', new node = ',num2str(xmin)])
xnodes(1) = xmin;
And similarly for the other three warnings, the if statement would be
This is a fantastic function and it runs perfectly with my 2D data. Do you also provide a gridfit3d version? That would save a lot of time in my post-processing work. Thanks in advance.
How do I cite your code in a research paper?
Evgheny - If you seriously need to fit such a surface in spherical coordinates, I might be able to help you out with a separate, un-posted toolbox that is capable of fitting a surface in spherical coordinates. You would need to contact me directly.
Thanks, the function does its job great!
Does anyone know an alternative to this function for closed, sphere-like surfaces?
If I try this function with spherical coordinates, it works fine, but there are problems at the seam (the line of joining).
Sorry. I've never heard of a Java implementation. Not that it counts, I've written it 7 times in MATLAB, once each in Fortran and APL, those last were very old though. The difference each time was what I learned from the previous incarnation. Were someone to do a Java implementation perhaps the most important thing is to use sparse linear algebra capabilities for the solve, else it would take forever.
Is there any implementation of this in Java that takes some point locations and values and returns a grid?
Just to add my $0.02 to this whole sphere discussion. By the mathematical definition of a function, each x has only one F(x). If there is more than one F(x), the relation between x and F(x) is not a function.
I wish I could remove my previous question, 2 seconds on google pointed me to Matlab's interp2 which I'm using now on the output of your excellent gridfit. Thanks for sharing!
I found that it is very straightforward to fit a nice looking surface to some 55 data-points using gridfit. I was wondering though, is there a way to get values of the surface that are not on the nodes? I wish to interpolate between my 55 samples using the surface.
For those of you who don't appreciate why Felon's comment is silly, think of it like this. Gridfit fits a surface of the form f(x,y), over a rectangular grid. It does so quite well, as many people have found over the years.
While you may think of the surface of a sphere as a surface, it is not of the form that gridfit can fit. It is multi-valued, so for any single (x,y) pair, there will be zero, one, or two values of z that would apply. As well, that "surface" (better to call it a manifold) has derivative singularities, if we were to look at it as a function of x and y. So even a hemisphere will be problematic for this tool.
You would not use gridfit to fit something that is not representable as a function of two variables over a rectangular grid, any more than you would expect it to do numerical integration, numerical optimization, or compute an FFT. Nor would you expect it to cook dinner for you, do your laundry, etc. Use the right tool to solve your problem, but if you try to force the wrong tool to solve a random problem, expect poor results and don't complain about what you get.
So you are using a tool that builds a single valued function to fit something that is obviously not. What did you expect? Magic?
Software does what it is programmed to do. It does not magically rewrite itself when you give it a problem of a completely different sort. In fact, I fail to understand why you would downrate a tool for not solving a class of problem it is explicitly not designed to solve.
If you have a closed manifold, like a ball or some other multivalued form, then don't use this tool. I have NEVER claimed it would solve that problem. Instead, you might look into tools like convex hulls, alpha shapes, CRUST, etc. Or, you might choose to convert the problem into spherical coordinates, at which point gridfit would be able to build a viable surface.
Or maybe you just wanted to complain with no good reason.
Still, it cannot surface a cluster of points that looks like a ball. The result of the interpolation is always the surface of a single-valued function.
No, gridfit does not explicitly allow you to apply derivative constraints. That does not say it is impossible, only that I did not offer it as an option.
The main reason why not, is it would require a set of linear inequality constraints on the unknowns. For a not uncommon grid of size 100x100, there are 100*100=10000 unknowns to solve for. This is not a problem, since the linear system is a sparse one. However, to solve a sparse linear inequality constrained system, one would need to use LSQLIN, or a solver like it. And the last time I checked, LSQLIN was not set up yet to handle sparse large scale inequality constrained problems. (That may have changed with the most recent release, but I have not checked.) If I made all of the matrices full ones, the solve time would probably be incredibly slow and memory intensive.
So I'm sorry, but gridfit will not handle the problem as is.
If you were willing to build a fairly coarse grid, AND add the constraint system, it would probably be doable in a reasonable time. I don't know how small the grid would need to be to make the solve time reasonable. And your definition of reasonable would surely differ from mine, depending on how badly you needed the answer.
Hey John, would gridfit allow me to constraint the slope of the fitted surface for a given range of data?
I'm needing to model a surface in the form of z(x,y) from scattered data points while keeping the first derivative of z negative in both directions.
I'm trying to make a surface approximation to describe the required input torque to drive a hydraulic motor operating under certain load conditions. My inputs are output flow, output pressure, and input shaft speed. I have scattered measurement data from which the relationship between these variables is to be found. I read in some of the earlier posts that gridfit could be extended to higher-dimensional fits. Do you have any version where a 3rd dimension can be included that I could try?
Gridfit is very nice. I have written an update that adds bicubic interpolation and improves the internal scaling (makes surface smoothness completely independent of the x/y grid size). If I send you the code, would you update Gridfit?
Alejandro - while I would like to implement that capability one day in the future, that day may be a long time away before I find the time. Until then, all I can offer is to use a finer grid, which will mimic a higher order interpolant.
Dominik - Sorry, but no. You could use this tool to generate a gridded surface, then use interp2, to interpolate, but that is perhaps too much if you only wanted to interpolate a few scattered points.
I wonder: can this code also be used like an interpolant (e.g., a gridded interpolant), i.e., such that the output is not the smooth function on a grid, but values at certain queried points?
I was just wondering: would it be difficult to extend it for a cubic interpolation? (it was mentioned in the code as a future enhancement).
Exactly what I was looking for. The help could start with a simpler example:
[X,Y] = meshgrid(1:5);
Zsmooth = gridfit(X,Y,A,1:5,1:5,'smooth',10)
0.9638 1.9517 2.9512 3.9517 4.9638
1.9517 2.8162 3.7338 4.8162 5.9517
2.9512 3.7338 4.5266 5.7338 6.9512
3.9517 4.8162 5.7338 6.8162 7.9517
4.9638 5.9517 6.9512 7.9517 8.9638
Andrew - It sounds like your goal is to reproduce the local shape of your surface over a region that lies inside the bounds of your data. Gridfit can do this, if you are willing to expand that grid so that it fully contains the data. Simply add some nodes that extend to the boundaries of your data in both x and y. The nodes used by gridfit need not be equally spaced, so this should not be a problem, but to retain the essential shape of the surface, you may need to add a few such nodes in each direction. Once the fit is obtained, then you can always extract only that portion of the surface that you wish to retain.
If this is not what you are looking for, then feel free to contact me directly, perhaps with some data so that I can better understand your problem.
This is an amazing program. I do have one question. I have a scattered grid of latitude(Y) and longitude(X) coordinates corresponding to brightness(Z) values. I've been using GRIDDATA to interpolate to a new grid that is not scattered and within a smaller range of latitude and longitude values, but GRIDDATA smooths the data so much that it becomes useless.
I understand that GRIDFIT is not an interpolant, and from my reading of the help, it does not seem that it was meant to be used in this fashion, but I just want to make sure.
Collin - What you wish is essentially a 3-d version of gridfit, that could use time as the third dimension. Sorry, but perhaps one day...
Atul Ingle - Yes, it is possible to extend gridfit to higher dimensions. In practice it tends to turn into a memory & cpu hog before long. 5 or 6 dimensions were the practical extreme when I've done it before, although computers are bigger and faster since those days. 64 bit MATLAB and lots of RAM will help here, but even with that, the curse of dimensionality is huge and arrives quickly. Sadly, the n-d version I wrote many years ago is not mine to give away.
Is it possible to extend this idea to fit 3d volume to scattered data, and in general, to higher dimensions? The only limitation I can see is matrix size.
Excellent bit of code that has made my life a lot easier.
If you have knowledge of the evolution of your surface in time, how would you suggest incorporating knowledge of a previous state to help improve interpolation/extrapolation of the current state?
Laura, I do have a tool (based on the same philosophy as gridfit) to handle fitting over a non-rectangular domain. That set of tools can also test for points inside a general domain, although inpolygon can handle that problem in 2-dimensions. I've not posted the code, although it is fully functional, because I've never been sufficiently happy with the entire toolset. Feel free to send me e-mail if you want to try it though.
I recently (today) submitted a question entitled "availble solutions for 2D interpolation on non monotonic scattered data". Right after I discovered gridfit.
Thanks a lot for such a routine, really solid and with the best fitting I was looking for!
However, I would like to ask one thing that has partially been addressed already. Indeed I am more interested in the good interpolation it performs (although that is not its objective) than in the extrapolation. My x-y data points do not define a square domain but rather a distorted "trapezoidal" one with curved contours. I would like to get rid of the data outside the original domain.
Ignacio proposed an interesting way (maybe not very clean, but working) to know which points fall inside or outside the domain. But I cannot apply it to my case, since due to these convex borders the Delaunay triangulation produces spurious triangles that are actually outside the domain.
Could you suggest a way to identify which points of my "square box grid" actually fall inside the trapezoidal domain?
Thanks a lot
Bruno - There are a couple of issues to watch out for here.
First of all, if you wish to find the area that is strictly above zero and below the surface, be careful if the surface goes below zero. (Some individuals miss this point, but it is crucial.) You could simply compute the area by calling trapz(trapz(max(0,Z))). This would be an approximation of course, missing out on the subtle issue of exactly where the surface passes below zero.
Next, depending on how the surface is intended to be interpolated, if you want the EXACT integral of that volume, then be careful. For the first order tensor product surface, i.e., that which interp2 would call 'linear', or what is called 'bilinear' in gridfit, then it suffices to interpolate at the center of each cell in the lattice of points. Sum up those values and you will have an exact volume. (Be careful to scale this result by the width of the cells in each dimension.) As pointed out, trapz (called twice, so once in each dimension) will also be exact here. It turns out that merely computing the average value over each of these cells is also equivalent to that act, then sum up the volumes in each cell of the lattice.
For the triangular faceted surface, trapz is not exact, but the difference may be subtle. Here we end up with a weighted linear combination of the values of each corner of the square cell in the lattice, but the 4 corners are not weighted equally. Send me an e-mail and I can detail how to do this more clearly.
Really though, the easy answer is to use trapz, but you should not ignore these subtleties I've pointed out.
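To make the clipping point concrete, here is a minimal sketch of the trapz approach. The data and surface here are purely illustrative (the sample surface deliberately dips below zero), and this assumes gridfit is on the path:

```matlab
% Illustrative scattered data whose surface goes below zero in places
x = rand(200,1); y = rand(200,1);
z = sin(2*pi*x) .* cos(2*pi*y);
xnodes = linspace(0,1,50); ynodes = linspace(0,1,50);
zgrid = gridfit(x, y, z, xnodes, ynodes);
% Clip at zero FIRST, so only the part of the surface above the
% xy-plane contributes, then integrate once in each dimension.
vol = trapz(ynodes, trapz(xnodes, max(0, zgrid), 2));
```

Remember this is only an approximation near the zero crossings, as noted above.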
John - thank you, 'springs' seems to be doing the job!
Bruno - you can use trapz twice: http://blogs.mathworks.com/loren/2011/06/13/calculating-the-area-under-a-surface/
Excellent work. I have one question... how can I calculate the volume beneath the interpolated surface and above the xy plane? Maybe it's a simple task but I can't figure out how it's done.
IainIsm - You cannot (easily) place bound constraints on this fit. That would result in a very large scale bound constrained fit. While doable in theory with a quadratic programming tool or lsqlin, these problems are huge. It would simply take too long to solve.
Having said that, there is a possible solution that might work. Choose the 'springs' regularizer as an option. That option was designed to avoid extrapolation wherever possible, so if your data never goes negative, then the surface will hopefully not do so either.
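As a concrete illustration of that suggestion (the data here are hypothetical; only the option name matters):

```matlab
% Hypothetical data that never goes negative
x = rand(100,1); y = rand(100,1);
z = abs(sin(3*x) .* y);
xn = linspace(0,1,30); yn = linspace(0,1,30);
% 'springs' tries to avoid extrapolating beyond the data values,
% which should also help keep this surface from dipping below zero.
zgrid = gridfit(x, y, z, xn, yn, 'regularizer', 'springs');
```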
Lovely function - just one question, is it possible to put constraints on the range of values that zgrid can take please? for example, I'm using this to create a surface for a motor rpm/torque/efficiency map (which I am then querying with interp2 to provide an efficiency estimate for a given speed and loading), but every now and then I find that some of the zgrid values are negative. I could set these to zero using find, but I'm hoping to find(!) something a little more elegant!
Great stuff! Additionally, and refreshingly, it worked "right out of the box". Thanks John!
Thanks a lot. Not only did I enjoy an excellent bit of code, but also for your variety of stimulating and entertaining responses. Thank you for breaking up my day with at least 5 minutes of enjoyable rhetoric and exceptional references (Mark Twain / god and little green apples) ... brilliant! Have a few extra stars.
It is wonderful. It easily solved my problem!
Oops. In my explanation above P = 1/sqrt(2), not sqrt(2).
Lanis, John, et al.
This is how to get consistent behavior with different grid sizes.
Save a copy of gridfit.m to gridfit2.m.
In gridfit2.m make the following changes at the locations of the commented-out code (near Lines 550-600):
% Sorry, John D.
% dy1 = dy(j(:)-1)/params.yscale;
% dy2 = dy(j(:))/params.yscale;
dy1 = dy(j(:)-1);
dy2 = dy(j(:));
% Sorry, John D.
% Append the regularizer to the interpolation equations,
% scaling the problem first. Use the 1-norm for speed.
%NA = norm(A,1);
%NR = norm(Areg,1);
%A = [A;Areg*(smoothparam*NA/NR)];
FidelityEquationCount = size(A, 1);
RegularizerEquationCount = nx * (ny - 2) + ny * (nx - 2);
NewSmoothParam = smoothparam * sqrt(FidelityEquationCount / RegularizerEquationCount);
A = [A; Areg * NewSmoothParam];
Run the following script to demonstrate:
x = [0, 1.333333333, 2.666666667, 4, 0, 0, 4, 4, 2];
y = [2, 2, 2.1, 1.8114, 0, 4, 0, 4, 0];
z = [0.5, 0.82, 0.4, 0.7, 0.5, 0.7, 0, 0.7, 0];
% Set the smoothness for the original GridFit. This smoothness produces inconsistent results if the grid size changes.
Smoothness = 10;
% Set up the grids with different spacings...
x_coarse = linspace(0, 4, 11);
y_coarse = x_coarse;
x_fine = linspace(0, 4, 41);
y_fine = x_fine;
% We're taking a slice through each surface at the same coordinate.
% They have different indexes but the same coordinate of 1.2.
Index_coarse = find(y_coarse == 1.2);
Index_fine = find(y_fine == 1.2);
% Run GridFit.
z_coarse = gridfit(x, y, z, x_coarse, y_coarse, 'Smoothness', Smoothness);
z_fine = gridfit(x, y, z, x_fine, y_fine, 'Smoothness', Smoothness);
% Try the same thing with the updated GridFit. Note that the smoothness parameter is on a different scale compared to
% the original GridFit. However, all choices of smoothness value should produce consistent results for any grid size.
UpdatedGridFitSmoothness = 1;
% Run the updated version of GridFit, GridFit2.
z_coarse2 = gridfit2(x, y, z, x_coarse, y_coarse, 'interp', 'bilinear', 'Smoothness', UpdatedGridFitSmoothness);
z_fine2 = gridfit2(x, y, z, x_fine, y_fine, 'interp', 'bilinear', 'Smoothness', UpdatedGridFitSmoothness);
% Show the results for GridFit...
% These lines only line up when you use a very large or very small smoothness value.
subplot(1, 2, 1)
hold on  % needed so the second plot does not replace the first
plot(x_coarse, z_coarse(Index_coarse, :), 'color', 'blue');
plot(x_fine, z_fine(Index_fine, :), 'color', 'red')
title('Profiles from the Original GridFit');
axis([1, 4, 0.3, 0.6]);
% Show the results for GridFit2...
% These lines have the same profile for all smoothness values.
subplot(1, 2, 2)
hold on  % needed so the second plot does not replace the first
plot(x_coarse, z_coarse2(Index_coarse, :), 'color', 'blue');
plot(x_fine, z_fine2(Index_fine, :), 'color', 'red')
title('Profiles from the Updated GridFit');
axis([1, 4, 0.3, 0.6]);
Lanis, John, et al.
How it works:
GridFit is a function that regularizes a dataset. It constructs a series of linear equations that are the "fidelity equations" (how closely the output matches your input data) and the "smoothness equations" (how close the surface's second derivatives are to zero). It assembles these equations and solves them in a way that minimizes the sum of the squared error (SSE) of the output surface.
The key is to recognize that the SSE is just a sum of squared residuals, the fidelity squared residuals (SSE_F) and the smoothness squared residuals (SSE_Sm).
SSE = SSE_F + SSE_Sm
If we have a certain number of input points, that tells us how many terms are in SSE_F. The number of grid points (more precisely, the number of second derivatives within the grid) tells us how many terms are in SSE_Sm.
Imagine that we are trying to fit this parabola: y = x^2 to our data. The fidelity terms are some finite value (which doesn't matter for this example), such as 100. Because we're using this parabola, the second derivative is always 2. If we have a grid such that there are 20 smoothness terms, then SSE_Sm = 20 * (2^2) = 80.
The ratio SSE_F / SSE_Sm = 100 / 80 = 1.25. This ratio determines how to weigh the smoothness against the goodness of fit.
Now suppose that we suddenly double the number of grid points. Now SSE_Sm = 2 * 20 * (2^2) = 160. That means SSE_F / SSE_Sm = 100 / 160 = 0.625. Uh-Oh! The ratio has changed, which means our fit will favor smoothness more than it did before!
The way to fix this is to reduce the values of the second derivatives by multiplying them by a constant, which we'll call P. In this example, if P = sqrt(2), then the individual terms in SSE_Sm will have half of the value they had before. That's ok because there are twice as many of them. In other words, 0.5 * 2 = 1.
Now using P, SSE_Sm = 2 * 20 * ((P * 2) ^ 2) = 80. This provides the same balance of fidelity vs. smoothness that we originally had.
What I did in GridFit2 is a little more complicated because it also considers the smoothness parameter and the number of fidelity equations. I hope this helps!
Peter - if you can do better, feel free to write your own code. (It is NOT the use of the 1-norm that impacts your question.)
I'm not convinced that the smoothness parameter produces consistent results for different grid sizes. I think the problem comes from the use of the 1-norm instead of just using the numbers of fidelity equations vs. derivatives.
Here is an example.
x = [0, 1.333333333, 2.666666667, 4, 0, 0, 4, 4, 2];
y = [2, 2, 2.1, 1.8114, 0, 4, 0, 4, 0];
z = [0.5, 0.82, 0.4, 0.7, 0.5, 0.7, 0, 0.7, 0];
Smoothness = 10;
x_coarse = linspace(0, 4, 11);
y_coarse = x_coarse;
Index_coarse = find(y_coarse == 1.2);
x_fine = linspace(0, 4, 41);
y_fine = x_fine;
Index_fine = find(y_fine == 1.2);
z_coarse = gridfit(x, y, z, x_coarse, y_coarse, 'Smoothness', Smoothness);
z_fine = gridfit(x, y, z, x_fine, y_fine, 'Smoothness', Smoothness);
hold on  % needed so the second plot does not replace the first
plot(x_coarse, z_coarse(Index_coarse, :), 'color', 'blue');
plot(x_fine, z_fine(Index_fine, :), 'color', 'red')
Yes, Hendawi Mohamed, I removed points beyond the edge of my data. My data is roughly bounded by a triangle in X-Y, so I used:
Z=griddata(... same limits);
to remove the estimated results outside that region. When I plotted my data, the points on the graph corresponded to the area where I had data.
You would normally first decide what size result you wish to see, thus the number of nodes in x and y. Then you would choose some amount of smoothing.
I've set it up, as well as I could, so that changing the grid size for a fixed smoothness will have a minimal impact on the overall look of the surface. For example, try this example:
xy = rand(100,2);
z = rand(100,1);
zgrid50 = gridfit(xy(:,1),xy(:,2),z,50,50);
zgrid200 = gridfit(xy(:,1),xy(:,2),z,200,200);
The two surfaces will look qualitatively similar, although the second one will look less jagged because it is 4 times finer in each dimension. As expected, this is what you gain by using a finer grid, at some cost in time for the solve. In this sense, smoothness is fully decoupled (as well as I could do that) from the size of the grid itself, for any given set of data.
Given a choice of grid, a smoothness of 1 means that there is an equal amount of importance associated with reducing the residuals compared to the overall smoothness of the surface. Be careful however, in making the smoothness too large or too small, as then you may find numerical artifacts generated from the solution. Floating point arithmetic only goes so far.
By the way, a good rule of thumb is if you cannot get the result you wish with a smoothness somewhere between 0.001 and 1000, then I would look to see if there may be a fundamental problem in the data.
As well, if one chose to fit a 2x2 surface through 1000 data points, you will never get a perfect fit regardless of how small a smoothing parameter you choose, UNLESS the data happens to perfectly fit the implicit model used to interpolate inside a unit square. (For example, tensor product linear interpolation uses an implicit local model of a0 + a1*x + a2*y + a3*x*y within any cell.) There simply are not sufficient degrees of freedom in the model for a very coarse grid if you have many data points.
Hi! Thank you for this useful function! I needed to fit scattered points with noise added, and Gridfit handles that nicely! Now, I would like to test different values of smoothness and different numbers of x-y nodes in order to find the best pair of parameters for my problem. So, I have some questions about the 'smoothness' parameter. You explain in the help that the smoothness parameter is "normalized", so a value of '1' may generate reasonable results. I would like to know with respect to what the 'smoothness' value is normalized. If I use either 'gradient' or 'laplacian' as the regularizer, is the smoothness defined in the same way?
Gridfit is not a tool to do 2-d interpolation. Use interp2 instead.
How could I use gridfit to interpolate a set of 2D data on an image?
Just set all z equal to 0?
All your submissions I have tried are great quality! Thank you very much. I have a directory for your functions, they could easily be part of Matlab's distribution in my opinion.
Re gridfit, do you still have in mind submitting a 3D/ND version of it?
What can I say? You can have a good result, or a faster one. Pick only one.
The fact is, gridfit is actually blazingly fast given what it does, which in this case requires the solution of a linear system with 90,000 unknowns. A quick test shows that took only about 2 seconds on my system. Perhaps you want me to do magic? Even a wave of a magic wand takes about 2 seconds.
Hi, this is an awesome program! But it seems to be very memory consuming. I can only solve for Zgrid points with a rather small dimension (e.g. 300x300). Griddata seems to be able to solve a larger matrix faster, but the interpolation is not nearly as good. Is there any way to obtain higher resolution?
By default, it uses triangular interpolation. If you read the help, you will see that a bilinear (quad based) interpolation method is an option. Is one better than another? In general, I'd claim that if you can see the difference, then your grid is far too coarse! In any event, HAD you actually read the help before posting, in your example you might have tried adding one more option to the call.
[zGrid, xGrid, yGrid] = gridfit(SourceData(:, 1), SourceData(:, 2), SourceData(:, 3), xNodes, yNodes, 'smoothness', 0.1,'interp','bi');
Your second comment is about the smoothing parameter. Note that gridfit allows a common scheme for the smoothness penalty function - that the Laplacian is biased towards zero. This is the sum of the second partials, and it is arguably the logical choice for SOME physical surfaces. HOWEVER, note that the default is NOT that method. In fact, the default is a method that DOES uncouple those second partials!!!!!!!!!!!!!!!! Again, read the help, rather than assuming you know the answer and then asking a question based on that wrong assumption.
As for the optimal smoothness parameter changing based on the grid spacing, I have found that for virtually any method of choosing a smoothness parameter, I can also come up with a case where it will not be the best. Gridfit uses a default smoothing parameter that is "reasonable" for many problems out of the box. Is it optimal? Probably not. Any degree of true optimality might involve algorithms that would be slower to run, and algorithmic speed is desirable here. Should there be an adaptive option in gridfit, that would allow the user to set it and walk away, knowing that it will be slower to run, but it will give an always optimal result, for all users? Truly good adaptive methods that will never fail are also terribly difficult to write. Look at the numerical integration tools in MATLAB (quad, quadl, quadgk, etc.) I can easily make those tools fail by a careful choice of integrand and interval.
All that I can suggest is to use the facilities built into gridfit as a computational tool. If it produces by default not exactly what you want, then I have provided a few knobs you can adjust. Only you know if you like the surface you create with a tool like gridfit. But at least the knobs are there to turn to generally produce something that will make you happy.
GridFit is pretty good, but I have a few questions.
1) The interpolation seems to be triangular, which causes some symmetry problems. Below is an example program to illustrate this. Does anybody know if there is an advantage to using triangular interpolation (three points) instead of rectangular interpolation (four points)?
2) If I change the grid spacing, I also have to adjust the smoothness parameter or GridFit gives different results. This appears to be related to the way GridFit sets up its derivative equations. It mixes horizontal and vertical second derivatives in the same equation. For example, if the residual in the horizontal direction is +0.1 and the residual in the vertical direction is -0.1, the regularizer wrongly deems that point a "perfect fit" because the derivatives are added together. Would it be better to separate the derivatives into separate equations (separate rows in Areg)? If so, is it possible to produce the same results (same smoothness or curve profile) for a single "smoothness" parameter, no matter what grid spacing is used? I think that would be really helpful and convenient to GridFit users so that we wouldn't have to fiddle with the smoothness number manually.
% Choose four points at each of four corners and one point in the center.
% Make sure everything is symmetric around the center.
SourceData = [
0, 0, 0
0, 1, 0
1, 0, 0
1, 1, 0
0.5, 0.5, 1
];
% Create a 4x4 grid.
xNodes = linspace(0, 1, 4);
yNodes = xNodes;
% Run GridFit. The smoothness parameter has little effect on the result
% for this demonstration.
[zGrid, xGrid, yGrid] = gridfit(SourceData(:, 1), SourceData(:, 2), SourceData(:, 3), xNodes, yNodes, 'smoothness', 0.1);
% Display the surface so that we can see the asymmetry.
surf(xGrid, yGrid, zGrid);
% Calculate the asymmetry.
disp(['GridFit asymmetry is ', num2str(zGrid(2, 2) - zGrid(3, 2)), '.']);
This will definitely have much better performance around the edges compared to a delaunay based tool. They often show serious interpolation artifacts around the edges.
In terms of how good is the fit, I'm surprised that I did not return an array of residuals or predicted values, but this is easy enough to build. Use interp2 to compute predicted values, since what comes out of gridfit is a nice regular grid that interp2 will handle. Once you have the predictions, residuals are easy enough to get by subtraction, or you could compute a standard error, etc.
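A sketch of that residual computation (all data and variable names here are illustrative):

```matlab
% Illustrative noisy scattered data
x = rand(100,1); y = rand(100,1);
z = exp(-((x-0.5).^2 + (y-0.5).^2)/0.1) + 0.05*randn(100,1);
xn = linspace(0,1,40); yn = linspace(0,1,40);
[zgrid, xgrid, ygrid] = gridfit(x, y, z, xn, yn);
zpred = interp2(xgrid, ygrid, zgrid, x, y);   % prediction at each data site
resid = z - zpred;                            % residuals by subtraction
rmse  = sqrt(mean(resid.^2));                 % one simple summary of the fit
```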
Very nice, currently using this to perform a calibration between an imager and a gimbaled laser pointer. It's not quite optimized to get the best expected performance over the full FOV, but it's close. Toward the edges, it works much better than other routines I've tried. Is there a way to see how good the fit is?
Thanks for the great tool. Trying to find a way around "pull ins" when the regularizer has a significant low-spatial-frequency bias over extended areas... for example, around the top of a highly resolved Gaussian peak with 'gradient'. Using a higher order regularizer doesn't help (I've implemented a 4th order gradient): that just gives a surface that's highly "puckered" around individual sample errors. Any ideas?
I'm sorry - that was a typo: I meant gridfit when I typed griddata. I since figured it out it was a MATLAB environment issue, unrelated to gridfit. Contour plots just weren't showing up in the display. Restart solved the problem. I would edit or remove my original post, but this forum doesn't allow for it. Thanks for the quick reply.
How can anybody guess what you are doing wrong? How exactly does it fail? What are you doing exactly? What is your data? And why are you saying that griddata output is a problem, in a question about gridfit?
I'm trying to use gridfit to produce a contour map (functions: contour and contourf). MATLAB seems to interpret the griddata output okay for the surf() function, but fails with contour(). Anyone else run into this problem? I think it has something to do with grid orientation.
Thank you!!! Saved the day. (Actually, night.)
I tried some smaller parameters (say 0.1 for smoothness), but then I get a wave-like artifact pattern in my image.
What I am looking for is a combination: griddata in the inner part (so no smearing at all), and gridfit at the edge (where griddata produces NaN). Hence smearing is minimized. Can I achieve that by tuning parameters? Thank you very much!
Smaller values of the smoothing parameter give less smoothing, although zero as a value may result in a singularity.
Hi John, I have a set of 2d scattered data, and I want interpolated values on a regular (1:N,1:N) lattice. What parameters should I use to minimize the smoothing? (I need to iterate the interpolation 50+ times; setting "smoothness"=1 each time makes the resulting image very blurry.)
I get an error now, but did not last year. Seemingly it is because the code is not consistent with the new Matlab version (2011a). Maybe I am wrong. Do you have any ideas about this? The error is as follows:
??? Error using ==> mldivide
MLDIVIDE is not supported for one sparse input and one single input.
Error in ==> gridfit at 616
zgrid = reshape(A\rhs,ny,nx);
I'm pretty new to MATLAB, so maybe this is a stupid question.
I have my data already in matrix form.
I cannot use your function because it expects 3 vectors and nodes ...
Can you help me?
Basically I have this surface, the result of observations (along x and y there are categorical variables), and I'm expecting a monotonically decreasing surface ... I got it, but with some outliers ... I think your function can help me.
Any suggestions?
Melissa - gridfit merely generates a function defined at a set of discrete node points, on a regular lattice. It does not deliver an interpolated prediction at any point. But you can easily enough get that prediction from the surface produced from gridfit.
There are three interpolation methods that are allowed in gridfit. Those methods define how gridfit treated your data when it built the surface itself. Nothing requires that you use the same method to interpolate that gridfit used when it built the surface though.
The nice thing is, we already have interp2 to do 2-dimensional interpolation once the surface is defined on a regular lattice, with several good methods already there. The case of 'linear' for interp2 is equivalent to that employed in gridfit for the 'bilinear' case. It turns out that this is also a method used by tools like photoshop for image interpolation.
You can also use the other methods of interp2 though - splines for example. Or, if you prefer a simplicial interpolant, where each square of the lattice is split into a pair of triangles, then a truly linear interpolation is done across those triangles, you can find my interpns on the file exchange.
So yes, you can easily do interpolation at any arbitrary point that lies inside the lattice as defined by gridfit.
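Putting those pieces together, a minimal sketch of querying a gridfit surface at arbitrary points (xq and yq are hypothetical query locations):

```matlab
% Illustrative scattered data and a lattice fit
x = rand(200,1); y = rand(200,1); z = x.^2 + y;
[zgrid, xgrid, ygrid] = gridfit(x, y, z, linspace(0,1,50), linspace(0,1,50));
xq = [0.2; 0.7]; yq = [0.3; 0.9];
% interp2's 'linear' matches gridfit's 'bilinear' treatment; 'spline'
% is another option once the surface lives on a regular lattice.
zq = interp2(xgrid, ygrid, zgrid, xq, yq, 'linear');
```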
This is an amazing piece of work John. I bow to your superior programming skills and knowledge. I have a question though, is it possible for me to evaluate a point or a series of points on the surface?
Excellent tool that much improved my visualization of some photographic exposure data. Thanks, John, great to use one of your tools (again).
Great piece of work John. No hassle and easy to use. Thanks!
One way of using gridfit just for interpolation is:
- Use gridfit on the whole grid: zifit = gridfit(X,Y,Z,xi,yi)
- Use griddata on the same grid: zidata = griddata(X,Y,Z,xi,yi)
- Use isnan to find which elements are NaN in zidata, and apply that mask to zifit
At least for me it worked.
- Use a isnan(griddata) to know which points are inside the original data, and apply it to the points created with gridfit
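A sketch of that masking idea (the data here are made up): gridfit fills the whole lattice, griddata marks which lattice points fall outside the convex hull of the data, and the NaN pattern is copied over.

```matlab
% Illustrative scattered data
x = rand(300,1); y = rand(300,1); z = x.^2 - y;
xi = linspace(0,1,60); yi = linspace(0,1,60);
[XI, YI] = meshgrid(xi, yi);
zfit  = gridfit(x, y, z, xi, yi);     % defined on the full lattice
zdata = griddata(x, y, z, XI, YI);    % NaN outside the data's convex hull
zfit(isnan(zdata)) = NaN;             % keep the fit only where data exist
```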
Thanks for your answer, John D'Errico.
You have asked this question in multiple places. I have no idea what simulink does, so I cannot answer your question, nor can I even be sure what it is you wish to do.
Matt already answered (as an answer) that if you just desire to interpolate the function at a specific location, then interpne (from the file exchange) will solve that problem. I believe that is the correct answer from what little I know about your problem.
If you wish to create a new surface that extends further out, gridfit can do that. Just supply the new coordinates that include the old x and y, but will create a new grid that extends out as far as you wish.
Remember that any extrapolation is a risky business, trying to predict the unknown. It is RARELY accurate or even intelligent, and is only as good as the data it is built on. If you are trying to extrapolate a long distance out, don't be upset at the result.
I'm having the following problem. I have a 2d table that contains data from some measurements. How can I do the same kind of extrapolation that is possible in the SIMULINK 2-D table lookup (using the interpolation-extrapolation lookup method), but in Matlab? As far as I can tell, 'griddata' and 'interp2' cannot do the job for me.
This is the dimension of my data:
x <1x37 double>
y <1x28 double>
z <28x37 double>
Thanks in advance.
Very nice. Just what I have been looking for.
Jason - This is a problem of extrapolation, something often difficult to do intelligently. My standard response is to quote Mark Twain:
“In the space of one hundred and seventy six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over a mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oölitic Silurian Period, just a million years ago next November, the Lower Mississippi was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-pole. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo [Illinois] and New Orleans will have joined their streets together and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.”
"Life on the Mississippi", Mark Twain, 1884
The point is, extrapolation is difficult, especially if all you use is a tool that tries to take data that it sees and predict into the unknown. How should matlab know that zero is a special number? How should gridfit know that it is ok to predict smoothly beyond the extent of your data, yet stop at zero?
Having said all of that, there are several options open to you, both of which might work nicely. First, you can use the 'springs' method in gridfit. It tries to prevent extrapolation beyond your data. This is sometimes the proper solution. Try it, and you might be happy, or not. This I cannot say.
A second option is a transformation. Very often when a system has a property that it cannot go less than zero, you are working in the wrong "space". The trick here is to use a transformation to fix it. Here, I might log your data. Now, use gridfit to model the surface, allowing it to extrapolate smoothly as it wishes with the default options. Now, exponentiate the result. In effect, you have done an interpolation in the log domain, but then transformed back. Do you care if the logs went negative? Of course not. exp is a function that is positive for all real inputs. This trick often works very nicely. In effect, it is just recognizing that the interpolation is best done in the proper domain, one where your system is truly more additive.
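A minimal sketch of that transformation trick, assuming strictly positive z data (everything here is illustrative):

```matlab
% Illustrative strictly positive scattered data
x = rand(200,1); y = rand(200,1);
z = exp(-3*((x-0.5).^2 + (y-0.5).^2));
xn = linspace(-0.2, 1.2, 50); yn = xn;   % lattice extends past the data
lg = gridfit(x, y, log(z), xn, yn);      % model the surface in log space
zgrid = exp(lg);                         % back-transform: positive everywhere
```

Even where the log-domain surface extrapolates negative, the exponentiated result stays above zero.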
I have scattered 3d data of a membrane subject to tensile loads at the corners resulting in transverse displacement. I'm trying to use gridfit to look at the 3d surface representing the data. The problem is that around the edges the representation gridfit and surf produce is very inaccurate. In fact, at the corners the resulting plot shows negative values when there are no negative z displacements in the data! Any ideas? Thanks.
There is no functional form for the surface, at least unless you are willing to accept a tool like interp2 to interpolate it at any point. The best that you can do is to view that surface as a low order spline in two dimensions, but as a spline, all you have are a large number of tiny linear segments all neatly connected together. (This applies to the default method, which uses a triangulation of the regular lattice. The alternative is the bilinear interpolant, which in effect is not even truly linear, but an oddly and mildly piecewise quadratic form.)
The virtue of this approach is you can fit any set of data that follows the general form of a function z = f(x,y). You need not formulate a model as other modeling tools force you to do. But you can't get a model out of this either.
If you really want to do empirical modeling, you need to make the effort of posing a model, of choosing some form that will represent your surface. That model may be polynomial (as my polyfitn would allow you to fit) or it may be nonlinear. Then you need to use a tool that can fit that model to your data. There are many such tools on the file exchange to serve that purpose, or you can use the curve fitting toolbox.
Hi there! I have a question in relation to this tool. Suppose you have already fit a surface through the data. Is there a command to extract the formula/function of that surface? Thanks!
Just wanted to extend my gratitude that you made this available. The documentation is excellent, it works as advertised and saved me several days of setting up and debugging a regularization scheme for fitting an appropriately smoothed surface through some unequally spaced thermodynamic data that have error and do not extend to all corners of the needed P-T grid. It was a relief to be able to focus on the science rather than on debugging the scheme to process the data.
Still stumbling about these issues.
Thanks for the right directions!
Gridfit is set up to ignore any nan data. So your question is meaningless and irrelevant, because gridfit in fact ignores the nan data. See the following code fragment, taken directly from the code...
% also drop any NaN data
k = isnan(x) | isnan(y) | isnan(z);
So giving this code a rating of only 4 stars because a different code (griddata) did not work for you seems a bit silly.
If your goal is to fill in the nan elements, while leaving the actual data alone, then use inpaint_nans to interpolate. Use the right tool to solve your problem.
If your goal is to do smoothing, while also interpolating the empty (nan) data, then use gridfit. It will solve your problem, but only if you call gridfit and not griddata.
Finally, if your problem is that griddata is not working as you wish or that you don't understand griddata, then why in the name of god and little green apples, why are you asking it here, in a place intended for comments on gridfit?
I have a 512x512 array z where ~11000 points are nonzero and the rest must be nans! Why is the following code giving me an array zgd filled with only nans? How does this program deal with nans?
xi = linspace(0,1,512);
x = 1:512;
y = 1:512;
zgd = griddata(x,y,z,xg,yg);
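For reference, the NaN filtering that gridfit performs internally can be sketched outside MATLAB. The pure-Python fragment below is an illustration, not gridfit's actual code: it mirrors the mask `k = isnan(x) | isnan(y) | isnan(z)` quoted in the answer, keeping only the fully finite (x, y, z) triples before any interpolant ever sees them.

```python
import math

def drop_nans(x, y, z):
    """Keep only the (x, y, z) triples where all three values are finite,
    mirroring gridfit's k = isnan(x) | isnan(y) | isnan(z) filter."""
    keep = [(a, b, c) for a, b, c in zip(x, y, z)
            if not (math.isnan(a) or math.isnan(b) or math.isnan(c))]
    xs, ys, zs = zip(*keep)
    return list(xs), list(ys), list(zs)

nan = float("nan")
x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.0, 3.0, 4.0]
z = [5.0, nan, nan, 7.0]   # mostly-NaN z, as in the 512x512 question

xs, ys, zs = drop_nans(x, y, z)
print(xs, ys, zs)   # [1.0, 4.0] [1.0, 4.0] [5.0, 7.0]
```

Passing the unfiltered arrays straight to an interpolant is what floods the output with NaN.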
excellent job on meteorological balloon data!
+1 vote for gridfitn ;)
Thanks John for your code. I enjoy it very much!
the new setting "'smoothness',[xsmooth ysmooth]" is a Godsend. Many Thanks
I like Adam's idea, and this is easily enough put into the code.
This is brilliant, but a capability to vary the degree of smoothing in x and y exclusively would be very useful
I've responded via direct e-mail, but the gist of it is that I have plans to introduce a gridfitn one day, but it will take some time to write, and I have many things to write on that same list.
Hi there! Is there a gridfit3d available at all? I have a 3d array of electric field data (369x551x325) on an irregular but highly collinear grid, and griddata3 therefore does not work. As I understand it, the gridfit code would be perfect for this if it could deal with 3d data. Thanks!
Alternatively, a simple citation style for a FEX submission is found here, if all you wish is a citation:
Perhaps I should add this...
If you really desire something written, many years ago I was granted a pair of US patents on a similar idea, there applied to higher dimensional problems in color modeling. Gridfit only works in 2-dimensions of course.
US Patent # 4992861, 4941039
At least for me, actually reading a patent is about as miserable a pastime as I can imagine.
Excellent work! I needed a sound extrapolation algorithm to pad some gravity data to use with both vertical continuation and source depth inversion. It worked very well for me. Might I add this has one of the best and most easily understood helps I've seen for Matlab code, even for someone like me whose first language is not English.
Someone asked before about a journal reference and I did not see your reply. I'd like to know as well if there's anything published we can reference. Thank you.
John thank you for you prompt reply.
I really need a tool that only interpolates in a given arbitrary domain, as I need to create contour lines in it, which wouldn't make sense out of the domain boundaries.
Could you please post a message here when you post your new tool?
In answer to the question of extrapolation, there are several issues here.
Gridfit tries to extrapolate gracefully, that is, it will extrapolate as smoothly and linearly as possible. This is the default mode. It is also possible to specify the 'springs' method for gridfit. This method tries to extrapolate minimally, while still generating solutions over the entire domain of interest. Think of that method as extrapolating as a constant, where it can do so, at least as a continuous, smooth function.
Regardless, these methods still do extrapolation. If you absolutely must avoid any extrapolation, leaving the entire domain outside of the convex hull empty, then an option is to use griddata. Griddata is of course an interpolant. It will do no smoothing as gridfit allows you to control. But griddata will interpolate inside the convex hull, and leave points outside as NaN.
The last possibility is if you need to do smoothing, but only inside a given, arbitrary domain, convex or not. This requires a new tool that is like gridfit, one that I have written but not posted on the FEX yet, but will do so.
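The interpolate-only-inside-the-hull behaviour is easy to picture in one dimension. The sketch below is a hypothetical pure-Python analogue, not anything taken from gridfit or griddata: outside the data range it either returns NaN (griddata-style) or continues the edge segment linearly (roughly the flavour of gridfit's default extrapolation).

```python
def interp1(xs, ys, x, extrapolate=False):
    """1-D analogue of the griddata-vs-gridfit choice: inside the data
    range both interpolate; outside, return NaN (griddata-style) unless
    extrapolate=True (a smooth linear continuation, gridfit-style)."""
    if x < xs[0] or x > xs[-1]:
        if not extrapolate:
            return float("nan")                 # leave the point empty
        # continue the nearest edge segment linearly
        i = 0 if x < xs[0] else len(xs) - 2
    else:
        i = next(j for j in range(len(xs) - 1) if xs[j] <= x <= xs[j + 1])
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(interp1(xs, ys, 3.0))                    # nan: outside the data range
print(interp1(xs, ys, 3.0, extrapolate=True))  # 7.0: edge slope continued
```

The same trade-off applies in 2-D, with the convex hull of the data playing the role of the interval here.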
Is it possible to totally avoid any extrapolation?
Use of an interpolant followed by a smooth is a poor second choice, for several reasons.
Gridfit finds the surface that is as smooth as possible, that is consistent with the data. Smoothing a interpolated surface after the fact does not ensure that the result is consistent with the data. When you do a posterior smooth of the surface, the act of smoothing is now disconnected from the data.
Next, if you use griddata to interpolate a surface in advance, you will only get a result that lies within the convex hull of the data. Griddata will not extrapolate unless you use the v4 option, and that option is VERY slow for any significant number of points. Gridfit can extrapolate using several methods, depending upon your goals.
Extrapolation is an important capability of gridfit. But, extrapolation can come in many different forms. For example, consider a data set which is just slightly concave along one edge of the data. The published demo has a good example of this. See the second example, fitting a trigonometric surface. Along the edges of that data, see that the griddata interpolant generates long, thin triangles. Long thin triangles are terrible for interpolation, so what happens is you see strange interpolation artifacts along the edges.
Gridfit allows you to have replicates in your data, treating them properly in a least squares sense to generate the surface. Try out griddata with replicate data points. Even near replicate points can introduce nasty artifacts in the interpolant. Worse, the delaunay triangulation used in griddata will often have problems if you have sets of collinear data points. This is no problem at all for gridfit.
Finally, you can easily control the extent of smoothing done by gridfit.
In short, griddata has its purposes. There are general circumstances when I recommend griddata. But I would never recommend the use of griddata to be then followed by a smoothing operator.
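The core idea (find the surface that is as smooth as possible while staying consistent with the data) is a penalized least-squares problem. Here is a one-dimensional sketch of that principle; gridfit's 2-D machinery is considerably more elaborate, and this toy assumes the data sit exactly at the grid nodes.

```python
def smooth_fit(y, lam):
    """Solve min ||z - y||^2 + lam * ||D2 z||^2, i.e. (I + lam*D2'D2) z = y,
    where D2 is the (n-2) x n second-difference matrix.  Dense Gaussian
    elimination; fine for tiny n, purely illustrative."""
    n = len(y)
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for r in range(n - 2):
        row = [0.0] * n
        row[r], row[r + 1], row[r + 2] = 1.0, -2.0, 1.0
        for i in range(n):          # accumulate lam * D2'D2 as outer products
            for j in range(n):
                A[i][j] += lam * row[i] * row[j]
    b = list(y)
    for k in range(n):              # forward elimination (A is SPD, no pivoting)
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    z = [0.0] * n                   # back substitution
    for i in range(n - 1, -1, -1):
        z[i] = (b[i] - sum(A[i][j] * z[j] for j in range(i + 1, n))) / A[i][i]
    return z

noisy = [0.0, 1.2, 1.8, 3.1, 3.9, 5.2]   # roughly linear data with noise
print(smooth_fit(noisy, 0.0))            # lam = 0: reproduces the data exactly
print(smooth_fit(noisy, 100.0))          # large lam: nearly a straight line
```

Because the smoothing penalty and the data misfit are solved together, the result stays tied to the data; smoothing an interpolated surface afterwards has no such guarantee.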
What is the advantage of using gridfit instead of griddata followed by smoothing?
Hello John, the function is really nice but I'm wondering if it would be possible to use it with non-rectangular domains where the nodes are just specified with (xn,yn) pairs n=1,..., N
Thanks for any help !
Thank you mr. D'Errico, for making my life easier.You piece of code is really good.
This routine is almost magic...it is the only routine I have found that finds surfaces I know are there amidst very noisy elevation data. Thanks for contributing it.
This is spectacular. I hope the Mathworks is paying you some sort of royalty for your efforts!
wow! looks awesome! was wondering if there was any way to fit a surface around a 3D scatter? maybe this could be used on the data of for e.g x,y data corresponding to max(z) and then repeating for max(x) and max(y)?
Good job John, beautiful tool! Very parameterizable and helps a lot in giving you a feel of how your data is behaving.
Does anyone know how I can anchor the surface to any given point? I'd like to force the surface to pass through a given set of (x,y,z) anchor points, something like establishing an "infinite" weight to these keypoints, while not altering the weight of the other "normal" points...
Versatile smoothing function.
The review by Andrey is a bit of self serving braggadocio, since he does not actually offer Matlab code on the file exchange that replicates in any form what gridfit does. By the way, the link he gives is for a .exe file - BEWARE.
His time comparisons are also meaningless of course, since the times referred to are for a now wildly obsolete computer, and comparing an interpolation code to this class of modeling code makes no sense anyway.
Finally, converting an interpolation code to a noise reducing surface modeling code is not at all trivial. I welcome Andrey to do so and provide MATLAB code for the purpose, if that is truly so easy.
You wrote "For example, my computer took 1020 seconds to solve a 500x500 problem." What an old comp do you use? I need 5 second to solve 500x500 problem. Download my prog from http://www.smartfills.com/Html/2D.zip . Compare with your own method. My own is in fact interpolation, but could be modified for approximation without much problems.
It is a very good toolbox, thanks!
Excellent utility!! A default setting did everything I needed with minimal error.
I've been using your incredible program for some years and in a wide range of applications. My question is, do you have written any kind of documentation in a scientific journal, for example, for any citation and also to know a little bit more about this super "surface fitting". Any reference can help too.
BTW: VP uses the more silly test to "compare" GRIDDATA and GRIDFIT, and, as John says, he doesn't bother to read that by default the former is 'linear' and the latter uses 'smooth' to 1. But, any way...
1. In strict accordance with the help, xi and yi WERE x-nodes and y-nodes of interest, generated by MESHGRID. If this kind of input is not accepted by GRIDFIT, this means that GRIDFIT must be improved.
2. This is also necessary for comparison with GRIDDATA, which has no problems with my example.
3. If these arguments are still insufficient, and you insist that I am incorrect, then you have to improve the description of xi and yi in your help and define them as VALID INPUT arguments for MESHGRID. However, this will again mean an improvement.
4. If this is not convincing for you, look at the example below. If it is not a strong indication of necessary improvements, then I have to follow the principle of "Hände hoch" (hands up) - this is much better than the cold war around an empty egg.
The problem that VP has is he failed to read the help, or apparently bother to understand what gridfit does. I'd recommend reading the help, and perhaps looking at the examples when you don't understand what you are using. I do understand that real men don't need no steenkin help. Perhaps this is the approach that VP has followed.
Gridfit does not interpolate a surface at some random scattered list of points, as would griddata.
Gridfit generates a surface from scattered data on a complete, regular lattice of points. The node arguments are what he might have passed into meshgrid. Again, READ THE HELP.
What is wrong with calling GRIDFIT? Needs improvement.
??? Error using ==> gridfit at 404
xnodes and ynodes must be monotone increasing
How to use gridfit with polar coordinates ?
[th,r] = meshgrid((0:5:360)*pi/180,0:5:300);
[X,Y] = pol2cart(th,r);
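A sketch of the usual answer to the polar-coordinates question: convert the polar nodes to Cartesian scattered data first, then fit in (x, y). The fragment below is a hypothetical pure-Python translation of the MATLAB lines above, not code from gridfit.

```python
import math

def pol2cart(th, r):
    """MATLAB-style pol2cart for scalars: (theta, r) -> (x, y)."""
    return r * math.cos(th), r * math.sin(th)

# Build the same node set as meshgrid((0:5:360)*pi/180, 0:5:300),
# then flatten to scattered (x, y) columns -- the form a scattered-data
# fitter like gridfit expects for its data arguments.
thetas = [math.radians(d) for d in range(0, 361, 5)]
radii = range(0, 301, 5)

pts = [pol2cart(th, r) for r in radii for th in thetas]
x = [p[0] for p in pts]
y = [p[1] for p in pts]

print(len(pts))            # 73 angles x 61 radii = 4453 scattered points
print(pol2cart(0.0, 10.0)) # (10.0, 0.0)
```

Once flattened this way, the polar structure is invisible to the fitter; it is just scattered data.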
Henrique - My guess is that you have called gridfit with no arguments. This code is not a gui or command. See the demos for examples of the use of this code. It truly does work, at least when called with data. John
It is not working! Matlab says:
"??? Input argument "x" is undefined.
Error in ==> gridfit at 325
I tried to change x(:) to x(:,1), but the problem stays the same. My Matlab is version 7. Does it have to be more recent?
Thanks heaps for the function, it saved me a lot of work. Surface fitting has come a long way! absolutely awesome fit to my data. and really easy to use.
I just write to say..."Thank you"!
PS. but I add: It would be perfect if there was a way to sidestep the extrapolation of the grid nodes beyond the data point boundary. It should be added in the next Matlab release!
Great tool and insightful docs! Gridfit succeeds where griddata fails; that is, with noisy & ill-spaced data such as exist in the real world. Thanks!
Thanks for your programs. It solves my problem now:)
by the way, would you like to work out some way to interpolate the data just like the surfer does
This is very very good work!
This is simply great, with all we can expect for a replacement to griddata (which fails too often).
Very very good John! Take good care of Amy!
Excellent function, John - this has been extremely useful to me.
The ability to turn off extrapolation beyond the bounds of the data would be very useful, though.
A note for the inexperienced user:
If you have repetitions (multiple points with the same (x,y) but different (z)), this algorithm will cope where griddata might fail. In such a case, think hard about why you have these repetitions; it may be that volumetric plotting is more appropriate than surfaces. If so, you could get a good-looking surface out of this algorithm which isn't totally meaningful.
That's not a fault with this algorithm; it just means that you have to think hard about why you are getting repeated points in Z: If it's acceptable to average these out to a surface, this algorithm is for you!
Excellent! This is exactly what I was searching for. Thank you very much for this great contribution.
Excellent program and documentation.
A question: the help states "Gridfit is not an interpolant." Why is that? It seems to be perfectly able to be used as such, given the outputs and how they were generated. One use I see in my work is to use this function to interpolate where repeated values are present. I've yet to find another Matlab solution that works as well as this does.
I'll invite Håvard Torpe to contact me via e-mail to discuss this idea. But while I COULD allow the user to turn off extrapolation in gridfit, this will likely cause singularities - serious numerical problems, for most users who might do so.
Works like a charm right from the moment you start using it.
However: I'd like gridfit() NOT to extrapolate values for gridnodes that has no datapoint in the vicinity. Any ideas?
Should be included into the default distribution of matlab! perhaps somehow integrated into griddata?
Does a great job!
This is an excellent program. It makes it easy to fit surfaces to 3D data.
Great ! Thank you!
Well done John, I have been looking for this for a while and I think it is a must-have code. Thank you very much for being so helpful to the Matlab community.
I was struggling with griddata for days, because of its problems when extrapolating... this is absolutely great!
Thank you for your great help to the Matlab community. We love you
This is exactly what I am looking for... Thanks you!!! Save me a lot of time to write one...
simply said: just another extremely useful member of the d'erriconian-draconian family of must-haves if you're professionally serious about nd-data investigation / reconstruction / interpolation / visualization, which smoothly unites with its now-famous siblings (consolidator, inpaint-nans) into a very nice toolbox for those everyday cheating-with-data endeavors...
the code is very clean and the profiler reveals no unnecessary hot-spot...
HOWEVER, in contrast, the help bit is just unbelievably clumsy and almost indigestible without severe anxiolytic medication - JOHN, JOHN! this part requires some serious overhaul and sprucin'-up...
altogether, very well done - and thank you for this snippet
https://neeness.com/when-do-snakes-not-move/
When Do Snakes Not Move?
What happens when a snake starves? The results show that starving snakes reduce their resting metabolic rate and change to metabolising lipids while sparing their protein stores. This was done to a degree where all snakes were able to increase in length despite a significant weight loss.
Why does a snake look twisted? Ebola-like virus may be result of two viruses merging. Scientists have finally found the cause of a mysterious disease that makes snakes tie themselves up into knots, stare off into space, and waste away—the reptiles are infected with an Ebola-like virus, a new study says.
How do you tell if a snake is hibernating or dead? If your snake is old enough to be close to dying of old age, it will be less of a shock when it dies. In short, a hibernating snake will be responsive when you move close by or handle it. However, a dead snake will be completely unresponsive. If you’re unsure, you should always check with a veterinarian.
When Do Snakes Not Move – Related Questions
What do snakes do before they die?
A dead snake will still have its eyes open, but it will not react to your touch. When they are dead, they will instead hang limply in your hand when you lift and handle them. Simply put, the easiest way to distinguish between a live snake and a dead one is responsiveness to your touch.
How long can I leave a thawed mouse in my snakes cage?
How long can you leave a thawed mouse? You can leave a thawed mouse in your snakes’ cage for about 24 hours.
What happens to a snakes body when it dies?
Even if the snake has been dead for a few hours, the ions in the snake’s nerves are still active and will respond to stimuli. If a dead snake is touched or moved, the nerves will react and send electrical impulses throughout the body, triggering muscle movements.
How long can I leave a live mouse in my snakes cage?
You can leave a live mouse in your snake’s cage for about 10 to 30 minutes. You should not leave the rodent with the snake for more than half an hour if your snake is not hungry. The mouse can injure your snake pet.
What does it mean when a snake stares at you?
A snake usually stares at its owner because it wants to be fed. Other reasons include protecting its environment, sensing heat, and lacking trust. In some cases, it can be a sign of stargazing, which is a dangerous condition requiring medical treatment.
What is too cold for snakes?
Best Temperatures for Snakes
Below 60 degrees Fahrenheit, snakes become sluggish. Above 95 degrees F, snakes become overheated.
Why does my snake keep trying to escape?
Most of the time, this behavior can be explained by the fact that your snake is crepuscular and naturally becomes more active at night — they are not waiting until the lights are out to make a grand escape, just waking up at a normal time of day for their species and trying to get some exercise in.
Do snakes fart?
Snakes can and do fart. However, due to being strict carnivores, they are less likely to fart than other mammals (as diet plays a crucial role in this behavior and the creation and buildup of gas). In a healthy snake, farts are infrequent and unlikely to be heard and smelt.
How often should I feed my snake?
How often should I feed my snake? That all depends on your snake’s age, size, and activity level. Smaller or younger snakes usually eat twice each week, while larger, more mature snakes typically eat once every week or two. Female snakes approaching breeding season can be fed more frequently.
What months do snakes hibernate?
Snake brumation can begin anytime from September to December and last until March or April, depending on the weather pattern. In addition, snakes may come out of brumation if a warm front changes the weather, warming their blood and making them more active.
Can you smell a snake in your house?
In most cases, you won’t know if you have a snake in your home until you see it, but some venomous snakes, like copperheads (which are found in 28 U.S. states), can smell like cucumbers, according to experts.
Do snakes die if cut in half?
The separated pieces of snakes and lizards may seem to be alive but they will eventually stop moving and die because their blood supply is cut. It’s impossible for cut vessels and organs and nerves to reattach or realign on their own.
Will a dead snake attract other snakes?
“It is possible that a dead female snake might attract a male, but only because male snakes recognize receptive females by chemical cues and don’t understand death.”
How long can you leave a thawed mouse?
About 24 hours is the max. Usually only overnight though.
How do you tell if a mouse is thawed?
Since it’s the thickest (normally the gut of the stomach) the inside could still be slightly frozen, so even though it’s warm when you first take it out, you need to wait and make sure that part doesn’t cool off quicker than it should. When you feel it get cold again from the inside, you’ll know it.
Can you feed a snake two days in a row?
Re: Is it ok to feed two days in a row
You should wait at least 4 days in-between feeding. You should feed your BP a rodent that is about the same width as the fattest part of the snake. I would try feeding a 60 gram rat next feeding day and see if that fixes the issue. That’s the problem.
How long can a snake survive without its head?
If a mammal loses its head, it will die almost immediately. But snakes and other ectotherms, which don’t need as much oxygen to fuel the brain, can probably live on for minutes or even hours, Penning said. “Severing the head isn’t going to cause immediate death in the animal,” Penning told Live Science.
Can you leave a thawed mouse in the tank?
Yes the food item can be left over night and the next day safely. You should NEVER EVER refreeze thawed meat of any description. If the snake does not eat the food it should be removed and in this case because this is a new animal it should be thrown away.
Can 2 snakes live in the same tank?
Is Keeping Two Snakes Together Recommended? It’s rarely a good idea to keep a pair of snakes in the same tank. However, much depends on the species, size, temperament, and sex combination. * Snakes that are housed together MUST be the same size and separated during feeding.
Where do snakes like to be touched?
Snakes. There are quite a number of snakes that enjoy being held and handled on a daily basis. Some like to rest on your arms and shoulders and even gently wrap around your hands. Despite the bad rap they get, snakes can be very gentle and friendly pets if you have the right kind.
What smell do snakes hate?
Ammonia: Snakes dislike the odor of ammonia so one option is to spray it around any affected areas. Another option is to soak a rug in ammonia and place it in an unsealed bag near any areas inhabited by snakes to deter them away.
https://forums.adobe.com/thread/1099676
Apologies if my questions are strange - but I'm a newbie to CQ
We are in the process of implementing CQ, and would like to feature author user documentation on a wiki-like site (e.g. Confluence). Unfortunately, we don't have a LDAP available and I would much prefer not to maintain users in two separate systems.
Is there a way to use CQ as authentication source for the wiki-site (i.e. CQ is the user-store)? Or is there any other good way to restrict the access to the wiki-site to only users of CQ authors?
Thanks for your input.
There's nothing out of the box which would do this, but there's also nothing stopping you from doing it.
http://www.grayduckllc.com/2016/03/09/international_travel.html
Recently, I took a ski trip with a few consulting friends to Chamonix, stopping in Geneva on the way (and Berlin after). We used TripSplit to settle, but ran into an issue with our expenses being in multiple currencies. Typically, my international trips have only been to one country, and thus one currency (e.g., Mexico and pesos), so expenses can just be divvied in that currency and converted after.
Since our trip touched on 3 currencies (Swiss Francs, Euros, and USD for the AirBNB), we ran into a bit of an issue translating those into USD. We ended up just converting before entering it into the spreadsheet (~1.1 for Euros, flat for Francs), but that was inconvenient. To change that, I updated the spreadsheet to include a "currency" field for each line item, with a simple vlookup to grab the exchange rate.
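The vlookup logic reduces to a tiny function. The sketch below is plain JavaScript (the real thing lives in Google Sheets formulas and Apps Script); the line items and rates are made up for illustration, not real quotes.

```javascript
// Each line item carries a currency code; a lookup table of locked-in
// exchange rates (the "vlookup" column) converts everything to USD
// before splitting.  Rates are illustrative only.
const lockedRates = { USD: 1.0, EUR: 1.1, CHF: 1.0 };

function toUSD(amount, currency, rates) {
  if (!(currency in rates)) {
    throw new Error("no locked rate for " + currency);
  }
  return amount * rates[currency];
}

const lineItems = [
  { desc: "AirBnB", amount: 300, currency: "USD" },
  { desc: "Dinner in Chamonix", amount: 100, currency: "EUR" },
  { desc: "Geneva transit", amount: 50, currency: "CHF" },
];

const totalUSD = lineItems.reduce(
  (sum, item) => sum + toUSD(item.amount, item.currency, lockedRates), 0);

console.log(totalUSD); // ~460, up to float rounding
```

Throwing on an unknown currency mimics the #N/A you would get from a failed vlookup, which is the behaviour you want when someone enters a currency without a locked rate.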
Locking in an Exchange Rate
I toyed with two options for entering that exchange rate, manual and automatic.
Automatic was the easier option, but it ran into issues as it would change daily (or more frequently), so depending on when you settled, it would change how much you owed. While currency fluctuations are usually fairly minor, I happened to be in Argentina during the most recent presidential turnover (December 2015), with a 30% jump in one day, so was wary of that issue.
Likewise, I wanted it to be slightly automated to not force people to look up exchange rates, so a full manual approach was out.
My compromise approach is to automatically pull down the current exchange rates (shoutout to fxexchangerate.com for their unwitting participation), but write a brief Google Sheets script to lock in a given rate*, thus avoiding currency fluctuations. It also updates the date of the "pull" to make sure it isn't too far off. I'll leave it to each group to decide when to lock in rates.
I was unable to figure out how to add buttons to call the function as with VBA macros in Excel, but turns out there's a nifty onOpen() function in Google App Scripts, which I used to add a one-item menu to the Google Sheets toolbar.
*Using a glorified paste values
- Keep working on mobile friendly version
- Potentially incorporate a rolling 7 day exchange rate
https://security.stackexchange.com/questions/126157/how-can-i-protect-my-credit-card-on-sites-where-2fa-is-not-an-option
I'm using Visa Express, and sometimes when I'm shopping, the store doesn't support Verified by Visa - so there is no 2FA option. Normally when that option is available, it forces me to enter my phone number, where I receive an OTP with a time expiration in order to verify the payment. I called my bank and they said that I should avoid web shops that don't support that option. So my question here is: is there any other way to protect myself, or is it maybe more secure to use PayPal instead of my credit card?
https://community.oracle.com/thread/2483647?tstart=44
Client has OBIA 184.108.40.206 Marketing analytics module; i.e., OBIEE 10g integrated with Siebel 8.0. Now they want to upgrade OBIEE to 11g. Based on the OBIEE certification matrix, the OBIEE 220.127.116.11 Segmentation engine is compatible with Siebel 18.104.22.168 onwards. It is still not clear if they would upgrade Siebel.
Now is there any alternate way so that segments created in OBIEE 11g can be used in Siebel 8.0, like, can we export the Segments from OBIEE 11g and import it to Siebel 8.0 and then execute the campaign?
https://iberaula.es/Forum/VerForo?id_foro=10625
Marcos Sanz Ramos
ERROR: time increment<0.0000001 (Courant)
This error is quite difficult to solve when modelling sediment transport because the mesh is updated each time step considering the bed load on each element. If the parameters are not properly implemented, for example, an element could be eroded enormously and, at that point, abnormal results on the bed elevation can produce numerical instabilities.
Please check that you have implemented the bed load parameters within the usable range and whether some abnormal erosion is produced (the message also indicates the coordinates where the Courant condition is not fulfilled).
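The error itself comes from the Courant (CFL) stability limit: the stable timestep shrinks as the local wave speed grows, so a wildly over-eroded (hence very deep) element drives dt below the 1e-7 threshold. Below is a hedged Python sketch of the shallow-water form of that limit; Iber's internal formula may differ in detail, and the CFL coefficient here is an assumption.

```python
import math

def courant_dt(dx, velocity, depth, g=9.81, cfl=0.45):
    """Largest stable timestep for an explicit shallow-water scheme:
    dt = CFL * dx / (|u| + sqrt(g*h)).  Illustrative only."""
    wave_speed = abs(velocity) + math.sqrt(g * depth)
    return cfl * dx / wave_speed

# Healthy element: reasonable depth and velocity.
print(courant_dt(dx=2.0, velocity=1.0, depth=1.0))      # ~0.218 s

# Runaway erosion deepens the element enormously, so the wave speed
# explodes and dt collapses toward the 1e-7 threshold in the error.
print(courant_dt(dx=2.0, velocity=1.0, depth=10000.0))  # ~0.0029 s
```

This is why the fix is to rein in the bed-load parameters rather than to fight the timestep directly.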
Thread: "ERROR: time increment<0.0000001 (Courant)" | Rodier Rose | 08/06/2021 10:23
Reply | Marcos Sanz Ramos | 09/06/2021 07:17
http://meusite1.com.br/gb/blaq+diamond+slond+uthando+song+download/xhtml
Download Blaq Diamond Slond Uthando Song Download Mp3
This song has been watched 195,033 times. Download Uthando Mp3. You can download blaq+diamond+slond+uthando+song+download.mp3 for free on meusite1.com.br. Free Blaq Diamond Slond Uthando Song Download Mp3 download. This song uploaded by Blaq Diamond - Topic with like 1,197.
Provided to YouTube by Believe SAS Uthando · Blaq Diamond Inqola ℗ Ambitiouz Entertainment Released on: 2017-11-22 Music Publisher: Ambitious...
Blaq Diamond- Ubani owayazi [Lyrics]✔ Download
Thanks for watching🙌💙 Download our latest Track slikouronlife.co.za/song/154506/vivid-ride Please show your support by just following us on our...
Miss Pru Dj - Price to pay Ft Blaq Diamond & Malome Vector (Official Audio)✔ Download
Being in demand does not come easy nor does it happen overnight. Miss Pru has shown us this by following her passion day and night and dedicating...
Blaq Diamond - Sthandwa (Official Music Video)✔ Download
Get it here on iTunes: itunes.apple.com/za/album/sthandwa-single/id1250045699
Blaq diamond uthando dance video (KIDS)✔ Download
SUPERSTARDAN WITH KASI KIDS DANCE CLASS
Blaq Diamond - Umuthi (Live)✔ Download
Blaq Diamond - Isoka (Official Music Video)✔ Download
Afro-pop duo, Blaq Diamond release music video for Isoka, to celebrate young love. #Isoka #BlaqDiamond #ambitiouzEnt Buy #Isoka Here: iTunes: ...
Thee Legacy, DJ Maphorisa - Thando ft. Mlindo The Vocalist✔ Download
Official Music Video for 'Thando ft. Mlindo the Vocalist' by Thee Legacy and DJ Maphorisa. Downlod or stream the track here - ...
Naima Kay - thando ( All about Love)✔ Download
Song from Naima Kay's 3rd studio album. Available on iTunes For Bookings Email: firstname.lastname@example.org mobile: +27 82 792 9693 / 079 731 3681
https://www.bleepingcomputer.com/forums/t/498014/win7-stuck/
Posted 14 June 2013 - 02:36 AM
Posted 14 June 2013 - 11:54 AM
Try following these steps:
Ignore the first step since you can't get into Safe Mode. Report back once you've tried them and let us know if they worked.
Some other steps you can try are here: http://support.microsoft.com/kb/927392
Does your computer have a recovery partition, or did the CD come with the computer? It's not recommended to use a generic installation CD as you may not have all the correct drivers once the OS installs. If you have a recovery partition, you can try to repair your OS from there.
But if you want to do a reinstall, just put the CD in, start up or restart the computer, and follow the prompts to install the OS. If it won't boot from the CD, you may have to go into the BIOS and select the CD option in the boot order to make sure it tries to boot from the CD before it boots from the hard drive. Make sure you have the product key handy.
https://unform.com/unform10/documentation/server_manager.htm
The server manager is a browser-based tool accessed by adding ?sm=1 to the standard UnForm web server URL. For example: http://yourserver:port/?sm=1 (the host and port are site-specific).
An administrator login is required. Once logged in, many server administrative tools are provided, including a job history table, an active connections table, a log viewer and analyzer, a configuration tool, scheduled jobs editor, and a server restart option.
The server manager window presents a toolbar menu with options for specific management tasks. These task windows are presented in full-width frames below the menu. The task options presented include:
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816942.33/warc/CC-MAIN-20240415045222-20240415075222-00881.warc.gz
|
CC-MAIN-2024-18
| 597
| 4
|
https://phys.washington.edu/events/2022-02-24
|
code
|
Quantum systems arising in solid-state physics, chemistry, and biology invariably interact with their environment, and need to be modelled as open systems. While the theory of Markovian open quantum systems has been extensively developed, their non-Markovian generalization remains less well understood. In this talk, I will first review quantum stochastic calculus, which provides a mathematically rigorous description of a unitary group generating Markovian sub-system dynamics. From a physical standpoint, this description formalizes a model with a delta-function memory kernel. I will propose a generalization of this description to non-Markovian open quantum systems whose memory kernels are describable as complex-valued tempered Radon measures, and rigorously establish the well-definedness of the resulting dynamics. I will also show a systematic construction of a Markovian dilation of these systems, with the error between the Markovian dilation and the exact non-Markovian dynamics growing only polynomially with time. Finally, I will consider non-Markovian many-body quantum systems with n qubits and show that there is a poly(1/ε, n) quantum algorithm which approximates the state of the n qubits to within trace-distance error ε, thus placing the dynamics of non-Markovian many-body systems in the BQP complexity class.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100912.91/warc/CC-MAIN-20231209134916-20231209164916-00417.warc.gz
|
CC-MAIN-2023-50
| 1,336
| 1
|
https://ipchimp.co.uk/2011/02/23/multiple-desktops-on-multiple-monitors-how-to-cope-with-screen-overload/
|
code
|
So. Your office is now paperless. Your desk is (mainly) clear of paper. You feel proud of embracing the future.
However, over time, slowly but surely, your old paper clutter infects your digital world. You have multiple applications, email clients, web browsers; multiple cases, matters and tasks. Your monitor becomes a confusing tangle of windows upon windows and your newly gained productivity slowly drops again. What do you do?
A first step is to extend your desktop onto another monitor. You can pick up a 21-inch USB monitor for around £139 (at the time of writing). Many offices also have a glut of old 15-inch LCD monitors floating around; a second graphics card to give you a second VGA port will cost around £30. (You can also use any additional ports, e.g. DVI/HDMI, to add further monitors.) One screen can be used for “always-open” applications such as email or a web browser; the other can be used solely for work products (e.g. office applications, CAD programs, IDEs, etc.).
Those who have been smugly using Linux operating systems (e.g. Ubuntu) for ages will understand the usefulness of multiple desktops. Basically, a little icon in the corner of your screen or on your task bar allows you to have multiple instances of your desktop that exist simultaneously. You can switch between each instance using the mouse or assigned hotkeys. This effectively gives you multiple computer workspaces.
Windows has been slow to get involved in the multiple desktop party. To my knowledge no native Windows tool exists. However, through my travels I have come across a variety of third party tools that provide multiple desktop functionality:
- For XP there is a small PowerToy called Virtual Desktop Manager (see link for download) that provides up to four desktops with links on the taskbar and a handy preview feature. I started using this but found it a little slow and buggy.
- There is also a small Sysinternals tool called Desktops (see link for download) that provides similar functionality (up to four desktops) and a handy system tray icon. However, I found that applications crashed quite often when using it.
- Finally, I came across VirtuaWin, a freely distributed program licensed under the GNU General Public License. It supports all versions of Windows (I have only tried XP) and offers a portable version to avoid install conflicts. It is by far and away the fastest and most stable tool. I have the Windows key and the arrow keys set up as hotkeys to switch between the four offered desktops, and the system tray icon offers a handy one-click representation of your open programs across the desktops.
I now have two monitors and four desktops working nicely with each other. One caveat for Windows is that a relatively powerful machine is required (however, most multi-core Intel machines with gigabytes of RAM should be fine). I can have a dedicated desktop for each matter I am working on and move all distractions to other desktops. Productivity is restored.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00373.warc.gz
|
CC-MAIN-2022-27
| 3,032
| 9
|
https://answers.presonus.com/77060/how-to-adjust-note-volume-dynamics-on-midi-import-in-notion-6
|
code
|
How does one adjust the volume of notes in Notion 6 from a MIDI import? Adding score dynamics to the page doesn't seem to alter the volume of the imported notes, nor does the Tweak Dynamics feature. Given the great instrument library I'd like to adjust the volume in Notion, but can't seem to find a way or an online resource that tells me how to do this. I'm hoping there is something I'm just missing. Thank you!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816832.57/warc/CC-MAIN-20240413180040-20240413210040-00529.warc.gz
|
CC-MAIN-2024-18
| 421
| 1
|
https://awesomeopensource.com/project/vincentbernat/wiremaps
|
code
|
Wiremaps is an application to gather wiring (aka L2) information on a network using protocols like LLDP, EDP, CDP and SONMP. It also gathers information from the FDB (MAC-port table on switches), the ARP table (MAC-IP table) and some miscellaneous information like interface names.
Warning: Wiremaps is not well-maintained. It is aging and may not produce good results out of the box. I intend to rewrite it at some point but haven't had the time yet.
The ARP table is only used to link IP addresses to MAC addresses (and vice-versa). We don't use the information about the interface where this information came from.
GNU/Linux workstations need an LLDP daemon using SNMP to export information gathered. Otherwise, almost no information can be extracted from those hosts.
The situation is the same for Windows. However, there exists a commercial one.
To use this application, you need the following Debian packages:
You then need to create a database and install the corresponding schema. As the postgres user (su - postgres), you can use the following:

    createuser -P wiremaps
    createdb -O wiremaps wiremaps
You need to write a wiremaps.cfg file (see for an example). The default path for this file is /etc/wiremaps/wiremaps.cfg; you can override it with the --config option.
You can install the application with:

    python setup.py build
    sudo python setup.py install
Errors about a missing twisted/plugins/__init__.py can be ignored. You need the appropriate libraries and development tools to be able to compile Python modules; on Debian/Ubuntu, this is the Python development package. You also need Net-SNMP and its development files.
If you do not wish to install the application, you still need to compile the module used to build SNMP queries. This can be done with:

    python setup.py build_ext --inplace
You can launch the application by hand:

    twistd -no wiremaps
    twistd -no wiremaps --config=/etc/wiremaps/wiremaps.cfg

By default, wiremaps only listens on localhost. You can change this using:

    twistd -no wiremaps --interface=0.0.0.0
You can also use debian/init.d as a base for an init script (this works only if the application is installed). The init script also allows the use of older versions of Twisted (2.4).
Indexing is not done automatically. You must browse http://localhost:8087/api/1.0/equipment/refresh to initiate a full refresh. Put this command in a crontab:

    16 */3 * * * nobody curl -s http://localhost:8087/api/1.0/equipment/refresh
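If you prefer to trigger the refresh from a script rather than cron-plus-curl, a minimal Python sketch could look like this (it assumes the default localhost:8087 endpoint shown above; the helper names are mine, not part of wiremaps):

```python
import urllib.request

def refresh_url(host="localhost", port=8087):
    # Build the refresh endpoint used by the crontab example above.
    return f"http://{host}:{port}/api/1.0/equipment/refresh"

def trigger_refresh(host="localhost", port=8087, timeout=300):
    # Equivalent to `curl -s` against the endpoint; returns the HTTP status.
    with urllib.request.urlopen(refresh_url(host, port), timeout=timeout) as resp:
        return resp.status
```

A full refresh can take a while on a large network, hence the generous timeout.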
In the git repository (git clone git://github.com/vincentbernat/wiremaps.git), there is a debian/ directory that builds a Debian package (with dpkg-buildpackage -us -uc). It does not set up the database.
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version.
See LICENSE file for the complete text. Moreover, to avoid any problem with SNMP bindings using NetSNMP which may be linked with OpenSSL, there is an exception for OpenSSL:
In addition, as a special exception, a permission to link the code with the OpenSSL project's "OpenSSL" library (or with modified versions of it that use the same license as the "OpenSSL" library), and distribute the linked executables is given. You must obey the GNU General Public License in all respects for all of the code used other than "OpenSSL". If you modify this file, you may extend this exception to your version of the file, but you are not obligated to do so. If you do not wish to do so, delete this exception statement from your version.
The SVG files are licensed under Creative Commons Attribution 3.0. See LICENSE-CC for the complete license.
snmp.c is licensed under MIT/X11 license. See the license at the top of the file.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00752.warc.gz
|
CC-MAIN-2022-40
| 3,774
| 47
|
https://forums.sonicretro.org/index.php?threads/source-code-accidently-compiled-into-games.19503/page-2
|
code
|
Code (Text):

    move    #$0305,r0    ; 105: set control register

Lord Almighty! An interrupt for every three audio samples! And at 22 kHz, that's over 7000 interrupts per second. Talk about overhead! I don't know why programmers used interrupt-driven audio on the 32X when SEGA went to all the trouble of giving them DMA-driven PWM. Just set channel 1 of the DMA in either SH2 and let it rip! If anyone is contemplating 32X homebrew with audio, I released an example of double-buffered DMA-driven PWM for the 32X over at SpritesMind. If folks are interested, I could post that here as well.
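To put the quoted overhead in numbers, a quick back-of-the-envelope check (assuming a nominal 22,050 Hz sample rate for "22kHz"):

```python
# One interrupt per three audio samples, as described above.
SAMPLE_RATE_HZ = 22_050   # assumed nominal "22kHz" rate
SAMPLES_PER_IRQ = 3

irqs_per_second = SAMPLE_RATE_HZ / SAMPLES_PER_IRQ
print(irqs_per_second)   # ~7350, i.e. "over 7000 interrupts per second"
```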
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00864.warc.gz
|
CC-MAIN-2023-40
| 577
| 1
|
https://serverfault.com/questions/152733/creating-swap-files-faster
|
code
|
You didn't indicate what method you were trying to avoid.
Traditionally, you would issue a dd command to pump out a zeroed file of the appropriate size, then run mkswap, add the entry to /etc/fstab, and then swapon to activate it. I've attached a rather hastily-written example shell script that I'm sure has errors (it's late where I'm at and the fstab entry is far from perfect).
    # --- allocate 10 GiB of swap space as 10 separate 1 GiB files
    # --- that are brought online sequentially during processing
    for swpidx in 01 02 03 04 05 06 07 08 09 10
    do
        dd if=/dev/zero of=/swapfile.$swpidx bs=16384 count=65536   # 16384 * 65536 bytes = 1 GiB
        mkswap /swapfile.$swpidx
        echo "/swapfile.$swpidx swap swap defaults 0 0" >> /etc/fstab
        swapon /swapfile.$swpidx
    done
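As a sanity check on the sizes involved (assuming the intended block size is 16384 bytes, i.e. 16 KiB):

```python
# Each dd invocation writes bs * count bytes; with 16 KiB blocks and
# 65536 blocks per file, each swap file is exactly 1 GiB.
BS = 16_384       # bytes per block (assumed intended value)
COUNT = 65_536    # blocks per file
FILES = 10

bytes_per_file = BS * COUNT
total_gib = FILES * bytes_per_file / 2**30
print(bytes_per_file == 2**30, total_gib)   # True 10.0
```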
However, it sounds like you are trying to avoid this method. The fastest solution that I could provide would be to use a swap partition, which does not require the zero-out process, and can be brought online in minutes. If your instance is running LVM and you have an existing volume group that you could carve a partition out of, that would work just as well, and the allocation could be completed in just a few minutes.
I think I should mention that carving out a swap space of this size is a bit unusual, even for a server; and I only say that because most servers have several gigs of RAM attached when dealing with programs/data of that size. Not to pry or anything, but are you really needing that much swap space?
Another thing you may wish to consider is re-tuning your workload, rather than trying to dynamically allocate swap space. While it's great to have that much "on-demand", as you yourself pointed out, it will quickly become a bottleneck due to the slow I/O throughput on your server instance. By the time you exhaust your memory and you're essentially "living in swap", you'll find that the 20Mbyte/sec transfer rate turns your instance into a 386SX.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151866.98/warc/CC-MAIN-20210725205752-20210725235752-00140.warc.gz
|
CC-MAIN-2021-31
| 1,855
| 15
|
https://msdn.microsoft.com/en-us/library/microsoft.sqlserver.management.smo.synonymevents_methods.aspx
|
code
|
Assembly: Microsoft.SqlServer.Smo (in Microsoft.SqlServer.Smo.dll)
Returns the currently selected event notifications.
Starts receiving events.
Stops receiving events.
Specifies the synonym events to receive.
Specifies the synonym events to receive and the event handler that handles the events.
Clears all event settings, and removes all event handlers.
Clears the specified event settings, and removes all event handlers.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607325.40/warc/CC-MAIN-20170523025728-20170523045728-00194.warc.gz
|
CC-MAIN-2017-22
| 423
| 8
|
https://www.anoopcnair.com/windows-10-21h1-upgrade-using-sccm-task-sequence/
|
code
|
In this post, we will see the steps to perform the Windows 10 21H1 upgrade using an SCCM in-place upgrade task sequence. There are various Windows 10 deployment scenarios available to install the latest version of Windows 10 21H1. To deploy the latest operating system successfully and choose among these scenarios, it's important to understand the capabilities and limitations.
If you are looking for deploying a new device, or wipe an existing device and deploy with a fresh image (Bare metal) – Deploy Windows 10 21H1 Using SCCM Task Sequence | ConfigMgr | Step by Step Guide
The simplest path to upgrade PCs currently running Windows 7, Windows 8, or Windows 8.1 to Windows 10 is through an in-place upgrade to automate the process completely with the SCCM task sequence.
In-place upgrade, which provides a simple, automated process that leverages the Windows setup process to upgrade from an earlier version of Windows automatically. This process automatically migrates existing data, settings, drivers, and applications.
Microsoft has built in extremely robust fallback options. If something goes wrong, the in-place upgrade can easily revert to the previous version by going back to an earlier build. The process can be automated and handled remotely with deployment tools.
Starting in Configuration Manager version 2103, you can also use a feature update without an OS upgrade package. More Deploy Windows 10 Feature Update Using SCCM Task Sequence | ConfigMgr
Prerequisites – Windows 10 21H1 Upgrade
Make sure that your device has enough space. Your device requires at least 16 GB of free disk space to upgrade a 32-bit OS to Windows 10, or 20 GB for a 64-bit OS.
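The free-space prerequisite can be verified up front with a short script. This is only an illustrative sketch, not part of the article's procedure; the drive path and threshold are parameters you would adjust:

```python
import shutil

REQUIRED_GB_64BIT = 20   # free space needed for a 64-bit upgrade, per the prerequisite above
REQUIRED_GB_32BIT = 16   # free space needed for a 32-bit upgrade

def has_enough_space(path="C:\\", required_gb=REQUIRED_GB_64BIT):
    # shutil.disk_usage reports total/used/free in bytes for the given path.
    free_gb = shutil.disk_usage(path).free / 10**9
    return free_gb >= required_gb
```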
Supported Deployment Tools
You will be required to use Configuration Manager Version 2103 to manage Windows 10 version 21H1.
What are the upgrade paths available?
You can perform an upgrade to Windows 10 from Windows 7 or a later operating system. Migrating from one edition of Windows 10 to a different edition of the same release is also supported. Check out the Summary of the available Windows 10 upgrade paths.
Note – In-place upgrade from Windows 7, Windows 8.1, or Windows 10 semi-annual channel to Windows 10 LTSC is not supported.
Windows 10 Upgrade Phases – Overview
The upgrade process consists of four phases- Downlevel, SafeOS, First boot, and Second boot. The computer will reboot once between each phase in a successful Windows 10 upgrade.
Why Considering an In-Place Upgrade?
The following tasks aren't compatible with the in-place upgrade:
- Using a captured or custom image for the upgrade.
- Changing disk partitions.
- Changing the system architecture (x86 to x64 bits).
- Modifying the base OS language.
- When we have dual or multi-boot systems.
- Changing the computer’s domain membership or updating the local Administrators group.
- Outdated device drivers.
- WinPE Offline operation and third-party disk encryption.
Add an Operating System Upgrade Package
You need to import the complete Windows 10 installation media for creating upgrade packages. We will use this upgrade package to upgrade an existing Windows to Windows 10 21H1.
Launch Configuration Manager Console, Go to Software Library > Operating Systems > Operating System Upgrade Packages.
Right-click Operating System Upgrade Packages and select Add Operating System Upgrade Packages. (you can create a custom folder for selection)
In Data Source, click Browse and specify the network shared path to the root folder where you extracted the source of an ISO file. Select the option to Extract a specific image index from the specified WIM file. Then select the Image index from the drop-down list. Click Next.
You can now specify to automatically import a single index rather than all image indexes in the file. Using this option results in a smaller image file.
In the General tab, provide information for the upgrade package Name, Version, and Comment. Click Next.
Review the provided information, click Next to complete the wizard.
Please wait a moment while the import is in progress.
After successful completion, click Close to exit the wizard.
The new Operating System Upgrade Package now appears in the configuration manager console’s Operating System Upgrade Packages node.
This is a known issue that you might be experienced. Let’s understand why Windows 10 21H1 OS Version Appears wrong 10.0.19041.928 in SCCM Console Operating Systems Node.
Distribute Operating System Upgrade Packages
Let’s follow the steps to distribute the OS image to distribution points.
After importing the upgrade package, you must distribute the content to the distribution point. Right-click on Upgrade Packages and select Distribute Content.
Review the selected content for distribution. Click Next.
Add the distribution point or distribution point groups to distribute the content. Review the selected distribution points, and groups. Click Next.
On the Summary page, review the settings. Click Next.
Click Close to complete the Distribute Content wizard.
You can monitor the content status under the Content Status node, which displays the packages. Yellow means distribution is still in progress; green means the content was distributed successfully.
Create an In-place upgrade task sequence to upgrade an OS
In the Configuration Manager console, Go to the Software Library workspace, expand Operating Systems, right-click Task Sequences and select Create Task Sequence.
Select Upgrade an operating system from an upgrade package, and then select Next.
On the Task Sequence Information page, specify the following settings and click Next.
- Task sequence name: Specify a name that identifies the task sequence
- Description: Optionally specify a description
- Select Run as high performance power plan check box
On the Upgrade the Windows Operating System page, specify the following settings and click Next.
- Upgrade package: Specify the upgrade package that contains the OS upgrade source files. Click on the Browse option to select the source file. Verify that you’ve selected the correct upgrade package by looking at the information in the Properties pane.
- Edition index: If multiple OS edition indexes are available in the package, select the desired edition index. By default, the wizard selects the first index if you already extracted the index while adding the upgrade OS Package.
- Product key: Specify the Windows product key for the OS to install.
On the Include Updates page, specify whether to install required, all, or no software updates. Then select Next.
If you specify to install software updates, Configuration Manager installs only those updates targeted to the collections of which the destination computer is a member.
On the Install Applications page, specify the applications to install on the destination computer, and then select Next.
If you select more than one application, also specify whether the task sequence should continue if the installation of a specific application fails. I’m leaving it default. You can also add it later to the task sequence if needed.
Review the task sequence details, and click Next.
After completion successfully, Click Close to complete the wizard.
The new Windows 10 21H1 upgrade task sequence now appears in the Task Sequences node of the Configuration Manager console. You’ve finished creating an In-place upgrade task sequence.
Edit Upgrade Task Sequence
Use the following procedure to modify an existing upgrade task sequence
Under Software Library > Operating Systems > Task Sequences. Right-click on the task sequence and select Edit.
An In-place upgrade task sequence will give you more granular control to –
- Perform pre-deployment checks
- Manage drive encryption state
- Uninstall known problematic drivers and apps
- Upgrade the Operating System
- Install additional drivers and apps
- Manage drive encryption state
Review other settings added in Task Sequence, you have made any changes here click Apply and OK.
Deploy Windows 10 Upgrade Task Sequence
Use the following procedure to deploy a task sequence to the computers in a collection.
In the Task Sequence list, select the task sequence that you have created, Right-click and select Deploy.
On the General, click Browse to select your device collection where you wish to perform the deployment.
You can manage the behavior for high-risk task sequence deployments. Learn more about How to Configure Collection Size Limits for Task Sequence Deployment Settings | Configuration Manager | SCCM
On the Deployment Settings, select the Purpose of the deployment and click Next.
Available – The task sequence appears in Software Center, and the process starts only when a user initiates it.
Required – Configuration Manager automatically runs the task sequence according to the configured schedule. If the task sequence isn't hidden, a user can still track its deployment status.
For an OS upgrade deployment, the Make available to the following setting is preset to Only Configuration Manager clients.
On the Scheduling tab, you can specify the schedule for this deployment. Click Next.
On the User Experience tab, leave the default selected options. Click Next.
On the Alerts page, leave it default. Click Next.
On the Distribution Points page, you can specify how clients interact with the DPs to retrieve content from reference packages. Click Next.
To understand the available options in the Distribution Points tab during task sequence deployment – SCCM Task Sequence Available Deployment Options in Distribution Points Tab | ConfigMgr
Review the selected settings and click Next.
The Deployment targeted successfully collection. Click Close to exit the wizard.
Results – Windows 10 In-Place Upgrade
To run the in-place upgrade task sequence on the computer. Launch the Software Center, select the Upgrade Task Sequence deployment, and then click Install.
Important – If multiple users are signed into the device, package and task sequence deployments may not appear in Software Center.
Confirm you want to upgrade the operating system on this computer by clicking Install again.
To provide maximum information to your end-users about task sequences deployment in Software Center. Check out this article to create a Custom Software Center User Notification for SCCM Task Sequence Deployment | ConfigMgr
Once the upgrade process starts, the deployment will take around 60-90 minutes, depending on the environment. Allow the upgrade task sequence to perform an automated upgrade. The target computer applies the in-place upgrade along with the steps you added.
After the task sequence completes, the computer will be fully upgraded to Windows 10 Version 21H1.
How to Find the Latest Windows 10 Version Number? Windows 10 21H1 Version Number | Build Number | Best Easiest Way to Find out Build Numbers
If the task sequence deployment has not appeared in Software Center, open a Command Prompt and run the following command – control smscfgrc. On the Actions tab, select Machine Policy Retrieval & Evaluation Cycle, click Run Now, and then click OK in the popup dialog box that appears.
Starting in Configuration Manager version 2103, if the task sequence fails because the client doesn’t meet the requirements configured in the Check readiness step, the user can now see more details about the failed prerequisites What’s New Improvement with SCCM Task Sequence Check Readiness Step | ConfigMgr
Let’s check the Windows 10 Deployment Upgrade Process Logs to troubleshoot Windows 10 upgrades issues – Windows 10 Deployment Upgrade Process Logs for SCCM Admins
During In-Place Upgrade Task Sequence, if you want to remove or uninstall modern UWP apps, one of the best options is to use Microsoft store for business to sync all inbuilt apps to remove and run and uninstall deployment – SCCM Sync with MSfB Microsoft Store for Business | ConfigMgr
- Perform an in-place upgrade to Windows 10 using Configuration Manager
- Easiest Option to Upgrade to Latest Version of Windows 10 21H1 | No Need to Download 21H1 | Best Upgrade Option
- Deploy Windows 10 21H1 Using SCCM Task Sequence | ConfigMgr | Step by Step Guide
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476396.49/warc/CC-MAIN-20240303142747-20240303172747-00745.warc.gz
|
CC-MAIN-2024-10
| 12,339
| 107
|
https://developer.blender.org/T52893
|
code
|
Operating system and graphics card
NVidia GeForce GTX 1060
MSi Dominator GT62VR laptop
Broken: 2.78a, 2.79, blender-2.79.0-git.a8f11f5-windows64
When performing a boolean operation, sometimes it drops faces, creates vertices where there should be none, and generally mangles the output mesh. I was able to reproduce one of the problems with a very simple test scene. In this test scene, there is a box cut out of a triangular box object. The cutout looks successful as far as vertices go, but the inside face is not filled so it creates non-manifold edges.
Exact steps for others to reproduce the error
Based on an (as simple as possible) attached .blend file with a minimum number of steps:
The object "InfillNegative" has a Boolean Difference modifier for the hidden "ToeCutout" object. There are two missing faces where the "ToeCutout" object has been subtracted, leaving a non-manifold "InfillNegative" object.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313536.31/warc/CC-MAIN-20190818002820-20190818024820-00480.warc.gz
|
CC-MAIN-2019-35
| 955
| 9
|
https://navajyothicollege.org/language-lab/
|
code
|
Language training at Navajyothi College is facilitated with the help of a well-equipped language lab. Linguistic enculturation of the future generation is a major concern of the lab; hence, the lab is equipped with modern language-training facilities. The modernized lab is a networked system with several computer nodes attached to a server unit, and it accommodates 40 students at a time. The software module is both self-taught and teacher-controlled. Phonetics, grammar, composition, word power, etc. are the major components of the language lab system.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817670.11/warc/CC-MAIN-20240420153103-20240420183103-00050.warc.gz
|
CC-MAIN-2024-18
| 578
| 1
|
https://stackoverflow.com/questions/14353119/how-to-get-messagesource-in-src-groovy-correctly/20434292
|
code
|
I need to get the messageSource in a class in src\groovy. This class is used in UrlMappings.groovy, and at the stage I'm using this class the application is not completely started yet.
Currently I'm using the following and it works:

    MessageSource messageSource = ApplicationHolder.application.mainContext.getBean('messageSource')
    String message = messageSource.getMessage("code", null, "default", locale)
ApplicationHolder is deprecated; is there a way to achieve the same goal without using it?
//I'm using Grails 2.0.1
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00493.warc.gz
|
CC-MAIN-2023-50
| 516
| 8
|
https://forum.facepunch.com/t/cs-refuses-to-mount-to-my-server/214373
|
code
|
Alright, so CS:S refuses to mount to my ttt server. I have no idea what to do.
If anyone can help, this is what is in my mount.cfg file.
// Use this file to mount additional paths to the filesystem
// DO NOT add a slash to the end of the filename
// "tf" "C:\mytf2server\tf"
my cstrike folder is also in the correct place which is specified.
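For reference, a complete mount.cfg in this format might look like the following (the path is a placeholder; substitute the actual location of your cstrike folder):

```
"mountcfg"
{
	// DO NOT add a slash to the end of the filename
	"cstrike"	"C:\servers\css\cstrike"
}
```

After editing, a server restart is typically required for mount changes to take effect.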
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153971.20/warc/CC-MAIN-20210730154005-20210730184005-00329.warc.gz
|
CC-MAIN-2021-31
| 349
| 6
|
https://careers.gijobs.com/virtual-usa/unix-midrange-engineer/ADDF07329B4B4043A7FF51B53CD60FFC/job/?vs=28
|
code
|
Combined Insurance Unix Midrange Engineer in United States
Position Summary: The successful candidate for this engineering position will be a member of a core team focused on the design and 3rdlevel support of Chubb’s UNIX, storage, and backup and recovery systems.
Primary Job Responsibilities:
- Provide 3rd-level support for: UNIX systems including hardware and systems software; storage systems; Spectrum Protect (formerly TSM)
- Provide technical leadership and hands-on resolution to critical and severity-1 problems (this includes business and non-business hours)
- Continually review and enhance systems monitoring and alerting
- Partner with PMO and technical teams on project execution
- Partner with application and infrastructure teams to identify platform requirements
- Develop solutions that meet business requirements, and execute the deployment
- Ensure new technology is introduced and deployed on plan and on budget; remain accountable for technology introduction and proper turnover, including documentation for administrators and operations
- Identify opportunities to maximize the efficiency of the existing platforms or identify platform alternatives, and opportunities to reduce costs
- Provide hands-on oversight of our disaster recovery solutions, ensuring there are no gaps

Knowledge, Skills and Competencies:
- Five years (or equivalent knowledge) of hands-on technical UNIX support including performance tuning; AIX, Solaris, SLES, and RHEL preferred
- Five years (or equivalent knowledge) of hands-on support of enterprise storage arrays; XIV, XtremIO, V7000, Isilon, NetApp preferred
- Hands-on knowledge of IBM Spectrum Protect (formerly Tivoli Storage Manager) preferred
- Hands-on knowledge of Spectrum Scale (formerly General Parallel File System), including performance tuning
- Hands-on experience with vSphere and vCenter
- Hands-on experience with Oracle's SuperCluster is a plus
- Competent in scripting (Bash, Perl, Python)
- Good understanding of networking concepts
- Excellent problem-determination skills, and the ability to debug complex cross-systems problems and document root cause, including remediation and detection
- Ability to work independently and on a team with colleagues across the globe
- Excellent written and verbal communication skills
- Self-driven with the ability to manage workload without direct supervision
Job: *Information Technology
Title: Unix Midrange Engineer
Requisition ID: 316404
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513760.4/warc/CC-MAIN-20181021052235-20181021073735-00317.warc.gz
|
CC-MAIN-2018-43
| 2,461
| 7
|
http://www2.edc.org/WISE/WISE_webpage/WISE_Sample.html
|
code
|
This project is funded by US
Dept. of Education, FIPSE grant P116B70787
The tutor window consists of 4 sections (see Figure 7):
Figure 7 – The tutor window where the user can start working on the selected scenario problem.
The student starts the diagnose-and-fix session by using the bottom half of the Tutor Control section. There are four main actions that the student can take: investigate (indicated by the detective icon), inspect (indicated by the magnifying glass icon), fix (the wrench icon), and simulate, i.e., run the machine to see whether the problem is fixed (the gear icon).
The student starts by clicking on the “detective” icon and a choice of machine parts appears (see figure 8). Figure 8 is a detailed blowup of the Tutor control section of the window. Since the problem is paper build up in the pickup area, the student selects to investigate the pickup area.
After the user selects to investigate the Pickup Area, a picture of the pickup area appears in the lower left of the window showing the paper buildup. The user/tutor dialog window also shows the action “[Student] I want to inspect the PickupArea” (see Figure 9).
Figure 8 – Detailed section of the Tutor Panel where the user has selected to investigate the PickupArea
Figure 9 – Tutor shows the buildup in the pickup area.
The student then proceeds to the repair action (after clicking the wrench icon) “Remove the paper scrap from the pickup area” and the tutor responds with the confirmation “Action performed” and a picture of the clean area (see Figure 10). The student then runs the simulation to check whether the problem is fixed. The tutor answers that the problem has been fixed (see Figure 11).
Figure 10 – The student has cleaned up the pickup area.
The example so far is a simple problem. The tutor can give hints and point out incorrect steps. Figure 12 shows a detail of the Dialog area. It shows that the student selected to investigate Feeder 1 and the tutor told the student that inspecting Feeder 1 is not consistent with the symptom (i.e., paper buildup in the Pickup area). A similar response is given when the student wants to investigate the Control Panel. After these two mistakes, the student asks the tutor for help by clicking the “What next?” button. The tutor suggests that the student investigate the pickup area.
Figure 11 – The student checks his/her repair action by running the simulation. He/she has fixed the problem.
Figure 12 – A student/tutor dialog in which the tutor tells the student about his/her wrong actions and, when the student asks “What Next?”, suggests the correct solution path.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699201808/warc/CC-MAIN-20130516101321-00047-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 2,703
| 15
|
https://en.bitcoin.it/wiki/ASIC?ref=blog.blockstream.com
|
code
|
An application-specific integrated circuit (abbreviated as ASIC) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. In Bitcoin mining hardware, ASICs were the next step of development after CPUs, GPUs and FPGAs. Capable of easily outperforming the aforementioned platforms for Bitcoin mining in both speed and efficiency, all Bitcoin mining hardware that is practical in use will make use of one or more Bitcoin (SHA256d) ASICs.
Note that Bitcoin ASIC chips generally can only be used for Bitcoin mining. While there are rare exceptions - for example chips that mine both Bitcoin and scrypt - this is often because the chip package effectively has two ASICs: one for Bitcoin and one for scrypt.
The ASIC chip of choice determines, in large part, the cost and efficiency of a given miner, as ASIC development and manufacture are very expensive processes, and the ASIC chips themselves are often the components that require the most power on a Bitcoin miner.
While there are many Bitcoin mining hardware manufacturers, some of these should be seen as systems integrators - using the ASIC chips manufactured by other parties, and combining them with other electronic components on a board to form the Bitcoin mining hardware.
Bitcoin ASIC development pace
The pace at which Bitcoin ASICs have been developed, for a previously non-existent market, has seen some academic interest. One paper titled "Bitcoin and The Age of Bespoke Silicon" notes:
We examined the Bitcoin hardware movement, which led to the development of customized silicon ASICs without the support of any major company. The users self-organized and self-financed the hardware and software development, bore the risks and fiduciary issues, evaluated business plans, and braved the task of developing expensive chips on extremely low budgets. This is unheard of in modern times, where last-generation chip efforts are said to cost $100 million or more—Michael Bedford Taylor, University of California, http://cseweb.ucsd.edu/~mbtaylor/papers/bitcoin_taylor_cases_2013.pdf
The Bitcoin and Cryptocurrency Technologies online course by Princeton University notes:
The amazing thing about Bitcoin ASICs is that, as hard as they were to design, analysts who have looked at this have said this may be the fastest turnaround time - essentially in the history of integrated circuits - for specifying a problem, which is mining Bitcoins, and turning it around to have a working chip in people's hands.—Joseph Bonneau, Postdoctoral research associate, Princeton University, https://www.youtube.com/watch?v=jXerV3f5jN8#t=26m40s
A timeline overview for CoinTerra's Goldstrike 1 chip also shows this as 8 months between founding the company and shipping a product.
Bitcoin ASIC specifications
A Bitcoin ASIC's specification could be seen as having a certain hash rate (e.g. Gh/s) at a certain efficiency (e.g. J/Gh). While cost is another factor, this is often a relatively fixed factor as the minimum cost of a chip will be determined by the fabrication process, while the maximum cost will be determined by market forces, which are outside of post-fabrication technological control.
When reading the specifications for ASICs on this page, note that they should be interpreted as indicative rather than authoritative. Many of the figures will have come from the manufacturers, who will present their technology in the best light: be that high hash rates that in practice may not be very efficient and require additional cooling, or very high efficiency at the cost of hash rate, risking being slow in the race against difficulty adjustments.
Complicating the matter further is that Bitcoin ASICs can often be made to cater to both ends of the spectrum by varying the clock frequency and/or the power provided to the chip (often via a regulated voltage supply). As such, chips can not be directly compared.
Comparing Bitcoin ASICs
Two proposals have been made in the past for attempts at comparing ASICs - Gh/mm² and η-factor.
Gh/mm² is a simple measure of the number of Gigahashes per second of the chip, divided by its die area (the area of the actual silicon). This measure, however, does not take into account the node size, which affects how many logical cells can fit in a given area.
As a result, η-factor was suggested at the BitcoinTalk Forums. It attempts to take the node size into account by multiplying the Gh/mm² value by half the node size, three times.
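Both metrics are simple arithmetic. Here is an illustrative sketch; the chip figures below are hypothetical, and the η-factor formula follows the forum description above (Gh/mm² multiplied by half the node size, three times):

```javascript
// hashRate in Gh/s, dieArea in mm², nodeSize in nm. Illustrative only.
function ghPerMm2(hashRate, dieArea) {
  return hashRate / dieArea;
}

function etaFactor(hashRate, dieArea, nodeSize) {
  // "multiplying the Gh/mm² value by half the node size, three times"
  const halfNode = nodeSize / 2;
  return ghPerMm2(hashRate, dieArea) * halfNode * halfNode * halfNode;
}

// Hypothetical chip: 2 Gh/s on a 16 mm² die at a 55 nm node.
console.log(ghPerMm2(2, 16));      // 0.125 Gh/mm²
console.log(etaFactor(2, 16, 55)); // 0.125 * 27.5³ ≈ 2599.6
```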
Although the merit of these approaches can be debated, ultimately these figures are not as important as the ones that detail what is required to make an ASIC work. If an ASIC requires a highly stable power supply, then the power supply circuitry on a board may be more expensive than for another ASIC. If the ASIC has a complex communications protocol, additional relatively expensive components may be required. If an ASIC's die is large, fewer chips (rectangular slices) can be obtained from a (circular) wafer, defects affect its yield disproportionately, and cooling solutions are generally more complex compared to smaller-die chips, which in turn have other overhead. Chips with a BGA design are less simple to integrate than a QFN, requiring more expensive (inspection and testing) equipment.
Nevertheless, for historic purposes they are included in listings here where sufficient information is available.
Number of cores
One other oft-mentioned statistic for an ASIC chip is the number of cores or hashing engines on the chip. While this number is directly related to performance, it is not necessarily a comparative measure.
Bitmain Technologies' BM1382 calculates 63 hashes per clock cycle (Hz), while their more efficient BM1384 calculates 55 hashes per clock cycle. Similarly, while these hashes-per-clock figures are spot-on for the claims regarding the number of cores, BitFury's BF756C55 is claimed to have 756 cores but yields around 11.6 hashes per clock cycle. This is because references to cores sometimes mean different things, and certain designs result in a less straightforward calculation.
Nevertheless, when a designer makes claims regarding hash rates at certain clock frequencies, one can determine whether (a) there is a straightforward calculation and (b) the designer is being imprecise (rounding values) or even intentionally dishonest, as the ratio between clock cycles and hash rate should remain the same.
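That sanity check can be sketched in a few lines; the operating points below are hypothetical examples for a 63-hashes-per-clock design (only the 63 figure comes from the text):

```javascript
// If a chip really computes a fixed number of hashes per clock cycle, the
// ratio hashRate / clockFrequency must be identical at every operating point.
function hashesPerClock(hashRateHs, clockHz) {
  return hashRateHs / clockHz;
}

// Hypothetical advertised operating points for one chip:
const a = hashesPerClock(15.75e9, 250e6); // 15.75 GH/s at 250 MHz → 63
const b = hashesPerClock(18.9e9, 300e6);  // 18.9 GH/s at 300 MHz → 63
console.log(a === b); // consistent claims; a mismatch suggests rounding or worse
```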
- video about the technological aspects of custom processor design
- article about the economical aspects of custom processor design
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100448.65/warc/CC-MAIN-20231202172159-20231202202159-00381.warc.gz
|
CC-MAIN-2023-50
| 6,571
| 26
|
https://resource.dopus.com/t/inline-rename-help-please/635
|
code
|
According to 22.214.171.124 changelog:
If filename extensions are hidden in Lister, inline rename no longer fails to select the text after a '.' in the filename (e.g., test.one.txt -> .txt would be hidden, in inline rename, previously only 'test' would have been selected automatically).
How do I get this to work? I still see test and not test.one.txt when I press F2. Do I have to set it in Prefs somewhere?
Thanks in advance
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647810.28/warc/CC-MAIN-20230601110845-20230601140845-00416.warc.gz
|
CC-MAIN-2023-23
| 433
| 4
|
https://duncan.land/posts/recursion
|
code
|
December 12, 2022
6 min read
Recursion, in its essence, is a method of problem-solving in which the solution is expressed in terms of itself. A function that employs recursion calls itself repeatedly until it reaches a specific base case, at which point the recursion ceases and the function returns a result.
Recursive functions can be very useful in a variety of situations. They are a way of defining a function in terms of itself, which can be a powerful tool for solving certain types of problems.
One example of where recursive functions can be useful is in computer science. For example, recursive functions can be used to traverse tree data structures, allowing for an elegant and efficient solution to the problem of visiting and processing each node in the tree.
In this context, a recursive function would call itself on each of the child nodes of the current node, allowing it to visit and process each node in the tree in turn. This approach can be much more efficient and elegant than using a loop to traverse the tree.
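As a minimal sketch of that idea (the node shape, a value plus a children array, is assumed for illustration):

```javascript
// Recursively visit every node of a tree, depth-first.
function traverse(node, visit) {
  visit(node.value);                   // process the current node
  for (const child of node.children) { // then recurse into each child node
    traverse(child, visit);
  }
}

const tree = {
  value: "root",
  children: [
    { value: "a", children: [{ value: "a1", children: [] }] },
    { value: "b", children: [] },
  ],
};

const seen = [];
traverse(tree, (v) => seen.push(v));
console.log(seen); // ["root", "a", "a1", "b"]
```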
There are more practical applications of recursive functions, as we will find out. Let's look at some basic examples of recursive functions.
Factorial of an Integer
To calculate the factorial of a number, you simply multiply that number by every positive integer less than it. For example, to calculate the factorial of 5, you would do the following:
5! = 5 x 4 x 3 x 2 x 1 = 120
Let's write a simple recursive function that will calculate the factorial of a number
(Figure: execution trace of the factorial function.)
The factorial function calculates the factorial of a given number by calling itself repeatedly with the input number minus one each time. This continues until the input number is 0, at which point the function returns 1 (the base case) and the recursion stops.
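The function described above (not reproduced in this extract) can be sketched as:

```javascript
// Recursive factorial: n! = n * (n-1)!, with 0! = 1 as the base case.
function factorial(n) {
  if (n === 0) return 1;       // base case stops the recursion
  return n * factorial(n - 1); // recursive case
}

console.log(factorial(5)); // 120
```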
Fibonacci number at a given Index
A Fibonacci number is a number in a sequence of numbers where each number is the sum of the previous two numbers, starting with 0 and 1. The nth Fibonacci number is calculated by adding the (n-1)th and (n-2)th numbers in the sequence.
For example, to calculate the 10th Fibonacci number, you would add the 9th and 8th numbers in the sequence (34 and 55) to get 89.
(Figure: execution trace of the fibonacci function.)
The fibonacci function calculates the nth number in the Fibonacci sequence by calling itself repeatedly with the input number minus one and minus two each time. This continues until the input number is 0 or 1, at which point the function returns 1 (the base case) and the recursion stops. The sum of the previous two Fibonacci numbers is then returned, allowing each recursive call to return to its caller until the original call to fibonacci(n) returns the nth Fibonacci number.
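A sketch matching that description (with both base cases returning 1, so the sequence here runs 1, 1, 2, 3, 5, …):

```javascript
// Naive recursive Fibonacci; exponential time, but it mirrors the definition.
function fibonacci(n) {
  if (n === 0 || n === 1) return 1;           // base cases
  return fibonacci(n - 1) + fibonacci(n - 2); // sum of the two previous numbers
}

console.log(fibonacci(10)); // 89
```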
Sum of all elements in an Array
Given an array of integers, you can use recursion to calculate the sum of all the elements in the array.
The function calculates the sum of all the numbers in a given array by calling itself repeatedly with the input array minus the first element each time. This continues until the input array is empty, at which point the function returns 0 (the base case) and the recursion stops. The sum of the current element and the sum of the rest is then returned, allowing each recursive call to return to its caller until the original call to sum(arr) returns the sum of all the numbers in the input array.
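That is a one-liner per case:

```javascript
// Recursive array sum: first element plus the sum of the rest.
function sum(arr) {
  if (arr.length === 0) return 0; // base case: empty array
  return arr[0] + sum(arr.slice(1));
}

console.log(sum([1, 2, 3, 4])); // 10
```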
Recursion in React Components
Recursion can be used in React components in cases where a component needs to render a nested structure of data, such as a list of comments on a blog post or a social media site like Reddit.
Let's consider a React component that renders comments and their replies like the ones above.
The CommentThread component receives a comments prop that represents a list of comments and their replies. The component renders a list, with each list item representing a single comment. If a comment has replies, the component recursively renders a new CommentThread with the comment's replies as its comments prop. This continues until all comments and replies have been rendered.
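The component itself is not reproduced in this extract; here is a framework-free sketch of the same recursive pattern, rendering a thread to an indented string (the data shape, `text` plus an optional `replies` array, is assumed):

```javascript
// Recursively render a comment thread, indenting one level per reply depth.
function renderComments(comments, depth = 0) {
  return comments
    .map((c) => {
      const line = "  ".repeat(depth) + "- " + c.text;
      const replies = c.replies ? renderComments(c.replies, depth + 1) : "";
      return replies ? line + "\n" + replies : line;
    })
    .join("\n");
}

const thread = [
  { text: "First!", replies: [{ text: "Welcome." }] },
  { text: "Nice post." },
];
console.log(renderComments(thread));
// - First!
//   - Welcome.
// - Nice post.
```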
Recursion can also be computationally expensive and may not always be the most efficient approach. Therefore, it's important to carefully consider whether recursion is the right approach for a particular problem before implementing it in your code.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474661.10/warc/CC-MAIN-20240226162136-20240226192136-00009.warc.gz
|
CC-MAIN-2024-10
| 4,362
| 34
|
https://discourse.slicer.org/t/re-sampling-vs-harden-transform-for-a-dose-volume/4963
|
code
|
Operating system: Windows 10
Slicer version: 4.8.1 Stable
I have a general question about how Slicer handles and applies deformable registration files that are exported from TPS software with respect to grids.
I would like to apply a deformation vector field (DVF) to a dose volume. My question is,
If I apply the DVF to a dose volume within Slicer’s Transform module (i.e., Apply Transform -> Harden Transform) what happens to the grid spacing of the warped dose volume?
Is it still uniform (through some internal re-sampling?) or is the grid of the dose volume ‘smeared’ out in a warped non-uniform fashion?
For reference, here is the transform information of my Deformable Registration Grid
As an extra question 1)
What is the difference between using the Transforms module or a Re-sampling module when applying a DVF to a dose volume?
Extra question 2)
What is the difference between applying a ‘Displacement field’ transform and a ‘b-Spline’ transform to a dose volume?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522741.25/warc/CC-MAIN-20220519010618-20220519040618-00428.warc.gz
|
CC-MAIN-2022-21
| 988
| 11
|
http://gamedev.stackexchange.com/questions/50591/perspective-camera-for-2d
|
code
|
I'm very new to LibGDX and I'm trying to use DecalBatch with PerspectiveCamera, simply to have a Z-coordinate for my sprites, as SpriteBatch does not offer that. However, I don't know how to calculate the Z-coordinate for the PerspectiveCamera to get a "pixel-perfect" 2D projection. How do I do that?
What "pixel perfect" means depends on the size of your textures divided by the mesh's area. First, it's important that you keep the aspect ratio between your mesh and textures: if your texture is 64x32, your mesh/quad/Decal will be 2:1 (for example, 2x1).
Now, closer to what you desire, the most important part is to have a consistent and analogous unit of measurement. Let's go back to the previous example, but make the Decal 64x32 (we'll make the 3D unit equivalent to a "pixel"; notice the quotes, as it's not really the case, just an illusion to help us). Next, you have to calculate the depth/distance from the general decal plane (assuming your decals will be coplanar; if not, you can't achieve "pixel perfect" on all of them; well, you can, but decals with the same resolution should be coplanar) using the inverse of the equations exposed here: http://docs.unity3d.com/Documentation/Manual/FrustumSizeAtDistance.html (calculating the inverse is left as an exercise for you, or you can always iterate to approximate).
With the distance calculated, you set the camera to look at the plane from that distance: supposing an XY plane, set z = thisDistance. You may need to toy around with the field of view depending on the resolution/aspect ratio, or you'll only achieve "pixel perfect" in a single dimension.
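The inverse of the linked frustum-size formula (frustumHeight = 2 · distance · tan(fov/2)) is straightforward; a sketch with hypothetical numbers:

```javascript
// Solve frustumHeight = 2 * distance * tan(fovDeg / 2) for distance.
// Units are your world units ("pixels" in the setup described above).
function distanceForFrustumHeight(frustumHeight, fovDeg) {
  const fovRad = (fovDeg * Math.PI) / 180;
  return frustumHeight / (2 * Math.tan(fovRad / 2));
}

// To fit a 480-unit-tall plane exactly in a 480-px-tall viewport
// with a 60° vertical field of view:
const z = distanceForFrustumHeight(480, 60);
console.log(z); // ≈ 415.69 — place the camera this far from the decal plane
```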
This is one area that I also found quite lacking. Instead of using
Technically, I created my own wrapper/subclass of
In terms of performance, try to maintain the sorting/ordering outside of the actual drawing (don't re-sort on each draw call, only when you add/remove a sprite).
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558066265.80/warc/CC-MAIN-20141017150106-00051-ip-10-16-133-185.ec2.internal.warc.gz
|
CC-MAIN-2014-42
| 1,884
| 7
|
http://www.immihelp.com/forum/showthread.php/96149-Permanent-Address-in-Passport-Changed-I-129-Form
|
code
|
Can you please let me know:
There is a change in my permanent address, so I provided the updated address as the permanent address on Form I-129. However, the address in the passport is no longer valid. Will there be any issues? Should the address in the passport be changed? Please guide.
Sorry for asking silly doubts, but your precious time and suggestions matters a lot.
Thank you very much in advance
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710006682/warc/CC-MAIN-20130516131326-00070-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 404
| 4
|
https://www.techrepublic.com/article/quick-tip-pick-from-hundreds-of-themes-in-microsofts-personalization-gallery/
|
code
|
If you were using the Windows operating system in the Windows 95 and Windows 98 time frame, then you remember the Microsoft Plus! and Microsoft Plus! 98 products. While these products contained a variety of add-ons, such as games and additional utilities, the primary feature in these packages was the set of desktop features that allowed you to change the look and feel of Windows by selecting various themes, which included wallpapers, color schemes, screen savers, and sounds.
Over the years, Microsoft has released a myriad of themes from their Microsoft Download page. And while there have been a lot of them, they were released sporadically and hidden among lots of other downloads. Fortunately, Microsoft recently modernized its delivery system, called it the Personalization Gallery, and created a centralized location for a huge collection of Windows Themes and Desktop Backgrounds, thus making it easy for you to customize your Windows 7 or Windows 8 system. Here’s how it works.
To get started, just point your browser to the Personalization Gallery. When you arrive, you’ll discover that there are separate areas for the Windows Themes and Desktop Backgrounds, as shown in Figure A. The Windows Themes packs are only designed for Windows 7 and Windows 8 while the Desktop Backgrounds can be used in all versions of Windows.
There are two separate areas-one for the Windows Themes and one for the Desktop Backgrounds.
When you click the See all themes link, you’ll see a page that displays a huge list of theme categories as well as the newest themes available, as shown in Figure B. Just select a category and you’ll see a host of themes. When you see one that interests you, just click the Details button to get more information.
There are plenty of categories to choose from.
As you’ll discover, the majority of the themes contain a number of desktop backgrounds as well as a window color. However others are more elaborate themes that include extras such as sounds or an RSS feed that automatically adds new backgrounds on a regular basis. If you have dual monitor setup, you’ll be interested in the panoramic themes.
For example, I selected the Aqua Dynamic theme in the From the community category and then clicked Details to access the page shown in Figure C. I discovered that this theme comes with 18 images and an RSS feed.
When you click the Details button, you’ll get more information about the theme.
After looking it over, I clicked the Download theme button, selected the Open button from Internet Explorer's prompt, and clicked the Open button in the Security Warning dialog box. Now, because this theme comes with an RSS feed, I encountered a prompt to subscribe to the RSS feed, as shown in Figure D.
If a theme contains an RSS feed, you'll be prompted to accept the automatic downloads.
After I clicked Download Attachments, the Aqua Dynamic theme was immediately added to the Personalization window, as shown in Figure E, and was all set to go.
Once downloaded, the theme will immediately be set for use in the Personalization tool.
All I had to do was close the Personalization window. I then exposed the new desktop backgrounds and the new window color.
If you select See all desktop backgrounds link, you’ll see a page that displays a list of background categories as well as the newest backgrounds available, as shown in Figure F.
Desktop backgrounds will work in all versions of Windows.
Just select a category and you’ll see a host of backgrounds. When you see one that interests you, just click it and it will instantly fill the browser screen. To download the background, just right click on the image and select the Set as background command, as shown in Figure G.
To install the background you have opened, just select the Set as background command from the context menu.
When you do, you’ll find that the background has immediately been applied to your desktop. Enjoy!
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338244.64/warc/CC-MAIN-20221007175237-20221007205237-00182.warc.gz
|
CC-MAIN-2022-40
| 3,930
| 19
|
http://www.bibsonomy.org/tag/graph
|
code
|
This page provides two large hyperlink graphs for public download. The graphs have been extracted from the 2012 and 2014 versions of the Common Crawl web corpora. The 2012 graph covers 3.5 billion web pages and 128 billion hyperlinks between these pages. To the best of our knowledge, it is the largest hyperlink graph that is available to the public outside companies such as Google, Yahoo, and Microsoft. The 2014 graph covers 1.7 billion web pages connected by 64 billion hyperlinks. Below we provide instructions on how to download the graphs as well as basic statistics about their topology. ·
In recent years there has been a growing public fascination with the complex "connectedness" of modern society. This connectedness is found in many incarnations: in the rapid growth of the Internet and the Web, in the ease with which global communication now takes place, and in the ability of news and information as well as epidemics and financial crises to spread around the world with surprising speed and intensity. These are phenomena that involve networks, incentives, and the aggregate behavior of groups of people; they are based on the links that connect us and the ways in which each of our decisions can have subtle consequences for the outcomes of everyone else.
Networks, Crowds, and Markets combines different scientific perspectives in its approach to understanding networks and behavior. Drawing on ideas from economics, sociology, computing and information science, and applied mathematics, it describes the emerging field of study that is growing at the interface of all these areas, addressing fundamental questions about how the social, economic, and technological worlds are connected.
The book is based on an inter-disciplinary course that we teach at Cornell. The book, like the course, is designed at the introductory undergraduate level with no formal prerequisites. To support deeper explorations, most of the chapters are supplemented with optional advanced sections. ·
aiSee automatically calculates a customizable layout of graphs specified in GDL (Graph Description Language). This layout is then displayed, and can be interactively explored, printed, and exported to various formats. ·
Philipp Heim, Jürgen Ziegler, and Steffen Lohmann. Proceedings of the International Workshop on Interacting with Multimedia Content in the Social Semantic Web IMC-SSW 2008, volume 417 of CEUR Workshop Proceedings, page 49--58. Aachen, (2008)
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507449153.0/warc/CC-MAIN-20141017005729-00380-ip-10-16-133-185.ec2.internal.warc.gz
|
CC-MAIN-2014-42
| 2,467
| 6
|
https://wiki.christophchamp.com/index.php?title=Metropolis-coupled_Markov_chain_Monte_Carlo
|
code
|
Metropolis-coupled Markov chain Monte Carlo
Metropolis-coupled Markov chain Monte Carlo (or MCMCMC or (MC)3) is a variant of Markov chain Monte Carlo in which multiple chains are run in parallel, each with a different "temperature" (Geyer, 1991). Only the information from the cold chain is recorded. Periodically, trees between chains may be swapped.
"Some of these chains are 'heated' by raising the posterior probability to a power β. For example, if f (ψ|X) is the posterior probability density distribution of the phylogenetic parameters, then a heated version of the posterior distribution is f (ψ|X)β. Here, β(0 < β < 1) is the heat value of the chain. Heating a Markov chain increases the acceptance probability of new states. A heated chain tends to accept more states than a cold chain, allowing a heated chain to more readily cross valleys in the landscapes of trees." (Altekar et al., 2004).
(MC)3 can be used to empirically determine the posterior probability distribution of trees, branch lengths, and substitution parameters.
- Temperature: a variable used in Metropolis-coupled Markov chain Monte Carlo. The temperature affects the likelihood of acceptance of a proposed tree and also affects the likelihood that two chains will accept a proposed tree swap.
- Only one chain is sampled
- The other chains are heated (i.e. they can take bigger steps)
- Chains can swap states
- Allows crossing of valleys
- This process of linking heated chains is called "Metropolis-coupling"
- The full-fledged Bayesian analysis is MCMCMC!
- The heated chains use powers of the likelihood ratio in their acceptance ratio
- We must only use the main chain to calculate posterior probabilities.
- In reality, all these factors involve very large numbers. It's not uncommon to throw away thousands of trees as part of the burn-in and calculate the posterior from millions of trees.
- Heated chains: usually 4-5.
- MrBayes by Huelsenbeck is the main program in current use
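The chain-swap step described above can be sketched in log space (a minimal illustration, not MrBayes' implementation; variable names are mine — each chain i targets f(ψ|X)^β_i, and a swap between chains i and j is accepted with probability min(1, [f_j^β_i · f_i^β_j] / [f_i^β_i · f_j^β_j])):

```javascript
// logPostI / logPostJ: cold-posterior log densities of the current states
// of chains i and j; betaI / betaJ: their heat values.
function swapAcceptProb(logPostI, logPostJ, betaI, betaJ) {
  // The ratio above simplifies to exp((betaI - betaJ) * (logPostJ - logPostI)).
  const logRatio = (betaI - betaJ) * (logPostJ - logPostI);
  return Math.min(1, Math.exp(logRatio));
}

// A cold chain (beta = 1) and a heated chain (beta = 0.5): if the heated
// chain currently holds the higher-posterior state, the swap is certain.
console.log(swapAcceptProb(-10, -5, 1.0, 0.5)); // 1
```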
- Altekar G, Dwarkadas S, Huelsenbeck JP, and Ronquist F (2004). Parallel Metropolis coupled Markov chain Monte Carlo for Bayesian phylogenetic inference. Bioinformatics.
- Geyer CJ (1991). Markov chain Monte Carlo maximum likelihood. In Keramidas (ed.), Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface. Fairfax Station: Interface Foundation, pp. 156-163.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992514.37/warc/CC-MAIN-20210513204127-20210513234127-00277.warc.gz
|
CC-MAIN-2021-21
| 2,354
| 18
|
https://quiz.ileska.fi/
|
code
|
This is a simple briefing app where you can create questions and answers for them in specific topics.
The application provides a list of topics and allows creating multiple-choice questions into those topics that are then answered by self and others.
- Total number of topics
- Total number of questions
- Total number of question answer options
- Total number of questions answered
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00338.warc.gz
|
CC-MAIN-2024-18
| 373
| 6
|
https://es.smartcat.com/marketplace/user/martin-staviar
|
code
|
My professional background is in Electrical Engineering / Cybernetics with a focus on medical devices. Besides these majors, I studied Business/Management. I have professional translation and localization experience in the software field...
Gengo, Freelance translator (Pro level), 2013 - Present
Virtus s. r. o., Translator, 2006 - Present
Translator in Virtus s. r. o., Project Manager, 2007 - 2012
Medical translation (plate and screw system for cranial bone fixation), 2016
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510100.47/warc/CC-MAIN-20230925215547-20230926005547-00235.warc.gz
|
CC-MAIN-2023-40
| 477
| 5
|
https://www-0.nuget.org/packages?q=Tags%3A%22roguelike%22
|
code
|
A .NET Standard class library providing map generation, path-finding, and field-of-view utilities frequently used in roguelikes or 2D tile based games. Inspired by libtcod
* Map and cell classes now have generic versions that are easier to inherit from.
* Weighted pool class...
FloodSpill is a fast multi-purpose flood-fill algorithm for C#. It lets you run a flood in two-dimensional space, configure it with your own conditions and callbacks and choose between FIFO, LIFO or priority-based order for processing positions. Includes possibility of performing scanline-fill for...
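FloodSpill itself is a C# library; the FIFO/LIFO option it describes can be illustrated with a minimal flood fill in JavaScript (names and grid are illustrative — the same frontier array acts as a queue for breadth-first order or a stack for depth-first order):

```javascript
// Fill the 4-connected region of cells matching the start cell's value.
function floodFill(grid, startX, startY, useFifo = true) {
  const target = grid[startY][startX];
  const frontier = [[startX, startY]];
  const seen = new Set([startX + "," + startY]);
  const filled = [];
  while (frontier.length > 0) {
    // FIFO (shift) gives breadth-first order; LIFO (pop) gives depth-first.
    const [x, y] = useFifo ? frontier.shift() : frontier.pop();
    filled.push([x, y]);
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = x + dx, ny = y + dy;
      const key = nx + "," + ny;
      if (grid[ny] !== undefined && grid[ny][nx] === target && !seen.has(key)) {
        seen.add(key);
        frontier.push([nx, ny]);
      }
    }
  }
  return filled;
}

const grid = [
  [0, 0, 1],
  [0, 1, 1],
];
console.log(floodFill(grid, 0, 0).length); // 3 connected zero cells
```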
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499919.70/warc/CC-MAIN-20230201081311-20230201111311-00063.warc.gz
|
CC-MAIN-2023-06
| 580
| 4
|
https://invent.kde.org/rolisteam/rolisteam/-/issues/25
|
code
|
Characters randomly hidden on vector map: z-order
Yesterday we had a session with 7 instances of rolisteam.
Instance 1 (mine) hosted the game and I was a player. Instance 2 was the game master's. Instances 3..7 were additional players.
When the game master opened a vector map, all players dragged their avatar onto the map. The game master always saw all players but:
- Instance 1 saw only two players (1 and 4) but player 4 disappeared after some moves;
- Instance 2 saw only one player (2);
- Instance 3 initially saw all players but after a few moves, some players disappeared; etc.
Towards the end of the game, it dawned on me that this might be related to z-ordering, so I suggested to the game master that they right-click on all characters (they were still all visible to them) and apply "bring to front". Sure enough, all characters immediately became visible to all instances. On all instances but the game master's, they were hidden behind the background image of the map.
I think a possible fix is two-fold:
- partition the z-ordering into three non-overlapping intervals; one for each layer on the vector map i.e. ground, objects, characters. This way, characters can never be obscured by objects on the object or ground layers and objects can never be obscured by the ground layer.
- initialize the "altitude" or whatever it's called on all objects to the same value across all instances of rolisteam; i.e. as soon as a player drags their character onto a vector map, make that object top of the z-order and propagate this to all other instances.
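The two-part fix proposed above can be sketched roughly as follows. This is an illustrative Python sketch only (Rolisteam itself is C++/Qt), and the band boundaries are made-up values:

```python
# Hypothetical non-overlapping z-order bands, one per vector-map layer,
# so a character can never be obscured by the object or ground layers.
LAYER_BANDS = {"ground": (0, 1000), "object": (1000, 2000), "character": (2000, 3000)}

class ZOrderAllocator:
    def __init__(self):
        # Each layer hands out z values starting at the bottom of its band.
        self.next_z = {layer: lo for layer, (lo, hi) in LAYER_BANDS.items()}

    def bring_to_front(self, layer):
        """Return a z value above everything previously placed on this layer.

        The same value would be propagated to all connected instances so
        every client agrees on the stacking order.
        """
        lo, hi = LAYER_BANDS[layer]
        z = self.next_z[layer]
        if z >= hi:  # band exhausted; a real implementation would renumber
            z = lo
        self.next_z[layer] = z + 1
        return z
```

With this scheme, any character z value (2000+) is strictly above any object (1000..1999) or ground (0..999) z value, regardless of drag order.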
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662512249.16/warc/CC-MAIN-20220516204516-20220516234516-00691.warc.gz
|
CC-MAIN-2022-21
| 1,560
| 11
|
https://forums.electricimp.com/t/impc001-adc-errors/5841
|
code
|
I am using adc pins on new impc001 breakout board and seeing some erratic readings on imp central. I am using short unshielded wires (100mm) to external low impedance sources.
The ADC counts can vary by a few thousand counts between readings taken at 30-second intervals.
This seems very high, and I'm wondering whether I need some buffering and filtering on the ADC inputs to improve the readings.
Adding a capacitor to the input (eg 100nF) is certainly worthwhile as the ADCs are muxed and so you get charge injection as the input is selected. Try that?
Ok thanks Hugo.
That certainly made a big improvement by adding those caps on breakout board.
Just wondering: do I need to convert the 16-bit ADC readings from the read command down to 12 bits, and how do I do this if needed?
You can do that just by shifting right 4: new = (old >> 4)
We report as 16 bits as this allows newer devices with more accurate ADCs to be used seamlessly.
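The shift suggested above is a one-liner. Sketched here in Python for clarity (the imp itself is programmed in Squirrel, where the same `>> 4` applies); the 3.3 V reference in the second helper is an assumption, not something stated in the thread:

```python
def adc_16_to_12(raw16):
    """Scale a 16-bit-reported ADC reading down to a native 12-bit range
    by shifting right 4 bits, as suggested in the thread."""
    return raw16 >> 4

def adc_to_volts(raw16, vref=3.3):
    """Convert the 16-bit reading to volts, assuming a hypothetical
    full-scale reference of `vref` volts; check your board's datasheet."""
    return raw16 / 65535.0 * vref
```

Because the firmware reports 16 bits regardless of the underlying ADC resolution, code written this way keeps working unchanged on newer devices with more accurate ADCs.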
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00656.warc.gz
|
CC-MAIN-2022-27
| 908
| 9
|
https://simple.wikipedia.org/wiki/Action-adventure_game
|
code
|
Action adventure game
An action-adventure game is a video game genre that combines gameplay from adventure and action games. Popular games in this genre are The Legend of Zelda series, the Castlevania series, the God of War series and the Metroid series. In an action-adventure game, the player goes on adventures that are usually filled with monsters the character has to fight.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00455.warc.gz
|
CC-MAIN-2023-14
| 440
| 3
|
https://www.jobhack.net/linux-engineer-resumes/
|
code
|
• Operating systems: Red Hat Enterprise Linux 7 and CentOS 6/7 Linux systems
• Administering user accounts
• Registering/ Administering Servers/hosts to RedHat Satellite
• Hardening the system using Security Technical Implementation Guides (STIG) and Department of state requirements
• Installation, Update, and Configuration of software applications on servers and workstations
• Set up DNS Servers to resolve hostnames for both the internal and external networks
• Build, configure and manage NFS servers
• Build, configure and manage Web servers using Apache httpd
• Administer firewall security using firewalld
• Using VMWare ESXi Server and VMWare VSphere Client (combination) to manage virtual machines (VMs)
• Collaborate with management level and users to find solution to technological needs
• Conducting and participating in technical meeting to define and implement appropriate designs and implementation
• Writing Ansible Script for configuration management
• Writing Bash (shell) Script for task automation
• Using SolarWinds for system management
- Linux Engineer at CSRA/General Dynamic
- System Administrator at Skylon Information Technologies, LLC
- Linux System Administrator at Linuxjobber consulting services inc
- Linux Administrator at National Society of Leadership and Success
1 year, 5 months at this Job
- Bachelor of Science - Computer Science
• Built and configured Linux based servers including RedHat, CentOS, and Ubuntu
• Integrated Linux infrastructure into software monitoring system and wrote custom reporting scripts
• Installed, configured, and designed system infrastructure for Ansible automation framework
• Wrote Ansible automation playbooks to streamline server patching frequency and scheduling
• Deployed, managed, and troubleshooted servers in a virtualized environment using VMware
• Designed and implemented test environment framework for Commvault compatibility. Performed disaster recovery analysis and validation.
• Monitored and resolved system alerts for server issues such as networking, disk space usage, backup remediation, and memory usage
- LINUX ENGINEER at ORLANDO HEALTH
- DATA ANALYST at HOSTDIME.COM
- SUPPORT SUPERVISOR at HOSTDIME.COM
- SERVER ANALYST at HOSTDIME.COM
1 year, 10 months at this Job
- MASTER OF SCIENCE - COMPUTER SCIENCE
- BACHELOR OF SCIENCE - Finance
Engineer providing 24/7/365 coverage at the Operations Support Center. Expedient provides colocation facilities, ISP connectivity, managed systems, networks, and virtualization for 12 datacenters throughout the Northeast, all of which the OSC and its staff have direct or indirect access to and responsibility for. Expedient is a multi-tenant provider in which we are directly responsible for the management of customer services and equipment. During my 11 1/2 year tenure, I've had first-hand experience with systems administration in both Linux and Windows as well as network administration in both Cisco and Juniper environments. I have also planned, designed and engineered the monitoring system (IBM Tivoli OMNIbus) which monitors both core infrastructure and customer equipment across all 12 datacenters as well as POP, CO and customer premise locations.
- Linux Engineer at Expedient
- PC/Desktop Technician at BNY Mellon
- IT/Communications Technician at Halliburton
11 years, 5 months at this Job
• Built Red Hat Linux servers 6 and 7; configured and set up NetBackup/DDBR
• LDAP configuration and management.
• Experience with Physical and virtual Decommissioning of Linux servers
• Logical volume partitioning and managements.
• Red Hat Network configuration/networking protocols (DNS, NAT, SSL, TCP/IP, IPv4, UDP)
• Extended Experience with virtualization - KVM and VMware/vSphere.
• Experience using YUM and RPM for package management.
• Create Snaps for management approval required for server downtime/outage.
• Experience on IBM Remedy ticketing system; able to Assign, open and close tickets.
• Experience in Shell scripting (bash) to automate system administration jobs.
• Monitoring the hosts and networks using NAGIOS
• Performs moderately complex systems/database administration.
• Installed, configured, secured, and patched Linux servers using the latest Linux platform software packages.
• Managed user accounts and set user permission and privileges.
• Installed web servers using Apache, with PHP for server-side scripting and HTML for website pages.
• Used bash and some Perl scripting for automated processes in managing disk space, deleting old logs, and cron jobs.
• Provided assistance in troubleshooting software installation and, application uptime and, network connection issues.
• Utilize firewall to control inbound and outbound traffic.
• Set up cron jobs for automated processes
• Installed and configured virtualization on Linux platforms using VMware, and KVM respectively.
• Installed and setup of Logical volume management and RAID hardware/software for high availability and fault tolerance.
• Maintain Linux systems serving as firewall, mail server, DHCP and DNS server
• NFS file system mounting and support for developers
• Monitoring System Performance of Virtual memory, Managing Swap Space, Disk utilization and CPU utilization
• Check alert logs, trace files and file System maintenance and application
- Linux Engineer at Bowie State University
- Enterprise Support Services/ IT Computer Technician at Bowie State University
3 years, 10 months at this Job
- Security+ Certification
- BA (Hons) - Philosophy
- MS - Management - Information Systems
• Configured Jenkins for doing builds in all the non-production and production environments.
• Worked extensively with Git version control for doing code deployments in various environments
• Handled Jira tickets for SCM Support activities.
• Understanding and experience working with CI/CD tools and best practices.
• Experience in full stack application development with an understanding of infrastructure, platform and containers-as-a-service hosting and provisioning.
• Collaborate with engineering teams to identify gaps in their integration workflow.
• Collaborate to build, and deploy customized reusable application environments and application stacks using containers.
• Creating user level of access for related GitHub project directories to the code changes.
• Knowledge of installing RHEL Server from scratch using kickstart and PXE boot.
• Responsible for migration of Microsoft Virtual Servers and VMware GSX servers to ESX platform.
• Strong knowledge of Linux kernel configuration, performance monitoring, and tuning.
• Hands on experience on Puppet for configuration management to existing infrastructure.
• Created Puppet modules for Linux configuration such as user, group, SSH, Kernel, Packages
• Understanding of all Puppet integrated tools such as Hiera, MCollective, Facter, etc.
• Understanding of automatic provisioning system with kickstart and Puppet.
• Familiar with all puppet resources and templates to manage Linux configuration.
• Complete understanding about puppet functionality and connectivity between master and client.
• Worked extensively on LVM, which includes creating PVs, VGs, LVs, and file systems, and troubleshooting.
• Involved in complete administration tasks on UNIX, RHEL servers and documentation for the projects
• Troubleshooting hardware and software problems.
• Knowledge of installing RHEL server using kickstart and cloning image.
• Install, configure, tuning, security, backup, recovery and upgrades of RHEL 5.5 and higher
- Linux Engineer at Paypal
- Linux System Administrator at JP Morgan Chase
- Linux System Administrator at Bank of America Augora Hills
- Linux Administrator at ARC Solutions
2 years, 7 months at this Job
- Bachelors in Computer Sciences - Computer Sciences
Roles and Responsibilities:
• The primary responsibility is to carry out day-to-day tasks of System Administration.
• Different types of Installation and configuration on RedHat, CentOs, Ubuntu, and Windows7.
• Administrations of Users and Groups.
• Linux Package management tools (RPM, YUM etc).
• Troubleshooting and Configuration of SSH, FTP, NFS and APACHE Server.
• Installing and configuring new servers based on requirements.
• DNS and NIS configuration.
• Responsible for creating, modifying and deleting users, groups and assigning the permissions to users and groups.
• Having the knowledge of creating LVM.
• Good understanding of error logging subsystem. Environments: LVM, DNS, DHCP, NFS, ACL's, FTP, TCP, NIS. PLACE: Bangalore.
- Linux Engineer at Cartronics Technologies Pvt Ltd
1 year at this Job
- Master of Computer Applications - Computer Applications
- Bachelor of Computer Applications - Computer Applications
• I was originally hired as a Linux Engineer but was promoted to a Principal Systems Engineer within one year.
• Used puppet for configuration management (RHEL, Centos, Debian) systems deployed to our datacenter and some in our Dev area in AWS.
• Deploying to AWS we have been using Terraform, custom AMIs, Packer, some Ansible and eb-deploy.
• Used GitHub (Enterprise) for revision control.
• Deployed Netapp 6240's in active-active dual-controller 7-mode. Migrated all production shared volumes using NFS for the website from 3140 Filers to a 6240-cluster using snap-mirror.
• Built RHEL hosts for Oracle RAC, migrated the RAC cluster from using NFS for ASM to San attached Luns over FCoE to increase performance for the commerce store. The blades were diskless, so the design included SAN (Luns) for the OS and for Oracle ASM using dm-multipath (device mapper).
• Assisted as a MySQL DBA setting up multiple MySQL clusters with master/passive master and multiple slaves. Used mylvmbackup (LVM snapshots) on passive slaves (every 2 hours) and MySQL logical dumps (nightly), also on passive slaves, to an NFS volume hosted on NetApp storage shelves.
  o Rebuilt slaves/masters when failures occurred. Later started using xtrabackup for full/incremental backups and recovery.
• Setup multiple mongodb replica sets. Rebuilt mongodb hosts when failures occurred using the initial sync method to catch hosts back up.
• Created a script that ran LVM snapshots on mongodb database servers (every 2 hours) and mongo dumps (nightly) for each production mongodb cluster.
• Created a script to restore the stage environment mongodb clusters using production data on a weekly basis.
• Setup Elasticsearch cluster, 3 master/4 data nodes, Logstash, and Kibana for logging and other store analytics.
• Wrote a script in Python that pulled store item/orders from Oracle every minute, running the query results through MaxMind GeoIP and stored the data in Elasticsearch so we could see top products being purchased by country, city, state etc. on an Elastic Kibana dashboard with up to the minute status.
• In Python, used the Akamai/Prolexic API, and Arbor APS (API) blocked host count to automate routing on and off the Prolexic routed DDoS protection cleaning centers when attacks occurred. This also sent email to the appropriate groups that a change occurred.
• Wrote an API in Golang that when called would update a Datagroup in the F5. This was used in an Irule that would ban source addresses for the desired period of time. I extended a bot written in ruby that could be used in slack to call the api.
• Wrote a script in Kotlin and using the AWS java-elasticbeanstalk-sdk to terminate *inactive Elastic Beanstalk clusters (in a Blue/Green design) if they hadn't been updated in 4 hours and had our team tags on them.
• Wrote many other automation strategies.
• Setup a Hadoop (Hortonworks HDP) cluster as a POC for the business intelligence team.
• The DevOps team I was on (Platform) ran most of their applications in Elastic Beanstalk in AWS. Using Terraform, I deployed one of our stacks in Docker containers using AWS Fargate as a POC.
  o The containers included the API, Nginx, Consul agent, Syslog-ng, and a Datadog agent. The service was behind an ALB.
• Supported all services required to run multiple part of the site. Apache Tomcat, Apache and Nginx web servers, Memcached, Redis, MongoDB, MySQL, Oracle Commerce (ATG), Red Hat Jboss, Oracle DB, Java, PHP, ActiveMQ, Solr.
• Setup Zabbix with a MySQL backend for an extensive monitoring solution with over 60,000 service checks.
• Troubleshooting application issues such as high load and memory issues related to Java applications. Issuing and analyzing thread and heap dumps to find out what is consuming CPU or memory.
• Setup Amazon AWS VPC for the dev environment(s). Setup the AWS development environment to be connected to our private datacenter using IPSEC.
• Responsible for creating and implementing a completely new datacenter design for the Bodybuilding.com website(s). The design included Cisco Nexus 7010 with multiple fabric modules, supervisor engines, and power supplies, Juniper SRX cluster and multiple F5 LTMs in HA.
• Setup Akamai CP's (content provider) for multiple subdomains for Bodybuilding.com so we could utilize their global caching edge services.
• Setup dual pairs of Cisco 5548 switches, Cisco UCS and VMWare so we could move all production public facing application and backend servers into virtual machines. We have over 300 production virtual machines to support Bodybuilding.com. The design used multihop FCoE, off a SAN virtual device context (vdc) on a Cisco 7010 L2/3 switch which presented the LUNs for ESXi inside volumes hosted on both 3140 (stage) and 6240 (production) Netapp controllers. Utilized NFS Datastores for virtual machines.
• Redesigned the corporate internal network. This new design used multiple MPLS circuits (single carrier) and backup (alternate carrier) IPSEC tunnels running OSPF as the routing protocol, for network redundancy. The original design before my arrival did not allow the Fulfilment centers to reach the Datacenter and required moving cables around if one of the connections went down.
- LINUX ENGINEER /PRINCIPAL SYSTEMS ENGINEER at LLC/VITALIZE , LLC
- SYSTEMS /NETWORK ENGINEER at BMHC
- Sr. Network Engineer, Technical Lead Network Security at Albertsons, Inc
- Sr. Network Engineer at Albertsons, Inc - Boise Id
10 years, 5 months at this Job
• Worked on designing, implementing and managing solutions utilizing Red Hat Linux 6 and 7
• As a Linux/Unix system administrator maintain the various servers and also Production Support of various applications in Red Hat Enterprise Linux.
• Involved in installing Puppet client and Ansible on Red hat servers for Automation purpose
• Installed Jenkins for automation
• Developed Cron jobs and Shell Scripts (Shell, Python) for automating administration tasks like file system management, process management, backup and restore.
• Installed and maintain puppet-based configuration management system
• Configured and managed LVM on Linux using tools like lvextend, lvcreate, resize2fs, etc.
• Experience in installing, configuring and maintaining file-sharing servers like Samba, NFS and FTP, as well as WebSphere & WebLogic application servers, Nagios and Chef.
• Worked on configuring the Linux machines through Kickstart (Red Hat Linux) program for Host and Network based Installations.
• Extremely familiar with usage of SSH and rsync/ ftp / sftp /scp / telnet for remote connections and ancillary needs.
• Experience working in VMware ESX (VSphere) 4.x hypervisor for virtualization and installed Linux (RHEL).
• Experience in Package management using RPM, YUM and UP2DATE in Red Hat Linux.
• Active participation in project planning and deployment
• Configured volume groups and logical volumes, extended logical volumes for file system growth needs using Logical Volume Manager (LVM) commands.
• Responsible for Monitoring and fine tuning system and network performance for Linux systems.
- Linux Engineer at MA Tech International Inc
5 years, 1 month at this Job
- Bachelor of Science in Computer Science - Computer Science
- Associates Degree - Inter-Networking Technologies
Business environments of Local, State, Federal and DoD customers. Develop conceptual designs, engineering documents for implementation and client transformation. Operational documentation for best practices, standards, drawings. Work with operations teams in upgrade planning, roadmaps, and new technologies. Reevaluate and constant improve processes, standards, technologies. Provide Operations L3 escalation support. As needed, hands-on assistance in environment configurations, server deployment, travel to client site for planning, infrastructure builds. Technical interviews and development of team members in geographically dispersed teams. Security deployment of FireEye Endpoint Security servers, streamline Firewall Rule requests for customer compartments. Nessus scan remediation, OpenScap scans. Technology includes midrange - HPE ProLiant DL580, DL380, BL460c servers, HP c7000 Chassis, Red Hat Enterprise Linux (RHEL 6/7), physical and virtual servers (VMware ESXi 5/6), Oracle RAC, MySQL in production, pre-prod, test and Cloud DevOps environments.
- Linux Engineer at Perspecta
- Technology Consultant IV at Perspecta
- SAN Storage / Linux Administrator at A.H. Belo Corporation
- Senior SAN Engineer at Xerox Corporation
3 years, 4 months at this Job
- Associate - Computer Operations
- High school or equivalent
• Design, template, and support for over 500 RHEL 6.x, 7.x, CentOS 6.x, 7.x servers for a multi-tier application architecture across geographically diverse data centers
• Serve as senior Linux technical liaison for development, security, Governance, Risk Management, and Compliance (GRC) and networking collaboration for the dominant player in the healthcare records retrieval market
• Write and implement the corporate Linux standards documentation
• Deploy virtual systems with high degree of proficiency in VMware ESXi/VCenter/VSphere
• Build and maintain production specifications for virtual RHEL 6.x, 7.x, CentOS 6.x, 7.x, and Ubuntu systems
• Serve as tier three escalation resource for RHEL 6.x, 7.x, CentOS 6.x, 7.x, and Ubuntu systems
• Engineer and manage virtual disks for production systems
• Engineer and manage disaster recovery process for Linux systems that utilize multiple NFS/CIFS shares as primary data storage
• Exposure to Ansible, open-source Puppet, Puppet Bolt, and Docker CLI/Portainer
• Provision and configure Linux EC2 instances within corporate AWS VPCs
• Resolve HITRUST/SOC2 control gaps in corporate Linux systems
• Submit and complete Linux change management requests
- Linux Engineer at Ciox Health
- UNIX Engineer at QBE First
- UNIX Systems Administrator at Ventyx, Inc
- Sales Support Systems Engineer at Ventyx, Inc
3 years, 1 month at this Job
- Associate of Arts in Business Administration - Information Systems
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00165.warc.gz
|
CC-MAIN-2019-26
| 20,037
| 192
|
https://forums.maslowcnc.com/t/left-motor-issues/7576
|
code
|
Hello fellow Maslowians,
I'm having issues with the left motor while trying to perform the calibration procedure. It does not respond very well to commands like Extend/Retract chain, for example: it works 1-2 times, then stops responding altogether.
During the test motors/encoders procedure everything works OK on all three motors. I also tried manual calibration and a test movement, and only the left motor is not working as expected.
p.s. I'm using Maslow Brains on v1.23 FW and GC; the motors are an alternative solution with specs equivalent to the originals.
Have you tried reseating the connections to the motor and controller? Also, try to isolate the cable routing from any other cables (i.e. away from power cords)
This would normally be a loose connection.
Cable isolation is done well, but I'm not sure about the loose connections. I will double-check.
I guess it would not be possible to perform the test procedure if that were the case, as the test procedure goes without a hitch every time I run it.
Is there a difference between the test procedure and other commands that the motor accepts?
Yes,the test procedure runs each motor full speed in each direction and checks that the encoder sends signals in the expected phase relationship for that direction. It is a ‘go/no-go’ kind of test.
The values for ‘Settings/AdvancedSettings/EncoderStepsPerRevolution’ might be different for the motors you’re using, that would be one thing to check.
Another check would be to swap the right and left motor cables at the motor board (with the sled detached) and see whether the problem shifts to the right motor (indicates motor board problem) or stays with the left motor (motor or cable problem). You could use the up/down arrows to drive the motors for the test.
I would also suggest updating to v1.24 of the firmware and GC, though it’s hard to see how that could cause a single motor to act up.
No, the motor does not accept 'commands'; it just gets voltage and sends back encoder signals.
If the connectors are loose, you can have problems. If the cables run near a source of interference that isn't on when you do the test procedure (say the router's power cord), you could have problems when the router is running vs. when it isn't.
Thank you for pointing out all this info. I will try to follow your recommendations and hopefully it will set me off in a good troubleshooting direction…
I had a similar problem with my left motor. I could have sworn it was not a loose cable, but I detached all the cables and then re-attached them, and it has been working since. The cable may not seem loose, but if it is ever so slightly tugged in one direction it won't have a good connection to the motor.
Hope you get it fixed!
Just to report/add to the overall knowledge base: it turns out I didn't properly calculate the exact number of encoder steps per revolution for the motors. I'm just not sure how the right motor worked somewhat OK and the left did not… But now it all works well!
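For reference, the encoder-steps-per-revolution figure mentioned above is usually the product of the encoder's counts per revolution, the quadrature decoding factor, and the gearbox ratio. A minimal sketch, where the example numbers are placeholders (check your motor's datasheet, not these values):

```python
def encoder_steps_per_rev(encoder_cpr, gear_ratio, quadrature=4):
    """Encoder steps per output-shaft revolution.

    encoder_cpr: counts per motor revolution of the raw encoder disc;
    gear_ratio: motor revolutions per output-shaft revolution;
    quadrature: edges counted per encoder count (4 for full quadrature).
    """
    return encoder_cpr * quadrature * gear_ratio

# Hypothetical example: a 7-count encoder behind a 290:1 gearbox with
# 4x quadrature decoding gives 7 * 4 * 290 = 8120 steps per revolution.
```

Getting this number wrong makes commanded chain movements land short or long, which matches the erratic behavior described earlier in the thread.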
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104375714.75/warc/CC-MAIN-20220704111005-20220704141005-00468.warc.gz
|
CC-MAIN-2022-27
| 2,982
| 22
|
https://downloads.zdnet.com/product/10248-10033215/
|
code
|
For anyone who wants more than the generic system colors offered by most color palettes. Select colors from the desktop with the color grabber tool, use the RGB or CMY sliders to create the color, or play with the random colors to get the color you want. Then choose from HTML, Java, Visual Basic, C/C++, Pascal/Delphi, CMY or CMYK to get the color value in the correct form. Need the colors to be browser-safe? A single click and they are updated to the closest match among the browser-safe colors. For website designers we have included both HTML and Cascading Style Sheet tags in the HTML tool. For programmers, there are brief code examples for each language in the help. Don't know which font would be best? Try the Text Attributes in the RGB tool to check out all your system fonts. This program is handier than you think; once you start using it, you won't know how you lived without it.
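The two conversions the blurb describes (emitting an HTML-style color value and snapping to the closest browser-safe color) are simple to state. A sketch in Python, not taken from the product itself; the 216-color browser-safe palette uses channel values that are multiples of 51 (0x33):

```python
def rgb_to_hex(r, g, b):
    """HTML-style hex string for an RGB triple, e.g. (255, 0, 128) -> #FF0080."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def to_browser_safe(r, g, b):
    """Snap each channel to the nearest multiple of 51, giving the closest
    color in the 216-color browser-safe palette."""
    snap = lambda c: 51 * round(c / 51)
    return tuple(snap(c) for c in (r, g, b))
```

The other output formats the tool offers (Java, Visual Basic, Pascal/Delphi, CMYK) are just different textual renderings of the same triple.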
Windows NT 4
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038921860.72/warc/CC-MAIN-20210419235235-20210420025235-00024.warc.gz
|
CC-MAIN-2021-17
| 910
| 2
|
http://mathhelpforum.com/calculus/83342-difference-between-left-right-derivatives.html
|
code
|
I'm not sure what you're looking for...?
The limit, as x approaches some value "a", of f(x) is nothing more than what the value of f(a) ought reasonably to be, assuming the limit exists.
In some cases, the limit does not exist, such as for f(x) = 1/x when x approaches zero.
In other cases, the limit exists, but not the functional value, such as for f(x) = [(x + 1)(x - 2)]/(x - 2) when x approaches 2. Other than for x = 2, this function is the same as g(x) = x + 1, so the limit at x = 2 is g(2) = 3. But f(x) is not actually defined for x = 2. The limit exists, but the function doesn't actually take on that value.
In still other cases, the limit exists, and the functional value exists and agrees with it, such as for f(x) = x.
And then you have the cases where the limit from one side exists, but the other limit does not, or is not the same value. A piecewise function is a good example of this:
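The piecewise function itself appears to have been lost when this post was extracted. Judging from the one-sided values quoted just below (the left limit is -1, the right limit is 1, and f(0) = -1), it was presumably something like:

```latex
f(x) =
\begin{cases}
-1, & x \le 0, \\
\phantom{-}1, & x > 0.
\end{cases}
```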
Clearly, each "half" has a limit as x approaches zero: from the left, the function "ought" to take on the value -1 (and it does); from the right, the function "ought" to take on the value 1 (but it doesn't, because f(0) = -1). Each one-sided limit exists, but "the" limit does not, because the two one-sided limits don't agree.
Does that help at all...?
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542323.80/warc/CC-MAIN-20161202170902-00502-ip-10-31-129-80.ec2.internal.warc.gz
|
CC-MAIN-2016-50
| 1,234
| 8
|
http://16.usnccm.org/stewartabstract
|
code
|
James Stewart, Sandia National Laboratories
The U.S. is on a path to achieve exascale computing by 2022, with early hardware systems scheduled to become available in 2021. These systems are characterized by heterogeneous hardware that includes GPU (Graphical Processing Unit) accelerators from a variety of vendors. This heterogeneous computing environment requires a rethinking of our computational mechanics software and analysis paradigms to realize these performance gains. In this presentation, we discuss how these new challenges are being addressed across a variety of areas, such as: hardware/software co-design, performance portability, solver concurrency, meshing and geometry, uncertainty quantification, and visualization. We will also discuss progress being made in various application areas including wind turbines, climate modeling, materials, and machine learning.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00694.warc.gz
|
CC-MAIN-2023-50
| 880
| 2
|
https://community.illuna-minetest.tk/t/smartshop-mod-fixes-updates/1641
|
code
|
The smartshop uses a fixed indent to right-align the price quantity. This causes the price to be partially or mostly cut off for prices larger than 9, as it is clipped off the right side of the button. This patch fixes it, but you may need to convert init.lua from DOS to Unix line endings to apply the patch.
The version used on Illuna is about two years behind upstream, which has wifi storage, supports pipeworks and who knows what other features it has added, but @afk wants them.
I have also pushed the patch upstream, it may or may not be applied considering last commit was two years ago.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376144.64/warc/CC-MAIN-20210307044328-20210307074328-00621.warc.gz
|
CC-MAIN-2021-10
| 600
| 3
|
http://iamred.at/4479-high-frequency-trading-bot-cryptocurrency.php
|
code
|
A sensible alternative to this is therefore online payment services. Looking back, that was foolish.
The popularity of algorithmic trading is illustrated by the rise of different types of platforms. Gekko is an open-source platform for automating trading strategies over bitcoin markets. There are inputs for quoting parameters, grids to display market orders, market trades, your trades, your order history, and your positions, plus a big button with the currency pair you are trading.
Backfills, list-selectors: you can run this separate command after a successful docker-compose up -d. GitHub activity charts for cryptocurrencies. Top 6 Cryptocurrency Trading Bots: the cryptocurrency market is growing and evolving on a daily basis.
June 9, bitcoin trading software github AT Is developing several cryptocurrency projects that focus on the. Anyone who purchases the innate digital asset, XRP, has the potential to earn huge returns on their investment, if Ripple keeps making headway throughout the banking sector.
PHP cryptocurrency trading library with support for more than 100 bitcoin/altcoin
Microsoft is buying code-sharing site GitHub for bn. Bitcoin Trading Github This platform is made for experienced python developers looking to develop.
Although most investors in the crypto space have so far become rich through long-term trading, it is possible to day-trade the individual altcoins as well as Bitcoin.
From strategies, to code libraries, to people sharing algorithms on github. Gatehub usd deposit how to buy ripple thru td ameritrade forex market.
Bots designed with TEMA smooths price fluctuations and filters out volatility than any other. TOP Choice!
Ideal Bollinger Bands settings vary from market to market, and may even need to be altered over time even when trading the same instrument. Python 3 WSS library: institutions and individuals rely on direct access to Coinigy's low-latency live streaming data spanning hundreds of markets and currencies.
Once mined Bitcoin becomes like a currency that can be purchased, used in transactions or even traded like with this Bitcoin trading platform. Other, less reputable automated bitcoin trading software providers only accept crypto. Bitcoin Gambling Strategies Why not create a simple trading bot that can crypto trade net bitcointalk trade Bitcoin and bitcoin trade github.
They range from free software that anyone can use to expensive subscription-based bots for professional crypto day traders.
Although Haasbot is probably the most complete of the trading bots currently available, doing much of the labor with relatively minimal input required from the user, it is also pretty expensive, with a tiered range of subscription costs. Zenbot is one of the only autonomous trading solutions capable of high-frequency trading, and it supports trading multiple assets at the same time.
Sign in with the email and password you entered when you created your account.
Something more suitable exists for beginners, or for people who just want something to run in the background while they build another business or simply enjoy life. For background, see the references on how the Bitcoin blockchain works and how mining works; with a complete implementation you can even mine yourself.
The number of trading bots in the market is increasing on a daily basis.
Delivering newspapers is a popular side job, including among pensioners who want to top up their pension. You are responsible for all the work yourself, and if you want to earn enough money, you have to work for your success continuously and with passion.
Depositing fiat currency can be a time-consuming process and can take up to 10 days, depending on your payment method. Our software is updated at least once a week.
But the company vehemently denies this. Official Omni Foundation GitHub projects. If you expect to earn a lot of money through mining, then it would be smart to purchase a secure wallet, especially if you've decided to take the plunge and have bought your own Bitcoin (BTC).
There are trading bots that are free of charge and can be downloaded online, and there are also paid trading-bot services offered by various trading-engine and programming companies. Currently, the beta version only supports Binance, but the developers plan to add more crypto exchanges as Zignaly gets closer to release.
It is built with the ping-pong strategy, which lets you set a static buy and sell price. As we cover mostly forex brokers, we are used to companies stating this information openly. Here is one miner that works with the getwork protocol. Get the right Bitcoin mining hardware.
Buy XRP? NewYorkCoin is similar to its older brother Bitcoin; the original NewYorkCoin developer is on GitHub. Check out why Minera is considered the best bitcoin mining dashboard. However, even the most popular cryptocurrency trading bots vary in quality, usability, and profitability.
But before you can receive any Bitcoins you need to set up a Bitcoin address. These parameters need to be adjusted as you go along. Under these circumstances, profit is made by buying at the bottom of the troughs and selling at the crests of the charts. A clear set of rules about the price range is in place, such that as an order is executed, the exit order is immediately placed at a price predetermined by the profit range defined by the strategy's rules, and a predetermined stop order is also placed in case prices move unexpectedly against the order.
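The ping-pong/grid behaviour described above can be sketched as a tiny state machine. This is a minimal illustration, not code from any bot named here; all names and prices are invented:

```python
def ping_pong(price, state, buy_at=100.0, sell_at=110.0):
    """One tick of a static ping-pong strategy.

    state is "waiting_to_buy" or "waiting_to_sell"; returns
    (action, new_state) where action is "buy", "sell", or None.
    """
    if state == "waiting_to_buy" and price <= buy_at:
        return "buy", "waiting_to_sell"    # bottom of the trough
    if state == "waiting_to_sell" and price >= sell_at:
        return "sell", "waiting_to_buy"    # crest of the chart
    return None, state                     # price between the levels
```

A real bot would additionally place the stop order the text mentions, for when the price moves against the open position instead of ping-ponging back.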
Bitcoin Trade Github
Jeff Garzik was removed from the Bitcoin GitHub repo. Like some other trading bots, Gunbot is supported by many cryptocurrency exchanges, including Bittrex, Kraken, Poloniex, and Cryptopia.
The bots that promise you riches will most likely turn out to be scams and will probably end up losing you money.
Mobi Bitcoin Karte
The bot provides several customizable technical analysis tools that allow users to develop advanced crypto trading strategies, and it is supported by major bitcoin exchanges.
If you are checking from the same IP address, you can alternatively enter localhost. Market conditions change, so trading strategies need to be updated and adjusted to keep functioning in new conditions. The software supports multiple currencies and exchanges, and allows users to buy their favorite trading strategy or, alternatively, sell strategies they developed themselves, using backtesting tools that show how their strategies would work under different market conditions.
Flatpak for QtBitcoinTrader.
As an open-source project, Zenbot is freely available for users to download and modify the code as necessary.
Best brokers for day trading: after registration, you now sign in on Poloniex. Here is how to mine Bitcoin for free! Ensure the Coinbase-specific properties have been set with your correct account information if you are using the sandbox or live-trading environment.
Margin has some amazing properties. While Gekko's functionalities are somewhat limited compared to some of its peers, it can be a good option for those new to the cryptocurrency markets who want to test out different automated trading strategies.
You can get cgminer directly from GitHub, where it is hosted, and compile it yourself. Day trading uptrends with Bollinger Bands: Bollinger Bands help assess how strongly an asset is rising, and when the asset is potentially losing strength or reversing.
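For reference, the "settings" the text says must be tuned per market are exactly the period and width of the bands. A minimal, illustrative computation (not taken from any bot named here):

```python
import statistics

def bollinger(prices, period=20, width=2.0):
    """Return (lower, middle, upper) Bollinger Bands for the latest window."""
    window = prices[-period:]
    mid = statistics.fmean(window)   # simple moving average
    sd = statistics.pstdev(window)   # population standard deviation
    return mid - width * sd, mid, mid + width * sd
```

Shorter periods and narrower widths make the bands react faster but generate more false signals, which is why the ideal values differ between instruments.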
Python Crypto Bot Github — How Many Bitcoin Users Are There?
CryptoTrader offers different subscription plans with a tiered range of fees. Important information on trading Ripple coins follows.
There is an easy way to mine bitcoin on a laptop using software. Ethereum trading bots are on GitHub as well.
It took me about three days. It can be very effective for trading periods of 1h; with a shorter period like 15m it is too erratic and the moving averages are kind of lost. In case of large volatility, turn it off and leave the order on the books.
Top Cryptocurrency Trading Bots & Bitcoin Software Platforms
It's difficult for a computer program to react to fundamental market conditions such as rumors on social platforms, a hack on an exchange, or government decisions on cryptocurrency. Since market making is a fairly simple strategy, there are a lot of free trading bots that you can download and configure to perform it on a wide variety of exchanges.
There were often large differentials between the prices offered on various exchanges, meaning profits could be made through arbitrage.
This software helps you open and cancel orders very fast, and builds are available for Windows 32-bit and 64-bit. Your pool configuration (algorithm to mine, address, port, etc.) will be saved in the pools file. The more sophisticated crypto trading bots allow traders to set specific parameters at which the bots execute trades on their behalf.
The most important thing to highlight about cryptocurrency trading bots is that they are not a one-stop passive-income solution that brings riches while you sleep. This is why we present you with the tested and trusted ones on the market. It should also be noted that trading isn't based on technical analysis alone, but on fundamental analysis as well.
In other words, XRP is the grease that allows any currency to be easily exchanged for any other currency on the Ripple platform. In the settings, customers can choose between the classic trading method and the Martingale/Fibonacci approach, in which the stake is doubled again and again.
In-built trading strategies, a configurable interface to multiple exchanges, margin trading support, and indicators. The trading bot currently has three different price packages. YouTube: Getting Started with Gatehub Ripple Wallet Trading, Part 1. Transcript: In this video, I'll show you how to get 20 ripple into your GateHub account so you can start trading, and how to transfer bitcoin into your account using Coinbase.
The bot will be programmed to make both buy and sell limit orders near the existing market price.
Qt Bitcoin Trader v1. TOP Choice!
http://archive.railsforum.com/viewtopic.php?id=25294
Topic: Sending out a lot of email ... hosting options anyone?
So, my happy Rails app needs to send out a lot of email to notify people of new events that they subscribe to. Unfortunately, my host (coughdreamhostcough) has an upper limit of 100 recipients per hour. Yes, if I send 100 emails with 1 recipient I've hit my quota; if I send 25 emails with 4 recipients I've also hit my quota. This is obviously not going to work for a busy site.
I'm hoping there's someone here who has gone through the pain of scaling up a web app...
Does anyone have a workaround for the quota limits?
Any suggestions on a better host for high-volume email distribution?
Are there email-only distribution services?
Would getting a colo be the right thing where I manage my own e-mail server and go through a whitelisting process?
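One workaround, regardless of host, is to meter sends against the recipient quota yourself and defer the rest. A rough sketch of a rolling one-hour recipient counter (illustrative only — the class, method names, and limits are invented, not from this thread):

```ruby
# Tracks how many recipients were mailed in the trailing hour, so the
# app can defer deliveries once the host's cap (100 recipients/hour
# here) would be exceeded.
class RecipientQuota
  def initialize(limit = 100, window = 3600)
    @limit = limit
    @window = window
    @timestamps = []            # one entry per recipient delivered
  end

  # Can we send to `count` more recipients right now?
  def allow?(count, now = Time.now)
    @timestamps.reject! { |t| now - t > @window }
    @timestamps.size + count <= @limit
  end

  # Record a delivery to `count` recipients.
  def record(count, now = Time.now)
    count.times { @timestamps << now }
  end
end
```

Notifications that don't fit under the cap would go back on a job queue and retry once the window rolls over — though at real volume, moving to a host or relay without such a cap is the cleaner fix.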
https://discourse.nodered.org/t/stop-full-flow-with-node/1584
I have a flow with a lot of waits. the full flow takes up to an hour. (and that is exactly what it should do)
Sometimes the flow starts by accident, and I need to wait the full hour to undo all the steps.
I was thinking of adding a switch node with a flow variable that would stop the flow.
However, that would take a lot of switch nodes. Is there a way to stop the full flow without stopping Node-RED itself?
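A lighter-weight variant of the switch-node idea is a single guard that each branch passes through: a function node that drops the message when a flow variable is set. A sketch of the logic (the variable name `stopRequested` and the helper shape are made up; in an actual Node-RED function node you would call `flow.get("stopRequested")` directly and `return null` to halt the message):

```javascript
// Drops the message when the kill-switch flow variable is set. In
// Node-RED, a function node that returns null ends that message's
// journey, so the downstream wait nodes never fire.
function gate(msg, flowGet) {
    if (flowGet("stopRequested") === true) {
        return null;        // swallow the message
    }
    return msg;             // pass through untouched
}

module.exports = { gate };
```

An inject node that sets `stopRequested` to `true` in flow context then acts as a single stop button, without sprinkling a switch node between every wait.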
https://rakeebchowdhury.com/2016/12/13/nhs-startup-part-x-vanity-metrics/
We have a 100% growth rate per day at the moment.
What does this mean? Nothing.
Vanity metrics or getting “high on your own supply” is so easy to do. But what matters is the mission and accomplishing what you set out to do. The whole point of a startup is to test a hypothesis and get metrics which prove what you set out to do. It is an experiment. Metrics that don’t have anything to do with the mission are useless unless you want to pivot into something which you didn’t foresee when you started.
However, I will say one thing. It’s really awesome having users! An advantage of getting users is that it gives you social proof. Now all of a sudden we have other clinics approaching us to use our software. I’m considering this at the moment as one of the clinics that approached us want to use what we’ve created in a novel way.
However, I don’t want to diverge from the mission until I’ve gathered the data I need to make that decision.
http://chazhound.com/index.php?threads/vomiting-should-i-be-worried.147234/
My two year old golden has vomited 6 times in the last 3 hours. First couple of times it was his food, last 4 times it's yellow bile. He's lethargic and hasn't drank any water. When should I get really worried about him? It's afterhours so I'd have to bring him to the emerg. clinic but not sure if this is something he ate that just has to pass.
https://www.leadtools.com/help/leadtools/v19/dh/l/rastersupport-setlicense(irandomaccessstream,string).html
Sets the runtime license for LEADTOOLS and unlocks support for optional features such as LEADTOOLS Imaging Pro, Document and Medical capabilities, or PDF support. After you have obtained a runtime license and a developer key, you can call Leadtools.RasterSupport.SetLicense in your application to disable the nag message.
Public Overloads Shared Sub SetLicense( _
   ByVal licenseStream As IRandomAccessStream, _
   ByVal developerKey As String _
)
licenseStream — IRandomAccessStream containing the LEADTOOLS runtime license to load.
developerKey — Character string containing the developer key.
You must use this function to set the runtime license for LEADTOOLS and to unlock support for any optional features that you have licensed. If you do not set a RELEASE runtime license, your application will display a "nag" message dialog at runtime, indicating that you have developed the application without a valid runtime license.
In order to obtain a runtime license and developer key, you must contact LEAD. For more information, refer to About LEADTOOLS Runtime Licenses.
For information about LEADTOOLS Document/Medical capabilities, contact LEAD.
To determine if support for optional features has been unlocked, use IsLocked
To set the runtime license from a memory buffer instead of a stream, use SetLicense(byte[] licenseBuffer, string developerKey).
NOTE: You should not use this overload from the main UI thread in WinRT applications. This method uses blocking reads internally, which, if called from the UI thread, can block for too long and cause an exception to be thrown. Instead, you should use SetLicenseAsync, and disable input for your UI until the asynchronous operation has completed.
Medical Web Viewer .NET
http://systemictheory.blogspot.com/2017_09_28_archive.html
Halliday & Matthiessen (2014: 177-8):
In a proposal, the meaning of the positive and negative pole is prescribing and proscribing; positive ‘do it’, negative ‘don’t do it’. Here also there are two kinds of intermediate possibilities, in this case depending on the speech function, whether command or offer.
(i) In a command, the intermediate points represent degrees of obligation: ‘allowed to/supposed to/required to’;
(ii) in an offer, they represent degrees of inclination: ‘willing to/anxious to/determined to’.
We shall refer to the scales of obligation and inclination as modulation, to distinguish them from modality in the other sense, that which we are calling modalisation.
http://billyfung.com/gadget-the-tape-follower/
As part of the ENPH 253 course, Introduction to Instrument design at UBC, I was part of an team that created an autonomous tape following robot car. The course is taken in the summer of 2nd year to introduce young engineers to the various aspects of instrument design, from mechanical to electrical to software. The content of the class always remains the same, but the end final project changes from year to year. For my year it was chosen that we build an autonomous robot to race around a track. The common theme throughout all the final projects is that the robots must be autonomous. My team included Russell Vanderhout, Adrien Emery, and Daniel Lu
The first aspect of the robot is the chassis. Learning how to design for waterjet cutting in Solidworks was the first step in creating the outer shell for the robot. Made from sheet metal, the unibody chassis was designed in Solidworks and then folded into place. Many other teams opted to design several different parts and then screw or weld them together. The major skill learned is to design something quickly to see where the problems lie. Our chassis design ran through 6 iterations before we finally figured out what worked and what didn't. There was not enough time to model out every component to see what would fit into the chassis, so it was much quicker to design, then put parts in to see what worked. For the final chassis, we powder coated the metal red.
For our drivetrain, we chose to use two electric motors mated to a 13-tooth sprocket driving a differential gear via a timing belt. This drivetrain design was done by Daniel, who is quite the car enthusiast. The gearing for the differential was laser cut out of acrylic. This was our first foray into mechanical gears and design, and it was a success, although we did not come back to this topic until upper-year mechanical design classes. The steering was done using a servo motor linked to the front wheels via gauge steel. We were taught about the Ackermann steering geometry and implemented it in our steering. An important note is that the performance of the servo motor greatly affected the performance of the steering; a faster servo meant faster steering.
Another piece of engineering fun was the wheels designed by Daniel. Instead of using standard wheels off the shelf, he decided to design both the wheels and tires for the robot. The wheels were laser cut from polycarbonate and the tires were cut from rubber. The wheels were inspired by the Lamborghini Countach five-porthole design, and the tires were made to be similar to the Michelin Tweel to improve ground contact.
In order to detect the black tape on the white track surface, the sensors we chose to use were QRD1114 sensors. The analog sensors use diodes to detect emitted infrared reflections. Using these sensors we are able to "scan" the ground to follow the black tape. An array of 8 QRD sensors in order to gain a higher resolution of the black tape, and the signal is then fed into a comparator circuit to set the threshold value. Using 8 QRD sensors allows for easier software programming to smoothly steer along the black tape. All the electrical circuits stem back to the main TINAH board we used, which has an ATMega 128 as the processor.
Since the race track features obstacles and another robot, there is a chance that two cars might run into each other in the same lane. To prevent this, we chose to use an IR rangefinder to detect obstacles in front of Gadget. This circuit is very straightforward and consists of the analog rangefinder and a comparator circuit that sets the threshold via a potentiometer. Another option would have been to set the threshold in software, but we did not choose that route. Using the rangefinder, we receive a HIGH or LOW signal depending on whether there is an obstacle at a certain distance from the front of the car. To filter out extraneous signals, we ran the rangefinder signal through a low-pass filter first. And to further reduce noise, we built an enclosure around the rangefinder to block off signals from above and to focus the IR forward.
The code that makes Gadget run is written in the Wiring programming language for the custom TINAH board. The TINAH board is very similar to the more well known Arduino systems. In order to follow the tape accurately without constantly overshooting, the tape following algorithm consisted of two simple states.
For each iteration of the QRD scan, an instruction is sent to the steering servo motor. Using a lookup table, we devised 15 angles at which the sensor readings corresponded to the sensor array. This is essentially using proportional control for the steering based off the QRD sensors.
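A compressed sketch of that lookup-table scheme (all angle values and names below are invented for illustration — the real table was tuned on the robot): with 8 QRD sensors, a single lit sensor or an adjacent lit pair gives 15 distinct tape positions, each mapped to one servo angle.

```cpp
#include <cstdint>

// 15 illustrative servo angles, far-left to far-right (90 = straight).
const int ANGLE_TABLE[15] = {
    40, 48, 55, 62, 70, 77, 84, 90, 96, 103, 110, 118, 125, 132, 140
};

// Map the 8-bit QRD comparator reading to a tape position 0..14.
// One sensor lit -> even position 2*i; two adjacent lit -> the odd
// position in between (average of the two).
int tapePosition(uint8_t bits) {
    int sum = 0, count = 0;
    for (int i = 0; i < 8; ++i) {
        if (bits & (1u << i)) { sum += 2 * i; ++count; }
    }
    return count ? sum / count : -1;    // -1: tape lost
}

int steeringAngle(uint8_t bits) {
    int pos = tapePosition(bits);
    return pos < 0 ? 90 : ANGLE_TABLE[pos];   // hold straight if lost
}
```

This is still proportional control: the farther the tape sits from the sensor array's center, the larger the commanded steering deviation.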
The lane changing aspect of the software consists of tracking which lane Gadget is in, and then executing a lane changing algorithm when needed. The algorithm consists of setting the servo motor to steer at a certain angle, waiting for all 8 sensors to detect no tape, then steering straight until the sensor farthest from the lane being changed into detects the tape. This allows for an overshoot before smoothly going back into the tape following algorithm.
In order to control the power of the motor, we kept the PWM duty cycle around 70% so as to not go too fast. Sadly, we were not able to optimize our robot to go fast and perform reliably within the time constraints we had. Running at 100% would often result in our steering mechanism not being able to respond fast enough to the course, and the robot ended up oscillating around the tape. Another consideration was to slow the motor down around steeper corners to prevent skidding out: if the steering angle exceeds a certain threshold, the robot slows down into the corner before accelerating again.
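That corner slow-down rule fits in a few lines; the thresholds and duty-cycle values below are placeholders, not the tuned ones:

```cpp
// Reduce the PWM duty cycle when the commanded steering angle deviates
// far from center, so the robot brakes into corners and resumes cruise
// speed on the straights.
int motorDutyPercent(int servoAngle, int center = 90,
                     int cornerThreshold = 25,
                     int cruiseDuty = 70, int cornerDuty = 50) {
    int dev = servoAngle - center;
    if (dev < 0) dev = -dev;                     // absolute deviation
    return dev > cornerThreshold ? cornerDuty : cruiseDuty;
}
```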
After reading about everything we did and learned through this project, we sadly have to say that we didn't win the racing competition. To our dismay, our controlled testing track was our downfall. At the competition, many people were taking pictures and filming the race. We did not think about this when choosing an IR rangefinder to detect obstacles, so every camera autofocus aimed at the robot caused the rangefinder to trigger the turning mechanism. The easier, quicker sensor choice was obviously not the right one. After the competition, though, we placed our robot on the track and let it run continuously without problems. The entire ordeal was a great learning experience.
https://gitlab.torproject.org/legacy/trac/-/wikis/org/meetings/2019Stockholm/Notes/ScalabilityProjectFunding
Scalability Project, Creating Funding Proposals
**Goal:** Understand next steps in scalability work; outline how we can approach funders with this project.
Where are we now?
- Tor network capacity has increased over time
- But we need to make that experience more predictable, more uniform
- Tune our network and look for performance issues
- Even if Tor is good on average the worst cases are quite bad, they are bad enough to turn users away (‘long tail’)
- We want to improve the network and add more relays, but it isn’t at the place to handle that effectively
- Right now we don’t have a good handle on what’s going on the Tor network
- Baselines help us keep a consistent experience for the user
- Human knowledge
- Building a cycle: every time we make improvements after this improvement will happen faster, work better, be more effective
What is the benefit of a scalability project?
- Remove a major barrier for people to start and continue using Tor
- Give up because it’s slow: we want to retain users
- Problems could put them in danger
- Popularizing Tor / reputation
- Better prepared for the load that will happen later: handle more people, more third party integration
- Better handle spikes in usage
- Censorship happens, large spikes in users, the better the network can handle these spikes, the better we can respond to censorship
Who benefits from this?
- Third party integrators
- Users under censorship
- Users in general
Thinking through a phase 0-2 project
- Phase 0 is low hanging fruit
- Phase 1 baseline metrics & measurements + building list of performance improvements & evaluating
- baselines of browser level usability (ie, testing performance of webpages—how does it go, what is it like for a user?). we’re at the beginning of this process, we need to measure success and prove that what we’re doing is working.
- Phase 2 doing the science, deploying improvements, measure to see if we are right
- If we want the average user experience to get better…
- We need to increase our capacity
- If we want the drastically negative experiences… (long tail)
- We cut off the smaller relays
- We could do right now, it’s easy enough to measure
- Production tuning
- Low-hanging single client tasks: we’re going to do two key performance changes; we’re going to test and see which one made the biggest difference
- We do tests, get results, not sure if we believe them, need experts to look over the data to verifying / figuring out what went wrong, debugging analysis (this is an important part of this proposal)
- Developing a new data model that will allow us to more quickly run experiments b/c we can design queries that help us look at the data quickly
- **Staff:** 1 to 2 devs on the network side, 1 to 2 on the metrics/network-health side
- **Time:** ~6 months. Requires wall-clock time to measure changes: at least weeks per experiment. Gets faster if we have both the machines & engineers to simultaneously compare Shadow to the live network—getting to the point that we can trust Shadow
- The more funding we get for this, the faster we can do these experiments, the better we get at this process, the faster we can provide results
After Phase 2, what happens?
- Start to add the dedicated network health monitoring, actively monitor the scan results and analyzing them
- Doing the queries over the metrics data —> contacting the relays & improving the network in this way
- More in-depth dev work
- Load balancing
- SPWS improvements
- Research horizon
- Adding research programmers (would require a dedicated new developers)
What could third parties do to help this project?
- Brave could help us understand what their users performance looks like on Tor. They may have their own user studies that can help tell us more about who/why/etc uses the Tor function.
- Bitcoin people? As China works towards censoring bitcoin, etc…
https://www.glassdoor.com/Benefits/Symantec-Vacation-and-Paid-Time-Off-US-BNFT29_E1931_N1.htm
234 employees reported this benefit
Symantec grants paid time off.
Unlimited time off. However, employees don't feel they deserve to take many days off.
Symantec has "unlimited" time off, as long as your manager approves. Most managers seem to be flexible and will approve the time off. However since PTO isn't tracked, if you don't take any all year, you don't get paid for the time off you didn't take.
There should have been more vacation & PTO because it is the type of job that causes "burn out".
Sales has a NTO system and its up to you to use it.
In 2017, Symantec moved to accountable time off which meant you just needed to get it approved by your manager.
Competitive for what other companies offer.
This ALL depends on your manager. No guaranteed vacation. If your mgr approves it, you get it.
We use My Time Off which is manager approved. You build no vacation time and therefore the company has no overhead and if you leave they owe you no vacation time. On the positive side if you are a hard worker and have a good manager then you time off requests get approved without a problem.
Symantec uses the "take what you need" model of time off. This is great as I don't have to closely monitor how much time I have, but it ALSO makes people nervous to take time off...they don't want to be perceived as abusing the system. Overall, I believe this has led to people taking *less* time off...people often need to be pushed to take the time off they need.
Symantec does not provide accrued vacation time. Reason: if you need time off, you arrange it with your colleagues and take the week or two that you requested. Your colleagues will attempt to backfill for you in your absence. Thus no vacation-time process is required.
https://platinmods.com/threads/ott-navigator-iptv-v1-6-7-3-google-play-store-version-mod.159632/
Overview: View your provider IPTV on any device (phone, tablet, TV, TV-box)
● Premium Feature Unlock.
● Ads Removed / Disabled.
● Ads Related Activity & Code Removed / Disabled.
● Ads Related Layouts Visibility Gone.
● Analytics / Crashlytics Removed / Disabled.
● Receivers and Services Removed / Disabled.
● Play Services, Transport, Firebase Properties Removed.
● All Unnecessary Garbage Folder & File Removed.
● Optimized PNG Save To 30 Kb.
● Optimized JPG Save To 37 Kb.
● Re-Compressed Classes.dex & Library.
● Optimized Graphics / Zipalign.
● Removed Annotation Code.
● Removed Debug Information (Source, Line, Param, Prologue, Local).
- tvguide: option to prefer it to the standard channel browser in most places
- tvguide: do not close when launching playback and restore state when returning from playback to main screen
- backup: support saving and restoring backup using user-provided http storage via url (assume that the url applies POST data)
Play Store Link: OTT Navigator IPTV - Apps on Google Play
https://www.perforce.com/blog/task-streams-even-if-you-are-classic-perforce-shop
Task Streams – Even if you are a classic Perforce shop
There is something in it for you: More lightweight branches.
With the 2013.1 release, Perforce enhanced Streams further and added a new stream type called “Task Streams”.
Here are a few details about what Task Streams can do for Perforce users who have not yet adopted Streams. I am assuming you have no experience with Streams, and will focus just on what you need to start using Task Streams for lightweight branching.
First the big backend things. We need two new depots. The first one will serve as the container for our Task Streams. It is of type “stream” and I simply call it “tasks”. Here is the depot spec.
Depot:        tasks
Description:  Container for Task Streams.
Type:         stream
Map:          tasks/...
The other one will serve as the container for everything we don’t need on a daily basis. It is a depot of type “unload” in which we store our used Task Streams and I call it “attic”. Another depot spec.
Depot:        attic
Description:  Task Streams not in use anymore.
Type:         unload
Map:          attic/...
In P4V, the “attic” won’t show up at all. The “tasks” will, but there is no need to look into that.
We have a simple project called “project” to start with and two branches “main” and “release1”. The project has one file called “foo” and thousands of other files that are represented here for simplicity by one file called “bar”. “release1” is obviously branched from “main”. You get the picture. Here it is:
The revision graphs for both files look identical, which is no surprise.
Let’s do some work in classic style on foo in release1 and propagate the change back to main. In order to do so I have created two workspaces, “project_main” and “project_release1”. I’ll make a change to foo, submit it, and merge it back to main. Here are the revision graphs now; no surprises:
We can count integration records in the Perforce database here. It’s two for foo and one for all the bars in the project. In terms of storing file content, Perforce (of course) hasn’t even bothered creating the file bar more than one time on the server. The database, however, knows about each and every branched file and this is now optimized with Task Streams.
Let’s create a client “project_task” for our work. It does not need a view right now as we will be assigning this workspace to a Task Stream later on. Here is the client spec:
Client: project_task
Description: Client to be assigned to a Task Stream.
Task Streams for isolated small work items
Now it’s time to introduce our first Task Stream. It will be created in the tasks depot. Streams in a depot are not placed in any special directory structure - they go straight into the root of that depot. Dependencies between codelines are modeled using parent-child relationships, constructed by defining the parent stream of a given stream. This relationship can change, so using a directory structure to model these dynamic relationships would be pointless.
With Task Streams we don’t want to model complex codeline relationships at all; these streams simply have no parent. As they are stored in the depot root, and as we might have lots of them, naming becomes something to consider. Most importantly, the names have to be unique, which is why we can pick some external identifier or simply a UUID. That’s what I’m doing in this example. Most operating systems have some way to generate one for you; on my Mac I can simply call uuidgen in a terminal window.
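Since Task Stream names only need to be unique, any UUID source works. For scripted setups, here is a small Python sketch; the //tasks depot name is just this article's convention, and nothing Perforce-specific is assumed beyond the path layout:

```python
import uuid

def new_task_stream_path(depot="tasks"):
    """Build a unique stream path like //tasks/<UUID> for a new Task Stream."""
    return "//{}/{}".format(depot, str(uuid.uuid4()).upper())

path = new_task_stream_path()
print(path)  # e.g. //tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6
```

The resulting path can then be fed to `p4 stream -t task` exactly as shown below.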
Our first Task Stream will have the beautiful name B9DE0357-85F6-4FCC-956B-0EE39153E4C6. The Perforce path will therefore be //tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6. We can create the stream in either P4V or on the commandline. Please note “Type” and “Parent” here. The spec is this:
Stream: //tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6
Name: B9DE0357-85F6-4FCC-956B-0EE39153E4C6
Parent: none
Type: task
Description: Some Task Stream to be used for some work.
Paths:
	share ...
And the quickest way to generate one on the Mac without invoking the editor is probably this:
p4 stream -o -t task //tasks/`uuidgen` | p4 stream -i
Our Task Stream is still empty and with the relatively new p4 populate we can even fill it up without the need of a client workspace.
p4 populate //depot/project/main/... //tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6/...
In order to get some work done it’s time to assign (switch) our client workspace to this Task Stream. This can also be easily accomplished with P4V, or by invoking this on the commandline or even inside a script.
p4 client -sS //tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6 project_task
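Since the switch can also happen inside a script, here is a hedged Python sketch that merely assembles the same p4 command line; it only assumes p4 is on the PATH if you actually set dry_run=False on a machine with a configured server:

```python
import subprocess

def switch_workspace(stream, client, dry_run=True):
    """Assign (switch) a client workspace to a Task Stream, mirroring
    `p4 client -sS <stream> <client>`. With dry_run=True (the default)
    the command is only returned, never executed."""
    cmd = ["p4", "client", "-sS", stream, client]
    if not dry_run:
        # Requires a reachable Perforce server and p4 on the PATH.
        subprocess.run(cmd, check=True)
    return cmd

cmd = switch_workspace("//tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6",
                       "project_task")
print(" ".join(cmd))
```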
Our client workspace is still empty. So we need to sync it as usual. Once we have done that, we can checkout foo and get our work done and submit our change as we normally do.
This is a good time to review our revision graphs.
Well, foo got a little more cluttered but bar is still nice and clean. Let’s do even more and create a second one.
p4 stream -t task //tasks/E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD
p4 populate //depot/project/main/... //tasks/E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD/...
p4 client -sfS //tasks/E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD project_task
p4 -c project_task sync
p4 -c project_task edit foo
p4 -c project_task submit -d'another change to foo'
The revision graph is no surprise again. Foo is getting bigger and bar remains small.
After work in our Task Streams is done, we are going to think about what changes should make it back into our main codeline. After reviewing it carefully, we decide that the first wasn’t good enough but the second actually passed our quality tests. That means we need to merge changes from the Task Stream E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD back to main.
p4 -c project_main merge //tasks/E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD/... //depot/project/main/...
p4 -c project_main resolve -as
p4 -c project_main submit -d'merge back into main'
We all guessed this revision graph for foo.
As we are done with our work in the task streams and we have no intention to touch them again, it’s now safe to unload them. Let’s go in reverse order and unload Task Stream E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD first.
p4 unload -s //tasks/E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD
Stream //tasks/E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD unloaded.
The revision graph for foo remains unchanged.
Now we unload our very first Task Stream.
p4 unload -s //tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6
Stream //tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6 unloaded.
This does not change the picture either; we simply can no longer work on the files that were branched into the unloaded Task Streams.
What’s happening behind the curtain and why is this interesting?
When we add or edit a file in a branch in Perforce, database records are created. One important table to mention here is the db.rev table: each unique file revision under Perforce control has an entry there, regardless of whether the change was made in a classic branch or a Task Stream. That is good, as it lets us track our changes to files in Perforce. Changes cause entries here, and they stay persistent as long as the files are not obliterated, which is a completely different subject. Then there are the file revisions that exist only because we branch/merge/copy/integrate one file revision to create or change another, in a different location, in one of our depots.
Two tables matter here: db.integed and db.integtx. Integrations whose target location is a Task Stream result in entries in the db.integtx table, which is somewhat special. It is special because entries are removed from this table if you p4 unload a Task Stream, and because the revision graph does not consider entries there unless there is also a db.rev entry for that Task Stream - which is why bar has not shown up in the graph: we only branched bar and never edited it.

As a Task Stream really only matters to the individual(s) working on the task, branch and integration records for files in there are practically meaningless for everybody else. So we shouldn't care too much about this situation, and we shouldn't reserve shared database space for it - at least not for long. A change in a Task Stream should be propagated to some other regular branch in Perforce; therefore entries in the db.integed table are created even if the source of that integrate is a Task Stream. That way they persist just like changes that result in entries in the db.rev table, which is true for edits in regular branches and in Task Streams alike.
Task Streams can now be created, unloaded and reloaded if need be. In terms of database operations and database storage this means records are created (create, reload) in and removed (unload) from the db.integtx table. Without Task Streams they would just get created and (except for obliterates) never get removed from the db.integed table. Administrators and users can now much more easily protect the db.integed table from uncontrolled growth by using Task Streams and unloading them if there is no further need. Unloading a Task Stream at the same time does not require the same super powers as p4 obliterate. Give Task Streams a try should you want to have lightweight branching with Perforce.
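Unloading (and, if need be, reloading) many finished Task Streams lends itself to scripting. Here is a hedged Python sketch that only builds the p4 command lines shown above; collecting the list of finished streams is left to your own bookkeeping, and nothing is executed here:

```python
def stream_maintenance_commands(finished, reload_=()):
    """Build `p4 unload -s` / `p4 reload -s` argv lists for Task Streams.
    `finished` are streams to unload, `reload_` streams to bring back."""
    cmds = [["p4", "unload", "-s", s] for s in finished]
    cmds += [["p4", "reload", "-s", s] for s in reload_]
    return cmds

cmds = stream_maintenance_commands(
    ["//tasks/E62EB0FE-C105-4E9E-AB05-4D22BD26BEDD"],
    reload_=["//tasks/B9DE0357-85F6-4FCC-956B-0EE39153E4C6"],
)
for c in cmds:
    print(" ".join(c))
```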
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524290.60/warc/CC-MAIN-20190715235156-20190716021156-00122.warc.gz
|
CC-MAIN-2019-30
| 9,487
| 45
|
https://vimeo.com/156945522
|
code
|
An Appointment with yourself, understanding not understanding you and the other.
A small journey into the actual situation of having a document to travel to be free,
or being a refugee.
I create an integral dance affair made of words and movement; an autonomous whole. It reveals a conversing of cultures in which language plays the leading and the supporting role. Do you understand?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512501.27/warc/CC-MAIN-20181020013721-20181020035221-00245.warc.gz
|
CC-MAIN-2018-43
| 386
| 4
|
http://srimanjavagroup.com/1/forum/16/spring-core-and-aspect-oriented-programming-aop.htm
|
code
|
Forum: Spring Core and Aspect Oriented Programming (AOP).
Spring Core, Aspect oriented programming (AOP) related questions (configuration and annotations).
Joined: Nov 22, 2017
No Posts Available
Joined: Apr 20, 2017
Do you want to Lock this Forum?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593004.92/warc/CC-MAIN-20180722022235-20180722042235-00331.warc.gz
|
CC-MAIN-2018-30
| 248
| 6
|
https://www.gridphp.com/support/questions/primary-key-search-in-grid-not-working-when-im-using-inner-join-in-query/
|
code
|
I used a select_command like:
$a->select_command = "SELECT $t1.Contract, $t1.Name, $t1.Text, $t2.ID, $t2.Model FROM $t1 INNER JOIN $t2 WHERE $t1.Contrat = $t2.Contrat ";
When I search by Name, Text, ID or Model I don't have any problems, but I can't search by Contract. When I try it, the grid returns: "Couldn't execute query. Column 'Contrato' in where clause is ambiguous" […]
Anyone knows why it's not working?
Thank you for helping.
You need to set dbname property to resolve ambiguity.
You can specify exact table.field then it will be used in where clause.
For the Contrat column:
$col["name"] = "Contrat";
$col["dbname"] = "$t1.Contrat";
We are sorry that this post was not useful for you!
Let us improve this post!
Tell us how we can improve this post?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145818.81/warc/CC-MAIN-20200223154628-20200223184628-00000.warc.gz
|
CC-MAIN-2020-10
| 757
| 13
|
https://www.justingermino.com/new-contest-from-leapfish/
|
code
|
This post contains affiliate links.
This morning I signed up for the LeapFish 100K Cash Dash contest, where LeapFish is giving away amazing prizes for people reaching certain point milestones, like an iPod Touch for the first three people to reach 500 points, or a Samsung 42″ OLED TV for the first three people who reach 6,000 points.
I myself am hoping to earn enough points to win the 15″ Dell Laptop, or the Apple iPad, and figure I will have to create quite a few posts and some YouTube videos talking about LeapFish quickly if I want to have a chance to win those items.
In addition to reaching prizes for certain point milestones (if you can be among the first three to reach that milestone) LeapFish is raffling away cash prizes at set intervals of homepage sets. Your number of points is your entries into the raffle, and someone could win $100,000 dollars when LeapFish reaches 1,000,000 homepages set to Leapfish.com.
Here is the full list of prizes:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00765.warc.gz
|
CC-MAIN-2023-50
| 965
| 5
|
https://ce3c.ciencias.ulisboa.pt/member/catarinadrumondemelo
|
code
|
Azorean Biodiversity Group, University of the Azores
Rua Capitão João D'Avila 9700-042 Angra do Heroísmo
Terceira, Azores, Portugal
Email email@example.com; firstname.lastname@example.org
I received a PhD in Ecology of Arbuscular Mycorrhizal Fungi (AMF) from the University of Coimbra and I am currently working in the IBBC group of cE3c. In the last 10 years I have developed pioneering work on the diversity of AMF communities in different ecosystems of the Azores (forest, grassland, agriculture). I am currently collaborating with several overlapping groups of colleagues working especially on AMF ecology, evolution and conservation - Centre for Functional Ecology (CFE), University of Coimbra (Helena Freitas and Susana Rodríguez-Echeverría), Royal Botanic Garden Edinburgh (Christopher Walker), Institute of Botany ASCR - Department of Mycorrhizal Symbioses (Claudia Kruger) and Canarian Institute of Agricultural Research (ICIA) (Maria del Carmeo Jaizme-Vega). These collaborations are part of the research project (M3.1a/F/059/2016) funded by FRCT (Fundo Regional para a Ciência e a Tecnologia).
My current research is driven by the following objectives: 1) overcome one of the most important constraints in decision-making on biodiversity conservation, related to the large number of undescribed AMF species, i.e., the so-called “Linnean shortfall”; 2) establish patterns of AMF diversity, abundance, and distribution at different spatial scales; 3) given the ecological role of AMF in many ecosystems, try to understand the ecosystem services provided by these symbionts, and link them to human well-being.
Melo, C.D., Sara, L., Krüger, C., Walker, C., Mendonça, D., Fonseca, M.A.C.H., Jaizme-Vega, M., Câmara Machado, A. (2018) Communities of arbuscular mycorrhizal fungi under Picconia azorica in native forests of Azores.Symbiosis, 74(1), 43-54. DOI:10.1007/s13199-017-0487-2 (IF2018 2,009; Q4 Microbiology)
Melo, C.D., Luna, S.D., Krüger, C., Walker, C., Mendonça, D., Fonseca, H.M.A.C., Jaizme-Veja, M.C. & Câmara Machado, A. (2017) Arbuscular mycorrhizal fungal community composition associated with Juniperus brevifolia in native Azorean forest.Acta Oecologica-International Journal of Ecology, 79, 48-51. DOI:10.1016/j.actao.2016.12.006 (IF2017 1,615; Q3 Ecology)
Borges, P.A.V., Gaspar, C., Crespo, L., Rigal, F., Cardoso, P., Pereira, F., Rego, C., Amorim, I.R., Melo, C., Aguiar, C., André, G., Mendonça, E., Ribeiro, S.P., Hortal, J., Santos, A.M., Barcelos, L., Enghoff, H., Mahnert, V., Pita, M.T., Ribes, J., Baz, A., Sousa, A.B., Vieira, V., Wunderlich, J., Parmakelis, A., Whittaker, R.A., Quartau, J.A., Serrano, A.R.M. & Triantis, K.A.(2016). New records and detailed distribution and abundance of selected arthropod species collected between 1999 and 2011 in Azorean native forests. Biodiversity Data Journal. 4, e10948. DOI:10.3897/BDJ.4.e10948.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670512.94/warc/CC-MAIN-20191120060344-20191120084344-00422.warc.gz
|
CC-MAIN-2019-47
| 2,901
| 10
|
http://www.ps4news.com/forums/ps3-hacks-jailbreak/i-need-few-testers-xoeos-hybrid-3-15-a-115178.html
|
code
|
Ok so I have finally got xeoe's hybrid payload to not black screen for people, but I need you guys on 3.15 to test it out.
I will need to know the following:
1: Does your system show up as 3.50
2: Can you access PSN
3: Are backups working - if error post game and error code
I don't need to know anything more than this, so don't post requests because I will ignore every one of them and have the mods remove your posts. I only want to know what is outlined above and nothing more. So PLEASE keep this thread to only useful info.
Here are all the hex's for all boards: http://www.ps4news.com/forums/attach...chmentid=25604
Here is the .bin for your rockbox/android users: http://www.ps4news.com/forums/attach...chmentid=25606
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276416.16/warc/CC-MAIN-20160524002116-00202-ip-10-185-217-139.ec2.internal.warc.gz
|
CC-MAIN-2016-22
| 724
| 8
|
https://mmarinescu.hashnode.dev/tag/nextjs?source=tags_bottom_blogs
|
code
|
Read more stories on Hashnode
Articles with this tag
Inversion of Control with InversifyJs in NextJs · Find the code on github, here.
For the past year and a half I've been working with OOP in JS with...
The only setup you will need · Let me save you some precious time, because I've spent many hours and brain cells researching this subject.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00271.warc.gz
|
CC-MAIN-2023-50
| 344
| 6
|
https://forums.shoryuken.com/t/sprite-sties-not-always-fighting-related/6572
|
code
|
I VERY rarely come into the IM section of the forums, so honestly, I don’t know if anyone’s posted anything like this before. But, in any case, heres a small list of sprite sites that have customs, edits, and rips from all different kinds of games.
http://tsgk.captainn.net/ (the Shyguy Kingdom, HUGE site)
As soon as I remember more, I’ll add some links. (damn reformat sucks)
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178360745.35/warc/CC-MAIN-20210228084740-20210228114740-00433.warc.gz
|
CC-MAIN-2021-10
| 383
| 3
|
https://www.letsknowit.com/varun2426
|
code
|
A technical person with 10+ years of experience in Cyber Security and marketing. Now running my own business for more than 3 years.
Co-Founder at Letsknowit
HRMS Software & Payroll Management Software
SAP Application support
I am a Java developer with 2 years of experience in Java
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816864.66/warc/CC-MAIN-20240414033458-20240414063458-00095.warc.gz
|
CC-MAIN-2024-18
| 390
| 6
|
http://www.clickindia.com/detail.php?id=126677621
|
code
|
Required Distributor Dealer for leading FMCG Company in New Delhi - Delhi
Paradise Beverages & Foods Private Limited requires a Distributor/Dealer for our leading brands in the name of Packed Pulses.
Interested parties can mail the requirement to us.
Priyanka (Contact Advertiser)
Ad Type: Offer
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701910820/warc/CC-MAIN-20130516105830-00029-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 333
| 6
|
https://github.com/PowerShellMafia
|
code
|
PowerSploit - A PowerShell Post-Exploitation Framework
CimSweep is a suite of CIM/WMI-based tools that enable the ability to perform incident response and hunting operations remotely across all versions of Windows.
PowerSCCM - PowerShell module to interact with SCCM deployments
A module designed to simplify the creation, customization, and deployment of bootable Windows Preinstallation Environment (WinPE) images.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510749.37/warc/CC-MAIN-20181016093012-20181016114512-00470.warc.gz
|
CC-MAIN-2018-43
| 606
| 6
|
https://demo.lifeboat.com/blog/2021/10/secretive-giant-tsmcs-100-billion-plan-to-fix-the-chip-shortage
|
code
|
Taiwan Semiconductor Manufacturing Company makes 24% of all the world’s chips, and 92% of the most advanced ones found in today’s iPhones, fighter jets and supercomputers. Now TSMC is building America’s first 5-nanometer fabrication plant, hoping to reverse a decades-long trend of the U.S. losing chip manufacturing to Asia. CNBC got an exclusive tour of the $12 billion fab that will start production in 2024.
Secretive Giant TSMC’s $100 Billion Plan To Fix The Chip Shortage.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103464.86/warc/CC-MAIN-20231211013452-20231211043452-00044.warc.gz
|
CC-MAIN-2023-50
| 1,187
| 12
|
https://nnc3.com/mags/LM10/Magazine/Archive/2006/72/032-037_dmcrypt/article.html
|
code
|
By Michael Nerb
Encrypting individual filesystems is no big deal; in fact, some distributions allow you to encrypt directories as part of the installation routine. But encrypting the home directory on your laptop is a job half done. Dishonest finders can still draw conclusions from log and configuration files. If you're serious about providing security through encryption, you need to protect the whole hard disk against spying - something that no distribution can do out of the box.
Things start to become more complex if you need to protect the root filesystem. Neither Suse nor Debian Linux give users a tool to help encrypt the root filesystem during the install (or later). That means you'll need to roll up your shirt sleeves for some hands-on configuration.
In this workshop, we will start by installing a standard Linux system and then progress to encrypting the existing filesystems one at a time. We will finish off by deleting the unprotected partition and using the space for other tasks, such as swapping out the /home directory to an encrypted partition of its own.
Our goal is to encrypt the entire hard disk (with the exception of the partition table in the Master Boot Record). Because it isn't possible to encrypt the boot partition, we will move /boot to an external medium - a USB stick, in this case. To boot from the stick, we will need to modify the BIOS and GRUB bootloader settings. The USB stick then creates an additional layer of security by serving as a kind of "key" that the thief will need to possess in order to gain access to the laptop. If this approach seems impractical for your purposes, you can keep the boot partition on your hard disk. However, /boot must be on an unencrypted partition of its own.
In this article, we'll use DM-Crypt for our filesystem encryption. DM-Crypt has been the tool of choice for encrypting filesystems since kernel 2.6.4. It uses the device mapper infrastructure, and it encrypts block devices transparently, relying on the kernel's Crypto API to do so. Linux Unified Key Setup (LUKS) adds some enhancements, which were discussed in a previous issue of Linux Magazine. The LUKS design is implemented as the cryptsetup-luks configuration tool.
As an alternative to a fresh installation, you can modify an existing system, assuming you have enough free space on the disk to create a new partition for the data you are encrypting.
Don't worry! After you finish encrypting, the disk patches and kernel updates should not cause any problems, and your backup and recovery tools should work like they always did.
The equipment you need for this lab is a suitable laptop, a USB stick with a capacity of about 64 Mbytes, and your favorite Linux distribution (kernel 2.6.11 or newer) - we tested Suse Linux 9.3 / 10.0 and Debian Sarge. Check the laptop BIOS to see if it supports booting from USB (Figure 1). Use a Live CD to ensure that Linux supports your laptop; you might like to take this opportunity to erase the laptop disk (see the "Secure data erasure" box).
For the initial install, divide the hard disk into four partitions, as shown in column 1 of Table 1. This configuration puts an unencrypted version of the Linux system on /dev/hda4. (If you prefer not to boot from a USB stick, use the alternative partitioning suggestion shown in column 2 of the table.)
Set up the size and number of partitions to match your laptop's hard disk and intended use - dual or multiple boot configurations are possible. Do not create a user at this phase of the installation; you can do so later on the encrypted system.
After completing the installation, you should have a working Linux system on /dev/hda4. If you have a kernel version older than 2.6.11 (with Debian Sarge, for example), you will need to update the kernel now. For the following steps, you also need the cryptsetup-luks configuration tool.
The "Updates for Debian Sarge" box describes how to update a Sarge system; for Suse Linux, you just need to add cryptsetup-luks. The LUKS homepage at has a prebuilt, statically linked version that you can copy to /sbin/cryptsetup-luks.
|Updates for Debian Sarge|
To prepare Debian Sarge for encrypting the root partition, complete the standard installation, and then do this: add the following Apt source (and comment out all other sources in /etc/apt/sources.list):
deb http://http.us.debian.org/debian unstable contrib main
Then run apt-get update and apt-get install -f to update the package database, and do the following:
apt-get install yaird linux-image-2.6.17-2-686
apt-get install cryptsetup
This installs a current kernel version, cryptsetup, and yaird, a tool for creating initial RAM disks (like mkinitrd). (Note that the kernel image version number may have changed since this issue went to press - version 2.6.17-1 was available on the servers when we started initial testing.) Now boot your computer with the new kernel.
|Secure Data Erasure|
Whenever files are deleted by running rm against the filenames, Linux simply removes the inodes for the files from the directory. The data stays on the disk and can be reconstructed with a little effort. Reformatting with mkfs will not overwrite the partition.
To permanently remove the data, you need to actively modify the magnetization of the sectors (in an appropriate way). The simplest way of doing this is with a command such as dd if=/dev/zero of=/dev/hda. But just as a regular fall of snow will not cover the outlines of the landscape, some residual magnetization will remain after overwriting a file with null bytes. An attacker with the right kind of equipment might be able to reconstruct the original data.
A more time consuming approach, but one that is ultimately far more secure, uses /dev/urandom instead of /dev/zero. Depending on how paranoid you are, you can do this between three and 35 times to be "fairly certain" that you have removed the data. The shred and wipe tools will help you do so. But you should be aware that these tools assume a few basic conditions that may not apply to RAID systems, journaling filesystems (such as ReiserFS or Ext3), or certain hard disk drivers and firmware components that buffer data and perform multiple write operations at a single pass.
To be absolutely safe, you would need to destroy the hard disk and dispose of it somewhere where it will never be found. But you can save yourself all that trouble by implementing the mechanisms described in this article, and then just forgetting the passwords.
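The multi-pass overwrite idea from the box can be sketched in a few lines of Python for a regular file. This is illustration only: the caveats above (journaling filesystems, drive firmware buffering) mean it is not sufficient for real disks, where shred/wipe or overwriting the block device itself is the right tool.

```python
import os
import tempfile

def overwrite_file(path, passes=3):
    """Overwrite a regular file in place with random bytes, `passes` times.
    Sketch of the multi-pass idea only; NOT a substitute for shred/wipe
    on real media (journaling and firmware buffering defeat it there)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())

# Demo on a throwaway temp file (never on a device node).
fd, demo = tempfile.mkstemp()
os.close(fd)
with open(demo, "wb") as f:
    f.write(b"passphrase: hunter2")
overwrite_file(demo)
with open(demo, "rb") as f:
    data = f.read()
os.remove(demo)
print(len(data), b"hunter2" in data)
```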
Linux typically detects USB sticks as SCSI devices and addresses them as the SCSI hard disk /dev/sda (unless you have some other SCSI devices). Use fdisk to create a partition table with at least one partition and format the partition (mkfs.ext2 /dev/sda1). Then do:
mount /dev/sda1 /mnt
cp -ax /boot/* /mnt
This copies the /boot directory to the USB stick. If it does not already exist, give the ln -s . boot command to create a symbolic link in the /mnt directory, to avoid a hitch with grub-install later on.
Now modify the GRUB bootloader configuration on the USB stick: /mnt/grub/device.map states how GRUB maps BIOS and Linux device names; you need an entry of (hd0) /dev/sda.
Change the entries for the BIOS device names from (hd0,3) to (hd0,0) (this corresponds to /dev/sda1) in the /mnt/grub/menu.lst configuration file:
title Suse Linux 9.3 (USB-Boot)
kernel (hd0,0)/vmlinuz root=/dev/hda4
initrd (hd0,0)/initrd
Finally, run grub-install --root-directory=/mnt /dev/sda to install GRUB on the Master Boot Record of your memory stick. If everything works, you can boot the laptop from the USB stick - to do so, set the BIOS boot order to boot the computer from external boot media first.
For security reasons, either change to single user mode, or close any unnecessary applications, stop all unnecessary services, and shutdown any user sessions.
We will be using the Linux system we just installed to encrypt the partitions on the laptop step by step; the partitions in question are /dev/hda1 through /dev/hda3. Partition /dev/hda4, which holds the root filesystem right now, will not be needed later. You can recycle it and create a partition for the /home directory if you like.
The basic steps (see the "Device Mapper, DM-Crypt, and Cryptsetup" box) are always the same: use cryptsetup-luks to create a virtual block device with integrated AES encryption, and map it to an appropriate block device (on the laptop hard disk). While doing so, you need to enter a passphrase, which the program will use to create a symmetric key. The key is then used for data encryption. Finally, format the virtual block device with a filesystem, and mount the filesystem.
|Device-Mapper, DM-Crypt, and Cryptsetup|
Just like loop devices, the device mapper infrastructure unhitches physical block devices from virtual block devices (Figure 2). This virtualization creates an abstraction layer that is leveraged by various applications, DM-Crypt being just one of them. DM-Crypt transparently encrypts data passed in by the virtual block device and stores the data on the physical block device - and vice-versa. The physical block device appears to contain garbage - you need to supply the correct passphrase to mount a filesystem via the virtual block device to be able to use the data in a meaningful way.
The cryptsetup userspace tool is required to configure DM-Crypt; the virtual block devices are set up in the /dev/mapper/ directory.
cryptsetup-luks is an extension of cryptsetup and offers enhancements, which we discussed in greater detail in a previous article.
The swap partition and the /tmp directory are useful candidates for our first experiments with cryptsetup-luks: these filesystems contain temporary data and are no big loss if something goes awry.
Listing 1 shows the command sequences for manually enabling and disabling encryption for swap and /tmp. For swap, you need to set up a virtual block device, /dev/mapper/swap, using cryptsetup-luks, then initialize the device by running mkswap and enable the device by running swapon. In the same way, create another virtual block device, /dev/mapper/tmp, format it with the Ext2 by running mkfs.ext2, and then mount the device as /tmp. For Debian, replace the cryptsetup-luks commands with cryptsetup or create a suitable link.
In both cases, cryptsetup-luks uses random passphrases from /dev/urandom; the passphrases reside in the laptop's main memory and disappear when you power down the laptop. As nobody knows the passphrases, the data stored on swap and /tmp is irretrievably lost, but this is intended. Swap contains memory dumps and is reinitialized whenever you boot your computer. There are no benefits from keeping swap readable, and there are a number of security risks. Every distribution has its own approach to handling temporary files. As a rule, programs should not rely on data in /tmp surviving a reboot.
|Listing 1: Creating Swap and /tmp|
# swapoff -a
# cryptsetup-luks -s 256 -d /dev/urandom create swap /dev/hda1
# ls -l /dev/mapper/
total 124
crw------- 1 root root 10, 63 Apr 3 2006 control
brw-r----- 1 root root 253, 0 Apr 2 23:53 swap
# mkswap /dev/mapper/swap
Setting up swapspace version 1, size = 1019895 kB
# swapon /dev/mapper/swap
# cat /proc/swaps
Filename Type Size Used Priority
/dev/mapper/swap partition 995988 0 -3
# swapoff /dev/mapper/swap
# cryptsetup-luks remove swap
# cat /proc/swaps
#
# cryptsetup-luks -s 256 -d /dev/urandom create tmp /dev/hda2
# mkfs.ext2 /dev/mapper/tmp
mke2fs 1.36 (05-Feb-2005)
[...]
# mount /dev/mapper/tmp /tmp
# ls -l /tmp/
total 17
drwx------ 2 root root 12288 Apr 2 23:55 lost+found
# umount /tmp
# cryptsetup-luks remove tmp
# ls -l /dev/mapper/
total 124
crw------- 1 root root 10, 63 Apr 3 2006 control
It makes sense to recreate these filesystems every time you boot. Debian makes this simple: just set the CRYPTDISKS_ENABLE=Yes parameter in /etc/defaults/cryptdisks (if it is not already set), and add the following to /etc/crypttab:
#<target dev>  <source dev>  <key>          <options>
swap           /dev/hda1     /dev/urandom   swap
tmp            /dev/hda2     /dev/urandom   tmp
You also need to modify /etc/fstab; remove the existing entry for swap, or modify the entry:
/dev/mapper/swap  none  swap  sw,pri=1  0 0
/dev/mapper/tmp   /tmp  ext2  defaults  0 0
Suse Linux also has an /etc/cryptotab file - but it uses loop devices. For Suse, it makes sense to use an init.d script to enable /tmp and swap. A shell script that does this for you (cryptfs), based on the luksopen script from the DM-Crypt wiki, is available from the Linux Magazine website (see the resource links at the end of this article). After downloading the script to /etc/init.d/, create symbolic links in /etc/rcX.d to call the script in the required runlevels. Finally, delete the line for the previous, unencrypted swap partition from /etc/fstab.
Let's reboot, just to make sure that Linux creates and enables the filesystems. If everything works out, we can move on to our major task, encrypting the root filesystem.
In contrast to /tmp and swap, the root filesystem is permanent: that is, it is not recreated whenever you reboot. Root is created once and mounted at boot time. We need some extended LUKS functionality (from cryptsetup-luks) at this point, and the procedure is slightly different:
The following parameters create a LUKS header on /dev/hda3; LUKS uses the AES encryption algorithm with a key length of 256 bits and sets a passphrase:
cryptsetup-luks -c aes-cbc-essiv:sha256 -y -s 256 luksFormat /dev/hda3
Now create a virtual block device, /dev/mapper/dm-root, which will map to the /dev/hda3 partition. cryptsetup will prompt you for the passphrase you just specified. Then go on to format the virtual block device (Ext3 format in our example) and mount the device:
cryptsetup-luks luksOpen /dev/hda3 dm-root
mkfs.ext3 /dev/mapper/dm-root
mount /dev/mapper/dm-root /mnt
The newly encrypted root filesystem is now mounted below /mnt, and it is still empty. You will need to insert your bootable memory stick for the following steps. Copy the complete installation from the /dev/hda4 partition to /mnt - this process encrypts and stores the data on /dev/hda3. Do not copy the /boot, /lost+found, /proc, /sys, /tmp, and /mnt directories. The copy command looks like this:
cd /; cp -ax bin dev etc home lib media opt root sbin usr var /mnt/
The copy can take a while, as two to three Gbytes need to be run through the encryption layer. This leaves you with an image of the root filesystem from /dev/hda4 on /dev/hda3. You can manually umount (umount /mnt; cryptsetup-luks luksClose dm-root) and remount (cryptsetup-luks luksOpen /dev/hda3 dm-root; mount /dev/mapper/dm-root /mnt) now.
Now use chroot to work with the encrypted system. Start by setting up the missing mountpoints, and mount the memory stick as /boot.
chroot /mnt
mkdir -p /boot /proc /sys /tmp /mnt
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount /dev/sda1 /boot
If you attempted to boot from the memory stick now, the boot would fail whether the root filesystem were set to /dev/hda3 or to /dev/mapper/dm-root, as the init program (which is part of initrd) could not handle the root filesystem in either case: partition /dev/hda3 would seem to contain garbage, and the virtual device, /dev/mapper/dm-root, does not exist at this point.
To let Linux boot from the encrypted root filesystem, we need to modify initrd as follows:
The cryptsetup-luks program, and the required kernel modules, must be referenced in initrd.
init has to load the kernel modules, and mount the virtual block device, /dev/mapper/dm-root, as the root filesystem.
Depending on the integration status of cryptsetup for your distribution, there are different approaches to doing this.
Debian Sarge has some fairly useful support here. Add the aes-i586 and sha256 modules to /etc/mkinitd/modules (each in a separate line); add the following line to the existing /etc/crypttab file:
dm-root /dev/hda3 none luks,cipher=aes-cbc-essiv:sha256
In a similar way, change the root filesystem in /etc/fstab to point to /dev/mapper/dm-root:
/dev/mapper/dm-root / ext3 defaults 0 1
Then run yaird -o /boot/initrd to create a working initrd on the memory stick. yaird (yet another initrd) replaces the standard mkinitrd tool, which can't handle encrypted root filesystems in the Debian version.
For Suse Linux, you'll need to add the required kernel modules dm-mod, dm-crypt, aes-i586, sha256, and ext3, using the INITRD_MODULES parameter to the /etc/sysconfig/kernel file. (The module names must be separated by blanks.)
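Assuming no modules were listed before, the resulting line in /etc/sysconfig/kernel would look something like this (merge it with whatever your system already lists):

```shell
INITRD_MODULES="dm-mod dm-crypt aes-i586 sha256 ext3"
```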
More changes are required to the /sbin/mkinitrd program: you might like to create a backup copy before you continue. In the mkinitrd_kernel function, look for the lines that copy /sbin/insmod to the ramdisk; depending on your Suse version, they may look slightly different. For Suse Linux 10.1 the lines look like:
if ! cp_bin $initrd_insmod $tmp_mnt/sbin/insmod 2>/dev/null ; then
    error 5 "no static insmod"
fi
Add the following two lines immediately below this:
cp_bin /sbin/cryptsetup-luks $tmp_mnt/sbin/ 2>/dev/null \
    || error 5 "no static cryptsetup-luks"
In the udev_discover_root function, add the following as the first command:
echo "Setting up LUKS device $rootdev. Provide pass phrase now."
/sbin/cryptsetup-luks luksOpen /dev/hda3 dm-root
Then you just need to change the entry for the root filesystem to /dev/mapper/dm-root (for the ext3 filesystem) in /etc/fstab. Finally, give the /sbin/mkinitrd -o /boot/initrd command to create a new initial RAM disk on the memory stick.
Before rebooting, modify the /boot/grub/menu.lst file on your memory stick. Change the root kernel parameter in the menu entry that launches the Linux system to point to the virtual block device, /dev/mapper/dm-root. You also need to modify the initrd entry (/boot/initrd). A typical entry will look like this:
title Suse Linux 10.0 (USB-Boot, Encrypted Root)
    kernel (hd0,0)/vmlinuz root=/dev/mapper/dm-root
    initrd (hd0,0)/initrd
Reboot the laptop. Make sure you have set USB as the default boot device in the BIOS boot order. cryptsetup-luks will now prompt you for the passphrase for the root filesystem, and assuming that you provide the correct password, boot to the login screen. Calling mount removes any trace of doubt (Figure 3). If this does not work, try booting without the memory stick: you still have the unencrypted Linux system on the hard disk, and you can start troubleshooting from there.
If all of this works out, you no longer need the unencrypted root filesystem on /dev/hda4. Delete the data on the partition and use cryptsetup-luks to set up another encrypted filesystem called /dev/mapper/dm-home. Format the partition, and mount it as /home. Then create any users you need; their home directories are automatically encrypted on /dev/hda4.
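The commands mirror the LUKS setup used for the root filesystem earlier; a sketch, assuming the same device layout as in this article (/dev/hda4 becoming /home) - double-check the device name, as luksFormat destroys the partition's contents:

```shell
cryptsetup-luks -c aes-cbc-essiv:sha256 -y -s 256 luksFormat /dev/hda4
cryptsetup-luks luksOpen /dev/hda4 dm-home
mkfs.ext3 /dev/mapper/dm-home
mount /dev/mapper/dm-home /home
```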
One downside to this is that encrypted swap and suspend to disk are mutually exclusive - make sure to disable the latter for this reason (if it exists, remove the kernel parameter resume=/dev/hda1 from /grub/menu.lst). Suspend2 is an alternative that supports suspend to disk with an encrypted swap partition; however, this means patching and compiling the vanilla kernel.
You will not need the memory stick while the laptop is running; however, it is essential to plug in the stick for kernel updates, as mkinitrd or yaird will want to install the new initrd on the stick. To be on the safe side, back up the memory stick before you update the kernel, or add an entry for the previous, working system to your GRUB configuration. Kernel 2.6.13 saw a few modifications to udev, and they may prevent the initrd you created from performing as desired. yaird does not share this problem. To be really safe, create a live CD with LUKS support. You can use the CD to manually mount and back up encrypted partitions.
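One simple way to back up the stick is a raw image copy; a sketch, assuming the stick still appears as /dev/sda as elsewhere in this article (the image path is arbitrary):

```shell
# create a full image of the boot stick (restore by swapping if= and of=)
dd if=/dev/sda of=/root/bootstick-backup.img bs=1M
```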
Suse users need to be careful if YaST updates the mkinitrd package. Back up the LUKS changes to the /sbin/mkinitrd script and compare them with the new version after updating mkinitrd.
During this workshop, we have encrypted the whole laptop hard disk except for the Master Boot Record. You need the memory stick to boot. This gives road warriors a high degree of passive security. But you should still be aware of common protective and security measures (see the "Security 101" box).
Encrypting your laptop hard disk is just one layer in an all-encompassing security policy - and it is no replacement for a security policy, as it only protects the data while the computer is switched off. If you lose your laptop after entering the correct pass phrases and with a user session running on the Linux system, an attacker would have the same access as to a completely unprotected machine. This warning also applies to threats from the Internet, assuming the laptop has an Internet connection. Malware has unrestricted access to your data once it gains access to the system.
In other words, this workshop cannot give you absolute security; but following these rules will keep your laptop as secure as possible:
DM-Crypt: http://www.saout.de/misc/dm-crypt
Device Mapper Resource Page: http://sources.redhat.com/dm/
"Secret Messages: Hard disk encryption with DDM-Crypt, LUKS, and cryptsetup," by Clemens Fruhwirth and Markus Schuster, Linux Magazine 12/05, pg. 65.
Linux Unified Key Setup (LUKS): http://luks.endorphin.org/dm-crypt
Peter Gutmann, "Secure Deletion of Data from Magnetic and Solid-State Memory": http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html
Suspend 2: http://www.suspend2.net
Wipe - Secure File Deletion: http://wipe.sourceforge.net
Luksopen script on the DM-Crypt Wiki: http://www.saout.de/tikiwiki/tiki-index.php?page=luksopen
Shell script cryptfs: http://www.linux-magazine.com/Magazine/Downloads/72/DM-Crypt
Clemens Fruhwirth: "New Methods in Hard Disk Encryption," http://clemens.endorphin.org/nmihde/nmihde-A4-ds.pdf
Source: https://tamriel-rebuilt.org/content/tutorial-morrowind-scripting-dummies
Morrowind Scripting For Dummies
This file is hosted at Morrowind Modding History, and is a complete guide to how to write and edit scripts in Morrowind.
MSFD, 9th edition, still contains a few errors (some inherited from the original Construction Set manual), but is otherwise generally correct and is the most comprehensive guide available. When looking up a function in MSFD do read beyond the description and through its notes and comments, which contain valuable information.
There is no undocumented feature triggered by specifying the player after the function ("->Activate, player"): the addition of ", player" is simply ignored, and it does the same thing as "Activate" alone. In the given example, "container"->Activate will work, but only if the container has been manually opened in the session before, and loaded within 72 hours.
AiActivate can not make a NPC drink a potion.
It will only pick up objects in the cell; it does not check for objects in inventory. See also the notes below about what [reset] does.
note: Morrowind Code Patch feature "Scriptable potion use" lets you do this with the Equip function instead
GetTarget does not "check if the target is currently in the crosshair".
Its function for the player is exactly the same as for NPCs: checking if the target ID is engaged in combat by the calling actor.
Modifies or defines the reputation (not reaction) modifier for members of the specified faction towards the PC. This affects advancement and Reputation checks for NPCs of the same faction, but doesn't directly affect their Disposition (only Rank affects the NPC's disposition if the NPC's faction is favorably disposed towards the player's faction). Reputation (player's and NPC's) is also involved in Persuasion success formulas.
Position , PositionCell
Can NOT take variables, local or not.
The note "creatures killed with curse spell effects on them cause all other creatures of that type to have the same curse on them": this has nothing to do with the creature being killed or the effect being of the Curse type. Just like NPCs, abilities, diseases or curses added to a creature ID by dialogue results or their local script will be added to any new instance of the creature.
PlayGroup , LoopGroup
Unlike what the note states about "crosswired" animation, function names do not give different results if they are used in the console or in scripts. They seemed to give different results because of scripts compiled in old mods (the compiled script is saved in the plugin along with the text of the script by the CS; what the game processes is not the text of the script but the compiled version). The explanation is that these scripts were compiled in an older version of the CS, before a change in Bloodmoon: see this page for the affected Opcodes. The NPC animation explorer linked in MSFD (MMH link) is one of these older mods and needs to be recompiled, otherwise it will fail or play wrong animations.
The condition for a value set by SetJournalIndex to persist is not simply for the journal to be "defined in the "info" section of the dialogue window": the player must have received an actual journal entry for that journal ID using the function "Journal" before. Otherwise, the issue is not only that the value will be lost when a savegame is reloaded: if a save is reloaded to an earlier point (before using SetJournalIndex) without exiting the game, the value set by SetJournalIndex will still be there; if the game has been exited before reloading, then the value will revert to the last valid entry.
Unlike what the warning states, it does not re-run other scripts, and scripts could not "be run more than once in the same frame".
The fact that it "stops combat for all actors involved" is only true in the situation where StartCombat was used to make one idle NPC attack another (StopCombat will stop combat for both), or if NPCs were following each other and one was made to attack another/the player by script. StopCombat will not make several NPCs stop attacking the player if they attacked separately;
If NPC1 first had StartCombat on player, and then also had StartCombat on NPC2, once NPC2 reacts by attacking, StopCombat on NPC1 will stop NPC1 from attacking the player but will not stop NPC2 from attacking NPC1 (who will then attack NPC2 back).
"Object_ID"->StartScript "Script_ID" does work from dialogue results, not just scripts. Will not "only work to target the actor the player is in dialogue with"; provided an instance of the target exists, it is possible to specify a different ID in dialogue results.
Referencing variables on other objects and scripts
- The reported limitation on reading a remote variable into a local one - "Note that the reverse does not work: Set local_variable to MyObject.variable ;this doesn't work!" - seems to be incorrect in the final version of the game. See the vanilla script "FraldCounter", which does update the variable.
- The cell limitation "Set MyObject.variable to (...) will only work if the cell containing the target object/(script) has previously loaded" is incorrect, at least for NPCs. Their local variables can be remotely set even if their cells were never loaded before.
the undocumented argument for Ai functions (AiActivate, AiFollow, AiEscort...):
It changes the behaviour of the "package done" flag. GetAiPackageDone normally only returns 1 for one frame, when the Ai function has been executed. The [reset] argument of Ai functions is optional. If it is given with any value (0 or 1), once the package is executed, GetAiPackageDone will NOT reset to 0 until a new Ai command is called on that NPC. The point of this is that scripted Ai sequences won't break if the "package done" frame is skipped for any reason - in vanilla it's for instance skipped if the PC waits/rests during the execution of AiTravel.
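A sketch of the [reset] argument in use (the coordinates are illustrative):

```
; the trailing 1 is the optional [reset] argument: GetAiPackageDone
; will stay at 1 after the travel ends, until a new AI command is given
AiTravel 1000 200 0 1
...
if ( GetAiPackageDone == 1 )
    ; safe to continue the scripted sequence even if the
    ; "package done" frame would otherwise have been skipped
endif
```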
OnDeath / GetHealth
OnDeath returns 1 only at the end of a normal, non-scripted death animation. MSFD suggests GetHealth as an alternative to OnDeath. However, a NPC can be dead with a positive amount of health if they were healing, because magic effects still apply during death animations. While there may (?) need to be at least one frame on which the NPC's health is < 1 for them to die, using GetHealth is not a reliable way to tell that a NPC is dead afterwards.
- is an extremely slow function if many different NPCs and creatures are recorded in the save. Make sure not to call it every frame.
- GetDeadCount will not increment if normal death animations do not play out fully (see also PlayGroup , LoopGroup). GetDeadCount increments at the end of the death animation, on the same frame as OnDeath but before it (it will return an incremented value if checked under OnDeath).
- Disabled statics do keep their collision in interiors as well as exteriors and need to be reloaded/repositioned;
- On the caution about disabling lights: changing the position of lights instead as suggested is a good workaround, but additional warnings on scripted lights: for lights that do not have a mesh, changes to their positions and script variables will persist through reloading games or starting a new game without quitting first. Adding a mesh (EditorMarker, invisible/collisionless) to the light seems to solve this issue.
Note that using this function will modify then inscribe the current state of *all* faction properties of the first faction ID into the player's save. Includes faction and rank names, requirements, etc., which will overwrite the properties edits of any new mod that wasn't already installed when the function was used. The same is true of NPC functions Mod/SetFight, Mod/SetFlee, Mod/SetAlarm, which record all of the NPC's properties into the save.
Move , MoveWorld
If a CS reference (an object instance placed in a cell in the CS, not created in the game) is moved with only Move or MoveWorld, its position will be reset if the game is reloaded in a different cell. If you want the object's new position to be persistent, use "SetPos" at the end of its movement. Alternatively, using "Enable" will also flag the object as changed and make the new coordinates persist.
PlaceAtPC , PlaceAtMe
Note that objects created by these functions aren't affected by ingame lighting until the player exits and reloads the cell.
- There are unwanted effects if PositionCell follows certain AI functions too closely in time. A delay between half a second and one second is the minimum after StopCombat, without which the NPC can decide to walk back towards its earlier StopCombat position. NPCs may return to previously given AiWander positions after PositionCell if a new Ai target isn't given.
- The following should be fixed by the MCP: If PositionCell was used on a CS NPC instance that had never been loaded in the game before, in order to move it into either the current cell or a cell that had been loaded before, the NPC would appear without its local script and could cause crashes on activation.
Waiting or loitering (resting where sleep is forbidden) doesn't count as sleep. To detect it you can instead check if ingame time (GameHour) changes within MenuMode.
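A minimal sketch of that check (variable names are hypothetical; both would be locals in the detecting script):

```
if ( MenuMode == 1 )
    if ( GameHour != lastHour )
        ; time is passing while a menu is open: the player is waiting or resting
        set playerWaited to 1
    endif
endif
set lastHour to GameHour
```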
Cast , ExplodeSpell
Creatures with empty spellcast animations can not be made to cast spells by scripts, even though they can cast their own spells or enchantments. Examples of creatures with no cast animations: ash slaves, ash ghouls. Examples of creatures with cast animations: dremora.
PlayGroup , LoopGroup
- Use "PlayGroup idle" to reset the Ai controller when NPC animations get stuck.
- If PlayGroup or LoopGroup interrupt death animations, OnDeath will not return 1 and GetDeadCount will not increase (unless "Playgroup idle" is used to reset and let a full death animation play from beginning to end). If a NPC is scripted to die (SetHealth 0) during a scripted PlayGroup or LoopGroup animation, OnDeath and GetDeadCount will not update either.
The simple declaration of the variable doesn't prevent combat reactions for NPCs with a standard 30 Fight value. The report that "an NPC with a script that uses this variable will not attack on its own accord. If you don't want the Actor to remain passive you have to manually StartCombat" is only true for NPCs with 0 Fight.
As long as an external script is used, removing a normal amount of scripted items from the player's inventory should not cause errors even if there are several. However, RemoveItem can only remove one at a time. The report "if the player has two or more copies of an object with an attached script in their inventory, using RemoveItem on that Object ID will frequently corrupt data for one of the remaining copies" is probably an unrelated error, or related to containers.
Making Actors lie down
0 Fatigue: ModFatigue and ModCurrentFatigue will also fail to make a NPC fall down if they are triggered in the script by GetAiPackageDone, meaning on the exact frame an Ai function (such as AiTravel) ends. If a reset argument was given to the Ai function (see [reset] above), the NPC will also get back up and start spinning.
Result field scripts
- Voice dialogue (Attack, Hit...): if any input (except comments) is present in the result field and the voice is triggered while in MenuMode, an error message is given: "Trying to RunFunction index greater than function count". Although harmless, the error will prompt the player to click "Yes" twice or it will close the game. If dialogue or the console is open and a NPC's health is set to 0, the error can be given by a Hit voice. In the course of normal play, this can happen for Attack voices, since they can be triggered by SetFight and StartCombat in dialogue results (dialogue is in MenuMode). NPCs who have Attack voices with scripted results can use their local script to detect MenuMode and record it in a local variable as a condition for the voices to exclude. Since Greeting results apply on the first frame of dialogue (before a script can detect MenuMode), that local variable should first be updated in the greeting's results if they include StartCombat.
GetArmorType + dialogue results or Choice
- GetArmorType seems to generally fail and return -1 in dialogue results if it's in a condition -- If ( player->GetArmorType, 0 == -1 ) always passes.
- On certain systems GetArmorType may generally fail in dialogue results and return -1 (why? helms quickly switched before dialogue seem to permanently trigger the issue). On other systems it may fail in dialogue results only if: it's in an If condition, or if it comes after a Choice function, but not if it comes before the Choice (why?). The bug does not seem to happen with other functions like GetWeapon or GetItemCount.
Some bugs (/features) that MSFD warns against are fixed by the most recent versions of the MCP, including:
- CellChanged not returning 1 for scripted teleporting or magic teleporting is fixed by MCP,
- "CellChanged doesn't always trigger, even if the player enters the cell via a normal teleport door": this comment likely referred to what happens when entering an interior through a load door that had a script with OnActivate on it, a known bug with the Vivec Arena (this is also fixed by MCP)
- RemoveItem subtracting weight from the character's encumbrance even if the amount of items is not in the character's inventory (is fixed by MCP; the hack of removing negative weight items can no longer increase encumbrance)
- PlayGroup / LoopGroup being unreliable and having different animation on upper body and lower body is fixed by MCP
Source: https://tribuo.org/learn/4.1/javadoc/org/tribuo/clustering/kmeans/KMeansModel.html
The predict method of this model assigns centres to the provided input, but it does not update the model's centroids.
The predict method is single threaded.
J. Friedman, T. Hastie, & R. Tibshirani. "The Elements of Statistical Learning" Springer 2001. PDF
Method Summary:
- copy(String, ModelProvenance): Copies a model, replacing its provenance and name with the supplied values.
- getCentroids(): Returns a list of features, one per centroid.
- getCentroidVectors(): Returns a copy of the centroids.
- getExcuse(Example): Generates an excuse for an example.
- getTopFeatures(int): Gets the top n features associated with this model.
- predict(Example): Uses the model to predict the output for a single example.

Methods inherited from org.tribuo.Model:
copy, generatesProbabilities, getExcuses, getFeatureIDMap, getName, getOutputIDInfo, getProvenance, innerPredict, predict, predict, setName, toString, validate
public DenseVector[] getCentroidVectors()

Returns a copy of the centroids. In most cases you should prefer getCentroids(), as it performs the mapping from Tribuo's internal feature ids to the externally visible feature names for you. This method provides direct access to the centroid vectors for use in downstream processing if the ids are not relevant (or are known to match).

The companion method getCentroids() should be used in preference to getCentroidVectors(), as it performs this mapping from internal feature ids to externally visible feature names.
public Prediction<ClusterID> predict(Example<ClusterID> example)

predict does not mutate the example.

Throws: IllegalArgumentException - if the example has no features or no feature overlap with the model.
Gets the top n features associated with this model. If the model does not produce per output feature lists, it returns a map with a single element with key Model.ALL_OUTPUTS. If the model cannot describe its top features then it returns Collections.emptyMap.

Parameters: n - the number of features to return. If this value is less than 0, all features should be returned for each class, unless the model cannot score its features.

Generates an excuse for an example. This attempts to explain a classification result; generating an excuse may be quite an expensive operation. The excuse either contains per class information or an entry with key Model.ALL_OUTPUTS. The returned Optional is empty if the model does not provide excuses.
protected KMeansModel copy(String newName, ModelProvenance newProvenance)
Used to provide the provenance removal functionality.
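A minimal usage sketch of the points above (assumes a trained KMeansModel and an Example built elsewhere, plus the Tribuo 4.1 dependency on the classpath; method names follow this page):

```java
import org.tribuo.Example;
import org.tribuo.Prediction;
import org.tribuo.clustering.ClusterID;
import org.tribuo.clustering.kmeans.KMeansModel;
import org.tribuo.math.la.DenseVector;

public class KMeansSketch {
    static void inspect(KMeansModel model, Example<ClusterID> example) {
        // predict assigns the nearest centroid; the model's centroids are not updated
        Prediction<ClusterID> p = model.predict(example);
        System.out.println("Assigned cluster: " + p.getOutput().getID());

        // getCentroidVectors returns a copy keyed by Tribuo's internal feature ids;
        // prefer getCentroids() when you need externally visible feature names
        DenseVector[] centroids = model.getCentroidVectors();
        System.out.println("Centroid count: " + centroids.length);
    }
}
```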
Copyright © 2015–2021 Oracle and/or its affiliates. All rights reserved.
Source: http://www.cs.cornell.edu/~paulgrubbs/
I am a fourth-year PhD student in the Computer Science department at Cornell University, advised by Tom Ristenpart. I'm located at the Cornell Tech campus in NYC.
My research is in applied cryptography, security, and systems. Currently, I am focusing on cryptography for cloud security and
verifiable abuse reporting for end-to-end encrypted messaging.
I'm also interested in lattice cryptography, primitives, cryptography engineering, and constructing protocols that do cool stuff.
My non-technical interests include censorship, privacy, legal and ethical issues related to information security, and the intersection of technology and society.
My research is supported in part by a 2017 NSF Graduate Research Fellowship.
In Spring 2018, I visited Royal Holloway, University of London, located in scenic Egham, UK.
While I was there, I worked with several lovely people including Marie-Sarah Lacharité, Brice Minaud, Kenny Paterson, and Joanne Woodage.
I did my undergrad at Indiana University, where I majored in Math and Computer Science.
After graduating, I worked for two and a half years at Skyhigh Networks
as a cryptography engineer. I left in August of 2015 to start grad school at Cornell.
You can tweet at me @pag_crypto or find me on LinkedIn.
My email address is pag[220+5] AT cornell DOT edu. Compute the sum in the brackets and append it to "pag" to get the local part of the email address.
Source: http://www.meetup.com/NSCoder-Night-Boston/events/165162372/
Our first workshop will be "Up and Running With CocoaPods". We'll go over some basics about how CocoaPods works, and how to get your projects set up with them. We will also go over creating a podspec so that you can distribute your library to other developers.
Of course we'll also have plenty of people available to answer any other questions you have about any other problems you're having. We'll have the workshops aside from the main room so that if you just want to work on your own or as a group, there will be plenty of space.
As always, thoughtbot will be providing food and beverage. Hope to see you there!
Source: https://ndisac.org/dibscc/cyberassist/cybersecurity-maturity-model-certification/level-2/ia-l2-3-5-10/
CMMC Practice IA.L2-3.5.10 – Cryptographically-Protected Passwords: Store and transmit only cryptographically-protected passwords.
Links to Publicly Available Resources
Discussion [NIST SP 800-171 R2]
Cryptographically-protected passwords use salted one-way cryptographic hashes of passwords.
See NIST Cryptographic Standards and Guidelines.
All passwords must be cryptographically protected using a one-way function for storage and transmission. This type of protection transforms a password into another form: a hashed password. A one-way transformation makes it theoretically impossible to turn the hashed password back into the original password, but inadequate complexity (IA.L2-3.5.7) may still facilitate offline cracking of hashes.
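As an illustration of the salted one-way hashing described above, here is a sketch using Python's standard library; PBKDF2-HMAC-SHA256 is one example of a suitable one-way function, and the iteration count is illustrative, not a compliance requirement:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the plaintext password."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```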
Source: https://us.forums.blizzard.com/en/wow/t/launchers-remain-offline-prompt/613216
I log in to the game every day. Since a few weeks ago, but never before, the launcher pops up a box asking me if I want to remain offline. It offers me a choice of reducing these prompts to once every seven days.
My question: what’s the point of it? I assume it’s referring to whether I want my Battle-net tag to be publicly available. Well, I don’t. That’s why it’s set to show me off-line, and has been since Battle-net tags were assigned.
So what’s the point of this prompt, and am I correct in assuming it’s related to public display of my Battle-tag? Why is it forcing me to refresh my privacy setting this frequently?
Thanks in advance.
Source: https://www.sitecore.com/knowledge-center/blog/511/introducing-sitecore-helix-4380
Sitecore® Helix is a set of official guidelines and recommended practices for Sitecore Development.
With the introduction of Helix, Sitecore now provides a set of architecture conventions and guidelines that describe how to apply recommended technical design principles to a Sitecore project.
The purpose is to secure implementations in a future-proof way by architecting them as maintainable and extensible business-centric modules.
Helix also contains development process recommendations to make it as easy as possible to build, test, extend, and maintain Sitecore implementations.
Read more: You’ll find further information about Helix in the Sitecore Helix documentation guide at helix.sitecore.net.
Sitecore® Helix benefits all
Using both Helix and Sitecore’s recommended practices provides a range of potential benefits to both customers and partners.
- Better quality in Sitecore implementations: Avoid technical roadblocks or dead ends in projects.
- Faster time to market: Work with less plumbing and go straight to the valuable parts.
- Long-term business value: Extend, change, and upgrade easier than before when requirements change.
- Sitecore-supported methodology: Get better ROI and security on investment in tools, training, and methodologies.
- Unified development practices: Provide a lower learning curve and easier integration of new developers.
- Share and reuse: Share practices, features, and functionalities between projects and within the community.
End-to-end implementation example
Alongside Helix is Habitat: an end-to-end open-source Sitecore project implemented on the Sitecore® Experience Platform™ using the Helix principles. It allows developers to experience a project "in the flesh" based on the Helix principles and serves as a good integration point when developing specific technology or marketplace modules. The Habitat project houses a wide range of common website features and functionalities and serves as a good example of how to build specific requirements or how to extend the Sitecore platform to fit the need of a business.
Learn more: Visit the Habitat website or get the source code and getting started instructions on GitHub.
Business-vertical oriented scenarios and demos
Finally, Helix and Habitat also serve as the foundation of a series of Sitecore product demonstration websites covering a range of marketing-related business scenarios. The demo sites span the breadth of the Sitecore platform, including the Sitecore Experience Database, Content Management, Sitecore Analytics and Personalization capabilities, Email Experience Management, the Print Experience Manager, and the new Sitecore Experience Accelerator. Over time, the sites will span the full Sitecore product offering and contain more business scenarios and business verticals. As with Helix and Habitat, the demo sites are publicly available as open source. Please contact your local Sitecore sales office if you are interested in a packaged version for demonstration purposes.
Learn more: The first business vertical, Legal (pictured below), is available here. The source code to the demo sites is available on GitHub.
Thomas Eldblom is Senior Manager, GA Field Product Marketing at Sitecore.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00689.warc.gz
|
CC-MAIN-2023-14
| 3,219
| 20
|
https://infothela.com/download-microsoft-visio-professional-download-and-install-or-reinstall-office-office-or-office/
|
code
|
How to Download Microsoft Office Pro Plus + Visio + Project 64 Bit For Free. Office Professional Plus Free Download
Free download Microsoft Office Professional Plus full version offline installer for Windows PC with direct download and Torrent Magnet link. Learn how to install Office on your PC or Office for Mac. Visio includes stencils for business, basic network diagrams, basic flowcharts, organization charts, and general multi-purpose diagrams.
When I attempt to install Visio, I am given a message that Click-to-Run installers are not supported where a Windows Installer based Office application is already installed. The exact message is shown below:
What can I do, within the constraints of the downloads available at my level of MSDN subscription, to install Office Professional and Visio Professional on my computer? Yes, I am aware of this article about Click-to-Run and Windows Installer on the same computer, which roughly describes the issue I am having, but let me state again: Office Professional Plus was installed from an MSDN ISO and I am attempting to install Visio Professional from an MSDN ISO; neither of these are Click-to-Run downloads.
That being said, any help or support I can get on this would be greatly appreciated. If there is more information I can provide to clarify my issue, I will be happy to do so. The error message you are getting is a known issue. You cannot have both an MSI installation and Click-to-Run on the same computer. I received an email from the Microsoft Community Team asking if my question had been answered, which I don’t feel it has, here or in the Office IT Pro forum version, so I am posting another reply for further help as advised.
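The rule the support reply invokes ("you cannot have both MSI and Click-to-Run on the same computer") can be sketched as a toy compatibility check. This is illustrative only: the helper function and product names are hypothetical, and note that the thread itself is about this error appearing even for two MSI installs, which is the known issue being discussed.

```python
# Toy model of the installer conflict described in this thread: Windows
# Installer (MSI) based Office products and Click-to-Run (C2R) products
# cannot coexist on one machine, so adding a product is blocked whenever any
# already-installed product uses the other technology.

def can_install(technology, installed):
    """`technology` is 'MSI' or 'C2R'; `installed` is a set of
    ('product name', 'technology') pairs already on the machine."""
    other = {"MSI": "C2R", "C2R": "MSI"}[technology]
    return all(tech != other for _, tech in installed)

installed = {("Office Professional Plus 2016", "MSI")}
print(can_install("MSI", installed))  # MSI-based Visio alongside MSI Office: allowed
print(can_install("C2R", installed))  # Click-to-Run Visio alongside MSI Office: blocked
```

By this rule, the poster's scenario (MSI Office plus MSI Visio) should succeed, which is why the Click-to-Run error he received was treated as a known issue rather than expected behavior.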
More information about the issue, such as confirmation that the versions of Office Professional Plus and Visio Professional I am using are both MSI based, not Click-to-Run, can be found in the above linked thread. Any further suggestions as to the cause of this issue, and more importantly a resolution, would be greatly appreciated.
Did you find any workaround that does not imply uninstalling Office and reinstalling it after Visio? Unfortunately I wasn’t able to find any workaround.
Adam Hubble. And just to deal with the response I expect from the outset: Yes, I am aware of this article about Click-to-Run and Windows Installer on the same computer, which roughly describes the issue I am having, but let me state again: Office Professional Plus was installed from an MSDN ISO and I am attempting to install Visio Professional from an MSDN ISO; neither of these are Click-to-Run installations.
Thanks, Adam. Hi Adam, the error message you are getting is a known issue. Thank you.
In reply to A. User’s post on May 26: I don’t have Click-to-Run installed, which is why I am posting. Cheers, Adam. In reply to Adam Hubble’s post on June 3: I had the same problem; I’ve got a potential workaround.
I now have all my regular Office tools and Visio. Emanuele Fumeo. Thanks in advance. Emanuele.
In reply to Emanuele Fumeo’s post on February 20: Hi Emanuele, unfortunately I wasn’t able to find any workaround. Sorry I couldn’t be of more help. In reply to Adam Hubble’s post on February 20: Hi Adam, thanks anyway, it was worth asking!
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00378.warc.gz
|
CC-MAIN-2023-14
| 4,233
| 13
|