url: string (13-4.35k chars)
tag: string (1 distinct value)
text: string (109-628k chars)
file_path: string (109-155 chars)
dump: string (96 distinct values)
file_size_in_byte: int64 (112-630k)
line_count: int64 (1-3.76k)
https://forums.tomshardware.com/threads/bad_pool_caller-blue-screen.1544551/
code
I keep getting a BSOD called BAD_POOL_CALLER after my laptop works for a while; the error pops up faster if I'm playing a game or watching a video. I reinstalled Windows 7 and it won't stop happening. Thank you. If you have multiple sticks of RAM, remove all and run the test using one stick at a time. Sadly I only have one. What I did is move the RAM to the second slot and run the test again; errors were found, but not as many as in the other slot (1 every 30 seconds instead of 100 per 30 seconds). Does this mean anything? Or is it just bad RAM, plain and simple? Do I need to buy another one?
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571027.62/warc/CC-MAIN-20190915093509-20190915115509-00343.warc.gz
CC-MAIN-2019-39
616
3
http://www.onrpg.com/boards/threads/222606-The-Phoenix-Project-Unreal-Engine-4-Tech-Demo
code
The Phoenix Project - Unreal Engine 4 Tech Demo At first I thought it was a tech demo for Project Phoenix: https://www.kickstarter.com/projects...eat-aaa-talent But I didn't even need to PLAY the video to notice that it wasn't. Only plays Magic DPS Classes, Healers and Machinist/Engineers. Playing MMO: FFXIV (Balmung) Waiting On: FFXV, Tree of Savior · Saving For: New Computer Originally Posted by coldReactive It is a tech demo for the game, showing some pre-made powers and some of the particle system and effects that will be in it... Last edited by Seya; 04-07-2014 at 04:19 AM.
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276564.72/warc/CC-MAIN-20160524002116-00082-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
589
11
http://antonyrishin.com/skill-onboarding
code
Ask Sam is a conversational assistant designed for Sam's Club and Walmart associates. Sam's Club is a membership-based store owned by Walmart. Ask Sam, powered by NLU and conversation design, has helped us reimagine the in-store experience for Sam's Club. I have been leading its product design for the last three years. With a vision of bringing all information to users' fingertips, Ask Sam has achieved the following: In this initiative, we tried to solve a million-dollar industry problem: skill discovery. Our goal was to help our users effortlessly discover and learn our new skills. Even though the overall usage of Ask Sam is high, usage across skills is inconsistent: only around 30% of skills perform well, leaving 70% of skills under-used. How might we improve adoption of low-performing skills? Together with the product team, we investigated various reasons behind low skill adoption. “Lack of awareness” emerged as a critical pain point that could be a potential opportunity if solved. We carefully planned and executed a discovery phase to evaluate it and build upon it. The goal was to collaborate and ideate together as an experience team. The team brainstorming exercise helped us generate more user pain points and solution directions. The team involved: Product manager | Design | Engineering | Business. We conducted stakeholder interviews with the operations team, since we needed to understand how this problem was handled at the operations level. “Apart from the generic training, there is no specific training given to the associate on using applications” - Club operations. Knowing that discoverability is a universal problem among conversational assistants, it was essential to evaluate what measures had been taken across the industry. Most assistants have dedicated skill documentation for users to explore and learn; featured skill cards and skill suggestions were some newer initiatives. We also had a similar explore-skills page and a “what’s new” feature to communicate new and available skills, but their usage was minimal. As a next step, we did multiple club visits, in-person interviews, and observation studies to get to know our users better. Our research surfaced a key finding: “Our users didn't know that certain skills could exist.” They had a very strong perception that Ask Sam's capabilities were limited. While we designed Ask Sam to perform a wide range of capabilities and features, some of our users continued to perceive Ask Sam as a tool for only a few essential skills, like product look-up, schedule, and floorplan look-up. They neither knew about other skills nor expected Ask Sam to perform different things. The product team's conceptual model of Ask Sam and the users' mental model of Ask Sam turned out to be different; for the best experience, the two need to match. Product team's conceptual model vs. user's mental model. Based on the information collected during the discovery phase, we decided on the following approach: push the right information to the user up front, rather than waiting for the user to pull it. Our goal was to make the information easily discoverable, which meant placing it at the right touchpoints in the user journey. Cross-application promotion brings the user's immediate attention, but a busy user might disregard it, and it creates dependencies on other platforms and teams. Both options were effective but required close collaboration and integration with the other apps and service teams.
We decided to narrow the scope to our own application and identify touchpoints within it. The chat screen within Ask Sam is the page where our users land first and stay active asking questions, so we decided to meet our users where they are: the chat screen. Among the various components considered, we settled on a chat component, as it is more organic and scalable. Our goal was then to identify at which point in the conversation we should push the skill information. Before they start a conversation: users tended to ignore the greeting because it was static content, but if written well, along with the right micro-interactions, a greeting can engage users before they start asking questions. Follow up after a conversation: following up with a skill suggestion after a conversation can also be an excellent opportunity for engagement. Generic follow-up: suggest a skill they have not used yet. Contextual follow-up: suggest a skill related to the previous conversation. To keep users actively engaged, we must always provide value in the information we show, so our goal was to make the skill suggestion relevant to the user group. To maintain relevancy, we created a skill map with respect to user groups. To reduce redundancy, we integrated with real-time skill usage so that users only see relevant skills they have not used (a small sketch of this filter follows below). The information needed to meet several criteria; based on those, we finalized four main information components. We conducted prototype testing with club users to finalize the design components, which helped us understand which design worked best; we tested variations in card behavior. Once we identified all the design components, we worked on creating individual assets for each skill card, which involved designing the content and the iconography. Once we released the feature, we made a few changes based on user input that strengthened the intended experience. Apart from skill discovery, I have worked on many other features. Here are a few. Walmart being a retailer, product information is crucial to the business. Here is a classic example of reimagining a content-heavy, web-based promotion platform as a seamless conversational experience. The new skill enabled better suggestions and reduced promotion search time by 70%. Connect with me: apart from design, I sing, take photos, and drive. Connect with me if any of these passions aligns with yours.
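To make the follow-up idea above concrete, here is a minimal sketch of the suggestion filter it implies: relevant to the user's group, excluding skills the user already uses. All names (SKILL_MAP, suggest_skill) and the skill lists are hypothetical illustrations, not the actual Ask Sam implementation.

```python
# Minimal sketch of the follow-up suggestion filter described above.
# SKILL_MAP and the skill names are hypothetical placeholders, not the
# actual Ask Sam skill catalogue.
SKILL_MAP = {
    "cashier": ["product-lookup", "price-check", "schedule"],
    "stocker": ["floorplan-lookup", "inventory-count", "schedule"],
}

def suggest_skill(user_group: str, used_skills: set[str]) -> str | None:
    """Return one relevant skill this user has not tried yet, if any."""
    candidates = [s for s in SKILL_MAP.get(user_group, [])
                  if s not in used_skills]
    return candidates[0] if candidates else None

# A stocker who has only used the schedule skill gets a fresh suggestion:
print(suggest_skill("stocker", {"schedule"}))  # -> floorplan-lookup
```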
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00286.warc.gz
CC-MAIN-2023-06
5,986
57
https://nchanter.livejournal.com/123247.html
code
today james attempted for like an hour to get the internet working on my computer. we found about 5 things that may have been wrong with both my comp (who is still nameless... hmmm...) and fred (being pam's computer that i am using to write this entry. i really should see if i can just DL the client to this thing, would make life so much easier) but neither of them fixed it. next step... get my WinME cd from my dorm, when i finally go in and clean it out, which might be tomorrow, and do something with protocols or something. i don't remember. if that all fails... call (shudder) eric... or the guy who set up the thing in the first place. but pam did not seem too enthusiastic about anything related to him. coureton turns 20 today. happy birthday to him. i'm going to go now. i'm being a bad hostess.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00072.warc.gz
CC-MAIN-2021-39
802
3
https://community.filemaker.com/thread/114761
code
If you perform a find in FileMaker, you get a "found set" of records. You can then print all those records from a layout in your database if you specify the "records being browsed" option in the print dialog. So if you perform a find for the records for which you want to print labels, you can then print just those addresses on your label stock. A simple way to do that is to add a check box field and click the check box for each address that you want to print. Then enter Find mode (click the Find button in the status toolbar), click the check box, and perform the find (click the Perform Find button). The resulting found set can then be printed to produce labels for just the selected records. Please note that this method is not ideal if you are hosting the database over a network and more than one user might want to print different groups of address labels at the same time.
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117519.92/warc/CC-MAIN-20170823035753-20170823055753-00010.warc.gz
CC-MAIN-2017-34
895
5
https://docs.telerik.com/devtools/aspnet-ajax/controls/dateinput/accessibility-and-internationalization/wcag-2.0-and-section-508-accessibility-compliance
code
RadInput is fully compliant with the XHTML 1.1 requirement. Telerik RadInput is Level AA compliant (in conformance with the W3C Web Accessibility Guidelines 1.0). Telerik RadInput satisfies the requirements of "Section 508" for software accessibility. As a result, the component can be used in US federal institutions and other organizations that require software to be accessible to people with disabilities. The US federal mandate requires that information technology be made accessible to people with disabilities. Much of Section 508 compliance concerns making websites, intranets, and web-enabled applications accessible. Section 508 compliance has since become a major prerequisite not only in government-related software, but also in most enterprise and corporate software solutions. The main goal of these guidelines is to encourage developers to create applications that provide accessible content. Moreover, adhering to these guidelines makes web content more accessible to all kinds of users on different devices and interfaces: desktop browser, voice browser, mobile phone, automobile-based personal computer, etc. In accordance with these guidelines, W3C defines three levels of conformance that developers may implement in order to provide some level of content compliance in their products: Conformance Level "A", Conformance Level "Double-A", and Conformance Level "Triple-A". For more details on the W3C "Web Content Accessibility Guidelines 1.0", see https://www.w3.org/TR/WAI-WEBCONTENT/ In our effort to make our products compliant, each web control we develop, and its QSF, strives to attain at least one of the conformance levels listed above. RadInput also has full support for keyboard navigation with access keys and arrow-key navigation.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100499.43/warc/CC-MAIN-20231203094028-20231203124028-00707.warc.gz
CC-MAIN-2023-50
1,755
12
https://lists.debian.org/debian-devel/2001/02/msg00574.html
code
Re: Expires: headers for Packages.gz, Sources.gz On Fri, Feb 09, 2001 at 10:15:02PM -0700, Jason Gunthorpe wrote: > On Sat, 10 Feb 2001, Matt Zimmerman wrote: > > Yes, but Squid (for example) will not forward an IMS request to the server > > if the cached object is fresh. > Then it is in violation of RFC2068, section 14.9: > The Cache-Control general-header field is used to specify directives that > MUST be obeyed by all caching mechanisms along the request/response chain. > [..] max-age Indicates that the client is willing to accept a response > whose age is no greater than the specified time in seconds. Unless > max-stale directive is also included, the client is not willing to accept > a stale response. > Which is the header APT sends, with a 1 day old setting by default. If you > set that to 0 say (there is a configuration setting), then squid is required > to never return a non-validated response. Ah, I thought you were referring to If-Modified-Since, not Cache-Control: max-age. However, max-age=1 day will not have the same effect as the Expires: header I proposed. Unless I am mistaken, max-age refers to the maximum time since the object was last refreshed (newly retrieved or IMS/Not Modified). This means that if Packages.gz is downloaded 23 hours after it was modified, a client's request of max-age=86400 will still get the old object for up to a day later. Whereas, if the object was set to expire 1 day after its file modification time, it would become stale right around the time the new version comes in. It could even expire a little sooner to be safe.
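A small worked example of the difference being described, assuming the post's scenario (Packages.gz is regenerated daily, and a cache fetches a copy 23 hours after it was last modified); the numbers are illustrative only:

```python
# Worked example of the gap described above (all times in hours since the
# file's last modification). Cache-Control: max-age counts freshness from
# the moment of retrieval; an Expires header can be anchored to mtime.
MAX_AGE_H = 24        # max-age=86400 seconds, APT's 1-day default
FETCHED_AT_H = 23     # cache retrieved the object 23 h after modification

stale_with_max_age = FETCHED_AT_H + MAX_AGE_H  # fresh until 47 h after mtime
stale_with_expires = 24                        # Expires: mtime + 1 day

# With a daily-regenerated Packages.gz, max-age can serve the old copy for
# this many extra hours after the new one exists:
print(stale_with_max_age - stale_with_expires)  # 23
```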
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945111.79/warc/CC-MAIN-20180421090739-20180421110739-00027.warc.gz
CC-MAIN-2018-17
1,583
24
https://www.professormesser.com/tag/list/
code
If you need to allow or restrict access to a file or a network resource, then you need an access control list. In this video, you’ll learn about ACLs and how they are used to set access rights to your network resources. Access control lists are a fundamental security component of many operating systems and security devices. In this video, you’ll learn how to use ACLs to control access to your network resources. Key revocation is a normal part of any PKI. In this video, you’ll learn what circumstances can cause a key to be revoked and how you can check to see if a key may be on the certificate revocation list.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00817.warc.gz
CC-MAIN-2023-50
621
3
https://www.viruss.eu/it-alerts/resolved-fps-web-and-dimc-outages/
code
Resolved: FPS web and DIMC outages This outage appears to have been caused by a failed deployment; see http://alerts.its.psu.edu/alert-3748 The systems in question were restored to an operational state at 8:17 AM. More information: Resolved: FPS web and DIMC outages Story added 14 October 2015; the full text is available from the content source at the link above.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653501.53/warc/CC-MAIN-20230607010703-20230607040703-00443.warc.gz
CC-MAIN-2023-23
350
5
https://www.catalyzex.com/paper/arxiv:2102.12700
code
The rapid production of data on the internet and the need to understand how users are feeling, from a business and research perspective, have prompted the creation of numerous automatic monolingual sentiment detection systems. More recently, however, due to the unstructured nature of data on social media, we are observing more instances of multilingual and code-mixed texts. This development in content type has created a new demand for code-mixed sentiment analysis systems. In this study we collect, label, and thus create a dataset of Persian-English code-mixed tweets. We then introduce a model which uses BERT pretrained embeddings as well as translation models to automatically learn the polarity scores of these tweets. Our model outperforms the baseline models that use Naïve Bayes and Random Forest methods.
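As a rough illustration of the baselines the abstract names (the BERT-based model itself is more involved), a conventional bag-of-words pipeline might look like the sketch below; the two example tweets and labels are invented placeholders, not samples from the paper's dataset.

```python
# Rough sketch of the Naive Bayes / Random Forest baselines the abstract
# mentions, using a standard TF-IDF pipeline. The tweets and labels are
# invented placeholders, not samples from the paper's dataset.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = ["in restaurant kheyli khoob bood, loved it!",  # Persian-English mix
          "worst tajrobe ever, never again"]
labels = [1, 0]  # 1 = positive, 0 = negative

for clf in (MultinomialNB(), RandomForestClassifier(n_estimators=100)):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(tweets, labels)
    print(type(clf).__name__, model.predict(["che tajrobe khoobi"]))
```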
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00416.warc.gz
CC-MAIN-2022-27
826
1
https://community.f5.com/kb/technicalarticles/adaptive-apps-automating-nginx-solution-deployments-and-api-publication---soluti/308010
code
Adaptive Apps: Automating NGINX Solution Deployments and API Publication - Solution Demo Adaptive applications utilize an architectural approach that facilitates rapid and often fully-automated responses to changing conditions: for example, new cyberattacks, updates to security posture, application performance degradations, or conditions across one or more infrastructure environments. And unlike many apps today, which are labor-intensive to secure, deploy, and manage, adaptive apps are enabled by the collection and analysis of live application and security telemetry, service management policies, advanced analytic techniques such as machine learning, and automation toolchains. A key component of our Adaptive Apps vision is to help our customers reliably accelerate deployments of new applications in automation-enabled environments. As one example use case, our customers require the ability to simplify and automate deployment of complex F5 solutions as well as API publication. This solution example demonstrates the ability to automate deployment of NGINX NMS and ACM using Terraform and Ansible. It also demonstrates how to leverage APIs to quickly roll out new application feature APIs for such a workload using an automation pipeline, reducing time to market and maintaining a competitive advantage. In this solution example, we show how our customers can: - Automate their deployments of F5 NGINX Management Suite (NMS) and API Connectivity Manager (ACM) - Continually and quickly add value to their NGINX-based applications via automated feature rollouts using NGINX API services Automating NGINX Solution Deployments and API Publication Problem Statement & Customer Outcome Customers require the ability to: - Simplify and automate NGINX NMS / ACM deployments - Publish APIs / new features in their apps via their CI/CD automation pipeline to minimize time to market Using this solution example, customers can: - Maximize incremental feature delivery velocity via automation, enabling them to maintain competitive advantage, drive incremental revenue, and optimize resource utilization - Automate creation of a central API resource to improve API discovery and reduce duplicated effort This solution deploys the F5 NGINX Management Suite and API Connectivity Manager using infrastructure-as-code tools to provide consistent, scalable, and reliable infrastructure. Ansible playbooks are used extensively so that users at various stages of adopting infrastructure as code can take advantage of this solution. Users just getting started with automation can use the playbooks directly to bring some consistency to their environments. More advanced users can execute these playbooks from HashiCorp Terraform when deploying instances, or even use HashiCorp Packer to generate pre-built images to deploy. The Developer Portal from Management Suite provides a common location to publish APIs to, and a common location to discover APIs from. This can help reduce the time to learn new APIs and reduce the risk of creating duplicate APIs.
Please refer to https://github.com/f5devcentral/adaptiveapps for detailed instructions and artifacts for deploying this demo, including the following: - Deploying F5 NGINX Management Suite with API Connectivity Manager to OpenStack using Terraform and Ansible - Adding required credentials - Updating Terraform variables for your deployment environment - Ansible playbooks required for deploying: - NGINX Plus - NGINX Management Suite (NMS) - NGINX Instance Manager (NIM) - NGINX API Connectivity Manager (ACM) - NGINX Agent - NGINX Developer Portal - Publishing a developer's changes to an API specified by an OpenAPI document to F5 NGINX API Connectivity Manager (ACM). The changes are deployed using an Ansible playbook that leverages ACM's REST API; a sketch of this step follows below. Using this solution in a CI/CD pipeline provides a way to automate API discovery, registration, and security as API changes are made. - Automate publication of a new REST API endpoint Deploying NGINX as Code Please watch the demo video here:
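For a sense of what the publication step in such a pipeline does, the sketch below pushes an OpenAPI document to an ACM-style REST endpoint. The host, workspace path, credentials, and file name are all assumptions for illustration; the authoritative automation is in the f5devcentral/adaptiveapps playbooks linked above.

```python
# Hypothetical sketch of the API-publication step in a CI/CD pipeline:
# push an OpenAPI document to an ACM-style REST endpoint. The host, path,
# credentials, and file name are illustrative assumptions; see the
# f5devcentral/adaptiveapps repository for the real Ansible playbooks.
import json
import requests

ACM_HOST = "https://acm.example.com"   # placeholder management host
API_DOC = "petstore-openapi.json"      # OpenAPI document produced by the build

with open(API_DOC) as f:
    spec = json.load(f)

resp = requests.put(
    f"{ACM_HOST}/api/acm/v1/services/workspaces/demo/api-docs/petstore",
    json=spec,
    auth=("admin", "changeme"),        # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print("API doc published:", resp.status_code)
```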
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819847.83/warc/CC-MAIN-20240424174709-20240424204709-00390.warc.gz
CC-MAIN-2024-18
4,116
34
https://hcimarkus.blogspot.com/2020/03/vcenter-repoint-domain-for-vcsa-with.html
code
Starting with vSphere 6.7, VMware announced a simplified vCenter Single Sign-On domain architecture by enabling vCenter Enhanced Linked Mode support for vCenter Server Appliance installations with an embedded Platform Services Controller. You can use the vCenter Server converge utility to change the deployment topology from an external Platform Services Controller to an embedded Platform Services Controller with support for vCenter Enhanced Linked Mode. As of this release, the external Platform Services Controller architecture is deprecated and will not be available in future releases. For more information, see https://kb.vmware.com/s/article/60229

cmsso-util domain-repoint -m execute --src-emb-admin Administrator --replication-partner-fqdn vCenter1.domain.intern --replication-partner-admin Administrator --dest-domain-name vsphere.local
Enter Source embedded vCenter Server Admin Password :
Enter Replication partner Platform Services Controller Admin Password :
The domain-repoint operation will export License, Tags, Authorization data before repoint and import after repoint.
WARNING: Global Permissions for the source vCenter Server system will be lost. The administrator for the target domain must add global permissions manually. Source domain users and groups will be lost after the Repoint operation. User 'firstname.lastname@example.org' will be assigned administrator role on the source vCenter Server system.
The default resolution mode for Tags and Authorization conflicts is Copy, unless overridden in the conflict files generated during pre-check. Solutions and plugins registered with vCenter Server must be re-registered.
Before running the Repoint operation, you should back up all nodes including external databases. You can use file-based backups to restore in case of failure. By using the Repoint tool you agree to take the responsibility for creating backups, otherwise you should cancel this operation.
The following license keys are being copied to the target Single Sign-On domain. VMware recommends using each license key in only a single domain. See "vCenter Server Domain Repoint License Considerations" in the vCenter Server Installation and Setup documentation.
Repoint Node Information:
Source embedded vCenter Server: vCenter2.domain.intern
Replication partner Platform Services Controller: vCenter1.domain.intern
All Repoint configuration settings are correct; proceed? [Y|y|N|n]: y
Starting License export ... Done
Starting Authz Data export ... Done
Starting Tagging Data export ... Done
Export Service Data ... Done
Uninstalling Platform Controller Services ... Done
Stopping all services ... Done
Updating registry settings ... Done
Re-installing Platform Controller Services ... Done
Registering Infra services ... Done
Updating Service configurations ... Done
Starting License import ... Done
Starting Authz Data import ... Done
Starting Tagging Data import ... Done
Applying target domain CEIP participation preference ... Done
Starting all services ... Done
root@vCenter2 [ ~ ]#
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00496.warc.gz
CC-MAIN-2022-40
3,031
50
http://www.crunchy.com/?q=blog&page=1
code
Good morning Testers, We're making a change to the blog. We will no longer be posting patch notes here. Patch notes will still be posted to our Announcement thread in the forum per our standard practices. Instead, we will be using the blog as a developer blog. Our hope is that we can provide you with insights into our development process and we plan to have all members of the development team posting in the developer blog as their schedule permits. You'll also notice that we will be allowing comments in the blog going forward. Let us know what you think and also let us know what sorts of topics you find the most interesting. Be on the lookout for the first of these development blog posts sometime next week!
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891706.88/warc/CC-MAIN-20180123032443-20180123052443-00756.warc.gz
CC-MAIN-2018-05
716
5
https://oath-ldap.stroeder.com/docs.html
code
- Secure Enrollment - Bulk Enrollment - The OpenLDAP server implementation (see also slapd(8)) - slapd backend, also usable as an overlay, which sends some LDAP requests to an external daemon via a Unix domain socket (see also slapd-sock(5)) - OTP validator - The component validating the OTP values, which runs on a slapd provider. It is the only component which has access to the clear-text user password and OATH shared secret, and it updates the HOTP counter. It does not need any IP network access. - bind proxy - This component runs on slapd consumers, which are by design read-only, and relays LDAP simple bind requests to one of several slapd providers. - web browser - Normal web browser used by the OTP admin to access the enrollment web app. - enrollment web app - A simple web application for resetting OATH token device entries to start enrollment. - enrollment client - A hardened device where you plug in the OATH hardware token (e.g. Yubikey) to be initialized. In particular, users shall not enter their normal password at this device. - LDAP client - Any LDAP client software which checks the user's password and OTP by sending an LDAP simple bind request. - A person is not a user account! - When an OTP token is physically handed out to a person, the owner attribute (or similar) in the oathToken entry is set to register the device for its owner (see also the enrollment process). - Each account may be associated with an oathToken to force two-factor authentication for this particular user account. - The shared secret (seed) and the user's password shall not be present in the clear at the same time on any system (except in the small external OTP validator daemon). - The shared secret shall never be displayed on screen (QR code enrollment is considered harmful). - Pre-configured shared secrets on devices shall not be used. - The OTP admin shall not be able to initialize a token for a user. - The user shall not be able to initialize a token without the help of an OTP admin. - True randomness of shared keys must be ensured. - The user requests an OATH token (reset) by personally asking an OTP admin. Typically both meet in person. Only for first-time OTP users or if an additional device is needed: - The OTP admin adds a new OATH token device entry and registers the person entry of the user (not the account!) as owner of the device. - The OTP admin hands out the device to the user. - The OTP admin resets the OATH token device entry using the OATH enrollment application. The OATH enrollment application... - generates a random enrollment password for the OATH token which is only valid for a couple of minutes. - sends the first part of the enrollment password via e-mail to the user and displays the second part to the OTP admin. - The OTP admin hands out the second part of the random enrollment password and special enrollment hardware (laptop, or similar) to the user. - The user starts the enrollment hardware. Enrollment software is automatically started. - The user plugs the OATH token into the enrollment hardware and enters the first and second part of the enrollment password. When resetting a formerly initialized OATH token, the user also enters the token configuration code. The enrollment software (see the sketch at the end of this section)... - binds to the LDAP server as the token entry with the enrollment password. - retrieves the effective OATH token parameters (policy), including the master public key, from the LDAP server. - generates a new random OATH shared secret and stores it in the token device. - encrypts the OATH shared secret with the master public key.
- stores the encrypted OATH shared secret into the OATH token device entry via LDAP. - The user unplugs the OATH token from the enrollment hardware. - The user returns the enrollment hardware to the OTP admin. - The user starts using the OATH token. Bulk Enrollment Process In some situations it may be required to ship pre-keyed token devices, e.g. during an initial rollout. Operational and security considerations In general it is highly recommended to follow the secure enrollment process. Thus you should write down a very clear rationale why that is not possible in your situation and why you have to use bulk enrollment instead. - The bulk enrollment must be conducted by fully trusted and educated personnel! The bulk enrollment must be conducted in a secure environment: - Use dedicated, freshly installed computers. - The system used for key generation (ykinit) must have sufficient entropy for the random number generator. - Lock the office when absent. - Do not use modified software. - The pre-keyed token devices have to be shipped to the users in secure envelopes! Encourage users to immediately report damaged envelopes. - Bear in mind that shipping is generally a security risk! - Note that shipping to other countries can be a tricky logistic challenge. Make yourself familiar with customs regulations, especially those for cryptographic devices. In many cases it can be easier to let users buy new devices locally and work from there. - Prepare decent documentation with all necessary details for your auditor, preferably before conducting the bulk enrollment. - Requires oath-ldap-tool 1.3.0 or newer. - Prepare a list of owner IDs to register token devices with. Steps done with oath-ldap-tool, either sequentially on one system or as a pipeline with three systems: Remove pre-configured slots from brand-new Yubikey tokens. The tokens are added to the OATH-LDAP servers and associated with an owner (person). After that, store the tokens in a separate box for freshly added tokens. Ownership is saved in OATH-LDAP; there is no need to keep track of token ownership externally. The shared secrets are generated and stored in the OATH-LDAP server and the hardware tokens. After that, store the tokens in a separate box for ready-to-ship tokens. Check whether generating correct OTP values works and display the token owner (person). After that, put the token into an envelope with the personal name / address of the token owner. The relevant sub-commands can be invoked with argument -c or --continue for continuous interactive operation until you hit the Ctrl+C key combination. For all commands below, replace "..." with the following command-line arguments: --continue --ca-certs /path/to/trusted-cacerts.pem --ldap-url ldaps:// Example for Æ-DIR with search base ou=ae-dir: --continue \ --ca-certs /path/to/trusted-cacerts.pem \ --ldap-url ldaps://ae-dir-p1.example.com/cn=otp,ou=ae-dir \ --admin-dn uid=xkcd,ou=ae-dir Sub-command ykreset is used to remove the pre-configured slots on brand-new Yubikey devices. oath-ldap-tool ykreset -f -c -o "" Sub-command ykadd is used to add OATH token entries based on an LDIF template file. Without argument -o or --owner it will interactively ask for an owner ID. oath-ldap-tool ykadd ... --ldif-template /path/to/aedir-hotp-yubikey-template.ldif See also: Examples for aedir-hotp-yubikey-template.ldif Sub-command ykinit is used to generate a shared secret, send it to the OATH-LDAP server, and store it in the Yubikey.
Furthermore, the Yubikey device is protected with a randomly generated access code which is displayed in the output. The access code is stored in the token entry, encrypted with the same key as the shared secret. oath-ldap-tool ykinit ... Sub-command ykcheck is used to test whether generating OTP values works correctly and whether those values are valid. Furthermore, the owner's contact information is displayed, which can be used for shipping the device to the right owner. oath-ldap-tool ykcheck ...
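To make the secure-enrollment flow above easier to follow, here is an illustrative sketch of what the enrollment client does, using the ldap3 and cryptography libraries. The DN, attribute names, and the RSA/OAEP encryption choice are assumptions for illustration only; the real client and schema ship with the OATH-LDAP project.

```python
# Illustrative sketch of the enrollment-client flow described above, using
# the ldap3 and cryptography libraries. The DN, attribute names, and the
# RSA/OAEP scheme are assumptions for illustration only.
import os
from ldap3 import Connection, Server, MODIFY_REPLACE
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

TOKEN_DN = "serialNumber=12345,cn=otp,ou=ae-dir"  # placeholder token entry

# 1. Bind as the token entry itself, using the enrollment password.
conn = Connection(Server("ldaps://ae-dir-p1.example.com"),
                  user=TOKEN_DN, password="enrollment-password",
                  auto_bind=True)

# 2. Read the master public key from the effective token policy
#    (attribute name assumed).
conn.search(TOKEN_DN, "(objectClass=oathToken)", attributes=["oathPublicKey"])
master_key = serialization.load_pem_public_key(
    conn.entries[0]["oathPublicKey"].raw_values[0])

# 3. Generate a fresh random HOTP seed (it would also be written into the
#    token device) and encrypt it to the master public key.
shared_secret = os.urandom(20)
ciphertext = master_key.encrypt(
    shared_secret,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# 4. Store the encrypted seed back into the token entry via LDAP.
conn.modify(TOKEN_DN, {"oathEncryptedSecret": [(MODIFY_REPLACE, [ciphertext])]})
```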
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654016.91/warc/CC-MAIN-20230607211505-20230608001505-00700.warc.gz
CC-MAIN-2023-23
7,485
85
http://www.irv2.com/forums/f85/break-away-switches-systems-196683.html
code
There are some systems that are non-electrical and I cannot tell you about those, but on the ones with a breakaway SWITCH (electrical) it is a SPST, normally closed, clothespin-type switch with something between the "jaws" to prevent it from closing (the plastic plug). You pull the plug, the jaws snap closed, and the circuit is complete. Near as I can tell, they all work that way. HOWEVER: sticking with OEM means that if the system fails, they cannot stand there, as three technicians did one day in my office, and say "it is HIS FAULT it does not work" (each pointing to someone else). The next day (one of them proved whose fault it was) it worked. Home is where I park it!
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721415.7/warc/CC-MAIN-20161020183841-00156-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
673
6
http://maven.40175.n5.nabble.com/Re-jenkins-How-is-asfMavenTlpStdBuild-defined-td5939057.html
code
Re: [jenkins] How is asfMavenTlpStdBuild() defined? On 20/07/18 21:48, Robert Munteanu wrote: > I was looking over the setup the Maven project has in Jenkins and I'd > like to understand and maybe copy it :-) I think it uses a Jenkins > library, as the Jenkinsfile only has a single code line > but I could not find any definition of a shared pipeline library in the > job definitions ( ) and also a Github search did not yield any sources. > Is there any documentation or other place where I can see how this is
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657097.39/warc/CC-MAIN-20190116073323-20190116095323-00499.warc.gz
CC-MAIN-2019-04
513
8
https://docs.gretel.ai/architecture-and-components
code
Workers can be automatically launched for you in Gretel Cloud. This is the default mode when uploading a configuration from the Console or the CLI. In cloud mode, once a request for a model is received, Gretel will provision a worker for you, and the model and associated artifacts (such as quality reports, sample data, etc.) will also be stored in Gretel Cloud. You may download these artifacts at any time. With a model created and stored in Gretel Cloud, model servers can be created to utilize the model and generate, transform, or classify data.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00329.warc.gz
CC-MAIN-2021-43
549
1
https://steemkr.com/utopian-io/@ookamisuuhaisha/blagominer-v2-300001-0-release-mining-more-than-2-coins-and-optimisations-1562518995808
code
Quick project info This software is "mining" software for a family of cryptocurrencies based on so-called Proof-of-Capacity. The best information about that concept is available in the info materials from the BURST coin. You can learn some technical bits from resources on their home page, for example this whitepaper. Sidenote: the repository was copied around a few times by previous maintainers. If you're interested in its history, see my first post about Blagominer. This submission is related to my work on this project between Jun 28, 2019 - Jun 26, 2019. I've added two very interesting features and released them as v2.300001.0. A bit before that, Jun 25 - Jun 13, I did a cleanup and bug-fixing session I also wanted to write about, but I was so busy I didn't notice so much time had passed already! Actually, during that time I made my first release: lots of bugfixes, an option to configure URLs, and automatic selection and switching between elevated and non-elevated runs. Today, however, I have something even better! 1. Multimining! Not just dual mining - mine any number of PoC coins. The feature branch for this change is feature/multimining-manycoins. However, the code required so much work before I could even start the feature itself that I actually made a preliminary branch, feature/pre-multimining-cleanups, where I did all the preceding "boring" cleanups and refactorings. The main problem with the code was its overabundant use of static global variables, and a lack of generalization in many places. Blagominer was adapted to be able to dual-mine BURST and BHD, but many things were programmed directly with that in mind and with no vision for the future. For example, coin configuration was kept in global bhd variables and used all over the place. The code responsible for reading configuration from the miner.conf file was totally duplicated and hardcoded for reading from bhd sections. Networking threads had code dedicated to each of the coins, and the logging code switched between the two coins with a hardcoded enum (which had 'burst' and 'bhd' entries). Even simple things like printing out the coin name were hardcoded to switch between "burst"/"bhd", even though the code already had a t_coin_info structure, where you'd normally look for a coin name. After analyzing various pieces of the global state related to coin configuration, internal state management, thread management, and CSV logging, I moved it all to t_coin_info and its supporting structures. Then I once again revisited the configuration-reading code and linked the creation of t_coin_info entries to scanning the config file and looking for coin configuration nodes (as opposed to reading from two specific nodes and putting that into two specific t_coin_infos), and basically the code was ready for multimining! What does it mean for the end user? Blagominer can now prioritize and switch between any number of coins, not only BURST and BHD, as long as the coins in question use compatible plot file formats. Here's a screenshot of triple-mining BURST+BHD+BOOM: Please take care: the config file format changed slightly. You have to update your config files, or else the new version will not understand them. That's why I changed the application's major version from 1 to 2 (kind of semver, but I still need to clean up the versioning scheme a bit). I also changed the name on the title screen, since "BURST/BHD miner" no longer makes sense. The title screen will now advertise the application as a "PoC multi miner", while the code name stays as it was: Blagominer.
2. NTFS file ordering - less drive clicking The feature branch for this change is feature/optimize-multifile-seeking. I use Blagominer myself and I noticed that some of my drives were really noisy, while others were working relatively silently. Same brand, same model, probably a different time of purchase. These louder ones also had noticeably worse read times. SMART data didn't reveal any failures or real problems. After inspection, it turned out that those were the oldest drives, which I had plotted with TurboPlotter and where I set a maximum plot file size of 8 GB. I remember I was experimenting with various settings, and writing to SMR Seagate drives took ages, and I finally decided to split the files into 8 GB chunks so it would be easier to continue when something crashed, etc. ...and that was rather a bad idea. First of all, I didn't know the details of the POC2 file format, which was optimized to speed up reading from large files. If I had kept each plot as one large file, all nonces could have been read in one seek and one go. My settings caused the plot to be split into hundreds of files, and now each one has to be read separately. This alone lowers reading speed and partially nullifies the gains from the POC2 optimisation. However, an even worse effect came from TurboPlotter. It turns out that when it is given a max-file-size option, it splits the plot file accordingly and keeps track of the nonce space to cover it fully, but it writes the files to the drive in some weird order, far from the numeric or alphabetic order that I would expect. Blagominer in turn always read the files in the order of nonces. That caused constant back-and-forth seeking to reach the next file in line, randomly scattered over the whole drive. Aside from lowering the speed, it also shortens the drive's lifespan. There were only two solutions: either replot all those SMR drives, or try to change the file reading order. I didn't fancy replotting (at least for now), so the first piece of this puzzle was to find out the physical location of all the files. The operating system surely knows it, and every "defragmenter" tool is able to retrieve it, so it certainly was possible. I found this old post which discussed exactly this problem. On Windows, it actually boils down to two calls to IOCTL_VOLUME_LOGICAL_TO_PHYSICAL. Most probably the second call wasn't really necessary just to reorder the files. After changing the file access so that files are read in order of their physical location on the drive (the idea is sketched below), the overall reading speed rose 10 to 15%, depending mostly on the drive size. Also, the noise completely disappeared. The drives stopped seeking back and forth; now all the seeking between files is done in one direction, forward, which is perfect for the drive itself. This option has negligible effect on drives with very few plot files, but it has great potential when, for any reason, you have 100 or more plot files on a physical disk. Also, the net effect may be nil if your plot files are fragmented, since this option orders files only by where they start on the drive, not by how all their parts are scattered across it. I considered an additional option to optimize reads for this case, but for now its implementation seems awfully complex and not worth the effort. If you have hundreds of plot files on a drive and these files are also fragmented, I suggest a quick format and replotting. I plan to replot my SMR drives to remove that file jungle, just not right now. I don't know of any case where this option would induce negative effects.
Determining the physical order of files takes some time, seen as slowly 'spinning up the speeds' in the UI, but reading them in order seems to make up for that by a large margin. Nevertheless, this is a new option and totally an experiment, so it's OFF by default. You can turn this option on simply by adding a '@' before a directory path in the configuration file. For example, if a path was c:\\data\\plots, now it would be @c:\\data\\plots. Simple as that.
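The core of the optimisation is easy to express once a file's physical position is known. A minimal sketch follows, with first_lcn() as a hypothetical stand-in for the IOCTL_VOLUME_LOGICAL_TO_PHYSICAL lookup the post describes:

```python
# Sketch of the read-order optimisation described above: sort plot files by
# physical position on disk so the head only seeks forward. first_lcn() is a
# hypothetical stand-in for the IOCTL_VOLUME_LOGICAL_TO_PHYSICAL lookup the
# post mentions; the real implementation is Windows-specific.
from pathlib import Path

def first_lcn(path: Path) -> int:
    """Hypothetical: resolve the file's first logical cluster number."""
    raise NotImplementedError("platform-specific ioctl lookup goes here")

def physical_read_order(plot_dir: Path) -> list[Path]:
    """Plot files ordered by on-disk position instead of by nonce range."""
    return sorted(plot_dir.iterdir(), key=first_lcn)
```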
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145648.56/warc/CC-MAIN-20200222023815-20200222053815-00321.warc.gz
CC-MAIN-2020-10
7,482
38
https://github.com/ahamedshoaib
code
simple 2d side scroller similar to mario using only ascii characters, made for my 12th grade comp science project. Forked from jlord/patchwork: All the Git-it Workshop completers!
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888113.39/warc/CC-MAIN-20180119184632-20180119204632-00699.warc.gz
CC-MAIN-2018-05
338
7
https://www.gleniboutique.com/tatiana-handbag-in-back-cut-brunello-python.html
code
Made in soft back-cut python leather (with large central scales), Tatiana has a soft line and its style is linear and classic: a perfect day bag to wear with every outfit, from the most casual to the most refined. Its two middle-length handles allow it to be comfortably worn on the shoulder or elegantly held in the hand or on the forearm. The single internal compartment, opened by a refined golden zip, is lined with brown nappa leather and is provided with a zippered internal document pocket, a cell phone pocket, and two other small internal pockets. The Brunello colour used for the manufacture of Tatiana is one of the most fashionable shades of this period: a dark brown tonality with a red-wine undertone, more evident in proximity to the central scales, which are larger and more elongated. The final effect is very particular: the sober brown shade brightens under the light, creating a play of intriguing red-wine nuances. The front of the Tatiana bag carries the small Gleni logo in silver and gold metal. All the other metal accessories are gold.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206133.46/warc/CC-MAIN-20200922125920-20200922155920-00511.warc.gz
CC-MAIN-2020-40
1,070
6
https://businesshear.com/fix-overflow-error-in-quickbooks-desktop/
code
Are you exasperated by the Overflow error in the QuickBooks software? Unlike other error codes, this error is quite peculiar in nature. However, through this blog we offer you the most viable solutions to fix the Overflow error in QuickBooks Desktop. The accounting tool QuickBooks is essential in helping business owners, accountants, and tax experts achieve their professional objectives. This application is undoubtedly cutting-edge. But just like any other piece of software, QuickBooks occasionally encounters issues that force accounting professionals to put their work on hold. This post walks you through all the necessary steps to fix the Overflow error in QuickBooks Desktop. You will also learn about some of the most frequent causes of this problem, along with some helpful advice for fixing it. You may also read: QuickBooks Won’t Open What triggers the Overflow error in QuickBooks Desktop? There might be one or more of the following reasons behind the occurrence of this error: - Account balances in your company file or other fields exceed $9,999,999,999,999.99. - Transaction data in your company file is corrupted or damaged. - The value of the inventory exceeds the quantity of the items. - The error may arise from a component item of a group if you convert a very large amount on one group item to another large amount. - The format does not correspond to a format that the QuickBooks accounting software accepts. Easy ways to resolve the Overflow error in QuickBooks Important note: before attempting any of the resolutions, it is advised to confirm that your computer satisfies the system requirements for QuickBooks Desktop editions. Solution 1: Rebuild the data - Go to the File menu, choose Utilities, and then select Rebuild Data. - If a dialogue box prompts you to back up your company file to protect your data, click OK. - After the rebuild is finished, click OK. Solution 2: In case of a mismatch If you export your files into Microsoft Excel to check for Overflow, take the following actions when there is a mismatch: - If a format mismatch caused the overflow issue, overflow rows may not show up on the error report. Arrange the columns so they appear by month and year. - Export your spreadsheet to your system. - Minimize QuickBooks, then right-click the export file. - Choose Open With, and then Microsoft Excel. - Press the Ctrl and F keys to open the search box. - Type in "Overflow" and click Find Next. - Reopen QuickBooks and correct the information in the Account, Name, and Item fields. - Return to Microsoft Excel and continue until no more results are found. Solution 3: Overflow issue with a group item Here is how to resolve an overflow issue brought on by a group item: - Set the group item quantity to 0. - Enter the precise group item quantity once more. Solution 4: If only a single item is overflowing In most circumstances, if you get a warning while working in QuickBooks that an item is overflowing, the most recently added item may be the culprit. Nevertheless, in some instances a previously existing item contributes to the error. Case 1: If the error is caused by the newly added item, remove it from the list and recreate it. Case 2: If it's an old item, examine the item and resolve the transaction that is causing the Overflow.
If the current transaction is what's generating the problem, use Adjust Quantity/Value on Hand to adjust or correct the average cost. Solution 5: In case an account balance or other field is too large The account balance field and other fields may occasionally exceed their maximum values for unknown reasons. In QuickBooks, the top dollar amount is $9,999,999,999,999.99. Any time an account balance exceeds the maximum, the field will automatically display an overflow error. In this scenario, you have a few options: - Carry out the necessary troubleshooting for simple data damage. - Create a portable QuickBooks company file. - Transfer the data to a fresh, functional file. - Check your chart of accounts: - Choose Chart of Accounts from the Lists menu. - Look for an account with a balance of at least 10,000,000,000. - Reduce that account's balance. - If the problem persists because too many memorized reports are involved, do the following: - Run the report from the Reports menu. - If the new report doesn't overflow, delete it and recreate the memorized report. - In case of Overflow, proceed to the subsequent step. - Perform a thorough search through all fields and totals in all lists. Including inactive items may be necessary as you go through your lists. - Choose Add/Edit Multiple List Entries from the Lists menu. - Click Customize Columns and choose any field that has a dollar amount (Cost, Price, etc.). Find the Overflow value and the right quantity, and then modify or delete it. - Verify all the other lists. - If you're using an upgraded version of QuickBooks, click Edit, search for Overflow in transactions, and then search the list components. - When the error warning stops appearing, edit the quantities and keep testing them. Solution 6: If there is a corrupted transaction If a corrupted transaction caused the issue, take the following actions: - Run financial statements, sales by customer, sales by item, custom transaction detail, and other reports until you discover the Overflow. - Use QuickZoom repeatedly to drill down to the transaction level. - Correct the transaction. The Overflow error might seem quite peculiar and different from the other error codes encountered in QuickBooks. However, resolving it is quite easy if you implement all the above-mentioned solutions properly. Hopefully the resolutions will work for you. If they don't, you may call our support team for assistance.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817103.42/warc/CC-MAIN-20240416155952-20240416185952-00613.warc.gz
CC-MAIN-2024-18
6,029
64
http://xconnectalliance.com/readypartners.htm
code
XConnect helps its members obtain the building blocks of successful services via the XConnect Ready Partners Program, a tested array of vendors who can help solve service challenges such as licensing codecs, selecting session border controllers, sourcing DID origination and purchasing quality off-net termination. Request information about joining the XConnect Ready program.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039749054.66/warc/CC-MAIN-20181121153320-20181121175320-00239.warc.gz
CC-MAIN-2018-47
376
5
https://www.openindiana.org/fr/2015/11/30/firefox-43b/
code
Firefox’s latest beta release has just landed in Hipster’s current repository and is available for installation. The package is based on Martin Bochnig’s excellent port of Firefox to illumos distributions and has been merged into oi-userland by Alexander Pyhalov. Martin’s sources are available here, while the OpenIndiana component can be found here. This update addresses security issues fixed in recent Firefox releases, provides improved compatibility with numerous websites, and now offers Shumway support for Flash content. Several known limitations exist on OpenIndiana: - some websites show regressions of the Flash plugin wrapper (not related to the Firefox port itself), - support for audio/video codecs is not complete, - issues were reported with spell checking and localization. Running HTML5 tests gives a score of 416, to be compared with a maximum of 464 reported for Firefox 40; the difference consists of missing support for some audio/video codecs, input devices (webcam, gamepad), and WebGL. OpenIndiana Hipster users are welcome to provide testing and report feedback to the mailing list and issue tracker. Finally, let us thank Martin again for his continued contributions, and if you would like to support his work, follow this link.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943471.24/warc/CC-MAIN-20230320083513-20230320113513-00141.warc.gz
CC-MAIN-2023-14
1,267
11
https://gwenfaraday.com/weekly-newsletter-april-18th/
code
Hi, everyone! How's it going? I spent some quality time this week writing scripts for my upcoming YouTube videos, and I cannot wait to share them with you all! This new video series will be all about new developers diving into the marketplace, from jumping into a new code base to onboarding at a new company. With that in mind, let's talk about habits to adopt within your first few weeks at a new job that will help your adaptation process and help you catch up on the work to be done. #1: My top recommendation is to ask a lot of questions and document everything about the codebase, the onboarding process, and the business jargon. Writing things down helps you review and make associations so you learn better, and it provides a valuable resource for the company to improve their onboarding process. #2: Jump into the codebase slowly. They will probably have you start by going over some initial documentation or setup instructions to get your dev environment up and running. After that, it's important to start playing around with the codebase on your own. Some people get really stressed out when they see a large codebase and don't know where to start. Just take a deep breath; you don't have to understand it all at once, but you should have a strategy for how to tackle learning it. I like to start by going through any test cases they have and then working through files line by line to understand what's going on. I take notes on associations between different modules or data structures, and on anything I don't understand, so I can bring it up later with whoever is onboarding me (it's usually better not to ask each question individually, but to group them together to be more efficient and respectful of the other developer's time). Writing documentation for yourself is a great way to learn and can often turn into something useful for the team at the same time. #3: Make sure you have some easy tasks that will let you start contributing within the first few weeks at the new company. If your first task is too big or ambiguous to start with, then bring it up right away so the task can be better documented, broken down, or replaced with another one that is better suited to you starting out. Some companies don't have solid onboarding processes, so they might not realize that the task isn't right for a new hire. You should really study the first tasks that you are given and ask yourself whether each is something you can work on without a deep understanding of the codebase. #4: Don't wait too long to reach out if you are struggling or don't understand something. Years ago, I made a rule for myself that I would never stay stuck on something for more than several hours without reaching out for help. It's common to have imposter syndrome when you are starting a new job, and you don't want to look like you don't know what you are doing. However, the developers on your team will almost certainly be happy that you asked for help and will see it as a sign of a mature developer who knows when to reach out instead of struggling alone. Most of all, remember to take care of your physical and mental health. Onboarding at a new company can feel overwhelming and challenging, so it's important to push yourself to take breaks and prioritize your wellbeing. I'll be diving into this topic in some of my upcoming videos, but until then, feel free to reach out to our community on Discord and talk about some of your experiences onboarding at a new company! :) Have a great week,
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474649.44/warc/CC-MAIN-20240225234904-20240226024904-00402.warc.gz
CC-MAIN-2024-10
3,478
11
https://www.beansmart.com/accounting/exact-accounting-software-data-retrieval-8446-.htm
code
- posted 13 years ago We were just given one of our offices' Exact Accounting software packages. We need to extract data out of it. It looks like a Microsoft Access and SQL Server application. Does anyone have any experience with Exact software who could help me out (or at least point me in the right direction)?
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00290.warc.gz
CC-MAIN-2020-24
311
2
http://classblogmeister.com/blog.php?blogger_id=356452&user_id=356452&blog_id=1458864&position2=-1
code
Sup guys. This is a new story. It all started five years ago, when one rhino saw a weird thing on a tree. He flipped out, then he told his superior, but the superior thought he was crazy. They put him in a cage, but he was not crazy, because the next day there was what looked like a big lizard from the sky. The lizard crashed down to earth and said that his kind would take over the world. THE NEXT DAY. About five hundred more of those lizards had come falling down from the sky, and they were angry. So the rhinos had figured out that the lizards wanted that thing in the tree, but the rhinos had it. What will happen next? How do the rhinos get rid of the sky lizards? And why the heck do the lizards want that piece of garbage anyway?
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145578.23/warc/CC-MAIN-20160205193905-00053-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
737
1
https://wiki.mafiascum.net/index.php?title=Traffic_Analyst
code
A Traffic Analyst is a role that is capable of checking to see whether a player can privately communicate. As an informative role, its Night action is to choose a player, and the analyst will learn whether or not there are any players that the target can legally communicate with outside the game thread. (The identity of the people that the target can communicate with is not learned, nor is the content of the communications.) Note that merely having access to a private topic is not necessarily enough to be able to communicate; there has to be a second living player in the private topic in question to communicate with. In general, the role will give a "can communicate" result on a player who shares a private topic with another living player, and also on a Mailman; and a "cannot communicate" result on anyone else.

A PT Cop checks whether a player has access to a private topic. Unlike the Traffic Analyst, this will get "has access" results even on players who are in a private topic by themselves, but will give a "no access" result on a Mailman (who has the ability to privately communicate, but without the use of a private topic). Some Theme games have instead checked to see whether a player is actually making use of their private communication, i.e. checking to see if the player has privately sent a message that Day. This version of the role can be fooled simply by staying silent.

Both this role and the PT Cop variant are considered Normal on mafiascum.net. The former should receive positive results only if there is more than one player alive in the private thread; the latter should receive positive results regardless. Additionally, this role's action should resolve after any kills.

Use and Power

Like a Gunsmith, this role gives unreliable information about a player's alignment; a "can communicate" result would most commonly come from the Mafia's factional communication, but could also be indicative of a Neighbor. One notable difference from a Gunsmith is that a Neighborizer effectively acts as a permanent Framer to a Traffic Analyst (meaning that it will effectively make the Traffic Analyst less and less useful over time). The biggest difference from most other investigative roles, though, is that a Traffic Analyst becomes useless for finding scum once there's only one scum left; this makes the role useful for balancing even games with no special communication rules, as it has much less swing than a typical investigative role. As such, it's very much an early-game investigative role, compared to a Tracker which is more useful in the late game. Compare variant 2 of the Psychologist, which has similar properties.

Roles to check whether a player has access to a private topic have been seen sporadically in Theme games throughout the years. The first known use of the specific Traffic Analyst version was by callforjudgement in Micro 690.
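To pin down the difference between the two checks, here is a small sketch; the data model and function names are invented for illustration and are not part of any mafiascum.net specification:

    #include <string>
    #include <vector>

    // Toy model of the game state assumed by the checks described above.
    struct Player {
        std::string name;
        bool alive = true;
        bool isMailman = false;          // can communicate without a private topic
        std::vector<int> privateTopics;  // IDs of private topics the player can post in
    };

    // Traffic Analyst: "can communicate" if the target is a Mailman, or shares
    // at least one private topic with another *living* player.
    bool trafficAnalystResult(const Player& target, const std::vector<Player>& all) {
        if (target.isMailman) return true;
        for (int topic : target.privateTopics)
            for (const Player& other : all) {
                if (&other == &target || !other.alive) continue;
                for (int t : other.privateTopics)
                    if (t == topic) return true;  // a living partner exists
            }
        return false;
    }

    // PT Cop: "has access" whenever the target belongs to any private topic,
    // even alone; a Mailman with no topic reads as "no access".
    bool ptCopResult(const Player& target) {
        return !target.privateTopics.empty();
    }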
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660818.25/warc/CC-MAIN-20190118213433-20190118235433-00508.warc.gz
CC-MAIN-2019-04
2,884
9
https://forums.unrealengine.com/t/thoughts-on-approach-for-simcity-style-camera/2886
code
I’m building an RTS style game and I really like the camera controls in the latest SimCity, so I'm using them as inspiration. I’m looking at how to wire this up and I see two approaches so far. My preferred approach is to use an invisible character that “walks” on terrain, with the camera attached to a movable boom on this invisible character's pivot point, very much like the top down sample. My issue is I would need the character to only “walk” on the terrain and be able to go through buildings, trees, etc. I believe I can do that with the collision model. Is this a viable approach to this problem, or should I not use a character at all and just do a ray trace to the terrain to find the pivot point, then do all camera operations based on the ray-traced pivot point?
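For what it's worth, a minimal sketch of the trace-based variant, assuming UE4's standard LineTraceSingleByChannel; the pawn class ARTSCameraPawn, the PivotPoint member, and the ECC_GameTraceChannel1 channel are all illustrative assumptions, not engine-provided names:

    // Inside a custom camera pawn's Tick: find the terrain point under the
    // camera and pivot around it.
    void ARTSCameraPawn::UpdatePivotFromTerrain()
    {
        const FVector Start = GetActorLocation();
        const FVector End = Start - FVector(0.f, 0.f, 100000.f); // trace straight down

        FHitResult Hit;
        FCollisionQueryParams Params(FName(TEXT("TerrainPivot")), false, this);

        // Use a trace channel that only the landscape blocks, so buildings and
        // trees are ignored (set this up under Project Settings > Collision).
        if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End,
                                                 ECC_GameTraceChannel1, Params))
        {
            PivotPoint = Hit.ImpactPoint; // attach the camera boom to this point
        }
    }

This avoids the collision bookkeeping of an invisible character entirely; the character-based approach still works, but you end up maintaining a collision profile whose only job is to ignore everything except the landscape.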
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00363.warc.gz
CC-MAIN-2021-17
777
2
https://muxy.net/video/extreme-water-science-with-richard-hammond-richard-hammonds-wild-weather-spark
code
Richard Hammond investigates the crucial role water plays. Without water, there would be almost no weather: no rain, no snow, no hail, no clouds. So Richard goes in pursuit of water in all its forms. He tries to weigh a cloud, finds out how rain could crush a car, and gets involved in starting an avalanche.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233509023.57/warc/CC-MAIN-20230925151539-20230925181539-00402.warc.gz
CC-MAIN-2023-40
737
8
https://www.apecs.is/career-resources/job-board/details/2/4121.html
code
- Postdoctoral Researcher

Postdoctoral Researcher in Ice-Sheet Modelling in the project PalMod - From the Last Interglacial to the Anthropocene

The position is open now and the successful candidate should start as soon as possible. Irrespective of the start date, the position will end on December 31, 2026. Salary corresponds to 100% TV-L E13. The employment is governed by the Act of Academic Fixed-Term Contract (Wissenschaftszeitvertragsgesetz – WissZeitVG).

The goal of PalMod is to understand climate system dynamics and variability during the last glacial cycle. PalMod aims at simulating key periods of the last glacial cycle in transient mode with comprehensive Earth System Models that include interactive ice sheets. PalMod addresses climate variability during the last glacial cycle on a large range of time scales, from interannual to multi-millennial, and attempts to quantify the relative contributions of external forcing and processes internal to the Earth system. In order to achieve a higher level of understanding of natural climate variability at time scales of millennia, its governing processes and implications for the future climate, PalMod brings together three different research communities: the Earth system modelling community, the proxy data community and the computational science community.

We invite energetic and creative applicants with a strong research background in ice-sheet modelling. As the successful applicant, you will work in an interdisciplinary team of Earth System modellers in the Geosystem Modelling group at MARUM (www.marum.de). Within this team, you will contribute to the continuous improvement of the coupled Earth System model (CESM-PISM) with a special focus on the ice-sheet component (PISM) and its coupling to the atmosphere and ocean. You will further carry out model simulations of the last glacial and deglaciation in a supercomputing environment. In particular, you will explore the role of ice-sheet dynamics in millennial-scale climate variability (e.g. Heinrich events) and publish your scientific results in international journals.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100286.10/warc/CC-MAIN-20231201084429-20231201114429-00519.warc.gz
CC-MAIN-2023-50
2,100
8
https://www.daniweb.com/digital-media/ui-ux-design/threads/67023/cgi-email
code
Hi, I'm new to all this, so I hope I am in the right place. I have made a form in Dreamweaver MX and a CGI email script in my cgi-bin. However, I am unable to receive the information; I get various errors. I believe I need to make a .txt file as well, to let the CGI script know where to send the information. This is where I am stuck, as I don't have a clue what this .txt file should look like or contain. Can anyone HELP! :rolleyes:

Can you please provide what sort of language you're using to create the cgi script? Could you also share what sort of errors you are receiving? To be honest, I'm not at all familiar with Dreamweaver; although, I could probably provide some help with some more information.…
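On the original question about the .txt file: if the script in question is the classic MIT cgiemail (a guess, since the .txt-template detail matches it, and other form mailers differ), the .txt file is just a mail template: real mail headers at the top, then a body in which [bracketed] names get replaced by the form fields of the same name. The address and field names below are placeholders for illustration:

    To: you@yourdomain.example
    Subject: Website form submission

    Name:    [realname]
    Email:   [email]
    Message: [comments]

The form's action then points at the script with the template's path appended, something like /cgi-bin/cgiemail/templates/form.txt, though the exact path depends on your host's setup.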
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.26/warc/CC-MAIN-20240227121318-20240227151318-00022.warc.gz
CC-MAIN-2024-10
904
6
https://docs.oracle.com/cd/E19396-01/817-7607/sizing.html
code
Sun Java(TM) System Directory Server 5.2 2005Q1 Deployment Planning Guide

Appropriate hardware sizing is a critical component of directory service planning and deployment. When sizing hardware, the amount of memory available and the amount of local disk space available are of key importance. For best results, install and configure a test system with a subset of entries representing those used in production. You can then use the test system to approximate the behavior of the production server. When optimizing for particular systems, ensure you understand how system buses, peripheral buses, I/O devices, and supported file systems work so you can take advantage of I/O subsystem features when tuning these to support Directory Server.

This chapter suggests ways of estimating disk and memory requirements for a Directory Server instance. It also touches on network and SSL accelerator hardware requirements.

Suggested Minimum Requirements

Table 10-1 proposes minimum memory and disk space requirements for installing and using the software in a production environment. Minimum requirements for specified numbers of entries may in fact differ from those provided in Table 10-1. Sizes here reflect relatively small entries, with indexes set according to the default configuration, and with caches minimally tuned. If entries include large binary attribute values such as digital photos, or if indexing or caching is configured differently, then revise minimum disk space and memory estimates upward accordingly.

Table 10-1 Minimum Disk Space and Memory Requirements

  Number of Entries        Free Local Disk Space   Minimum Available Memory
  -                        At least 125 MB         -
  -                        At least 200 MB         At least 256 MB
  -                        Add at least 3 GB       Add at least 256 MB
  -                        Add at least 5 GB       Add at least 512 MB
  Over 1,000,000 entries   Add 8 GB or more        Add 1 GB or more

Minimum disk space requirements include 1 GB devoted to access logs. By default, Directory Server is configured to rotate through 10 access log files (nsslapd-accesslog-maxlogsperdir on cn=config), each holding up to 100 MB (nsslapd-accesslog-maxlogsize on cn=config) of messages. Volume for error and audit logs depends on how Directory Server is configured. Refer to "Monitoring Directory Server Using Log Files" in the Directory Server Administration Guide for details on configuring logging.

Minimum Available Memory

Minimum memory estimates reflect memory used by an instance of Directory Server in a typical deployment. The estimates do not account for memory used by the system and by other applications. For a more accurate picture, you must measure memory use empirically. Refer to Sizing Physical Memory for details. As a rule, the more available memory, the better.

Minimum Local Disk Space

Minimum local disk space estimates reflect the space needed for an instance of Directory Server in a typical deployment. Experience suggests that if directory entries are large, the space needed is at minimum four times the size of the equivalent LDIF on disk. Refer to Sizing Disk Subsystems for details.

Do not install the server or any data it accesses on network disks. Directory Server software does not support the use of network attached storage via NFS, AFS, or SMB. Instead, all configuration, log, database, and index files must reside on local storage at all times, even after installation.

Minimum Processing Power

High volume systems typically employ multiple, high-speed processors to provide appropriate processing power for multiple simultaneous searches, extensive indexing, replication, and other features.
Refer to Sizing for Multiprocessor Systems for details.

Minimum Network Capacity

Testing has demonstrated that 100 Mbit Ethernet may be sufficient for even service provider performance, depending on the maximum throughput expected. You may estimate theoretical maximum throughput as follows:

  max. throughput = max. entries returned/second x average entry size

Imagine for example that a Directory Server must respond to a peak of 5000 searches per second, each returning 1 entry, with entries having an average size of 2000 bytes. The theoretical maximum throughput would then be 10 MB per second, or 80 Mbit per second, which is likely more than a single 100 Mbit Ethernet adapter can provide. Actual observed performance may vary.

If you expect to perform multi-master replication over a wide area network, ensure the connection provides sufficient throughput with minimum latency and near-zero packet loss. Refer to Sizing Network Capacity for more information.

Sizing Physical Memory

Directory Server stores information using database technology. As is the case for any application relying on database technology, adequate fast memory is key to optimum Directory Server performance. As a rule, the more memory available, the more directory information can be cached for quick access. In the ideal case, each server has enough memory to cache the entire contents of the directory at all times. As Directory Server 5.2 supports 64-bit memory addressing, it is now possible to handle total cache sizes of as much as the 64-bit processor can address. When deploying Directory Server in a production environment, configure cache sizes well below theoretical process limits, leaving appropriate resources available for general system operation.

Estimating the memory size required to run Directory Server involves estimating the memory needed both for a specific Directory Server configuration, and for the underlying system on which Directory Server runs.

Sizing Memory for Directory Server

Given estimated configuration values for a specific deployment, you can estimate the physical memory needed for an instance of Directory Server. Table 10-2 summarizes the values used for the calculations in this section.

Table 10-2 Values for Sizing Memory for Directory Server

  Entry cache size for a suffix - An entry cache contains formatted entries, ready to be sent in response to a client request. One instance may handle several entry caches.
  Database cache size - The database cache holds elements from databases and indexes used by the server.
  Database cache size for bulk import - Import cache is used only when importing entries. You may be able to avoid budgeting extra memory for import cache, instead reusing memory budgeted for entry or database cache, if you perform only offline imports.
  Maximum number of connections managed.
  Number of operation threads created at server startup.

To estimate approximate memory size, perform the following steps. Note that the entry cache includes an allocation overhead (in other words, the cache will consume more memory than you specify in the nsslapd-cachememsize parameter). This may appear to be a memory leak, but it is not. Depending on how the memory allocation library handles requests, actual memory used may be much larger than the memory specified. For more information see "Entry Cache" in the Directory Server Performance Tuning Guide.

- Determine the total size for all caches, cacheSum.
  cacheSum = entryCacheSum + nsslapd-dbcachesize + nsslapd-import-cachesize

- Determine the total size for the Directory Server process, slapdSize, where slapdBase is the measured size of the server process itself, before caches are accounted for.

  slapdSize = slapdBase + cacheSum

  You may use utilities such as pmap(1) on Solaris systems or the Windows Task Manager to measure the physical memory used by Directory Server.

- Estimate the memory needed to handle incoming client requests, slapdGrowth.

  slapdGrowth = 20% x slapdSize

  As a first estimate, we assume 20 percent overhead for handling client requests. The actual percentage may depend on the characteristics of your particular deployment. Validate this percentage empirically before putting Directory Server into production.

- Determine the total memory size for Directory Server, slapdTotal.

  slapdTotal = slapdSize + slapdGrowth

For large deployments involving 32-bit servers, slapdTotal may exceed the practical limit of about 3.4 GB (2.5 GB on Linux systems), and perhaps even the theoretical process limit of about 3.7 GB. In this case, you may choose either to tune caching to work within the limits of the system, or to use a 64-bit version of the product. For more information, see "Tuning Cache Sizes" in the Directory Server Performance Tuning Guide.

Sizing Memory for the Operating System

Estimating the memory needed to run the underlying operating system must be done empirically, as operating system memory requirements vary widely based on the specifics of the system configuration. For this reason, consider tuning a representative system for deployment before attempting to estimate how much memory the underlying operating system needs. For more information, see "Tuning the Operating System" in the Directory Server Performance Tuning Guide. After tuning the system, monitor memory use to arrive at an initial estimate, systemBase. You may use utilities such as sar(1M) on Solaris systems or the Task Manager on Windows to measure memory use.

For top performance, dedicate the system running Directory Server to this service only. If you must run other applications or services, monitor the memory they use as well when sizing total memory required. Additionally, allocate memory for general system overhead and normal administrative use. A first estimate for this amount, systemOverhead, should be at least several hundred megabytes, or 10 percent of the total physical memory, whichever is greater. The goal is to allocate enough space for systemOverhead that the system avoids swapping pages in and out of memory while in production. The total memory needed by the operating system, systemTotal, can then be estimated as follows.

  systemTotal = systemBase + systemOverhead

Sizing Total Memory

Given the slapdTotal and systemTotal estimates from the preceding sections, estimate the total memory needed, totalRAM.

  totalRAM = slapdTotal + systemTotal

Notice totalRAM is an estimate of the total memory needed, including the assumption that the system is dedicated to the Directory Server process, and including estimated memory use for all other applications and services expected to run on the system.
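As a worked example of the arithmetic above, the short program below chains the estimates together. Every input value is an invented placeholder for illustration, not a recommendation from this guide; substitute numbers measured on your own test system.

    #include <cstdio>

    int main() {
        /* Cache sizes in MB -- placeholders, not recommendations. */
        double entryCacheSum   = 512.0;  /* sum of all entry caches             */
        double dbCacheSize     = 256.0;  /* nsslapd-dbcachesize                 */
        double importCacheSize = 0.0;    /* offline imports can reuse cache RAM */
        double cacheSum = entryCacheSum + dbCacheSize + importCacheSize;

        /* slapdBase: measure empirically with pmap(1) or Task Manager. */
        double slapdBase = 150.0;
        double slapdSize = slapdBase + cacheSum;

        /* First estimate: 20% overhead for client requests; validate
           empirically before production, as the guide advises. */
        double slapdGrowth = 0.20 * slapdSize;
        double slapdTotal  = slapdSize + slapdGrowth;

        /* systemBase: measure with sar(1M); systemOverhead: at least several
           hundred MB or 10% of physical memory, whichever is greater. */
        double systemBase     = 300.0;
        double systemOverhead = 512.0;
        double systemTotal    = systemBase + systemOverhead;

        double totalRAM = slapdTotal + systemTotal;
        std::printf("Estimated totalRAM: %.0f MB\n", totalRAM);
        return 0;
    }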
Dealing With Insufficient Memory

In many cases, it is not cost effective to provide enough memory to cache all data used by Directory Server. At minimum, equip the server with enough memory that running Directory Server does not cause constant page swapping. Constant page swapping has a strong negative performance impact. You may use utilities such as vmstat(1M) on Solaris and other systems to view memory statistics before and after starting Directory Server and priming the entry cache. Unsupported utilities available separately, such as MemTool for Solaris systems, can be useful in monitoring how memory is used and allocated when applications are running on a test system.

If the system cannot accommodate additional memory, yet you continue to observe constant page swapping, reduce the size of the database and entry caches. Running out of swap space can cause Directory Server to shut itself down. Refer to "Tuning Cache Sizes" in the Directory Server Performance Tuning Guide for a discussion of the alternatives available when providing adequate physical memory to cache all directory data is not an option.

Sizing Disk Subsystems

Disk use and I/O capabilities can strongly impact performance. Especially for a deployment supporting large numbers of modifications, the disk subsystem can become an I/O bottleneck. This section offers recommendations for estimating overall disk capacity for a Directory Server instance, and for alleviating disk I/O bottlenecks. Refer to "Tuning Logging" in the Directory Server Performance Tuning Guide for more information on alleviating disk I/O bottlenecks.

Sizing Directory Suffixes

Disk space requirements for a suffix depend not only on the size and number of entries in the directory, but also on the directory configuration and in particular how the suffix is indexed. To gauge the disk space needed for a large deployment, perform the following steps:

- Generate LDIF for three representative sets of entries like those expected for deployment: one of 10,000 entries, one of 100,000, one of 1,000,000. Generated entries should reflect not only the mix of entry types (users, groups, roles, entries for extended schema) expected, but also the average size of individual attribute values, especially if single large attribute values such as userCertificate and jpegPhoto are expected.
- Configure an instance of Directory Server as expected for deployment. In particular, index the database as you would for the production directory. If you expect to add indexes later, expect to have to add space for those indexes as well.
- Load each set of entries, recording the disk space used for each set.
- Graph the results to extrapolate estimated suffix size for deployment.
- Add extra disk space to compensate for error and variation.

If you are using replication, note that entry state information (a list of old values) is stored with the entry and used during conflict resolution. State information can cause an entry to grow significantly in size and should be taken into account when sizing the suffix. Disk space for suffixes is only part of the picture; you must also consider how Directory Server uses disks.

How Directory Server Uses Disks

Directory suffixes are part of what Directory Server stores on disk. A number of other factors affecting disk use may vary widely depending on how Directory Server is used after deployment, and so are covered here in general terms. Refer to the Directory Server Administration Guide for instructions on configuring the items discussed here.

Directory Server Binaries

You need approximately 200 MB of disk space to install this version of Directory Server. This estimate is not meant to include space for data or logs, but only for the product binaries.

Logs

Disk use estimates for log files depend on the rate of Directory Server activity, the type and level of logging, and the strategy for log rotation. Many logging requirements can be predicted and planned in advance.
If Directory Server writes to logs, and in particular audit logs, disk use increases with load level. When high load deployments call for extensive logging, plan for extra disk space to accommodate the high load. You may decrease disk space requirements for deployments with high load logging by establishing an intelligent log rotation and archival system, rotating the logs often, and automating migration of old files to less expensive, higher capacity storage mediums such as tape or cheaper disk clusters.

Some logging requirements cannot easily be predicted. Debug logging can cause temporary but explosive growth in the size of the errors log, for example. For a large, high load deployment, consider setting aside several gigabytes of dedicated disk space for temporary, high-volume debug logging. Refer to "Tuning Logging" in the Directory Server Performance Tuning Guide for further information.

Transaction Logs

Transaction log volume depends upon peak write loads. If writes occur in bursts, transaction logs use more space than if the write load is constant. Directory Server trims transaction logs periodically, so transaction logs should not continue to grow unchecked. Transaction logs are not flushed during online backup, because database files cannot be modified while they are being copied (this would result in an inconsistent data image). Transaction logs are copied to the backup location as the last step of the backup.

Directory Server is generally run with durable transactions enabled. When durable transaction capabilities are enabled, Directory Server performs a synchronous write to the transaction log for each modification (add, delete, modify, modrdn) operation. In this case, an operation can be blocked if the disk is busy, resulting in a potential I/O bottleneck. If update performance is critical, plan to use a disk subsystem having fast write cache for the transaction log. Refer to "Tuning Logging" in the Directory Server Performance Tuning Guide for further information.

Replication Changelog Database

If the deployment involves replication, the Directory Server suppliers perform change logging. Changelog size depends on the volume of modifications and on the type of changelog trimming employed. Plan capacity based on how the changelog is trimmed. For a large, high load deployment, consider setting aside several gigabytes of disk space to handle changelog growth during periods of abnormally high modification rates. Refer to "Tuning Logging" in the Directory Server Performance Tuning Guide for further information.

Suffix Initialization and LDIF Files

During suffix initialization, also called bulk loading or importing, Directory Server requires disk space not only for the suffix database files and the LDIF used to initialize the suffix, but also for intermediate files used during the initialization process. Plan extra (temporary) capacity in the same directory as the database files for the LDIF files and for the intermediate files used during suffix initialization. This may be as much as double the size of the largest index, depending on what indexes you have created.

Backups and LDIF Files

Backups often consume a great deal of disk space. The size of a backup equals the size of the database files involved, plus the transaction logs. Accommodate several backups by allocating space equal to several times the volume of the database files, ensuring that databases and their corresponding backups are maintained on separate disks.
Employ intelligent strategies for migrating backups to cheaper storage mediums as they age. If the deployment involves replication, plan additional space to hold initialization LDIF files, as these differ from backup LDIF files.

Memory Based Rather Than Disk Based File Systems

Some systems support memory based tmpfs file systems. On Solaris, for example, /tmp is often mounted as a memory based file system to increase performance. Only database cache files should be placed on a memory based file system. For more information, see "nsslapd-db-home-directory" in the Directory Server Administration Reference. Never put database or transaction log binaries or configuration files on a memory based file system. If cache files are placed on /tmp, a location shared with other applications on the system, ensure that the system never runs out of space under /tmp. Otherwise, when memory is low, Directory Server files in memory based file systems may be paged to the disk space dedicated for the swap partition.

Some systems support RAM disks and other alternative memory based file systems. Refer to the operating system documentation for instructions on creating and administering memory based file systems. Notice that everything in such file systems is volatile and must be reloaded into memory after system reboot. This reinitialization can take a long time to complete, depending on factors such as the processor speed, memory speed, and memory size.

Core Files

Leave room for at minimum one or two core files. Although Directory Server should not dump core, recovery and troubleshooting after a crash can be greatly simplified if the core file generated during the crash is available for inspection. When generated, core files are stored either in the same directory as the file specified by nsslapd-errorlog on cn=config, or under ServerRoot/bin/slapd/server/ if a crash occurs during startup.

Space for Administration

Leave room for expected system use, including system and Directory Server administration. Ensure that sufficient space is allocated for the base Directory Server installation, for the configuration suffix if it resides on the local instance, for configuration files, and so forth.

Distributing Files Across Disks

By placing commonly-updated Directory Server database and log files on separate disk subsystems, you can spread I/O traffic across multiple disk spindles and controllers, avoiding I/O bottlenecks. Consider providing dedicated disk subsystems for each of the following items.

Transaction Logs

When durable transaction capabilities are enabled, Directory Server performs a synchronous write to the transaction log for each modification operation. An operation is thus blocked when the disk is busy. Placing transaction logs on a dedicated disk can improve write performance and increase the modification rate Directory Server can handle. Refer to "Transaction Logging" in the Directory Server Performance Tuning Guide.

Databases

Multiple database support allows each database to reside on its own physical disk. You can thus distribute the Directory Server load across multiple databases, each on its own disk subsystem. To prevent I/O contention for database operations, consider placing each set of database files on a separate disk subsystem. For top performance, place database files on a dedicated fast disk subsystem with a large I/O buffer. Directory Server reads data from the disk when it cannot find candidate entries in cache. It regularly flushes writes.
Having a fast, dedicated disk subsystem for these operations can alleviate a potential I/O bottleneck. The nsslapd-directory attribute on cn=config,cn=ldbm database,cn=plugins,cn=config specifies the disk location where Directory Server stores database files, including index files. By default, such files are located under ServerRoot/slapd-ServerID/db/. Changing database location of course requires not only that you restart Directory Server, but also that you rebuild the database completely. Changing database location on a production server can be a major undertaking, so identify your most important database and put it on a separate disk before putting the server into production.

Logs

Directory Server provides access, error, and audit logs featuring buffered logging capabilities. Despite buffering, writes to these log files require disk access that may contend with other I/O operations. Consider placing log files on separate disks for improved performance, capacity, and management. Refer to "Tuning Logging" in the Directory Server Performance Tuning Guide for more information.

Cache Files on Memory Based File Systems

In a tmpfs file system, for example, files are swapped to disk only when physical memory is exhausted. Given sufficient memory to hold all cache files in physical memory, you may derive improved performance by allocating equivalent disk space for a tmpfs file system on Solaris platforms, or other memory based file systems such as RAM disks on other platforms, and setting the value of nsslapd-db-home-directory to have Directory Server store cache files on that file system. This prevents the system from unnecessarily flushing memory mapped cache files to disk.

Disk Subsystem Alternatives

"Fast, cheap, safe: pick any two." (Sun Performance and Tuning, Cockroft and Pettit)

Fast and Safe

When implementing a deployment in which both performance and uptime are critical, consider hardware-based RAID controllers having non-volatile memory caches to provide high speed buffered I/O distributed across large arrays of disks. By spreading load across many spindles and buffering access over very fast connections, I/O can be optimized, and excellent stability provided through high performance RAID striping or parity blocks. Large non-volatile I/O buffers and high performance disk subsystems such as those offered in Sun StorEdge products can greatly enhance Directory Server performance and uptime.

Fast write cache cards provide potential write performance improvements, especially when dedicated for database and/or transaction log use. Fast write cache cards provide non-volatile memory cache that is independent from the disk controller.

Fast and Cheap

For fast, low-cost performance, ensure you have adequate capacity distributed across a number of disks. Consider disks having high rotation speed and low seek times. For best results, dedicate one disk to each distributed component. Consider using multi-master replication to avoid single points of failure.

Cheap and Safe

For cheap, safe configurations, consider low-cost, software-based RAID controllers such as Solaris Volume Manager.

RAID stands for Redundant Array of Inexpensive Disks. As the name suggests, the primary purpose of RAID is to provide resiliency. If one disk in the array fails, data on that disk is not lost but remains available on one or more other disks in the array. To implement resiliency, RAID provides an abstraction allowing multiple disk drives to be configured as a larger virtual disk, usually referred to as a volume.
This is achieved by concatenating, mirroring, or striping physical disks. Concatenation is implemented by having blocks of one disk logically follow those of another disk. For example, disk 1 has blocks 0-99, disk 2 has blocks 100-199, and so forth. Mirroring is implemented by copying blocks of one disk to another and then keeping them in continuous synchronization. Striping uses algorithms to distribute virtual disk blocks over multiple physical disks.

The purpose of striping is performance. Random writes can be dealt with very quickly, as data being written is likely to be destined for more than one of the disks in the striped volume, hence the disks are able to work in parallel. The same applies to random reads. For large sequential reads and writes the case may not be quite so clear. It has been observed, however, that sequential I/O performance can be improved. An application generating many I/O requests can swamp a single disk controller, for example. If the disks in the striped volume all have their own dedicated controller, however, swamping is far less likely to occur, and so performance is improved.

RAID can be implemented using either a software or a hardware RAID manager device. There are advantages and disadvantages in using either method:

- Hardware RAID generally provides higher performance, as it is implemented in hardware and hence incurs less processing overhead than software RAID. Furthermore, hardware RAID is dissociated from the host system, leaving host resources free to execute applications.
- Hardware RAID is generally more expensive than software RAID.
- Software RAID can be more flexible than hardware RAID. For example, a hardware RAID manager is usually associated with a single array of disks or with a prescribed set of arrays, whereas software RAID can encapsulate any number of arrays of disks, or, if desired, only certain disks within an array.

The following sections discuss RAID configurations, known as levels. The most common RAID levels, 0, 1, 1+0 and 5, are covered in some detail, whereas less common levels are merely compared and contrasted.

RAID 0, Striped Volume

Striping spreads data across multiple physical disks. The logical disk, or volume, is divided into chunks or stripes and then distributed in a round-robin fashion on physical disks. A stripe is always one or more disk blocks in size, with all stripes having the same size. The name RAID 0 is a contradiction in that it provides no redundancy. Any disk failure in a RAID 0 stripe causes the entire logical volume to be lost. RAID 0 is, however, the least expensive of all RAID levels, as all disks are dedicated to data.

RAID 1, Mirrored Volume

The purpose of mirroring is to provide redundancy. If one of the disks in the mirror fails, then the data remains available and processing may continue. The trade off is that each physical disk is mirrored, meaning that half the physical disk space is devoted to mirroring.

RAID 1+0

Also known as RAID 10, RAID 1+0 provides the highest levels of performance and resiliency. Consequently, it is the most expensive level of RAID to implement. Data continues to remain available after up to three disk failures, as long as all of the disks that fail form different mirrors. RAID 1+0 is implemented as a striped array whose segments are RAID 1.

RAID 0+1

RAID 0+1 is slightly less resilient than RAID 1+0. A stripe is created and then mirrored. If one or more disks fail on the same side of the mirror, then the data remains available.
If a disk then fails on the other side of the mirror, however, the logical volume is lost. This is the subtle difference from RAID 1+0, where disks on either side can fail simultaneously yet data remains available. RAID 0+1 is implemented as a mirrored array whose segments are RAID 0.

RAID 5

RAID 5 is not as resilient as mirroring, yet nevertheless provides redundancy in that data remains available after a single disk failure. RAID 5 implements redundancy using a parity stripe created by performing a logical exclusive or on bytes of corresponding stripes on other disks. When one disk fails, data for that disk is recalculated using the data and parity in the corresponding stripes on the remaining disks. Performance suffers, however, when such corrective calculations must be performed.

During normal operation, RAID 5 usually offers lower performance than RAID 0, 1+0 and 0+1, as a RAID 5 volume must do four physical I/O operations for every logical write. The old data and parity are read, two exclusive or operations are performed, and the new data and parity are written. Read operations do not suffer the same penalty and thus provide only slightly lower performance than a standard stripe using an equivalent number of disks. That is, the RAID 5 volume has effectively one less disk in its stripe because the space is devoted to parity. This means a RAID 5 volume is generally cheaper than RAID 1+0 and 0+1, because RAID 5 devotes more of the available disk space to data. Given the performance issues, RAID 5 is not generally recommended unless the data is read-only or unless there are very few writes to the volume. Disk arrays with write caches and fast exclusive or logic engines can mitigate these performance issues, however, making RAID 5 a cheaper, viable alternative to mirroring for some deployments.

RAID Levels 2, 3, and 4

RAID levels 2 and 3 are good for large sequential transfers of data such as video streaming. Both levels can process only one I/O operation at a time, making them inappropriate for applications demanding random access. RAID 2 is implemented using Hamming error correction coding (ECC). This means three physical disk drives are required to store ECC data, making it more expensive than RAID 5, but less expensive than RAID 1+0 as long as there are more than three disks in the stripe. RAID 3 uses a bitwise parity method to achieve redundancy. Parity is not distributed as in RAID 5, but is instead written to a single dedicated disk. Unlike RAID levels 2 and 3, RAID 4 uses an independent access technique where multiple disk drives are accessed simultaneously. It uses parity in a manner similar to RAID 5, except parity is written to a single disk. The parity disk can therefore become a bottleneck, as it is accessed for every write, effectively serializing multiple writes.

Software Volume Managers

Volume managers such as Solaris Volume Manager may also be used for Directory Server disk management. Solaris Volume Manager compares favorably with other software volume managers for deployment in production environments.

Monitoring I/O and Disk Use

Disks should not be saturated under normal operating circumstances. You may use utilities such as iostat(1M) on Solaris and other systems to isolate potential I/O bottlenecks. Refer to Windows help for details on handling I/O bottlenecks on Windows systems.

Sizing for Multiprocessor Systems

Directory Server software is optimized to scale across multiple processors.
In general, adding processors may increase overall search, index maintenance, and replication performance. In specific directory deployments, however, you may reach a point of diminishing returns where adding more processors does not impact performance significantly. When handling extremely demanding performance requirements for searching, indexing, and replication, consider load balancing and directory proxy technologies as part of the solution.

Sizing Network Capacity

Directory Server is a network intensive application. To improve network availability for a Directory Server instance, equip the system with two or more network interfaces. Directory Server can support such a hardware configuration, listening on multiple network interfaces within the same process. If you intend to cluster directory servers on the same network for load balancing purposes, ensure the network infrastructure can support the additional load generated. If you intend to support high update rates for replication in a wide area network environment, ensure through empirical testing that the network quality and bandwidth meet your requirements for replication throughput.

Sizing for SSL

By default, support for the Secure Sockets Layer (SSL) protocol is implemented in software. Using the software-based SSL implementation may have significant negative impact on Directory Server performance. Running the directory in SSL mode may require the deployment of several directory replicas to meet overall performance requirements. Although hardware accelerator cards cannot eliminate the impact of using SSL, they can improve performance significantly compared with a software-based implementation. Directory Server 5.2 supports the use of SSL hardware accelerators such as supported Sun Crypto Accelerator hardware. Using a Sun Crypto Accelerator board can be useful when SSL key calculation is a bottleneck. Such hardware may not improve performance when SSL key calculation is not a bottleneck, however, as it specifically accelerates key calculations during the SSL handshake to negotiate the connection, but not encryption and decryption of messages thereafter. Refer to "Using the Sun Crypto Accelerator Board" in the Directory Server Administration Guide for instructions on using such hardware with a Directory Server instance.
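Circling back to the rule of thumb under Minimum Network Capacity, the check is simple enough to script; the peak rate and entry size below are the guide's own worked example, and the helper function is merely illustrative:

    #include <cstdio>

    // max. throughput = max. entries returned/second x average entry size
    double throughputMbit(double entriesPerSecond, double avgEntryBytes) {
        double bytesPerSecond = entriesPerSecond * avgEntryBytes;
        return bytesPerSecond * 8.0 / 1e6;  // bytes/s -> megabits/s
    }

    int main() {
        // The guide's example: 5000 searches/s, 1 entry each, 2000 bytes/entry.
        std::printf("%.0f Mbit/s\n", throughputMbit(5000.0, 2000.0));  // 80 Mbit/s
        return 0;
    }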
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648226.72/warc/CC-MAIN-20180323122312-20180323142312-00728.warc.gz
CC-MAIN-2018-13
33,701
175
https://doabooks.org/doab?func=export&uiLanguage=en&application=refworks&query=28164
code
TY - BOOK ID - 28164 TI - Arbeits(un)fähigkeit herstellen AU - Koch, Martina PY - 2018 SN - 9783037777237 9783037771556 DB - DOAB KW - incapacity for work KW - labour market KW - job market KW - health restrictions KW - disabilities KW - integration KW - economy KW - inclusion UR - https://www.doabooks.org/doab?func=search&query=rid:28164 AB - Like most industrialised countries, Switzerland is increasingly attempting to (re)integrate people with health restrictions and disabilities into the job market. The reinforced political demand to reintegrate people with health restrictions challenges both the involved organisations and their employees. While the means and methods to assess (in)capacity for work are more and more refined, the corresponding practices become more and more diverse. On the basis of an ethnography of two Swiss cantonal work integration agencies, this study analyses how the institutions under scrutiny construct and deal with their clients’ (in)capacity for work. It reconstructs how “cases“ of health restrictions are organisationally problematized, negotiated, and dealt with and examines the underlying logic of these practices and strategies.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00413.warc.gz
CC-MAIN-2019-47
1,175
1
https://forum.arduino.cc/t/need-some-help-figuring-out-my-code/928272?page=4
code
I wasn't guessing your gender, but the enthusiastic interjector. This whole thread has been severely muddled by the unhelpful suggestions; 63 posts so far is just ridiculous. Let's get back to your OP. Which is exactly what you have to do.

There are - as I understand it - three parts to your system: a song which is to be read from a table of notes (array), an LED pattern similarly read from an array, and a button to toggle the whole shebang on and off. So your program loop needs to be a "state machine" involving three steps.

Let's say the first step is to monitor the button. Has it just been pressed (that is, was previously not pressed and now is)? If so, a button timer (millis) is started. If it was started on a previous pass in the loop and is still pressed (otherwise it is cancelled), millis() is checked to see if it has been consistently pressed long enough (about 10 ms) to be a genuine press, and if so, the "run/stop" flag is toggled.

Next step: check the song table. How long (millis) has the current note been playing? If not long enough, just keep playing it. If ready for the next note, OK, go to the next note.

Third step: check the LED pattern table. How long (millis) has the current pattern been playing? If not long enough, just keep playing it. If ready for the next pattern, OK, go to the next pattern.

This loop() containing three steps keeps repeating forever. The second and third steps are of course dependent on the "run/stop" flag; if it was set to "stop", then any note is switched off, as is the LED pattern. You have an option here as to whether the button causes a pause or a re-start of the patterns. Note that although there are other ways of doing it, the loop() runs extremely fast. It presumes you use the "tone" function to generate the sound independently of the main program and you only change the "tone" when you need to.
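A minimal sketch of that three-step loop might look like the following; the pin numbers, note and LED tables, and timings are all placeholder assumptions, and this version pauses (rather than restarts) when toggled off and on:

    const int BUTTON_PIN = 2;   // button to GND, using the internal pull-up
    const int BUZZER_PIN = 8;
    const int LED_PIN    = 13;  // stand-in for the LED pattern output

    const int  notes[]  = {262, 294, 330, 349, 392, 0, 392, 330}; // Hz, 0 = rest
    const long noteMs[] = {250, 250, 250, 250, 500, 250, 250, 500};
    const byte leds[]   = {1, 0, 1, 1, 0, 0, 1, 0};
    const long ledMs    = 300;

    bool running = false;
    bool stableState = false, lastReading = false;
    unsigned long lastChange = 0, noteStart = 0, ledStart = 0;
    int noteIndex = 0, ledIndex = 0;

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      unsigned long now = millis();

      // Step 1: debounce the button; toggle run/stop on a clean press edge.
      bool reading = (digitalRead(BUTTON_PIN) == LOW); // pressed = LOW with pull-up
      if (reading != lastReading) { lastChange = now; lastReading = reading; }
      if (now - lastChange >= 10 && reading != stableState) {
        stableState = reading;
        if (stableState) {                    // genuine press: toggle the flag
          running = !running;
          if (running && notes[noteIndex] > 0) tone(BUZZER_PIN, notes[noteIndex]);
          if (!running) { noTone(BUZZER_PIN); digitalWrite(LED_PIN, LOW); }
        }
      }
      if (!running) return;

      // Step 2: move to the next note only when the current one has played out.
      if (now - noteStart >= (unsigned long)noteMs[noteIndex]) {
        noteIndex = (noteIndex + 1) % (int)(sizeof(notes) / sizeof(notes[0]));
        noteStart = now;
        if (notes[noteIndex] > 0) tone(BUZZER_PIN, notes[noteIndex]);
        else noTone(BUZZER_PIN);
      }

      // Step 3: advance the LED pattern on its own independent clock.
      if (now - ledStart >= (unsigned long)ledMs) {
        ledIndex = (ledIndex + 1) % (int)(sizeof(leds) / sizeof(leds[0]));
        ledStart = now;
        digitalWrite(LED_PIN, leds[ledIndex] ? HIGH : LOW);
      }
    }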
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00174.warc.gz
CC-MAIN-2021-49
1,869
9
https://www.artsandmindlab.org/about/
code
The International Arts + Mind Lab (IAM Lab) is a multidisciplinary research-to-practice initiative from the Brain Science Institute at Johns Hopkins University that is accelerating the field of neuroaesthetics. Our mission is to amplify human potential. IAM Lab is pioneering Impact Thinking, a translational research approach designed to solve intractable problems in health, wellbeing and learning through arts + mind approaches. IAM Lab brings together brain scientists and practitioners in architecture, music and the arts to collaborate in research and foster dialogue. We spur continued innovation by sharing these findings with a broader community.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251796127.92/warc/CC-MAIN-20200129102701-20200129132701-00368.warc.gz
CC-MAIN-2020-05
655
2
http://www.nwdiscgolfnews.com/forum/showpost.php?p=92116&postcount=8
code
Originally Posted by Cerrgurry: I was pretty close to my estimate of mid-March.

I need to register for this tournament. I am accepting donations. I just don't want to donate 3-4 discs again on those bay holes... so instead of money donations, anyone want to donate a disc that I will then donate to the Columbia River?

Uhlman and 3Fingers, you both playing this one? What fun if we were in the same group.

Nope, saving my $$$$$ for the SUSHI
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704655626/warc/CC-MAIN-20130516114415-00091-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
437
7
https://www.tdpri.com/threads/pickup-switch-location.1052770/
code
I did a search and really couldn't believe nothing came up. I'm building my first Tele. I'm putting what I think I want on it, so it's not traditional (think Stetsbar), but when I install the control plate I'm thinking I want the switch at the bottom, leaving the knobs closer to my picking hand. I've seen pics of them installed both ways and was wondering what everyone's preference is and why? Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704795033.65/warc/CC-MAIN-20210126011645-20210126041645-00570.warc.gz
CC-MAIN-2021-04
401
1
http://www.codeweblog.com/stag/android-button-change-icon-by-code/
code
Environmental structures Before you begin, you need to prepare the following environmental and procedural Necessary Microsoft Windows XP / Microsoft Windows Vista operating system Android SDK 1.1r1 Java Development Kit (JDK) v6.0 or above eclipse-jee-gany android button change icon by code Two-dimensional graphic 2D Graphics Android offers a custom 2D graphics library, be used to draw graphics and animation. Your package will android.graphics.drawable and android.view.animation find these general categories. This paper briefly describes how Android combines the interface of the Sqlite database to learn to do add, delete, change, check. Operation In a previous blog has been done on the SQLite database package, combining the blog of this blog do interface operation. Check out the data in the database using ListView display in the interface and increase the menu prompts on the data to do new and del Original Source: http://android.blog.51cto.com/268543/302529 1. Related introduced in the Android project folder inside the folder, the main resource file is placed inside the res folder. assets folder is stored not compile the native file processing, the http://www.blogjava.net/Green-nut/articles/332617.html?opt=admin Android TitleBar custom title bar layout Many users find themselves TitleBar Android program title bar area is very monotonous, if you want to personalize some of the following methods can f Android Getting Started FAQ 1, Q: What is Android? A: Android means the original meaning of the term "robot", is also Google on November 5, 2007 announced the open source Linux-based mobile operating system, the name of the platform by the opera From network 1. Android Single Instance Methods We all know that Android platform is not the task manager, and defenders within the App Activity history stack to achieve a window display and destruction, for the conventional view is from the shortcut to r Android system, there are 3 types of menu: options menu, context menu, sub menu. press the Menu button options menu will be displayed for the current Activity. It consists of two menu items: Because the options menu in the bottom of the screen can only di Department of famous Gate Android (9) - Database support (SQLite), the content provider (ContentProvider) Author: webabcd Android describes the use of SQLite, ContentProvider Database support (SQLite) - Android development platform provides the operation- android's layout has several, linear layout, the absolute layout, form layout, relative layout, pin layout (you can do animation is an element of a needle, needle means that the screen per second x second change in the number of elements) commonl ... After the recent completion of the project, idle boredom, the want to learn C, C + +, on very poor, and too hard, used java development, I recall my colleagues, do not dry, learning Android, then began to study their development of Android . This article What is Android? Android is a specialized set of software for mobile devices, which includes an operating system, middleware and some important applications. 
Beta version of the Android SDK provides the Android platform using the Java language for Android • Quick Start: Ophone and Android Tutorial • http://developer.51cto.com 2009-09-25 10:24 zhang_xzhi_xjtu JavaEye blog I want to comment on (0) This paper aims to remove the build environment of a 5-minute Quick Start, and provide a simple program cod Resources are used in your code to the package at compile time, and your application into additional files. Android supports a wide variety of documents, including XML, PNG and JPEG files. XML file describing the format of the decision on its content. The Department of famous door Android (9) - Database support (SQLite), content provider (ContentProvider) Describes the use of Android SQLite, ContentProvider Database support (SQLite) - Android development platform provides the operator related SQLite databa Android UI Programming Interface Overview In this paper, the user interface of the Android UI open some of the basic concepts, not to do in-depth explanations, but you can quickly browse Android open often involves some basic concepts and elements. 1, int Project directory structure: HelloWorldActivity.java Listing Helloworldactivity.java code package com.oristand; import android.app.Activity; import android.os.Bundle; public class HelloWorldActivity extends Activity ( / ** Called when the activity is Andoid Dialog 1, AlertDialog, with 0-3 buttons, you can put options, check one box, so as to form the proposed domain user can interact. 2, ProgressDialog, displays a progress bar or progress of the ring. 3, DataPickerDialog, select the date of the dialog Network lack android permissions list, but few will put together a list and use, so hereby summarize Need to define the appropriate permissions AndroidManifest.xml (for internet access as an example), as follows: Xml Code <Uses-permission android: name Transfer from: http://blog.csdn.net/ecaol/archive/2010/03/24/5410915.aspx This method is suitable for Android SDK 2.1 application development environment Install JDK • In the java.sun.com download and install • JDK in the "System Properties" and Since google 06 Since it entered China on a map, moving developments in the field several times the speed of growth every year basically. Android platform in the latest related applications, if they can understand our google map will be a great help devel Process and life cycle Android applications are written using Java programming language. Compiled Java code - including any data the application needs and resources - is Android aapt tool package to package, use the. Apk package file suffix. This document Good post, go here recorded. Original Address: http://topic.csdn.net/u/20101021/16/B605909C-56F8-41A0-B209-269FEDD51841.html Of: ptzxzc 1. The sdk copied to the android-sdk-windows \ platforms the next. If there is a network, then on android-sdk-windows a Android Application Development program application in addition, there is a Widget application. A lot of people will develop procedures for application and not developer Widget application. This post is to help you learn how to develop Widget application. When you need in your application to provide search services, your first thought is to put your search box where it? Search by using the Android framework, the application will display a custom search dialog box to handle the user's search reques Our applications run on the Android principle and layout files can be described with a deeper awareness and understanding, and with the "Hello World!" 
Program to practice proved. Continue to develop in-depth Android trip, it is necessary to solv After several articles on the principles of Android applications to talk, now we will probably look back. First, we use a Hello World program introduces Android application's directory structure, including the src folder, gen folder, Android x folder, Android security and permissions ① ---- ShareUserId and file access (File Access) - Security and Permission About SharedUserId summary: We know that in general each app has a unique linux user ID, the permissions to be set so that the application's files are only visible to the user, only the application itself can be seen, and we can make them for other vi In Ap sometimes need to set some configuration parameters, these parameters through the configuration file. To set these parameters need to provide a UI, for this demand, Android provides a preferenceActivity. PreferenceActivity by reading the pre-defined SurfaceView briefly introduced the use of, this time to introduce SurfaceView use double buffering. Double-buffering to prevent flickering animation achieved a multi-threaded applications, based on double buffering to achieve SurfaceView very simple, Four, Activity 4.1 Activity Activity is the application of the entrance. Responsible for creating the only window (setContextView (View)), and user interaction and so on. 4.1.1 Basic Usage First define a class inherited from android.app.Activity, the file The Android Eclipse plug-ins by (ADT) can easily add a new Android project. List of engineering structures and the role of the main directory as follows: (1) Src: storing program source code, nothing to say. (2) Gen: store the java compiler automatically [Android2D fourth game development] in-depth Animation, the still used in SurfaceView Android-Tween Animation! Himi original, reprinted, please specify! Thank you. Original Address: http://blog.csdn.net/xiaominghimi/archive/2011/01/04/6116089.aspx Before the fourth game development 【】 Android2D I'll share in a piece of graph 13 of the png, set the viewing area . Android's default classification 2011-01-19 13:33:32 Window class to read 514 comments 0 Font Size: medium and small subscription. Android's Window class (a) Android's GUI layer is not complicated. The complexity of such similar WGUI layout online these days to write a variety of search code SMS monitoring service, and then returns a message with gps address and resolve gps address shown on the map a small program, but returned to the message displayed when the map a problem, my method under Android's Window class (a) Android's GUI layer is not complicated. The complexity of such similar WGUI layout and dialog-based GUI, and MFC, QT and other large frame are not comparable, or even the MiniGUI Feynman Wei Yongming than it is complex. 1. Ordinary Menu Let's look at how to achieve the simplest Menu. Activity in the primary coverage onCreateOptionsMenu (Menu menu) method. Code <! - Code highlighting produced by Actipro CodeHighlighter (freeware) http://www.CodeHighlighter.com/ - Cloud platform programming and development (C): Creating a platform for cloud-based X5Cloud Hello World program (run on Android phones. Plates) Cloud platform programming and development (C): Creating a platform for cloud-based X5Cloud Hello World program (run on Android phones, plate) http://blog.sina.com.cn/s/blog_85e4309c0100u7mb.html How to Create X5Cloud Hello World program (run on Android p Key skills and concepts l Create a new Android project. 
- Use the View
- Use a TextView
- Change the main.xml file
- Run applications in the Android emulator

In this chapter, you will create your first Android application. This chapter studies the application-building ...

Here is your complete AndroidManifest.xml project file:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="android_programmers_guide.AndroidVi ...

AndroidViews.java: the final step in creating this Activity is to edit AndroidViews.java. If you want to call the testCheckBox Activity from the main AndroidViews Activity, you must add code to AndroidViews.java. Compare the following code with your ...

This section shows all the radiogroup.xml code. Following the guidance of the previous chapters, create a new XML file called radiogroup.xml. Use the following code to build your file:

<?xml version="1.0" encoding="utf-8 ...

To assign the correct permissions to your Activity, you first need to know which permissions you need. A good example is the use of the Dialer Activity: access to the Dialer Activity is governed by the CALL_PHONE permission. By assigning that permission, ...

Task and Activity: working on a project over this period, I found my grasp of Task and Activity was still not firm, so I am organizing the knowledge here for easy reference. There are several Flags whose effects are hard to understand without testing them ...

<activity android:allowTaskReparenting=["true" | "false"]
          android:alwaysRetainTaskState=["true" | "false"]
          android:clearTaskOnLaunch=["true" | "false"]
          android:configChanges= ...

1. Main function. 2. Loading news and updates. 3. Progress of execution. 4. Background broadcast news services (using Service and BroadcastReceiver). 5. Using AIDL. 1. Main features: after startup, all news information is updated; upda ...
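As an illustration of the CALL_PHONE example above, the manifest declaration is a single element; a sketch of how it would appear inside AndroidManifest.xml:

<!-- Declared inside <manifest>; governs access to the Dialer Activity -->
<uses-permission android:name="android.permission.CALL_PHONE" />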
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931004246.54/warc/CC-MAIN-20141125155644-00107-ip-10-235-23-156.ec2.internal.warc.gz
CC-MAIN-2014-49
13,028
55
https://community.splunk.com/t5/Splunk-Search/Top-Fields-command/m-p/417012
code
| top 5 SessionID by host | fields - Anzahl, precent

This code returns all events in the index instead of five, and removes neither the count nor the percent field. What could be the problem?

Your query is fetching the top 5 SessionID values for each host (all hosts will be shown). What was your requirement? Regarding removing the fields: the percent field name is misspelled (written as precent), hence it's not removed. For removing the count, try the field name count itself (try | fields - count, percent).

| fields - count, percent

@somesoni2 actually it would be better to use the showperc=f and showcount=f arguments in the top command itself:

| top 5 SessionID by host showperc=f showcount=f

@anasshsa your query is giving you the top 5 sessions for each host. If that is your requirement, you can try the following:

| top 5 SessionID by host showperc=f | xyseries host SessionID count

Please let us know if you are looking for something else.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362879.45/warc/CC-MAIN-20211203121459-20211203151459-00015.warc.gz
CC-MAIN-2021-49
913
11
https://coderanch.com/u/254166/Daniel-Doboseru
code
Of course ASUS would win such a contest :P It's a netbook vs. tablet comparison, as Bear pointed out very well. If you want to make this even, compare an ASUS tablet with the iPad (look at the EEE Slate series). Anyway, if you intend to buy one of these, I strongly suggest the iPad. Don't know about the others, but I hate netbooks; they're like laptops that s*ck big time. Want Windows 7 with Aero? Pretty good performance. Want an HD movie? Well... it turns out to be a slideshow of frames. Want an SQL Server on it? System halt! My point is, iPads are really cutting-edge technology, of course with lots of advantages and disadvantages (see the 0.5 m cable and the need to charge it every day or close to it), and from what I've handled of them by now, I would buy one... but netbooks? Nah. I don't think those few hundred grams saved compared to a laptop are worth it when the lack of performance is so big. Also... Apple is Apple.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818337.62/warc/CC-MAIN-20240422175900-20240422205900-00159.warc.gz
CC-MAIN-2024-18
898
5
http://sourceforge.net/p/peerguardian/news/2012/11/please-welcome-peerguardian-linux-222/
code
This is mainly a bug-fix release. As some of you may have noticed, this project here at SourceForge has migrated to the new SourceForge 2.0 (Allura) platform. Please update your links and URLs, e.g. for the Git repository.
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776432195.33/warc/CC-MAIN-20140707234032-00073-ip-10-180-212-248.ec2.internal.warc.gz
CC-MAIN-2014-23
356
3
http://the-witness.net/news/2010/03/graphics-tech-shadow-maps-part-1/?replytocom=506
code
The Witness contains a mixture of indoor and outdoor scenes, but much of the game takes place outdoors with a very long view distance (you can see the entire island at once if you have a good vantage point). So I wanted to implement a shadow system that would work robustly, provide high visual quality, and allow the player to see everything at once. I have some experience with shadow systems of this type, but the last one I designed was for computers and graphics cards circa 2004, so I was interested to see how much more would be possible today. We've implemented such a modern shadow map system for The Witness. In the process, we've made some improvements to shadow mapping algorithms beyond anything we've seen published, so we are going to detail the improvements here. Also, our shadow map system is still being improved, so I'll talk about what we have yet to try and why we think it's a good idea. Before we get to those details, though, I'd like to establish some context so that the motivation for these design decisions is clearly explained. I have a somewhat cynical attitude toward graphics research literature: most of it describes techniques that don't generally work, but the authors of the papers do the best they can to "sell" the technique to you anyway (using cherry-picked examples, glossing over or completely ignoring failure cases that would be obvious to anyone who understands the algorithm, etc). As the reader, eventually you come to understand all the problems, but only after investing a lot of your time and energy (possibly months) implementing and understanding an algorithm that behaves so poorly that you never would have bothered if you had known the truth from the outset. I've had this experience many times, with many different techniques. Shadow maps, though, have been one of the big ones. There are many published shadow map techniques that simply don't work well enough to be taken seriously. And often when I've heard someone say "so-and-so shadow technique is good," it usually turns out they haven't tried it themselves, so it's just hearsay, or else that person has low quality standards. So I'd like to put forth the statement that I have high quality standards and will only endorse things that have been found to robustly function; I will be open and honest about the degree to which things don't work, and what the specific problems are. In an ideal world this would not be necessary to say, but the situation in the literature today makes it otherwise.

Quality Goals; Previous System

Many shadow map schemes have been developed that try to maximize the effective resolution of shadows in the scene by performing transformations that are heavily view-dependent. An extreme example of this is Perspective Shadow Maps. I have learned through experience not to use these techniques. They cause shadows to swim and flicker in annoying ways, and many of the algorithms break down severely as the player approaches certain viewing angles. For the 2004 system, I took as a core design goal that shadows should appear rock-solid on nonmoving objects, regardless of any viewpoint motion. The clearest way to achieve this was to center the shadow map on the viewpoint at all times, never letting the shadow map scale or rotate. Because a single shadow map cannot cover the world at high resolution within memory and fill constraints, I used a scheme where 4 or more shadow maps of increasing worldspace size were centered on the viewpoint like square doughnuts.
In order to prevent crawling or shimmering, one just ensures that shadow map worldspace positions are snapped to integer multiples of their texel size. (A family of related techniques, which don't necessarily center the maps on the viewpoint, soon came into more-common use and took on the moniker Cascaded Shadow Maps.) On fixed-function pipeline hardware, this scheme was never quite satisfying (I had to use clip planes to render the scene in many slices, and there were small 1-pixel artifacts due to the resulting imprecision; rendering all the slices was a bit slow). Modern hardware is able to do this kind of thing much better. Also, this scheme wastes a large amount of shadow map memory, because with all the shadow maps centered on the viewpoint, most of the map texels are going to be out of view at any given time. Despite the drawbacks, the visual stability of this technique, and its ability to reach across the entire game world, were extremely appealing to me. Having seen how nicely shadow mapping could behave in practice, this visual stability became a very-high-priority goal in my mind for any future shadow systems. So, going into this new system, the goals were (listed in approximate order of importance):

- High performance
- Complete stability under camera motion
- Long view distance
- Visuals can be controlled to suit the style of the game
- Efficient use of texture memory

The New System

I started working on the new system by looking at the 2004 system and trying to make it more memory-efficient. Most likely this would involve moving the shadow maps around in world space, but it wasn't initially clear to me how to do this without introducing problems. Ignacio pointed me at Michal Valient's article "Stable Rendering of Cascaded Shadow Maps" in ShaderX 6, which was exactly what I wanted. Valient computes bounding spheres around the slices of the view frustum that tell him how much he can move the shadow maps in world space without introducing gaps. To illustrate, here are a couple of figures reproduced from his article. I don't want to unduly step on anyone's copyright, so if you are interested in cutting-edge shadow map techniques, buy ShaderX 6! (Click for full size.) So basically, you take a frustum slice in worldspace, ensure that it is completely enclosed in a sphere, and then ensure that the sphere is completely enclosed in a square cylinder; the square is your shadow map. You can render multiple frustum slices for multiple shadow maps, so long as the bounding spheres overlap enough to cover the whole frustum when put together (see 4.1.2 c and d). The reason they are spheres is that the shadow map is never allowed to change size (we voluntarily imposed that constraint in order to get solid shadows!), so we need a shape that conservatively encloses any possible orientation that a frustum slice could occupy as the camera rotates in space. That's a sphere. Then we make sure that our shadow map covers that entire sphere, and we have then guaranteed that every point in the view frustum is covered by a valid shadow map texel.
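To make the stability trick concrete, here is a rough sketch (not our actual code; the types and names are made up) of sizing a map to a slice's bounding sphere and snapping its light-space center to the texel grid:

#include <cmath>

struct Vector2 { float x, y; };

// Given the bounding sphere of a frustum slice (radius) and the shadow map
// resolution, size the map to the sphere and snap its light-space center to
// whole texels, so static geometry samples the same texels every frame.
Vector2 snap_to_texel_grid(Vector2 center, float radius, int resolution) {
    float texel_size = (2.0f * radius) / (float)resolution; // worldspace size of one texel
    center.x = floorf(center.x / texel_size) * texel_size;
    center.y = floorf(center.y / texel_size) * texel_size;
    return center;
}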
On top of this, Valient suggests the very helpful optimization of packing all your shadow maps into one atlas texture, so that when it's time to render the scene, you can draw all your shadowed objects in one pass without having to sample multiple textures; you just figure out which frustum slice each pixel lands in, then use that information to determine the offset into the atlas, add that offset to your texture coordinates, and sample the texture. This works great. Valient suggests a 2x2 arrangement of textures, as this is convenient on a wide variety of hardware, for example, GPUs that only support power-of-two textures. So if a single shadow map would be 1024x1024, then you can create a 2048x2048 texture map that contains 4 shadow maps packed in a 2x2 array: So my first shot at a new system was basically a reimplementation of everything Valient describes. It didn't take too long, and when it was done, I was very happy with it -- it was clearly much better than the 2004 system. This technique is still a memory hog. The image stability constraints, which result in us wrapping the frustum slice in a sphere and then the sphere in a box, add margins of unused and barely-used texture space at each step. Looking at Figure 4.1.2, you can see that a frustum slice only occupies about 50% or 60% of the area of the square that represents your shadow texture. This implies that half the square is wasted. However, the actual situation is worse than this, because the diagram is misleading. The problem is that the view frustum represented in the diagram is much narrower than the view frustum used in an actual game, and if you re-draw the diagram in realistic proportions, it looks very different. For an accurate 2D diagram, you are finding a circle that encloses the widest part of your view frustum, which is the 2D trapezoid you get by cutting your frustum in half diagonally (Valient discusses this in his paper as well). Suppose your game is rendering at a 16:9 aspect ratio, and your field of view is 90 degrees horizontally (this is what we use for The Witness currently.) The vertical field of view is then going to be about 59 degrees, and the diagonal field of view will be about 98 degrees (click on image below for explanation). The frustum slice in Valient's diagram is only about 30 degrees, a huge difference! So if we have a 98 degree frustum slice, and inscribe that in a circle, and inscribe that circle in a square, what does that look like? Something like this: Recall that the square is your shadow map texture and the innermost trapezoid is your frustum slice (the texels of your texture that may potentially be used). It covers only a small area of that square -- and there's no way to make it bigger! The reason is that for wide frusta like this, the diagonal at the far plane is so long that it dominates the bounding sphere computation, and so the center of the circle has to land on that diagonal so that its diameter can be just barely large enough to enclose it. (In The Witness, our frustum slices are not proportional, and the way we divided them up, this isn't exactly true for the first 2 shadow maps -- but it is very close to true). The fact that the circle is centered on the far plane means that automatically half the map is wasted at this orientation -- but the wide angle ensures that much of the other half is wasted too! You can rotate the view frustum to other orientations and make it look like you are covering more of the shadow map... 
for example, if you orient the frustum so that the view vector is going straight down into the page, then the frustum's projection onto the paper will be a rectangle, and it will appear to cover much more of the square. But then you have to keep in mind that some of this coverage is worth a lot more than other coverage in terms of impact on the scene: think about how much space is covered by the texels toward the middle of the frustum projection, versus how much is covered toward the edges (almost none!) If I add two more frustum slices, so that the total is 3 slices as in 4.1.2, it looks like this: That innermost shadow map is just a tiny smudge (now you know why Valient chose a narrow field of view -- for clarity of his figures!) But notice what else is going on: each shadow map is fully contained within the next-larger shadow map, as with the concentric-square-doughnut system from 2004! Valient's scheme is better since it gives you more view distance (the squares are not concentric) -- but it can't give you nearly as much view distance as it would like, due to the wide angle of the frustum. For Next Time So, whereas on one level I was very happy with this shadow system's performance and visual quality, it still seemed somewhat wasteful in terms of memory usage. Next time I'll talk about how we addressed that problem. Subsequent postings after that will talk about issues like softening the shadow border, blending between slices, and various other implementation tricks. To be continued.
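As a quick sanity check on the field-of-view numbers quoted above, the vertical and diagonal FOVs follow directly from the horizontal FOV and aspect ratio. A standalone computation (not from the game's code):

#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.141592653589793;
    double hfov = 90.0 * pi / 180.0;          // horizontal field of view
    double half_w = tan(hfov / 2.0);          // half-width at unit distance = 1.0
    double half_h = half_w * 9.0 / 16.0;      // 0.5625 at a 16:9 aspect ratio
    double half_d = sqrt(half_w * half_w + half_h * half_h);
    printf("vertical fov: %.1f degrees\n", 2.0 * atan(half_h) * 180.0 / pi); // ~58.7
    printf("diagonal fov: %.1f degrees\n", 2.0 * atan(half_d) * 180.0 / pi); // ~97.9
    return 0;
}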
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474523.8/warc/CC-MAIN-20240224044749-20240224074749-00113.warc.gz
CC-MAIN-2024-10
11,626
35
https://mail.python.org/pipermail/python-ideas/2009-February/002964.html
code
[Python-ideas] Revised revised revised PEP on yield-from
greg.ewing at canterbury.ac.nz
Tue Feb 17 06:37:01 CET 2009

Raymond Hettinger wrote:
> Looks like a language construct where only a handful
> of python programmers will be able to correctly describe
> what it does.

The whole area of generators is one where I think only a minority of programmers will ever fully understand all the gory details. Those paragraphs are there for the purpose of providing a complete and rigorous specification of what's being proposed. For most people, almost all the essential information is summed up in this one sentence near the beginning:

The effect is to run the iterator to exhaustion, during which time it behaves as though it were communicating directly with the caller of the generator containing the ``yield from`` expression.

As I've said, I believe it's actually quite simple conceptually. It's just that it gets messy trying to explain it by way of expansion into currently existing Python code and concepts.

> This seems like it is awkwardly trying to cater to two competing needs.
> It recognized that the outer generator may have a legitimate need
> to catch an exception and that the inner generator might want it too.
> Unfortunately, only one can be caught and there is no way to have
> both the inner and outer generator/iterator each do their part in
> servicing an exception.

I don't understand what you mean by that. If you were making an ordinary function call, you'd expect that the called function would get first try at catching any exception occurring while it's running, and if it doesn't, it propagates out to the calling function. Also it's not true that only one of them can catch the exception. The inner one might catch it, do some processing and then re-raise it. Or it might do something in a finally clause. My intent is for all these things to work the same way when one generator delegates to another using yield-from.

More information about the Python-ideas mailing list
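To illustrate the behaviour being described, a minimal sketch using the proposed syntax; the exception is thrown into the delegating generator, the inner generator gets first try, and it re-raises for the outer one:

def inner():
    try:
        while True:
            yield
    except ValueError:
        print("inner saw it first")
        raise                      # re-raise; it now propagates outward

def outer():
    try:
        yield from inner()         # inner gets first try at the exception
    except ValueError:
        print("outer saw it second")

g = outer()
next(g)                            # advance to the yield inside inner
try:
    g.throw(ValueError)            # behaves as if thrown directly into inner
except StopIteration:
    pass                           # outer handled it and ran to completion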
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572212.96/warc/CC-MAIN-20220815205848-20220815235848-00371.warc.gz
CC-MAIN-2022-33
1,966
39
https://github.com/fmiopensource/skinny_board/wiki
code
Ruby, Rack, Rails, Sinatra, Sammy, CouchDB, mysql

This is a scrum board application. It assumes that you know a little bit about scrum. It provides product backlogs, sprint backlogs (boards), stories, and tasks. Tasks have hours, stories have story points, and you can generate a burndown of hours for a board. Users can be assigned to tasks.

Why Rails and Sinatra, mysql and CouchDB? The original version was strictly Ruby on Rails with mysql. Rather than do a total rewrite, we opted to keep what we could intact. Since users, companies, and payments receive little benefit from being in a non-relational store, we decided to leave them as they were. And, we wanted to try Sinatra. We didn't want to port ActiveMerchant, which we use for payment processing in our hosted version.

Users and account owners are in Rails. When signing up, you create a new account, with you as the owner. Users can only be invited by an account owner. These are stored in mysql and accessed via ActiveRecord in the usual way.

All board actions are written in Sinatra. They generate the board pages when they are first loaded, using erb. There is a JSON API used for CRUD'ing stories and tasks on the boards. In a couple of cases the API returns HTML; we did this where generating the HTML from JSON on the client side would have been too complex.

With Rack::Cascade, all requests are first sent to the Sinatra application, and if none match, the request then goes to Rails. This setup also allows access to the Rails models from Sinatra, which comes in handy in a couple of places where you need user info. Sammy provides nice and mostly RESTful routes on the front-end. It interacts with the JSON API and renders JS templates.

Each board is a document in CouchDB. Tasks, stories, and the users who can access the board are all stored in arrays in the board; each of those items in turn contains its own data. The goal was to be able to pull a complete board in a single request. Every time a board changes, we first copy the board to a new document, then update the head revision of that board. The head version is what is used for showing the board. This allows us to see a snapshot of the board at any point in time. When a burndown is generated, it's stored in the board, since the values for that board instance are never going to change. This is also true for the story point and hours calculations: they are only performed once, ever.

Product backlog documents don't contain boards; they store the ids of their boards. Since product backlogs maintain history in the same way boards do, when a board changes from the product backlog page (e.g. moving a story from the product backlog to a board), the board will be copied and then have its head updated. The product backlog will also be copied and have its head updated. The copied version needs to get the id of the copied version of the board, so that when browsing history on a product backlog they match up.

There are maps in the couch folder that roll up the data for various things: burndown data, pulling individual tasks or stories, etc.

There are 3 places views are found. In the usual Rails app/views folder; there is no overlap with these. In the app/sinatra/views folder; these are used by erb to generate the boards on page load. Since one set is JS and the other Ruby, the files can't be shared. They could all be moved into erb or all into JS.
The former necessitates recreating rjs in Sinatra, while the latter requires the end user's machine to do some heavy lifting to generate the initial board pages.

Test quality and coverage need to be improved. Using Rack::Test, it's difficult to divorce controller tests from view tests: there isn't a way to get at the controller to see what it's doing without checking the view to see what happened. And class_eval seems a little bit kludgy.

Sammy is v0.2; it could be upgraded to v0.4. As part of this, the templating could be moved over to mustache.js. The couchdb gem we are using for loading design documents is a few versions behind.

Get rid of Rails entirely; there isn't a need for it in this version. It would be nice to see a CouchDB-backed authlogic gem for handling user authentication. Get rid of subdomains; they aren't needed for this version. Loading the application with ruby config.ru works, but doesn't seem right. A way to reload the code while developing: Rack::Reloader and shotgun both claim to reload the code, but they do not; code changes are not applied until the next restart.
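For illustration, the Rack::Cascade wiring described above might look roughly like this in config.ru; class names and paths are assumed, not taken from this repo, and ActionController::Dispatcher is the Rails 2.3-era Rack endpoint:

# config.ru (sketch)
require ::File.expand_path('../config/environment', __FILE__)  # boots the Rails app
require './app/sinatra/boards'                                  # the Sinatra app

# Send every request to Sinatra first; a 404 cascades through to Rails.
run Rack::Cascade.new([Sinatra::Application, ActionController::Dispatcher.new])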
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825436.78/warc/CC-MAIN-20171022184824-20171022204824-00323.warc.gz
CC-MAIN-2017-43
4,547
24
https://discourse.articulatedrobotics.xyz/t/phobos-for-blender/425
code
If you model your chassis with Blender, you can make your URDF with this tool called Phobos. This was part of a video series I was doing that never got finished, but some people might benefit from the instructions on how to use Phobos to create URDF files for ROS. I've had good luck with it. The video is not 100% accurate, but there is enough information there to piece things together.

Thanks for sharing, and for putting in the effort to create it! I hadn't heard of Phobos before; I might need to try it out sometime.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506623.27/warc/CC-MAIN-20230924055210-20230924085210-00319.warc.gz
CC-MAIN-2023-40
517
3
https://forums.opensuse.org/t/opensuse-tw-slow-for-months/167130
code
I've been using TW since November 2022. I started with KDE, a disaster. Moved to GNOME, which was OK other than that I hated it. I'm now back on XFCE and i3 (as always). I had a feeling it was slower than any other Linux distro I've used, but I didn't much care. Today, however, I used Debian with XFCE and bloody hell if that thing wasn't at least five times as quick. Boot time was the same, but the browser, terminals, file managers - the lot! - were faster. Especially Firefox, which is abysmally slow by comparison. I'm not ready to quit Susie yet, and I would like to know what I'm doing wrong; it's almost always my fault. My PC has a 4-core i5-7400, 16G RAM and an SSD. Edit: I just tested Fedora and Linux Mint; both are lightning fast compared to TW.

Operationally there is no lack of speed on either. They are subject to random instances of delayed X startup, about which a bug is open. I haven't upgraded their kernels from 6.2.12 yet. I recognize nothing in your inxi output that is suspect. Have you tried looking for runaway processes with htop or top? Have you tried logging into a session type other than XFCE, IceWM perhaps, to see whether it is a system problem or an XFCE-only problem? The only place I have XFCE is on a Mint or two, rarely used, so I have no meaningful familiarity with it. I think among the regular helpers here most are using Plasma or GNOME. As seen above in inxi, I have one machine on Plasma, the other on TDE.

I run TW on a Core i5-4800/8Gb with a snappy experience, including Firefox and KDE. Your experience with Debian confirms that the hardware is capable enough. For debugging, I would recommend doing some basic benchmarks (hdparm, sysbench). If your web surfing is low-risk, you should probably turn off exploit mitigation in YaST / Boot Loader / Kernel parameters. Compare your benchmarking to Debian, if possible.

Thanks for taking a look. I tried top; nothing weird. Well, maybe the memory is a bit high (1.2GB on a cold boot as opposed to Debian's 400MB), but I don't care about that at all. My average load is very low. I tried IceWM and my usual i3, and both are about the same. I expected i3 to run like a Formula One car, but it's not noticeably different from XFCE.

So, I reinstalled TW on a laptop (ThinkPad) with similar specs to my desktop, with KDE Plasma, and it was almost identical to XFCE in terms of speed, except for Firefox, which was even slower. I uninstalled the packaged version and replaced it with my own build, and that solved that issue (I think Susie's build of Firefox is wonky because, relatively, it's slow on all my machines). I also installed Brave and Chromium, and they both behave as expected. The other issue on KDE Plasma is that two programs, GNUCash and KeePassXC, hang Plasma for exactly 5 seconds on close, making the taskbar useless for that whopping 5 seconds. First world problems, eh? Maybe I have a network issue that Tumbleweed trips over but that Debian and Fedora don't have a problem with. Am I better off sticking to KDE purely because it, along with GNOME, is what the maintainers mainly focus on? As I mainly use my computer for writing C and C++, database administration, and occasionally web development (when I'm guilt-tripped into it), it doesn't particularly matter what I use; I don't use it for "fun" for the most part.

"DisasterMong" would have been a much better username! No tinkering outside of my i3 configuration and general XFCE ugliness cleanup. I have some time today, so I might do a fresh install with KDE Plasma (and remove 90% of the funk that I'll never use).
It sure is pretty, though. YaST provides two methods of networking to choose from (NM & Wicked, either of which various apps demand be installed, though not necessarily enabled), but openSUSE also provides a third, which can only be enabled manually and which I use on 100% of my desktop PCs: systemd-network (in static IP mode). You could give whichever of those you do not currently use a try. YaST in admin mode should allow switching between NetworkManager and Wicked, but only if both are installed. systemd-network is an optional package, so it would need to be installed before switching to it. systemd-network configuration goes in /etc/systemd/network, and the existence of /etc/resolv.conf needs to be ensured. I do that by creating it manually and disabling or uninstalling anything containing the string "resolv". Config template for IPv4:

# ip a | grep eth0:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
# cat /etc/systemd/network/eth0.network

IMO... I wouldn't worry about it. If networking is working fine and not experiencing any delays, there is no need to get off into yet another "tedium". I admit, I do not like to mess with my OS, but apparently I back myself into a corner and do it anyway.

YaST in admin mode should allow switching between NetworkManager and Wicked, but only if both are installed. systemd-network is an optional package, so it would need to be installed before switching to it. systemd-network configuration goes in /etc/systemd/network, and the existence of /etc/resolv.conf needs to be ensured. I do that by creating it manually and disabling or uninstalling anything containing the string "resolv".

Thanks for the information. I saw in YaST Software that it wasn't installed. Would it be prudent to use the de facto KDE Plasma? I don't really care what I use (except for GNOME; not doing that). Does anyone here use KeePassXC on Plasma without it hanging on close (only on X11; it's fine on Wayland)? It's bloody annoying.

OK, I definitely have something funky going on: two separate computers, both slow, and when using Plasma, KeePassXC (and GNUCash) hang every time on close. I tried @mrmazda's setup; no difference. I might have to put this to the side and get back to work now.

You mean systemd-network, or just switching between NM and Wicked? Switching to systemd-network means using systemctl to disable the old way and enable the new way, then restarting the network or the computer, not simply installing it and creating config files:
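Something along these lines; the service names and addresses below are illustrative placeholders, not taken from this thread:

# switch from NetworkManager (or wicked) to systemd-networkd
sudo systemctl disable NetworkManager.service
sudo systemctl enable --now systemd-networkd.service

# /etc/systemd/network/eth0.network: a minimal static-IPv4 sketch
[Match]
Name=eth0

[Network]
Address=192.168.1.10/24
Gateway=192.168.1.1
DNS=192.168.1.1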
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511220.71/warc/CC-MAIN-20231003192425-20231003222425-00465.warc.gz
CC-MAIN-2023-40
6,100
26
https://easyengine.io/tutorials/wordpress/wp-cron-crontab/
code
WordPress has something called wp-cron. If you haven't read about it, that's fine, but please be aware that you cannot live without it! That is why I am not asking you to disable wp-cron itself. Still, we need to disable WordPress's default wp-cron behaviour by adding the following line to the wp-config.php file:

define('DISABLE_WP_CRON', true);

Setup a real cronjob

From your Linux terminal, first open the crontab:

crontab -e

Then add a line like one of the below in it.

*/10 * * * * curl http://example.com/wp-cron.php?doing_wp_cron > /dev/null 2>&1

*/10 * * * * cd /var/www/example.com/htdocs; php /var/www/example.com/htdocs/wp-cron.php > /dev/null 2>&1

Please make sure you use the correct path to wp-cron.php.

Alternately, you can also use WP-CLI:

*/10 * * * * cd /var/www/example.com/htdocs; wp cron event run --due-now > /dev/null 2>&1

The above will run wp-cron every 10 minutes. You can change */10 to */5 to make it run every 5 minutes. The difference between the first two lines is that the first one uses PHP-FPM (or PHP-CGI) and the second one uses PHP-CLI. CLI scripts do not have time limits; depending on your setup, that may be desirable or undesirable.

Is it recommended for a high-traffic site? I haven't dug into wp-cron a lot, but what I know is that it executes on every page load. So if there is a long-running process triggered by wp-cron, it will delay page loading for that user. Using crontab, wp-cron is run by an independent PHP process, so it will not interfere with any visitor's page request. Because of this, we highly recommend running wp-cron via the Linux crontab rather than WordPress's default way, irrespective of the size or traffic of your site.

great! I've followed your suggestions. hope this will fix the wp-cron.php timeout (70 sec) issue in my LEMP setup. Do I need to restart some services after configuration? BTW: the first code line had "> /dev/null 2>&1" twice.

Cron does not require a restart. Thanks for finding the typo. Updated the article. 🙂

In the second example, why do I have to cd first? Wouldn't this work?

*/10 * * * * php /var/www/example.com/htdocs/wp-cron.php > /dev/null 2>&1

wp-cron.php calls require_once('./wp-load.php'), which in turn looks for wp-load.php relative to the current directory. If you do not cd into the proper WordPress folder, wp-cron will fail.

I guess you're saying that the require_once relative path will be incorrect if we don't cd in there first. That's weird, but I get it. However, why do we then need an absolute path? Wouldn't this work?

cd /var/www/example.com/htdocs; php wp-cron.php > /dev/null 2>&1

Thanks for your suggestion. I will try that. I think the absolute path is not required once we cd into the directory.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00530.warc.gz
CC-MAIN-2023-14
2,496
35
https://hannesdotkaeuflerdotnet.herokuapp.com/
code
One of my recent weekend side projects, an e-ink / raspberrypi driven build status dashboard, was a great playground for doing TDD powered by visual snapshots. But let's rewind a bit. What I actually wanted to achieve was the following: build a semi-decent Python class to draw a dashboard-type interface, which I can feed to my e-ink display. I had already prototyped such a script, but it was a "make it work in the quickest possible way in 1 hour" mess. Nothing I wanted to maintain, or even look at for five more minutes. I also didn't want to start completely from scratch regarding the output, because I was happy enough with the result this script produced, which is shown here:

So how could I develop the code from scratch, while making sure I got the exact same output in the end? Right: by creating myself a feedback loop that quickly compares the reference image to the current output. To quote from jest:

Snapshot tests are a very useful tool whenever you want to make sure your UI does not change unexpectedly. A typical snapshot test case for a mobile app renders a UI component, takes a screenshot, then compares it to a reference image stored alongside the test.

This is powerful, because how else would I test this? Things of a visual nature are not unit-tested easily, which is why they are often simply untested. We usually don't test stylesheets, colors, images, etc., yet we can't say those things are unimportant. So I set out to do TDD with snapshots and iterate myself toward the reference result. A lot of work left to do, sure, but that set me up for about a two-second feedback cycle. The process, which I packaged into a simple npm test bound to <leader>t in VIM so I can invoke it in one keystroke, is this:

- Run the unit tests in Python (this is just one dumb test for the constructor; I should remove it)
- Render the current image to actual.png in an "integration" test
- Create an image diff with pixelmatch
- Open this diff in Preview so it jumps into my face

See the process encoded here, and yes, the irony of having a node-based test invocation for a Python script is not lost on me. Computers 🤷🏼♂️

Let's walk through one of my commits together. I really enjoyed working like this. A few minutes in, I had the rendering of the header, the header title, and the project text to the left all fleshed out, with minimal differences from the reference. I assume something about the font rendering on the raspberrypi/Debian vs. my Mac is to blame for the tiny deviations around the text. No clue, though. So here I was:

Let's add some code to render the badge text on the right: <leader>t, and see this:

So obviously I got the alignment wrong. Let's fix it:

Re-run the tests, see this:

Less red! That's basically what I did over and over again. Feel free to have a look at the commits for more examples.

- Fast feedback is gold
- Even visual feedback is good
- I wouldn't have wanted to unit test this, so quick visual feedback is way better than no feedback. Let's remember that coding this up, syncing the code to the raspberrypi, actually running the code and seeing the output on the e-ink display is a multi-minute process!
- Keep snapshot tests in your toolbelt; there is a place and time for them, and it's not only react!
- Updating the reference image is displayed really nicely in github
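The pixelmatch step at the heart of the loop, stripped down, might look roughly like this (file names are assumed; see the repo for the real wiring):

const fs = require('fs');
const { PNG } = require('pngjs');
const pixelmatch = require('pixelmatch');

// Read the reference image and the freshly rendered output.
const expected = PNG.sync.read(fs.readFileSync('expected.png'));
const actual = PNG.sync.read(fs.readFileSync('actual.png'));
const { width, height } = expected;

// Compare pixel by pixel, writing differing pixels into a diff image.
const diff = new PNG({ width, height });
const mismatched = pixelmatch(expected.data, actual.data, diff.data,
                              width, height, { threshold: 0.1 });
fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${mismatched} pixels differ`);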
Danger-todoist celebrates 200k downloads

Danger-todoist is a plugin for the excellent Danger ecosystem; more specifically, for the Ruby variant of Danger. What does Danger do? It is basically a kind of automated code / pull request review system. You create a pull request on github, and a bot account will recommend changes to the pull request. The changes it suggests are based on a freely configurable set of rules and suggestions, as codified in your Dangerfile. The beauty comes, as always, through the flexibility of plain Ruby code and a set of plugins. One of those plugins is danger-todoist, which I first published in September 2016.

A thing that makes me cringe is leaving "TODO: fix me" comments all over our code, and of course then never fixing them. Makes one wonder if there really was something to do ... 🤓. Danger-todoist helps you with this! It will duly notify you if you leave an unaddressed todo comment in your changes. You can decide whether this is a show stopper (YES!) or whether you want to leave it as a warning. Either way, this makes it much harder to let many of those pesky comments sneak into your codebase.

Since its first release more than a year ago, it has now amassed 200,000 downloads as shown on rubygems. This likely makes it my most successful piece of open-source software to date 🖖🏽, which I hereby celebrate. Hack on, keep that code clean, and check it out on github.

In 2017 I have likely listened to hundreds of hours of podcasts. Out of interest, let's do the math real quick: 50 weeks so far * 5 podcasts * 1h average length = 250h. So yeah, hundreds of hours it is. But I definitely don't consider that wasted time; it was sometimes great entertainment, time spent learning, or a soothing tone to fall asleep to. Without much further ado, here's what I have been listening to in 2017, in no particular order:

- 2 Dope Queens: Phoebe Robinson and Jessica Williams are two friends hosting a hilarious live comedy show. They literally crack me up every episode and easily brighten my mood for quite a while. Funniest podcast I know.
- Scrum Master Toolbox Podcast: Each episode is around eleven minutes long and is a concise discussion of the daily work and struggles of the scrum master trade. Vasco Duarte does a great job of keeping the episodes short and to the point. A great way to pick up useful tips from more experienced agile coaches and scrum masters.
- Missed Apex Podcast: This was a new discovery for me in 2017 and made the Formula 1 season so much more enjoyable. The F1 race reviews are both funny and informative. The host Spanners Ready leads a crew of journalists and F1 fans to produce these reviews, but also more technical shows and interviews with people of F1 fame. A must listen!
- The Bike Shed: This has nothing to do with bikes, but is a semi-random conversation between Derek Prior, Sean Griffin, Amanda Hill and various guests on IT topics such as Ruby on Rails, Active Record and Diesel, mixed with stories about consulting work, rockets, and anything else that might come up. I find it provides a nice mix of interesting technical discussions and light-hearted banter.
- Accidental Tech Podcast: The trio of Casey Liss, Marco Arment and John Siracusa do a great job of endlessly discussing the world of Apple and related surrounding topics. This is definitely the podcast I have been following for the longest time.

Living off of open-source

For the second year in a row now I have participated in Hacktoberfest, an open-source initiative by DigitalOcean, a cloud infrastructure provider. What's the deal? You, fellow open-source contributor, just have to open a handful of pull requests during the timeframe of October 1st to 31st.
DigitalOcean will be generous and send you a limited edition t-shirt for free (well, in exchange for your time spent on those 5 pull requests, that is). Here are the two shirts I got for my 2016 and 2017 efforts:

Needless to say, I find that an awesome initiative, seeing that the world builds upon open-source software. The five pull requests that got me my t-shirt this year were:

To finish this off, I can't recommend participating in Hacktoberfest enough, and thanks to DigitalOcean for showing its appreciation by giving you a t-shirt.

50 Million Lines of Bugs

I often think about the following Mercedes-Benz advert: In what world having however many lines of code should be something to brag about is beyond me. The tweet already hints at this nicely, but the marketing department seemed to have a different opinion in this case. While I don't have credible numbers to back this, I think we can agree that more code correlates with more bugs in some way. So, Mercedes-Benz, please try to keep the number of lines of code in your cars as low as it possibly can be.
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645177.12/warc/CC-MAIN-20180317135816-20180317155816-00046.warc.gz
CC-MAIN-2018-13
8,101
49
https://forum.itarian.com/t/command-prompt-remotely/48536
code
I'm new to the Comodo One platform; I was using SolarWinds before. Is there a way to launch a command prompt remotely? Before, we had a cmd window which connected to the end machine, and I could run PowerShell and cmd commands.

We have been informed that the "ability to access cmd and file transfers without having to remote into the machine" has been scheduled for Q2 2018. We will reach back to you with more information regarding the development progress as soon as possible.

If you have the admin console, a superior tool in my opinion, there is the shell execute tab. I have said it before, and I will say it again: I will not use the new remote control over the old Admin console, likely ever, unless they really give it some power.

Yes, you are right, the old RMM was really powerful on remote capabilities, and CRC has just started with remote access for now. But we now have a dedicated team solely for arming CRC with a whole range of remote capabilities like file transfer, remote command line, shell script execution, Windows task, service and process monitors, etc. We will release most of them within 2018!

Exactly! @monster-it, our development team has already started investigating remote control over mobile apps, but as you can imagine it's a very fundamental development and will most probably be done right after we enrich the core functions of CRC.

I'm still using the old RMM for the majority of my support functions, as the shell execute module is just so useful when diagnosing issues. Once this part is implemented in CRC, that will be the point when I switch between the two full time.

The unavailability of the (old) RMM executables/MSIs from the C1 platform is mainly due to its decommissioning process. We will consult with the product development team to see if we can extend further assistance for your concern.

It is still a superior tool. Could you also let them know that I cannot open the old admin console from the RMM anymore, please? I have to open the console manually. Like I have said, I use the admin console exclusively, if at all possible. Thanks.

So if anyone can send me a link to the old RMM admin console, even if I use it manually for now, I'd appreciate it. ... I'm new to the platform and have a lot of one-off tasks to do on a few devices (which use of their command prompt would greatly assist)... ANYthing that can be done in parallel with the client using their computer, without a 'full takeover', is key... This process has forced me to look into running procedures in the current version, but besides what I'm looking for not being there, I'd like to be able to watch the process execute myself rather than sending a procedure and guessing whether it went through, executed, worked, etc. ... I'm also looking into learning a little Python now, if anyone has any good training resource links to pass along...

I thought decommissioning was not going to happen until all features were integrated?? This is a far superior tool, and it almost always connects, even when the other way does not. I would also encourage you to leave a second way to log in, as the admin console has been great. And I understand that the Admin Console is not fully decommissioned yet, but I really feel that you are starting to remove it before the replacement is even remotely close to the features that the Admin console has. If you cannot install it from the RMM page, you cannot install it for your clients.
It really is a far superior tool, in my opinion, and I will hate to see it go, unless they can work some magic on the sub-par tool they are moving to. For example, I monitor and kill processes from the old Admin console many times a day, but the new way does not have that option. This is critical, especially when troubleshooting under user profiles that do not have permission to see all the running processes. I also like the way it pulls the system inventory; it's easy to copy and paste into a Word doc so I can keep a detailed list of my systems. I would like a reporting tool that pulls that info into Word docs or Excel sheets, but as far as I know, there is not one to my liking.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00437.warc.gz
CC-MAIN-2023-50
4,167
12
https://mail.python.org/pipermail/xml-sig/2003-March/009300.html
code
[XML-SIG] _ inserted into built-ins Fred L. Drake, Jr. Thu, 27 Mar 2003 11:06:03 -0500 Currently, the symbol "_" is inserted into the built-in namespace (indirectly) in five different places in PyXML, but it looks like the insertion into the built-in namespace is not intentional. The following modules define and use _ for I18N support: In each case, the module either calls gettext.install() or defines a module-local _ function if either gettext can't be imported or the message catalogs can't be found; in that case _ is only defined locally and not inserted into the built-in namespace. Insertion of _ into built-ins can easily mask errors in unrelated code, and should generally be avoided for library-specific message catalogs. I'd like to create a single definition of _ in the xml.FtCore module (since _ is being defined for the '4Suite' domain), and import that in each of the modules that use it. Are there any objections to this? Fred L. Drake, Jr. <fdrake at acm.org> PythonLabs at Zope Corporation
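To illustrate, the module-local fallback pattern described above might look something like this (the catalog lookup details are assumed, not quoted from PyXML):

# A sketch of a module-local _ that avoids touching the built-in namespace.
try:
    import gettext
    _ = gettext.translation('4Suite').gettext
except (ImportError, IOError):
    # gettext unavailable or no message catalog found: identity fallback.
    def _(message):
        return message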
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686117.24/warc/CC-MAIN-20170920014637-20170920034637-00421.warc.gz
CC-MAIN-2017-39
1,011
19
https://math.stackexchange.com/questions/2622148/what-is-the-easiest-way-to-find-the-inverse-of-a-3x3-matrix-by-elementary-column
code
While using the elementary transformation method to find the inverse of a matrix, our goal is to convert the given matrix into an identity matrix. We can use three transformations:

1) Multiplying a column by a constant
2) Adding a multiple of another column
3) Swapping two columns

The thing is, I can't seem to figure out what to do to achieve that identity matrix. There are so many steps I could start off with, but how do I know which one to choose? I think of one step to get a certain position to a $1$ or a $0$, and then get a new matrix. Now again there are so many options, it's boggling. Is there some specific procedure to be followed? Like, first convert the first column into:
$$\begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{pmatrix}$$
Then do the second column and then the third? What do I start off with? I hope I've made my question clear enough.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874026.22/warc/CC-MAIN-20201020162922-20201020192922-00203.warc.gz
CC-MAIN-2020-45
832
8
https://cheftalk.com/threads/hydro-dipping-kitchen-equipment.107224/page-2
code
After reading all his posts, I'm a little confused here about what the OP intends to do with these decorated tools. If it's at home for his own use, he can use them at his own risk. If he is thinking of selling them to home cooks, the legal consequences can be severe. In a commercial setting they will be laughed at. Whatever you want to do at home is your business, but at the workplace, art on tools takes a backseat...
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145654.0/warc/CC-MAIN-20200222054424-20200222084424-00193.warc.gz
CC-MAIN-2020-10
431
2
http://www.techist.com/forums/f78/when-do-you-set-up-raid-48758/index2.html
code
RAID 0 is an option for people who want to BUY two inexpensive disks and combine them so the OS thinks they have one large drive. For example: a Seagate Barracuda 400GB HDD with NCQ is $330.00, but if you buy two 200GB Seagate Barracudas with NCQ at $125.00 each, that's $250 for 400GB in RAID 0. You just saved $80.00. And as far as performance goes, RAID 0 doesn't universally increase performance; I've read several articles and benchmarks that all showed that any performance increase depended on which chipset RAID 0 was run with. RAID 0 is just a good option for buying two less expensive disks and combining them into one large storage area. Google "performance increase in RAID 0" and you'll find what I've stated here.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719416.57/warc/CC-MAIN-20161020183839-00150-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
723
5
https://math.stackexchange.com/questions/509099/venn-diagram-question
code
Here is my question. A math examination has three questions. Twenty-six students took the examination, and every student answered at least one question. Six students did not answer the first question; twelve did not answer the second question; and five did not answer the third question. If eight students answered all three questions, how many students answered exactly one question? The answer and the Venn diagram for the exercise is: From the regions in the Venn diagram, we have $a + b + c + d + e + f + g = 26$ : the total number of students $g = 8$ : all three questions answered $b + f + c = 6$ : Question 1 is not answered $a + e + c = 12$ : Question 2 is not answered $a + d + b = 5$ : Question 3 is not answered My confusion starts here: Why do I want to find $a + b + c$, as stated below? We want to find $a + b + c$, and we know that $a + b + c > 3$. Adding the last three equations, we have Why do I do this next step? $2(a + b + c) + e + d + f = 23$ Why do I do this next step? $a + b + c + 23 - 2(a + b + c) + 8 = 26$: How do I get this next step? Why is a + b + c = 5? So we conclude that $a + b + c = 5$. Thanks for any help you can provide. I have been at this section on Venn diagrams all week and I just can't seem to get it.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347445880.79/warc/CC-MAIN-20200604161214-20200604191214-00498.warc.gz
CC-MAIN-2020-24
1,246
15
https://platform.deloitte.com.au/articles/2015/advanced-file-handling-in-mule/
code
With all the drag-and-drop goodness of AnyPoint Studio these days, it's easy to forget that under the hood Mule ESB remains a very powerful, configurable and extendible framework. This power comes in handy when you're faced with demanding file processing requirements beyond Mule's out-of-the-box functionality.

Old-school Mule bits

Time for a quick history lesson... Mule's File Connector is an 'old-school' Mule transport (since, let's be honest, local filesystems haven't changed that much in the last, ooh, 400 years or so). It's based around the concepts of 'connectors' and 'endpoints' rather than the 'configurations' and 'operations' common in newer connectors. Importantly for us, the File Connector uses the concepts of Message Receiver, Requester and Dispatcher classes:

| Class | Purpose | Where it can be used |
| MessageReceiver | Converts external source events into Mule messages. | Message source (at the beginning of a flow) |
| MessageRequester | Retrieves an external event when requested by Mule and converts it into a Mule message. | Anywhere in the flow |
| MessageDispatcher | Sends a Mule message out to an external endpoint. | Anywhere in the flow |

These concepts will become important as we discuss various file handling patterns.

Wait for it...

File outbound endpoints are inherently one-way endpoints: there's no such thing as a 'response' to writing a file, so there's no need to wait for one. By default, then, a file outbound endpoint is also asynchronous. Mule will create a thread pool behind the endpoint (the 'dispatcher' thread pool) and use these threads to write your file content while the main flow moves on to the next step. This is great for throughput, but sometimes moving on immediately isn't what you want. Consider:

- What if the file write fails?
- What if the file content is the result of a complex streaming transformation and you need to report any exceptions?
- What if you need to trigger an upload of the file once it has finished being written?

You can't do any of those things unless your flow blocks and waits. We can make this happen by simply turning off that pesky dispatcher thread pool, instead forcing the flow thread to write the file. To do this we need to explicitly create a <file:connector> element (Mule will implicitly create and use a default File connector unless we tell it otherwise).

<file:connector name="synchronous-file-connector">
    <dispatcher-threading-profile doThreading="false"/>
</file:connector>

The doThreading="false" attribute tells Mule to create a special thread pool with no actual threads in it. When a flow tries to write a file there will be no thread to hand the I/O work off to. Instead the flow thread itself will execute the file dispatcher code and not progress to the next flow step until the file write is completed. We can then reference our synchronous connector in our flows:

<flow>
    <!-- Flow steps -->
    <file:outbound-endpoint path="/var/data" outputPattern="important.csv" connector-ref="synchronous-file-connector" />
    <!-- More flow steps -->
</flow>

The same concept applies to polling for inbound files, although in the inbound case thread control is not so important because the file receiver is doing so little work (just opening a FileInputStream, essentially).

Note: If you have only a single <file:connector> element in your Mule application, all file endpoints will use that connector (even if they don't explicitly reference it).
If you want some of your file endpoints to be blocking and some non-blocking, you'll need to define two file connectors and explicitly reference which one to use at each file endpoint.

Don't call us, we'll call you

Mule supports reading files at the beginning of a flow. Sometimes, though, you need to read a file during a flow, e.g.:

- Your flow is triggered by a job scheduler (e.g. Quartz), not a file polling inbound endpoint.
- You have to process files on a strict schedule, not as soon as they arrive.
- You need to retry file processing from the beginning of the file.

Sure, you could do this with Mule's scripting or custom Java extension points: just open the file using plain java.io classes, right? But Mule gives us a lot of niceties like automatic close/move/delete handling. Can we have our cake and eat it too? It turns out we can, using Mule's Requester Connector. The Mule Requester Connector is a thin wrapper over Mule's Java Client API. The client allows you to request org.mule.api.MuleEvent objects from endpoints using their URL syntax. This invokes the Mule endpoint's MessageRequester class (as discussed above), which allows you to get data in the middle of your flow from endpoints that can usually only start a flow (like a file receiver). In the example below we are using the Requester Connector to retrieve a file, with a specially configured File Connector that does not delete or move the file once we've finished reading it.

<!-- Special File Connector that does not auto-delete files once read -->
<file:connector name="no-delete-file-connector" autoDelete="false" />

<!-- Mule Requester global config -->
<mulerequester:config name="mule-requester-config"/>

<flow>
    <!-- Flow steps -->
    <mulerequester:request config-ref="mule-requester-config" resource="file:///var/in/myfile.txt?connector=no-delete-file-connector" />
    <!-- Payload now is a FileInputStream. -->
    <!-- More flow steps -->
</flow>

Putting it all together: a friendly visit to your local (filesystem)

Mule has powerful abilities to handle large payloads with streaming (see our previous post). Unfortunately streaming brings its own problems, particularly around error handling. What if you're half-way through transforming a large Salesforce CSV export when your network connection flakes out? Your only choice is to abort the entire transaction and start again. How can we make this process more reliable? Instead of streaming directly from the source, we can use the local filesystem as a cache: download the file first, then stream the local copy into the transformation. This means we can retry the download as many times as we need without abandoning the entire workflow. See below for an example:

<flow>
    <!-- Store the Salesforce Batch info POJO in a flow variable so we can reference it multiple times. -->
    <!-- Use a synchronous until-successful router to retry the batch download -->
    <until-successful maxRetries="3" synchronous="true">
        <processor-chain>
            <sfdc:query-result-stream config-ref="salesforce-config">
                <sfdc:batch-info ref="#[flowVars['batchInfo']]"/>
            </sfdc:query-result-stream>
            <file:outbound-endpoint path="/tmp/batches" outputPattern="#[flowVars['batchInfo'].id].csv" connector-ref="synchronous-file-connector"/>
        </processor-chain>
    </until-successful>
    <!-- If we get to this point we have successfully cached the SalesForce batch results to the local filesystem. -->
-->
    <mulerequester:request config-ref="mule-requester-config" resource="file:///tmp/batches/#[flowVars['batchInfo'].id].csv"/>
    <!-- Now the message payload is a FileInputStream for our locally cached results file -->
</flow>
Some key points to note:
- The <until-successful> router runs in synchronous mode, meaning the flow will block until the router succeeds or throws an exception after exceeding its retry limit.
- We stream the Salesforce result directly to the local filesystem (avoiding loading the entire file into memory).
- The <file:outbound-endpoint> uses our synchronous file connector, so the <until-successful> router blocks until the file is completely downloaded.
Knowledge is power. Understanding the concepts and design patterns behind Mule's internals can help you better meet your requirements. Looking beyond the connectors bundled with AnyPoint Studio can make your solutions cleaner and more robust.
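As promised above, here is a minimal sketch of the two-connector setup for mixing blocking and non-blocking file writes in one application. This is illustrative only - the connector names, paths and flow are assumptions, not taken from a real project:
<!-- Blocking: no dispatcher threads, so the flow thread performs the write itself -->
<file:connector name="synchronous-file-connector">
    <dispatcher-threading-profile doThreading="false"/>
</file:connector>
<!-- Non-blocking: default dispatcher thread pool -->
<file:connector name="async-file-connector"/>
<flow name="mixed-file-writes">
    <!-- This write blocks until the file is on disk -->
    <file:outbound-endpoint path="/var/data" outputPattern="must-complete.csv" connector-ref="synchronous-file-connector"/>
    <!-- This write is handed off to a dispatcher thread; the flow moves straight on -->
    <file:outbound-endpoint path="/var/audit" outputPattern="fire-and-forget.log" connector-ref="async-file-connector"/>
</flow>
Because each endpoint names its connector explicitly, neither falls back to an implicit default connector.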
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00386.warc.gz
CC-MAIN-2022-27
7,745
45
https://theboxcarboys.com/compare-bluehost-cloud-to-vps/
code
Compare Bluehost Cloud To VPS
Finding a high-quality cheap web hosting provider isn't easy. Every website has different requirements from a host, and you need to compare all the features of a hosting company while looking for the best deal possible. This can be a lot to sort through, especially if this is your first time purchasing hosting or building a website. Many hosts offer super cheap introductory prices, only to raise those prices two or three times higher once your first term is up. Some hosts offer free incentives when you sign up, such as a free domain or a free SSL certificate, while others deliver better performance and higher levels of security.
Below we dive deep into the best cheap web hosting plans out there. You'll learn which core hosting features matter in a host and how to assess your own hosting needs, so that you can choose from one of the best cheap hosting providers below. Disclosure: When you buy a hosting package through links on this page, we earn some commission. This helps us keep this site running. There are no additional costs to you at all by using our links. The list below is of the best cheap web hosting packages that I have personally used and tested.
What We Consider To Be Cheap Web Hosting
When we describe a hosting plan as being "cheap" or "budget", what we mean is hosting that falls into the price bracket between $0.80 and $4 per month. While researching cheap hosting providers for this guide, we looked at over 100 different hosts that fell into that price range. We then assessed the quality of their cheapest hosting plan, value for money and customer service.
In this article, I'll be reviewing this world-class website hosting company and including as much relevant information as possible. I'll cover the features, the pricing options, and anything else I can think of that might be of benefit if you're deciding to sign up with Bluehost and get your sites up and running. So without further ado, let's check it out.
Bluehost is one of the biggest web hosting companies in the world, getting both enormous marketing support from the company itself and from the affiliate marketers who promote it. It is a huge company that has been around for a very long time, has a large reputation, and is definitely one of the top choices when it comes to web hosting (certainly within the top three, at least in my book). But what is it exactly, and should you get its services? Today I will answer everything you need to know, assuming you are a blogger or a business owner who is looking for a web host and doesn't know where to start, because it's a great option for that audience in general.
Let's imagine you want to host your sites and make them visible. Okay? You already have your domain name (which is your website address, or URL) and now you want to "turn the lights on". You need some hosting... To accomplish all of this, and to make your site visible, you need what is called a "server". A server is a black box, or machine, that stores all your website data (files such as images, text, videos, links, plugins, and other information). Now, this server has to be on constantly and it has to be connected to the internet 100% of the time (I'll be mentioning something called "downtime" later on). It also needs (without getting too technical or into detail) a file transfer protocol, commonly called FTP, so it can show web browsers your website in its intended form. All these things are either expensive or require a high degree of technical skill (or both) to set up and maintain. And you could absolutely go out there, learn these things yourself and set them up... but instead of buying and maintaining a server, why not just "rent hosting" instead?
This is where Bluehost comes in. You rent their servers (called shared hosting) and you launch a website using those servers. Since Bluehost keeps all your files, the company also lets you set up your content management system (CMS, for short), such as WordPress, for you. WordPress is an extremely popular CMS... so it just makes sense to have that option available (almost every hosting company now has this option too). In short, you no longer need to set up a server and then separately integrate the software where you build your content. It's already rolled into one package.
Is it safe to have Bluehost take care of your websites? Well... imagine if your server were in your house. If anything happened to it at all, all your files would be gone. If something went wrong with its internal processes, you'd need a technician to fix it. If something overheated, broke down or got damaged... that's no good! Bluehost takes all these headaches away and takes care of everything technical: pay your server "rent", and they will look after everything. And once you buy the service, you can then start focusing on adding content to your website, or put your effort into your marketing campaigns.
What Services Do You Get From Bluehost?
Bluehost offers a myriad of different services, but the main one is hosting, of course. The hosting itself comes in different kinds, by the way: you can rent a shared server, have a dedicated server, or alternatively a virtual private server. For the purposes of this Bluehost review, we will focus on hosting services and the other services that a blogger or an online entrepreneur would need, rather than going too deep into the rabbit hole and discussing the services aimed at more experienced users.
- WordPress, WordPress PRO, and eCommerce - these hosting services are the packages that let you host a website using WordPress and WooCommerce (the latter of which enables you to do eCommerce). After purchasing any of these packages, you can start building your site with WordPress as your CMS.
- Domain Marketplace - you can also buy your domain from Bluehost instead of from other domain registrars. Doing so makes it easier to point your domain to your host's name servers, since you're using the same marketplace.
- Email - once you have purchased your domain, it makes sense to also get an email address connected to it. As a blogger or online business owner, you should almost never use a free email service like Yahoo! or Gmail; an email like that makes you look unprofessional. Fortunately, Bluehost gives you one for free with your domain.
Bluehost also offers dedicated servers. And you may be asking, "What is a dedicated server anyway?" Well, the thing is, the basic web hosting packages from Bluehost can only handle so much traffic for your site, after which you'll need to upgrade your hosting. The reason is that the standard servers are shared. What this means is that one server can be serving two or more sites at the same time, one of which can be yours. What does this mean for you? It means that the single server's resources are shared, and it is performing multiple jobs at any given time. Once your website starts to hit 100,000 site visits per month, you are going to need a dedicated server, which you can also get from Bluehost for a minimum of $79.99 per month. This is not something you should worry about when you're starting out, but you should keep it in mind for sure.
Bluehost Pricing: How Much Does It Cost?
In this Bluehost review, I'll be focusing mostly on the Bluehost WordPress hosting plans, since they are the most popular and very likely what you're looking for and what will suit you best (unless you're a huge brand, business or website). The three available plans are as follows:
- Basic Plan - $2.95 per month / $7.99 regular price
- Plus Plan - $5.45 per month / $10.99 regular price
- Choice Plus Plan - $5.45 per month / $14.99 regular price
The first price you see is the price you pay upon sign-up, and the second price is what the price becomes after the first year of being with the company. So basically, Bluehost is going to charge you on an annual basis. You can also choose the number of years you want to host your site with them. If you choose the Basic plan, you will pay $2.95 x 12 = $35.40 starting today, and by the time you enter your 13th month, you will then pay $7.99 per month, which is also billed annually. If that makes any sense.
If you are serious about your website, you should 100% take the three-year option. This means that for the Basic plan, you will pay $2.95 x 36 months = $106.20. By the time you hit your fourth year, that is the only time you will pay $7.99 per month. If you think about it, this plan will save you around $120 over the course of three years. It's not much, but it's still something. If you want to have more than one website (which I highly recommend, and if you're serious, you'll probably be getting more at some point), you'll want to use the Choice Plus plan. It'll allow you to host unlimited websites.
What Does Each Plan Offer?
So, in the case of the WordPress hosting plans (which are similar to the shared hosting plans but more tailored towards WordPress, which is what we'll be focusing on), the features are as follows. For the Basic plan, you get:
- One website only
- Secured website via SSL certificate
- Maximum of 50GB of storage
- Free domain for a year
- $200 marketing credit
Bear in mind that domains are purchased separately from the hosting. You can get a free domain with Bluehost here. For both the Bluehost Plus hosting and Choice Plus, you get the following:
- Unlimited number of websites
- Free SSL certificate
- No storage or bandwidth limit
- Free domain for one year
- $200 marketing credit
- 1 Office 365 mailbox that is free for 30 days
The Choice Plus plan has the added benefit of the CodeGuard Basic backup option, a backup system where your files are saved and duplicated. If any crash happens and your site data disappears, you can restore it to its original form with this feature. Notice that although both plans cost the same up front, the Choice Plus plan then defaults to $14.99 per month, regular price, after the set number of years you've chosen.
What Are The Advantages Of Using Bluehost?
So, why choose Bluehost over other hosting services? There are hundreds of web hosts, many of which are resellers, but Bluehost is one of a select few that have stood the test of time, and it's probably the most well known out there (and for good reason). Here are the three main advantages of choosing Bluehost as your web hosting provider:
- Server uptime - your website will not be visible if your host is down; Bluehost has over 99% uptime. This is extremely important when it comes to Google SEO and rankings. The higher the better.
- Bluehost speed - how fast your server responds determines how quickly your website shows in a browser; Bluehost is lightning fast, which means you will reduce your bounce rate. Albeit not the best when it comes to loading speed, it's still hugely important to have a fast server, to make the user experience better and improve your rankings.
- Unlimited storage - if you get the Plus plan, you don't need to worry about how many files you store, such as videos - your storage capacity is unlimited. This is really important, because you'll probably run into storage issues later down the track, and you don't want this to ever be a hassle.
Finally, customer support is 24/7, which means that no matter where you are in the world, you can contact the support team to fix your website issues. Pretty standard these days, but we take it for granted... it's also very important.
Additionally, if you got a free domain with them, a $15.99 fee will be deducted from the amount you originally paid when you are refunded (I imagine this is because it kind of takes the domain "off the market"; I'm not sure about this, but there probably is a hard cost for registering it). Lastly, any requests for a refund after thirty days... are void (although in all honesty... they should probably be strict here). So as you can see, this isn't necessarily a "no questions asked" policy, like with some of the other hosting options out there, so make sure you're okay with the policies before going ahead with the hosting.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361064.69/warc/CC-MAIN-20211202024322-20211202054322-00088.warc.gz
CC-MAIN-2021-49
14,052
85
http://www.chegg.com/homework-help/algebra-and-trigonometry-9th-edition-chapter-6.8-solutions-9780321716569
code
Growth of an Insect Population. The size P of a certain insect population at time t (in days) obeys the law of uninhibited growth.
(a) Determine the number of insects at t = 0 days.
(b) What is the growth rate of the insect population?
(c) What is the population after 10 days?
(d) When will the insect population reach 800?
(e) When will the insect population double?
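The constants for this particular problem are not reproduced above, but parts (a)-(e) all follow from the general uninhibited-growth model. As a sketch, with placeholder symbols $P_0$ (initial population) and $k$ (growth rate):
\[ P(t) = P_0 e^{kt} \]
\[ P(0) = P_0, \qquad P(10) = P_0 e^{10k} \]
\[ P(t) = 800 \implies t = \frac{1}{k}\ln\frac{800}{P_0}, \qquad P(t) = 2P_0 \implies t = \frac{\ln 2}{k} \]
so the doubling time in (e) depends only on $k$, not on the starting size.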
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661555.40/warc/CC-MAIN-20160924173741-00283-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
366
6
http://forum.tabletpcreview.com/threads/music-player-some-thoughts.45945/page-3
code
Discussion in 'Asus (Android)' started by d.goryachev, Nov 12, 2011. Sorry, but still no luck... With same message in console, if I'm confused Now i'll try to remove and install it again on my eee note. files are present in /usr/local/lib... let me try /usr/local/eTablet/bin/music/music/: 3: mpd_pid: not found It didn't give the other error message. So 1 error is solved I guess... /usr/local/eTablet/bin/music/music: 3: mpd_pid: not found music directory is not a directory : "/eTablet/music" Create dir /eTablet/music or make link to external sd card. May be it worth to check existance of /mnt/extsdcard/music while installing? I've reinstalled it, all working propertly changed ipk again, added check /mnt/extsdcard/music existance before symlinking to it /eTablet/music was already created and /mnt/extsdcard/music is available. Contain the same files. Just downloaded the file and installed. No change, still having the mpd_pid: not found message. what you see in console if call music directory is not a directory: "/eTablet/music" BTW I haven't inserted my sd card right now. Edit: After adding the sd card and rescan: Failed to bind to '[::]:6600': Failed to create socket: Address family not supported by protocol Separate names with a comma.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00119.warc.gz
CC-MAIN-2021-10
1,254
20
https://www.thestudentroom.co.uk/showthread.php?t=6719456&utm_source=facebook&utm_medium=fblikebutton&utm_campaign=thread
code
Maths A-Level again im so badWatch I have a pack of playing cards, numbered 1-52. They are in order with 1 on the top and 52 on the bottom. I go through the pack discarding every other card, so I end up with a new pile which has 1 on top, then 3, 5, 7.... up to 51. I do the same with the new pile, and so on, getting even smaller piles. NOTE: If the last card in the pile is one i keep, I will discard the top card of the subsequent pile. What is the last card i'm left with? What if i do it again, but discard the 1, keep the 2m etc...? What if i start with 100 cards?
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141716970.77/warc/CC-MAIN-20201202205758-20201202235758-00309.warc.gz
CC-MAIN-2020-50
570
6
https://tess.elixir-europe.org/events?city%5B%5D=Bogota&city%5B%5D=Brussels&city%5B%5D=Nijmegen&city%5B%5D=Liverpool&city%5B%5D=Norwich&city%5B%5D=Kuala+Lumpur&include_expired=true&organizer%5B%5D=DEST&organizer%5B%5D=PerkinElmer&organizer%5B%5D=ISCB
code
Accelerating Bioinformatics through Scientific & Technological Innovation 30 November - 2 December 2011 Kuala Lumpur, MalaysiaAccelerating Bioinformatics through Scientific & Technological Innovation http://www.incob2011.org/ https://tess.elixir-europe.org/events/accelerating-bioinformatics-through-scientific-technological-innovation 2011-11-30 09:00:00 UTC 2011-12-02 00:00:00 UTC ISCB Renaissance Hotel, Kuala Lumpur, Malaysia Renaissance Hotel Kuala Lumpur Malaysia Bioinformatics meetings_and_conferences Revolutionaries for Global Health Summit 6 - 7 March 2013 Brussels, BelgiumRevolutionaries for Global Health Summit http://now.eloqua.com/e/es.aspx?s=643&e=219032&elq=c4d10c31f5d744a99829fcc7d0434704 https://tess.elixir-europe.org/events/revolutionaries-for-global-health-summit 2013-03-06 00:00:00 UTC 2013-03-07 00:00:00 UTC PerkinElmer Crowne Plaza - La Palace, Brussels, Belgium Crowne Plaza - La Palace Brussels Belgium meetings_and_conferences Philosophy of Biological Systematics 8 - 12 September 2014 Brussels, BelgiumPhilosophy of Biological Systematics http://www.taxonomytraining.eu/content/philosophy-biological-systematics https://tess.elixir-europe.org/events/philosophy-of-biological-systematics 2014-09-08 01:00:00 UTC 2014-09-12 01:00:00 UTC DEST Brussels, Belgium Brussels Belgium Systems biology Bioinformatics workshops_and_courses Note, this map only displays events that have geolocation information in TeSS. For the complete list of events in TeSS, click the grid tab.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889574.66/warc/CC-MAIN-20201025154704-20201025184704-00494.warc.gz
CC-MAIN-2020-45
1,517
11
http://www.designerstalk.com/forums/web-design/2012-text-wrapping-css.html
code
|Home||Register||FAQ||Members List||Search||Today's Posts||Mark Forums Read| |04-12-2003, 10:10||#1 (permalink)| css is for divs Join Date: Feb 2003 Text Wrapping with CSS Waiting for guru.... I have a div area that acts as a "main content" area, it is scrollable when necessary, untill now it has had an image at the top followed by text. However now I want to have an image on the left, with text to the right of the image and then below (wrapped with a space of about 10px) all in the scrollable area. How can I achieve this with CSS? (I'll may be able to upload an example but thing's are a bit complex from work)
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383160/warc/CC-MAIN-20130516092623-00074-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
617
10
https://meta.stackexchange.com/questions/211470/can-a-stack-exchange-website-meta-override-a-decision-posted-on-so-meta?noredirect=1
code
Each Stack Exchange community has somewhat different norms. As long as a given Stack Exchange community respects the platform (i.e. it doesn't do things like turn Q&A into a debate mechanism), they are free to set their norms as they see fit. In the case of Cross-Validated, it appears that the preference is that answers not be copy/pasted on the site at all. In practice, this is in keeping with the spirit of Q&A since, if the questions are not duplicated, the answers can almost certainly be customized for each question, avoiding the copy/paste problem altogether. I would also note that, while the two original meta questions that you cited give a fairly loose interpretation of answer duplication, they are both fairly old, and the current community consensus on this issue is probably closer to that which is advocated by Cross-Validated. For the current network-wide policy, see: Is it acceptable to add a duplicate answer to several questions?
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100942.92/warc/CC-MAIN-20231209170619-20231209200619-00576.warc.gz
CC-MAIN-2023-50
953
5
https://lists.wikimedia.org/hyperkitty/list/translators-l@lists.wikimedia.org/message/JEGIYII2AB6JBFGEVTZ6S6E3ERBO3FFJ/
code
On Thu, Jan 19, 2017 at 6:58 PM, Haytham Aly <haytham.hammam(a)gmail.com> wrote: This idea is brilliant. My own concern for Arabic is that there are two major ways of displaying Gregorian month names: transliteration, as well as the Assyrian names. Usually the transliterated names suffice, but I prefer using both, divided by a slash. This is due to differences in official use, since the transliterated names are used in Egypt, Sudan, Libya, Yemen, and the Gulf states, while the Assyrian names are used in Iraq, Syria, Lebanon, Jordan and Palestine. Could this automation function render both, or just the common transliterated month names? It would be a bonus to have both displayed, though the transliterated month names alone would suffice. While I could be mistaken, I don't think there's a good way to automate both, I'm afraid. At least not that I'm aware of.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00201.warc.gz
CC-MAIN-2023-06
847
13
https://serverfault.com/questions/20909/is-it-safe-to-have-sql-server-auto-shrink-turned-on
code
There are many SQL Server options that can be enabled for databases, and one of the most misunderstood ones is auto-shrink. Is it safe? If not, why not? (I originally asked as a regular question but then found out the correct method - thanks BrentO.) I've come across this several times now on ServerFault and want to reach a nice wide audience with some good advice. If people frown on this way of doing things, downvote and I'll remove this gladly. Auto-shrink is a very common database setting to have enabled. It seems like a good idea - remove the extra space from the database. There are lots of 'involuntary DBAs' out there (think TFS, SharePoint, BizTalk, or just regular old SQL Server) who may not know that auto-shrink is positively evil. While at Microsoft I used to own the SQL Server Storage Engine and tried to remove the auto-shrink feature, but it had to stay for backwards compatibility. Why is auto-shrink so bad?
- The database is likely to just grow again, so why shrink it?
- Shrink-grow-shrink-grow causes file-system-level fragmentation and takes lots of resources.
- You can't control when it kicks in (even though it's regular-ish).
- It uses lots of resources. Moving pages around in the database takes CPU, lots of I/O, and generates lots of transaction log.
- Here's the real kicker: data file shrink (whether auto- or not) causes massive index fragmentation, which leads to poor performance.
I did a blog post a while back that has an example SQL script that shows the problems it causes and explains in a bit more detail. See Auto-shrink - turn it OFF! (no advertising or junk like that on my blog). Don't get this confused with shrinking the log file, which is useful and necessary on occasion. So do yourselves a favor - look in your database settings and turn off auto-shrink. You should also not have shrink in your maintenance plans, for exactly the same reason. Spread the word to your colleagues. Edit: I should add this, reminded by the second answer - there's a common misconception that interrupting a shrink operation can cause corruption. No, it won't. I used to own the shrink code in SQL Server - it rolls back the current page move that it's doing if interrupted. Hope this helps!
It isn't "unsafe" - it won't damage anything. But it is not recommended for production environments, where the database may decide to go off and start an expensive rearrangement exercise just before a pile of requests comes in, making those requests take longer to serve. You are much better off scheduling shrink operations along with other maintenance operations such as backups (actually, after backups - it'll reclaim more from the transaction log that way). Or just not shrinking at all unless there is a growth problem - you can always set up a monitor to let you know when the unused allocated space grows beyond a certain ratio or fixed size. IIRC the option is off by default for all databases in all MSSQL editions except Express. There is a whitepaper available on TechNet that explains SQL maintenance in more detail.
I've seen a SQL server with both Autogrow and Autoshrink enabled. This (relatively powerful) server was terribly slow, because all it did all day was shrink and grow the database files. Autoshrink can be useful, but I'd recommend two things:
- Turn Autoshrink off by default.
- Document your server configs, so you know where Autogrow and Autoshrink are enabled and where they're not.
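For reference, a minimal T-SQL sketch of finding and fixing the setting - the database name is illustrative, and the sys.databases catalog view requires SQL Server 2005 or later:
-- Which databases currently have auto-shrink enabled?
SELECT name
FROM sys.databases
WHERE is_auto_shrink_on = 1;
-- Turn it off for a given database
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;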
The only time I've been forced to shrink a database was to refresh a copy on a test server with less disk space (insufficient to hold the production database). The production database's files had generous free space; unfortunately, you have to restore a database with the same file sizes as you backed it up with. So I had no choice but to shrink production before backing it up. (The shrink took ages, lots of resource was consumed, and the subsequent transaction log growth was problematic.) Also check out this video tutorial... watch Paul Randal demonstrate how shrink and auto-shrink can cause serious fragmentation problems for your database: http://wtv.watchtechvideos.com/topic194.html
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510528.86/warc/CC-MAIN-20230929190403-20230929220403-00229.warc.gz
CC-MAIN-2023-40
4,131
26
http://tjenepengeronline.info/2938/
code
Deep in the money options trading: out in the community server to chat, game, or get help with programming.
Binary T Shirts: Look good with this fashionable binary t-shirt.
I have a bmp that is just a red square. I have to write a program with functions to make it have white stripes. Things I would need to do: load the bmp and...
The io module provides Python's main facilities for dealing with various types of I/O. There are three main types of I/O: text I/O, binary I/O and raw I/O. These...
You are here: Home / Dive Into Python 3 / Difficulty level / Serializing Python Objects. Every Saturday since we've lived in this apartment, I...
3. Processing Raw Text. The most important source of texts is undoubtedly the Web. It's convenient to have existing text collections to explore, such as the corpora we...
Forex one tv izle. Unofficial Windows Binaries for Python Extension Packages, by Christoph Gohlke, Laboratory for Fluorescence Dynamics, University of California, Irvine.
The official home of the Python Programming Language.
You are here: Home / Dive Into Python 3 / Difficulty level / Files. A nine mile walk is no joke, especially in the rain. Harry Kemelman, The...
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741324.15/warc/CC-MAIN-20181113153141-20181113175141-00509.warc.gz
CC-MAIN-2018-47
1,131
7
https://www.apsparks.com/social/visual-perspective-taking/ph3-application-in-daily-activities/
code
Visual Perspective Taking: Ph3 Application In Daily Activities In this video This phase is designed to teach a student to show a picture to other teachers who are sitting in a circle. Subsequently, the student is taught to prevent others from seeing his cards in a natural game. Stevens discusses future phases.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00168.warc.gz
CC-MAIN-2023-50
311
3
http://blogchain.info/post/digital-asset-platform-with-ex-goldman-partner-as-co-founder-gets-bahrain-crypto-license
code
Digital Asset Platform With Ex-Goldman Partner as Co-Founder Gets Bahrain Crypto License
CoinDesk: Bitcoin, Ethereum, Crypto News and Price Data
2024 Apr. 03, 19:00
ARP Digital says it's the "first and only Central Bank-licensed OTC service provider specialized in digital asset-structured products."
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817765.59/warc/CC-MAIN-20240421101951-20240421131951-00056.warc.gz
CC-MAIN-2024-18
328
5
https://projectcor.com/blog/is-next-js-the-same-as-react-js/
code
React has a huge array of benefits for developers. Some of them are:
- It is easy to learn.
- It's also easy to use with your project.
- It's flexible.
- It has reusable components.
- It is high-performance.
- It improves productivity.
- It offers code stability.
- It offers plenty of tools for the developer.
- It has a vast ecosystem.
- It facilitates strong web development.
- Many of the top companies in the world, including a number of Fortune 500 companies, use React.
When it comes to React projects, there are a few choices of tools to help you, such as Gatsby, Next.js, and Create-React-App. Here, we'll take a look at the most popular of these options: Next.js and Create-React-App, exploring the advantages, disadvantages, and use cases and examples for each.
Server-side rendering (SSR) vs. client-side rendering (CSR)
Create-React-App produces apps that render on the client: the browser downloads a JavaScript bundle and builds the page after it loads. In contrast, Next.js uses server-side rendering (SSR), which allows the application to render the page directly on the server instead of the browser (a minimal sketch of the difference appears at the end of this article).
What do React vs. Next.js projects look like?
Create-React-App is ideal for those looking to familiarize themselves with React. With dependencies such as webpack and Babel, it streamlines operations and allows developers to leverage core features of the library to build front-end web applications. It also serves as a time-saving tool and allows you to build a single-page application (SPA).
Advantages of using Create-React-App
It is easy to learn and use. Create-React-App has a low learning curve, and developers can leverage plenty of resources, from tutorials to documentation.
You will gain flexibility. Developers have the freedom to choose any routing library they want - there are no rules when it comes to Create-React-App.
It is client-side rendered. Client-side rendering has some advantages. For example, you can select any host for your project and deploy your products with ease.
Disadvantages of Create-React-App
There is no SEO support. If you want your web app to have strong SEO, Create-React-App, as a client-side framework, is not the best choice.
It's not very customizable. Customization, too, is quite difficult with CRA. There are no built-in tools available, so you would have to customize the webpack configuration with third-party tools.
It makes it difficult to perform out-of-the-box functions. There are certain limitations to CRA, such as the fact that developers need additional tools, often ones with a steep learning curve, in order to extend its capabilities.
Created by Vercel, Next.js is a framework that enables developers to build single-page applications and performant web apps through server-side rendering. It also offers static-site generation, pre-rendering, excellent functionality, and other features. Next.js is an extraordinarily popular choice.
Advantages of using Next.js
It's ultra fast. The speed of Next.js is one of its main advantages. This ultra-fast performance leads to shorter build times.
You can use API routes. Looking to use a third-party API? Next.js facilitates this, offering API routes. That way, you can build APIs directly within the application.
It is highly customizable. In contrast to CRA, Next.js is easy to customize. You can add both Babel plugins and webpack loaders, for example.
Deployment is simple with Next.js. You can easily deploy your React apps quickly, with no hand-holding.
Disadvantages of using Next.js
There is a lack of flexibility. Developers are only able to use a file router with Next.js.
Moreover, you are required to use a Node.js server for dynamic routes. There are also few front pages that are built in.
There is no state manager built into the framework. Next.js has no built-in state manager; if you need one, you must use another tool to provide it.
It's not ideal for simple apps. While Next.js is a solid choice for more complex web apps and web pages, when you're working on a fairly simple product, it could make the process unnecessarily complicated.
When to use Next.js
You know the advantages of using Next.js. But when should you actually use it? Major businesses and organizations like Hulu, Netflix, GitHub, Nike, and Ticketmaster all have it in their stacks. Here are some examples of use cases.
To build a landing page. Landing pages are one standout application of Next.js. It's also an optimal tool for creating other online collateral used for marketing purposes.
Multiple types of websites are another use of Next.js. Thanks to ultra-fast loading times, the user experience is greatly improved when developers leverage the framework. Even when the particular device typically has slower load times, Next.js will aid the performance of the website.
When you need strong SEO. Search engine optimization is critical for many businesses, particularly as it pertains to their marketing efforts. Next.js is ideal for facilitating better SEO, due to server-side rendering. This is especially true in contrast to Create-React-App, which offers no built-in support for SEO. So, when you're looking to drive traffic to your website, Next.js is the better option.
To create eCommerce stores. eCommerce stores demand a variety of features and functions, including high performance and strong SEO. Next.js supports the development of eCommerce stores and webshops, enabling stronger engagement and facilitating traffic to them. In fact, Next.js has an eCommerce starter kit that enables software developers to create webshops easily and quickly.
For anything that demands excellent performance. Ultimately, anything you build that requires strong performance will more than likely benefit from Next.js. If this is a priority for your website or application, you should consider the framework for your project.
When to use Create-React-App
Meanwhile, Create-React-App is ideal for a separate set of products and use cases. Facebook, Tesla, Reddit, Airbnb, Netflix, and Dropbox are just some of the huge names that leverage the tool for their projects. So, when should you use Create-React-App? Here are the main instances.
To create gated applications. If you only want your products to be available to pre-authenticated users, then gated applications are the best choice. These websites and applications don't require server-side rendering, so Create-React-App and its client-side rendering will more than suffice.
For web applications. Likewise, web applications don't typically require server-side rendering. Usually, they perform well without SSR. So, if you're building web applications, Create-React-App will help you cut costs while maintaining a high level of service.
For single-page apps. Single-page applications function inside the browser and do not require reloading the page while someone is using them. Gmail is one example. Development is fairly simple, more so than with multi-page applications. Create-React-App facilitates building single-page applications.
While Next.js can also support development of these products, CRA is more commonly the option developers go with for this purpose.
When the developer is relatively new to React. When a developer is just starting out with React, Create-React-App will let you learn the ropes quickly, providing a means of attaining familiarity with the framework.
The bottom line
When you're choosing a React framework for your JS project, there are several options. Create-React-App and Next.js have emerged as the two top contenders for building strong React applications. If you're in search of tools for learning how to become more familiar with React, Create-React-App, and/or Next.js, there is an extensive community to help support you, give you advice on the best approaches, and answer questions. There are also plenty of courses, tutorials, books, websites, guides, documentation, and other resources available to help you navigate these tools, although for the most part the learning curve is not too steep - and the payoff is well worth the investment.
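As promised in the SSR vs. CSR section, here is a minimal sketch of server-side rendering with the Next.js pages router - the file path and prop name are illustrative:

// pages/index.js - this page is rendered on the server for every request
export async function getServerSideProps() {
  // Runs only on the server; the result is passed to the component as props
  return { props: { renderedAt: new Date().toISOString() } };
}

export default function Home({ renderedAt }) {
  // The browser receives ready-made HTML instead of an empty shell plus a JS bundle
  return <p>Rendered on the server at {renderedAt}</p>;
}

A Create-React-App project has no equivalent hook: it always ships a client bundle that builds the page in the browser.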
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573533.87/warc/CC-MAIN-20220818215509-20220819005509-00009.warc.gz
CC-MAIN-2022-33
8,041
76
http://www.linuxquestions.org/questions/linux-security-4/compromise-linux-system-using-non-root-account-437481/
code
Linux - Security: This forum is for all security-related questions. Questions, tips, system compromises, firewalls, etc. are all included here.
I'm definitely not an expert on this, but I think there's not a whole lot malicious software can do from a non-root account (as long as the suid bit isn't set) except trash your home directory. Someone please correct me if I'm wrong, but I think it's even harder to make Java do anything virus-like, so you shouldn't have much to worry about.
It is quite hard to write a program that can compromise a computer that is kept up to date. Having said that, if that unsafe program provides a way for an actual user to get inside, for example by giving him a remote prompt, he could be on your machine and try to hack it with the privileges of your user. For example, he could put an alias for su that logs the password for him and then calls the actual su, so you don't notice (although that can also be done by a program).
Well, since it is Java, and it runs as an application, it can just as well execute arbitrary code (for example, create a shell script and put it in .bashrc or something). ---- The "su" alias is smart, I'll check for that. Just to give you an idea... I've downloaded a crack for JBuilder 2006... not that I really need the extra features over the free Foundation edition, but I wanted to try it out longer... guilty, I know. What can happen to my Linux box if I, using my regular non-admin account, run some untrusted software? / The system is patched / I'm behind a router.
Next to the already mentioned munging of whatever is in your home directory, there are a few things I can think of. Since you're behind a router I'll assume you know how to (and do) block initial inbound traffic from outside the private network (but how about egress filtering?), and since you patch everything to current, the only things that can hit you on that front seem to be reconnaissance, misconfiguration and 0-day exploits. So let's focus on the fact that local (or private network) account users are most likely to be considered "trusted" by local applications or running services in the network.
What can we do?
- Information gathering: Maybe you've got another Firefox/KDE/whatever-else 0-day you need to retrieve specific version info for? Look at dmesg? See what processes are running or which user accounts were used on the box recently? What's the last time root logged in? Look on the private network for servers that are only protected by the router? Or maybe just take an interest in local logs, mail or docs for social engineering (or why not: extortion)?
- Account bruteforcing: So you set up a drop-app for SSH. But what about local accounts? What's the last time root checked them or got it reported automagically? Are there any users or commands we can sudo to with NOPASSWD?
- Downloading & executing something else: Skype executes traceroute on application start. Does that have any paths prepended, or could we manage to execute a fake ./traceroute from the CWD? Or maybe I'm allowed outbound access to open relays, or able to look for proxies on port TCP/80? Or can I wget you Something Completely Different? (Apart from larches.)
- Resource starvation: Maybe I'll just fill up your / or /var/log before attempting to do Something Completely Different.
* Some actions are no cause for alarm when reviewed alone but are only interesting when you piece the chain of events together. If any of you think the above is FUD or dismiss it as being hypothetical, then you're not looking at what you should be looking at, and that's any form of missing access restriction.
And what is more important, how do I make sure it has not been compromised? Verify against the checksums provided with the package. If none are, bug the developers/maintainers to provide a GPG-signed package or at least MD5 and SHA1 sums. Run under Mandatory Access Controls (MAC). Run in a sandbox (like QEMU) under strace or something else that monitors system calls. Use a file integrity checker to monitor changes.
What if I continue to run the suspicious program (it's a Java (jar) application)? Noticing how you classify it as "suspect" yourself, continuing to run it would not be advisable without looking for a qualitatively good replacement, or MAC and proper verification.
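A minimal shell sketch of the verification step mentioned above - the filenames are illustrative:
# Compare against the sums published by the project (check the output by hand)
sha1sum suspicious-app.jar
md5sum suspicious-app.jar
# If the project publishes a detached GPG signature alongside the download:
gpg --verify suspicious-app.jar.asc suspicious-app.jar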
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189903.83/warc/CC-MAIN-20170322212949-00025-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
5,716
30
https://docs.beam.cloud/account/managing-apps
code
Deleting an app You can delete an app in the dashboard, by selecting: App Settings -> General. Deleting is a permanent action. There is no way for us to restore an app once deleted! Stopping a deployment You can stop a deployment in the dashboard, by selecting: Deployments and clicking the ... icon on a specific deployment.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100286.10/warc/CC-MAIN-20231201084429-20231201114429-00381.warc.gz
CC-MAIN-2023-50
325
8
http://travel.stackexchange.com/questions/tagged/ljubljana?sort=votes
code
Ljubljana: capital city of Slovenia.
Is there any simple public transport route from Ljubljana Airport to Lake Bohinj? I'm looking for a reasonably direct, cost-effective (so, probably not taxi) way to get from Ljubljana airport (Jože Pučnik) to Lake Bohinj, ideally without going via Ljubljana, which is the wrong way. ... May 6 '14 at 14:07
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461862134822.89/warc/CC-MAIN-20160428164854-00009-ip-10-239-7-51.ec2.internal.warc.gz
CC-MAIN-2016-18
2,585
58
https://twittercommunity.com/t/displaying-complete-tweet-history-of-a-specific-user-using-linqtotwitter/19694
code
I am trying to display a given user's entire tweet history on an ASP.NET Web Forms page. Everything is working fine except for the number of tweets I get back from my query. If I do not specify a 'Count', I get back 19 tweets; I thought the default was supposed to be 20, so that may help identify the source of the problem. My test user has 247 tweets, and if I specify this I get 178 back. I've tested this with other users and I have the same issue (although the number returned varies). I have also tried setting the 'Count' to the maximum int value. var userStatusResponse = (from tweet in twitterCtx.Status where tweet.Type == StatusType.User && tweet.ScreenName == txtScreenName.Text && tweet.Count == statusCount select tweet); It's been 3 days of brain punishment over here, so I would sincerely appreciate any help.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376831933.96/warc/CC-MAIN-20181219090209-20181219112209-00365.warc.gz
CC-MAIN-2018-51
836
3
https://jmla.mlanet.org/ojs/jmla/datasharingpolicy
code
The JMLA requires authors of Original Investigation, Case Report, and Special Paper articles to (1) place the de-identified data associated with the manuscript in a repository and (2) include a Data Availability Statement in the manuscript describing where and how the data can be accessed. Exceptions to this policy will be made in rare cases in which de-identified data cannot be shared due to their proprietary nature or participant privacy concerns. Definition of Data The JMLA defines data as the digital materials underlying the results described in the manuscript, including but not limited to spreadsheets, text files, interview recordings or transcripts, images, videos, output from statistical software, and computer code or scripts. Authors are expected to deposit at least the minimum amount of data needed to reproduce the results described in the manuscript. Data files should be accompanied by documentation describing the contents of the data files (e.g., data dictionaries, codebooks, readme files). Materials supporting the methodology described in the manuscript (e.g., survey instruments, rubrics, assessment instruments) are not considered data. These materials should be labeled as appendixes and included with the submitted manuscript as supplementary files. Data Availability Statement A Data Availability Statement should be placed in the manuscript at the end of the main text before the references. This statement must include (1) an indication of the location of the data; (2) a unique identifier, such as a digital object identifier (DOI), accession number, or persistent uniform resource locator (URL); and (3) any instructions for accessing the data, if applicable. An example statement is as follows: “Data associated with this article are available in the Open Science Framework at <insert URL>.” If there are no data associated with the manuscript, this must be indicated in the Data Availability Statement as follows: “There are no data associated with this article.” The Data Availability Statement will be included in the published version of the manuscript. Exceptions to the JMLA Data Sharing Policy Exceptions to the JMLA data sharing policy will be made in rare cases in which data cannot be shared due to their proprietary nature or ethical concerns. If data are not owned by the authors, the data source and contact information should be noted in the Data Availability Statement. If data sharing is not allowed by an IRB and/or would risk violating participants' privacy or confidentiality agreements, this should be noted in the Data Availability Statement, such as in the following example statement: “Data associated with this article cannot be made publicly available because they contain personally identifiable information. Access to the data can be requested from the corresponding author and may be subject to IRB restrictions.” Authors of manuscripts describing humans subjects research are encouraged to seek IRB approval for data sharing before their study commences. Data can be placed in any repository that makes data publicly available and provides a unique persistent identifier, including institutional repositories, general repositories (e.g., Figshare, Open Science Framework, Zenodo, Dryad, Harvard Dataverse, OpenICPSR), or discipline-specific repositories that accept data of a particular format or in a particular domain. Repositories that allow restricted access to data are also acceptable. A registry of research data repositories can be found at re3data.org. 
When possible, authors are encouraged to apply a license at least as permissive as a Creative Commons Attribution License (CC-BY) to the data. Authors can choose to embargo the data until the date of article publication. Data Formats and Standards Authors are encouraged to use open data formats, to prepare and share documentation of the data (e.g., data dictionaries, codebooks, readme files), and to otherwise use FAIR Guiding Principles to facilitate the understandability and reusability of the data. Data should be appropriately de-identified to prevent revealing the identity of research participants. The Medical Library Association, the JMLA, and individual members of the JMLA editorial team are not liable for any harm or damage resulting from the insufficient de-identification of data associated with JMLA articles. Peer reviewers will not be explicitly asked to review the data, although they may request access to data that they consider essential to evaluating the manuscript. Peer reviewers may also make comments and suggest revisions to the Data Availability Statement. JMLA Data Sharing Policy Workflow At the time of manuscript submission, the editor checks that the Data Availability Statement appears in the manuscript and is as complete as possible. Placeholders can stand in for the repository name and persistent identifier if this information is not yet available and/or to preserve author anonymity during peer review. If the Data Availability Statement is missing, the manuscript will be returned to the authors for correction before it is sent forward for peer review. At the time of manuscript acceptance, the editor notifies authors that they are responsible for making the data available as described in the Data Availability Statement. The final version of the manuscript must contain a complete Data Availability Statement that includes a functional persistent identifier before the manuscript is scheduled for publication and sent forward for copyediting.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100710.22/warc/CC-MAIN-20231208013411-20231208043411-00124.warc.gz
CC-MAIN-2023-50
5,532
22
https://practicaldev-herokuapp-com.global.ssl.fastly.net/atrandafir
code
I'm currently continuing to learn about Yii2 in order to make the most of its features and architecture. I'm looking to try out some experiments with new frontend frameworks soon. Also playing around with ReactPHP and trying to learn the latest tech. Projects and hacks:
- Client work building custom software for our clients at HeavyDots
- Building a SaaS product, https://portfolee.app/, so that we can all better catalog the digital projects we've done or participated in
I'm happy and open to talk to anyone who thinks we have something interesting to share or can help each other, so don't hesitate to drop me a line at alex (at ) heavydots (dot) com or at @atrandafir
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878921.41/warc/CC-MAIN-20201022053410-20201022083410-00298.warc.gz
CC-MAIN-2020-45
652
5
https://www.goteaminternet.com/heres-what-you-need-to-know-about-hiring-python-developers/
code
Python is considered to be a versatile programming language which can be used for anything and everything - think of software development, mobile apps, web applications, and game development, to name a few. Python is incredible, but to get the best of it for your project, you need to consider hiring the right Python developer. Here's what you can expect.
Freelancer or a company: What's best?
Every project is different and comes with a few inherent challenges. You need to decide whether you want to go ahead with an independent Python developer or a company. Now, freelancers tend to be a lot cheaper, but there are also a few aspects on the downside. You only have one person working on the project, so it may take a lot more time, and as far as brainstorming ideas is concerned, you would be stuck with one person. Going with a company that specializes in Python is obviously a better idea, provided they fit your budget.
What to expect from a Python developer? Things to understand
Depending on the project at hand, the experience of the company you hire is of extreme importance. If they have done projects that are similar to yours in the past, it's an added advantage. The work portfolio of the company is something that determines whether you should be working with them. How the company handles business data and private information is the second thing that needs attention. Make sure that they are compliant with GDPR updates. Finally, when you are working with an outsourced team of Python developers, you should be able to retain some control over the work they do and how they handle your project.
Discuss the budget
Finally, you cannot hire a developer unless you are sure of what the project would cost. Most companies that deal in programming will offer an estimate for free, which is the best way to evaluate your options, although price shouldn't be the only reason to choose a Python developer. Check online now to shortlist a few companies, and don't miss out on asking for client references and an estimate.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818067.32/warc/CC-MAIN-20240421225303-20240422015303-00832.warc.gz
CC-MAIN-2024-18
2,009
10
https://www.learningguild.com/devlearn/sessions/session-details.cfm?event=380&from=sessionslist&showtype=&fromSelection=doc.3978&session=7016
code
117 Unpacking Badge Analytics: What Metadata Can Tell Us
10:45 AM - 11:45 AM Wednesday, September 30

The promise of open digital badges extends beyond their potential to recognize informal learning and professional development. Open badges also provide data that can be mined and analyzed to benefit learners, educators, and learning organizations, both locally and globally across the ecosystem. What does the current universe of badges look like? What practical data can be conveyed through badges? How can usable data be extracted from digital badges that can be used by learners and institutions of learning?

In this session you will be introduced to the Open Badges Infrastructure. You will see examples of badge system designs that use learner analytics and explore the potential for ecology-wide data analysis using a badge discovery platform under development. You will learn the benefits of badge data analysis for learners, educators, learning institutions, and researchers, and learn some of the current challenges in data collection, including pitfalls for badge system designers.

In this session, you will learn:
- Basics of open digital badges
- What kind of additional information can be associated with an open badge
- How badge metadata can help drive discovery and accessibility
- How to harness metadata for practical usage

Intermediate and advanced designers, developers, project managers, managers, and directors.

Technologies discussed in this session: Open Badges Infrastructure, application programming interfaces, web services, iOS applications, and learner analytics.

Anh Nguyen is a software engineer and developer of the Credmos badge discovery platform. Credmos is designed to aggregate digital badges into a single platform accessible to badge earners seeking to expand their collection of badges. Anh works for the UC Humanities Research Institute and HASTAC on the Digital Media and Learning Competition. Previously he designed the data architecture of a PCI-DSS compliant financial system that kept millions of credit card numbers and built Spigot.org, a unique digital media and learning aggregator.

Director of Badge Research

Sheryl Grant is director of badge research at the Humanities, Arts, Science, and Technology Alliance and Collaboratory (HASTAC), which administered the Badges for Lifelong Learning Competition that awarded over $3 million to 30 digital badge development projects in 2012. Her book What Counts as Learning: Open Badges for New Opportunities is a synthesis of lessons learned from the first year of badge system design across 30 projects. Sheryl is currently completing her PhD dissertation on badges and reputation systems.
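The session abstract stresses that usable data can be extracted from badge metadata. As a rough illustration of what that metadata looks like in practice, here is a minimal Python sketch that flattens a hypothetical Open Badges assertion into the fields a discovery platform might index; the field names loosely follow the shape of the Open Badges specification, but the sample record and the indexing logic are invented for this example.

```python
import json

# A hypothetical Open Badges assertion, shaped loosely after the public
# Open Badges specification. Real assertions are served as JSON by the issuer.
sample_assertion = json.loads("""
{
  "uid": "abc123",
  "issuedOn": "2015-06-01",
  "badge": {
    "name": "Data Wrangler",
    "description": "Awarded for cleaning and documenting a public dataset",
    "criteria": "https://example.org/badges/data-wrangler/criteria",
    "issuer": {"name": "Example Learning Org", "url": "https://example.org"},
    "tags": ["data", "open-science"]
  },
  "evidence": "https://example.org/portfolio/42"
}
""")

def index_record(assertion):
    """Flatten the metadata a badge discovery platform might index."""
    badge = assertion["badge"]
    return {
        "badge_name": badge["name"],
        "issuer": badge["issuer"]["name"],
        "issued_on": assertion["issuedOn"],
        "tags": badge.get("tags", []),
        "has_evidence": "evidence" in assertion,
    }

print(index_record(sample_assertion))
```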
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816070.70/warc/CC-MAIN-20240412194614-20240412224614-00261.warc.gz
CC-MAIN-2024-18
2,681
15
https://nips.cc/virtual/2023/poster/70705
code
The introduction of neural radiance fields has greatly improved the effectiveness of view synthesis for monocular videos. However, existing algorithms face difficulties when dealing with uncontrolled or lengthy scenarios, and require extensive training time specific to each new scenario. To tackle these limitations, we propose DynPoint, an algorithm designed to facilitate the rapid synthesis of novel views for unconstrained monocular videos. Rather than encoding the entirety of the scenario information into a latent representation, DynPoint concentrates on predicting the explicit 3D correspondence between neighboring frames to realize information aggregation. Specifically, this correspondence prediction is achieved through the estimation of consistent depth and scene flow information across frames. Subsequently, the acquired correspondence is utilized to aggregate information from multiple reference frames to a target frame, by constructing hierarchical neural point clouds. The resulting framework enables swift and accurate view synthesis for desired views of target frames. The experimental results demonstrate the considerable acceleration of training time achieved by our proposed method (typically an order of magnitude) while yielding outcomes comparable to prior approaches. Furthermore, our method exhibits strong robustness in handling long-duration videos without learning a canonical representation of video content.
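The core geometric step described in the abstract (lift a pixel with its estimated depth, displace it by the predicted scene flow, re-project into the neighboring frame) can be sketched in a few lines. This is not the authors' code: it assumes a simple pinhole camera with identity pose, and the intrinsics, depth, and flow values below are invented for illustration.

```python
import numpy as np

def pixel_to_world(u, v, depth, K):
    """Back-project pixel (u, v) with known depth through intrinsics K."""
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    return np.array([x, y, depth])

def correspondence(u, v, depth, flow_3d, K):
    """Where does a reference-frame pixel land in the target frame?
    Lift to 3D, displace by the scene flow, then re-project (identity pose)."""
    p = pixel_to_world(u, v, depth, K) + flow_3d
    return K[0, 0] * p[0] / p[2] + K[0, 2], K[1, 1] * p[1] / p[2] + K[1, 2]

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
print(correspondence(100, 120, depth=2.0, flow_3d=np.array([0.05, 0.0, -0.1]), K=K))
```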
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818452.78/warc/CC-MAIN-20240423002028-20240423032028-00486.warc.gz
CC-MAIN-2024-18
1,459
1
https://devrant.com/users/ImCypher
code
About: Homegrown coder. Gamer. Husband. Father. In reverse order.
Skills: C#, HTML, CSS, JS, Python, Bash
Joined devRant on 2/22/2018

4-step process. 1. Loud music on the way home. 2. Vent to the wife, because she listens and is awesome. 3. Kill stuff in games. ESO or any shooter. After all that, the next day is fresh and new and all is good and right in the world.

Trying to move on from a job that got my foot in the door but has absolutely no possibility of helping me grow anymore. It's the worst. Feeling comfortable but knowing that you're not being challenged and learning and growing. I'M TIRED OF FIXING YOUR DAMN SCANNER OR PRINTER!
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00552.warc.gz
CC-MAIN-2022-05
910
11
https://www.smarthomatic.org/integration/perl_server.html
code
Before integrating your SHC network into a PC application for home automation, you probably want to test the system first. A simple Perl script can be used to log data from devices and to implement simple control tasks for testing purposes. The script is tested on a Linux system with the base station connected to a virtual serial port over a USB-to-serial converter. You can change the script as needed for your testing. As currently implemented, the script does the following: - Data received from temperature sensors is logged to CSV files. - A simple regulation is implemented: a power switch is controlled depending on the humidity reported by one temperature + humidity sensor. The script can be found in the Git repository.
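The actual script is Perl and lives in the Git repository; purely to illustrate the two behaviours described above (CSV logging of sensor data, plus a humidity-driven power switch), here is a minimal Python sketch. The packet format, the thresholds, and the send_switch_command placeholder are all invented for the example; a real setup would read lines from the base station's serial port instead of a hard-coded list.

```python
import csv
import time

HUMIDITY_ON, HUMIDITY_OFF = 65.0, 55.0  # invented hysteresis thresholds

def send_switch_command(state):
    # Placeholder: a real setup would write a packet to the base station's
    # serial port here (e.g., via pyserial).
    print(f"switch -> {'ON' if state else 'OFF'}")

def handle_packets(lines, log_path="sensor_log.csv"):
    switch_on = False
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for line in lines:
            # Invented packet format: "<sensor_id>;<temperature>;<humidity>"
            sensor_id, temp, hum = line.strip().split(";")
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), sensor_id, temp, hum])
            hum = float(hum)
            if hum > HUMIDITY_ON and not switch_on:
                switch_on = True
                send_switch_command(True)
            elif hum < HUMIDITY_OFF and switch_on:
                switch_on = False
                send_switch_command(False)

# Simulated packets: humidity rises above, then falls below, the thresholds.
handle_packets(["bathroom;21.5;68.2", "bathroom;21.6;52.9"])
```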
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656833.99/warc/CC-MAIN-20230609201549-20230609231549-00617.warc.gz
CC-MAIN-2023-23
720
6
http://www2.cs.uregina.ca/~hepting/HTTPerrors/403.html
code
- D. H. Hepting - HTTP Errors

Access via the link that was used to reach this site was forbidden. Please send me the details of the link that was followed, so that I may correct the problem: e-mail Daryl Hepting. Or go back to the referring page.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215077.71/warc/CC-MAIN-20180819110157-20180819130157-00070.warc.gz
CC-MAIN-2018-34
244
7
https://replit.com/talk/learn/How-to-LEARN-PYTHON-the-COOLJAMES-way-Lesson-1-Sequence/43249
code
How to LEARN PYTHON the COOLJAMES way (Lesson 1 - Sequence)

Welcome to another @CoolJames1610 tutorial :P :P :P

Now, I am going to teach you how to PYTHON the COOLJAMES way (which is cool). So, the first lesson will be about sequences. But first let's go over some key terms.

- Declaring variables. Unlike other PLs, in Python you do not need to add BOOL, STR, or INT before you declare a variable. Python automatically knows which data type you enter:

x = 0 # Integer
x = "Hey" # String
x = True # Boolean

- Arithmetic. Arithmetic is mainly the same across PLs:

x = 5 + 5
y = x + 10
z = x * y

- Keywords. These are a few of the keywords in Python. You cannot use these as variable names: import, from, as, and, not, in, print, etc... I can't remember them all xD

Okay, I think that is all for the basics, now onto sequences...

print("Hello World")
x = "James"
print("Hello " + x)

In this program, Python will output 'Hello World', then assign "James" to x, and then output 'Hello ' joined with the value attached to x. Python is quite simple xD

Now let's move onto input from the user. All you have to do to get input from the user is:

x = input()

This will prompt your user to enter something. Now this wouldn't be very helpful, as nothing is telling the user to input something. We can add text to the input:

x = input("Please enter your name: ")

As you can see, I have left a space after the colon. This is better to do as the program will look neater.

Please enter your name: James
or
Please enter your name James

is neater than

Please enter your name:James
or
Please enter your nameJames

See? :D Much neater :P I'm tired of people not adding a space or something lol

Okay now, I'll talk briefly about concatenation. This is combining different data types together. Say I asked the user for their age (as a number) and I wanted to say: "Oh nice, I am also " (user's age). Most people may just do:

x = int(input("Please enter your age: "))
print("Oh nice, I am also " + x)

Here we would get an error. In Python, everything you join together for output has to be the same data type. "Oh nice, I am also " is a string; x is an integer. To get round this problem, we can cast x to a string so that we can use it to say something. All we need to do is str(x). This changes x to a string and can be used:

x = int(input("Please enter your age: "))
print("Oh nice, I am also " + str(x))

Ahaha no error ;)

We can cast a string to an integer, but only if it is a whole number (e.g. not a decimal):

x = "10"
e = 34 * int(x)
print(e)

Here, x is a whole number so we can cast it as an integer. (Would output 340 in case you wanted it :P)

There are also other casts, like float().

I think that is it for today's tutorial. I know this isn't good and not in depth but I DON'T CARE xD This is how I kinda learnt Python and I've learnt a lot xD

Erm, please upvote so that amasd can comment again people can see and learn :)

Lesson 2 will be about Selection (if, elif, else)
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571996.63/warc/CC-MAIN-20220814052950-20220814082950-00128.warc.gz
CC-MAIN-2022-33
2,861
55
https://sharepoint.stackexchange.com/questions/277238/modern-ui-vs-classic-ui-vs-power-app-is-classic-ui-still-the-most-flexible-way
code
Most of the time when I add a new custom list inside SharePoint Online, its New & Edit forms need to be customized. The customization falls into these main categories: certain fields need to be disabled depending on other fields. For example, if Status = Closed, then the Title and Description should be disabled, and so on. In SharePoint Online, I thought that we would have other approaches to customize the lists' forms, mainly the Modern UI and PowerApps, but those have these main drawbacks.

PowerApps:
- Some SharePoint features are not currently supported; for example, you cannot add a picture inside a multi-line text field.
- If you have a multi-line text field with "Append changes to existing text" enabled, the old appended text will not be shown inside the edit form.

Modern UI: the forms cannot be customized on their own.

So most of the time I ended up using the classic UI + Script Editor + Remote Event Receiver to implement our business logic inside SharePoint Online lists. Any advice on this?
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510297.25/warc/CC-MAIN-20230927103312-20230927133312-00562.warc.gz
CC-MAIN-2023-40
1,006
9
https://www.drupal.org/node/1994394
code
redhen_relation_connections_page() calls redhen_relation_access() with op "update" and "delete", causing redhen_relation_access() to check the permission "update redhen contact connections" or "delete redhen contact connections", etc. Neither of these permissions exists. You can change "update" to "edit" for the "edit" link and it appears, but you still need "Edit relations" to make it possible to perform the action. This is different from the behaviour of "add", which does not require "Create Relations". One option is to correct the "update" permission, add "delete redhen contact connections" and "delete redhen org connections", and continue to require "Edit Relations" and "Delete Relations" as well. To be consistent, we would then need to check for "Create Relations" when adding. The alternative is to have custom delete and edit methods to keep us separate from the "Relations" permissions. I'm slightly in two minds. Happy to make a patch for the favoured method...
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122629.72/warc/CC-MAIN-20170423031202-00174-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
952
5
https://swiss.cochrane.org/news/cochrane-crowd-challenge-participation-highly-welcome
code
Cochrane Crowd is running a Challenge from 22-24 October 2019, even though the Cochrane Colloquium in Santiago de Chile has been canceled. The goal is to reach 20,000 classifications in 48 hours. Everyone is invited to participate. No matter which country you are in or how long you want to participate, every screened study counts! And there will also be a prize, so it's really rewarding to take part! It starts on Tuesday, 22 October 2019 at 6:00 pm in Switzerland (13:00 Chile time). To register for Cochrane Crowd and for further information, please visit the website.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145676.44/warc/CC-MAIN-20200222115524-20200222145524-00253.warc.gz
CC-MAIN-2020-10
564
1
https://raythompsonwebdev.co.uk/about/
code
I have a passion for web development dating back to 2012 and like tinkering with HTML, CSS, JavaScript and Ajax to create front-end user interfaces, responsive websites and website templates. I also enjoy developing programs on the back end using PHP, MySQL and Ruby, and doing WordPress theme development and maintenance. My interest in web design and development began after attending a part-time web design course in East London between November 2011 and October 2012, where I gained some experience of the web design and development process by completing projects for tasks in exams and collaborating with other students. I learnt a bit about the web industry from my tutors, some of which really appealed to me. Since then, whenever I have the spare time, I spend most of it practicing coding, building web applications, trying out new coding techniques, attempting to solve coding problems and helping others online whenever I can. I continue to keep up to date as much as I can while working full-time. I like to hear the latest news in web development. I listen to podcasts, watch videos online and offline, and read web development related books and blogs. I have also done quite a few online coding courses. I particularly like websites like freeCodeCamp and Codecademy, among others. I also attend local meetups, events and short courses like Digital Futures 2017, and do a bit of volunteering whenever I can, like when I volunteered at WordCamp 2018 in London. I have also helped friends and others with coding problems and issues they have had with websites and web applications, mostly offline. My goal is to become a web developer full-time, producing useful and practical web applications. I want to be able to contribute to improving user experience on the web, particularly for those who have difficulty accessing websites and web applications on the internet due to a weak connection or other accessibility issues. I am currently seeking further development opportunities within the web industry.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540527010.70/warc/CC-MAIN-20191210070602-20191210094602-00488.warc.gz
CC-MAIN-2019-51
1,996
9
https://users.rust-lang.org/t/can-rust-to-access-l-cache/6318
code
We are looking to experiment with accessing the Level 3 cache; can Rust make this possible?

Can you be more clear about what you are looking for? Do you mean the L3 cache, as in the Level 3 cache in x86 CPUs?

The CPU manages the cache automatically; as far as I know, the programmer has little to no control over what data is cached, except by carefully designed memory access patterns which optimize for cache locality. Optimization techniques for cache locality in C and C++ can be adapted to Rust relatively easily.

Are you asking for some command like "register" in C?
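To make the "carefully designed memory access patterns" point concrete, here is a small sketch (in Python with NumPy rather than Rust, purely for brevity; the effect is a property of the hardware, not the language). Summing the same number of float64 values from a contiguous buffer versus a stride-8 view touches roughly one cache line per eight elements in the first case and one per element in the second, so the strided version is typically several times slower; exact numbers depend on your CPU and NumPy build.

```python
import time
import numpy as np

N = 2_000_000
x = np.random.rand(N * 8)

contig = x[:N].copy()  # N contiguous float64 values
strided = x[::8]       # N values, but one per 64-byte cache line

def bench(arr, reps=20):
    t0 = time.perf_counter()
    for _ in range(reps):
        arr.sum()
    return time.perf_counter() - t0

print(f"contiguous: {bench(contig):.3f}s   strided: {bench(strided):.3f}s")
```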
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247487595.4/warc/CC-MAIN-20190218155520-20190218181520-00482.warc.gz
CC-MAIN-2019-09
571
3
http://www.a-w-apps.com/about.html
code
ABOUT A.W. APPS Hello, I'm Andrew Willeitner, and I'm an indie game developer. I started developing games in 2010 on the gaming website "Roblox". After my success on Roblox, I decided to switch over to mobile game development. I developed mobile games using the GameSalad engine, then in 2015 I moved over to using the Unity engine. I am currently making games on iOS and Android devices. I also participate in many game jams.
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105700.94/warc/CC-MAIN-20170819162833-20170819182833-00092.warc.gz
CC-MAIN-2017-34
426
2
https://ca.bebee.com/job/20210223-7da43b3729b90cc1abe053a36d1585fd
code
Responsibilities:
- Bring deep functional expertise to shape data structures and algorithms in a distinctive way to ensure large-scale business impact of the digital products being built and drive competitive advantage for the company as a whole
- Collaborate with the Data Head and developers to find opportunities to use company data to drive business solutions
- Mine and analyze data from company systems
- Assess the effectiveness and accuracy of new data sources and data gathering techniques
- Develop custom data models and algorithms to apply to data sets
- Use Machine Learning and Artificial Intelligence to increase and optimize customer experiences, revenue generation, and other business outcomes
- Partner with different functional teams to implement models and monitor outcomes
- Conduct data wrangling, munging, exploration, sampling, training data generation, feature engineering, model building, and performance evaluation
- Enable big data and batch/real-time analytical solutions that use emerging technologies
- Code, test, and document new or modified data systems to create robust and scalable applications for data analytics
- Ensure all automated processes preserve data by leading the alignment of data availability and integration processes

Required Skills:
- Experience in data science and in senior engineering and technology roles (5+ years) working with product development teams, delivering and building digital products
- Experience with simulation tools (AnyLogic experience is a bonus)
- Understands high-performance algorithms and Python statistical software
- Experience with batch and real-time data streams
- Experience in industry data science (e.g., machine learning, predictive maintenance) preferred
- Architects highly scalable distributed systems using different open source tools
- Experienced with agile or other rapid development methods
- Experienced in object-oriented design, coding and testing patterns, as well as engineering software platforms and large-scale data
- Has deep knowledge of data modeling and understanding of different data structures
- Master's in Information Technology, Computer Science, or a related quantitative discipline

InSync Systems Inc. is a privately-owned boutique Canadian Resourcing and Consulting Services Company that works closely with a range of corporate clients across multiple industries to bring them solutions that effectively address their business needs.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350942.3/warc/CC-MAIN-20210225095141-20210225125141-00537.warc.gz
CC-MAIN-2021-10
2,426
3
http://www.fkeilers.com/what-is-lean-startup/
code
«Lean Startup» is a method for developing businesses and products. It was developed by Eric Ries and is built on validated learning, scientific experimentation, and iteration in product launches to shorten development cycles, measure progress, and gain valuable feedback from customers. In this way companies, especially startups, can design their products or services to satisfy the demand of their customers, without requiring large amounts of initial funding or large expenses to launch a product.

Eric Ries at the University of Toronto speaking about Lean Startup
Eric Ries discusses «The Lean Startup»
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886706.29/warc/CC-MAIN-20200704201650-20200704231650-00222.warc.gz
CC-MAIN-2020-29
613
4
https://sourceforge.net/p/gabber/support-requests/2/
code
I'm having quite some trouble finding all the libraries Gabber requires, especially as RPMs. Would you please release a static RPM of the newest version? Thanks in advance,

Logged In: YES
I'd like to know how to consistently get a statically compiled Gabber. If it were easy for me, you'd have one right now; the entire reason there isn't one is that I haven't been able to make one.

Logged In: NO
Gabber fails to start at all for me.
gabber: relocation error: gabber: undefined symbol:
My installed code:
Any help would be appreciated.

Except that your problem has nothing to do with static RPMs. Go read the bug reports; chances are one of libsigc++, gtkmm, gnomemm was not built for your distro (which is Red Hat 7, according to the Gabber RPM you have).
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820927.48/warc/CC-MAIN-20171017052945-20171017072945-00085.warc.gz
CC-MAIN-2017-43
782
19
http://bretstateham.com/windows-8-for-software-developers-event/
code
This evening I’ll be at the Microsoft Store at Fashion Valley Mall here in San Diego for the San Diego Software Industries Council “Windows 8 for Software Developers” event. The event is free, but registration is required. Register Here I’ll be giving a quick presentation on Windows 8 development at the event. You can grab a copy of my slides here
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120206.98/warc/CC-MAIN-20170423031200-00188-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
357
3
http://www.complexnetworks.fr/tag/ip-level/
code
Many works have studied the Internet topology, but few have investigated how it evolves over time. This paper focuses on the Internet routing IP-level topology and proposes a first step towards realistic modeling of its dynamics. We study periodic measurements of routing trees from a single monitor to a fixed destination set and identify invariant properties of their dynamics. Based on those observations, we then propose a model for the underlying mechanisms of the topology dynamics. Our model remains simple, as it only incorporates load-balancing phenomena and routing changes. By extensive simulations, we show that, despite its simplicity, this model effectively captures the observed behaviors, thus providing key insights into the mechanisms governing Internet routing dynamics. Besides, by comparing simulations over different kinds of topology, we also provide insight into which structural properties play a key role in explaining the properties of the observed dynamics, which strengthens the relevance of our model.
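The abstract only names the model's two mechanisms (load balancing and routing changes), so the following toy simulation is a speculative sketch of that flavour, not the authors' model: each hop offers a few equal-cost next hops chosen at random per measurement (load balancing), and occasionally a hop's candidate set is rewired (routing change). The topology, rates, and router names are all invented.

```python
import random

def build_topology(n_hops=5, fanout=2):
    """Invented layered topology: at each hop there are `fanout`
    interchangeable next-hop routers (equal-cost paths)."""
    return [[f"r{h}_{i}" for i in range(fanout)] for h in range(n_hops)]

def observed_route(topology):
    """One traceroute-like measurement: per-hop load balancing picks
    uniformly among the equal-cost next hops."""
    return tuple(random.choice(level) for level in topology)

def simulate(rounds=10, p_routing_change=0.1):
    topo = build_topology()
    for t in range(rounds):
        if random.random() < p_routing_change:
            # Routing change: one hop's candidate set is replaced.
            h = random.randrange(len(topo))
            topo[h] = [f"r{h}_{i}t{t}" for i in range(len(topo[h]))]
        print(t, " -> ".join(observed_route(topo)))

simulate()
```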
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123635.74/warc/CC-MAIN-20170423031203-00625-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,739
3
https://community.sonarsource.com/t/generic-test-data-xml-file-results-not-showing-swift/28069
code
Welcome to the community, and well done for your first post. You attached most of the useful information to troubleshoot your problem upfront.

First, something unrelated to your problem: you can remove the sonar.swift.simulator property from your scan. It's not needed nor used by the SonarQube Swift analyzer (it's probably a leftover from a 3rd-party open source plugin).

About your problem now: the logs demonstrate that your property is set correctly and found by the scanner, and the file format seems OK too, but no coverage is extracted from it. See the log below:

17:21:30.153 INFO: Parsing /Desktop/sonarqube-generic-coverage.xml
17:21:32.389 INFO: Imported coverage data for 0 files
17:21:32.389 INFO: Coverage data ignored for 979 unknown files, including:
17:21:32.389 INFO: Sensor Generic Coverage Report (done) | time=2237ms

Why is that? A common reason is that the filenames or exact file paths do not match between the files analyzed by the scanner and the files in the coverage report. Indeed, in your case there are some visible discrepancies:
- All your scanned files have a relative path starting with Sources/.
- In your coverage file, no files have this path. The path seems to start one directory level below (e.g. Shared/...).

I found one example of a file whose path "almost" but not "exactly" matches between the two: AresApplication.swift
– In the scanner logs the path is:
– In the coverage report the path is:

To confirm that this is the root cause, you may first manually patch the coverage file to add the Sources prefix to one file (AresApplication.swift is good; it has some coverage between lines 403 and 410), and analyze again with the patched coverage file. If the coverage is now properly reported for AresApplication.swift, the root cause is found, and you can fix this on a large scale by changing the way you generate the file, to make sure the generated paths are correct. I think the problem lies in the directory from which you run the command to generate the coverage report.
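If regenerating the report from the right directory is awkward, one way to apply the same fix at scale is to rewrite the paths inside the generic coverage file itself. A minimal Python sketch (the Sources/ prefix comes from the discussion above; adjust it to your layout, and treat this as an illustration rather than a supported tool):

```python
import xml.etree.ElementTree as ET

def prefix_paths(report_in, report_out, prefix="Sources/"):
    """Prepend `prefix` to every file path in a SonarQube generic
    coverage report that does not already start with it."""
    tree = ET.parse(report_in)
    for file_node in tree.getroot().iter("file"):
        path = file_node.get("path")
        if path and not path.startswith(prefix):
            file_node.set("path", prefix + path)
    tree.write(report_out, encoding="utf-8", xml_declaration=True)

# Point these at your actual report files.
prefix_paths("sonarqube-generic-coverage.xml", "sonarqube-generic-coverage-fixed.xml")
```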
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400249545.55/warc/CC-MAIN-20200926231818-20200927021818-00329.warc.gz
CC-MAIN-2020-40
2,035
17
https://udictionaryblog.com/2017/09/06/vocabularywords-with-make-or-do/
code
Do and make are very similar but we use them differently.
- We often use do to speak about everyday jobs: do the shopping; do the dishes.
- We use make when we create or produce something: the factory makes furniture; make some tea; make dinner.

Here are some examples of when to use make or do:

| make coffee, tea | I always make coffee after breakfast. |
| make something (produce) | The factory makes furniture; Volvo makes cars. |
| make a mistake | I made a mistake. I'm sorry. |
| make a promise | You made me a promise. Please keep it! |
| make a decision | Managers have to make hard decisions sometimes. |
| make a telephone call | Excuse me, I have to make a telephone call. |
| make a profit | Microsoft made a big profit last year. |
| make a mess | The children made a mess in the kitchen. |
| make progress | The students are making good progress with their English. |
| do something | What are you doing? I'm not doing anything. |
| do an exam | I did five exams and passed all of them. |
| do homework (from school) | School kids have to do a lot of homework. |
| do housework | I always do the housework at weekends. |
| do the shopping | I hate doing the shopping in supermarkets. |
| do the dishes | Who's going to do the dishes after dinner? |
| do the ironing | Her husband never does the ironing. |
| do an exercise | I did all the exercises in my grammar book. |
| do business | Our company does a lot of business in Asia. |

All the best, my dear users - U-Dictionary

So, stay tuned and share the U-Dictionary app ( https://goo.gl/gwCZRH ) with your friends & family so that you can get more useful English Learning articles.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107891624.95/warc/CC-MAIN-20201026175019-20201026205019-00715.warc.gz
CC-MAIN-2020-45
1,594
24
https://www.jungheinrich-profishop.co.uk/service/
code
This section explains who we are. Learn more about Jungheinrich PROFISHOP, Jungheinrich UK Ltd, our company history and the AMEISE brand. This section tells you all you need to know about delivery and order status. This section has everything you need to know about our General Terms and Conditions and Data Protection. Watch our videos to learn more about PROFISHOP and our products:
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816954.20/warc/CC-MAIN-20240415080257-20240415110257-00085.warc.gz
CC-MAIN-2024-18
384
4
http://bifarinthefifth.com/tag/computer-science/
code
In my last notebook we looked at a classification problem, and we defined many classification metrics. In this notebook, we will go through some regression metrics. Recall that in regression, the response value is continuous (not categorical), so a different kind of prediction assessment will come into play. Now, say you have built a machine learning model; the question you ask is: 'how well does this thing work anyway?'. To answer this question, we will need to define the performance metrics. As you might have imagined, the metrics will depend on the kind of machine learning problem in view.

These are the methods involved in sampling during machine learning. In my last notebook-blog, I hinted at the idea of an analogy between a 12-year-old girl studying for an exam, and our machine trying to learn…

In our ML blog-syllabus, mathematical foundations of ML should be the next stop; however, I have decided to postpone this till later in the blog in order to write something more comprehensive. The reader should note that getting the maths 'out of the way' is very essential to deeply understand a lot of the ML algorithms out there.

Now that I am done with the computational foundations of ML in Python, I cannot in good conscience proceed to other topics in maths without touching on some basic statistics. Here is a brief note on statistics.

In this notebook, I demonstrated a few visualization techniques using my reading data from Goodreads.

Pandas is an open source Python library that is used for data handling and manipulation. It was developed to work with the NumPy library. NumPy is an inescapable package for scientific computing in Python. You can think of it as a foundation for numerous Python packages…

A web-based application in the form of a notebook that can be used for storing (and sharing) code, notes, mathematical equations, and visualizations.

About 2 months ago, I announced this new blog section. I will be starting with machine learning over the next several months, and here is a peep at the blog outline.
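Since the opening excerpt promises a tour of regression metrics, here is a minimal sketch of the usual suspects with scikit-learn; the toy numbers are invented.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])   # observed continuous responses
y_pred = np.array([2.5,  0.0, 2.0, 8.0])   # model predictions

mae = mean_absolute_error(y_true, y_pred)           # average absolute miss
rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # penalizes large misses more
r2 = r2_score(y_true, y_pred)                       # variance explained (1.0 is perfect)

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R^2={r2:.3f}")
```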
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347410284.51/warc/CC-MAIN-20200530165307-20200530195307-00054.warc.gz
CC-MAIN-2020-24
2,067
10
https://bsidesvarazdin.org/schedule-2019.html
code
Legitimate tools or weapons of mass compromise?

Windows desktops and servers contain a large number of legitimate tools which can also be used by attackers once they obtain initial access. This presentation describes those tools and their usage in real-world attacks. Centralised logging and telemetry provide a wealth of information for blue team members and their day-to-day operations. These sources usually contain enough data to detect when attackers were successful in compromising the defended network. But how do you recognise a successful attack when the tools the attackers are using are also legitimate system administration utilities? Most Windows administrators would agree that PowerShell is an essential system administration tool, but it has also frequently been seen as an attack avenue for attackers and red team activities. PowerShell is typically used to load code from remote servers and make attacks "fileless" using reflective DLL loading, steal user credentials, pivot within the compromised network, maintain persistence and execute other offensive tasks. Right from the initial compromise, we can expect attackers to use standard Windows tools for enumerating network resources, adding new users, pivoting to other servers, dumping databases, exfiltrating data, etc. This session will be a walk through attackers' techniques using tools which can also be considered legitimate and are usually installed by default on Windows. We will talk about basic and advanced functionality of this legitimate attack arsenal and show its usage as observed during recent attacks.

# About the speaker

Vanja works for Cisco Talos. He is a security researcher with more than 20 years of experience in malware research and detection development. He enjoys tinkering with automated analysis systems, reversing binaries and other types of malware. He thinks time spent scraping telemetry data for signs of new attacks is well worth the effort. In his free time, he is hopelessly trying to improve his acoustic guitar skills and sometimes plays basketball, which at his age is not a recommended activity.
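On the blue-team side, the talk's premise (legitimate tools show up in logs, so detection needs pattern context) can be illustrated with a toy triage script. The indicator list below is a tiny, well-known subset of PowerShell abuse markers, and the log lines are invented; real detection belongs in a SIEM with proper parsing and context.

```python
# Toy triage of exported process command lines for common signs of
# offensive PowerShell usage. Purely illustrative.
SUSPICIOUS_MARKERS = [
    "-encodedcommand", "-enc ",       # base64-packed one-liners
    "downloadstring", "iex ",         # fetch-and-execute patterns
    "-windowstyle hidden", "bypass",  # evasion-flavoured flags
]

sample_log = [
    "powershell.exe -NoProfile Get-ChildItem C:\\Reports",
    "powershell.exe -WindowStyle Hidden -enc SQBFAFgAIAAoAE4A...",
    "net.exe user backupadmin P@ssw0rd /add",
]

for line in sample_log:
    hits = [m for m in SUSPICIOUS_MARKERS if m in line.lower()]
    if hits:
        print(f"FLAG {hits}: {line}")
```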
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00663.warc.gz
CC-MAIN-2024-18
2,108
10
https://www.linuxfoundation.org/webinars/the-amazing-open-tech-of-the-call-for-code-global-challenge
code
Webinar On Demand
The Amazing Open Tech of the Call for Code Global Challenge
Recorded May 26, 2021
View a Complimentary Webinar Sponsored by IBM

A year of remote work has left many feeling detached from their organization, their communities and the world. The good news is there is a remedy now, and it's free, available to you and open source: a global initiative with three successful years of producing amazing tech solutions to some of society's toughest problems.

- Call for Code is a year-round, always-on tech-for-good initiative which just opened its 2021 challenge for new submissions. This year's global challenge is focusing on climate change, where developers, data scientists, innovators, problem solvers, and technologists come together to use AI, IoT and cloud to help the world.
- Participants enhance their current skills and connect through a vast and diverse global ecosystem including the Linux Foundation, IBM clients, build partners, governments and NGOs.

Beyond a hackathon, Call for Code solutions are open source and available to the world, becoming practical applications, like Safe Queue, a solution that stemmed from last year's COVID-19 focus. The winning solution is implemented with the support of IBM, a Founding Partner, and the Linux Foundation. In this presentation, Daniel Krook will share some of the exciting tech that has come from the challenge, as well as some great photos of the field deployments. You'll leave the session knowing how you can participate, support the Call for Code or access some of the great tech that has been developed from it.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510942.97/warc/CC-MAIN-20231002001302-20231002031302-00831.warc.gz
CC-MAIN-2023-40
1,611
8
https://tapchisao.online/many-believed-the-sudden-thick-downpour-that-covered-the-city-was-a-sign-of-a-terrible-tragedy-video-gold/
code
People in Liaoning province, China, witnessed a strange phenomenon when a thunderstorm occurred, bringing many creatures like worms to the ground. It is known that people walking on the street carried umbrellas to avoid this "worm rain". The cause of the phenomenon, according to experts cited by the science publication Mother Nature Network, is that the creatures were swept up by the wind and fell onto the street. Usually, this happens after a tornado, which can carry many small creatures aloft. The world has witnessed other "strange" rains occurring in nature, bringing down fish, spiders or lizards. In December 2022, a sudden rain of lizards occurred in Florida, USA. A representative of the Florida Fish and Wildlife Commission (FWC) spoke out about this phenomenon: the snowstorm sweeping the United States at the time caused the temperature to drop sharply in Florida, causing many lizards to "freeze" and fall from the trees to the ground. Similarly, in January 2018, hundreds of dead bats fell on a residential area in Campbelltown, Australia, after the area was subjected to heat of up to 44.2 degrees Celsius. Many explanations say that the bats died from heat shock. However, no definite conclusions have been reached so far. According to the documents, bats are creatures that can withstand temperatures of about 30 degrees Celsius. If exposed to higher temperatures, their brains heat up, causing them to lose control while flying in the air, and they may fall.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510454.60/warc/CC-MAIN-20230928194838-20230928224838-00349.warc.gz
CC-MAIN-2023-40
1,685
8
https://forum.audacityteam.org/t/audio-tape-to-mp3/45191
code
Hi, I'm using WinXP to transfer audio tapes to MP3 files on the computer, but am getting overlapped or double tracking on the files. Can anyone tell me what I may be doing wrong or how I may correct this problem. Thanks.

Can you describe the symptoms in a little more detail? Are you hearing an echo? Are you hearing another song in the background? Are you hearing another song in reverse in the background? And just to be clear… you've only recorded "one thing", like one song or one side of the tape, and when you play it back in Audacity you get the problem? Have you listened to the headphone/analog output from the tape player? How is the tape player connected to the computer?

Are you recording one side or one song, then pressing Stop? If so, the next song you record will be on a new track, and they will play together as one. If that is the problem, use the blue Pause button when you change the side or the song, then use Pause again to resume recording. For the songs you have now, Zoom Out (CTRL + 3), change to the Time Shift Tool (F5) and drag the songs so they come one after the other. Press F1 to go back to the Selection Tool when you are done.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652149.61/warc/CC-MAIN-20230605153700-20230605183700-00653.warc.gz
CC-MAIN-2023-23
1,165
7