url
stringlengths 13
4.35k
| tag
stringclasses 1
value | text
stringlengths 109
628k
| file_path
stringlengths 109
155
| dump
stringclasses 96
values | file_size_in_byte
int64 112
630k
| line_count
int64 1
3.76k
|
|---|---|---|---|---|---|---|
https://www.dell.com/community/NetWorker/NMC-New-Setup/m-p/7126713
|
code
|
Greetings for the day!!
I am doing fresh installation from NW Server, Storage Node and I got stuck in NMC. Finished installation of NMC in centos and configured with the NMC_Config script. But not able to connect my console when i am trying to access NMC from local host with https://<nmcserverip>:9000
I able to ping nmc server from my local system, So i believe there is no issue with connectivity.
I am thinking it's due to port issue. Any suggestion?
Note: gst service started without any issue
Solved! Go to Solution.
If you need iptables up and running, you need to add some rules to get ports opened in networker server. In recent versions of Networker Security Guide are not given this rules (I don't know why, honestly). But you can see "Networker 7.5 Firewall and TCP configurations" to consider rules given there.
We don't use iptables, We delegate security to external Firewall. So, we disable iptables. I am not expert in CentOS, but for RHEL you can disable with "service iptables stop" and then use "chkconfig iptables off" to prevent iptables go up when reboot server.
Hope it helps.
Plz check the version of the Java runtime environment (JRE) in your PC, and make sure the firewall both in NMC server and PC can 't stop your connectivity.
Not sure if you mentioned https by mistake or trying to access the NMC with HTTPS.
Access the NMC with HTTP not HTTPS.
if you are accessing the NMC from same server : http://localhost:9000
also you can disable the local firewall in centos by using iptables off command.
Thanks Gautam. You're right. I am not able to do telnet my NMC server via 9000 port. But I didn't think as it's due to local (PC) firewall blocking. Because I able to do other NMC servers (hosted on windows- same new setup) from this PC.
Any other suggestion which may help to get closer..
Thanks ledugarte. Even I heard the same from other friends. Do you have any idea how to unblock this for CentOS.
Anyhow I am googling now, in case if u get it before me it's really appreciatable..
I wanted to say that the firewall on your NetWorker Server is blocking the traffic.
you can use the following commands to disable firewall on CENTOS.
systemctl stop firewalld
systemctl disable firewalld
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315174.57/warc/CC-MAIN-20190820003509-20190820025509-00550.warc.gz
|
CC-MAIN-2019-35
| 2,217
| 22
|
https://iris.imtlucca.it/handle/20.500.11771/14879
|
code
|
We review some results regarding specification, programming and verification of different classes of distributed systems which stemmed from the research of the Concurrency and Mobility Group at University of Firenze. More specifically, we examine the distinguishing features of network-aware programming, service-oriented computing, autonomic computing, and collective adaptive systems programming. We then present an overview of four different languages, namely KLAIM, COWS, SCEL and AbC. For each language, we discuss design choices, present syntax and semantics, show how the different formalisms can be used to model and program a travel booking scenario, and describe programming environments and verification techniques.
File in questo prodotto:
Non ci sono file associati a questo prodotto.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572033.91/warc/CC-MAIN-20220814113403-20220814143403-00366.warc.gz
|
CC-MAIN-2022-33
| 797
| 3
|
https://groups.google.com/g/lastools/c/TGKxxT7PleI
|
code
|
Hi everyone! I can't run LAStools because of a UnicodeDecode error, which has already been discussed in the group.
I have looked at the solutions proposed on the forum and tried them all, but nothing helps, the error always comes back.
Finally, I made sure to extract the LAStools folder in the shortest possible path:
Could you please help me! Thanks in advance! And sorry for this newbie question
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500250.51/warc/CC-MAIN-20230205063441-20230205093441-00574.warc.gz
|
CC-MAIN-2023-06
| 398
| 4
|
https://help.figma.com/hc/en-us/articles/360040451453
|
code
|
Before you Start
Who can use this feature
Users on any plan
Users with edit access to a Figma design file can resize objects with the scale tool
Use the scale tool to proportionally resize layers and objects. This tool preserves aspect ratios and ignores constraints of any nested layers in order to scale them proportionally. Any blurs or strokes will scale as well.
To resize an object using the scale tool:
- Activate the scale tool by pressing K, or by clicking in the toolbar and selecting Scale.
- Select an object.
- Scale the object by doing one of the following:
- Click and drag: Hover over the object’s bounding box to make the cursor appear. Then, click-and-drag to resize.
- Scale multiplier: Use the scale multiplier, in the Scale panel of the right sidebar. Open the dropdown to select a multiplier, or type a multiplier in the text field and press Enter / Return to apply.
- Dimensions fields: Use either the width or height fields, in the Scale panel of the right sidebar. Type a number in either field and press Enter / Return to apply. The other dimension field will automatically update.
Set scale direction
The anchor box in the scale panel allows you to set which direction an object scales when using the scale multiplier or dimension fields.
You use the anchor box to set the object’s anchor point, which tells Figma which side of the object to stay put. Once you scale your object, it scales in the opposite direction of the anchor point.
For example, if the anchor point is set to center, the object’s center stays put and resizes in all directions. If you set the anchor point to the top-left, an object will resize down and to the right.
To change an object’s scale direction:
- Activate the scale tool K.
- Select the object.
- Click on one of the anchor points in the anchor box.
- Use the scale multiplier or dimension fields to scale the object.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102637.84/warc/CC-MAIN-20231210190744-20231210220744-00275.warc.gz
|
CC-MAIN-2023-50
| 1,885
| 21
|
https://talk.manageiq.org/t/missing-vmware-vcenter-events/2689
|
code
|
During provisioning, I’m attempting to wait for the
CustomizationSucceeded event prior to doing post clone tasks, however this event seemingly never appears. Calling
$evm.vmdb(:ems_event).where(vm_name: $evm.root['miq_provision'].vm.name) returns
, even after the VM has successfully deployed.
It is as though the events are not being captured at all, yet they must be to trigger a provider refresh after the VM is cloned, which is triggered and completes successfully.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703538226.66/warc/CC-MAIN-20210123160717-20210123190717-00412.warc.gz
|
CC-MAIN-2021-04
| 471
| 5
|
https://brightsec.com/blog/what-is-a-security-champion/
|
code
|
While a security culture for a successful DevOps and AppSec programme is important, to succeed, security needs to be top of mind for everyone across your pipeline.
Your developers, QA and security teams must have a close working partnership to break down silos and improve security knowledge.
One effective way to achieve this is to create security champions to act as the voice of security across your teams.
In this article:
- What are the benefits of a security champion program?
- Responsibilities of a security champion
- Do you already have a security champion in the making?
- Get Your Security Champion Programme Started today!
What is a security champion?
With the ratio of developers to security professionals being ~50:1, your security team is spread thin – they cannot make up for the lack of security experience of your developers, nor provide the full security coverage developers need.
A security champion can help bridge this gap, by evangelizing, managing and enforcing the security posture with your development team(s) acting as an extended member of the security team.
What are the benefits of a security champion program?
A security champion can help an organization compensate for a lack in security skills among existing teams. This can be achieved by providing a member of the development team with the knowledge and authority to assist with security tasks. The security champion can become a force multiplier who can address questions, ensure security awareness, and help enforce security best practices across the development organization.
Because a security champion understands the terminology used by developers working on software projects, they can relay security concerns in a manner that the development team will understand and be able to implement. Also, by performing code reviews, they can improve code quality early in the development lifecycle, reducing security efforts later on.
Responsibilities of a security champion
Being in the Know – knowledge is key and your security champion will benefit from ongoing training to keep up-to-date with the latest practices, methodologies and tooling to share this knowledge.
Raising Awareness – disseminating security best practices, raising and maintaining continual security awareness around issues / threats with the development organization and answering security related questions
Being Part of Security – performing scans for security issues and being the go between to escalate issues for review by the security team, helping with QA and testing. This will also enable them to be involved in risk and threat assessments, as well as architectural and tooling reviews to identify opportunities to remediate security issues early.
Getting and Maintaining Buy-In – Intrinsic to the project and speaking the developers’ language, your security champion can get their colleagues’ buy-in by communicating security issues in a way they understand, to produce secure products early in the SDLC. This increases the effectiveness and efficiency of your AppSec program while strengthening relationships across multifunctional teams, while minimizing the security testing bottlenecks further downstream, so your security team can focus on other critical tasks.
Collaboration – Connecting and partnering with other security champions and players, attending weekly meetings to share ideas and tips whilst assisting in making security decisions
Review and escalation – Evaluating code for security issues and taking responsibility for raising issues that require the involvement of the security team.
Inspiration – Creating team workshops, sharing best practices, or simply relaying news from the security field. Champions can get teams involved with security by starting challenges, hackathons, and competitions. These and other initiatives can create interest, share knowledge, and also have practical value by encouraging teams to identify and fix vulnerabilities.
Do you already have a security champion in the making?
It is likely that the perfect candidate for a security champion is already part of your team. They are a colleague who is involved with and familiar with your product(s) while showing an interest in security issues. They could be a developer, QA, architect, or DevOps colleague.
They don’t need to be senior, but management needs to see the value in having a security champion to provide them the right support. Extra work will be required so having a willing ‘volunteer’ with a keen interest in the role is important to ensure they are effective and stay engaged.
Get Your Security Champion Programme Started today!
Here are some key aspects to consider to help build your security champion programme in your organisation. See the OWASP Playbook for a complete framework that can help you develop security champions.
This is the most critical aspect, as without it, you are likely to fail. Management, along with security and engineering managers will need to invest time, money and resources to ensure security champions are effective, but the benefits will soon outweigh the investment
Nominate your security champions
Ideally you should nominate, rather than appoint, a security champion. This will ensure that they are attentive and keen to give time to the position. Because the aim is to nominate champions in a voluntary way, you should articulate the advantages that come with being a champion. People are not likely to want to participate and take on extra work if they don’t get something in return.
If management approves, you may give champions the opportunity to attend security conferences. There is also the advantage of self-development – adopting the role of a security champion can help advance the career of an individual and increase their value within the organization.
Establish communication channels
Once you have nominated the champions, next you will need to establish communication channels they can use. These channels should make use of the technologies your organization already uses, such as Skype, Slack, or Stride channels. You may even use a traditional email mailing list – whatever is most likely to attract the attention and engagement of teams.
Build a sound knowledge base
Champions should be responsible for creating an internal base of knowledge, which will be the main focal point for security-related information. A knowledge base may provide access to the organization’s security approach, policies and procedures, information about vulnerabilities and risks relevant to the organization, and best practices relating to secure coding.
Define and track success
Security needs to be a fundamental KPI and the efficacy of the Security Champion, and the efficiencies they bring to the security team and DevOps pipeline, all need to be tracked to evaluate the ROI of the program
Training and education
A security champion can’t be expected to know everything…at least not initially. Build on their willingness to be part of the solution, by leveraging your internal security experts to define issues they want the security champion to manage. Provide the knowledge they will need to start reviewing products for issues early and pass on best practices to the development team, freeing up your security team
The right tooling
Consolidating your tooling, so your developers, security champion, QA and security team are able to use, understand the output of and effectively collaborate to remediate issues early is important. You need security tools that are developer friendly and dead accurate while providing comprehensive security compliance on every build to enable you to shift security testing left, coordinated by your security champion.
Bright is an automated security testing and vulnerability scanning tool that can promote security awareness among developers:
- Built for Developers – empowers developers to detect and fix vulnerabilities on every build. It can initiative a scan based on crawling, HAR files generated per build/commit, OpenAPI (Swagger) files or Postman Collections for testing APIs.
- Smart scanning – uses sophisticated algorithms to carry out the right tests against the target, removing complexity for developers, and running scans fast to ensure they do not hurt developer productivity.
- Supports modern architecture – microservices, single page applications, SOAP, REST, and GraphQL APIs.
- No false positives – developers don’t have the time and expertise to weed out false positives from the results of security tools. Bright performs automated validation of every vulnerability detected, ensuring that every alert represents a real security threat.
- Integrates with CI/CD – provides a convenient CLI for developers, and integrates with tools like CircleCI, Jenkins, Jira, GitLab, Github, and Azure DevOps.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816893.9/warc/CC-MAIN-20240414192536-20240414222536-00507.warc.gz
|
CC-MAIN-2024-18
| 8,828
| 47
|
http://www.ps3hax.net/showpost.php?p=463841&postcount=428
|
code
|
Originally Posted by FoxhoundTSX
im with you brother, is amazing to see the amount of ignorance, some users would post a process or an instruction and right below it someone would put "can I install this on 4.30 OFW??? or "Can I downgrade from 4.21CFW?" *facepalm*
I tell you what.... if there was a real zombie apocalypse we can tell who is going to survive or die within the first hours by just looking at this thread... lol
What makes you the superior being in a zombie apocalypse ? Your skills not to answer questions?
Either you register with a forum to ask questions and help other dudes with their questions or you just gtfo. :>
Who really cares if a question gets asked 100 times...you either answer or you just don't read the question at all.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698693943/warc/CC-MAIN-20130516100453-00054-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 751
| 6
|
https://www.hyaking.com/configure-hp-integrated-lights-ilo-step-step/
|
code
|
Configure HP Integrated Lights Out (ILO) Step by Step
Easily one of the best features of HP servers is their Integrated Lights Out (ILO) remote management interface. Having the ability to remotely access HP servers from POST to OS is an invaluable tool. Standard ILO features include remote shutdown and startup, virtual media, text mode console redirect and access to hardware logs, status and diagnostic tools. Full graphical remote console redirection is available with the advanced license.
This article will outline step by step how to configure and access ILO on a fresh out the box Proliant ML350 G5 server.
First, connect the iLO designated network port to your switch or management network.
Most brand new HP servers come with an information tag attached. Printed on the tag is the server serial number and Integrated Lights Out access information including factory set username and password.
The easiest way to access the ILO configuration utility is during the POST by pressing F8 when prompted.
The menu is straightforward and self explanatory. Use the arrow keys to navigate. Select Enter while the Set Defaults option is highlighted to revert back to factory settings.
First, access the Network menu, disable DHCP and change the DNS name.
Then configure your static IP settings.
Next, set the Administrator password or create new user.
Note that the username and password are both case sensitive. Select Exit to save and reset ILO with the new settings. Test access to the iLO web interface.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00315.warc.gz
|
CC-MAIN-2022-21
| 1,505
| 11
|
https://forums.radioreference.com/threads/control-channel-only-scanning.226944/
|
code
|
- Aug 4, 2008
- NW Missouri
Does anyone know if you just put the control channel in if it will help you with P25 decoding. I have put all voice channels and control channels in, but get kinda chopy audio, I tried again with just the control channel it appeared to help, but I'm not sure if that's all true. Just gathering opinions...
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950383.8/warc/CC-MAIN-20230402043600-20230402073600-00112.warc.gz
|
CC-MAIN-2023-14
| 333
| 3
|
https://h30434.www3.hp.com/t5/Printers-Archive-Read-Only/having-trouble-connecting-my-hp-photosmart-premium-c310-to/m-p/984345
|
code
|
I have a facebook app on my printer ( Hp Photosmart Premium C310 ) says to go to Facebook.com/Device and enter a passcode and I only have 1 hour to do so or it becomes invalid, well so much for the hour cause i am still trying to find devices on facebook, It says it will store my authorization credentials ans use them to access and manage my facebook account. Its for printing photos,thats all i was going to print from there, but I can't find any way to do it. Can you please help. Thank You so much .....LOST and CONFUSED!!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00248.warc.gz
|
CC-MAIN-2021-21
| 527
| 1
|
http://www.france-in-photos.com/corsica/south-western-corsica/filitosa/maquis-cloud-near-cauria-1.jpg.php
|
code
|
Maquis and clouds
Many areas in central Corsica are covered by woods, but closer to the coast, the land is dryer and there are places like this one where only shrubs and thistle grow well. A paradise for crickets, locusts and cicadas.
Maquis and clouds by is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
It can be used for non-commercial purposes with the following text and link:
Photo from: <a href="http://www.france-in-photos.com/">France in Photos</a>
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00570.warc.gz
|
CC-MAIN-2021-25
| 509
| 5
|
https://writemyclassessay.com/recovering-jpeg-files/
|
code
|
I accidentally deleted all of my photos (JPEG format) from my digital cameraís CompactFlash (CF) card. Luckily, in the computer world, ìdeletedî tends not to mean deleted so much as ìforgottenî. My computer tells me the flash card is blank, but Iím sure that the image data still exists on the card, just the file names and locations have been deleted. For this assignment, write a program Recover.java to recover the photos from my flash card.Recover.java will need to read over a copy of my CF card, looking for JPEG signatures. The first four bytes of most JPEGs are either:0xff 0xd8 0xff 0xe0Or0xff 0xd8 0xff 0xe1Odds are if you find one of these patterns of bytes on a disk known to store photos, they mark the start of a JPEG. Each time you find a signature, open a new file for writing and start filling that file with bytes from my CF card, closing the file once you encounter another signature. File names should be 1.jpg, 2.jpg, …Iíve made a copy of the CF card that you can open directly by your program, i.e. do not copy to your local directory:/home/linux/ieng6/cs11wb/public/HW5/card.rawTo ensure your program works correctly, try opening each of the generated *.jpg(s). If you open them and donít see anything, something went wrong. I wonít tell you how many images should be recovered, but it is more than 5.Additional Information:Digital cameras tend to store photographs contiguously on CF cards. So, the start of a JPEG usually marks the end of another. Digital cameras generally initialize CF cards with a FAT file system whose ìblock sizeî is 512 bytes, meaning they only write to those cards in blocks of 512 bytes. JPEGís can span contiguous blocks. Otherwise, no JPEG could be larger than 512 B. Thus a photo thatís 1 MB (i.e. 1,048,576 B) takes up 1048576/512 = 2048 blocks on a CF card.The last byte of a JPEG might not fall at the very end of a block. A photo that uses a few bytes less than 1MB, e.g. 1,048,574 B, will still use 2048 blocks. Because the CF card was brand new when I started taking pictures, it was most likely ìzeroedî (i.e. filled with 0s) by the manufacturer, in which case any slack space will be filled with 0s. Itís ok if those trailing 0s end up in the JPEGs you recover.Rather than reading the cardís bytes one at a time, you can read 512 B at once into a buffer for efficiencyís sake. Thanks to FAT, the JPEGís signatures will be ìblock-alignedî. That is, you need only look for those signatures in a blockís first four bytes.
Is this question part of your Assignment?
We can help
Our aim is to help you get A+ grades on your Coursework.
We handle assignments in a multiplicity of subject areas including Admission Essays, General Essays, Case Studies, Coursework, Dissertations, Editing, Research Papers, and Research proposalsHeader Button Label: Get Started NowGet Started Header Button Label: View writing samplesView writing samples
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00266.warc.gz
|
CC-MAIN-2021-21
| 2,914
| 5
|
https://coderanch.com/t/615699/careers/EE-JEE-remote-part-time
|
code
|
This is Sharif. I have 3 years experience working with enterprise application development in J2EE/JEE6. I am currently looking for part-time freelance J2EE/JEE6 jobs which I can do remotely.
I am ready to commit full-time during weekends and 2 hours every weekdays.
My skill sets are as follows:
Back-end - J2EE, JEE6, Servlets, Spring, Hibernate, Seam, JPA, EJB, CDI, Log4j
Front-end - JSP, JSF, Primefaces, HTML5, CSS3, Twitter Bootstrap, Jquery, Ajax, ExtJs
Database - MySql
Build Tools - Maven, Gradle
Servers - Tomcat, Glassfish, Weblogic
IDE - Intellij Idea
VCS - Git
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038092961.47/warc/CC-MAIN-20210416221552-20210417011552-00435.warc.gz
|
CC-MAIN-2021-17
| 573
| 10
|
http://blog.ezyang.com/2012/05/an-interactive-tutorial-of-the-sequent-calculus/
|
code
|
An Interactive Tutorial of the Sequent Calculus
You can view it here: An Interactive Tutorial of the Sequent Calculus. This is the "system in three languages" that I was referring to in this blog post. You can also use the system in a more open ended fashion from this page. Here's the blurb:
This interactive tutorial will teach you how to use the sequent calculus, a simple set of rules with which you can use to show the truth of statements in first order logic. It is geared towards anyone with some background in writing software for computers, with knowledge of basic boolean logic.
Developing this system has been quite a fascinating foray into user interface and game design. While similar systems have been built in the past, my goal was a little different: I wanted something simple enough and accessible enough that anyone with a vague interest in the topic could pick it up, work through it in an hour, and learn something interesting about formal logic. I don't think this demo will be particularly successful among the math-phobic, but perhaps it will be more successful with people who have an intrinsic interest in this sort of thing. I must have incorporated dozens of comments from my many friends at MIT who put up with my repeated badgering about the system. The first version looked very different. I give my superlative thanks to my beta testers.
There is a lot of hubbub about the next generation of online teaching systems (edX), and this demo (because, really, that's what it is) is intended to explore how to blur the line between textbooks and video games. It doesn't really go far enough: it is still too much like a textbook, there is not enough creative latitude in the early exercises. I don't feel like I have gotten the right feel that a video game which progressively layers concepts (e.g. Portal). On the other hand, I do feel I have done a good job of making the text skimmable, and there are a lot of little touches which I think do enhance the experience. I am embarrassed to admit that there are some features which are not included because they were technically too annoying to implement.
If there is one important design principle behind this demo, it is that there is a difference between giving a person a map and letting a person wander around in a city for a few hours. But, fair reader, you probably don't have a few hours, and I am probably demanding too much of your attention. Nevertheless, forgive my impoliteness and please, take it out for a spin.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.42/warc/CC-MAIN-20231203161435-20231203191435-00768.warc.gz
|
CC-MAIN-2023-50
| 2,499
| 6
|
http://serverfault.com/questions/257485/how-to-change-the-hostname-of-a-vm-using-vmware/257493
|
code
|
When I create VMs, my automatic hostname is localhost.localdomain. This is creating some networking issues from my VM to another Windows computer that I have (cannot ping to my VM). How can I change my hostname of my VM? Do I need to also change it inside my VM as well as in the vSphere Client?
Update: I have changed my hostname of my RHEL VM to say "MyVM" and verified this in /etc/hosts and /etc/sysconfig/network. However, I am still unable to ping to MyVM from another Windows computer on my network. Does this have anything to do with the dnsdomainname? I get dnsdomainname: Unknown host. On my vSphere Client, it still says the Host is localhost.localdomain, but from your responses below, it should not matter about what the vSphere Client says..
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824230.71/warc/CC-MAIN-20160723071024-00176-ip-10-185-27-174.ec2.internal.warc.gz
|
CC-MAIN-2016-30
| 755
| 2
|
https://docs.generato.com/
|
code
|
This is the documentation for the newest release of Generato. Please keep in mind, that this documentation is still work in progress. For feedback feel free to get in touch with us here.
The best way to discover Generato is to try it out. Whether you were already using other software generator tools or not, get started with Generato for free with the following guide:
Check out our Frequently Asked Questions here:
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575844.94/warc/CC-MAIN-20190923002147-20190923024147-00506.warc.gz
|
CC-MAIN-2019-39
| 416
| 3
|
http://jessibird.net/2006/my-first-interesting-online-acquaintance
|
code
|
I just met someone through last.fm. I liked her crazy wild gypsy music and I sent her a message about it. Now we’ve been getting acquainted. Her parents had to leave Chile during the dictatorship. She’s a big traveler, a musician, and helps her aunt teach school in Chile sometimes though she lives stateside otherwise.
It’s cool to have someone to appreciate Gabriela Mistral with!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621273.31/warc/CC-MAIN-20210615114909-20210615144909-00587.warc.gz
|
CC-MAIN-2021-25
| 388
| 2
|
https://www.hackerearth.com/blog/developers/top-skills-a-full-stack-developer-should-have/
|
code
|
Defining, describing, and drawing you a picture…
I’m going to use the most popular example to define a full-stack developer. If there’s one person who wore many hats in his lifetime, it’s Leonardo Da Vinci. He was a painter, scientist, mathematician, cartographer, geologist, astronomer, historian, musician, and sculptor. People believe that diverse experiences fed into his creative genius, making him a nonpareil innovator.
If this extraordinary Renaissance man was a programmer today, he would be what we call a “full-stack” developer. Murky picture becoming a little clearer, I hope.
Until about a few years ago, a well-planned and perfectly functional website required just two kinds of people to be up and running: a web designer and a web developer. Cut to the present day.
Developers now identify with over 24 such specific job titles, including front-end web developer, back-end web developer, mobile developer, and desktop developer. Understandably, nomenclature is becoming increasingly complex to work with. It is often restrictive (and unfair) to have one job title to describe your distinct skill sets. For instance, consider the findings of a 2015 survey by Stack Overflow: for the third year in a row, a majority of respondents (30%) identified themselves as “full stack developers” when asked to categorize their occupation.
So who are full stack developers? No, they aren’t mythical creatures who are magically endowed with expertise across the software development terrain.
Full stack developers are ordinary mortals like the rest of us. Michael Wales of Udacity describes a full stack developer as one who can work cross-functionally on the full “stack” of technology, i.e. both the front end and back end. In layman’s terms, they are jacks of all trades and masters of one (or many, as opposed to none).
Gone are the days when developers got by knowing only one programming language. Today, designers and developers share a greater area of the product development Venn diagram. They are more equipped to work with myriad technologies starting with the back end to the front end.
It takes them years of rich practical experience to earn their stripes as good full stack developers, which is why many consider them a rare breed of talent and commitment. Despite the concept having its fair share of naysayers, these wizards are still sought after by smaller companies and startups in the fledgling phase where their diversity and versatility are highly appreciated.
Looks like your cup of tea?
If not, here’s an interesting video where Microsoft’s Scott Hanselman could well change your mind.
Taking a quick look at the basics…
For full stack development, you need to understand
- Hosting systems (the computer; the OS; and supporting services like DNS, SSH, email, and Apache)
- Application stack (web server like Apache or IIS; relational database like Oracle, MySQL, and PostgreSQL; and dynamic server-side web languages like Python, PHP, NodeJS, and Ruby)
That’s a whole bunch of “stuff.” Reeling? I’m not going to be dwelling on those terms, you guys.
This isn’t a tutorial, you know! However, I’m going to be…
Talking about some key skills full stack developers need to have…
Over the past few years, the full stack has become “fuller.” In simpler times, a stack was rather straightforward and consisted of the LAMP (Linux, Apache, MySQL, and PHP) or MEAN (MongoDB, ExpressJS, AngularJS, and NodeJS). But with the advent of tooling, cloud services, design, data, and networking, full stack developers now have to deal with a whole new ball game.
Every full stack developer is different and has his/her unique combination of skills suitable for specific startups. The first step is to decide where one’s core competencies lie (back-end vs. front-end) and where it is enough to know just the bare necessities.
What is a front-end developer responsible for? The architecture of immersive user experiences and user-facing code…
A full stack developer whose skill lies in the front-end has to write consistent and maintainable code that translates into a hassle-free user experience devoid of eyesores and unnecessary clicks. On top of scripting capabilities, a full stack developer who can also play around with typography, color and layout, is a coveted resource.
A web designer and a front-end developer are essentially different, with the later not required to actually design how websites look. But they can leverage their creativity. Converting website design into front-end code deals with user experience (UX) and user interface (UI).
- The expertise of an UI designer will include mockups, graphics, and layouts; design principles will focus on visual design, and visual strength will be colors and typography.
- The expertise of a UX designer will include wireframes, prototypes, and research; design principles will focus on human-centered design, and visual strength will be task flows and scenarios.
You can take a look a great diagram in this post from Ben Melbourne, a Brisbane-based digital strategist, to understand the roles better.
You can also see companies asking for Ajax skills. If you have very little idea about libraries like LESS, SASS, and Media Queries, then it’s time you did something about it. How about AngularJS, Bootstrap, Backbone, Foundation, and EmberJS—frameworks that you need to create successful web applications?
FYI: A front-end developer assembled all the copywriting, the photos, the graphics, and everything you see on this page into “web speak.”
How has your user experience been? Let us know.
For the user-facing part of the website to exist, some poor soul (albeit a brilliant one) needs to build and maintain a server, an application, and a database. Back-end developers “develop and maintain the core functional logic and operations of a software or information system” (Technopedia). For instance, as a backend developer, you can use ASP.NET, an open source web framework, which mandates a Windows server, and it works with a language like C#.
For the lay person, it is the back-end developer who makes pulling values from the database possible when you use a drop-down menu.
If a full stack developer’s forte lies in the back-end, it is imperative to understand basic server-side scripting and the art of providing dynamic responses to client requests. There is no dearth of server-side languages, but Ruby, Java, and Python are popular.
Full stack developers are expected to be able to create, query and manipulate databases with ease. There are several to choose from, ranging from SQLite to MongoDB to Oracle. The one to learn will depend on the project you’re working on. A hosted database will save the full stack developer the time and effort that comes with managing it. Big projects have dedicated database administrators.
It will be great if you know some frameworks, specific PHP ones, like Zend and Symfony or Django for Python, or Ruby on Rails for Ruby; version control software like SVN or GIT; and Linux.
The back-end developer also needs to learn about caching and key-value stores, queuing systems, search engines, and other tools like Carrierwave or Refile.
FYI: Each time you return to a site and log in, a back-end developer makes calling your stored data possible.
Beefing up your portfolio with these non-technical skills
Apart from hard-core technical skills, a full stack developer must be the glue that binds different teams together. It is hard enough to be a multidisciplinary polyglot if a developer is not capable of speaking at the same wavelength as the front-end and back-end teams. It is the best way to eradicate silos from the workplace and everybody on board moves quickly to get the product rolled out on time.
Full stack developers should be aware of the business dynamics they work in. This means that a deep understanding of customer needs must be the perpetual guiding force behind the design of the product.
As with every kind of programming job, soft skills are imperative to sandpaper the overall personality of a developer. Strong communication is no longer an exception — it is vital for full stack developers to help them bridge information gaps between the front-end and back-end to build a product they will be proud of. A tireless quest for new knowledge and of course, an open mind toward fresh ideas, (a success mantra all leaders swear by!) define a full-stack developer.
In conclusion, those who understand the stack in its entirety are poised to build applications that best suits customers’ requirements. So the next time somebody calls a full stack developer a generalist, please disillusion him.
Also, Read – How to hire full stack developers
Here's what you can do next
Check out FaceCode:
an intelligent coding interview tool
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00116.warc.gz
|
CC-MAIN-2021-43
| 8,836
| 45
|
https://community.idera.com/database-tools/powershell/powertips?pi874=2
|
code
|
Occasionally, it may become necessary to delete a set of subfolders within a given folder. Here is a simple chunk of code that deletes the folders from a list of folder names.
Warning: this code will delete the subfolders listed in $list without further confirmation. If the subfolders do not exit, nothing…
To manage auto starting programs on Windows, don’t bother writing extensive scripts. PowerShell can directly open the autostart manager included in task manager which does all you need:
PS C:\> Taskmgr /7 /startup
This opens a window and lists all auto starting programs, along with their…
The PowerShell Gallery not only offers public modules with new PowerShell commands but also public scripts. Before you invest time, you may want to investigate if someone else may have created PowerShell code that can do what you want.
Here is a quick example that illustrates how searching and downloading…
PowerShell remoting is insanely powerful: with Invoke-Command, you can send arbitrary PowerShell code to one or many remote machines and execute it there in parallel.
On Windows Servers, PowerShell remoting is typically enabled, so all you need are Administrator privileges. Here is a simple example:
New-Item can create new things on any PowerShell drive, including the function: drive that holds all PowerShell functions.
If you’d like, you can define new functions dynamically inside your code. These new functions would then exist only in the scope where they were defined. To make them script-global…
In the previous parts, we created the Test-Password function that can test user account passwords for both local and remote user accounts.
In our last part, we’ll add error handling to the Test-Password function so it responds gracefully when the user enters domain that does not exist or is unavailable…
In the previous tip we showed how PowerShell can validate and test user account passwords, however the password was requested in plain text. Let’s change this so that PowerShell masks the password when it is entered.
You can re-use the strategy used below for any other PowerShell function where…
PowerShell can test user account passwords for you. This works both for local and domain accounts. Here is a sample function called Test-Password:
Windows operating systems can be uniquely identified by a so-called 4K-Hash: this is a special hash string that is 4000 bytes in size. You can use this hash for example with “Windows Autopilot” to add physical and virtual machines.
The 4K Hash is just one piece of information required to…
If your laptop battery is going low too soon, or you’d like to investigate related issues, there is a simple way to generate an extensive battery charging report. It shows exactly when your battery was charged, what its capacity is, and how long it took to deplete.
Here is the code to create a…
Powered by IDERA
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00505.warc.gz
|
CC-MAIN-2020-40
| 2,893
| 21
|
https://flylib.com/books/en/3.121.1.51/1/
|
code
|
The Workshop provides quiz questions to help solidify your understanding of what was covered in this hour. Answers are provided in Appendix A, "Quiz Answers."
When is an managed object destroyed?
How many .NET languages can mix managed and unmanaged code?
What is the feature of the CLR that keeps memory clean?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588341.58/warc/CC-MAIN-20211028131628-20211028161628-00292.warc.gz
|
CC-MAIN-2021-43
| 311
| 4
|
http://selfblast.xyz/archives/4045
|
code
|
Novel–Divine Emperor of Death–Divine Emperor of Death
Chapter 1426 – The True Hope vacation knee
She screeched as tears begun to flow like a river from her vision.
what do domes symbolize
No, she observed her society encountering a ma.s.sive change!
She viewed Ancestor Tirea Snowfall, directing her hatred at her.
She uttered with pure hatred as she viewed her husband, no, an illusion! She switched all over and scowled as she stood up.
Divine Emperor of Death
“Ancient person, you can’t just barge in that way… Start looking, I needed a great deal difficulties outside, attempting to influence the guardians so it had not been an enemy but an ally…”
Huge Elder Elise Alstreim possessed practically frosty such as a statue. The eyes in the other individuals had been also broad in disbelief, surprise, and enthusiasm. It absolutely was only after a moment managed Huge Elder Elise Alstreim dumbfoundedly sound out.
He couldn’t assist but laugh when Elise instantly closed down the space as she appreciated him once again. He could experience her body system tremble all over again although she experienced presently trembled though keeping him before. However, at this point, he tightly wrapped his solid arms all around her delicate physique rather than allow her to leave as she sincerely needed in the scent, nearly experiencing like the tears he held back were actually going to end up while he listened to her happy sobs whilst she was weeping into his c.h.e.s.t.
“Father…” He couldn’t assistance but mutter as his overall body sensed a chill.
the children of the world are wiser
Fantastic Elder Elise Alstreim muttered as she noticed them accept one another. She understood who these people were… How could she not?
“It can’t be…”
On the other hand, reviewing this fearless girl, she inwardly nodded her head.
“Father, mum… you’re in existence… you’re still living…”
Many feelings sunk while they performed the other person when two even more arms embraced them.
Ezekiel Alstreim observed profoundly satisfied currently. It was like his overall staying was staying revitalized together ambiance. The loneliness he got sensed was getting slowly taken away, and her cries made him know that she profoundly d.e.s.i.r.ed him when he acquired d.e.s.i.r.ed her these decades.
He couldn’t aid but laugh when Elise instantly shut down the distance as she adopted him once again. He could sense her system tremble just as before even if she experienced presently trembled although carrying him earlier. Even so, at this moment, he tightly packaged his strong hands approximately her gentle entire body and not allow her to keep as she sincerely took in the aroma, virtually emotion much like the tears he retained back were definitely intending to come out because he read her reduced sobs while she was sobbing into his c.h.e.s.t.
Suddenly, a slap echoed as Ezekiel Alstreim’s face swiveled to the side.
Huge Elder Elise Alstreim’s sight severely trembled. Her heart and soul stirred as tempestuous emotions and thoughts she obtained covered in her coronary heart increased up like a tide. She could not have herself back as her system started to shift towards him without her authorization.
Davis walked to the Ancestral Hall when complaining to Ezekiel Alstreim, as well as a few people.
Ezekiel Alstreim’s voice resounded out. The cover up he retained onto fell on the ground when he smiled, searching extremely relocated to see her lovely, possibly-elegant encounter when he termed her brand with a pa.s.sionate voice.
Ezekiel seriously smiled at his dumbfounded wife. It turned out entertaining as he considered that she smacked him to protect her virtue that has been restricted to him to begin with, with this life-time.
the argentine as a market economy
Keira Alstreim instantly flashed towards Nora, shutting down the distance as she appreciated her developed kid with all her toughness. Nora was a little bit gal when she acquired trapped, but she was now an enormous female, actually a mom like her she couldn’t assist but experience old.
What’s a lot more, they does get her at her instant of weakness!
“Elise… I’m back full of life…”
Nora Alstreim could only experience tremendous joy along with innumerable thoughts she possessed suppressed since that time her moms and dads died. She didn’t weep one particular time for the children after their demise as she vowed that she would are living their components as well, getting to be an Immortal even though she also immortalizes their legacy.
Grand Elder Elise Alstreim possessed practically iced much like a statue. Your eye area of your others were also huge in disbelief, amaze, and thrills. It absolutely was only right after a occasion do Fantastic Elder Elise Alstreim dumbfoundedly voice out.
“Daddy…” He couldn’t assistance but mutter as his whole body felt a chill.
“Nora, don’t say you neglected this creep actually…?”
“Dad…” He couldn’t assist but mutter as his whole body experienced a chill.
Mary Queen of Scots 1542-1587
“Aged man, you can’t just barge in individuals… Appearance, I needed a great deal of trouble out of doors, aiming to persuade the guardians it had not been an adversary but an ally…”
At this moment, another person sneakily retained his fingers.
“Nora, don’t say you did not remember this creep presently…?”
How dare they!? She was currently able to decrease the challenge and go her very own way, but persons in this article still dared to system against her, in order to make her prefer to reside just as before. They demonstrated an false impression of her hubby or maybe dared to obtain a different person conceal as her husband to alter her mind.
Her system transferred, her palms subconsciously handing over Laura into the equally amazed Huge Elder Valdrey Alstreim before she ran the same way Fantastic Elder Elise Alstreim jogged.
Novel–Divine Emperor of Death–Divine Emperor of Death
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711218.21/warc/CC-MAIN-20221207185519-20221207215519-00274.warc.gz
|
CC-MAIN-2022-49
| 6,009
| 41
|
https://enterpriseirregulars.com/31336/video-and-a-love-story/
|
code
|
e have been doing a lot of work with videos for marketing purposes, which is a new area for me.
Couple of things I have learned:
1) Short is better.
2) Video engagement metrics are evolving but in general web site visitors really like video content.
3) Generalize as much as possible because changing existing video is as expensive as creating new video.
4) Video done badly amplifies the negative much more profoundly than when video done well amplifies the positive.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00575.warc.gz
|
CC-MAIN-2023-06
| 468
| 6
|
https://www.phoenix1.co.uk/product/vermason-combo-tester-x3/
|
code
|
Vermason Combo Tester X3
- Wrist strap testing:
If you are not using a continuous or a constant monitor, a wrist strap should be tested while being worn at least daily. This quick check can determine that no break in the path-to-ground has occurred. Wrist straps should be worn while they are tested. This provides the best way to test all three components: the wrist band, the ground cord (including the resistor) and the interface with the operator’s skin. If the wrist strap tester outputs a FAIL test result, stop working, and test the wrist band and cord individually to find out which item is damaged. Replace the bad component, and repeat the test. Obtain a PASS test result before beginning work.
- Footwear testing:
If using a flooring / footwear system as an alternative for standing or mobile workers, ESD footwear should be tested independently at least daily while being worn. Proper testing of foot grounders involves the verification of the individual foot grounder, the contact strip and the interface between the contact strip and the operator’s perspiration layer. If the footwear tester outputs a FAIL test result, stop working, and test the foot grounder and contact strip individually to find out which item is damaged. Replace the foot grounder. Obtain a PASS test result before beginning work.
The Vermason Combo Tester X3 verifies the functionality of an operator’s wrist strap and footwear. It determines if an operator’s wrist strap and footwear will function correctly. The operator’s wrist strap and footwear (both feet) will test simultaneously with no need for separate tests. Green lights indicate that the wrist strap and footwear are passing. Red lights and an audible alarm indicate when the wrist strap and/or footwear (left or right) are failing. If failure occurs, the tester will also display if the grounding device’s resistance is too low or too high.Contact
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816863.40/warc/CC-MAIN-20240414002233-20240414032233-00663.warc.gz
|
CC-MAIN-2024-18
| 1,910
| 6
|
https://cockneycoder.wordpress.com/2009/05/27/scott-guthries-talk-on-lidnug/
|
code
|
Finished the webcast a couple of hours ago. Great session with lots of interesting questions and answers – thanks Scott!
The most important question from my point of view (and judging by the number of people that asked it, not just me – this was by far the most common one), was Scott’s take on Linq to SQL and how it sits alongside Entity Framework – and what’s the future of L2S. Whilst Scott doesn’t work directly on the ADO .NET team, he was able to largely answer this anyway:
L2S is definitely a part of the future of .NET and that they have no plans to kill it off (at the moment!). Microsoft see Entity Framework as hopefully taking a number of the best bits out of L2S and incorporating them into EF (which should be included in .NET 4), but they also plan on keeping L2S going, too. He also addressed for the much-publicised blog that the ADO .NET team put out which has been largely mis-interpreted by the public – they never meant to suggest that L2S is dead. He also mentioned that they will be putting a new blog post out in the near future addressing this issue further.
So, glad that’s been cleared up 🙂
For what it’s worth – I see L2S as being an excellent ORM tool for rapid application development, small applications, prototypes etc. – essentially anything where you are working with e.g. a 1-to-1 mapping between your database and your domain model – whilst you might prefer to look at EF for more complex object models. However, from what I have seen of EF (which admittedly is not a great deal) and from other blogs out there, there’s still much work to be done on it before it’s going to be adopted by the majority of the L2S crowd.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647299.37/warc/CC-MAIN-20180320052712-20180320072712-00655.warc.gz
|
CC-MAIN-2018-13
| 1,687
| 5
|
https://martin-stamm.de/
|
code
|
I finished the bachelors degree in 2018 with a grade of "very good" (1.5). Currently, I am looking for work starting in 02/2019 in the Berlin area.
I currently work as a teaching assistant at the Software Architecture group and had previous employment at the Computer Graphics Systems group. During my bachelors project I worked on GsSqueak, a Squeak integration into GemStone/S. You can find more of my projects here.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258058.61/warc/CC-MAIN-20190525124751-20190525150751-00110.warc.gz
|
CC-MAIN-2019-22
| 418
| 2
|
https://ftp.home.vim.org/ibiblio/docs/linux-doc-project/linuxfocus/English/May2002/article244.shtml
|
code
|
|This document is available in: English Castellano ChineseGB Deutsch Francais Italiano Nederlands Portugues Turkce
by Katja Socher
About the author:
Katja is the German editor of LinuxFocus. She likes Tux, film & photography and the sea. Her homepage can be found here.
Celestia and Open Universe are programs that let you travel
through the universe and explore all the planets and stars. If
you ever looked upon the sky at night dreaming of flying
through space visiting all those bright shining stars and
planets you will love them! Both are real time programs, that
means that you can view all the planets and stars move along
their paths, trace them and orbit them.
With Celestia you can go on a space travel and explore our
universe. When you start the program you will first see
Jupiter's moon Io. The voyage can begin.
But when you run the program for the first time you should first make a guided tour and go on a demo flight by pressing d-key. You will leave Earth and see some very nice pictures of our blue planet. Next is the moon, followed by pictures of the sun. Now you see the planets on their orbits. After this you travel to see Saturn, some star constellations and the milky way before going home again.
Now you have an impression of the program it's time to go on your own exploration:
There are several ways to navigate through space. You can
press the return key and enter the name of the planet, star or
constellation. Then choose a travel speed (e.g. F2, F3) and
press g-key. Off you go!
You can also travel through the universe by clicking and dragging with the mouse and selecting an object with a left mouse click. If its name is then shown on the top left of the program window the object is selected. This is really a cool feature as you can select almost every point that you can see on your screen. Press c-key to get the selected object in the center of your window. Choose a travel speed if you haven't already done so and press g-key. You are now traveling to your selected object. By clicking g-key again you can get closer to it.
With t-key you can track an object.
If you press n-key you get the names of the planets and moons, b-key gives you the names of the stars, = the constellation names and with v-key you get some information about your target. Pressing any of this buttons again lets disappear the names and information again.
This information really is very useful for your orientation.
A click on "h" (followed by "g" of course) brings you back to our sun which I find very helpful when I am lost in space once again ;-).
You can select different travel speeds with F2 to F6 (F2 being the slowest). Pressing F1 stops everything.
To get closer you have to press g-key again until you are as close as you want to. You can read "Traveling" written on the left bottom of the screen in addition to the moving stars and planets.
With ESC you stop everything.
To find out more read the Readme of the program which is included in the top level directory of the source code. If you prefer to read about the keybindings online then take a look at =>the keybindings page<=.
Here are a few screenshots:
The version used for this article was celestia-1.2.2. You
can download it from the Celestia webpage (http://www.shatters.net/celestia/).
The package, celestia-1.2.2.tar.gz, is about 10Mb big. To use
it you need a 3D graphic card and the Mesa 3D graphics
libraries. Packages, headerfiles and libraries should already be
included on the CDs of your Linux distribution.
The installation should be straight forward.
Open Universe is a program similar to Celestia. It doesn't have that many stars and planets because it focuses on our solar system. It hasn't been updated for a while now as the people of OpenUniverse are busy helping with Celestia, but it has a nice navigation bar where you can choose your target from a list of planets, stars etc. so that you don't get lost that easily. I really think it is worth looking at, too.
If you start it you will see some beautiful pictures of the
When using it for the first time you might also want to see a demo first. Click on Options (on the bottom of the menu) and an options menu pops up. Here you can choose demo mode. If you want to know the names of the stars and planets you are passing by make sure that you also have the options "info", "star labels" and "body labels" ticked.
Now lean back and enjoy watching for a while.
Okay, now it's time to go on a space exploration by ourselves! In OpenUniverse you are a bit more restricted than in Celestia but are also less likely to get lost in space that way. To navigate through space you choose an object from the source list and another from the target list. You can also set the camera mode. If you choose "body to body" you get a view from the target as seen from the source. If you choose "orbit" you orbit around the target. Now click "go there" and your voyage begins!
You can read the manual to get more information on how to use OpenUniverse. If you need help while traveling pressing h will also give you some clues.
The version used in this article was openuniverse-1.0beta3.
You can download it from the OpenUniverse webpage (http://www.openuniverse.org/).
The package, openuniverse-1.0beta3.tar.gz, is about 4Mb.
It requires a bit of manual code change to get it compiled but it is really worth it.
It is said on the installation page that the glui libs are optional but I could not get it to work without them. You get the glui_v2_1_beta sources at http://www.cs.unc.edu/~rademach/glui.
To compile the glui libraries:
tar zxvf glui_v2_1_beta.tar.gz
Edit the makefile and set the GLUT_ variables to fit your Linux system:
Set the CC variable:
Copy the resulting library lib/libglui.a to the place where your other open GL libs are:
cp lib/libglui.a /usr/X11R6/lib
Copy the header files:
cp algebra3.h arcball.h glui.h quaternion.h stdinc.h viewmodel.h /usr/X11R6/include/GL/
tar zxvf openuniverse-1.0beta3.tar.gz
./configure --with-gl-libs=/usr/X11R6/lib --with-glui-inc=/usr/X11R6/include/GL --prefix=/usr/local/openuniverse
To get the whole thing to compile under Mandrake I had to add
in the files src/cfglex.l src/cfgparse.y src/milkyway.cpp src/stars.cpp
#include <GL/gl.h> and #include <string.h>
in the file src/ou.h
Webpages maintained by the LinuxFocus Editor team
© Katja Socher, FDL
Click here to report a fault or send a comment to LinuxFocus
2002-05-02, generated by lfparser version 2.28
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818711.23/warc/CC-MAIN-20240423130552-20240423160552-00773.warc.gz
|
CC-MAIN-2024-18
| 6,456
| 66
|
http://forums.xbox-scene.com/index.php?/topic/393782-fs-xbox-16-mobo-in-canada/
|
code
|
Fs: Xbox 1.6 Mobo In Canada
Posted 06 May 2005 - 03:11 AM
This one was taken out of a xbox with a broken dvd drive.
Tested and working
I'm located by Eaton Center dt Toronto.
I can meet anytime
Posted 06 May 2005 - 03:16 AM
Posted 19 May 2005 - 12:04 AM
need to sell asap so $50 for both the motherboard and hd
0 user(s) are reading this topic
0 members, 0 guests, 0 anonymous users
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644062327.4/warc/CC-MAIN-20150827025422-00029-ip-10-171-96-226.ec2.internal.warc.gz
|
CC-MAIN-2015-35
| 382
| 11
|
https://my.wealthyaffiliate.com/jholloman/blog/addressing-speed-issues-on-your-site
|
code
|
Addressing Speed Issues on Your Site
I recently saw a comments by SurfsideBob & SDane under the topic "Speed issues on all my sites?" I tried to respond and help out, but I had some posting issues, so I thought I would share my experience in resolving speed issues for you site;
If you are having speed issues do the following:
1. Use Google's PageSpeed Insights:
It provides info on the performance of a page for mobile and desktop devices and provides suggestions on how that page may be improved.
2. Make sure your images are fully optimized. Large image file sizes can drastically reduce your page load times. I use two sites to compress my images:
3. Purge all page cache for your site. I use two different plugins to do this - W3 Total Cache and Autoptimize. Here are the videos I used to set them up:
These helped to leverage my browser cache and eliminate render-blocking on my site. Those seem to be the two that have a huge impact on your sites speed.
Using the above info, I have been able to get both my desktop and mobile configurations above 90% (green for Google).
Hope this helps.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400212959.12/warc/CC-MAIN-20200923211300-20200924001300-00242.warc.gz
|
CC-MAIN-2020-40
| 1,096
| 10
|
https://ask.godotengine.org/71475/physic-simulation-cpu-and-gpu-performance
|
code
|
Hello, while using the engine for a 3D spaceship game, it happens sometime (i'm still not sure how to reproduce the issue, for now it seems random) that the game start stuttering (sometimes even freeze).
I notice that, when this happens, the GPU utilization is almost 0%, while it is normally around 10%.
While in this state, if I throttle up the spaceship (which cause more calculation both for new forces applied and orbital calculation) the GPU kicks in again (25% utilization) and the game fps goes up to 75 as usual.
This is very weird, i suppose somehow the engine fails at calling the GPU in, and the CPU has to compensate (failing).
Has this ever happened to you? Is there a way to directly control GPU/CPU usage?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00581.warc.gz
|
CC-MAIN-2023-50
| 721
| 5
|
http://what-when-how.com/Tutorial/topic-697g9na69h/Getting-Started-with-MakerBot-100.html
|
code
|
Creating or obtaining a water tight thing used to be one of the most bother-
some tasks for the early MakerBot adopter. Your STL file must be one con-
tinuous, solid, manifold object. A manifold object is essentially “water tight”.
If there are any holes, gaps, overlapping vertices and/or faces, in the model,
the software for the 3D printer will not be able to tell what is inside the object
and what is outside. There are tools that can help identify and close such
holes and gaps, but for best results you should address these potential de-
fects as you design your object.
If you can't get your design software to give you a watertight object, you can
repair your STL file in either MeshMixer or netfabb. To learn how to use net-
fabb and MeshMixer to repair STL files, see Chapter 9 .
When you design a thing that is very wide and long and flat, the corners will
often curl while printing. This occurs because as the extruded plastic cools,
it will shrink a bit. This is a bigger problem for ABS than for PLA, but it will
affect both materials. The easiest way to minimize corner curling and shrink-
age by enabling a raft when you are slicing your model. A raft is a large flat
lattice work of printed material underneath the bottommost layer of your
printed object. Use of a raft will help reduce warping and curling by allowing
your printed object to adhere better to your flat build surface.
Some users address corner warping by printing with shields or baffles en-
closing their build volume. The purpose of these baffles is to prevent slight
drafts of colder air from cooling the base of the build and to generally create
a more consistent temperature. MakerBot in-house designers plan for this
shrinking effect when designing their models. They will often add mouse
ears , which are a small flat circular structures on the corners of a large model.
This design feature acts as a mini-raft for the thing allowing it to better adhere
to the build surface and to cool at a more uniform rate.
Even if you're printing in PLA, you may be sharing your design
( Chapter 10 ) with people who are using ABS or older, more
finicky printers. So, it's helpful to be aware of these constraints
even if you're enjoying trouble-free PLA printing on your Rep-
Friction Fit and Moving Parts
As you create your model, be sure that there is enough clearance between
moving parts such as gears, cogs, or links in a chain. If there is not enough
space between parts, your prototype may be a solid, non-moving object. For
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585280.84/warc/CC-MAIN-20211019171139-20211019201139-00139.warc.gz
|
CC-MAIN-2021-43
| 2,515
| 35
|
https://documentation.mapp.com/latest/en/create-inbox-rendering-test-12569301.html
|
code
|
To check an email message before sendout for rendering across all clients and devices and to optimize deliverability.
Inbox rendering includes:
A rendering preview at various clients and devices.
A check to assure that your message passes common spam checks.
Important sender identity checks (DMARC, DKIM, SPF).
Tests for subject line length, a valid list-unsubscribe header, and functioning links.
Inbox rendering is an add-on feature through Mapp Connect and is active by default. Talk to your customer representative if you want to de-activate this feature for your system.
Create a message with the message editor or the CMS.
Inbox Rendering cannot be started directly in the CMS area. To create a message check for a message you created in the CMS, start the sendout process. Then open a preview window. There you can start a message check.
- During prepare sendout or during the sendout process, click Preview.
⇒ The Message Preview window opens.
- From the drop-down list Group, choose the group for the message check.
The group that you use for the message check affects the results of the check. Normally, you use the same group that you plan to use for the real sendout. The group contains information and settings that affect the final message and the sendout process. These settings include a prefix for the subject line or a message header, group attributes, member attributes, and the list unsubscribe header. Mapp Engage also uses the group email address to check spam lists.
- Click Send Rendering Test.
⇒The check begins. Mapp Engage sends out a batch of emails to test addresses at common email clients. Mapp Engage also runs tests for deliverability problems, common spam filters, sender identity, and more.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817181.55/warc/CC-MAIN-20240417204934-20240417234934-00068.warc.gz
|
CC-MAIN-2024-18
| 1,730
| 15
|
https://community.auth0.com/t/can-you-change-the-password-policy-strength-to-require-all-4-of-the-character-types/114346
|
code
|
I have a question about the password policy. If you select the policy ‘good’ it must at least contain 3 of the 4 types:
- lower case
- upper case
- special characters
Is there a way to change/influence the password policy to: it must contain all 4 types?
At this time this behavior is by design, Auth0 does not force users to meet all four factors. This has been requested in the past and if you would like to see this functionality in a future release of Auth0, we would encourage you to submit a feature request using this form: Auth0 Feedback
There is an alternative approach that leverages a private Auth0 endpoint which we do not provide SLAs or support for: GitHub - auth0/auth0-custom-password-reset-hosted-page: An example on how to do a custom reset password hosted page.
By using the ‘/lo/reset’ endpoint you could potentially host your custom reset password page and code your own password policy checker to conform with your use case. Again this would be entirely on your side and Auth0 wouldn’t be able to assist in maintaining/troubleshooting this password reset flow if the behavior is not as expected.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511106.1/warc/CC-MAIN-20231003124522-20231003154522-00427.warc.gz
|
CC-MAIN-2023-40
| 1,127
| 8
|
https://forum.fizz.ca/en/discussion/2646999/internet-cutting-out-yet-again
|
code
|
Internet cutting out - yet again
This question is for the other user - since I know the team will just copy paste the same premade answer to reboot the modem for the 38848383 times :
Did anyone ever managed to fix the internet shutoff where the modem is all powered up and the lights are good ?
I've already tried upping my plan to the max one they have and it didnt help.
I'm currently shopping for a new plan, but would rather not have to do a switch because of the hassle of having to return the modem.
Once again, please, I want a community answer, not a staff answer. I already tried restarting my rooter. It does not work.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817289.27/warc/CC-MAIN-20240419043820-20240419073820-00607.warc.gz
|
CC-MAIN-2024-18
| 628
| 6
|
https://forums.powershell.org/t/powershell-script-to-test-latest-http-vulnerability/4047
|
code
|
See here (https://isc.sans.edu/forums/diary/MS15034+HTTPsys+IIS+DoS+And+Possible+Remote+Code+Execution+PATCH+NOW/19583/#33943)
There is known exploit in the wild and we do not powershell way to test it. The one specified in this article does not work as AddRange would not allow to enter number bigger then Int64 and hence fail.
We need to use Private method within WebHeader collection but I’m stuck since code below does not return actual object which it shall.
[System.Net.WebHeaderCollection].GetType().GetMethod(“AddWithoutValidate”, 36) -eq $null
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00375.warc.gz
|
CC-MAIN-2021-21
| 558
| 4
|
http://www.orangemane.com/BB/showpost.php?p=4101175&postcount=189
|
code
|
Originally Posted by Flacko
Great round 1 of the NBA Playoffs, three game sevens in one night!
Yep. I haven't checked who's playing, but I assume the games today.
Happy the Warriors won. For some reason, can't get excited about the Clips/Warriors in the sense I really want one to win over the other. Probably the players/coaches or whatever. Just don't dig.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00011-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 358
| 4
|
http://in5stepstutorials.com/ms-outlook/set-reminder-in-microsoft-outlook-2016-2013-2010.php
|
code
|
Events you add to Microsoft Outlook's calendar can optionally have a reminder attached. But you don't need to go through the calendar: you can add your own to-do's ("tasks"), and set an optional reminder for a specific day and time. The screenshot shows an example of the reminder popup in Outlook 2016, used for this tutorial. But the steps are the same in Outlook 2013 and Outlook 2010.
In Outlook, the Ctrl+N keyboard shortcut opens a new type of item which depends on the area in which you are (Emails, Calendar, Tasks, Address book, etc.) So, instead, remember that the following hotkey will create a new task wherever you happen to be in Outlook: Ctrl+Shift+K. You can also add a new task by clicking on the New Items dropdown, and picking "Task" (always under the Home tab of the ribbon: and you can make the ribbon always visible!)
In the new reminder window, type your reminder in the Subject text field: it's the text that will appear in the popup ("Write a tutorial", in my example in the first screenshot). All the other fields are optional. The only thing left to be reminded to take care of that task is to check the "Reminder" checkbox.
Pick a due date for your reminder. Then, pick a time: this is not necessarily the due time, it's the time at which the reminder popup will open, as long as Outlook is running. The time dropdown only shows half-hour increments, but the time is also an editable text field, so you can type any hours and minutes you want. Once done, click on the Save & Close button (top left corner).
- Just like regular Outlook calendar events, you can snooze task reminders by picking a time span in the dropdown, and clicking on the Snooze button!
- To view all your tasks, click on the clipboard-and-checkmark icon in the bottom left corner of Outlook's main window (keyboard shortcut of Ctrl+4). Your tasks will appear at the bottom; all flagged messages appear at the top. Tasks and flagged emails that have a reminder attach show a bell icon on the right.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945372.38/warc/CC-MAIN-20230325191930-20230325221930-00379.warc.gz
|
CC-MAIN-2023-14
| 1,996
| 6
|
https://mail.python.org/pipermail/mailman-users/2006-April/050424.html
|
code
|
[Mailman-Users] Accessing lists
brad at stop.mail-abuse.org
Mon Apr 10 21:40:39 CEST 2006
At 11:44 AM -0700 2006-04-10, Allan Hansen wrote:
> Yes, of course, as I am interacting with Mailman after all, but
> my setup has one BIG advantage: speed. Using my setup I can search
> through all the lists in 1 second (the same subscriber may be on multiple
> lists and want to get off them all).
There's plenty of speed in the Mailmain command-line tools, too.
> Using the Mailman pages I first
> have to wait, then enter the list name, then wait, then enter my
> password, then wait, then enter the search, then wait, etc. Just to
> look at 1 of my 40+ lists. Then repeat all that 40 times...
How big are your lists? How big is your archive? I have over
ten gigabytes of e-mail archives that go back many years, and it
takes Eudora quite some time to search through all of them. Even if
I limit myself to just the Mailman-related lists for which I've only
been a subscriber for a couple of years, it takes a while. On the
other hand, bin/find_member can search through all the lists hosted
on python.org pretty quickly.
Brad Knowles, <brad at stop.mail-abuse.org>
"Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety."
-- Benjamin Franklin (1706-1790), reply of the Pennsylvania
Assembly to the Governor, November 11, 1755
LOPSA member since December 2005. See <http://www.lopsa.org/>.
More information about the Mailman-Users
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647584.56/warc/CC-MAIN-20180321063114-20180321083114-00695.warc.gz
|
CC-MAIN-2018-13
| 1,486
| 27
|
https://stenasynora.blogspot.com/2011/02/headerp-solutions-pvt-ltd-guide-in-it.html
|
code
|
Computer always have value and today the world entirely depends upon the computer for some of their uses. You can see nowadays, more companies rely a lot on computer programs to manage their huge data base. A company aspiring to participate in the global marketplace simply can't do without a reliable computer program, whether it is used for sales pipeline management, customer management systems, accounting and payroll systems or simply as a medium to promote itself. Every company either small or big need at least one computer more in its departments. As a result, this trend has flashed an increasing demand for IT professionals and the amazing evolution and development of computer science has lined the way for some of the most exciting and high paying jobs in the market. In fact, global IT related career has more than doubled since 2006. A 2006 salary survey for IT professionals showed a range from $71,930 to 118,100.
So most of the IT companies are relying heavily on some consulting companies to hire their needs and when they look for consultancies, they look for some major cities like Chennai for their hiring process. Headerp solutions pvt ltd is one of the reputed outsourcing and consulting company in Chennai satisfies the goals of the IT companies in all aspects, particularly in hiring process. Headerp solutions pvt ltd invest heavily in training their employees and also enable their employees to pursue their education and earn IT certifications. Technological sense people and particularly, those who excel in the fields of math and science would find that a degree in computer science is a suitable course that could be a springboard for a high paying job. Considering this fact, some of the people who don’t have computer science think that they are not suitable for this kind of jobs, but it is not true. As a non IT candidate, you can also look for some it job and for that you can approach the best consulting companies in Chennai like headerp for your IT related career. Companies also take candidates those who don’t have computer science degree, but having some deep knowledge on IT services.
When you approach headerp solutions pvt ltd, your career prospects is endless and you can enable yourself a strong foundation on computer programs with basic requirements. The career path for IT professionals is diverse and exciting. The more popular ones include database administrations and security management, network infrastructure and design, hardware and voice-over IP telephony. The recent trend of telecommuting or working from home would not be possible without the technological advancements of internet programs. A high profile job for computer scientist involves designing programs and sap maintenance. So prepare yourself for these kinds of IT enabled services by approaching headerp solutions. They are called as your career guide in this service and their human resource consulting team guides you in your entire career path in a excellent and efficient manner with all necessary details.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00007.warc.gz
|
CC-MAIN-2022-40
| 3,037
| 3
|
https://www.cryptomoonity.com/fundamentals-of-proof-of-work/
|
code
|
2019 is the year of the 51% attack. Once a problem only for cryptocurrencies of negligible value, high reputation and high market cap cryptocurrencies are now finding themselves victim to double spends, with exchanges taking the brunt of the damage.
As the attacks continue to grow in frequency and severity, exchanges are beginning to take steps to protect themselves. Originally this meant increasing the number of confirmations, however as the attacks have expanded from tens of blocks to hundreds of blocks, the effectiveness of this strategy is being called into question.
Without a significant course correction, we can expect the damages to grow, even to the point where exchanges may begin to fold. These 51% attacks are successful because of fundamental weaknesses in protocols of the targeted cryptocurrencies, and exchanges will ultimately need to be much more restrictive when selecting which cryptocurrencies to support.
Game Theory and Threat Models
Many decentralized protocols assume that at least 51% of all participants will participate honestly. Bitcoin has been successful because the protocol designers realized that this assumption is inadequate for real world decentralized protocols. In the anonymous, unregulated Internet, participants are free to act as economic agents, often with few consequences for deviant behavior. Instead of assuming that greater than 51% of all actors will be acting honestly, Bitcoin assumes that greater than 51% of all actors will be acting according to their best economic interest.
This threat model is substantially less forgiving. Instead of assuming that most participants will follow the protocol faithfully, Bitcoin developers assume that participants will proactively seek out ways to deviate from the protocol if those deviations can result in profit. This assumption greatly restricts the flexibility in protocol design choices, but has proven to be a crucial requirement for success out on the open Internet.
Bitcoin developers strive for something called incentive compatibility. If a protocol has incentive compatibility, it means that the optimal decision for each individual from their own perspective is also the optimal decision for the group as a whole. When protocols are incentive-compatible, individuals can be completely selfish because those selfish actions will benefit the group as well.
The game theory that keeps Bitcoin running securely is complex and often quite subtle. Many of the cryptocurrencies that have attempted to copy Bitcoin’s protocol design have made changes that have broken the incentive compatibility that is critical to keeping Bitcoin secure. As a result, these cryptocurrencies are not secure, and the deluge of double spend attacks is a clean demonstration that not everything is in order.
Though altcoin designers have broken incentive compatibility in many ways, nothing has been more beneficial to the recent double spend attacks than the decision to use shared hardware as the means for blockchain security. When the same hardware is able to mine on multiple cryptocurrencies, critical incentive compatibilities break down.
There are two primary categories of cryptocurrencies with shared hardware. The first and most prominent category covers the ASIC resistant cryptocurrencies. ASIC resistant cryptocurrencies actually have a goal of using shared hardware; the belief is that security is increased because more widely available hardware will lead to greater hashrate decentralization. The second category of shared hardware cryptocurrencies is cryptocurrencies that are ASIC mined but share the same algorithm as some other cryptocurrency. When multiple cryptocurrencies share the same proof of work algorithm, the same hardware (even if that hardware is specialized) is able to target any of the cryptocurrencies and this disrupts the incentive compatibility in many of the same ways that ASIC resistance does.
What Has Changed Since 2017
Shared hardware has been a theme in cryptocurrency for many years, and yet only recently have high profile 51% attacks become a problem. Truthfully, these attacks have become possible recently for the simple reason that the industry has become more sophisticated. Better tools exist, smarter attackers exist, and in general there is just more and better infrastructure. While this infrastructure has largely benefited honest participants more than anyone else, it has also benefited attackers, and made it easier for sophisticated individuals to attack insecure cryptocurrencies.
We’re going to be looking at a few of the developments which have been more important to 51% attacks, but even without these specific developments I believe that we would have eventually started to see high profile 51% attacks on shared hardware cryptocurrencies anyway. Shared hardware is simply a fundamentally insecure means to protect a blockchain against double spend attacks.
One of the key developments in enabling recent attacks has been the maturing of hashrate marketplaces. For shared hardware cryptocurrencies, knowing the most profitable cryptocurrency to mine at any particular moment requires a high degree of sophistication. Hashrate marketplaces allow hardware owners to rent their hardware out to more sophisticated miners, increasing the profits of all participants in the hashrate marketplace.
A side effect of hashrate marketplaces is that attackers now have a great pool of hardware that they can draw from quickly and temporarily when attempting an attack. Before hashrate marketplaces existed, attacking a cryptocurrency with 100,000 GPUs defending it more or less required owning 100,000 GPUs. Attacks of that scale would require many tens of millions of dollars to execute, which meant that heavily mined GPU coins were largely safe. After the development of hashrate marketplaces, the same 100,000 GPUs can be rented for several hours at a cost of just tens of thousands of dollars. Hashrate marketplaces cut the security margin of shared hardware cryptocurrencies by multiple orders of magnitude.
We also have to expect that hashrate marketplaces for shared hardware will only continue to grow, because all participants benefit from joining a hashrate marketplace — hashrate marketplaces make mining more efficient.
These hashrate marketplaces don’t make nearly as much sense for exclusive hardware cryptocurrencies. The benefit of a hashrate marketplace is that they help hardware owners avoid the complexity of deciding what to mine to make the most money. In an exclusive hardware cryptocurrency, there is only ever one thing to mine, which means there is not much to gain from joining a marketplace.
There is another critical game theory element at play with hashrate marketplaces. When a miner offers shared hardware up to a hashrate marketplace, there is a chance that the hardware will be abused to commit an attack. The shared hardware operator however is not incentivized to care, because the attacker is likely paying a small premium for the hardware (due to the need for burst access), and because the underlying hardware does not lose value if one of the cryptocurrencies that it targets is hit with a big attack — there are plenty of other sources of value for that hardware.
Exclusive hardware on the other hand can only derive value from the single cryptocurrency that it is able to target. Offering up exclusive hardware to an attacker is far riskier, because a successful attack has a more direct impact on the value of the hardware that is used. All hardware providers participating in a hashrate marketplace risk being wiped out by a successful attack on their sole source of income, and therefore are incentivized away from participating in marketplaces that reduce the security margins of the underlying cryptocurrency.
Large Mining Farms
The appearance of large mining farms has also played a big role in reducing the security of shared hardware cryptocurrencies. Many large mining farms exist that exceed 10,000 GPUs, multiple mining farms exist that exceed 100,000 GPUs, and the largest of the mining farms has well in excess of 500,000 GPUs.
From a security perspective, this means that any GPU mined cryptocurrency with less than 500,000 GPUs worth of hashrate on it can be single handedly 51% attacked by the largest mining farm. Cryptocurrencies with less than 100,000 GPUs mining on them are vulnerable to not just one farm, but multiple farms that are each capable of single-handedly launching a 51% attack and executing a double spend. Cryptocurrencies protected by less than 10,000 GPUs worth of hashrate are pretty much trivially vulnerable to attack.
Many of these GPU mining farms are purely motivated by profit, sharing little if any of the ideology of the cryptocurrency space. To some of these farms, if there is a way to make more money, then that is the best course of action, even if there is collateral damage to the underlying ecosystem.
Exclusive hardware addresses this in two ways. The first is that for exclusive hardware cryptocurrencies, there can fundamentally be at most only one mining farm that is capable of launching a 51% attack. Though it’s not a fantastic guarantee by itself, exclusive hardware cryptocurrencies are guaranteed to have to trust at most one entity. This is contrasted against the vast majority of ASIC resistant cryptocurrencies — most ASIC resistant cryptocurrencies could be attacked at any time by any of a multitude of different mining farms.
The more significant advantage of exclusive hardware is incentive alignment. For profit maximizing mining farms, profit is generally not possible by attacking an exclusive hardware cryptocurrency because the attack is going to reduce the value of the mining farm’s hardware. Even in the situation where one mining farm holds enough hashrate to commit a 51% attack, that mining farm is incentivized against executing that attack, because the total value of the hardware owned by the farm is greater than the total amount of money that the farm would be able to steal in an attack.
Increased Attacker Budgets and Sophistication
One of the major differences between cryptocurrency in 2019 and cryptocurrency in 2017 is that the space is a lot more valuable, the theory is a lot better understood, and the number of experts is a lot higher.
In 2017, the number of people who understood that these vulnerabilities existed was not very high. Further, the value of a typical cryptocurrency was also not very high, meaning even for individuals who knew how to execute an attack, there wasn’t much profit to be had by performing an attack.
In 2019, there are a lot more people out there who understand how cryptocurrencies work, and who understand how to attack cryptocurrencies that have fundamental flaws. Further, the potential payoff of committing a successful attack is much higher today, meaning that a larger percentage of capable individuals are going pursue attacks. The increased rewards also mean that attackers can commit more time, money, and resources to engaging an attack.
This is a trend that is going to continue. Today we are seeing 51% attacks because they are the lowest hanging fruit with the highest payoff. However many of the major popular dapps today have fundamental weaknesses, and as they grow in value and as attackers grow in sophistication, those fundamental weaknesses are going to increasingly be exploited. In particular, I have concerns for most of the cryptocurrency projects involving (in order of concern): novel consensus algorithms, on-chain governance, oracles, stablecoins, prediction markets — among other things. It’s often not the core ideas themselves that are broken, but rather the specific designs and implementations. This space currently suffers from a lack of peer review; many of the high profile projects deployed in our ecosystem have not been adequately reviewed and likely have significant active vulnerabilities.
Hardware Bear Markets
Hardware bear markets are a problem that impacts both shared hardware and exclusive hardware cryptocurrencies. If the value of mining hardware falls to the point where it is no longer profitable to mine, the hardware can become very cheap for an attacker to acquire.
The recent cryptocurrency bear market has substantially reduced the value of a lot of mining hardware, which simultaneously means that cryptocurrencies have a lower active total hashrate defending them and also means that attackers have much cheaper sources for renting or buying hardware.
The GPU marketplace is getting hit by a second big impact: there are now ASICs available for both Ethereum and Zcash. These two cryptocurrencies were previously driving most of the GPU hashrate, and that hashrate is slowly being pushed out by ASICs, which dramatically reduces the cost of renting GPUs to attack the lower value cryptocurrencies. As ASICs continue to come to market for the high value GPU cryptocurrencies, we can expect this effect to exacerbate, and 51% attacks will become increasingly common and inexpensive. I do not see this trend reversing, even with novel attempts at ASIC resistance on the horizon.
Bitcoin is also getting hit with a hardware bear market. It’s estimated that as much as 1/3rd of the Bitcoin hashrate has been put up for fire sale by mining farms that are now insolvent. S9’s are available today at prices far below the manufacturing cost, and while it doesn’t seem like this is a security issue for Bitcoin yet, it may become an issue if the price falls another 2–4x.
The manufacturers themselves have been hit extremely hard by the bear market. It’s estimated that Bitmain, Innosilicon, TSMC, and even Samsung all suffered substantial losses due to the sudden price drop, and because of that it’s less likely that we will see heavy over-production in the future — we now see that heavy production is very risky, and Bitcoin is now at a scale where companies are unwilling to take such high risk positions. My guess is that this is the most severe hardware bear market Bitcoin will ever see.
Other exclusive hardware cryptocurrencies however are not as large as Bitcoin, and hardware manufacturers may be more willing to risk overproduction, which in turn could cause hardware bear markets for those cryptocurrencies in the event of a sudden price drop or other turmoil.
The Impact of the Block Reward
Because hardware is very financially expensive to obtain and operate, the security of a cryptocurrency against double spend attacks is highly dependent on its block reward. The total amount of protection that a cryptocurrency receives is proportional to the amount of hardware protecting it, and if a low block reward prevents any substantial amount of hardware from mining the cryptocurrency, the cryptocurrency will not have any substantial amount of security.
In general, we need to be thinking about security in terms of how many dollars a 51% attack would cost. If the total value of hardware mining a cryptocurrency is one million dollars, then we can expect that any trade over one million dollars is strictly vulnerable to a 51% attack, because the counterparty to that trade could have just spent a million dollars buying or manufacturing enough new hardware to commit the double spend attack.
It’s difficult to appraise the total value of hardware mining a cryptocurrency, and difficult to appraise the cost of manufacturing a new set of hardware that’s worth enough to perform a 51% attack, but as a general rule of thumb it’s between 6 and 24 month’s worth of block reward. The open competitiveness of hardware mining generally ensures that it will be in that range.
This helps us to apply a maximum safe transaction value to a cryptocurrency, however before picking a value we need to talk about the phrasing ‘double spend’. The truth is that a double spend could really be a triple spend or quadruple spend, or whatever multiple of spend that allows an attacker to be successful. A single double spend attack could simultaneously double spend a dozen different exchanges all at once. So it’s actually not enough to consider a single transaction when contemplating security against a double spend attack, we need to also consider that other attacks may be happening simultaneously.
The actual upper bound for transaction value is going to be specific to each cryptocurrency, and depends on many factors that go beyond just the block reward. But as a general rule of thumb, I would start to get nervous about transactions that are larger than 1 month worth of block rewards for exclusive hardware cryptocurrencies, and I would get nervous about transactions that are larger than one hour worth of block reward for cryptocurrencies with large established hashrate marketplaces.
A short is essentially a loan. When you take out a short on a cryptocurrency, you are taking out a loan for a number of coins where you agree to return the same number of coins (usually plus some interest) in the future. Typically, when a person takes out a short they sell the coins immediately and then hope that the price drops so that they can buy them back cheaper and return them, having made a profit in the process.
Shorts require two sides. There is the person taking out the short or the loan, and then there is the person providing the loan. When it comes to cryptocurrencies, there is an important bonus element of tension between the person taking out the loan and the person providing the loan: the person taking out the loan may be using that money to attack the cryptocurrency and crash the price. An attack may be a double spend, or an attack may simply be a denial of service, where the attacker mines empty blocks forever. Or, depending on the cryptocurrency, there may be other advanced attacks that are being planned.
I bring this up for two reasons. The first is to warn exchanges and market participants against enabling short markets. If you are offering cryptocurrency loans, you are potentially funding attackers who will devalue the very asset you hope to get back in the future. Offering shorts for cryptocurrencies is substantially more risky than offering shorts for traditional markets.
The other reason is that a large short market increases risk for other parties depending on the security of that cryptocurrency. If a large short market exists for a cryptocurrency, then a potential attacker has a big source of capital that they can use to fund an attack, and if the attack is successful they will not need to return much of that capital. Therefore exchanges and other users should be particularly wary / avoiding of cryptocurrencies that have large short markets.
Limitations of Increasing the Confirmation Time
A common response to network turmoil is to increase the confirmation time for deposits. And in a lot of cases, this is good advice: increasing the confirmation time is sometimes very helpful in avoiding certain types of risks. However, sometimes increasing the confirmation time is not useful at all, and offers no additional practical protection.
One of the biggest areas that increased confirmation times help is with turmoil in the peer to peer network. If for some reason blocks are propagating slowly, or if the network splits in half, or if some peers are trying to withhold blocks or commit routing layer attacks, then increasing the number of confirmations can be very beneficial. Changing from 60 minute confirmation times to 24 hour confirmation times means that the longest chain has more time to propagate, the network split has more time to heal, or the routing layer attack has more time to be addressed.
Another place that increased confirmation times can help is during times of selfish mining, or during times of a rogue <50% hashrate miner. When there is heavy selfish mining, or if for some reason a large miner is mining weird or incorrect blocks, the chance of large reorgs goes up substantially. Instead of typically seeing 2–3 block reorgs, you might start seeing reorgs that are as many as a dozen blocks deep. However, because there is no 51% attack, it’s highly unlikely that you will see reorgs that go beyond a few dozen blocks. The network will generally still move in one direction.
For actual 51% attacks, increasing the confirmation time has a much lesser impact. Raising the confirmation time from 60 minutes to 6 hours will increase the amount of hashrate that an attacker needs to rent, or will increase the amount of time that a mining farm needs to spend on an attack, however this is really only going to be an effective tactic for cryptocurrencies right on the threshold of being attackable.
Something important to keep in mind is that when a cryptocurrency gets hit with a 51% attack, the attacker gets the full block rewards for all of the blocks that they mine. If the price only falls a bit following the attack, the attack will actually fund itself. This is one of the key reasons that increasing confirmation times does not help for small GPU mined cryptocurrencies. An attacker may be able to mine a whole week’s worth of blocks with just a few hours of hashrate rented from a marketplace, especially if that cryptocurrency is very small or has a low block reward.
Limitations of Address Blacklisting
One thing that has thwarted attackers previously is emergency blacklists applied to exchanges. When an attacker performs a double spend, they have to extract the money somehow. This usually involves transferring the money to another exchange and then trading further. Exchanges have been able to stop thefts and double spends in the past by blacklisting any addresses involved in a double spend attempt — one exchange will tell the others which addresses are problematic, and then the exchanges work together to ensure the money is returned.
Although this is sometimes effective, attackers will be increasingly able to get around this security measure. Whether it is by using privacy coins, or whether it’s by delaying the actual double spend until the stolen cryptocurrency has been moved to a wider set of wallets, or whether it’s by using decentralized exchanges instead of centralized exchanges to extract value, address blacklisting will get increasingly ineffective as attackers get more sophisticated.
This doesn’t mean that exchanges should stop using address blacklisting. It’s a good technique that has recovered lots of stolen funds. But exchanges shouldn’t be depending on address blacklisting to save their funds in the event of an attack, because many times address blacklisting will fail to recover funds.
Recommendations to Limit Risk
Though the situation is grim, especially for exchanges, there are a few things we can do to at least temporarily mitigate risk for some of the larger shared hardware cryptocurrencies. Ultimately, these mitigations can all be circumvented by a sufficiently sophisticated attacker, and fundamental developments in the space such as decentralized exchanges and decentralized hashrate marketplaces are also going to eventually nullify these mitigations. The only established long term solution is to require all cryptocurrencies to switch to exclusive hardware — every cryptocurrency on an ASIC friendly algorithm, and every cryptocurrency on a different ASIC friendly algorithm. But perhaps we can buy a little bit of time with reduced risk while everyone is given a chance to migrate.
Tracking Global Hardware Availability
One of the things that can help exchanges to manage risk is to keep an eye on the global hardware availability for each cryptocurrency. The percentage of useful hardware that is mining on a particular cryptocurrency is a good indicator of how much security that cryptocurrency has.
For exclusive hardware cryptocurrencies, the only thing that you really need to watch out for is low block rewards and hardware bear markets. If, for example, the majority of the hardware that once targeted a cryptocurrency is now no longer mining due to low profitability, then the cost of an attack is likely very low because hardware can likely be purchased by an attacker at a very low price. For all other situations, exclusive hardware cryptocurrencies are likely secure against hashrate attacks.
For shared algorithm cryptocurrencies that have ASICs or other highly specialized hardware, the key thing to look at is how much hashrate is mining each cryptocurrency. For cryptocurrencies that have more than 70% of the total hashrate actively mining, I would say there’s not much to worry about. For cryptocurrencies with between 10% and 70% of the total hashrate, I would say that 24 hour confirmation times are prudent. Even at 70% hashrate, there are games that larger mining farms could play to commit attacks and potentially succeed at executing double spends. With 24 hour confirmation times, these attacks become a lot less feasible. The shared algorithm cryptocurrencies with less than 10% of the total hashrate are likely insecure. The decision to halt deposits and withdrawals is of course always dependent on risk tolerance and other factors, however my general recommendation would be to halt deposits and withdrawals on these cryptocurrencies until the hashing algorithm is changed to something more secure.
For GPU mined cryptocurrencies, risk management really requires understanding the current state of the hashrate marketplaces and the state of the large mining farms that are in operation.
Though I have not spent a ton of time or rigor with these values, my estimate is that there is currently a total of between 100 million and 250 million dollars worth of GPUs available on hashrate marketplaces today. This number is critical for determining whether a cryptocurrency is vulnerable to a 51% attack. This alone is not sufficient however, as there have been reports which strongly suggest that certain large mining farms have also been participating in 51% attacks against smaller cryptocurrencies. In particular, at least one of the farms in the 10 million to 100 million dollars of GPUs range has seemed willing to attempt attacks.
Given the above, my recommendation for today would be to require 24 hours of confirmations for all GPU mined cryptocurrencies that have between 50 and 250 million dollars of hardware actively mining on them, and to disable deposits for all cryptocurrencies below this threshold. Below 50 million dollars of hardware, the cost and difficulty of mounting an attack just does not seem to be very high.
As the ecosystem evolves and the state of both large mining farms and hashrate marketplaces changes, the risk analysis for cryptocurrencies of various sizes and algorithm types will be changing. Exchanges who stay on top of these changes will have more accurate risk analyses and will be more able to make the best business decisions.
Relationships With Mining Farms and Hashrate Marketplaces
Some of the total risk might be able to be reduced by having exchanges form relationships with the large mining farms and the prominent hashrate marketplaces.
The hashrate marketplaces have been the source of most of the attacks. Centralized hashrate marketplaces have the ability to put limits on the total amount of hashrate that can be rented at once, and can even do things like Know Your Customer (KYC) for anyone attempting to buy substantial amounts of hashrate, and may reduce the risk of attack for smaller cryptocurrencies. At the very least, a hashrate marketplace may be able to warn exchanges when a bunch of hashrate is suddenly being pointed at a particular cryptocurrency.
A highly sophisticated attacker may be able to leverage Sybil attacks or even account compromises to circumvent these controls. And of course, the more controls that centralized marketplaces put in place, the more users will be driven towards decentralized solutions, where no such controls will be able to exist. So these controls will be at best a temporary solution, however a temporary solution may buy enough time for cryptocurrencies to migrate to better solutions.
Forming relationships with many of the larger mining farms is also likely to be highly beneficial. If nothing else, these relationships are likely to give insights into the current state of mining for various cryptocurrencies, and could give exchanges an idea for which cryptocurrencies might be more or less vulnerable. In terms of risk mitigation, I believe these relationships would have a larger than expected impact for the amount of effort required.
Automatically Halting Trading And Blacklisting Addresses
When a large reorg is detected on a cryptocurrency, trading should automatically be halted on that cryptocurrency, and if a double spend is detected the addresses involved in that double spend should be automatically blacklisted. This should happen across as many exchanges as possible, not just the exchanges impacted by the double spend attacks.
Though halting trading immediately won’t help with the fact that money has been stolen, it does substantially reduce the number of options that an attacker has for handling the stolen money. Also, attackers can often predict price movements following large attacks and make large trades against those price movements. If trading is frozen, that source of profitability is reduced for potential attackers.
Blacklisting addresses has a similar effect: it reduces options for attackers. Shutting down more options for attackers means more opportunities to recover the money, and also means fewer attacks in the first place, even if there are ways to circumvent all of these controls.
We can say from experience that attackers often aren’t that sophisticated, and often do make big mistakes. Even when there’s nothing you can do against a theoretically perfect attacker, real attackers are far from perfect. Actively pursuing attackers and hoping that they make a critical mistake can be incredibly effective.
There is a more advanced, and a more risky, option to handle double spend attacks, which is to launch a counter-attack. When an attacker mines a double spend on a cryptocurrency, the impacted exchange can potentially buy up a bunch of hashrate to extend the original chain, cementing the original transaction from the attacker.
The attacker can of course counter attack as well, responding to the extension of the original chain with an extension of the attack chain. The difficult thing here is that at every point in time, it makes sense for the exchange to spend more money extending the original chain, and it makes sense for the attacker to spend more money extending the attack chain. Even when the attacker and the exchange have both spent far more money than the theft is worth, it still makes sense for them to keep extending their respective chains in an attempt to get the money back.
Imagine that an attacker steals $50,000 from an exchange by spending $10,000 on proof of work. At this point, the attacker is +$40,000, and the exchange is $-50,000. The best move for the exchange here is to spend $10,000 themselves to restore the original chain as the longest chain, which means the attacker is now $-10,000, and the exchange is also -$10,000. If we let this game play out, we get the following:
Stage: Attacker Exchange
Stage 1a: +$40,000 -$50,000
Stage 1b: -$10,000 -$10,000
Stage 2a: +$30,000 -$60,000
Stage 2b: -$20,000 -$20,000
Stage 3a: +$20,000 -$70,000
Stage 3b: -$30,000 -$30,000
Stage 4a: +$10,000 -$80,000
Stage 4b: -$40,000 -$40,000
Stage 5a: +$0 -$90,000
Stage 5b: -$50,000 -$50,000
Stage 6a: -$10,000 -$100,000
Stage 6b: -$60,000 -$60,000
By the time that the attacker no longer stands to profit from the attack as a whole, the exchange has lost the same amount of money defending themselves that they would have lost if they had just let the attacker go in the first place. At no point in time is the exchange ever up, the exchange only stands to lose greater and greater amounts of money in the best case.
And, this game doesn’t really have an ending state. At all points in time, it makes sense for each party to keep trying to get the original $50,000 back, because at each step you are spending a new $10,000 to recover $50,000. That is why this strategy is called ‘scorched earth’ — nobody wins, and lots of money gets destroyed.
The value to this strategy is that the exchange can, at least in theory, prevent the attacker from making money. If an attacker knows ahead of time that an exchange is willing to commit to a scorched earth strategy, then the attack doesn’t make any sense and the exchange is unlikely to be attacked beyond the first few times.
There is another big complication with this strategy. The attacker has a big advantage in terms of preparation. An attacker can spend weeks or months preparing an attack, and an exchange needs to respond to the attack almost immediately. And, if the attacker is willing to engage the exchange like this, it may very well be the case that the attacker has some large advantage. For example, if the attacker is using code that is more heavily optimized, the attacker may only be spending $5,000 each round, while the exchange is spending the full $10,000 each round. The exchange has no way to tell whether or not the attacker has an advantage in this situation either.
There could also be issues with this strategy if multiple exchanges attempt to perform it simultaneously. The exchanges may end up getting to a hashrate war with eachother instead of the attacker, and that could get extremely expensive depending on the budgets for each exchange.
And a final consideration for this strategy is that it could have massive collateral damage on the ecosystem. Many cryptocurrencies aren’t really able to handle a large number of consecutive reorgs. Nodes may crash, other transactions may be lost or double spent in the middle of the war, and generally speaking users will be at much greater risk for the full duration of this scorched earth battle.
For all of the above reasons, I do not recommend that exchanges pursue this strategy to fight double spends.
The final strategy I wanted to bring up was developer arbitration, because it is a strategy that has been successful for cryptocurrencies in the past. When a theft occurs, the developers can always launch a hardfork that returns the stolen coins. This introduces a very high level of centralization around the developers, and also the developers are imperfect human beings who could potentially be tricked into misreading an attack, and instead of returning stolen coins, the developers may end up taking legitimate coins from a user and giving them to an attacker.
Developers could also begin signing blocks. Once a block is signed by the developers, that block is permanent, and the transactions in the block cannot be double spent. This has been done by cryptocurrencies numerous times through history, but itself is very perilous. If the developer key gets stolen, all sorts of problems can happen. And, the fact that developers are effectively deciding which transactions are allowed on the network potentially puts them in the unforgiving sights of financial regulators.
Developers should be genuinely cautious of doing things like this, because if a developer does make the wrong decision when returning funds, signs the wrong block, or allows a known terrorist group to make a transaction, there could be serious legal repercussions. Especially now that there is a lot more regulator attention on this space, I don’t recommend this avenue, even ignoring the usual centralization concerns.
As the cryptocurrency space continues to develop, we are going to continue seeing sophisticated attacks. In the next 6–12 months, most of these attacks are likely to be focused around double spends of cryptocurrencies with poor proof-of-work security, but increasingly the vulnerable decisions of developers are going to be exploited. Secure cryptocurrency design is difficult, and most cryptocurrencies and decentralized applications have not succeeded at ensuring their projects are secure.
That’s being felt to the tune of millions of dollars in thefts today resulting from shared hardware hashrate attacks, but these attacks are only the first wave of high profile attacks that are going to being hitting the cryptocurrency community.
To prevent further losses, steps need to be taken in the short term to protect exchanges from shared hardware hashrate attacks. In some cases jumping to 24 hours of confirmations should be sufficient, and in others deposits should probably just be disabled until the cryptocurrency is able to fork to a more secure paradigm. In the long term, exchanges are going to need to be more conservative with their risk models and more proactive about diligence with the coins that they choose to list.
Special thanks to Ethan Heilman for review and feedback.
Fundamentals of Proof of Work was originally published in Sia Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
Get real time updates about new posts directly on your device.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202635.43/warc/CC-MAIN-20190322054710-20190322080710-00415.warc.gz
|
CC-MAIN-2019-13
| 37,144
| 109
|
http://coldfusion.sys-con.com/node/546211
|
code
|
|By Virtualization News||
|May 29, 2008 02:30 PM EDT||
3PAR is a provider of utility storage, a category of highly-virtualized, tightly-clustered, and dynamically-tiered storage arrays built for utility computing. Organizations use utility computing to build cost-effective virtualized IT infrastructures for flexible workload consolidation. 3PAR Utility Storage gives customers an alternative to traditional arrays by delivering resilient infrastructure with increased agility at a lower total cost to meet their changing business needs. As a pioneer of thin provisioning—a green technology developed to address storage underutilization and inefficiencies—3PAR offers products designed to minimize power consumption and promote environmental responsibility. With 3PAR, customers have reduced the costs of allocated storage capacity, administration, and SAN infrastructure while increasing adaptability and resiliency. 3PAR Utility Storage is built to meet the demands of open systems consolidation, integrated data lifecycle management, and performance-intensive applications.
International "Virtualization Conference & Expo" Call for Papers
Virtualization, the hottest subject in all IT right now, will be center stage in 2008. Key opinion-formers in the field of infrastructure and pioneers of virtualization technologies of all types have already begun submitting speaking proposals to Virtualization Conference & Expo 2008 East, being held in
Submissions on these and dozens of other topics have already begun streaming in. The Call for Papers is as always a 100% online process, found here.
IDC has stated that the virtualization services market alone is going to reach
$11.7 billion by 2011 and in general this technology, which has been around for
a good number of years, seems suddenly to be on everyone's mind.
In short, Virtualization is fast becoming a key requirement for every server in the data center, enabling increased workloads in server consolidation projects, efficient software development and testing, resource management for dynamic data centers, application re-hosting and compatibility, and high-availability partitions. Help with that transformation: submit your speaking proposal today.
Topics will include:
- Server Virtualization
- Desktop Virtualization
- File Virtualization
- The Future of the Virtual Enterprise
- Hosted Virtualization
- Virtualization Hardware Support
- Hardware-level Virtualization
- Storage Virtualization
- Virtualization for Server Consolidation and Containment
- Windows Virtualization
- Utility Computing
- State of the Virtualization Services Market
Technology Providers and Contributors in 2008
The following companies are among the providers and contributors of Virtualization technology: 3Leaf Systems, 3PAR, 3Tera, Acronis, Actional, Active Endpoints, ActiveGrid, activePDF, ActiveServers, ActiveState, Actuate, Agile Software, Agilent, AGiLiENCE, Agilysys, Akamai, Akorri, AlachiSoft, Altova, AMD, AMDAHL, Amentra, Amyuni, anacubis, Apani, APC, Appcelerator, Appistry, AppStream, Ascential, Astaro, Attune Systems, Autodesk, AutoVirt, Availl, Azul Systems, BEA Systems, B-Hive, Black Duck Software, Black Hat, Blackbaud, Blue Lane Technologies, BlueArc, BlueNote, BluePhoenix Solutions, BMC Software, Borland, Bristol Technology, Brix Networks, BroadVision, Brocade, Burton Group, Business Objects, CA, CalAmp, Cassatt, Cast Iron, Catbird, Cayenne Technologies, Ceedo Technologies, Cenzic, Certeon, CiRBA, Cisco Systems, Citrix Systems, ClearApp, ClearCube Technology, Compass America, Composite Software, Compuware, Configuresoft, Continuity Software, Coraid, Courion, Coyote Point Systems, DataDirect, DataSynapse, Dell, Double-Take Software, Ecora Software, EDS, Egenera, Elastra Corporation, Embarcadero, EMC Corporation, Enomaly Open Source, Enterprise Management Associates, Entuity, EqualLogic, ESRI, F5 Networks, Fortisphere, Forum Systems, Fujitsu, GemStone, Getronics, GigaSpaces, Green Hills Software, Grid Dynamics, GridGain Systems, GT Software, Hitachi Data Systems, HP, Hyperic, IBM, ICEsoft, Illumita, ILOG, IMEX Research, Information Builders, InstallFree, Intel, International Computerware, iTKO LISA, JBoss, Juniper, Kidaro, LynuxWorks, ManageIQ, Managed Methods, Marathon Technologies, Mellanox, Microsoft, Mindreef, MokaFive, MKS, Moka5, Motorola, MQSoftware, NASTEL, Ncomputing, NEC, NetApp, Netegrity, Neverfail, Nexaweb, NextAxiom, Nimbus, Novell, OpenSpan, OPNET Technologies, OpTier, Oracle, Panacea Software, Parallels, Pillar Data Systems, PlateSpin, PLX Technologies, Progress Software, Prolifics, Prosync Technology, Provision Networks, QLogic, Quest Software, Racemi, Raxco Software, Red Hat, Reflex Security, Resolutions Enterprises, Riverbed Technology, Rogue Wave, rPath, RSA Security, SanDisk, SAP, Saugatuck Technology, ScaleMP, Secure Command, ShavLik, ServInt Internet Services, Silpion IT Solutions, Skytap, Software AG, Splunk, StackSafe, Stoneware, StoreVault, StrikeIron, STT, WebOS, Sun Microsystems, Surgient, Sybase, Symantec, Tenfold, TheInfoPro, Thinstall, Third Brigade, TIBCO Software, Tideway Systems, TRANGO Virtual Processors, Transitive, Trend Micro, Trigence, Unisys, Verio, VeriSign, Virtual Iron, VirtualLogix, Vizioncore, VKernel, VMLogix, vmSight, VMware, Web Age Solutions, WSO2, XDS, Xiotech, xkoto, and Xsigo Systems.
SYS-CON Events announced today that HPM Networks will exhibit at the 17th International Cloud Expo®, which will take place on November 3–5, 2015, at the Santa Clara Convention Center in Santa Clara, CA. For 20 years, HPM Networks has been integrating technology solutions that solve complex business challenges. HPM Networks has designed solutions for both SMB and enterprise customers throughout the San Francisco Bay Area.
Aug. 3, 2015 06:45 PM EDT Reads: 538
For IoT to grow as quickly as analyst firms’ project, a lot is going to fall on developers to quickly bring applications to market. But the lack of a standard development platform threatens to slow growth and make application development more time consuming and costly, much like we’ve seen in the mobile space. In his session at @ThingsExpo, Mike Weiner, Product Manager of the Omega DevCloud with KORE Telematics Inc., discussed the evolving requirements for developers as IoT matures and conducted a live demonstration of how quickly application development can happen when the need to comply wit...
Aug. 2, 2015 11:15 AM EDT Reads: 370
The Internet of Everything (IoE) brings together people, process, data and things to make networked connections more relevant and valuable than ever before – transforming information into knowledge and knowledge into wisdom. IoE creates new capabilities, richer experiences, and unprecedented opportunities to improve business and government operations, decision making and mission support capabilities.
Aug. 1, 2015 10:00 AM EDT Reads: 345
Explosive growth in connected devices. Enormous amounts of data for collection and analysis. Critical use of data for split-second decision making and actionable information. All three are factors in making the Internet of Things a reality. Yet, any one factor would have an IT organization pondering its infrastructure strategy. How should your organization enhance its IT framework to enable an Internet of Things implementation? In his session at @ThingsExpo, James Kirkland, Red Hat's Chief Architect for the Internet of Things and Intelligent Systems, described how to revolutionize your archit...
Jul. 30, 2015 07:30 PM EDT Reads: 1,431
MuleSoft has announced the findings of its 2015 Connectivity Benchmark Report on the adoption and business impact of APIs. The findings suggest traditional businesses are quickly evolving into "composable enterprises" built out of hundreds of connected software services, applications and devices. Most are embracing the Internet of Things (IoT) and microservices technologies like Docker. A majority are integrating wearables, like smart watches, and more than half plan to generate revenue with APIs within the next year.
Jul. 30, 2015 02:30 PM EDT Reads: 151
Growth hacking is common for startups to make unheard-of progress in building their business. Career Hacks can help Geek Girls and those who support them (yes, that's you too, Dad!) to excel in this typically male-dominated world. Get ready to learn the facts: Is there a bias against women in the tech / developer communities? Why are women 50% of the workforce, but hold only 24% of the STEM or IT positions? Some beginnings of what to do about it! In her Opening Keynote at 16th Cloud Expo, Sandy Carter, IBM General Manager Cloud Ecosystem and Developers, and a Social Business Evangelist, d...
Jul. 30, 2015 12:00 PM EDT Reads: 2,090
In his keynote at 16th Cloud Expo, Rodney Rogers, CEO of Virtustream, discussed the evolution of the company from inception to its recent acquisition by EMC – including personal insights, lessons learned (and some WTF moments) along the way. Learn how Virtustream’s unique approach of combining the economics and elasticity of the consumer cloud model with proper performance, application automation and security into a platform became a breakout success with enterprise customers and a natural fit for the EMC Federation.
Jul. 30, 2015 09:00 AM EDT Reads: 2,182
The Internet of Things is not only adding billions of sensors and billions of terabytes to the Internet. It is also forcing a fundamental change in the way we envision Information Technology. For the first time, more data is being created by devices at the edge of the Internet rather than from centralized systems. What does this mean for today's IT professional? In this Power Panel at @ThingsExpo, moderated by Conference Chair Roger Strukhoff, panelists addressed this very serious issue of profound change in the industry.
Jul. 29, 2015 03:00 PM EDT Reads: 1,304
Discussions about cloud computing are evolving into discussions about enterprise IT in general. As enterprises increasingly migrate toward their own unique clouds, new issues such as the use of containers and microservices emerge to keep things interesting. In this Power Panel at 16th Cloud Expo, moderated by Conference Chair Roger Strukhoff, panelists addressed the state of cloud computing today, and what enterprise IT professionals need to know about how the latest topics and trends affect their organization.
Jul. 29, 2015 02:00 PM EDT Reads: 1,214
It is one thing to build single industrial IoT applications, but what will it take to build the Smart Cities and truly society-changing applications of the future? The technology won’t be the problem, it will be the number of parties that need to work together and be aligned in their motivation to succeed. In his session at @ThingsExpo, Jason Mondanaro, Director, Product Management at Metanga, discussed how you can plan to cooperate, partner, and form lasting all-star teams to change the world and it starts with business models and monetization strategies.
Jul. 28, 2015 04:30 PM EDT Reads: 1,779
Converging digital disruptions is creating a major sea change - Cisco calls this the Internet of Everything (IoE). IoE is the network connection of People, Process, Data and Things, fueled by Cloud, Mobile, Social, Analytics and Security, and it represents a $19Trillion value-at-stake over the next 10 years. In her keynote at @ThingsExpo, Manjula Talreja, VP of Cisco Consulting Services, discussed IoE and the enormous opportunities it provides to public and private firms alike. She will share what businesses must do to thrive in the IoE economy, citing examples from several industry sectors.
Jul. 28, 2015 11:00 AM EDT Reads: 2,054
There will be 150 billion connected devices by 2020. New digital businesses have already disrupted value chains across every industry. APIs are at the center of the digital business. You need to understand what assets you have that can be exposed digitally, what their digital value chain is, and how to create an effective business model around that value chain to compete in this economy. No enterprise can be complacent and not engage in the digital economy. Learn how to be the disruptor and not the disruptee.
Jul. 27, 2015 10:00 AM EDT Reads: 2,049
Akana has released Envision, an enhanced API analytics platform that helps enterprises mine critical insights across their digital eco-systems, understand their customers and partners and offer value-added personalized services. “In today’s digital economy, data-driven insights are proving to be a key differentiator for businesses. Understanding the data that is being tunneled through their APIs and how it can be used to optimize their business and operations is of paramount importance,” said Alistair Farquharson, CTO of Akana.
Jul. 27, 2015 09:00 AM EDT Reads: 337
Business as usual for IT is evolving into a "Make or Buy" decision on a service-by-service conversation with input from the LOBs. How does your organization move forward with cloud? In his general session at 16th Cloud Expo, Paul Maravei, Regional Sales Manager, Hybrid Cloud and Managed Services at Cisco, discusses how Cisco and its partners offer a market-leading portfolio and ecosystem of cloud infrastructure and application services that allow you to uniquely and securely combine cloud business applications and services across multiple cloud delivery models.
Jul. 27, 2015 08:00 AM EDT Reads: 1,914
The enterprise market will drive IoT device adoption over the next five years. In his session at @ThingsExpo, John Greenough, an analyst at BI Intelligence, division of Business Insider, analyzed how companies will adopt IoT products and the associated cost of adopting those products. John Greenough is the lead analyst covering the Internet of Things for BI Intelligence- Business Insider’s paid research service. Numerous IoT companies have cited his analysis of the IoT. Prior to joining BI Intelligence, he worked analyzing bank technology for Corporate Insight and The Clearing House Payment...
Jul. 26, 2015 09:00 PM EDT Reads: 1,593
"Optimal Design is a technology integration and product development firm that specializes in connecting devices to the cloud," stated Joe Wascow, Co-Founder & CMO of Optimal Design, in this SYS-CON.tv interview at @ThingsExpo, held June 9-11, 2015, at the Javits Center in New York City.
Jul. 25, 2015 02:00 PM EDT Reads: 406
SYS-CON Events announced today that CommVault has been named “Bronze Sponsor” of SYS-CON's 17th International Cloud Expo®, which will take place on November 3–5, 2015, at the Santa Clara Convention Center in Santa Clara, CA. A singular vision – a belief in a better way to address current and future data management needs – guides CommVault in the development of Singular Information Management® solutions for high-performance data protection, universal availability and simplified management of data on complex storage networks. CommVault's exclusive single-platform architecture gives companies unp...
Jul. 25, 2015 01:00 PM EDT Reads: 1,979
Electric Cloud and Arynga have announced a product integration partnership that will bring Continuous Delivery solutions to the automotive Internet-of-Things (IoT) market. The joint solution will help automotive manufacturers, OEMs and system integrators adopt DevOps automation and Continuous Delivery practices that reduce software build and release cycle times within the complex and specific parameters of embedded and IoT software systems.
Jul. 25, 2015 12:15 PM EDT Reads: 490
"ciqada is a combined platform of hardware modules and server products that lets people take their existing devices or new devices and lets them be accessible over the Internet for their users," noted Geoff Engelstein of ciqada, a division of Mars International, in this SYS-CON.tv interview at @ThingsExpo, held June 9-11, 2015, at the Javits Center in New York City.
Jul. 25, 2015 12:00 PM EDT Reads: 1,562
Internet of Things is moving from being a hype to a reality. Experts estimate that internet connected cars will grow to 152 million, while over 100 million internet connected wireless light bulbs and lamps will be operational by 2020. These and many other intriguing statistics highlight the importance of Internet powered devices and how market penetration is going to multiply many times over in the next few years.
Jul. 25, 2015 09:00 AM EDT Reads: 1,509
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990611.52/warc/CC-MAIN-20150728002310-00151-ip-10-236-191-2.ec2.internal.warc.gz
|
CC-MAIN-2015-32
| 16,395
| 65
|
https://www.freelancer.cn/projects/C-Programming/Automatic-buying-script-with-proxy/
|
code
|
Automatic buying script with proxy and robot catcha
这项目被授予 kumarmukesh102 ,费用为$900 USD。为像这样的项目获取免费报价
项目预算$750 - $1500 USD
I need a script that handle robot catcha and do a checkout in 0,5 sec or less. It need to be full automatic. I want to purchase 1000 of items at once, so maybe it need a server for this. The webshop is using a queue system, so we need a proxy to bypass this.
I don't create a milestone or pay anything before i see it works. If it works i will pay max. 750 usd dollars, and I got a lot more work for you, if you can handle it.
- The New York Times
- Wall Street Journal
- Times Online
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00588-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 662
| 8
|
https://www.netiq.com/documentation/securelogin70/user_guide/data/bb3goji.html
|
code
|
An application definition is a set of instructions telling Novell SecureLogin how to handle the login for a certain application. SecureLogin uses application definitions to automatically log you in to Windows, Web, or Java applications. Novell SecureLogin has predefined application definitions for some of the applications. You can use the Application Definition Wizard to create new application definitions.
The wizard captures and stores your login name (username), password, and any other information required for authentication.
You can also write your own application definitions. However, we recommend that you use the Application Definition Wizard to create your application definition.
SecureLogin stores all application definitions in a secure encrypted cache on your computer and in the corporate directory.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00244.warc.gz
|
CC-MAIN-2022-40
| 818
| 4
|
https://customerscanvas.com/help/admin-guide/workflows/intro.html
|
code
|
Workflows in Customer's Canvas allow you to arrange the process of personalization of print products. This process usually includes selecting a product, choosing a product variant, editing a product design, approving the result, and finally downloading print files.
Let's see how different workflows look like.
In this workflow, customers personalize a sticker for a box. First, they select a sticker form. Then, they add an image from a gallery. Finally, they see the result on a mockup.
Another workflow allows customers to create a business card. At the first step, they select a size and orientation. Next, they edit the product design on both sides of the card. At the final step, customers can see and approve the result.
Both these workflows include a design editor, selecting some options, and the result approval. The entire sequence factions is divided into steps. Every step is displayed on a separate screen, which includes a number of containers - panels. You can embed some widgets tools in the panels: editor, galleries, options, and so on. For more details about the workflow appearance, read the What is workflow? article.
In Customer's Canvas, workflows are described in JSON files. In these workflow files, you can find the configuration of personalization steps and widget. You can create workflow files from scratch or based on existing files with a special workflow editor. To read more about workflow files, read the Creating and editing workflows article.
A workflow file has a special structure that describes the content and behavior of elements. Such a file is divided into following parts:
- Attributes describing a product.
- Variables adding shortcuts to reduce the code.
- Widgets defining the controls to perform the workflow actions.
- Steps configuring the workflow actions.
- Additional properties.
Learn more details in the Structure article.
You can consider widgets as constructor details for building the user interface: buttons, text, editors, galleries, and more. At the same time, there are non-visual widgets for auxiliary operations. You can manage widget properties and change widget styles, connect widgets with each other, and make other actions. To know more about widgets, read the Widgets articles.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00200.warc.gz
|
CC-MAIN-2023-14
| 2,248
| 14
|
https://uiowa.joinhandshake.com/jobs/3017340/share_preview
|
code
|
Research Associate, Mass Spectrometry
Who we are:
Calico is a research and development company whose mission is to harness advanced technologies to increase our understanding of the biology that controls lifespan. We will use that knowledge to devise interventions that enable people to lead longer and healthier lives. Executing on this mission will require an unprecedented level of interdisciplinary effort and a long-term focus for which funding is already in place.
Calico is seeking to fill a Research Associate position in our Metabolomics Core Lab. The successful candidate will work as part of a multi-disciplinary team, assisting Calico’s scientists with execution of their basic aging research experiments by providing mass spectrometry expertise and service. The well-qualified candidate has a solid grasp of liquid chromatography-mass spectrometry and its application to biological questions.
- BS degree in Molecular Biology, Biochemistry, Chemistry or related discipline (PhD not expected) with a minimum of 2 years relevant experience
- Preparation of biological samples for LC-MS analysis
- Operation and maintenance of LC-MS instrumentation
- Excellent communication skills with an ability to work as part of a high-performing team
- Able to respond quickly to shifting priorities
- Efficient and careful in laboratory work
- Detail-oriented and organized
Nice to have:
- Knowledge of endogenous metabolism
- Knowledge of basic statistics, and some familiarity with R or python
- Previous experience analyzing endogenous metabolites or lipids
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141194982.45/warc/CC-MAIN-20201128011115-20201128041115-00532.warc.gz
|
CC-MAIN-2020-50
| 1,562
| 15
|
https://answers.sap.com/questions/12311435/workflow-triggering-on-fd32-when-customer-credit-l.html
|
code
|
I have a requirement to trigger workflow for approvals when credit limit(KNKK- KLIMK) changes in FD32 transaction code. I have created custom business object ZBUS1010 by copying BUS1010 and delegated. defined custom event for credit changes and did the configuration in SWUE and SWETYPV.
But somehow event is not getting triggered fro credit limit changes, tried all options and trying for some user exit/BADI but no luck. Please suggest any ideas.TIA.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057496.18/warc/CC-MAIN-20210924020020-20210924050020-00136.warc.gz
|
CC-MAIN-2021-39
| 452
| 2
|
https://movd.co/jason-starbuck-with-dylan-gigliotti
|
code
|
From Working 15 Jobs to Building The
Next-Level Messaging Game
My next guest is Dylan Gigliotti, the Founder of Sales and System. Discover how he started as an entrepreneur from literally flunking in school to working 15 jobs to getting his big transformation from his first Millionaire Mentor and then building his own next-level DMing process where he breaks down how he closes clients through Messenger. So if you are ready, click the link and GET ON IT!
Stay in touch:
💬 Facebook→ https://www.facebook.com/jason.starbuck.14
🖼️ Instagram→ https://www.instagram.com/starbuckonthemove/
🎙️ MOVD Entrepreneur Evolved Group → https://www.facebook.com/groups/movdentrepreneur?_rdc=1&_rdr
📈 MOVD Digital Growth Method→ https://digitalgrowthmethod.com/
📧 For Business Inquiries, e-mail me at → email@example.com
LET’S GET STARTED
MOVD DIGITAL GROWTH METHOD
and become powerful!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00261.warc.gz
|
CC-MAIN-2021-49
| 903
| 12
|
https://www.digitalnest.in/blog/data-science-scope-industry/
|
code
|
Over the years, the term “Data Science” has been overheard extensively in the IT industry and Data Scientists are evolving each year by 50%. Despite being a booming sector, there is still the adequate resource and the actual knowledge of data science in India isn’t enough for securing and segregating the raw data which is streaming between devices globally. In this article, you will know what actually data science, tools is and techniques are used to segregate the raw data and used!
What is Data Science?
So in the layman terms, it’s nothing but using the blend of tools for finding the raw data which is actually called Mining, making changes to that through the algorithms, and accessing the refined data in the organization for going business which is called Machine Learning. This, in turn, reduces costs, increase effectiveness, identify current market possibilities and further develop the organization’s competitive advantage.
It might sound similar to Big Data, but it’s actually different.
What is the difference between Big Data and Data Science?
Big Data is something that is used to interpret the data, be it structured, unstructured or semi-structured and dealing with the insights induced from the data which can lead to a better and strategic business moves. Big data challenges include retrieving data, storing, analysis, search, sharing, transfer, visualization, querying, updating and data privacy. Technologies like Hadoop and Spark are used to analyze and segregate the data in a better way.
Data Science is basically the blend of programming, algorithms, tools and inquisitive to discover patterns, cleanse the data, preparing and aligning it. This statistical Programming to predict data is done with the help of R for Data Science or Python Programming.
Where this Big Data and Data Science are used?
Big Data is used in the industries ranging from Retail, Communication and Financial Services whereas Data Science is used in Internet Searches, Search Recommendations and Digital Advertisements.
Who are Data Scientists and what do they do?
With the increasing amounts of data in each organization and finding data is a quite biggest challenge in the modern business, companies hire people who are generally referred as “Data Scientists”. They help in turning the raw data into volume business information which can be used further for the growth of the business. They process a combination of analytic, machine learning, data mining and statistical skill, as well as coding along with managing and interpreting large amounts of data, many data scientists are also tasked with creating data visualization models that help illustrate the business value of digital information
What are the benefits of Data Science to business?
The actual benefits that Data Science can do for a business are marvelous. It depends on the company’s goal and the strategies. It affects the benefits and works in Sales and Marketing departments.
- Through this one can mine the data and enhance the illegal activities
- Can know the customer’s pace and create the personalized recommendations
- Best mode and time of delivery
- Empowering management to take better decisions
- Identifying the current market opportunities and testing them
- Identifying and refining the Target Audience
What is the educational qualification or languages required to become a Data Scientist?
Data Scientists is someone who has the collective knowledge of ethical hacking, mathematics and Business/Strategy Acumen. 80% of the Data Scientists have a Master’s Degree, and 46% have PhDs.
Some of the skills that a Data Scientist has
- Expertise at SAS and/or R For Data Science
- Python coding is used in data science along with Java.
- SQL database/coding
- Working on unstructured data: The most prominent thing for the Data Scientist is that they should able to work with unstructured data in all the forms.
Data Science is beneficial for the organizations as the data is emerging so quickly in the modern business and they are looking for the data Scientists. The average annual salary for a Data Scientists is around $123,000. With the skills of learning R, Python or Data Science and Machine Learning Training an average data Analyst can earn the average amount of this.
IT professionals, Business managers, who want to choose or opt a career in Data Science or to make the business go with ease, can learn R programming, Data Science and Machine Learning Training and Python for Data Science and at Digital Nest, the leading company offering the flagship courses along with the placement assistance. Its value-based learning type and training by experts in the industry, hands-on project experience, and a course completion certificate will give you need to become an expert in the chosen and grab a top-notch job.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.37/warc/CC-MAIN-20220809003642-20220809033642-00212.warc.gz
|
CC-MAIN-2022-33
| 4,827
| 28
|
https://www.aadhu.com/fade-through-black-transition/
|
code
|
Have you ever seen the black screen that fades in and fades out when playing Minecraft maps (especially from Marketplace)? Now you can make one for your map easily. This addon is pretty simple.
Fade Through Black Transition is a simple addon that gives you the ability to add better transition between two conditions/places, by enabling the ability to fade-in and out of black. This addon will be useful for your map.
How to use this addon:
- Using this addon is pretty easy. Just execute “/function ftblack”.
- To use the recommended timing of fade-in, stay, and fade-out, execute “/function ftdefaultttime”
- To change the timing of fade-in, stay, and fade-out, execute “/title [PlayerTarget] times [fade-in] [stay] [fade-out]” (replace the square brackets and the text in it with number; 1 second = 20; replace the “[PlayerTarget]” with player(s)).
- Also comes in white! Execute “/function ftwhite”.
Please note that:
- This does not work with hidden GUI. This is on Minecraft’s end, as the function uses /title command with hidden Minecraft emojis/icons (which I retextured to black, and white, and resized to cover the entire screen). Make sure to un-hide it.
- You might encounter lag/freeze for first-time usage (counts from world creation/rejoin). Make sure to execute it first to get rid of the lag later.
- This goes without saying, but just in case, this addon is only for the fade-in, stay, and the fade-out black/white. If you want to trigger a condition (e.g changing places), you have to do it yourself.
- If you want more color, please tell me.
- It is recommended to credit me if you are using the addon.
- Please download the .zip file instead, if you are having problem with the .mcaddon.
- No Linkvertise or anything. I do not monetize this addon.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662604495.84/warc/CC-MAIN-20220526065603-20220526095603-00133.warc.gz
|
CC-MAIN-2022-21
| 1,790
| 15
|
https://forums.asp.net/post/2599312.aspx
|
code
|
Sep 04, 2008 01:47 AM|Benson Yu - MSFT|LINK
The “'System.Web.Extensions” is the Strong Named assembly that should be installed in GAC, the file not found issue should not caused by the relative path problem of the sub directory. In my opinion, the problem is the AJAX 1.0 extension is not installed
on the server.
To solve this issue, we can remove the “1.0.61025.0” version “System.Web.Extensions” assembly from the “assemblies” section. As you said, after doing that, the LINQ assembly not found issue appears. For this issue, I would recommend checking if .NET Framework
3.5 installed on the Server.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423512.93/warc/CC-MAIN-20170720222017-20170721002017-00056.warc.gz
|
CC-MAIN-2017-30
| 618
| 5
|
https://docs.helpmasterpro.com/docs/system-administration/control-sets/
|
code
|
Control Sets allow you to create custom forms for data capture
Control Sets are user definable forms that can be created for data entry, capturing data, displaying information and processing logic.
Creating and administration of Control Sets
Create custom database stored procedures to populate Control Set / Entity Item client/site pickers
Was this page helpful?
Glad to hear it! Please tell us how we can improve.
Sorry to hear that. Please tell us how we can improve.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510575.93/warc/CC-MAIN-20230930014147-20230930044147-00552.warc.gz
|
CC-MAIN-2023-40
| 470
| 7
|
https://forums.galciv3.com/491825/lets-talk-about-weapons
|
code
|
What you are asking is a little complicated if you want it just to work on your ships/your weapons or specific weapons. As the globals are just that rules for all weapons. So if you change the rof (cool down) there then all of that weapon type will have that rof for you and for the ai.
The damage is controlled by the components so modding that to taste works fine. Though what you are proposing is totally overpowered and will break the base game balance and there isn't really a good way to balance it out because defenses are very week to attacks per second compared to pure damage. 1 dmg every .5 seconds will rip through shields pd and armor faster than someone doing 100 damage every 10 seconds. Even though the dps of the other one is higher since defenses act really funky in this game.
I have never tried to make the game animate that fast i typically mod out animations to make battles cleaner. (having weapons with no animation combined with ones that do. So you can load up the ship to the right power level but you do not shoot 100s of missiles kinetics.
Also another think to think about is how galciv animates weapons. Now this is not 100% but the ships tend to animate weapon effects from weapons placed on the ship and from ship parts that are classified as a weapon. So if your design has "fake" weapons on it from the design process there is a chance those will animate the weapon firing effect. Also note if you say put 20 kinetic weapons on a ship the game should (but not always) animate 1 shot for every single on of those placed. So with what you are proposing what will work for 1 equipped weapon will be a mess if you put a lot of weapons on a ship. That said animations are not actually what is happening, if you have 100 kinetic attack across 10 weapons it may show 10 shots but the game just cares about weapon attach as 1 # in hits/misses/defenses. So you are not hitting the enemy ship 10 times for 10 just once for 100 potential dmg.
Below is just what im riffing off the top of my head so i have no idea if there is a better way, there probably is.
Now the best way to get something close just for the player would be to(this is off the top of my head and may not get the desired effect it will take trial and error or there may be a better way most modding has no guides so you just figure it out yourself)
Make a custom weapon component that has the animation fx you like(there is a good white kinetic fx that i would use), figure out through trial and error the correct cooldown to get it to fire as fast as you want ,make it have a stat to reduce the cooldown that amount of that weapon type assuming the game can even achieve that quickness. (Also note that cooldown reduction stats work ship wide so lets say you are using op kinetic weapon we are making but you want to add normal kinetic weapons to the ship, their cooldowns will also be reduced so its all or nothing) Also make this one per ship allowed.
Then you would need to balance all weapon damage around that dps so the dps is the same even though as stated above anything that shoots that fast even at low dmg will be very op(this is assuming the game will allow you to have such small #s for dmg they like to limit stuff around 3 decimal places). Then you need to make copies of all the other weapons on that weapon type and give those weapon type animations that do not show anything can be achieved by creating a custom weapon fx that basically is not there, either by being too small or invisible. Finally you need to create a custom ability that only your faction has and assign a requirement that all the new stuff you made/modded require that ability. That way the ai who do not have this ability will not have access to all the op stuff you made. Because if the AI had access to this they will all stack their ships and just rip you apart.
The above should work as i do similar the opposite to make custom weapons that just fire once. This idea sounds interesting im going to fart around with it when i get home and see if i can make something work.
If the above is too much you can always just reduce the cooldown of a weapon to a small # and change the damage of the item and it should work. But if you place 5-10 of a super fast weapon it will be a mess and im not sure the game could handle like 20-30 of them shooting at fractions of seconds.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00094.warc.gz
|
CC-MAIN-2021-17
| 4,356
| 10
|
https://takhisis.livejournal.com/860435.html
|
code
|
You may be afflicted with a demon known as MIITAKK
Miitakk is the demon of complacence and slothfulness – many initially afflicted with this demon stop making an effort in any aspect of their lives. Without exorcism or care of any kind those possessed by Miitakk will suffer from bedsores, atrophy of the limbs and other ailments of the immobile. Signs: often those possessed by Miitakk take on a nearly catatonic state, and it is difficult to get them to respond. However, if the afflicted is prodded too much, they can suddenly become violent. Touching cool water causes those possessed by this demon to feel a burning sensation.
So... if you pester a severely depressed person and then throw cold water on them, and they get pissed off at you, obviously they're possessed by a DEEEEMON! It's the only logical explanation.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141686635.62/warc/CC-MAIN-20201202021743-20201202051743-00004.warc.gz
|
CC-MAIN-2020-50
| 826
| 3
|
https://club.myce.com/t/kprobe-test-range-error/125847
|
code
|
I’m just wondering if anyone knows what this error means? When I tried to start a scan, it took forever to try to detect the disk size and then I got the test range error. Before you ask what media, the answer is total crap media. I’m trying to decide weather I want to use this stuff as short term use non critical stuff that I would use for a little while and throw away, or just throw away all the blanks now. I’ve got about 100 of them and they are gq with a fake media code of “sony”. I’m not going to use them for anything important but I have to just toss 100 blank disks. I guess I need to try a diffrent burner to bur it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743216.58/warc/CC-MAIN-20181116214920-20181117000920-00370.warc.gz
|
CC-MAIN-2018-47
| 642
| 1
|
https://www.physicsforums.com/threads/a-conceptual-question-regarding-collision-conservation-of-energy.550235/
|
code
|
Say there's the following situation: A bullet with some velocity strikes a block connected to a spring. The bullet passes right through the block and the spring is compressed by x cm to the right of the block after the impact. Some internal energy is lost due to deformation of the block while bullet passes through it. Because the question states "compressed AFTER the impact", i was able to assume that the only energy being converted into spring energy is the kinetic energy of the block (When it begins to compress when the bullet has already exited the block), so I solved the problem. HOWEVER, WHAT IF the question had stated "the block compresses the spring DURING the impact", in other words, it compresses WHILE the bullet is passing through the block? Then would the spring energy come from BOTH the kinetic energy of the block and some of the kinetic energy of the bullet while it is moving through the block?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744513.64/warc/CC-MAIN-20181118155835-20181118181835-00254.warc.gz
|
CC-MAIN-2018-47
| 920
| 1
|
http://jokes4u.mycybernet.ca/bell3.htm
|
code
|
A priest puts an add in the newspaper for a new bell-ringer and the only applicant to reply is a fellow with no arms.
"You realize what this job requires," asks the priest.
"Sure do," replies the no-armed man, and I can assure that I am the best man for the job."
The priest is perplexed, "How do you plan to ring the bell with no arms?"
The no-armed man, of course, cannot pull the bellrope and instead he rushes to the top of the bell tower and proceeds to dive head-first into the side of the bell. The bell peals beautifully.
The alarmed priest rushes to him, "My God, man, if you can do that every hour your hired!"
And so every day, on every hour, the no-armed man dives at the bell and smashes it head-first. Until one day he misses and flys out of the belltower, falling 300 feet to his death.
The day after the tragic accident, the priest put another ad in the paper requesting applicants for the job. Shortly thereafter, a man came to his door to ask about the ad.
"Father, I've come to ask a favour. It was my brother who was recently your bellringer and met with his untimely death. I would like very much to be allowed to ring the bell in his honour today."
The priest, being very sentimental, of course agreed, and led the man to the belltower. Wanting to ring the bell just as his brother had done, the man took a running start and collided with the bell head-first. Unfortunately for him, he had not the same constitution as his sibling and was knocked quite senseless by the blow; in a dazed state he stumbled around the belltower and accidentally fell out the tower window to his death.
The crowd once again gathered around the fallen bellringer, a concerned onlooker once again wondering aloud who this fellow might be. The priest replied,
"I don't know his name, but he sure is a dead ringer for his brother!"
Click -- Finlay's Funnies -- to return to main index page.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202324.5/warc/CC-MAIN-20190320085116-20190320111116-00022.warc.gz
|
CC-MAIN-2019-13
| 1,888
| 13
|
https://chipgw.github.io/
|
code
|
Programmer. Tech artist. Writer. Some other things too.
Have a look at some things I've done in the past. It's mostly school projects and prototypes, ordered by approximate start date.
Made a game with 3 other people for an Unreal Engine game jam
I was the programmer in a team of 5 making a small adventure dungeon
I and two classmates worked together to make 5 small games over the course of a 10 week quarter
A diorama of a weapon room containing sword, shield, and ax props made earlier in the class. The sword is also on Artstation
Environment made for a class
A simple multiplayer strategy game prototype
A prototype for a puzzle game where tilting the world shifts blocks
A small collection of board games, with the option of playing local multiplayer or against AI
A prototype for a puzzle game involving lights and mirrors
Made for game jam with the theme "two birds with one stone" in November of 2014
A simple gravitational body simulator
A viewer for 3D images and videos, for VR and other display modes
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00000.warc.gz
|
CC-MAIN-2022-21
| 1,015
| 14
|
https://support.kaspersky.com/KSC/14/en-US/155093.htm
|
code
|
The settings of an update installation task may require approval of updates that are to be installed. You can approve updates that must be installed and decline updates that must not be installed.
For example, you may want to first check the installation of updates in a test environment and make sure that they do not interfere with the operation of devices, and only then allow the installation of these updates on client devices.
The usage of the Approved status to manage third-party update installation is efficient for a small amount of updates. To install multiple third-party updates, use the rules that you can configure in the Install required updates and fix vulnerabilities task. We recommend that you set the Approved status for only those specific updates that do not meet the criteria specified in the rules. When you manually approve a large amount of updates, performance of Administration Server decreases and may lead to Administration Server overload.
To approve or decline one or several updates:
The information box for the selected objects appears on the right side of the workspace.
The default value is Undefined.
The updates for which you set the Approved status are placed in a queue for installation.
The updates for which you set the Declined status are uninstalled (if possible) from all devices on which they were previously installed. Also, they will not be installed on other devices in future.
Some updates for Kaspersky applications cannot be uninstalled. If you set the Declined status for them, Kaspersky Security Center will not uninstall these updates from the devices on which they were previously installed. However, these updates will never be installed on other devices in future. If an update for Kaspersky applications cannot be uninstalled, this property is displayed in the update properties window: in the Sections pane select General, and in the workspace the property will appear under Installation requirements. If you set the Declined status for third-party software updates, these updates will not be installed on devices for which they were planned but have not yet been installed. Updates will still remain on devices on which they were already installed. If you have to delete them, you can manually delete them locally.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816863.40/warc/CC-MAIN-20240414002233-20240414032233-00503.warc.gz
|
CC-MAIN-2024-18
| 2,276
| 9
|
https://www.techadvisor.co.uk/download/security/bitdefender-antivirus-free-33232-android-3331401/
|
code
|
Bitdefender Antivirus Free is a simple app for detecting and removing threats to your Android device.
The app uses cloud-based scanning to protect you from the very latest malware. This means no bandwidth wasted by downloading signatures, no storage space lost, super-fast scans and minimal impact on battery life or system performance.
Bitdefender Antivirus Free doesn't need any complicated configuration. Mostly it just works, automatically scanning new apps as they're installed and removing any dangers. (You can run scans on demand, too.)
The app uses the same powerful engine and cloud scanning technologies as the commercial Bitdefender Mobile Security, so you can be sure of accurate results.
Upgrading to Bitdefender Mobile Security does get you some handy extra features, though, including real-time scanning of web pages and powerful antitheft tools.
• New design
Bitdefender Antivirus Free is short on features, but the engine is one of the best and overall it'll do a good job of keeping you safe.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00250.warc.gz
|
CC-MAIN-2020-16
| 1,013
| 7
|
http://ortsbezogenekunst.at/notes/view/top-ten-the-library-of-site-specifity/en?mobile=true
|
code
|
The aim of this class is twofold as the library of site-specificity does not only contain books about the topic, but at the same time is a reflection of its own concept. Designed for a special purpose and a specific location, will it be destroyed when removed (paraphrasing Richard Serra’s statement about the removal of his site-specific sculpture "Tilted Arc" in 1981)? Isn’t a library, "this universe" (Jorge Luis Borges), always site-specific?
How can an actual library and its books become tools for investigation, for a research practice that deals with keywords, correct citation indices AND the materiality of objects that, centuries ago, were named books?
With precision we will stroll through books and texts, filter, cut, copy, paste, collage, add. A library is a performance, universalism is a universe’s pitfall.
The first humble shelf will contain books and texts by Marc Augé, Michel Foucault, Robert Irwin, Rosalind Krauss, Miwon Kwon, Lucy Lippard, Brian O’Doherty, Pierre Nora, Barbara Rose, Laura Wollen (Womanhouse), and Jeffrey Kastner and Brian Wallis
See more information at Base Angewandte
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267161501.96/warc/CC-MAIN-20180925103454-20180925123854-00219.warc.gz
|
CC-MAIN-2018-39
| 1,122
| 5
|
https://addons.mozilla.org/nl/firefox/addon/element-locator-for-webdriv/
|
code
|
Download Mozilla Firefox, een snelle, gratis manier om op het Web te surfen, om de duizenden hier beschikbare add-ons te proberen!Sluiten
Welkom bij Firefox Add-ons.
Maak een keuze uit duizenden extra functies en stijlen om Firefox van uzelf te maken.Sluiten
Over deze add-on
PLEASE NOTE: To disable any locator types you don't need (E.g. Support, Ruby), just go to Tools>Add-ons (Ctrl+Shift+A), and uncheck them in the addon options.
This extension will attempt to populate the context menu with usable webdriver XPATH-based findElement commands for the dotnet, python and ruby bindings and Support Locator Library references for the focused web element.
To help prevent buggy tests (and save your time debugging), it will also check the locators for uniqueness, signified by red crosses and green ticks.
In addition, if elements have long, fragile, auto-generated attributes such as id="ctl00_ElementContainer_Inputs_txtForename" it will attempt to locate based on the final (and most significant) part of the value only.
If it struggles to locate via attributes it will also attempt to locate via text value.
The next things I'd like to think about are FindElements (with a displayed count), and maybe something to account for iframes. At the moment the addon will suggest a 'green' locator that IS unique in its DOM context, but is dependent on its iframe - causing the locator to fail at runtime if the frame isn't manually SwitchTo'd before location.
As always, any other suggestions are greatly appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689874.50/warc/CC-MAIN-20170924044206-20170924064206-00030.warc.gz
|
CC-MAIN-2017-39
| 1,514
| 11
|
http://www.mp3ster.com/aunty-removing-clothes-mp4-video-download-1.html
|
code
|
Free Aunty Removing Clothes MP4 Video Download
I created this video with the YouTube Video Editor (http://www.youtube.com/editor)
Subscribe us @ http://goo.gl/PGK7gV.
Hot Mallu Aunty Removing Saree.
Tamil drunken aunty stage Hot Record Dance subscribe us...
Hot Aunty Show how to remove bra without Removing dress.
hot mallu wife removing dress.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400373301.1/warc/CC-MAIN-20141119123253-00226-ip-10-235-23-156.ec2.internal.warc.gz
|
CC-MAIN-2014-49
| 345
| 7
|
https://help.sumologic.com/Send-Data/Applications-and-Other-Data-Sources/Nginx
|
code
|
The Sumo Logic App for Nginx provides searches and Dashboards that monitor log events generated by Nginx servers. Events in Dashboards are divided into the following categories:
- Deployment overview. Get an overall look at the activity of the sites running on Nginx servers.
- Visitor locations. Know at a glance where your visitors originate.
- Visitor access types. Gather insights on the devices and operating systems visitors are using to access your sites.
- Visitor traffic information. Find out which external sites are referring your visitors. Additionally, you can quickly view the amount and types of media being served up to visitors.
- Web server ops. Learn more about the errors generated from your Nginx servers.
The Sumo Logic App for Nginx assumes the NCSA extended/combined log file format for Access logs and the default Nginx error log file format for error logs.
All Dashboards (except the Web Server Operations dashboard) assume the Access log format. The Web Server Operations Dashboard assumes both Access and Error log formats, so as to correlate information between the two.
For more details on Nginx logs, see http://nginx.org/en/docs/http/ngx_http_log_module.html.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864872.17/warc/CC-MAIN-20180522190147-20180522210147-00382.warc.gz
|
CC-MAIN-2018-22
| 1,192
| 9
|
https://guides.enginethemes.com/knowledge-base/mje-how-to-remain-paypal-gateway-when-user-buy-pachages-to-post-on-the-website-but-remain-only-cash-when-an-user-buy-a-service-from-other-user/
|
code
|
Regarding your concern, this requirement is doable:
Step 1: Disable the feature for customers to checkout directly with PayPal. (when user buy a service from another user)
You can deactivate the MjE PayPal Express Checkout plugin in Plugins > Installed Plugins
Or go to Engine Setting > Payment Gateways > PayPal Express Checkout tab to disable this option
Step 2: Set Paypal payment gateway for your users to choose for package purchasing when posting new mJobs.
Please go to Engine Setting > Payment Gateways > General tab > 2. Default Payment Gateways > Paypal section and Enable this option.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474700.89/warc/CC-MAIN-20240228080245-20240228110245-00610.warc.gz
|
CC-MAIN-2024-10
| 595
| 6
|
https://forums.mydigitallife.net/threads/server-2003-stan-ent-oem-files-dell.2074/page-4
|
code
|
Discussion in 'Windows Server' started by Suicide Solution, May 6, 2008.
thanks again, I saw the change in the first post after my reply to this thread
-Does anyone still have these files, or a link to a location I can download them from? I need to convert my VL install of Server 2003 ENT 32bit to an OEM install. Thank you in advance for your assistance.
You need to login to view this posts content.
I'm using a Dell Poweredge 840 I need to instal the server 2003 on it.
Need to convert to Dell OEM...
I can use the DELL_Server2k3_Trial2OEM.rar
The hyperlink is no longer work...
Thank you. Paul.
sebus - where is the OEM_Files.rar that you mentioned? And is there difference in Ent vs. Std?
I need only the R2 STD oem files; and I'm not sure if mine is SLP or non-SLP - which seems to be a huge thing that would impact how we would burn the right set of stuff, right?
I did same exact as you: used 64-bit, downloaded the VLK version both CD1 & CD2 from M$.
BUT, I did not realize that also you have to change PIDGEN, DCPDLL & setupreg.hiv - from the top post in the forum, there is only the OEMBIOS files for STD & ENT, and crc.txt & SLP.TXT - I'm very unclear on the exact steps, especially how to modify whatever PID (setupp.ini). I changed the setupp.ini & winnt.sif files, copied those in, but did NOT change any of the pidgen, dcpdll & setupreg.hiv.
So, My Goal: Take Windows Server 2003 R2 Standard 64-BIT VLK, change it to OEM, so I can use my current/valid COAs, without using up my volume licenses. My COAs are stickers on these Dell 860 boxes, known to be 64-bit capable, and they are valid COAs for W2k3R2 STD 32-BIT, but nothing on the boxes anywhere says that they are real - live OEM, so... but I called Dell & Microsoft and they believe they are OEM product keys.
So, assuming they are OEM, It would be nice to have ONE SINGLE DOCUMENT (sorry for 'yelling' - lol) - that outlines the steps, exactly, i.e., Step 1, change pid in setupp.ini to be
a) nnnnOEM for 64-bit R2 2003 STD
b) nnn2OEM for 32-bit R2 2003 STD
c) nnn3OEM for 32-bit R2 2003 ENT, etc.
Step 2, Take the info in the SLP.TXT file and do 'something?' with it - or, is that just for reference?
Step 3, How do I really know if I have an SLP sever/bios or? how the heck do we tell?
Step 4, copy the pidgen, dcpdll and setupreg.hiv files in to your distro
Step 5, copy the SOMETHING-EULA file into the distro? is this needed?
Step 5, copy the OEM bios files in to your distro
Burn, enter valid Lic key,
So, is there somewhere with 'precise steps?' If not, that would be a very nice 'sticky' to have! Because the first post at top is nice, but it does NOT have a lot of the other files mentioned that need to be replaced.
And, per Dell & M$, I don't necessarily have to have any place that says OEM-nnnnn (etc.) for it to be an OEM COA.
And, [ALLEGEDLY] the COAs should work for either 32-bit or 64-bit, BUT another forum said that [some] COAs are tied to specific class of machine - i.e., might be 32-bit-ONLY COAs.
[BTW, the recent burn, even without copying in the pidgen and so forth, gets to the point of asking for license, and appears to look like the OEM version, but does not take any of the OEM keys I have, so... again, I'm sure I flubbed up by not copying/changing things the right way]
All I did - copy OEM* files, winnt.sif & setupp.ini (changing only the setupp.ini file to have the key that 'ss' had at the top of this forum - and it did try to auto-enter that key, and said "no go, dude. key not valid." So I entered my COAs - also, key not valid.
Any help is greatly appreciated! Thanks from the 'newbie.'
Thanks sebus! hoping I can get a 64-bit version of R2 modded & working
Thanks = I did, indeed, read most of the posts, but not necessarily all the links and so forth.
But no, I have official Microsoft VLK media - but maybe something I [didnt' do] caused it to change to MSDN???
I pulled down directly from our Volume Licensing Link on Microsoft's site. Matter of fact, I can use my VLK and activate it, but that costs 'twice,' so as one poster put it, "that's no fun."
One thing that came to mind: I keep seeing various posts that say, "[BIOS-changer?] Only works on 32-bit R2, and only on Enterprise," but I need 64-bit R2, and Standard. (or maybe I'm misunderstanding, but it looked as if previous posts did indicate that one of the BIOS mod utilities works only on the 32 bit ENT version.
At any rate, I will take your sage advice and read prev link and also post # 13 - many thanks!
Also note: my apologies for long post/question - I shall endeavur to read more deeply and to break my queries into snippets that make more sense - you folks do AWESOME work, btw! i know you don't get 'paid' for this, but you should
1) Good point(s). Yes, Post #13 (or... 12, actually) - those are the 'first' ones we tried, but obviously, we missed a step (or two), but I am [re]-trying now, so, the post 12-13, that = Trial to MSDN, then I need to do MSDN to OEM, right?
2) Somewhere it seems there is a step that we need to do where we use a tool to put the "Dell Server; Dell System" piece into the BIOS somehow, I think, after that, then we can use the media, right? (Is that the 'BIOSchanger' part?)
a) convert trial to MSDN via xdelta3
b) run the bioschanger
c) convert MSDN to OEM - copy oem* files, pidgen, winnt.sif, setupp.ini etc. to the distro (change PID to have OEM in it - change prod key)
d) burn the dvd/cd...
i will review the additional thread you provided as well. Thanks again - sorry for being newbie-question-guy.
Okay, the 'changer' says it is only for Enterprise version. All I have is 'Standard;' I will give it a try, BUT both the 'mirror' and 'fileden' are down, so... whenever they (hopefully) come back, then I can try this.
Since I am going from a valid, downloaded VLK, I actually should be able to skip the 'xdelta3' step, I think.
But again, I just never did copy the pidgen, and other files mentioned (and those are not included in the initial forum post, so again, it is somewhat confusing - apparently, there is an "OEMfiles.rar" that has those DLLs and so forth, right?) - I'm going back through all the posts and threads now, so I can "read more deeply", and I never ran the 'bioschanger' either, so I think those are the only steps I missed.
Okay, I re-ran the delta against the trial, no errors, but it doesn't give any display, like in the "Code: " part, where you guys are showing the "before and after" in your examples. But no errors should mean okay, right?
1) Where to get the DCPCLL.DL_, PIDGEN.DLL? Or do I even need to change those? (from that OEMfiles.rar that someone mentioned?)
2) Will your OEMBIOS files work just fine, even on 64-bit - the system doesn't care, right? (seems like you said they should work on either)
3) Do I need an altered setupreg.hiv - or that is one I don't have to change - leave as-is, right; since it already is from a Std version?
I'm changing the delta (which is now, I guess, MSDN) and copying in your OEMBIOS files, and changing setupp.ini
I think I also have to change Winnt.SIF, right? I'll make sure it says the OEM-style, like in your download (and the G78PD key)
UPDATE: Thanks to the patience of sebus, it worked! Well, okay, it did NOT accept any of the COA's I have so far, it said "invalid key," so I am thinking those are tied definitely to 32-bit version, maybe? So, I had to use the "generic SLP key" for W2k3 Server R2 64-bit as found on a public wiki page. Not sure why the generic keys are posted there - will see if any of that hinders activation - but per Dell / Microsoft licensing, I should be able to use either 32-bit or 64-bit. THANK YOU 'sebus.' !
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531429.49/warc/CC-MAIN-20210122210653-20210123000653-00126.warc.gz
|
CC-MAIN-2021-04
| 7,643
| 54
|
https://whatis.techtarget.com/definition/XSL-Transformations-XSLT
|
code
|
XSL Transformations (XSLT) is a standard way to describe how to transform (change) the structure of an XML (Extensible Markup Language) document into an XML document with a different structure. XSLT is a Recommendation of the World Wide Web Consortium (W3C).
XSLT can be thought of as an extension of the Extensible Stylesheet Language (XSL). XSL is a language for formatting an XML document (for example, showing how the data described in the XML document should be presented in a Web page). XSLT shows how the XML document should be reorganized into another data structure (which could then be presented by following an XSL style sheet).
XSLT is used to describe how to transform the source tree or data structure of an XML document into the result tree for a new XML document, which can be completely different in structure. The coding for the XSLT is also referred to as a style sheet and can be combined with an XSL style sheet or be used independently.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00505.warc.gz
|
CC-MAIN-2021-10
| 958
| 3
|
https://zalo.careers/job/senior-lead-full-stack-engineer-kiki-n-yEgG9QRkXtfdDY5j
|
code
|
We’re looking for a Full-Stack Developer to join the Kiki project (https://kiki.zalo.ai). If you're interested in the AI-driven world and would like to build a high-impact product for Vietnamese people, please take a look at the job requirement below.
What you will do
- Develop and maintain efficient data management, labeling, and analysis tools;
- Create interactive web applications that integrate AI services;
- Collaborate with cross-functional teams to gather requirements, design solutions, and implement them effectively.
What you will need
- Minimum of 2 years of experience as a Full Stack Developer;
- Strong expertise in Server-Side programming languages (Java & Python) and frameworks (Spring Boot, Django, etc.);
- Solid understanding of RESTful APIs and microservices architecture;
- Experience with databases, MySQL, Postgres, and MongoDB), proficiency in designing schemas and writing efficient queries;
- Familiar with with Elastic search engine;
- Strong problem-solving and analytical skills;
- Ability to thrive in a fast-paced, high-pressure environment;
- Possess a strong sense of ownership, an open-minded attitude, and a passion for continuous learning;
- Demonstrated responsibility and ability to work effectively within a team;
Nice to have:
- Experience with containerization technologies such as Docker and orchestration platforms like Kubernetes;
- Knowledge of DevOps practices and CI/CD pipelines.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646181.29/warc/CC-MAIN-20230530230622-20230531020622-00098.warc.gz
|
CC-MAIN-2023-23
| 1,434
| 18
|
https://msdn.microsoft.com/en-us/library/aa946986(v=office.14).aspx
|
code
|
|Important||This document may not represent best practices for current development, links to downloads and other resources may no longer be valid. Current recommended version can be found here. ArchiveDisclaimer|
Developing InfoPath Form Templates with Code
Last modified: March 25, 2010
Applies to: InfoPath 2010 | InfoPath Forms Services | Office 2010 | SharePoint Server 2010 | Visual Studio | Visual Studio Tools for Microsoft Office
The topics in this section provide information about creating form templates that have business logic written in managed code (Visual Basic or C#) against members of the Microsoft.Office.InfoPath namespace.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157472.18/warc/CC-MAIN-20160205193917-00143-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 644
| 5
|
http://www.muhendislikbilimlerikongresi.org/bildiriayrinti/medical-image-reasoning-with-the-convolutional-neural-netwok-based-fuzzy-logic_494
|
code
|
Convolutional Neural Networks (CNN), a specialized form of the Artificial Neural Networks (ANN), is widely used in computer vision (CV) for a variety of object recognition applications such as the medical image classification. Depending on the number of classes and the characteristic properties of the classification problem, an appropriate activation function for each output layer of the CNN is constructed. When the softmax function is used in the output layer, it exponentially grows with its input and then saturates to its maximum value, which is usually assigned as 1. In binary classification, when the probability values are close to each other, classification success is expected to converge to zero since the CNN fails to recognize the patterns in the input-output training data. However, the softmax function produces a value that converges to 0.5. In this case, a transformation function must be applied to the output of the network. Thus, the classification success of the proportional output value is presented more realistically. In this paper, Mamdani Fuzzy Model is implemented to enrich the output of the binary classification problem of the computed tomography (CT) images with the CNN. Both CNN and fuzzy logic applications are implemented in Matlab environment. The results are extensively analyzed and compared to show the efficiency of the proposed approach in this paper. This method, which is proposed to interpret the output in binary classification problems, may be the subject of future studies as a supportive method in separating data belonging to overlapping classes in multi-class classification problems.
Anahtar Kelimeler: Binary classification, Convolutional neural network, Fuzzy logic, Image processing
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710953.78/warc/CC-MAIN-20221204004054-20221204034054-00489.warc.gz
|
CC-MAIN-2022-49
| 1,741
| 2
|
https://forums.guru3d.com/threads/osd-does-not-show-up.292595/
|
code
|
Hi there, I'm having a little trouble with the OSD not showing up. I remember about a year ago, it used to work, but now, in Garry's Mod, (a Source game,) it doesn't now. So I have what I want selected to show in the OSD, now in the stats server, I have the proper executable selected, but it still doesn't show. Here's a picture. So if anyone can assist me in getting this to show, that would be excellent.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00553.warc.gz
|
CC-MAIN-2017-34
| 407
| 1
|
http://mudfish.net/forums
|
code
|
Cant seem to connect to BDO JP.Do you mind checking on Conoha Nodes. Thank you.
how do i change my subscription modE? i would like to play monthly
I'm aware this may be a silly question, but I am trying to play a game on an EU server from NA. Do I want a node nearest to me, or nearest the server?
I have bought mudfish credit, switched my profile to Subscription data plan. I have bought item Black Desert. I want to play Black Desert NA, I live in Finland, hence, EU. currently setting as follow: Destination Server: EU Amsterdam, Nodes: US West (with smallest score). Is it correct? or is it the other way around?
I need to ask because no matter what I did, my ping is horrible (300 ms) without mudfish it's around 190ms.
크래딧 구매관련 입금했는데 아직 안들어왔습니다. 확인 부탁드립니다.
Hello, i've got a problem with game called Black Desert Online. I have equiped item "Black Desert" but i still can't get past ip block in the game's launcher so i'm forced to use full VPN whitch takes extra traffic off my balance. Is it supposed to be like this? If so then why there's these "items" if they're not doing anything?
안드로이드 사용자입니다 캐나다 몬트리올 에 게임서버가있어서 설정후 핑을측정해보니 비슷비슷하길레 미꾸라지 사이트 핑테스트로 몬트리올 핑을 봤더니 원래이런것같던데 한국 캐나다간 최소지연율인가요ㅠ 씨애틀로 이중우회해서 쓰고있는데 한국과 외국간 핑이기때문에 감안하고써야하는건가요ㅜ
Just wondering if you're able to give any tips/advice/help regarding using OpenVPN For routing game traffic? My mate said that I should ask you since he thinks you use OpenVPN for routing, or something similar to it. I'm having issues with it "locking up" in-game. If I input more than one command (move + attack) it causes my latency to go up and up. If I only input one command (like attack) It stays low and stable.
티빙같은경우 한정이 국내IP전용인데
티빙은 지원하실 생각이 없으신지요?!
Can mudfish bypass China Firewall to access google and facebook? Because I'm using full VPN and still doesnt work.
I've checked the IP and it said to be working.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170875.20/warc/CC-MAIN-20170219104610-00047-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 2,240
| 13
|
https://beforeatlantis.com/apps/
|
code
|
New Apps for Archaeological Research
Historically, certain directions have special significance and may be considered auspicious, even sacred. For thousands of years these directions, which include north, south, east, and west – the cardinal directions, the directions in which the sun rises and sets on the summer and winter solstices, and the directions of extreme motion of the moon (lunar standstills), have influenced the design and alignment of churches, mosques, temples, cities, and other places of importance throughout the world.
Sacred Directions is a Mac OS archaeoastronomy app that displays these directions on a satellite image at practically any location on Earth for the purpose of understanding sites in terms of their relation to the heavens.
Sacred Directions Links:
Sacred Directions AR
Sacred Directions AR is an iOS app that displays these same directions in augmented reality (AR) views as seen through the device’s camera.
Sacred Directions AR can be used anywhere as it does not require an Internet connection. Hold it up and rotate in a circle to find the cardinal directions, summer and winter solstice sunrise, and sunset directions, the summer, and winter major and minor lunar standstill moonrise and moonset directions. The time slider allows you to view these directions now or anytime within the past 170,000 years.
Sacred Directions AR Links:
Please send any comments to email@example.com.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655092.36/warc/CC-MAIN-20230608172023-20230608202023-00520.warc.gz
|
CC-MAIN-2023-23
| 1,428
| 9
|
https://scholar.archive.org/work/fe5q2ws62nh6rn7yuveq3rwwua
|
code
|
2L-3W: 2-Level 3-Way Hardware-Software Co-Verification for the Mapping of Deep Learning Architecture (DLA) onto FPGA Boards
FPGAs have become a popular choice for deploying deep learning architectures (DLA). There are many researchers that have explored the deployment and mapping of DLA on FPGA. However, there has been a growing need to do design-time hardware-software co-verification of these deployments. To the best of our knowledge this is the first work that proposes a 2-Level 3-Way (2L-3W) hardware-software co-verification methodology and provides a step-by-step guide for the successful mapping, deployment and
... erification of DLA on FPGA boards. The 2-Level verification is to make sure the implementation in each stage (software and hardware) are following the desired behavior. The 3-Way co-verification provides a cross-paradigm (software, design and hardware) layer-by-layer parameter check to assure the correct implementation and mapping of the DLA onto FPGA boards. The proposed 2L-3W co-verification methodology has been evaluated over several test cases. In each case, the prediction and layer-by-layer output of the DLA deployed on PYNQ FPGA board (hardware) alongside with the intermediate design results of the layer-by-layer output of the DLA implemented on Vivado HLS and the prediction and layer-by-layer output of the software level (Caffe deep learning framework) are compared to obtain a layer-by-layer similarity score. The comparison is achieved using a completely automated Python script. The comparison provides a layer-by-layer similarity score that informs us the degree of success of the DLA mapping to the FPGA or help identify in design time the layer to be debugged in the case of unsuccessful mapping. We demonstrated our technique on LeNet DLA and Caffe inspired Cifar-10 DLA and the co-verification results yielded layer-by-layer similarity scores of 99\% accuracy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00178.warc.gz
|
CC-MAIN-2022-40
| 1,912
| 3
|
https://sc.edu/study/colleges_schools/cic/faculty-staff/wu_linwan.php
|
code
|
Faculty and Staff
Linwan Wu, Ph.D.
|Department:||School of Journalism and Mass Communications
College of Information and Communications
School of Journalism and Mass Communications
800 Sumter Street, Room 328
Columbia, SC 29208
B.A., Advertising, Huazhong University of Science and Technology (China)
M.A., Advertising, University of Florida
Ph.D., Mass Communications, University of Florida
Dr. Linwan Wu's research adopts the empirical and social scientific approach investigating how communication technologies influence consumers’ responses to strategic communication. He is interested in seeing how different features of digital media work together with other factors (e.g. message, individual, and contextual factors) to influence consumers’ cognitive, affective, and conative responses.
Wu is in the advertising sequence. He believes true knowledge comes from practice. He taught Advertising Research and International Advertising at the University of Florida. He currently teaches Media Analysis, Ad Team, and Literature of Mass Communication.
Wu, L. (2019). Website interactivity may compensate for consumers’ reduced control in E-Commerce. Journal of Retailing and Consumer Services, 49, 253-266.
Wen, T. J., Kim, E., Wu, L., & Dodoo, N. A. (2019). Activating persuasion knowledge in native advertising: The influence of cognitive load and disclosure language. International Journal of Advertising. Link»
Wu, L., & Wen, T. J. (2018). Exploring the impact of affect on the effectiveness of comparative versus non-comparative advertisements. International Journal of Advertising. Link»
Wu, L. (2018). Understanding how the message appeal of moral beauty influences advertising effectiveness under mortality salience. Journal of Marketing Communications. Link»
Wu, L., & Stilwell, M. (2018). Exploring the marketing potential of location-based mobile games. Journal of Research in Interactive Marketing, 12(1), 22-44. Link»
Wu, L., & Dodoo, N. A. (2017). Reaching goals and doing good: Exploring consumer responses to meaningful advertisements. Journal of Promotion Management, 23(4), 592-613. Link»
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657170639.97/warc/CC-MAIN-20200715164155-20200715194155-00534.warc.gz
|
CC-MAIN-2020-29
| 2,117
| 18
|
https://www.javascript.com/news/application-state-in-query-string-with-bidirectional-bindings-hywncjlfg
|
code
|
Application state in query string with bidirectional bindings
It's just a simple code that I was using in my many applications. After one more copy-paste, I thought it's time to extract it into a module :). It allows you to store simple application state in the query string, and notifies your application when state is changed. Hope you enjoy it :)!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187827662.87/warc/CC-MAIN-20171023235958-20171024015958-00524.warc.gz
|
CC-MAIN-2017-43
| 350
| 2
|
https://avpnfknt.web.app/lynum796zesu/cannot-find-proxy-server-error-quq.html
|
code
|
A Proxy server is an application or a server that comes in between the client computer and the website. When a user gets this error, he/she becomes unable to access the internet from his/her browser. When a user gets this error, he/she becomes unable to access the internet from his/her browser.
Above we have described a standard way to solve a problem when you cannot connect to a proxy server due to incorrect system settings. But there is a situation when viral applications purposely change the connection settings; some of them do this after restarting the computer, while others may act directly while the PC is running. But I am getting this error: Collecting datetime Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 407 Proxy Authentication Required ( Forefront TMG requires authorization to fulfill the request. System Update - Cannot connect to proxy server. My last successful system update was on 2/29/2008, afterwards I cannot connect to proxy server. I read an earlier post about uninstalling Google accelerator and did that but it made no difference. I also tried temporarily disabling ESET Smart Security, also no difference. Jan 10, 2015 · • Under the Proxy server section, please remove the check mark from Use a proxy server for your LAN (These settings will not apply to dial-up or VPN connections) and Under the Automatic configuration check mark in Automatically detect settings and click OK. It sounds like a DNS issue. To fix it, you need to ensure that /etc/resolv.conf has good entries for DNS servers. Google has public DNS servers that you can use.. So for example, you could add the following 2 lines to the top of your /etc/resolv.conf file (these point at the Google DNS servers) as detailed above:
I understand your concern about error: 'cannot detect proxy server settings' on your device. Do not worry we will help you with this issue. Do let us know if you are using wired/ wireless internet connection on your device? I suggest you to try the below troubleshooting steps and check if it helps. Step 1: Perform a Winsock Reset
Apr 26, 2007 · Once it is launched, you're computers Internet Explorer (or Firefox, or Opera, etc.) proxy settings will control how and if it makes an actual internet connection. Make sure those settings are correct, and that there isn't an actual issue with your proxy server at the time.
Mar 31, 2020 · Well, Proxy Server is a little bit similar to the VPN apps that we use. Proxy Server acts as a middleman between the server and the computer. Let’s say you are visiting techviral, so the proxy server comes between Techviral’s server and your computer. So, in this way, Techviral’s server will receive the IP Address of the Proxy that you
Cannot Link Emails, or Abacus Outlook Add-in is Missing; Cannot Link Emails, or Abacus Outlook Add-in is Missing Outlook is unable to connect to the proxy server I enabled the MRS Proxy Endpoint for the server by selecting the checkbox and press Save: Restarting the WebAppPool MSExchangeServersAppPool on the specific server using the following command to make sure everything applied and any cache are cleared: Mar 23, 2017 · Unable to connect to Proxy Server! - Fix!.Pull Down from top .Settings .WiFi.HOLD the Name of the WiFi Network.Modify Network Config.Enter WiFi Password.Check the box "Show Advanced Options"."Proxy". Choose "None"!
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00131.warc.gz
|
CC-MAIN-2022-05
| 3,493
| 6
|
https://www.econstor.eu/handle/10419/75145
|
code
|
Based on the assumption that the sectoral income distributions follow a Pareto distribution, a simple approximation to the aggregate Gini coefficient for a two-sector economy was developed which works on a minimum of information. As a matter of fact, merely the sectoral Gini coefficients, the mean incomes, and the distribution of population between sectors are needed to apply it. Its accuracy appears to be satisfactory and it should therefore be particularly useful, at least as a first investigative step, in the analysis of the income distribution of developing countries where the data base is small and the dual concept represents a meaningful approach, i.e. of countries with a distinctive difference between the export-sector, the modern sector or the urban sector etc. and the rest of the economy.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107881640.29/warc/CC-MAIN-20201024022853-20201024052853-00404.warc.gz
|
CC-MAIN-2020-45
| 808
| 1
|
https://www.replo.app/introducing-replo-blog-posts
|
code
|
Today, we’re announcing support for editing Shopify blog posts in Replo!
Since Replo launched, one of the most consistent pieces of feedback we’ve heard from Replo customers is that while Shopify blog content can be a huge driver for content marketing and SEO, it’s hard to style and edit.
If merchants want to insert non-text content into their posts (for example, a product feature with a “Buy Now” button in a recipe blog post) they have to resort to expensive custom-coding solutions or try to work around Shopify’s limitations using workarounds.
We’re pleased to announce that all types of Replo content, including products, customizable interactions, and animations are now available when you edit a Shopify blog post with Replo.
Creating a blog post in Replo
Getting started with blogs in Replo is simple: the same way you create a page, you can now create a blog post.
- In the Replo editor, find "Blogs" in the left-side menu.
- Click the "+" button to add a new blog post.
- Configure the settings for your blog post - you can set a title, featured image, SEO meta description, and more. If you have multiple blog feeds on your store, you can also choose the feed to add the post to.
- Click "Choose Template" to choose a template to start your post with
- Click "Create Blog" to create your post.
Blog templates and settings
Each new blog post uses the blog template from your Shopify theme (this template is usually where the blog post’s title, author, and featured image are shown).
From the settings button in the Replo editor header, you can configure all kinds of data about your blog post - the title and path are editable, just like pages, and you can also edit the post excerpt, featured image, and SEO data.
Adding content to blogs
It’s very common for content authors to write post content in other applications, like Google Docs or Notion, and paste it into Shopify to publish. We’ve extended the Text component in Replo to allow pasting in rich text content from many of these platforms so that you can write your content wherever you prefer, and add animations, product features, or any other kind of Replo component after pasting your copy into Replo.
We want to give special thanks to the merchants that beta tested this feature. If you have questions about how blog posts work in Replo or how to get started, feel free to reach out to us at firstname.lastname@example.org.
Today there are millions of e-commerce businesses built on platforms like Shopify, but developing websites for them is still a terrible experience. Marketing, design, and engineering teams all have to collaborate together to create content, but complex code and 15+ year old tech on platforms like Shopify create huge headaches for teams.
Replo is a new visual web development platform for high-performance, brand-driven teams on Shopify. Replo empowers brands and agencies to create beautiful pixel-perfect landing pages on Shopify - without developers. The product is an easy to use drag-and-drop editor that allows you to visually build React applications, starting with e-commerce. It’s backed by our super-flexible content management system and commerce APIs which integrate with 3rd-party platforms, including product, subscription, and user-defined data.
Replo is light years ahead of other page builders for customizability and page speed, and used by a ton of Shopify Plus brands and agencies. Replo’s template library has hundreds of proven landing pages and sections that anyone can use, as well as certified Experts that help build brand new landing pages in just a few days.
Get started with Replo today at https://replo.app
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648465.70/warc/CC-MAIN-20230602072202-20230602102202-00777.warc.gz
|
CC-MAIN-2023-23
| 3,661
| 21
|
https://www.remotely.jobs/remote-machine-learning-research-engineer-job-at-mindsdb
|
code
|
We are building MindsDB, an open-source explainable AutoML tool. We want to give everyone the ability to make informed predictions, using state-of-the-art ML models, with just a few lines of code.
We're looking for Machine Learning experts to join us in the journey of bringing state of the art Machine Learning to everyone.
You will be part of our Machine Learning team, with a focus on designing and implementing end-to-end Machine Learning workflows, we have a strong focus on explainability, so a lot of the work you will be doing will involve making sure that the Machine Learning models that our system generates, can explain to all (including non-technical people) where they should and should not trust the model when the model predict something, why are they making such predictions and to understand what within the data used should be of interest.
Since our project is Open Source, we encourage you to take a look at it: https://github.com/mindsdb/mindsdb & https://github.com/mindsdb/lightwood.
You are confident that you can solve the following challenge:
We are looking for a team member that is:
- Eager to understand things and experiment
- Good at working with minimal supervision in a remote environment
- Excited by discussing, deconstructing and challenging ideas
- Good at communicating in English, don't worry, you don't need to be a native speaker.
- Enjoys and can be confident at programing in Python 3
- Strong understanding of machine learning and artificial neural networks, for example through active research in a related PhD program or experience in research labs.
- Experience working with Pytorch is required.
- Papers in academic conferences (NeurIPS, ICML, ICLR, AAAI or domain-specific conferences).
Most important of all, is that you actually like MindsDB and believe in what we are trying to accomplish. We want to democratize Machine Learning and we want to make it explainable, most ML solutions out there focus on just making predictions, we also focus on the question:
- What is interesting in my data and why?
- When should I not trust this model and why?
- How can I improve this model?
- Why did the model give this prediction?
MindsDB is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, colour, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703533863.67/warc/CC-MAIN-20210123032629-20210123062629-00712.warc.gz
|
CC-MAIN-2021-04
| 2,685
| 20
|
https://sourceforge.net/directory/natlanguage:turkish/natlanguage:english/os:mac/license:publicdomain/
|
code
|
The STAXX client installs in minutes and offers a simple wizard to configure your feeds. From there STAXX will download Indicators of Compromise (IOCs) on a scheduled basis. The client includes interactive dashboards and a powerful search engine. A free STAXX portal account provides full IOC details, including Geo-Location, WHOIS and Passive DNS. Easily export IOCs for internal use. STAXX is, and always will be, FREE.Sponsored Listing
- Audio & Video
- Business & Enterprise
- Home & Education
- Science & Engineering
- Security & Utilities
- System Administration
Yunus is a simple "visual" script language. Yunus is obsolete (left in 2004). However this site exhibits many projects; you will find my other "PHP, VS.NET, Flash, Delphi" projects. eOgr is my second complete application after Yunus.72 weekly downloads
The future of information technology will be based on controlling the flow of natural light. This project is an attempt to establish the code (or software) that will enable this to happen. It involves rewriting an OS from the ground up.
A benchmarking program that gives you primes per sec.
An alternative for physically working file explorers with database support and virtual representation of data. It can virtually present and manipulate externally stored data (such as CDs, DVDs) and local hard disk in one interface. www.virtualdataexplorer.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00357-ip-10-171-10-70.ec2.internal.warc.gz
|
CC-MAIN-2017-04
| 1,372
| 11
|
https://www.physicsforums.com/threads/finding-a-particular-solution-of-a-differential-equation.348434/
|
code
|
Find the particular solution of this differential equation:
y`` −3y` −10y=10t2+16t−19
The Attempt at a Solution
I'm not really sure what the roots look like for 10t^2 + 16t - 19. I thought t had roots (0,0). Does that mean t^2 has roots (0,0,0)? And -19 has no roots? So the roots of the entire right hand side is (0, 0, 0, 0, 0)?
And yp = at + bt^2 + ct^3 + dt^4 + et^5?
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488249738.50/warc/CC-MAIN-20210620144819-20210620174819-00283.warc.gz
|
CC-MAIN-2021-25
| 377
| 5
|
https://www.techyv.com/questions/american-express-favicon-not-showing/
|
code
|
N/APosted on - 10/05/2012
I know it is not really a problem. But the thing is most websites' favicons are not displaying in my Firefox and Chrome. I've already reinstalled my browsers but the same thing happens. Any help would be good. Thanks!
American Express favicon not showing
Hi Nelson, try to check if your "short-cut icon" code, is position above "the all content" default. Move it to below the "all content". Save your work and refresh your browser, the favicon now shows.
Favicon or short-cut icon,web-site icon, URL icon or bookmark icon. Favicon is supported by the browser you are using. I will give you a link to understand better the format of favicon why it will not appear on some browser. You can go to this link Http:// en.wikipedia.org/wiki/favicon.ico#Browser_support.
This is how to verify if the website has favicon, you can go to
1. Navigate to the website
2. Tools menu> page info window
3. Select the media panel
4. Look for favicon (if not existing proceed below)
then in Firefox >tools>options>privacy panel>then add the favicon. For help how to add favicon go to this link Http://support.mozilla.org/en-US/kb/use-bookmarks-to-save-and-organize websites
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039748901.87/warc/CC-MAIN-20181121133036-20181121154724-00019.warc.gz
|
CC-MAIN-2018-47
| 1,180
| 11
|
https://www.burkard.it/2011/05/add-third-party-ssl-certificate-to-cisco-wlcs-web-authentication-page/
|
code
|
If you create a guest network with a Cisco Wireless Lan Controller, you will like to create and import a third-party SSL-Certificate for the Web Auth page. If you don’t add a third-party SSL certificate, your guest users will receive an error-message, that the WLC’s selfsigned certificate isn’t valid.
Cause i searched long time around, how to setup a third-party SSL certificate and it seems not to be the easiest thing, i wrote a Step-by-Step guide for integrating SSL-certificate to a Cisco WLC 5508 with Version 7.0.98.
To create and import a third-party SSL-certificate you will need:
- an WLC 5508 with IOS Version 7.0.98 (i didn’t test it with other WLC’s or other versions, but maybee it will run the same way)
- an external Certificate Authority (CA). in this document i will use www.startssl.org, which offers free Class 1 certificates.
- a separated VLAN for the guest network with a DNS- and a DHCP-server.
- OpenSSL 0.9.8h for Windows
- a TFTP-server software (i use TFTP32)
Prepare the Wireless Lan Controller
Create the interfaces
You have to create two interfaces for the guest network. The first interface is like each other interfaces with a name, an IP-address and a VLAN-tag.
The second interface is a virtual interface. To create it, click to Controller tab and select Interfaces. Click to the New-button:
Set the IP Address and define the DNS Host name as it should be named in the certificate:
Create the WLAN
Now you have to define a separate WLAN (SSID) for your guests. Click to WLAN’s tab, select Create new and klick to Go.
After set all other desired settings, click to Apply. If you scan now with your notebook for SSID’s the new one should be available withou any security. When you connect to it and browse to a webpage, you should receive a certificate error from a self signed SSL certificate.
Create a SSL certificate by startssl.org
To create a SSL certificate with www.startssl.org, you have to register an user-account. By creating an user account, you will receive a user certificate, which you will need to logon securely to startssl.org.
To be sure, the requested domain belongs to you, you have to validate your domain. Cause we will use only the free Class 1 certificates, there isn’t any need for other validations.
Validate your domain
To create an Class 1 certificate for your host wlc.domain.org, you have to validate the domain domain.org at www.startssl.org. First you have to logon at www.startssl.org with your user certificate. After logon go to StartSSL PKI and then to the Control Panel.
Go to Validation Wizard tab, select Domain Name Validation and click to Continue.
To validate that you are the domain owner, StartSSL sends an email to a predefined mail-address. Create one of the proposed email-addresses in your mail system and select it in the form:
Click to Continue button and wait for the validation mail. As soon you received it, click to the validation link in this mail, to validate the domain.
Request the Certificate at StartSSL.org
Click to the Certificates Wizard tab, select Web Server SSL/TLS Certificate and click to Continue.
Enter a password (10 – 32 chars). Don’t forget this password, you will need it later and you can’t recover it. Select a keysize of 2048 bits (WLC doesn’t support more than 2048 bits and StartSSL doesn’t support less than 2048 bits):
Copy the complete content from the textbox and paste it to a new text-file. Name the text-file private_key.txt. After creating the file, click to Continue.
Select the desired domain and click to Continue:
Download the Device Certificate
After the manual verification of StartSSL, you will receive a confirmation mail. Click to the Tool Box tab and select Retrieve Certificate:
Copy the full content from the textbox and paste it to a new text file. Save the text file as device_cert.pem.
Download the CA certificates
Click to the Tool Box tab and click to StartCom CA Certificates:
Combine the certificate
Create a new text file with the name All-Certs.pem and open it with a text editor. Insert the content of the following files in this order:
- Device certificate
- Class 1 Intermediate Server CA
- StartCom Root CA
Convert the certificate
To convert, you need openssl. i tested it with the Windows version 0.9.8h. Open a command prompt and run OpenSSL in it.
Run this two lines of code:
pkcs12 -export -in D:\All-Certs.pem -inkey D:\private_key.txt -out D:\All-certs.p12 -clcerts –passin pass:PASSWORD –passout pass:PASSWORD pkcs12 -in D:\All-certs.p12 -out D:\final-cert.pem -passin pass:PASSWORD -passout pass:PASSWORD
Where D:\ means the path, where your certificates lies and PASSWORD means the password, you defined before on StartSSL homepage. Both lines should be executed without errors.
Import the certificate to the WLC
Now you can import the SSL certificate to the Wireless Lan Controller.
Run you TFTP-server tool and select the path where your certificates lies:
Open the Web Interface from the WLC again. Click to the Security tab and select Web Auth –> Certificate. Select the checkbox near Download SSL Certificate and enter the values like below:
Click to the Apply button. After successfully downloading and installing the certificate, you need to reboot your WLC.
Additional needed network configurations
You have to configure your DHCP and DNS server in the guest vlan. At the DNS server you need to setup a zone entry for wlc.domain.org pointing to the IP address 188.8.131.52.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816939.51/warc/CC-MAIN-20240415014252-20240415044252-00097.warc.gz
|
CC-MAIN-2024-18
| 5,469
| 51
|
https://www.libhunt.com/topic/cbc
|
code
|
Top 8 cbc Open-Source Projects
Just a heads-up - there may be a significant vulnerability in your code. There doesn't appear to be any IV/nonce. If you use the same key to encrypt more than 1 message, an attacker can recover the plaintext. I see that the developer of aes-js has been warned about it also.
Pure-Python implementation of AES block-cipher and common modes of operation.
Appwrite - The Open Source Firebase alternative introduces iOS support . Appwrite is an open source backend server that helps you build native iOS applications much faster with realtime APIs for authentication, databases, files storage, cloud functions and much more!
🔓 CLI tool and library to execute padding oracle attacks easily, with support for concurrent network requests and an elegant UI.
Command Line deezer.com Player for Linux, BSD, Android, Windows
Linear Programming for Rust, with an user-friendly API. This crate allows modeling LP problems, and let's you solve them with various solvers.Project mention: -🎄- 2022 Day 19 Solutions -🎄- | reddit.com/r/adventofcode | 2022-12-18
I would like to see your code to see how you did it. I used the crate good_lp but it required an external command line "cbc" for me (on Windows) and failed to use GLPK later.
🔐 Fastest crypto library for Deno written in pure Typescript. AES, Blowfish, CAST5, DES, 3DES, HMAC, HKDF, PBKDF2 (by aykxt)
CryptHash.NET is a .NET multi-target library to encrypt/decrypt/hash/encode/decode strings and files, with an optional .NET Core multiplatform console utility.Project mention: What package(s) for offline password hashing and KDF? (Scrypt, Argon2, etc?) | reddit.com/r/csharp | 2022-04-06
AWS Cloud-aware infrastructure-from-code toolbox [NEW]. Build cloud backends with Infrastructure-from-Code (IfC), a revolutionary technique for generating and updating cloud infrastructure. Try IfC with AWS and Klotho now (Now open-source)
Small C++ cryptography library based on Qt and OpenSSL.
cbc related posts
Spotify Player: a command driven music player on the terminal
2 projects | reddit.com/r/programming | 4 Nov 2021
Rate My Work
1 project | reddit.com/r/Python | 28 Jun 2021
Help me setup Electron Cash on Linux (0.018bch bounty)
2 projects | reddit.com/r/btc | 28 Apr 2021
CLI Deezer player on OpenBSD (no account/no ads) with curl + jq + mpv [Need Feedbacks]
1 project | reddit.com/r/openbsd | 13 Apr 2021
What are some of the best open-source cbc projects? This list will help you:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00329.warc.gz
|
CC-MAIN-2023-06
| 2,470
| 22
|
https://www.epmpartners.com.au/blog/help-i-m-unable-to-add-an-existing-task-to-a-timesheet-a-baffling-project-server-timesheet-issue-resolved/
|
code
|
I thought I’d share a resolution to a baffling timesheet issue I encountered this week.
When I went to add an existing task back into my timesheet, I was confronted with the following error.
The error in the ULS was rather vague (as it often is). It stated the following:-
Well, since I’m privileged with working on many support tickets which get translated into timesheet assignments by way of our SPLINK solution, I have thousands of assignments. It turns out there is a limit to the number of assignment before the error above occurs.
The resolution I employed was to open the old Support projects no longer in use (Support FY14a, Support FY14b, SupportFY15a) and to “Closed Task to Updates” via the Close Tasks to Update view in PWA; ie. changed Locked status from No to Yes for all tasks. It should be noted that this view by default is not available to anyone as it isn’t associated to any Project Server security categories. In my case, I added the view to the “My Organization” category and then the view was available for use.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016949.77/warc/CC-MAIN-20220528154416-20220528184416-00734.warc.gz
|
CC-MAIN-2022-21
| 1,049
| 5
|
https://developers.raycast.com/information/background-refresh
|
code
|
menu-barat a configured interval.
1d. The minimum value is 1 minute (
1m), which should be used cautiously, also see the section on best practices.
isLoadingproperty is set to
false– which can be set programmatically or via React Suspense.
environment.launchTypein your command to determine whether the command has been launched by the user (
LaunchType.UserInitiated) or via background refresh (
isLoadingis set to false as early as possible
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572220.19/warc/CC-MAIN-20220816030218-20220816060218-00743.warc.gz
|
CC-MAIN-2022-33
| 444
| 8
|
https://wiki.bath.ac.uk/pages/diffpagesbyversion.action?pageId=104436828&selectedPageVersions=2&selectedPageVersions=1
|
code
|
- Open up console.cloud.google.com, and open the .CN Landing Page project.
- Open the www.universityofbath.cn storage bucket.
- Remove any unneeded files in there, and replace them with the new versions.
- For any documentation files (readme.md, the translation word doc, or other source material), ensure that "Share Publicly" is not enabled.
We should include, at the least, up to date Word doc translations of the page we are publishing. Otherwise it will become very difficult to compare versions. If this becomes a 'thing', we should also look at tracking which commit the live version corresponds to, or something like that.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864110.40/warc/CC-MAIN-20180621075105-20180621095105-00266.warc.gz
|
CC-MAIN-2018-26
| 630
| 5
|
https://issues.gradle.org/browse/GRADLE-424.html
|
code
|
[GRADLE-424] Create gradle_cache command to manage $HOME/.gradle Created: 22/Mar/09 Updated: 10/Feb/17 Resolved: 10/Feb/17
Some people like to keep their Maven and Ivy caches relatively clean and tidy – often, for example, by doing a "rm -rf ~/.m2/repository" : Maven has no cache management mechanism which makes this the easiest technique. It would be good if Gradle went one stage further than having "rm -rf ~/.gradle/cache" as the equivalent technique. Having a way of actively managing the Gradle cache would be good for people who like to keep things trim.
|Comment by Tom Eyckmans [ 22/Mar/09 ]|
I'm thinking of a gradle-cache command that would behave based on the directory it is executed in and look for a .gradle/cache directory in the current directory (so you can clean the global cache and project caches), it would have the following modes clean and info, clean would remove everything in the .gradle/cache directory, info would show you the size of the cache (for starters) this could be more elaborate and have additional options to show you what is actually in it.
|Comment by Russel Winder [ 23/Mar/09 ]|
Do we write the command in Java or Groovy. (Or better Python Actually I think we have to go with Java or Groovy as those can be guaranteed on a Gradle-aware installation. With Jython or JRuby, extra bits would be needed. Of course most system now come with Python installed as well as Perl – I guess Perl ois another option . I guess this means we should write a Groovy script and suffer the startup penalty.
Where would such a script go in the Gradle source tree? Perhaps we should create a place holder for experimentation.
I think what is needed to really kick this off is to create a document (by creating a TDD or BDD test perhaps?) that investigates the rules of what can be considered garbage and what must not be removed.
Is the intention for this to be a GUI-based tool, a command line tool or both?
|Comment by Tom Eyckmans [ 23/Mar/09 ]|
I'd go for a command-line tool initially, GUI would be nice eventually.
I agree with the Java / Groovy as these are available.
Platform dependent executable scripts $GRADLE_HOME/bin so the command is available when the gradle command is available, no additional setup. I'd add an org.gradle.cache package that ends up in a separate jar file (=separate bundle that can be started separately when we go the OSGI way).
I think that everything can be removed in the .gradle directories except the ~/.gradle/gradle.properties file and any none gradle specific files that users may put there. But I may be wrong about this as I'm not completely aware of what is in the cache dirs.
A lot depends on the amount of control that is expected:
off the clean mode:
off the info mode:
|Comment by Benjamin Muschko [ 15/Nov/16 ]|
As announced on the Gradle blog we are planning to completely migrate issues from JIRA to GitHub.
We intend to prioritize issues that are actionable and impactful while working more closely with the community. Many of our JIRA issues are inactionable or irrelevant. We would like to request your help to ensure we can appropriately prioritize JIRA issues you’ve contributed to.
Please confirm that you still advocate for your JIRA issue before December 10th, 2016 by:
We look forward to collaborating with you more closely on GitHub. Thank you for your contribution to Gradle!
|Comment by Benjamin Muschko [ 10/Feb/17 ]|
Thanks again for reporting this issue. We haven't heard back from you after our inquiry from November 15th. We are closing this issue now. Please create an issue on GitHub if you still feel passionate about getting it resolved.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00434.warc.gz
|
CC-MAIN-2023-50
| 3,643
| 24
|
https://anonymousknitter.com/2018/02/27/a-normal-day/
|
code
|
This morning I was up before 5. I heard a distinctive grumbling from the other side of the room.
Normally, I would have shouted back some language I am working hard on not using. Normally, I would have been hurt and angry.
In a meeting I realized that my reactions are not cool. I realized that I can not continue to punish him or myself for every mistake we have ever made.
I can not keep a list of every little time someone hurts me and hold it over them like a bloody axe. What I can do is gather more information.
So I did.
I handled it. Quietly.
I want this to be my new normal.
Thanks for reading my blog.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00236.warc.gz
|
CC-MAIN-2023-14
| 611
| 8
|
https://crushcrypto.com/open-platform-ico-review/
|
code
|
- Project name: Open Platform
- Token symbol: OPEN
- Website: https://www.openfuture.io
- White paper: https://s3.amazonaws.com/openmoney/OPEN+Platform+White+Paper+2018-03-08.pdf
- Hard cap: $30 million (ICO Participants receive 50% of total token supply)
- Conversion rate: 1 OPEN = $0.08
- Maximum market cap at ICO on a fully diluted basis: $60 million
- Bonus structure: N/A
- Presale or white list: Presale ongoing, whitelist open
- ERC20 token: Yes
- Countries excluded: They have not indicated restriction, but ask people to refer to their local regulation
- Timeline: TBD (Please refer to Open Platform's website for the most up-to-date information)
- Token distribution date: TBA
Video summary (video is 6:27 long):
What does the company/project do?
Open Platform makes it easy for applications to integrate with and accept cryptocurrencies as payments.
There are several of components that create the OPEN platform, such as Scaffolds, OPEN_states, the OPENWallet, and Developer Wallet.
Scaffolds are blockchain agnostic, general structure smart contracts which will act as the payment processor. They can be changed to suit the company’s payment model (i.e. subscription payment model, one time payments, in-game payments, etc), adding a lot of utility to blockchain payment systems.
Depending on how the scaffold is set up with a program, a user can pay for a service, or an in-game currency, and, in return, they get an OPEN_state, which represents the access to a promised service, or the number of in-game currency that person has.
The OPENWallet is one where a users’ cryptocurrencies and OPEN_states are stored. This wallet is then used as an access point for whichever program they are using, depending on what the OPEN_state represents.
The Developer Wallet is where the funds being paid into a scaffold end up going. This will utilize the OmiseGO SDK, which grants developers flexibility in choosing the kind of currency their wallet will receive.
They propose these components will come together, along with an easy to use API, to provide apps, developers, and businesses with a useful blockchain payment system.
How advanced is the project?
Open Platform is currently testing their MVP on the testnet, and has publicly announced a partnership with ZenSoft, which is a company with 150+ employees and provides backend services for a large number of companies and startups in Silicon Valley.
Below is a video demonstrating the MVP (video is 4:20 long):
Moving forward, Open Platform has provided this roadmap:
Q1/2 2018 – Open TestNet MVP, Open “Complex Applications” TestNet MVP, OPENWallet security design, open SDK 1.0 release.
Q3/4 2018 – Open 1.0 goes live (July), OPEN SDK 2.0, Release Open 2.0 (December).
Q1/2 2019 – Create the OPEN blockchain, and update to Open 3.0.
What are the tokens used for and how can token value appreciate?
The OPEN token will be used in a number of ways including for staking Scaffolds (locking up tokens when running a scaffold to deter bad behavior), to pay for purchases on the network, and to foster community involvement via airdrops from the pool for developers.
The token price should appreciate in value if there are enough developers and businesses that see this platform adding value to their infrastructure.
Depending on the required number of staked tokens for a given scaffold, if enough scaffolds are being set up proportional to the number of types of payment that people are hoping to achieve, demand will scale proportionally.
Open Platform has a team of 10 with the following being the core team members:
Ken Sangha, CEO – Over 9 years’ experience in leadership roles in business with the most recent being as founder and CEO of DoublePlay Entertainment Inc.
Andrew Leung, CTO – Over 10 years’ experience software engineering with the most recent being with TribalScale as the Director of Engineering.
Abishek Punia, Blockchain Lead Developer – 4 years’ experience in finance with just over 1 year in programming and blockchain development.
Chase Smith, Lead Architect – Over 3 years’ experience in research and development with the most recent being a Technical Due Diligence Officer at Veritas Due Diligence.
Dustin Sinkey, Lead Smart Contract Engineer – Over 6 years’ experience in finance and full stack software development with the most recent being an independent contractor.
Dennis Lewis, Director of Marketing – Over 30 years’ combined experience in marketing with the most recent being with icosuccess.com and Suchapp.
The team also has 4 core advisors, including Will Bunker, former president of Match.com, Lorne Lantz, a Paypal Partner, John Gardiner, contributed to Facebook messenger games and apps, and Chandler Guo, an avid investor.
- Open Platform allows applications to easily accept different cryptocurrencies, which can move cryptocurrency as a whole forward.
- The project makes it easy to for developers to integrate their existing applications as it automatically updates the application’s database. This makes the developers’ time easier and is a main differentiating factor compared to other payment processing projects.
- Because Open Platform can be used by any applications, the project will be frequently put in the headline as the platform partners with more and more applications.
- Virtually all existing apps are listed on platforms like Google Play (for Android) or App Store (for iOS). Those apps would not be able to integrate with Open Platform directly because they accept payments via Google/Apple.
- If Open Platform caters to decentralized applications (dApps), there are other protocols that allow easy exchange of cryptocurrencies such as Kyber Network and 0x.
- If Open Platform caters more broadly to online merchants, there are numerous other projects focusing on online payments using cryptocurrencies (Coinbase, Shapeshift, Coinify, CoinPayments, Cryptonator, Monetha, UTrust).
- 3% of each transaction is used as a type of network gas fee, which is going to form a developer growth pool. That puts paying through Open Platform as expensive as using Visa or MasterCard. One of the major advantages of transacting with cryptocurrency is the minimal fee, but this is not the case when transacting using Open Platform.
Overall, we are neutral about both the short- and long-term potential of this ICO. Our thoughts of the tokens for short term and long term are as follows:
For short-term holding
Neutral. The hard cap of $30 million is on the high side in this market and at the current ether price. We don’t see anything especially standout about this project to warrant a high unmet demand.
For long-term holding
Neutral. We see 3 areas of focus for Open Platform (targeting existing applications, broader online merchants, or dApps), none of which is compelling as detailed in our “Concerns” section.
Also, the 3% transaction value to fund the Developer Growth Pool adds up very quickly for applications, making it less appealing to use the platform.
Therefore, we are uncertain about the level of adoption Open Platform will have.
For more information about the ICO, please visit the following links:
* The information contained in this article is for education purpose only and not financial advice. Do your own research before making any investment decisions.
This article is contributed by Victor Lai with the help of our intern John Coburn.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488528979.69/warc/CC-MAIN-20210623011557-20210623041557-00344.warc.gz
|
CC-MAIN-2021-25
| 7,416
| 58
|
https://www.thewindowsclub.com/a-codec-is-required-to-play-this-file-download-install-codec-on-windows-10
|
code
|
The word Codec is an acronym of the words compressor and decompressor. Codecs are a program that compresses a video and later helps to decode it. So if you receive an error — A codec is required to play this file; it means you don’t have the codec to decode and play the file on your computer.
A codec is required to play this file. To determine if this codec is available to download from the Web, click Web Help.
Similar other messages you could see are:
- Windows Media Player cannot play the file because the required video codec is not installed on your computer.
- Windows Media Player cannot play, burn, rip, or sync the file because a required audio codec is not installed on your computer.
- Invalid File Format.
A codec is required to play this file
Think of it as a program which can reduce the size of a video file so the end-user can download it faster. Later, the consumer can decode the file and play it on his computer. Since there are many codecs, unless you have the right codec, you cannot play the file.
Further, there are many scenarios. Sometimes the video plays without audio, occasionally its the sound which plays with a blank screen. So what can do when the video doesn’t play, or it doesn’t open. We need the right codec. Some of you might have seen this with Windows Media Player.
But then how do we decide which codec is required? How can you check the installed codecs? It’s hard to guess unless the player gives out a specific name or you use the CodecInstaller. So in this post, we are listing some popular codecs and players which you can use to play any files.
How to download & install codec on Windows 10
You can configure Windows Media Player to download codecs automatically. To do this, open Tools > Options and click the Player tab. Select the Download codecs automatically check box, and then click OK.
You can also download and install the codecs manually. To install a codec, you have to clcik on its installer setup file. To uninstall the codec, you can do so from the Control Panel. Some codecs are available in the Microsoft Store. To uninstall them, look for the app in the Start Menu apps list and uninstall them from here.
Here is a list of Codecs you can download on your computer. If this doesn’t work, you can choose some favorite players which include many codecs and plays almost any file.
- Advanced Shark007 Codecs
- CCCP – Combined Community Codec Pack
- K-Lite Codec Pack
- LAV Filters
- Media Player Codec Pack
- Codec Installation Package.
These are more of codecs pack than just being a single pack.
1] Advanced Shark007 Codecs
Apart from the usual codecs, it can also play 4K UHD /HDR H265/HEVC and MVC using H264 codecs. It is activated by default. When installing the codec pack, it will ask you to disable or remove existing codecs from your computer. Here is the list of features:
- Full-color thumbnails including FLV’s and 10bit MKV’s. Along with the preview.
- Allow use of the PowerDVD decoders for 32bit LiveTV in Media Center.
- Support use of the LAV filters with the Play To function for MKV files.
- Support playback of MOD audio files and M4A files containing ALAC and more.
Download from here.
Read: The media could not be loaded, either because the server or network failed
2] CCCP – Combined Community Codec Pack
It includes a playback pack for Windows which supports most of the video formats. However, it was last updated in 2015. So you may want to check on other codecs as well.
Download from here at cccp-project.net.
Related: Video could not be decoded.
3] K-Lite Codec Pack
The packs include 32-bit and 64-bit codecs. The codec supports subtitle display; hardware accelerated video decoding, audio bit streaming, video thumbnails in Explorer and more.
Download from here.
It supports formats such as Xvid, DivX, and H.264. Along with this, it also includes a robust filter set that can enhance the video quality.
- Filters for resizing, de-interlacing, and displaying subtitles
- It enhances audio quality through normalization, down-/upmixing, and resampling.
The software pack also offers user interface which lets you configure codecs, show/hide filters, create a profile and so on. You can export and import settings as well if you ever had to reinstall.
Download from here.
5] LAV Filters
This decoder that uses libavformat to play all sorts of media files. libavformat is a library from FFmpeg. Ithe library offers a generic framework for coding and decoding audio, video and subtitle streams.
Download from here.
6] Media Player Codec Pack
The Media Player Codec Pack for Windows Media Player supports almost every compression and file type used by modern video and audio files.
- Compression types that you will be able to play include: x265 | h.265 | HEVC | 10bit x264 | x264 | h.264 | AVCHD | AVC | DivX | XviD | MP4 | MPEG4 | MPEG2 and many more.
- File types you will be able to play include: .bdmv | .evo | .hevc | .mkv | .avi | .flv | .webm | .mp4 | .m4v | .m4a | .ts | .ogm | .ac3 | .dts | .alac | .flac | .ape | .aac | .ogg | .ofr | .mpc | .3gp and many more.
Download it here.
7] Codec Installation Package
The Codec Installation Package from Microsoft can be used as an alternative to automatically downloading Windows Media Codecs, or to correct problems experienced with previously-downloaded codecs. It is available with Microsoft – but check if it applies to your version of Windows and WMP.
Use a Modern Media Player
Codecs were a massive problem in earlier days. Now, Windows 10 can successfully play most of the standard files. Windows Media Player, Movies and TV apps are good enough to play any videos. Further, the majority of best free media players can play almost any files, and with players like VLC, you never have to download anything from the internet.
Let us know in the comments if this helped.
- Manage, detect, remove broken Codecs and Filters with Codec Tweak Tool
- Identify audio & video codecs required, with VideoInspector.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224657735.85/warc/CC-MAIN-20230610164417-20230610194417-00717.warc.gz
|
CC-MAIN-2023-23
| 5,975
| 56
|
http://onlinelibrary.wiley.com/doi/10.1029/2006GL026065/abstract
|
code
|
We built a device for translating a GPS antenna on a positioning table to simulate the ground motions caused by an earthquake. The earthquake simulator is accurate to better than 0.1 mm in position, and provides the "ground truth" displacements for assessing the technique of high-rate GPS. We found that the root-mean-square error of the 1-Hz GPS position estimates over the 15-min duration of the simulated seismic event was 2.5 mm, with approximately 96% of the observations in error by less than 5 mm, and is independent of GPS antenna motion. The error spectrum of the GPS estimates is approximately flicker noise, with a 50% decorrelation time for the position error of ∼1.6 s. We find that, for the particular event simulated, the spectrum of surface deformations exceeds the GPS error spectrum within a finite band. More studies are required to determine whether a generally optimal bandwidth exists for a target group of seismic events.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661277.63/warc/CC-MAIN-20160924173741-00173-ip-10-143-35-109.ec2.internal.warc.gz
|
CC-MAIN-2016-40
| 947
| 1
|
https://ostoday.org/linux/how-do-i-permanently-export-my-path-in-linux.html
|
code
|
How do I permanently set my Java path in Linux?
To Set PATH on Linux
- Change to your home directory. cd $HOME.
- Open the . bashrc file.
- Add the following line to the file. Replace the JDK directory with the name of your java installation directory. export PATH=/usr/java/<JDK Directory>/bin:$PATH.
- Save the file and exit. Use the source command to force Linux to reload the .
What is export path in Linux?
export PATH=”~/.composer/vendor/bin:$PATH” export shell built-in (meaning there is no /bin/export ,it’s a shell thing) command basically makes environment variables available to other programs called from bash ( see the linked question in Extra Reading ) and the subshells.
Where are personal paths stored in Linux?
On most non-embedded Linux systems, it’s taken from /etc/login. defs , with different values for root and for other users.
How do I permanently add to my path?
To make the change permanent, enter the command PATH=$PATH:/opt/bin into your home directory’s . bashrc file. When you do this, you’re creating a new PATH variable by appending a directory to the current PATH variable, $PATH .
How do you set a PATH variable?
- In Search, search for and then select: System (Control Panel)
- Click the Advanced system settings link.
- Click Environment Variables. …
- In the Edit System Variable (or New System Variable) window, specify the value of the PATH environment variable. …
- Reopen Command prompt window, and run your java code.
How do I use export path?
- Open the . bashrc file in your home directory (for example, /home/your-user-name/. bashrc ) in a text editor.
- Add export PATH=”your-dir:$PATH” to the last line of the file, where your-dir is the directory you want to add.
- Save the . bashrc file.
- Restart your terminal.
What is the path in Linux?
PATH is an environmental variable in Linux and other Unix-like operating systems that tells the shell which directories to search for executable files (i.e., ready-to-run programs) in response to commands issued by a user.
What does R mean in Linux?
-r, –recursive Read all files under each directory, recursively, following symbolic links only if they are on the command line. This is equivalent to the -d recurse option.
How do I see all groups in Linux?
To view all groups present on the system simply open the /etc/group file. Each line in this file represents information for one group. Another option is to use the getent command which displays entries from databases configured in /etc/nsswitch.
Where are executables stored in Linux?
Executable files are usually stored in one of several standard directories on the hard disk drive (HDD) on Unix-like operating systems, including /bin, /sbin, /usr/bin, /usr/sbin and /usr/local/bin. Although it is not necessary for them to be in these locations in order to be operable, it is often more convenient.
How do I find my path in Terminal?
To see them in the terminal, you use the “ls” command, which is used to list files and directories. So, when I type “ls” and press “Enter” we see the same folders that we do in the Finder window.
How do I remove something from a path in Linux?
To remove a PATH from a PATH environment variable, you need to edit ~/. bashrc or ~/. bash_profile or /etc/profile or ~/. profile or /etc/bash.
How do I change the path in Linux terminal?
How to change directory in Linux terminal
- To return to the home directory immediately, use cd ~ OR cd.
- To change into the root directory of Linux file system, use cd / .
- To go into the root user directory, run cd /root/ as root user.
- To navigate up one directory level up, use cd ..
- To go back to the previous directory, use cd –
9 февр. 2021 г.
How do I change path in Linux?
The first way of setting your $PATH permanently is to modify the $PATH variable in your Bash profile file, located at /home/<user>/. bash_profile . A good way to edit the file is to use nano , vi , vim or emacs . You can use the command sudo <editor> ~/.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991413.30/warc/CC-MAIN-20210512224016-20210513014016-00609.warc.gz
|
CC-MAIN-2021-21
| 3,997
| 45
|
http://www.gamefaqs.com/boards/663025-simcity/65636644
|
code
|
Trying to have all utilities in an auxillary city even though I've approved buying power in the region view, it just isn't coming?
Edit: Furthermore, I don't have access to building in one city despite my other having an upgraded city hall? --- ~~~New England Patriots ~~~ Boston Celtics ~~~ Steam: markcsii~~~ http://i.minus.com/i6kF5k1GpPngU.gif http://i.imgur.com/lOuG01R.gif
Pretty much everything region related is kind of messed up right now. Things work, and then they don't, and then they work again and so on. Hopefully once the servers stabilize region play will work better. --- http://gamerhorizon.com/
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738099622.98/warc/CC-MAIN-20151001222139-00095-ip-10-137-6-227.ec2.internal.warc.gz
|
CC-MAIN-2015-40
| 614
| 3
|
https://forums.pyblish.com/t/not-access-to-pyblish-maya/470
|
code
|
I was able to install PyQt5 and Maya in centos 7.
But when installing pip install pyblish-maya allows it.
[hquintanaro @ PyblishRG binary] $ pip install pyblish-maya
Requirement already satisfied: pyblish-maya in /usr/local/lib/python2.7/site-packages (2.1.4)
Requirement already satisfied: pyblish-base> = 1.4 in /usr/local/lib/python2.7/site-packages (from pyblish-maya) (1.5.3)
But when in python I call the Maya:
Traceback (most recent call last):
File “”, line 1, in
File “/usr/local/lib/python2.7/site-packages/pyblish_maya/init.py”, line 3, in
from .lib import (
File “/usr/local/lib/python2.7/site-packages/pyblish_maya/lib.py”, line 12, in
from maya import mel, cmds
ImportError: No module named maya
I have errors
Do I have to assign a path?
or what is the fault.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571538.36/warc/CC-MAIN-20220812014923-20220812044923-00407.warc.gz
|
CC-MAIN-2022-33
| 785
| 16
|
https://www.intel.com/content/www/us/en/docs/programmable/683836/current/linux-position-independent-code.html
|
code
|
7.9.5. Linux Position-Independent Code
Every position-independent code (PIC) function which uses global data or global functions must load the value of the GOT pointer into a register. Any available register may be used. If a caller-saved register is used the function must save and restore it around calls. If a callee-saved register is used it must be saved and restored around the current function. Examples in this document use r22 for the GOT pointer.
The GOT pointer is loaded using a PC-relative offset to the _gp_got symbol, as shown below.
Loading the GOT Pointer
nextpc r22 1: orhi r1, %hiadj(_gp_got - 1b) # R_NIOS2_PCREL_HA _gp_got addi r1, r1, %lo(_gp_got - 1b) # R_NIOS2_PCREL_LO _gp_got - 4 add r22, r22, r1 # GOT pointer in r22
Data may be accessed by loading its location from the GOT. A single word GOT entry is generated for each referenced symbol.
Small GOT Model Entry for Global Symbols
addi r3, r22, %got(x) # R_NIOS2_GOT16 GOT[n] R_NIOS2_GLOB_DAT x
Large GOT Model Entry for Global Symbols
movhi r3, %got_hiadj(x) # R_NIOS2_GOT_HA addi r3, r3, %got_lo(x) # R_NIOS2_GOT_LO add r3, r3, r22 GOT[n] R_NIOS2_GLOB_DAT x
For local symbols, the symbolic reference to x is replaced by a relative relocation against symbol zero, with the link time address of x as an addend, as shown in the example below.
Local Symbols for small GOT Model
addi r3, r22, %got(x) # R_NIOS2_GOT16 GOT[n] R_NIOS2_RELATIVE +x
Local Symbols for large GOT Model
movhi r3, %got_hiadj(x) # R_NIOS2_GOT_HA addi r3, r3, %got_lo(x) # R_NIOS2_GOT_LO add r3, r3, r22 GOT[n] R_NIOS2_RELATIVE +x
The call and jmpi instructions are not available in position-independent code. Instead, all calls are made through the GOT. Function addresses may be loaded with %call, which allows lazy binding. To initialize a function pointer, load the address of the function with %got instead. If no input object requires the address of the function its GOT entry is placed in the PLT GOT for lazy binding, as shown in the example below.
For information about the PLT, refer to the "Procedure Linkage Table" section.
Small GOT Model entry in PLT GOT
ldw r3, %call(fun)(r22) # R_NIOS2_CALL16 fun callr r3 PLTGOT[n] R_NIOS_JUMP_SLOT fun
Large GOT Model entry in PLT GOT
movhi r3, %call_hiadj(x) # R_NIOS2_CALL_HA addi r3, r3, %call_lo(x) # R_NIOS2_CALL_LO add r3, r3, r22 ldw r3, 0(r3) callr r3 PLTGOT[n] R_NIOS_JUMP_SLOT fun
When a function or variable resides in the current shared object at compile time, it can be accessed via a PC-relative or GOT-relative offset, as shown below.
Accessing Function or Variable in Current Shared Object
orhi r3, %gotoff_hiadj(x) # R_NIOS2_GOTOFF_HA x addi r3, r3, %gotoff_lo(x) # R_NIOS2_GOTOFF_LO x add r3, r22, r3 # Address of x in r3
Multiway branches such as switch statements can be implemented with a table of GOT-relative offsets, as shown below.
Switch Statement Implemented with Table
# Scaled table offset in r4 orhi r3, %gotoff_hiadj(Ltable) # R_NIOS2_GOTOFF_HA Ltable addi r3, r3, %gotoff_lo(Ltable) # R_NIOS2_GOTOFF_LO Ltable add r3, r22, r3 # r3 == &Ltable add r3, r3, r4 ldw r4, 0(r3) # r3 == Ltable[index] add r4, r4, r22 # Convert offset into destination jmp r4 ... Ltable: .word %gotoff(Label1) .word %gotoff(Label2) .word %gotoff(Label3)
Did you find the information on this page useful?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100651.34/warc/CC-MAIN-20231207090036-20231207120036-00053.warc.gz
|
CC-MAIN-2023-50
| 3,308
| 28
|
https://utreon.com/v/1dqcXiqBLVI
|
code
|
crazy grandma tries dewitos new video Saturday thanks so much for liking and subscribing and comment what you wanna see next
jordans channel https://www.youtube.com/channel/UCasPPp5-vpn0HhO9q3YVilw
On this channel, you will find pranks, skits, vlogs, and so much more. Make sure to subscribe so you're never far behind. We love you all very much.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154796.71/warc/CC-MAIN-20210804045226-20210804075226-00596.warc.gz
|
CC-MAIN-2021-31
| 346
| 3
|
http://laforge.gnumonks.org/blog/20170410-libosmo-sigtran/
|
code
|
SIGTRAN/SS7 stack in libosmo-sigtran merged to master
As I blogged in my blog post in Fabruary, I was working towards a more fully-featured SIGTRAN stack in the Osmocom (C-language) universe.
The trigger for this is the support of 3GPP compliant AoIP (with a BSSAP/SCCP/M3UA/SCTP protocol stacking), but it is of much more general nature.
The code has finally matured in my development branch(es) and is now ready for mainline inclusion. It's a series of about 77 (!) patches, some of which already are the squashed results of many more incremental development steps.
The result is as follows:
General SS7 core functions maintaining links, linksets and routes
xUA functionality for the various User Adaptations (currently SUA and M3UA supported)
MTP User SAP according to ITU-T Q.701 (using osmo_prim)
management of application servers (AS)
management of application server processes (ASP)
ASP-SM and ASP-TM state machine for ASP, AS-State Machine (using osmo_fsm)
server (SG) and client (ASP) side implementation
validated against ETSI TS 102 381 (by means of Michael Tuexen's m3ua-testtool)
support for dynamic registration via RKM (routing key management)
osmo-stpbinary that can be used as Signal Transfer Point, with the usual "Cisco-style" command-line interface that all Osmocom telecom software has.
SCCP implementation, with strong focus on Connection Oriented SCCP (as that's what the A interface uses).
osmo_fsm based state machine for SCCP connection, both incoming and outgoing
SCCP User SAP according to ITU-T Q.711 (osmo_prim based)
Interfaces with underlying SS7 stack via MTP User SAP (osmo_prim based)
Support for SCCP Class 0 (unit data) and Class 2 (connection oriented)
All SCCP + SUA Address formats (Global Title, SSN, PC, IPv4 Address)
SCCP and SUA share one implementation, where SCCP messages are transcoded into SUA before processing, and re-encoded into SCCP after processing, as needed.
I have already done experimental OsmoMSC and OsmoHNB-GW over to libosmo-sigtran.
They're now all just M3UA clients (ASPs) which connect to
to exchange SCCP messages back and for the between them.
What's next on the agenda is to
finish my incomplete hacks to introduce IPA/SCCPlite as an alternative to SUA and M3UA (for backwards compatibility)
port over OsmoBSC to the SCCP User SAP of libosmo-sigtran
validate with SSCPlite lower layer against existing SCCPlite MSCs
implement BSSAP / A-interface procedures in OsmoMSC, on top of the SCCP-User SAP.
If those steps are complete, we will have a single OsmoMSC that can talk both IuCS to the HNB-GW (or RNCs) for 3G/3.5G as well as AoIP towards OsmoBSC. We will then have fully SIGTRAN-enabled the full Osmocom stack, and are all on track to bury the OsmoNITB that was devoid of such interfaces.
If any reader is interested in interoperability testing with other implementations, either on M3UA or on SCCP or even on A or Iu interface level, please contact me by e-mail.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943845.78/warc/CC-MAIN-20230322145537-20230322175537-00276.warc.gz
|
CC-MAIN-2023-14
| 2,935
| 32
|
https://discussions.unity.com/t/how-can-i-smooth-a-mesh-as-in-modelling-software-in-unity/112200
|
code
|
I am editing a mesh on mouse tap over the mesh, how can I achieve maya type smoothness?
This may also be of help; There is new asset on the Unity Asset Store called Polybrush.
Brackeys does a good job explaining it in this video:
There is no such feature in Unity to smooth meshes but it can be achieved programatically. Take a look at MeshSmother on Unity Wiki.
Actually the answer to this is incorrect in saying there is no such feature in Unity as there are means of achieving this with Unity.
Here is an earlier question asked which has answers covering a few ways to achieve this, in modelling applications and/or in Unity - hopefully this helps other people finding this:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506658.2/warc/CC-MAIN-20230924155422-20230924185422-00155.warc.gz
|
CC-MAIN-2023-40
| 677
| 6
|
https://community.asterisk.org/t/call-reference-length-as-a-variable/7281
|
code
|
how can I get the Call Reference Length as a variable??
When I do a “pri debug span X” I get something like this:
< Protocol Discriminator: Q.931 (8) len=10
< Call Ref: len= 2 (reference 9/0x9) (Terminator)
< Message type: SETUP ACKNOWLEDGE (13)
Now I need the 2 from the Call Reference Length in a variable.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00323.warc.gz
|
CC-MAIN-2022-21
| 312
| 6
|
https://mryslab.blogspot.com/2014/07/
|
code
|
Particle Physics Demo
Source code is also available here.
And some additional demo's
Mobile Robotics with Scratch: The Whole Truth About the Integration of Scratch, Arduino and Bluetooth
This is a nicely done article by Aldo von Wangenheim
and well worth the read
Has Been Translated To Chinese!!!
Thanks to Yufangjun 发自我的小米手机 for the translation. It can be found on Github.
3D Game Programming for Kids
When teaching programming to Middle School students (ages 10-14), it is essential to keep the lessons interactive, visually engaging, and intellectually challenging. “3D Game Programming for Kids”, by Chris Strom fulfills all these requirements, and then some!
So, I am now busy putting together lesson plans, all the while having a great time. The book is well written, and it not only focuses on creating 3D graphics, but quietly teaches solid programming principles. It even includes a section on debugging! For me, what is really compelling about this book, is that the reader is never bogged down in abstract concepts or nomenclature, but is just having a great time without realizing that there is a lot of heavy duty learning going on.
I give this book 5 Stars *****
Subscribe to: Posts (Atom)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644913.39/warc/CC-MAIN-20230529205037-20230529235037-00258.warc.gz
|
CC-MAIN-2023-23
| 1,224
| 13
|