url: string (length 13 – 4.35k)
tag: string (1 class)
text: string (length 109 – 628k)
file_path: string (length 109 – 155)
dump: string (96 classes)
file_size_in_byte: int64 (112 – 630k)
line_count: int64 (1 – 3.76k)
https://www.theladders.com/careers/Austin/Software-Engineer-II/
code
A Vulnerability Research Specialist uses vulnerability identification techniques to find exploitable bugs in target applications. The selected individual will correctly identify bugs in both C and C++ source code and in at least one architecture's assembly language. This role will be focused on developing enhancements and integration solutions for our custom web and desktop supply chain application suite. Works closely with peers and business partners to provide testable and stable solutions to our business-critical application suite. The Junior Java Developer will be responsible for designing, developing, and supporting software products, as well as working with documentation specialists and other development team members to develop features and repair defects. Assists Data Management and Biostatistics with developing tools and techniques for improving process efficiencies; communicates effectively within a multi-disciplinary project team to complete assigned tasks on time and within budget. Design, develop, and configure software systems to meet market and/or client requirements, either end-to-end (from analysis, design, implementation, and quality assurance, including testing, through delivery and maintenance of the software product or system) or for a specific phase of the lifecycle. You will partner with team members ranging from entry level to experienced, and work closely with business clients to analyze user requirements, design and code applications, and customize commercial software packages.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189377.63/warc/CC-MAIN-20170322212949-00100-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,510
6
https://sourceforge.net/p/jsch/discussion/219651/thread/10d04f41/
code
I have two servers: a Solaris box with Sun_SSH running openssh/sftp-server, and a Linux box with OpenSSH 4.3 running vsftpd. When an SSH_FXP_INIT message is sent, the Solaris server responds with: length = 0, type = 2, rid = 3. But the Linux server responds with: length = 707403803, type = 32, rid = 1600085855, and JSch throws: com.jcraft.jsch.JSchException: 4: Received message is too long: 707403803. Could someone please help me resolve this issue?
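A reply like that usually means the Linux side is not speaking the SFTP wire protocol at all: every SFTP packet starts with a 4-byte big-endian length followed by a 1-byte type, so if the sftp subsystem is misconfigured and the server writes plain text (a banner or error message) onto the channel, the first four characters get misread as an enormous "length". A minimal sketch of the framing (the sample banner bytes are an assumption for illustration, not the actual vsftpd output):

```python
import struct

def parse_sftp_header(data: bytes):
    """Parse SFTP packet framing: a uint32 big-endian length, then a uint8 type."""
    length = struct.unpack(">I", data[:4])[0]
    ptype = data[4]
    return length, ptype

# A well-formed SSH_FXP_VERSION reply: length=5, type=2, version=3.
good = struct.pack(">IB", 5, 2) + struct.pack(">I", 3)
print(parse_sftp_header(good))  # (5, 2)

# If the server writes plain text instead, the first four characters are
# misread as a gigantic packet length.
# (This banner is a made-up example, not the actual vsftpd output.)
bad = b"This service allows sftp connections only.\r\n"
print(parse_sftp_header(bad))  # (1416128883, 32): b"This" as uint32, then a space
```

It is perhaps telling that the reported type is 32, the ASCII code for a space character, which is what you would expect if ordinary text were being parsed as a packet header.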
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423901.32/warc/CC-MAIN-20170722042522-20170722062522-00478.warc.gz
CC-MAIN-2017-30
543
11
https://cvs.linuxtv.org/jarod/linux-2.6-ir.git/diff/Documentation/trace?h=v2.6.33-rc6&id=2ec91eec47f713e3d158ba5b28a24a85a2cf3650
code
Diffstat (limited to 'Documentation/trace')
1 files changed, 7 insertions, 7 deletions

diff --git a/Documentation/trace/events-kmem.txt b/Documentation/trace/events-kmem.txt
index 6ef2a8652e17..aa82ee4a5a87 100644
@@ -1,7 +1,7 @@
 Subsystem Trace Points: kmem
-The tracing system kmem captures events related to object and page allocation
-within the kernel. Broadly speaking there are four major subheadings.
+The kmem tracing system captures events related to object and page allocation
+within the kernel. Broadly speaking there are five major subheadings.
 o Slab allocation of small objects of unknown type (kmalloc)
 o Slab allocation of small objects of known type
@@ -9,7 +9,7 @@ within the kernel. Broadly speaking there are four major subheadings.
 o Per-CPU Allocator Activity
 o External Fragmentation
-This document will describe what each of the tracepoints are and why they
+This document describes what each of the tracepoints is and why they
 might be useful.
 1. Slab allocation of small objects of unknown type
@@ -34,7 +34,7 @@ kmem_cache_free call_site=%lx ptr=%p
 These events are similar in usage to the kmalloc-related events except that
 it is likely easier to pin the event down to a specific cache. At the time of
 writing, no information is available on what slab is being allocated from,
-but the call_site can usually be used to extrapolate that information
+but the call_site can usually be used to extrapolate that information.
 3. Page allocation
@@ -80,9 +80,9 @@ event indicating whether it is for a percpu_refill or not.
 When the per-CPU list is too full, a number of pages are freed, each one
 which triggers a mm_page_pcpu_drain event.
-The individual nature of the events are so that pages can be tracked
+The individual nature of the events is so that pages can be tracked
 between allocation and freeing. A number of drain or refill pages that occur
-consecutively imply the zone->lock being taken once. Large amounts of PCP
+consecutively imply the zone->lock being taken once. Large amounts of per-CPU
 refills and drains could imply an imbalance between CPUs where too much work
 is being concentrated in one place. It could also indicate that the per-CPU
 lists should be a larger size. Finally, large amounts of refills on one CPU
@@ -102,6 +102,6 @@ is important.
 Large numbers of this event implies that memory is fragmenting and
 high-order allocations will start failing at some time in the future. One
-means of reducing the occurange of this event is to increase the size of
+means of reducing the occurrence of this event is to increase the size of
 min_free_kbytes in increments of 3*pageblock_size*nr_online_nodes where
 pageblock_size is usually the size of the default hugepage size.
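The advice in the final hunk can be made concrete with a little arithmetic. A quick sketch (the 2 MiB pageblock size and the single-node system are assumed values for illustration, not part of the patch):

```python
def min_free_kbytes_increment(pageblock_size_kb: int, nr_online_nodes: int) -> int:
    """Suggested step for raising min_free_kbytes, following the formula in the
    documentation: 3 * pageblock_size * nr_online_nodes."""
    return 3 * pageblock_size_kb * nr_online_nodes

# Assuming a 2 MiB pageblock (2048 KiB, the usual default hugepage size on
# x86-64) and a single NUMA node:
print(min_free_kbytes_increment(2048, 1))  # 6144, i.e. raise min_free_kbytes in 6 MiB steps
```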
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00196.warc.gz
CC-MAIN-2022-05
2,721
44
https://kieranandrews.medium.com/learning-ansible-676e141340ff
code
I have recently been working on setting up servers for different applications and having to build quite a lot of identical servers. I have been frustrated with our old infrastructure setup. Some of the problems are different package versions across clusters and having to manually update each box (which is prone to human error). This has caused many problems, for example one server in a load-balanced cluster displaying issues while the others do not. This can cause issues for customers, and such issues can take a long time to debug. Also, building new servers is tedious, slow and error-prone. At other organisations, I have previously used Puppet for this situation. By using Puppet I can automate the build process on the server, and since the instructions are in code, I can repeat them on as many servers as needed. Testing is also a lot easier. Even with all of these benefits of Puppet, there were still a few frustrations. The configurations would get quite complex, and sometimes things would not work as expected and it took a long time to debug what was going wrong. Also, being an agent-based system, I would need to install Puppet on the servers first, and I wanted to be able to do as much setup as possible automatically. Doing some research on solutions, I shortlisted the modern tools that looked like improvements on Puppet. The options were SaltStack, Chef and Ansible. For my proof of concept and analysis I chose Ansible. Ansible is an agentless solution which uses SSH to perform all of its operations. This means it works with very little installed on the remote machine, and with a lot of different types of servers. When Ansible starts, it gathers "facts" about the server which it uses in its playbooks. This means you can target specific OS versions for particular operations.
The way to build an Ansible playbook is to describe how you would like the end state of the server to be, and then Ansible will ensure that the server ends up in that state by installing any missing applications or configuration. It has a large number of built-in modules that wrap common tasks you would like to do. This makes my playbooks look quite simple and easy to read. I have been very impressed with Ansible across quite a few projects and I plan to use it for future projects, as it's easy to use and to get a server up and running, configured the way I need, with little effort. I plan to write some further posts in the future about my experiences and share some of my code from my playbooks. Originally published at kieranandrews.com.au.
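A playbook in that declarative style might look like the following sketch (the host group, package, and tasks are hypothetical examples, not taken from the author's actual setup):

```yaml
# Hypothetical playbook: describe the desired end state; Ansible converges to it.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
      # Gathered "facts" let a task target a specific OS family or version.
      when: ansible_facts['os_family'] == 'Debian'

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Each task names a built-in module (`apt`, `service`) rather than spelling out shell commands, which is what keeps playbooks short and repeatable across as many servers as needed.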
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00286.warc.gz
CC-MAIN-2021-43
2,559
8
http://www.youritronics.com/tag/dcf77/
code
This project is based on the previous one posted; here the DCF77 signal keeps the thermostat's clock on the right time, so manual synchronization is no longer necessary. The temperature sensor is a DS1820 or DS18B20, and an HD44780 (or compatible) 2×16 LC display shows the day, date, time, temperature (with 0.1°C precision) and an indication of which temperature setting (economy or comfort temperature) is active. For each day there are 4 ON/OFF times to program; here, ON means switch to the comfort temperature and OFF to the economy temperature. A handy option for people who work at different times (shift work) is that it is possible to program the clock with a 2-week scheme. As you can see, a really complex program has been created for this project, which makes it really handy for those who need it. DCF77 Clock-Thermostat: [Link]
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125532.90/warc/CC-MAIN-20170423031205-00039-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
848
5
https://elchooratomb.web.app/639.html
code
AngularJS support is easily added as a plugin; the installation path is the same as for other plugins. Setting up my breakpoints in the controllers and running the grunt serve task through WebStorm's Grunt console does not hit the breakpoint. A Run/Debug configuration is an entry point to, as the name suggests, running and debugging apps in WebStorm. Even if you write your code using an IDE, is the debugging support of the Protractor tool good enough? In this course, we'll show you the fundamentals of how this great web developer IDE works. This program provides you a platform on which you can get the best development experience. I have also configured WebStorm as explained above. AngularJS, Karma and debugging unit tests in WebStorm. WebStorm does a good job out of the box. To debug different types of apps and files, you need to use different types of Run/Debug configurations. WebStorm is very light software and can easily run even on systems with low specs. A web page with Chrome extension options opens, showing the parameters to connect to WebStorm. Now switch to your beloved WebStorm and open the project you want to debug. The debugger is one of the most essential features of WebStorm. My goal was to debug the Lab tests, which I use to test my Hapi application. You can create websites and their designs with ease by using this software. The browser is open and the JB debug extension is installed and running. WebStorm is my favorite choice when it comes to developing web applications, especially with Angular 2. Then you can debug your AngularJS app in Chrome.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00664.warc.gz
CC-MAIN-2022-40
1,653
2
https://cboard.cprogramming.com/game-programming/37325-good-books.html
code
I was thinking about getting into some game programming during the summer. I have done almost no Windows programs in C++, but I have about 4-5 years of experience with C++ code in console apps. I have had a couple of data structures classes, and know about classes and most of their properties. I have also had pretty much all the math I need at the college I am at to major in computer science. With that said, I was thinking about trying to make a simple game, probably with DirectX. What books do you think would help, given what I know?
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218194600.27/warc/CC-MAIN-20170322212954-00357-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
539
1
https://schneider.blogspot.com/2005/11/
code
Wednesday, November 30, 2005 I regret to inform you that I've been hiding a massive turd from you and the business units for several years now. The turd is the "3-tiered silo application". The stench was horrid - I was forced to get some industrial grade air fresheners that I called "EAI and ETL". I feel very bad about this, as does my staff. Although, there were several times where you called us into meetings and we all had those 'shit-ass grins' on our faces and you thought we were making fun of you - good news - we weren't, we were laughing about our hidden turd. But I don't take all the blame. You remember when you asked us to do 12 major initiatives and we told you that we didn't have the staff? Yea - well, we didn't. So we cranked out as much as we could and patched the crap together with messaging. From an architectural perspective, it's a freaking mess!! We've got data replication, batch transfers, multiple message formats, multiple transports, platforms, vendors - wow, I can't believe you didn't fire us!! The reason that we are 'coming clean' is because we believe we've found an answer. It turns out that using simple protocols and a network programming model that isn't vendor specific will allow us to build connected systems. Also - we discovered this other wacky thing - this new model called SOA actually aligns to the needs of the business. Now that we're being honest with each other, I feel it necessary to inform you that this SOA thing... is right, but - it actually takes a bunch of planning and architecture. It's like - ya-gotta-use-your-brain-kinda-stuff. And not to drop another bomb on you - but a bunch of these guys around here are morons. Now, I know that you have deep pockets and short arms and don't like to pay for real talent, but I think you're screwed if you don't. So, it's your call. In the meantime, the Turd guys came up with this new turd-containment concept that they call an ESB.
I don't know what it stands for but I talked to some smart guys and they said that it was supposed to be SOA but the vendors missed their deadlines - so they just repackaged their old turds with some new SOA stuff. So, just beware - we're probably going forward with the turd-containment stuff and later we'll have to go with the real SOA stuff. Sorry for hitting you with all of this. Your Entire Staff Monday, November 28, 2005 1. The company has perfected requirements for silo based systems 2. The company stinks at requirements altogether I've looked at many systems that people wanted to upgrade, rewrite or replace using service oriented techniques. As a habit, I ask for the original requirements document for the production system. I then play this game to see if I can trace 'silo' or 'closed' characteristics back to the original requirements and specifications. In general, these characteristics are usually described in the 'supplementary specification' or in a supplemental 'non-functional requirements' document. And in virtually every case I am able to trace the issues back to the original documents. One of the significant changes in the 'service oriented enterprise' is a renewed emphasis on: portfolios, product lines, business processes, enterprise requirements and cross-cutting concerns. We are stressing to our customers that they MUST revisit the requirements process and move to an 'enterprise grade' method. That said, I am proud to announce a new course from MomentumSI, "Requirements & Specifications in Service Oriented Systems". The course is directed at the Business Analyst, but would be valuable for anyone that participates in the requirements & specification activities.
- Overview of SOA for the Business Analyst
- The Business Impact of SOA
- The Awesome™ Method for Business Analysis
- Strategy, Portfolio Analysis and Business Case Development
- Generating Stakeholder Requirements
- Coordinating Enterprise Requirements
- Validating Stakeholder Requirements
- Generating Engineering & Procurement Specifications
- Managing Change throughout the SDLC
- Examples and Cases
I firmly believe that those companies that continue to use 'silo based' requirements processes will fail in adopting SOA. We will begin offering this course in January. If you're interested in the full course description, just send Alex an email: arosen [at] momentumsi.com Saturday, November 19, 2005 MomentumSI is a consulting company that specializes in SOA. We help companies with their SOA strategy, plans, vendor selections, architecture, infrastructure, education and business projects. In this role we have a bird's-eye view of the SOA landscape, and one thing is clear: "An SOA Ecosystem is quickly evolving and those inhabitants that fail to recognize the rules of the system will likely wither and die." In the year 2005, one significant event took place in the SOA ecosystem. Vendors that were amorphous found shape and structure. Full-fledged product categories emerged: SOA Registry & Repository, SOA Management, SOA Intermediaries, SOA Governance, SOA Security, SOA Data Services, SOA Process & Workflow, SOA Testing and SOA Legacy Integration. As the organisms explored the ecosystem, they found each other. They stared, sniffed and prodded. They identified competition, but most importantly many of them found cooperation. After first contact, emphasis was placed on removing their overlap and shortcomings and they began the process of morphing themselves to become "SOA Eco-Friendly". The SOA Ecosystem will be governed by the laws of all ecosystems.
And I strongly encourage those inhabitants to know the laws:
- Symbiosis is an interaction between two organisms living together in more or less intimate association, or even the merging of two dissimilar organisms.
- Parasitism, in which the association is disadvantageous or destructive to one of the organisms and beneficial to the other.
- Mutualism, in which the association is advantageous to both.
- Commensalism, in which one member of the association benefits while the other is not affected.
- Amensalism, in which the association is disadvantageous to one member while the other is not affected.
- Predation is an interaction between organisms in which one organism, the predator, attacks and feeds upon another, the prey. (Wikipedia)
I've been flying around the country talking with large and small SOA product companies about the ecosystem. As a preferred 'SOA integrator' we see the problem from the view of the customer. And as the champion of the customer, we are unable to recommend products that exhibit characteristics which negatively affect the customer. Examples include:
- Portfolio Coupling - Single vendor solutions mandating a daisy chain of products all from the same vendor. Example: the vendor's Process Server requires the vendor's ESB, which requires the vendor's Application Server and the vendor's Database Server.
- Product Coupling - A single product that contains multiple capabilities, where each capability is really a standalone product. (First generation BPM products fell victim to this.)
- Closed System - A product that fails to have integration points into the other inhabitants of the SOA ecosystem. We don't have the equivalent of a J2EE specification for the service oriented / Web service world. The burden to identify and specify integration points and mechanisms is placed on the vendors by the customers and consultancies.
- Inch Deep, Mile Wide - In the early days of SOA, many vendors were chasing the 'SOA platform'; they provided a little bit of registry, a little bit of orchestration, a little bit of mediation, etc., but failed to excel in any one area. The Darwinian nature of the SOA ecosystem will kill off those vendors that fail to specialize (a mile deep). IMPORTANT: This will force some vendors to abandon entire products or modules and to replace this functionality with 'open system' integration points to best-of-breed providers.
The SOA ecosystem must place the customer at the center. Modern SOA infrastructure must be capability oriented and loosely coupled via standardized policies, metadata, protocols, formats and identifiers. Vendors must recognize the architectural ecosystem, the elements, their relations and constraints. As corporations spend millions of dollars to decouple applications, they MUST place an equivalent emphasis on procuring from those vendors that have eco-friendly offerings. Thursday, November 03, 2005 Judith Hurwitz presents her 'ten principles of SOA' in a piece on SOA: Battle of the Titans. My comments follow. SOA is real. It is not a quick fix. It is a ten year journey (or longer) that requires considerable planning. It is also as profound and far reaching as, for example, e-commerce. SOA is built upon 15 years of experiments in creating highly distributed computing environments that take into account everything from load balancing, software distribution, security, and data management, including metadata management and registry. SOA embraces all these aspects of computing and takes them into account. SOA will only work if organizations lead with manageability. SOA by its very nature demands the aggregation of IP from many different sources. Scalability within SOA will come from management, not development. It was funny listening to all of the vendors at the boot camp.
Each one told us what to lead with - 'Lead with the Registry!', 'Lead with Mediation!', 'Lead with security!'. Now Judith might be talking about SOA manageability or about general governance - it's impossible for me to comment on what she meant. Regardless, I'm comfortable stating that you should lead with SOA strategy and incorporate a closed loop infrastructure. All of the components must complement each other - no one component is at the heart. SOA will only work if it is implemented within a business process context. SOA is predicated on leveraging business services that constitute the component parts of your business. Yea - this is wrong. It is common for a newbie to watch a demo of BPEL/XLang/BPML orchestration and think that they just saw the future of computing. Better yet, they grasp the concept of a 'business service' but fail to realize that this is the ONE thing in SOA that will see the LEAST amount of reuse. Don't get me wrong - there will be some instances whereby SOA enables process-driven applications, but this is merely one of a dozen or so benefits. SOA requires a container that creates a composite application. I'll remind Judith that she characterized SOA as working in "highly distributed computing environments". Service based applications come to life by activating services in a well-known sequence. The composition (or invocation) of these services is distributed. The app server/container metaphor takes on a new meaning. Service oriented applications have 'invocation points' or 'composition points' at multiple layers in the architecture (virtual business service, business service, virtual data service, data service, etc.). The key here is to not accidentally use the term 'container' in the singular. How about, "Composition of service based applications is usually distributed, with no central composition point or single container." SOA requires standards that can be depended upon across all vendors' implementations of SOA.
The ONE truth about SOA is that we CAN NOT DEPEND on standards across all vendors' implementations. If you don't understand this, you don't understand SOA. The reason that SOA will work is because we have factored out the non-functional concerns and created transformable protocols, formats and identifiers of the standards, which are MEDIATED. SOA assumes that you will begin to write applications differently, as a series of tightly defined services implemented in a loosely coupled manner. I guess... :-) SOA assumes that each component part is equipped with a clearly implemented web services interface based on standards. Agreed. They just don't have to be the 'Web Service' standards, nor do they need to be ubiquitous. They just need to be federated, virtualized and mediated. SOA dictates that change is the norm since this approach to software mimics the way a business operates and evolves. SOA is a pain in the ass that, in the short term, will slow down your ability to rapidly deliver software to your customers. SOA doesn't provide rapid change but it does provide a framework for 'scaling capability'. As the number of enterprise concerns rises (consistent data, consistent logic, fulfilling regulatory mandates, process as a first order concern, one-face-to-the-customer, etc.) we need a framework to allow us to change EXTREMELY complex systems. SOA enables the insertion of capabilities in an incremental (and almost linear) manner. SOA is complex. Explaining it to CIOs is even harder. I wouldn't expect Judith to go into the detail that I just did for the audience she was catering to. However, the amount of poor advice provided is, in my humble opinion, disappointing.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510319.87/warc/CC-MAIN-20230927171156-20230927201156-00168.warc.gz
CC-MAIN-2023-40
12,952
62
http://www.cutoutandkeep.net/projects/whitegoldish-ring
code
First open your ring base; this cost me 1€, so very cheap, yay! Now roll the wire around the pliers two or three times, as close to the end as possible, so that you end up with a small circle. Depending on the hole's size you may have to create a small loop (next pic) to keep the wire from slipping through those holes... Starting from the center of the ring base, insert the wire from the inside out. Keep adding the beads, repeating steps 2 and 3, then create a loop with the wire on top of the bead to lock it. It should look like this... When you finish adding all the beads, put the two parts of the ring base together and adjust to your finger size.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164580801/warc/CC-MAIN-20131204134300-00095-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
649
8
https://forum.arduino.cc/t/i-cant-modify-or-write-in-arduino-sketch/334271
code
Is it because of Windows 10? If your Arduino IDE screen is grey then try this:
- Select File > Preferences from the Arduino IDE menus.
- Uncheck the checkbox next to "🗹 Use external editor".
- Click the OK button.
This "Use external editor" feature is for people who prefer to use a different text editor than the one built into the Arduino IDE for writing their sketches, while still using the Arduino IDE for compiling/uploading/installing libraries/etc. So it disables the Arduino IDE's built-in editor to prevent confusion or lost work that might occur if you forgot and tried to edit and save the sketch from both editors. You can try uninstalling the Arduino IDE and installing it afresh. It is not because of Windows 10. I also upgraded to Windows 10 and it is working just as before. Thank you ALL.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653930.47/warc/CC-MAIN-20230607143116-20230607173116-00593.warc.gz
CC-MAIN-2023-23
810
9
https://careers-pgi.icims.com/jobs/1762/senior-front-end-web-developer/job
code
Title: Senior Front-End Web Developer
Location: Richmond, London
The Company: UK-based Voice, Web and Video Conferencing Company located in Richmond, London. PowWowNow operate a large voice conferencing network offering services globally to a diverse client base. The Web Team works primarily on the company’s own website, built with NodeJS/ReactJS/Redux in Electrode, with all back-end interaction via microservices developed by the back-end teams. We work heavily with Docker and have a fantastic continuous delivery pipeline in place using Bitbucket, Bamboo and Rancher. Our Marketing team are very active so there are always lots of new and exciting projects in the pipeline. You will report to the Head of Development, and collaboration with Marketing, Digital and Design will form a large part of the role. We are an in-house development team, but we also direct the activities of a remote team based in India which can scale up and down as required by our projects.
- A confident Node.js/ReactJS developer with a history of delivering successful projects
- A fantastic communicator
- Adept at working within an agile team
- Keen to keep on top of all the latest developments in front-end web development
- A great environment, collaborating closely with Marketing to provide cutting-edge solutions.
- Flexible working; as a conference calling company we live and breathe it.
- Project ownership; we promote responsibility and accountability.
- Free fruit, free coffee, a Friday beer fridge and regular work days out and parties
For immediate consideration, please forward applications now. We’re not a one-dimensional place to work. Sometimes we’re very serious, because we take our business seriously. Sometimes we’re playful, because we have so much fun with what we do. A few things we consistently are: We offer you:
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655888561.21/warc/CC-MAIN-20200705184325-20200705214325-00315.warc.gz
CC-MAIN-2020-29
1,835
17
https://build.opensuse.org/
code
Welcome to the openSUSE Build Service The openSUSE Build Service is the public instance of the Open Build Service (OBS), used for development of the openSUSE distribution and to offer packages from the same source for Fedora, Debian, Ubuntu, SUSE Linux Enterprise and other distributions. Please find further details of this service on our wiki pages. This instance offers a special package search interface. Users of any distribution can search there for built packages for their distribution. For developers it is an efficient place to build up groups and work together through its project model. We welcomed a nice Easter present from our sponsor SUSE: we got 4 new servers that replaced the old backend machines. Each machine has 64 cores (AMD EPYC 7702) and 1TB RAM, powering the ~40 virtual machines forming the backend infrastructure. The old machines are now running as build workers. You can find them on the monitoring page as "old-cirrus" machines (AMD Opteron 6348, 500GB RAM).
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.7/warc/CC-MAIN-20210508121446-20210508151446-00619.warc.gz
CC-MAIN-2021-21
977
5
https://meta.stackexchange.com/questions/72121/adding-several-bounties
code
Isn't it possible to add several bounties (to different questions)? I added a bounty to a question yesterday, and for some reason, I can't add a bounty to another today... No, you can only have one active bounty per user, as well as only one per question. Basically, any question can only have one active bounty at any one time, and a user can only have one active bounty at any one time. From the FAQ: "There can only be 1 active bounty per question and per user at any given time." There are some further details on the improvements made to the bounty system in this blog post.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107907213.64/warc/CC-MAIN-20201030033658-20201030063658-00048.warc.gz
CC-MAIN-2020-45
567
6
https://github.com/golang/go/issues/28698
code
@cherrymui noted in the review of CL 148837 that on non-soft-float platforms, when working with floats, ODIV and OMOD won't panic. We should handle them a bit more precisely in calcHasCall. Change https://golang.org/cl/166937 mentions this issue: cmd: handle floats for ODIV better
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202588.97/warc/CC-MAIN-20190321234128-20190322020128-00193.warc.gz
CC-MAIN-2019-13
767
9
https://mail.python.org/pipermail/tutor/2007-February/053048.html
code
[Tutor] howto call DOM with python malaclypse2 at gmail.com Wed Feb 28 21:18:32 CET 2007 On 2/28/07, Ismael Farfán Estrada <sulfurfff at hotmail.com> wrote: > I was wondering whether someone knows how to use python to write > client-side scripts as if it were Java. I don't think this is possible, at least without creating a custom-compiled version of Firefox. There's a little bit of discussion from last year on the python mailing list here: There's some more information available out there if you search for PyXPCOM or python and nsdom. If you do end up getting it working, I would be interested in hearing how it went.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998288.34/warc/CC-MAIN-20190616162745-20190616184745-00379.warc.gz
CC-MAIN-2019-26
660
14
http://mrmathman.com/testprep/2018/4/2/42-regression
code
In class today we did problem C-1, a nice regression problem. I also passed out the first part of the Super Six. You should know how to do every step of the Super C-1, and then you will be able to do every basic regression skill that the AP test will ask for. Someone asked if it is OK to say "random residuals". I know that I've put that phrase on the board. On further reflection, it is not clear communication. Instead please say "the residual plot is random". It makes no sense to say a bunch of numbers are random. I also passed back your regression test. You should fix all your mistakes. Regression will be a big part of Thursday's Quest.
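For readers who want to see what a residual actually is, here is a minimal pure-Python sketch (the data points are invented for illustration): fit a least-squares line, then compute each residual as observed minus predicted. Saying "the residual plot is random" means these values show no pattern when plotted against x.

```python
# Illustrative only: least-squares fit and residuals on made-up data.
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares regression line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def residuals(xs, ys):
    """Residual = observed y minus the value predicted by the fitted line."""
    slope, intercept = fit_line(xs, ys)
    return [y - (intercept + slope * x) for x, y in zip(xs, ys)]

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
res = residuals(xs, ys)
```

A handy sanity check: least-squares residuals always sum to (essentially) zero, so a residual plot scatters around the x-axis.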
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945624.76/warc/CC-MAIN-20180422154522-20180422174522-00222.warc.gz
CC-MAIN-2018-17
627
4
https://ericmralph.com/2012/05/18/cause-collaborative-my-experience/
code
I attended my first Cause Collaborative event today. It brings together experts with people from small to medium non-profits to discuss common issues, advice, and best practices. I was a bit of an odd fit, since I don't work for a non-profit. I do have some ideas in that area, but they were ill-formed. The event was great, very informative and inspiring. It also helped me firm up my ideas. They still need a lot more work before being practicable, but they are much more concrete than they were this morning. It's amazing how simply discussing ideas with others (that have similar passions) can improve your ideas. The passion and smarts in the room helped me not only give form to my ideas, but also stoked my passion for them (but not in a dirty way). I also appreciated that everyone I met offered help (I most clearly needed it). I hope to further develop the relationships that started today. In any case, it was definitely worth spending a beautiful day at the Cause Collaborative.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654016.91/warc/CC-MAIN-20230607211505-20230608001505-00081.warc.gz
CC-MAIN-2023-23
992
2
https://devblogs.microsoft.com/devops/pricing-for-release-management-in-tfs-15/
code
Pricing for Release Management in TFS “15” [Update on Nov 16, 2016] This article is now outdated. With the RTM version of TFS 2017, we have the final pricing model for Release Management. For more information, see our official documentation. Since the new version of Release Management was introduced in TFS 2015 Update 2, it has been in “trial mode“. Any user with the Basic access level was able to access all features of Release Management. For the last few months, we have been hard at work to finalize the pricing model for Release Management in time for the release of TFS “15” RTM. We wanted a model that: - makes Release Management available to all Basic users in a team - is free for small teams, and is competitive as the complexity in an organization increases - is equally applicable to both TFS and VSTS - is uniform across Build and Release Management in VSTS - provides value to Visual Studio Enterprise subscriptions Based on all of these, here is a summary of the pricing model that we have finalized so far for Release Management in TFS “15”: Let us now look at what this model means in more detail. - No additional per-user charge: You do not pay per user any more for Release Management. Earlier versions of Release Management (Release Management Server 2013) required Visual Studio Test Professional or Enterprise subscriptions for users in order for them to author release definitions. That is no longer the case. Just like Build, Release Management can be used by all users in your TFS as long as they have a Basic access level or TFS CAL. Just like before, Stakeholders can continue to approve or reject releases even without a Basic access level or TFS CAL. - No charge for agents: You do not pay for agents for Release Management. Register any number of agents with your TFS. - Charge for concurrent pipelines: The primary metered entity for Release Management is the number of pipelines you can run at a time. A pipeline is just a single release.
By default, you can always run one pipeline at a time for free. Additional releases that you create will be queued automatically. When you deploy a release to several environments in parallel, all the deployments still count as one pipeline, since they are part of a single release. - Buy a la carte: You can also buy additional release pipeline concurrency from the Visual Studio Marketplace without having to buy an entire Visual Studio Enterprise subscription. This new pricing model is in effect starting from TFS “15” RC2. The only option that is still not available in RC2 is the ability to buy a la carte extensions from the Marketplace. This work is in progress and is expected to complete by TFS “15” RTM. When you upgrade to TFS “15” RC2 or above, you will notice that: - Release Management is not in “trial mode” any more. - All Basic users in your server can access all Release Management features. - The number of concurrent pipelines that you can run is set to the free limit of “1” per server. - The “Resource limits” page under the Settings -> Build and Release tab is where you manage the number of concurrent pipelines for your server. To understand your true cost of Release Management, there is one key question that you need to answer: how many pipelines do I need to run at the same time? We believe that the one free concurrent pipeline gets a small team started for free. A rule of thumb is to count one pipeline for every 10 users in your server or account. Even for large accounts or installations (with around 200-500 users), it is unlikely that more than 20-50 pipelines run at a time. We plan to complete the official documentation for this pricing model in the next few weeks before the release of TFS “15” RTM. We will also include the pricing model for Release Management in Team Services as part of that documentation. This blog post is intended to provide guidance to users of Release Management as the above pricing features are being released in TFS “15” RC2.
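The rule of thumb above is simple arithmetic. The helper below is a hypothetical sketch (the function name is not part of any TFS API) that just encodes it: roughly one concurrent pipeline per 10 users, with the one free pipeline as the floor.

```python
# Hypothetical helper illustrating the stated rule of thumb:
# about one concurrent pipeline per 10 users, and one pipeline is always free.
def estimated_pipelines(user_count):
    """Estimate how many concurrent release pipelines a server might need."""
    if user_count <= 0:
        return 1  # the free pipeline is always available
    return max(1, round(user_count / 10))
```

For a 200-500 user installation this lands in the 20-50 pipeline range quoted in the post.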
Release Management Team
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104692018.96/warc/CC-MAIN-20220707124050-20220707154050-00430.warc.gz
CC-MAIN-2022-27
4,033
23
https://www.ikw.uni-osnabrueck.de/en/news/news_entry/artikel/new-review-paper-on-hyperscanning-research.html
code
13 May 2020: New review paper on hyperscanning research: Czeszumski A, Eustergerling S, Lang A, Menrath D, Gerstenberger M, Schuberth S, Schreiber F, Zuluaga Rendon Z and König P (2020). Hyperscanning: a valid method to study neural inter-brain underpinnings of social interaction. Front Hum Neurosci 14:39. doi.org/10.3389/fnhum.2020.00039 Social interactions are a crucial part of human life. Understanding the neural underpinnings of social interactions is a challenging task that the hyperscanning method has been trying to tackle over the last two decades. With a review of state-of-the-art methods to measure brain activity, discussion of different types of analyses in this field and the presentation of some selected hyperscanning studies, we aim to give a comprehensive overview of the last 20 years of hyperscanning research.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00347.warc.gz
CC-MAIN-2023-50
838
4
https://www.blackhatworld.com/seo/need-help-in-recovering-my-blog-data.479322/
code
Hi Friends, While moving my blog from one server to another I lost all of that blog's data. So I installed WordPress again. Now I need that data back. My data is still indexed in Google, but I don't have any backup of my articles. Is there any way to get my articles back from the Google cache? Please help me, friends.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745522.86/warc/CC-MAIN-20181119084944-20181119110944-00387.warc.gz
CC-MAIN-2018-47
320
1
https://www.latitudeonedegree.com/product-page/clutch-5-plain-green-green-snake
code
Welcome CL5! The latest model in our collection. Simple and fuss-free with a calfskin band closure, this clutch has been designed to keep small things secure. Ideal for cocktail parties. CLUTCH 5 - Plain Green/Green Snake - This item is handmade and therefore one of a kind. - Clutch dimensions: length 25 cm x height 16 cm - Calfskin leather body - Calfskin leather closure band that can be used as a carry handle - Detachable shoulder/cross-body metal chain, length 108 cm - "Latitude One Degree" dust bag - Calfskin card holder
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529128.47/warc/CC-MAIN-20210122051338-20210122081338-00438.warc.gz
CC-MAIN-2021-04
520
11
https://discotek.club/major-lazer-dj-snake-feat-mo-lean-on-piano-cover-tutorial/
code
A piano cover of Lean On by Major Lazer and DJ Snake featuring MØ, which I arranged by ear and played in Synthesia. Sheet music: http://imgur.com/a/ReAPd Follow me on Twitter – http://bit.ly/grande1899twitter My Facebook page – http://bit.ly/grande1899facebook The cover should have the same tempo as the original song, so you can sync it with the original video. I didn't play the song in real time; I put it all together in FL Studio. I have also made a note block version of Lean On in Minecraft, which you can find on my channel. Thanks for watching!
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590559.95/warc/CC-MAIN-20180719051224-20180719071224-00438.warc.gz
CC-MAIN-2018-30
561
6
http://char.tuiasi.ro/doace/docs.ipfrance.net/docs-dev/postgresql/r19969.htm
code
ALTER USER username [ WITH PASSWORD 'password' ] [ CREATEDB | NOCREATEDB ] [ CREATEUSER | NOCREATEUSER ] [ VALID UNTIL 'abstime' ] This parameter takes the name of the account to be modified. If you are changing an account's password, use this parameter to provide the database with the new password for the account. Use these keywords to determine whether or not an account has permission to create databases. CREATEDB specifies that the account is able to create databases, whereas NOCREATEDB denies the user the privileges needed to create databases. Use these keywords to set whether or not a user has permission to create other users. Specifying that an account is able to create other users also automatically classifies the user as a superuser on the database; this can be quite a security risk if unintentional, as a superuser account can override other access restrictions. To force an account's password to expire after a certain amount of time, enter the date on which this should happen (and, optionally, the time). Use ALTER USER to change the attributes and permissions of a PostgreSQL user account. Only a database superuser can change privileges and password expiration values with this command. Ordinary users are only permitted to change their own password. The following example changes the password for user mark: ALTER USER mark WITH PASSWORD 'ml0215em'; The next example demonstrates changing the valid-until date for the user account mark: ALTER USER mark VALID UNTIL 'Dec 30 2012';
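To show how the optional clauses above compose, here is a small Python sketch that assembles ALTER USER statements from keyword options. The function name and the naive string formatting are illustrative assumptions only; a real application should use a database driver's identifier and literal quoting rather than building SQL by hand.

```python
# Illustrative sketch: compose ALTER USER statements like those described above.
# Do NOT use naive formatting like this with untrusted input; rely on a
# driver's quoting facilities instead.
def alter_user(username, password=None, createdb=None, createuser=None,
               valid_until=None):
    """Compose a PostgreSQL ALTER USER statement from keyword options."""
    parts = ["ALTER USER", username]
    if password is not None:
        parts.append("WITH PASSWORD '%s'" % password)
    if createdb is not None:
        parts.append("CREATEDB" if createdb else "NOCREATEDB")
    if createuser is not None:
        parts.append("CREATEUSER" if createuser else "NOCREATEUSER")
    if valid_until is not None:
        parts.append("VALID UNTIL '%s'" % valid_until)
    return " ".join(parts) + ";"
```

Calling `alter_user("mark", password="ml0215em")` reproduces the first example from the page above.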
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668716.69/warc/CC-MAIN-20191116005339-20191116033339-00080.warc.gz
CC-MAIN-2019-47
1,510
13
https://github.com/sqlitebrowser/sqlitebrowser/issues/1534
code
add schema name when "Drag and Drop Qualified Names" for fields of attached tables #1534 Describe the new feature I recently noticed the "Drag and Drop Qualified Names" option from the DB schema is implemented. It would be more convenient to also add the schema name for attached databases! Please answer these questions before submitting your feature request. Is your feature request related to an issue? Please include the issue number. Does this feature exist in another product or project? Please provide a link. Do you have a screenshot? Please add screenshots to help explain your idea. I considered including the schema in field names as well (except for main). In the end I felt that it was too verbose and left only the table name when the qualified option is enabled. But no strong opinion on that; it can be changed. I wonder now if we need an additional option for this or whether people will be satisfied with selecting between fully qualified and not qualified at all. By the way, with the current version, you can control-select the schema and the field to get the fully qualified name. To get a good result, it is important to disable the qualified option and click the items in the right order. It is not so tedious to delete an unnecessary qualifier. So I prefer just two options:
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247479627.17/warc/CC-MAIN-20190215224408-20190216010408-00075.warc.gz
CC-MAIN-2019-09
1,404
14
http://www.thetechreel.com/2009/05/windows-7-to-feature-xp-mode.html
code
Vista required a high-performance machine with full graphics acceleration to run on, and that's exactly why most of the applications made for Windows XP do not run natively on Vista. However, a lot is going to change with the release of Windows 7. Even though Windows 7 still won't be able to natively run Windows XP apps, it will be able to run them in a coherent mode using virtualization. According to a new announcement, Windows 7 will feature an 'XP mode' that provides the flexibility to run many older productivity applications on a Windows 7 based PC. This means you no longer need to worry about software compatibility when deciding to switch from XP to Windows 7. Here goes a preview of the Pet Lookup app for XP running in Windows 7. All one needs to do is install a particular application in XP mode while within Windows 7, and the application will be pushed to the Windows 7 desktop. It will let you run older apps just as you normally would. A beta of Windows XP Mode and Windows Virtual PC for Windows 7 builds will be made available soon by Microsoft.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164677368/warc/CC-MAIN-20131204134437-00009-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,078
5
https://channel9.msdn.com/Blogs/PartnerApps/Druva-Adds-Cloud-Solutions-to-Microsoft-Azure
code
Druva, a leader in converged data protection, entrusts its endpoint and cloud data protection solution, Druva inSync, to Microsoft Azure Premium Storage and Azure Virtual Machines. Druva's backup, recovery, and file sync solution benefits from the unparalleled security, scalability, and high performance provided by Azure. With Azure, Druva can provide its customers reliable, robust data loss prevention, automated compliance management, and legal hold capabilities for eDiscovery. Geographically dispersed Azure datacenters provide Druva with access to 21 storage regions worldwide and cost-effective Infrastructure-as-a-Service environments. "The Microsoft Azure Marketplace delivers direct access to the cloud-ready applications and services customers are asking for," said Steve Guggenheimer, Corporate Vice President and Chief Evangelist for Microsoft. "Druva built natively on the public cloud to take advantage of its elasticity, global presence and security to handle petabytes of customer data efficiently, which are also foundational elements of our Azure offering. Our mutual customers will reap the benefits of our joint efforts with cloud scalability and flexibility, always-on reliability, and international compliance support." To learn more about Druva and how it gives Azure customers a wider set of choices to best meet their data growth, security, and regionally specific regulation requirements, download the datasheet and the mini case study and read the press release.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488253106.51/warc/CC-MAIN-20210620175043-20210620205043-00402.warc.gz
CC-MAIN-2021-25
1,492
3
http://nauticalapalma.com/header.htm
code
We hope this site will be useful to you while sailing through seas and oceans or while strolling around La Palma. Here you can buy or sell your boat, or request a quote for the delivery of boats from/to any port worldwide. Find here all the information needed to plan and organize your journeys: weather reports, ports, webcams, marine services, etc.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146714.29/warc/CC-MAIN-20200227125512-20200227155512-00534.warc.gz
CC-MAIN-2020-10
354
6
http://retrocmp.de/hardware/rc2014/rc2014.htm
code
Building my own Z80 Computer I've had the computer kits lying around for a year now. Now the time has come. Weekend. The weather is getting worse outside and there isn't much left to do in the garden. Building the Z80 starts on 27.09.2019 and ends on ... see below ... Assembling modules 1 to 7 took me one week in the end (after work), and I did nothing at all on two of those days. I don't know what to say. The system is simply ingenious and relatively easy to assemble/solder. Congratulations to Spencer Owen! My RC2014 Zed Pro runs RomWBW (v.2.9.1-pre.5, 2018-08-29) with the CP/M 2.2 operating system and also the Z-System. Update: 19.10.2019, v2.9.2-pre.15. - Backplane Pro ............... (27.09.2019, 28.09.2019) - z80 2.1 CPU Module .......... (29.09.2019) - 512k ROM 512k RAM Module .... (29.09.2019) - Dual Serial SIO/2 Module .... (29.09.2019) - Dual Clock Module ........... (29.09.2019) - Compact Flash Module ........ (02.10.2019) - If you use the compact flash module AND the IDE module at the same time on the RC2014, beware that drives C: and D: are assigned to IDE0:0 and IDE0:1 (CF card) and drives E: to H: to PPIDE0:0 and PPIDE0:1 (hard disk)! - Of the 8 possible CF-card CP/M slices, only the first two can be used. The remaining 6 are still present but not visible and cannot be used. - Personally, I prefer to work only with the CF card. I divided my 48 MByte card into five CP/M slices (8 MByte each) and a 6 MByte FAT12 DOS partition. With the FAT partition you can exchange files between your PC and the RC2014 with FAT.COM. - If the RTC is not plugged in, the boot process takes about 72 seconds. The RC2014 queries the RTC for the time; if it doesn't get a reply, it waits for a timeout and then boots up. - Wi-Fi is working, but Telnet and the serial link are not; maybe a problem with the terminal settings! To be continued ... Special features ...
to be observed during operation - FTDI connection on the Serial I/O Module: - 115.200 bps, 8 data bits, no parity, 1 stop bit - flow control: hardware (necessary for XMODEM) - XMODEM (XM): Use the RAM disk for the file transfer from the PC and finally PIP the file to the CF card. Use XM with the RC option when receiving files. To be improved ... File transfer The terminal connection to a PC works fine at 115.200 bps, but the file exchange with XMODEM sometimes failed. 06.10.2019: Problem solved, see workaround part 3 and/or 4! A possible solution (not yet tested): XMODEM Baud Rate Mod for RC2014. Otherwise only the program DOWNLOAD on the RC2014 can be used. But this possibility is extremely slow and not reliable, at least for me. Have a look at Grant Searle: How to install applications. Here are my workarounds. All solutions work flawlessly. Workaround (part 1): Changing the clock frequency You have to change the clock frequency by setting the jumper of CLOCK 1 not to 7.3728 MHz (115.200 bps) but to 0.6144 MHz (9.600 bps). The whole RC2014 will be 10 times slower, just 0.61 MHz. However, this means that you can now easily copy files from the PC to the RC2014 with XMODEM. This now works fine at 9.600 bps. Once you have copied all files, jump back to 7.3728 MHz and restart the RC2014. That was it already. Workaround (part 2): Using FAT.COM (Wayne Warthen) This little inconspicuous program is awesome! You first have to prepare a CF card with FDISK80.COM for both CP/M slices and a primary DOS partition. Then initialize it with CLRDIR and finally format the DOS partition on the PC. Then insert the CF card back into the RC2014 and reboot. Now you can access the DOS partition with FAT.COM from CP/M, i.e. also copy files. File exchange is no longer a problem. Simply copy the CP/M files to the CF card under DOS and then copy them to the CP/M slice under CP/M on the RC2014.
Workaround (part 3): Using a RAM disk So far I have always saved downloads to the RC2014 on the CF card. But there is also the RAM disk. Lo and behold, it works perfectly. My settings: 115.200 bps, 8 data bits, no parity, 1 stop bit and hardware flow control. Finally you have to copy the files to the CF card! Otherwise the files will be lost when restarting. Workaround (part 4): Using the RC option Use XM with the option RC: A>B:XM RC TEST.COM Now the checksum protocol is selected within XM, and the transfer should work from the PC to the ZETA V2 or RC2014. A few impressions I loaded the example program CMDLIN.PAS, compiled it to the CF card and started it. Everything is fine. I loaded the text file SAMPLE.TXT. Everything is fine. Are you looking for a good text editor? Try WordStar in non-document mode. Simply good! Only vi under UNIX is better :-) - Spencer Owen: - rc2014.co.uk / Homepage - rc2014.co.uk / Troubleshooting - rc2014.co.uk / Decoding ROM labels - rc2014.co.uk / Software overview - github.com / RC2014/ROMs/Factory/RomWBW/ - tindie.com / Buy the RC2014 - Ed Brindley: - tindie.com / IDE Adapter PCB for the RC2014 - Stephen C. Cousins: - Small Computer Central / Homepage - Dr. Scott M. Baker: - RC2014 Floppy Controller Boards / Z80 Retrocomputing Part 14 - RC2014 Floppy Controller Boards / Order directly from Osh Park - Grant Searle: - CP/M on breadboard / Homepage - Wayne Warthen: - github.com / RomWBW - github.com / ROM releases - github.com / ROM_1024KB / Many additional programs - github.com / FAT / MS-DOS filesystem access / awesome! - github.com / FDISK80 / Setting up a HDD or CF-card / pdf file - Forums & groups
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540534443.68/warc/CC-MAIN-20191212000437-20191212024437-00244.warc.gz
CC-MAIN-2019-51
5,414
65
http://www.makelinux.net/man/1/C/clojure
code
This manual page briefly documents the clojure command, the main entry point for Clojure, a dynamic programming language that targets the Java Virtual Machine. With no options or arguments, it runs an interactive Read-Eval-Print Loop (REPL). A summary of options is included below. -cp, -classpath classpath Specifies additional classpath elements. Overrides the $CLASSPATH environment variable. Must appear first if given. -i, --init path Load a file or resource at initialisation time. May be specified multiple times. -e, --eval string Evaluate expressions in string; print non-nil values. May be specified multiple times. Run a repl Run a script from a file or resource Run a script from standard input -h, -?, --help Print a help message and exit A Clojure file can be provided as a path argument. Files to be loaded from the classpath must be prefixed with a `@' character, and must be an absolute path to the classpath resource. Note that the path will be treated as absolute within the classpath, whether it is prefixed with a slash or not. clojure binds *command-line-args* to a seq containing the (optional) arguments provided after the path argument; this provides a way to pass command-line arguments to your scripts. A listing of recognised environment variables is included below.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704943681/warc/CC-MAIN-20130516114903-00083-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,283
25
https://www.dell.com/community/Desktops-General/Inspiron-545-Desktop-best-GPU-Video-Card-with-stock-PSU-300w-8/m-p/3358106
code
Inspiron 545 Desktop; best GPU Video Card with stock PSU 300w Greetings Dell members. My question is this: What is the best GPU video card I can install on my Dell Inspiron 545 desktop PC with a stock power supply (PSU) of only 300 watts? From what I have read in the Dell forums, there seems to be talk of the ATI R4650 and NVidia 9500GT. Now, I was wondering if my stock 545 would be able to run the following GPU video card: Sapphire HD4650 1G DDR2 PCIE. Any expert advice would be appreciated. If there are better GPUs/video cards that my Dell 545 can run without upgrading the PSU, then I am very much interested to hear from you. Re: Inspiron 545 Desktop; best GPU Video Card with stock PSU 300w The HD 4670 would be a better card. An HD 4670 DDR3 with 512mb can be bought for the same cost as the HD 4650 w/1GB and is a much better video card for gaming purposes. Either of these is better than the 9500 GT, however. Dell Dimension 4600, P4 2.66 GHZ, 320GB SATA HD, Nvidia 6200, Vista Home Premium 32-bit Dell Optiplex GX280, P4 3.2 GHZ, 80GB SATA HD, Windows XP 32-bit Dell Inspiron 530, Q6600 Quad, 500GB HD, Vista Home Premium 32-bit Custom Build, Q6600 Quad, (2) 500GB SATA HDs, 4GB Ram, EVGA GTX 260 Core 216 SC, Corsair 650TX PSU, DVD RW, DVD Blu-ray, Windows 7 Ultimate 64-bit
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812841.74/warc/CC-MAIN-20180219211247-20180219231247-00485.warc.gz
CC-MAIN-2018-09
1,301
11
https://www.carol-bevitt.co.uk/2011/02/health-and-writer.html?showComment=1298031748303
code
I'd intended this post to be about writing health: making sure you're well supported in the chair you use at the computer (no dangling feet, and back supported, please), and keeping your eyes healthy when you spend a great deal of time at the computer screen (taking breaks and looking away from the screen for a few moments regularly). But I was in an accident today, as a passenger, so I'm starting to feel stiff and I'm not sure how much typing will be comfortable by tomorrow. So there may or may not be a weekend post, but I will be looking out for interesting things to write about next, even if I'm not up to immediate typing. Now where did I put those painkillers...
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305141.20/warc/CC-MAIN-20220127042833-20220127072833-00166.warc.gz
CC-MAIN-2022-05
673
4
https://deepai.org/publication/learning-to-reason-with-third-order-tensor-products
code
Recurrent neural networks (RNNs) such as the Long Short-Term Memory (LSTM) [lstm97and95; Gers:2000nc] are general computers, e.g., [siegelmann91turing]. LSTM-based systems achieved breakthroughs in various speech and Natural Language Processing tasks [googlevoice2015; wu2016google; facebook2017]. Unlike humans, however, current RNNs cannot easily extract symbolic rules from experience and apply them to novel instances in a systematic way [fodor1988connectionism; hadley1994systematicity]. They are catastrophically affected by systematic [fodor1988connectionism; hadley1994systematicity] differences between training and test data [still_not_systematic_lake; lake2017building; atzmon2016learning; phillips1995connectionism]. In particular, standard RNNs have performed poorly at natural language reasoning (NLR) [babi_tasks_weston], where systematic generalisation (such as rule-like extrapolation) is essential. Consider a network trained on a variety of NLR tasks involving short stories about multiple entities. One task could be about tracking entity locations ([…] Mary went to the office. […] Where is Mary?), another about tracking objects that people are holding ([…] Daniel picks up the milk. […] What is Daniel holding?). If every person is able to perform every task, this opens up a large number of possible person-task pairs. Now suppose that during training we only have stories from a small subset of all possible pairs. More specifically, let us assume Mary is never seen picking up or dropping any item. Unlike during training, we want to test on tasks such as […] Mary picks up the milk. […] What is Mary carrying?. In this case, the training and test data exhibit systematic differences. Nevertheless, a systematic model should be able to infer milk because it has adopted a rule-like, entity-independent reasoning pattern that generalises beyond the training distribution. RNNs, however, tend to fail to learn such patterns if the train and test data exhibit such differences.
Here we aim at improving systematic generalisation by learning to deconstruct natural language statements into combinatorial representations [BrousseCombinatorialRepresentations]. We propose a new architecture based on the Tensor Product Representation (TPR) [smolensky1990tensor], a general method for embedding symbolic structures in a vector space. Previous work already showed that TPRs allow for powerful symbolic processing with distributed representations [smolensky1990tensor], given certain manual assignments of the vector space embedding. However, TPRs have commonly not been trained from data through gradient descent. Here we combine gradient-based RNNs with third-order TPRs to learn combinatorial representations from natural language, training the entire system on NLR tasks via error backpropagation [Linnainmaa:1970; Kelley:1960; Werbos1990BTT]. We point out similarities to systems with Fast Weights [von1981correlation; feldman1982dynamic; hinton1987deblur], in particular, end-to-end-differentiable Fast Weight systems [Schmidhuber:92ncfastweights; Schmidhuber:93ratioicann; schlag2017gated]. In experiments, we achieve state-of-the-art results on the bAbI dataset [babi_tasks_weston], obtaining better systematic generalisation than other methods. We also analyse the emerging combinatorial and, to some extent, interpretable representations. The code we used to train and evaluate our models is available at github.com/ischlag/TPR-RNN. 2 Review of the Tensor Product Representation and Notation The TPR method is a mechanism to create a vector-space embedding of symbolic structures. To illustrate, consider the relation implicit in the short sentences "Kitty the cat" and "Mary the person". In order to store this structure in a TPR of order 2, each sentence has to be decomposed into two components by choosing a so-called filler symbol and a role symbol. A suitable set of filler and role symbols then gives a unique role/filler decomposition of each sentence.
The two relations are then described by a set of filler/role bindings. A distributed representation is then achieved by encoding each filler symbol by a filler vector f in a vector space V_F and each role symbol by a role vector r in a vector space V_R. In this work, every vector space is over the real numbers. The TPR of the symbolic structures is defined as the tensor T = sum_i f_i ⊗ r_i in the vector space V_F ⊗ V_R, where ⊗ is the tensor product operator. In this example the tensor is of order 2, a matrix, which allows us to write the equation of our example using matrix multiplication. Here, the tensor product (or generalised outer product) acts as a variable binding operator. The final TPR representation is a superposition of all bindings via element-wise addition. In the TPR method the so-called unbinding operator consists of the tensor inner product, which is used to exactly reconstruct previously stored variables from T using an unbinding vector. Recall that the algebraic definition of the dot product of two vectors a and b is the sum of the pairwise products of the elements of a and b. Equivalently, the tensor inner product can be expressed through the order-increasing tensor product followed by a sum of the pairwise products of corresponding elements. Given the unbinding vector u, we can then retrieve the stored filler f = T u. In the simplest case, if the role vectors are orthonormal, the unbinding vector for a role simply equals its role vector. Again, for a TPR of order 2 the unbinding operation can also be expressed using matrix multiplication. Note how the dot product and matrix multiplication are special cases of the tensor inner product. We will later use the tensor inner product of a tensor of order 3 (a cube) with a tensor of order 1 (a vector), which results in a tensor of order 2 (a matrix). Other aspects of the TPR method are not essential for this paper.
For further details, we refer to Smolensky’s work smolensky1990tensor ; smolensky2012symbolic ; basic_reasoningTPR_smolensky .

3 The TPR as a Structural Bias for Combinatorial Representations

A drawback of Smolensky’s TPR method is that the decomposition of the symbolic structures into structural elements — e.g. and in our previous example — is not learned but externally defined. Similarly, the distributed representations and are assigned manually instead of being learned from data, yielding arguments against the TPR as a connectionist theory of cognition fodor1990connectionism . Here we aim at overcoming these limitations by recognising the TPR as a form of Fast Weight memory which uses multi-layer perceptron (MLP) based neural networks trained end-to-end by stochastic gradient descent. Previous outer product-based Fast Weights Schmidhuber:93ratioicann , which share strong similarities with TPRs of order 2, have been shown to be powerful associative memory mechanisms Ba2016using ; schlag2017gated . Inspired by this capability, we use a graph interpretation of the memory where the representations of a node and an edge allow for the associative retrieval of a neighbouring node. For the context of this work, we refer to the nodes of such a graph as entities and to the edges as relations. This requires MLPs which deconstruct an input sentence into the source-entity , the relation , and the target-entity such that and belong to the vector space and to . These representations are then bound together with the binding operator and stored as a TPR of order 3, where we interpret multiple unbindings as a form of graph traversal. We use a simple example to illustrate the idea. For instance, consider the following raw input: "Mary went to the kitchen.". A possible three-way task-specific decomposition could be , , and . At a later point in time, a question like "Where is Mary?" would have to be decomposed into the vector representations and .
The vectors and have to be similar to the true unbinding vectors and in order to retrieve the previously stored but possibly noisy . We chose a graph interpretation of the memory due to its generality, as it can be found implicitly in the data of many problems. Another important property of a graph-inspired neural memory is the combinatorial nature of entities and relations, in the sense that any entity can be connected through any relation to any other entity. If the MLPs can disentangle entity-like information from relation-like information, the TPR will provide a simple mechanism to combine them in arbitrary ways. This means that if there is enough data for the network to learn specific entity representations such as then it should not require any more data or training to combine with any of the learned vectors embedded in even though such examples have never been covered by the training data. In Section 7 we analyse a trained model and present results which indicate that it indeed seems to learn representations in line with this perspective.

4 Proposed Method

RNNs can implement algorithms which map input sequences to output sequences. A traditional RNN uses one or several tensors of order 1 (i.e. vectors, usually referred to as the hidden state) to encode the information of the past sequence elements necessary to infer the correct current and future outputs. Our architecture is a non-traditional RNN encoding relevant information from the preceding sequence elements in a TPR of order 3. At discrete time , in the input sequence of varying length , the previous state is updated by the element-wise addition of an update representation . The proposed architecture is separated into three parts: an input, update, and inference module. The update module produces while the inference module uses as parameters (Fast Weights) to compute the output of the model given a question as input. is the zero tensor.
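The graph-style storage and traversal described above can be sketched with a third-order tensor. The following NumPy snippet uses orthonormal one-hot stand-ins for the learned entity and relation vectors (all names and dimensions are illustrative, not the model's learned representations):

```python
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

d_e, d_r = 8, 6  # entity / relation dimensions (illustrative)
mary, kitchen, apple = (one_hot(i, d_e) for i in range(3))
at, contains = one_hot(0, d_r), one_hot(1, d_r)

# Store two node-edge-node associations as a superposition of order-3
# tensor products: (mary --at--> kitchen), (kitchen --contains--> apple).
F = (np.einsum('i,j,k->ijk', mary, at, kitchen)
     + np.einsum('i,j,k->ijk', kitchen, contains, apple))

# One-step inference: the tensor inner product of F with a source entity
# and a relation retrieves the bound target entity.
step1 = np.einsum('ijk,i,j->k', F, mary, at)
assert np.allclose(step1, kitchen)

# Multi-step inference chains unbindings: the retrieved entity becomes
# the source of the next retrieval (graph traversal).
step2 = np.einsum('ijk,i,j->k', F, step1, contains)
assert np.allclose(step2, apple)
```

Because any entity vector can be bound with any relation vector, associations never seen together during storage can still be formed and retrieved, which is the combinatorial property the architecture exploits.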
Similar to previous work, our model also iterates over a sequence of sentences and uses an input module to learn a sentence representation from a sequence of words NIPS2015_memory_networks . Let the input to the architecture at time be a sentence of words with learned embeddings . The sequence is then compressed into a vector representation by where are learned position vectors that are shared across all input sequences and is the Hadamard product. The vectors , and are in the vector space . The TPR update is defined as the element-wise sum of the tensors produced by a write, move, and backlink function. We abbreviate the respective tensors as , , and and refer to them as memory operations. To this end, two entity and three relation representations are computed from the sentence representation using five separate networks such that where is an MLP network and its weights. The write operation allows for the storage of a new node-edge-node association (, , ) using the tensor product where represents the source entity, represents the target entity, and the relation connecting them. To avoid superimposing the new association onto a possibly already existing association (, , ), the previous target entity has to be retrieved and subtracted from the TPR. If no such association exists, then will ideally be the zero vector. While the write operation removes the previous target entity representation , the move operation allows us to rewrite back into the TPR with a different relation . Similar to the write operation, we have to retrieve and remove the previous target entity that would otherwise interfere. The final operation is the backlink. It switches source and target entities and connects them with yet another relation . This allows for the associative retrieval of the neighbouring entity starting from either one, but with different relations (e.g. John is left of Mary and Mary is right of John).
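A minimal sketch of the write operation, under the simplifying assumption of orthonormal one-hot entity and relation vectors (the move and backlink operations follow the same retrieve-and-subtract pattern with their own relations):

```python
import numpy as np

def retrieve(F, s, r):
    # Tensor inner product: contract the order-3 memory with source s and
    # relation r to get the currently bound target entity (or ~0 if none).
    return np.einsum('ijk,i,j->k', F, s, r)

def write(F, s, r, t):
    # Remove whatever target was previously bound to (s, r), then bind t.
    t_prev = retrieve(F, s, r)
    return F + np.einsum('i,j,k->ijk', s, r, t - t_prev)

d_e, d_r = 8, 6
F = np.zeros((d_e, d_r, d_e))
mary, kitchen, garden = np.eye(d_e)[0], np.eye(d_e)[1], np.eye(d_e)[2]
at = np.eye(d_r)[0]

F = write(F, mary, at, kitchen)  # "Mary went to the kitchen."
F = write(F, mary, at, garden)   # "Mary went to the garden."

# The stale association was overwritten rather than superimposed.
assert np.allclose(retrieve(F, mary, at), garden)
```

In the model itself the entity and relation vectors come from trained MLPs and are generally not orthonormal, so the subtraction only approximately removes the old association.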
One of our experiments requires a single prediction after the last element of an observed sequence (i.e. the last sentence). This final element is the question sentence representation . Since the inference module does not edit the TPR memory, it is sufficient to compute the prediction only when necessary. Hence we drop the index in the following equations. Similar to the update module, first an entity and a set of relations are extracted from the current sentence using four different networks. The extracted representations are used to retrieve one or several previously stored associations by providing the necessary unbinding vectors. The values of the TPR can be thought of as context-specific weights which are not trained by gradient descent but constructed incrementally during inference. They define a function that takes the entity and relations as an input. A simple illustration of this process is shown in Figure 2. The most basic retrieval requires one source entity and one relation to extract the first target entity. We refer to this retrieval as a one-step inference and use the additional extracted relations to compute multi-step inferences. Here refers to layer normalization Ba2017LayerNorm which includes a learned scaling and shifting scalar. As in other Fast Weight work, it improves our training procedure, possibly by making the optimization landscape smoother batch_norm_smoothes . Finally, the output of our architecture consists of the sum of the three previous inference steps followed by a linear projection into the symbol space, where a softmax transforms the activations into a probability distribution over all words from the vocabulary of the current task.

5 Related Work

To our knowledge, our system is the first with a TPR of order 3 trained on raw data by backpropagation Linnainmaa:1970 ; Kelley:1960 ; Werbos1990BTT .
However, previous work used TPRs of order 2 for simpler associations in the context of image-caption generation TPGN_captioning , question-answering TPRN_squad , and general NLP huang2018attentive problems with a gradient-based optimizer similar to ours. The central operation of an order-2 TPR is the outer product of two vectors. This relates to many previous ideas, most notably Hebbian learning Hebb:49 , which partially inspired differentiable, outer product-based Fast Weight architectures Schmidhuber:93ratioicann that learn context-dependent weight changes through error backpropagation; compare even earlier work on differentiable Fast Weights Schmidhuber:91fastweights . Variations of such outer product-based Fast Weights were able to generalise in a variety of small but complex sequence problems where standard RNNs tend to perform poorly Ba2016using ; schlag2017gated ; miconi2018differentiable . RNNs are popular choices for modelling natural language. Despite ongoing research in RNN architectures, the good old LSTM lstm97and95 has been shown to outperform more recent variants lstm_still_sota_melis on standard language modelling datasets. However, such networks do not perform well in NLR tasks such as question answering babi_tasks_weston . Recent progress came through the addition of memory and attention components to RNNs. In the context of question answering, a popular line of research is memory networks memory_networks_weston ; dynamic_memory_networks_kumar ; weakly_memory_networks_sukhbaatar ; gated_endtoend_memory_networks_Liu ; dynamic_memory_networks_2_xiong . But it remains unclear whether mistakes in trained models arise from imperfect logical reasoning, knowledge representation, or insufficient data, due to the difficulty of interpreting their internal representations dupoux_beyond_toytasks .
Some early memory-augmented RNNs focused primarily on improving the ratio of the number of trainable parameters to memory size Schmidhuber:93ratioicann . The Neural Turing Machine Graves2014NTM was among the first models with an attention mechanism over external memory that outperformed standard LSTM on tasks such as copying and sorting. The Differentiable Neural Computer (DNC) further refined this approach graves2016 ; sparse_dnc_rae , yielding strong performance also on question-answering problems.

6 Experiments

We evaluate our architecture on the bAbI tasks, a set of 20 different synthetic question-answering tasks designed to evaluate NLR systems such as intelligent dialogue agents babi_tasks_weston . Every task addresses a different form of reasoning. Each sample consists of a story - a sequence of sentences - followed by a question sentence with a single-word answer. We used the train/validation/test split as it was introduced in v1.2 for the 10k-samples version of the dataset. We ignored the provided supporting facts that simplify the problem by pointing out sentences relevant to the question. We only show story sentences once and before the query sentence, with no additional supervision signal apart from the prediction error. We experiment with two models. The single-task model is only trained and tested on the data from one task but uses the same computational graph and hyper-parameters for all. The all-tasks model is a scaled-up version trained and tested on all tasks simultaneously, using only the default hyper-parameters. More details such as specific hyper-parameters can be found in Appendix A. In Tables 1 and 2 we compare our model to various state-of-the-art models in the literature. We added best results for a better comparison to earlier work which did not provide statistics generated from multiple runs. Our system outperforms the state-of-the-art in both settings. We also seem to outperform the DNC in convergence speed, as shown in Figure 3.
|Task|REN recurrent_entity_networks_henaff|DNC graves2016|SDNC sparse_dnc_rae|TPR-RNN (ours)|
|Avg Error|9.7 ± 2.6|12.8 ± 4.7|6.4 ± 2.5|1.34 ± 0.52|
|Failure (>5%)|5 ± 1.2|8.2 ± 2.5|4.1 ± 1.6|0.86 ± 1.11|

Mean and variance of the test error for the all-tasks setting. We perform early stopping according to the validation set. Our statistics are generated from 10 runs.

|Task|LSTM weakly_memory_networks_sukhbaatar|N2N weakly_memory_networks_sukhbaatar|DMN+ dynamic_memory_networks_2_xiong|REN recurrent_entity_networks_henaff|TPR-RNN (ours, best)|TPR-RNN (ours, mean)|
|Avg Error|36.4|4.2|2.8|0.5|0.17|1.12 ± 1.19|
|Failure (>5%)|16|3|1|0|0|0.4 ± 0.55|

We ran ablation experiments on every task to assess the necessity of the three memory operations. The experimental results in Table 3 indicate that a majority of the tasks can be solved by the write operation alone. This is surprising at first, because for some of those tasks the symbolic operations that a person might think of as ideal typically require more complex steps than what the write operation allows for.

|Operations|Failed tasks (err > 5%)|
| |3, 6, 9, 10, 12, 13, 17, 19|
| |9, 10, 13, 17|

However, the optimizer seems to be able to find representations that overcome the limitations of the architecture. That said, more complex tasks do benefit from the additional operations without affecting the performance on simpler tasks.

7 Analysis

Here we analyse the representations produced by the MLPs of the update module. We collect the set of unique sentences across all stories from the validation set of a task and compute their respective entity and relation representations , , , , and . In Figure 4 we show such similarity matrices for a model trained on task 3. The image based on shows 4 distinct clusters, which indicates that the learned representations are almost perfectly orthogonal. By comparing the sentences from different clusters it becomes apparent that they represent the four entities independently of other factors.
Note that the dimensionality of this vector space is 15, which seems larger than necessary for this task. In the case of we observe that the sentences seem to group into three, albeit less distinct, clusters. In this task, the structure in the data implies three important events for any entity: moving to a location, binding with an object, and unbinding from a previously bound object; all three represented by a variety of possible words and phrases. By comparing sentences from different clusters, we can clearly associate them with the three general types of events. We observed clusters of similar discreteness in all tasks, often with a semantic meaning that becomes apparent when we compare sentences that belong to different clusters. We also noticed that even though there are often clean clusters, they are not always perfectly combinatorial, e.g., in as seen in Figure 4, we found two nearly orthogonal clusters for the target entity symbols and . We conduct an additional experiment to empirically analyse the model’s capability to generalise in a systematic way fodor1988connectionism ; hadley1994systematicity . For this purpose, we join together all tasks which use the same four entity names with at least one entity appearing in the question (i.e. tasks 1, 6, 7, 8, 9, 11, 12, 13). We then augment this data with five new entities such that the train and test data exhibit systematic differences. The stories for a new entity are generated by randomly sampling 500 story/question pairs from a task such that in 20% of the generated stories the new entity is also contained in the question. We then add generated stories from all possible 40 combinations of new entities and tasks to the test set. To the training set, however, we only add stories from a subset of all tasks. More specifically, the new entities are Alex, Glenn, Jordan, Mike, and Logan, for which we generate training-set stories from , , , , of the tasks respectively.
We summarize the results in Figure 5 by averaging over tasks. After the network has been trained, we find that our model achieves high accuracy on entity/task pairs on which it has not been trained. This indicates its systematic generalisation capability due to the disentanglement of entities and relations. Our analysis and the additional experiment indicate that the model seems to learn combinatorial representations, resulting in interpretable distributed representations and data efficiency due to rule-like generalisation. To compute the correct gradients, an RNN with external memory trained by backpropagation through time must store all values of all temporary variables at every time step of a sequence. Since outer product-based Fast Weights Schmidhuber:93ratioicann ; schlag2017gated and our TPR system have many more time-varying variables per learnable parameter than a classic RNN such as LSTM, they are less scalable in terms of memory requirements. The problem can be overcome through RTRL WilliamsZipser:92 ; RobinsonFallside:87tr , but only at the expense of greater time complexity. Nevertheless, our results illustrate how the advantages of TPRs can outweigh such disadvantages for problems of a combinatorial nature. One difficulty of our Fast Weight-like memory is the well-known vanishing gradient problem Hochreiter:91 . Due to the multiplicative interaction of Fast Weights with RNN activations, forward and backward propagation is unstable and can result in vanishing or exploding activations and error signals. A similar effect may affect the forward pass if the values of the activations are not bounded by some activation function. Nevertheless, in our experiments, we abandoned bounded TPR values as they significantly slowed down learning with little benefit.
Although our current sub-optimal initialization may occasionally lead to exploding activations and NaN values after the first few iterations of gradient descent, we did not observe any extreme cases after a few dozen successful steps, and therefore simply reinitialize the model in such cases. A direct comparison with the DNC is a bit inconclusive for the following reasons. Our architecture uses a sentence encoding layer, similar to how many memory networks encode their input. This slightly facilitates the problem since the network doesn’t have to learn which words belong to the same sentence. Most memory networks also iterate over sentence representations, which is less general than iterating over the word level as the DNC does, which in turn is less general than iterating over the character level. In preliminary experiments, a word-level variation of our architecture solved many tasks, but it may require non-trivial changes to solve all of them. Our novel RNN-TPR combination learns to decompose natural language sentences into combinatorial components useful for reasoning. It outperforms previous models on the bAbI tasks through attentional control of memory. Our approach is related to Fast Weight architectures, another way of increasing the memory capacity of RNNs. An analysis of a trained model suggests straightforward interpretability of the learned representations. Our model generalises better than a previous state-of-the-art model when there are strong systematic differences between training and test data. We thank Paulo Rauber, Klaus Greff, and Filipe Mutz for helpful comments and helping hands. We are also grateful to NVIDIA Corporation for donating a DGX-1 as part of the Pioneers of AI Research Award and to IBM for donating a Minsky machine. This research was supported by a European Research Council Advanced Grant (no: 742870).
- P. J. Werbos. Generalization of backpropagation with application to a recurrent gas market model.
Neural Networks, 1, 1988.
- R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. In Back-propagation: Theory, Architectures and Applications. Hillsdale, NJ: Erlbaum, 1994.
- A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.
- S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997. Based on TR FKI-207-95, TUM (1995).
- F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with LSTM. Neural Computation, 12(10):2451–2471, 2000.
- H. T. Siegelmann and E. D. Sontag. Turing computability with neural nets. Applied Mathematics Letters, 4(6):77–80, 1991.
- Hasim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays, and Johan Schalkwyk. Google voice search: faster and more accurate. Google Research Blog, 2015, http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html.
- Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s neural machine translation system: Bridging the gap between human and machine translation. Preprint arXiv:1609.08144, 2016.
- J.M. Pino, A. Sidorov, and N.F. Ayan. Transitioning entirely to neural machine translation. Facebook Research Blog, 2017, https://code.facebook.com/posts/289921871474277/transitioning-entirely-to-neural-machine-translation/.
- Jerry A Fodor and Zenon W Pylyshyn. Connectionism and cognitive architecture: A critical analysis. Cognition, 28(1-2):3–71, 1988.
- Robert F Hadley. Systematicity in connectionist language learning. Mind & Language, 9(3):247–272, 1994. - Brenden M. Lake and Marco Baroni. Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. CoRR, abs/1711.00350, 2017. - Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 2017. - Yuval Atzmon, Jonathan Berant, Vahid Kezami, Amir Globerson, and Gal Chechik. Learning to generalize to new compositions in image understanding. arXiv preprint arXiv:1608.07639, 2016. - Steven Andrew Phillips. Connectionism and the problem of systematicity. PhD thesis, University of Queensland, 1995. - Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy tasks. CoRR, abs/1502.05698, 2015. - Olivier J. Brousse. Generativity and Systematicity in Neural Network Combinatorial Learning. PhD thesis, Boulder, CO, USA, 1992. UMI Order No. GAX92-20396. - Paul Smolensky. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial intelligence, 46(1-2):159–216, 1990. - S. Linnainmaa. The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master’s thesis, Univ. Helsinki, 1970. - H. J. Kelley. Gradient theory of optimal flight paths. ARS Journal, 30(10):947–954, 1960. - Paul J. Werbos. Backpropagation through time: What it does and how to do it. Proceedings of the IEEE, 78(10):1550–1560, 1990. - von der Malsburg. The Correlation Theory of Brain Function. Internal report. Department of Neurobiology, Max-Planck-Institute for Biophysical Chemistry. 1981. - Jerome A Feldman. Dynamic connections in neural networks. Biological cybernetics, 46(1):27–39, 1982. - Geoffrey E Hinton and David C Plaut. Using fast weights to deblur old memories. 
In Proceedings of the ninth annual conference of the Cognitive Science Society, pages 177–186, 1987.
- J. Schmidhuber. Learning to control fast-weight memories: An alternative to recurrent nets. Neural Computation, 4(1):131–139, 1992.
- J. Schmidhuber. On decreasing the ratio between learning complexity and number of time-varying variables in fully recurrent nets. In Proceedings of the International Conference on Artificial Neural Networks, Amsterdam, pages 460–463. Springer, 1993.
- Imanol Schlag and Jürgen Schmidhuber. Gated fast weights for on-the-fly neural program generation. In NIPS Metalearning Workshop, 2017.
- Paul Smolensky. Symbolic functions from neural computation. Phil. Trans. R. Soc. A, 370(1971):3543–3569, 2012.
- Paul Smolensky, Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao, and Li Deng. Basic reasoning with tensor product representations. CoRR, abs/1601.02745, 2016.
- Jerry Fodor and Brian P McLaughlin. Connectionism and the problem of systematicity: Why smolensky’s solution doesn’t work. Cognition, 35(2):183–204, 1990.
- Jimmy Ba, Geoffrey E Hinton, Volodymyr Mnih, Joel Z Leibo, and Catalin Ionescu. Using fast weights to attend to the recent past. In Advances In Neural Information Processing Systems, pages 4331–4339, 2016.
- Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440–2448. Curran Associates, Inc., 2015.
- Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.
- S. Santurkar, D. Tsipras, A. Ilyas, and A. Madry. How Does Batch Normalization Help Optimization? (No, It Is Not About Internal Covariate Shift). ArXiv e-prints, May 2018.
- Qiuyuan Huang, Paul Smolensky, Xiaodong He, Li Deng, and Dapeng Oliver Wu. Tensor product generation networks. CoRR, abs/1709.09118, 2017.
- Hamid Palangi, Paul Smolensky, Xiaodong He, and Li Deng. Deep learning of grammatically-interpretable representations through question-answering. CoRR, abs/1705.08432, 2017. - Qiuyuan Huang, Li Deng, Dapeng Wu, Chang Liu, and Xiaodong He. Attentive tensor product learning for language generation and grammar parsing. arXiv preprint arXiv:1802.07089, 2018. - D. O. Hebb. The Organization of Behavior. Wiley, New York, 1949. - J. Schmidhuber. Learning to control fast-weight memories: An alternative to recurrent nets. Technical Report FKI-147-91, Institut für Informatik, Technische Universität München, March 1991. - Thomas Miconi, Jeff Clune, and Kenneth O Stanley. Differentiable plasticity: training plastic neural networks with backpropagation. arXiv preprint arXiv:1804.02464, 2018. - Gábor Melis, Chris Dyer, and Phil Blunsom. On the state of the art of evaluation in neural language models. CoRR, abs/1707.05589, 2017. - Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. - Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015. - Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. Weakly supervised memory networks. CoRR, abs/1503.08895, 2015. - Julien Perez and Fei Liu. Gated end-to-end memory networks. CoRR, abs/1610.04211, 2016. - Caiming Xiong, Stephen Merity, and Richard Socher. Dynamic memory networks for visual and textual question answering. CoRR, abs/1603.01417, 2016. - Emmanuel Dupoux. Deconstructing ai-complete question-answering: going beyond toy tasks, 2015. - Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, abs/1410.5401, 2014. 
- Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwinska, Sergio Gomez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, Adrià Puigdomenech Badia, Karl Moritz Hermann, Yori Zwols, Georg Ostrovski, Adam Cain, Helen King, Christopher Summerfield, Phil Blunsom, Koray Kavukcuoglu, and Demis Hassabis. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
- Jack W. Rae, Jonathan J. Hunt, Tim Harley, Ivo Danihelka, Andrew W. Senior, Greg Wayne, Alex Graves, and Timothy P. Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. CoRR, abs/1610.09027, 2016.
- Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. Tracking the world state with recurrent entity networks. In International Conference on Learning Representations (ICLR2017). Preprint arXiv:1612.03969.
- S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991. Advisor: J. Schmidhuber.
- Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Yee Whye Teh and Mike Titterington, editors, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.
- Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S.
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
- Timothy Dozat. Incorporating Nesterov momentum into Adam. In International Conference on Learning Representations (ICLR2016). CBLS, April 2016. OpenReview.net ID: OM0jvwB8jIp57ZJjtNEZ.

Appendix A Experimental Details

We encode the valid words for a task as a one-hot vector; the dimensionality of the vector space is equal to the size of the vocabulary. Each MLP which produces the entity and relation representations from a sentence representation consists of two layers, where each layer is an affine transformation followed by the hyperbolic tangent nonlinearity. The hidden layers of the MLPs refer to the intermediate activations and are vectors from the vector space . We initialize the word embeddings with a uniform distribution from to and apply the Glorot initialization scheme for all other weights, except the position vectors, which are initialized as a vector of ones divided by , the number of position vectors. We implemented the model in the TensorFlow framework and compute the gradients through its automatic differentiation engine, based on Linnainmaa's automatic differentiation or backpropagation scheme. We pad shorter sentences with the padding symbol to achieve a uniform sentence length but keep the story length dynamic as in previous work.
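A minimal sketch of one such two-layer MLP with tanh nonlinearities (the dimensions and initialization scale below are illustrative placeholders, not the paper's hyper-parameters):

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Two affine layers, each followed by the hyperbolic tangent.
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)

d_s, d_h, d_e = 10, 12, 8  # sentence / hidden / entity dims (illustrative)
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((d_h, d_s)), np.zeros(d_h)
W2, b2 = 0.1 * rng.standard_normal((d_e, d_h)), np.zeros(d_e)

# Map a sentence representation to an entity representation; the tanh
# keeps every component of the output in (-1, 1).
e = mlp(rng.standard_normal(d_s), W1, b1, W2, b2)
assert e.shape == (d_e,) and np.all(np.abs(e) < 1.0)
```

One such network is used per entity or relation head; only the output dimensionality differs between entity heads and relation heads.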
To deal with possibly unstable initializations we incorporate a warm-up phase in which we train the network for 50 steps with a fraction of the learning rate. If NaN values occur during this warm-up phase, we reinitialize the network from scratch. After a successful warm-up phase we never encountered any further instabilities. We optimize the neural networks using the Nadam optimizer, which in our experiments consistently outperformed others in convergence speed, but not necessarily in final performance. Finally, we multiply the learning rate by a constant factor once the validation set loss drops below a threshold.
The Single-Task Model
For the single-task model , , and . Note that depends on the vocabulary size of each individual task. We achieved the results in Table 2 using the hyper-parameters , , , and a batch size of 128. These hyper-parameters were optimized using an evolution procedure with small random perturbations. The main effect is improved convergence speed. With the exception of a few tasks that were sensitive to the momentum parameter, similar final performance can be achieved with the default hyper-parameters.
The All-Tasks Model
In the all-tasks setting we train one model on all tasks simultaneously. We increase the size of the model to , , and , and train with a batch size of 32. We used the default hyper-parameters , , in that case.
Appendix B Detailed All-Tasks Training Runs
|all (0)||1.50||1.69||1.13||1.04||0.78||0.96||1.20||2.40||0.78||1.34 0.52|
Appendix C Further Similarity Matrices
(This appendix consisted of similarity-matrix figures; only their axis labels, the numbered story and question sentences, survived extraction, and they are omitted here.)
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057830.70/warc/CC-MAIN-20210926053229-20210926083229-00410.warc.gz
CC-MAIN-2021-39
41,770
258
https://sourcecodebrowser.com/glibc/2.9/sparc_2sparc32_2soft-fp_2q__lltoq_8c.html
code
Go to the source code of this file. Definition at line 26 of file q_lltoq.c.

long double c;
long long b = a;
FP_FROM_INT_Q(C, b, 64, unsigned long long);  /* C is declared earlier in the function via FP_DECL_Q(C) */
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720000.45/warc/CC-MAIN-20161020183840-00102-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
167
6
http://artariq.com/index_test.html
code
An enterprise innovation project on which I was the Design Lead (the names, details, and screens have been reworked in order to respect the NDA this project is under) In my pursuit to redo my personal website, I came across a number of amazing portfolios and projects from UX / Interaction Designers around the world. I learned a lot from them, and I thought others might too. So, I decided to curate them and share them with everyone. In 2013, I developed a fascination for how to improve one's creative capacity. As I poured into the literature, I also saw a rise in popularity of online education. I decided to package up my learnings into a course on how to use various creative thinking tools to generate ideas and solutions. SOME TALKS / WORKSHOPS
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700560.62/warc/CC-MAIN-20191020001515-20191020025015-00281.warc.gz
CC-MAIN-2019-43
753
4
https://www.get1000visitors.com/howtofindkeywords/how-to-find-keywords-for-any-website-how-to-find-keywords-in-eclipse.html
code
Positive reviews from hundreds of users have shown that Long Tail Pro by Spencer Haws is a supportive tool for internet workers. It is a reliable tool which can help you stay on top of your competition by ranking highly on the search engines. You can test run the program before you buy it. It is highly recommended for every internet marketer. Keyword Researcher is an easy-to-use keyword discovery tool. Once activated, it emulates a human using Google Autocomplete and repeatedly types thousands of queries into Google. Each time a partial phrase is entered, Google tries to predict what it thinks the whole phrase might be. We simply save this prediction. And, as it turns out, when you do this for every letter of the alphabet (A-Z), you're left with hundreds of great long-tail keyword phrases. 1) Google Keyword Planner: This tool is fantastic because it can help me to identify long-tail keywords for my niche. It is Google's official tool and it has the recent trends and keyword variations. For example, you may think that the keyword "buy ipad air in liverpool" is great, but Google may suggest "iPad air sale Liverpool". It is not always accurate, but when I use it alongside the other tools I can get a clear idea. One of the most important aspects of an effective SEO strategy is the ability to research, analyze, and ultimately select the keywords that are most likely to result in success for your clients. There are a variety of free tools available on the web specifically designed to help online marketers do just this. Each tool has its own unique methodology for collecting and presenting this data. Comparing any of the tools' results without knowing the subtle differences can lead to incorrect inferences and an SEO strategy based on misinformation. For the first few months, there was certainly a transition period. However, the new team was able to take over the reins fairly quickly.
A big reason for the quick transition is that I had automated most of the business already. So, the fact that I was stepping away didn't make a huge difference, since all the sales, marketing emails, and many other details were already happening on an automated basis. Wordstream is a free keyword tool that makes it easy and fast to get the keywords that your business needs most in order to drive traffic through paid and organic search. All you need to do is enter a website URL or keyword and you will get hundreds of relevant keyword results that are tailored to your country or industry. Every keyword has an estimated CPC, competition score, and a proprietary opportunity score that will assist you in budgeting for your online campaigns. You can download your list in CSV format, upload it to AdWords directly, and begin to work on your new campaigns. You can also filter by query, which is useful when looking at branded queries, or when looking at specific words. For example, only show keywords that include the term "SEO". The graph also allows you to spot trends across the available metrics and compare week-on-week or month-on-month. This can help you to drill down and monitor progression over time, allowing you to answer questions like "have my branded keywords received more clicks in the last month compared to the previous month?", "has the CTR improved?", "did average positions in Google improve?". From time to time, there are discounts or promotions available for Long Tail Pro. Whenever these become available, all links on my site that lead to the Long Tail Pro sales page will reflect the discounted price. If there is a huge sale or a massive deal, I always let my email subscribers know about it as soon as possible. If you're not on my email list yet, feel free to sign up by clicking the button below. I'll even send you one of my FREE guides! Let me show you an example.
I just surfed over to a very popular blog called The Busy Budgeter (www.busybudgeter.com) to find an example topic. Rosemarie is the blogger over there and I've been following her journey recently. She's absolutely killing it online! But I happen to know that she gets most of her traffic from Pinterest and that working on SEO is one objective she wishes to tackle. So, let's see if we can help her out… A quick scan of her homepage tells me what type of content she produces and who her audience is. It appears her audience is largely female, with lots of stay-at-home moms, and a good portion of her content revolves around teaching others how to make money online. I would like to get LTP, but I've read that people have some issues with Keyword Planner and Moz. Do you still recommend this tool? How accurate is its data compared to previous versions? Does Keyword Difficulty still work as it should? Maybe it's better to wait until this software stabilizes? I don't want to waste money on a tool which doesn't work properly. This is a rather crude metric because it presumes one can monetize all the traffic they receive AND one can generate as much profit per visitor as Google does. Anyone who could do both of those would likely displace Google as the first consumer destination in their market (like how many people in the United States start ecommerce searches on Amazon.com rather than Google.com).
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675409.61/warc/CC-MAIN-20191017145741-20191017173241-00471.warc.gz
CC-MAIN-2019-43
5,342
11
https://www.npmjs.com/package/lodash._isnative
code
The internal Lo-Dash function isNative as a Node.js module generated by lodash-cli. With npm private modules, you can use the npm registry to host your own private code and the npm command line to manage it.
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278385.58/warc/CC-MAIN-20160524002118-00116-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
334
4
https://de.mathworks.com/matlabcentral/fileexchange/71894-response-envelope-analysis-rea
code
Response Envelope Analysis (REA)
Updated 20 Jun 2019
A key step in drug combination analysis is the selection of an additivity model to identify combination effects including synergy, additivity and antagonism. Existing methods for identifying and interpreting those combination effects have limitations. We present here a computational framework, termed response envelope analysis (REA), that makes use of 3D response surfaces formed by generalized Loewe Additivity and Bliss Independence models of interaction to evaluate drug combination effects. Because the two models imply two extreme limits of drug interaction (mutually exclusive and mutually non-exclusive), a response envelope defined by them provides a quantitatively stringent additivity model for identifying combination effects without knowing the inhibition mechanism. As a demonstration, we apply REA to representative published data from large screens of anticancer and antibiotic combinations. We show that REA is more accurate than existing methods and provides more consistent results in the context of cross-experiment evaluation.
Razor (2023). Response Envelope Analysis (REA) (https://github.com/4dsoftware/rea), GitHub. Retrieved .
Du, Di, et al. "Response Envelope Analysis for Quantitative Evaluation of Drug Combinations." Bioinformatics, edited by Jonathan Wren, Oxford University Press (OUP), Mar. 2019, doi:10.1093/bioinformatics/btz091.
Platform Compatibility: Windows, macOS, Linux
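As a quick illustration of one of the two reference models that REA envelopes, Bliss Independence predicts the combined fractional effect of two non-interacting drugs from their individual effects. This toy function is my own sketch of the standard Bliss formula, not code from the REA package.

```python
def bliss_expected(fa, fb):
    """Expected combined fractional inhibition of two drugs under
    Bliss Independence (mutually non-exclusive action)."""
    if not (0.0 <= fa <= 1.0 and 0.0 <= fb <= 1.0):
        raise ValueError("fractional effects must lie in [0, 1]")
    # Probabilistic independence: 1 - (1 - fa) * (1 - fb)
    return fa + fb - fa * fb

# A measured combination effect above this expectation suggests synergy,
# below it antagonism, and near it additivity.
print(bliss_expected(0.5, 0.5))  # 0.75
```

Loewe Additivity, the other boundary of the envelope, instead treats the two drugs as dilutions of one another and generally requires the full dose-response curves rather than a closed-form combination like this.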
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00108.warc.gz
CC-MAIN-2023-14
1,784
12
https://www.thegamecreators.com/post/appgamekit-game-rush-to-adventure-developer-interview
code
Rush to Adventure, a new game created in AppGameKit, was released on Steam by indie developer Digital Awakening. Rush to Adventure is a retro fantasy speedrunning adventure, inspired by classic NES titles like Zelda, Mario and Castlevania. You awaken on the shore of a cursed island, and your only way off the island is to fight the monsters and lift the curse. Magnus Esko, the creator of Rush to Adventure, spoke to us about the game's development and his plans for the future.
Tell us a little about yourself, where you live, where you work, if you code on your own or in a team etc.
I live in Piteå, a town in the north of Sweden. I used to work at a call center that was moved to Stockholm this year. Before I left I was in charge of our knowledge base and e-learning courses. I code on my own. Everything in the game is made by me except for the music, which is done by a friend of mine.
Which Tier of AppGameKit did you use to code Rush to Adventure?
It is made in Tier 1. I love how easy and straightforward it is to code. The distance between a thought and making it work is quite short.
What's your inspiration for the game?
The biggest inspiration for Rush to Adventure is Zelda 2: The Adventure of Link. But other NES games also influenced me, most notably Zelda 1 and the Super Mario Bros and Castlevania series.
It's clear you have spent a lot of time developing the game. Did you have a clear vision or did it evolve over time?
The game is perhaps 50% feature creep. When I started it was the golden age of indies on mobile. It all started as a small mobile game back in April of 2012. For example, I had no intention of having the player character on the map screen. You just swiped and tapped the screen. Which still works. I also didn't have much of a plan on how to implement things but just figured things out as I went.
If there were technical hurdles that challenged you, how did you overcome them?
When I started out I could not get the physics working the way I wanted, so I wrote my own basic physics using the built-in collision system. The game has very tight controls and I am not sure physics is the best way to handle those.
Any plans to release Mac/Linux versions or on other platforms?
I want to release on Mac, Linux and Android consoles. I also plan on a mobile release, but I no longer consider that a priority.
Do you have any advice for other game developers?
While it's wise not to make games that are too large, under-scoping is also not advised. That should be less of a problem with today's hardware. But keep your code flexible and easy to expand in case you need/want to. Also, use lots of small functions and source files. Large chunks of code are really hard to work with when you forget what everything does. Separate things and use very descriptive naming and you avoid future headaches.
Feel free to tell us anything else about the game and your hopes for the product.
Every time someone says that they like my game, or is willing to pay for it, brings me happiness. Going to conventions and watching people enjoy the game, even playing it multiple times, has been great. One of the goals with finishing the game has always been to have something to showcase my skills with. But I do hope that it at least sells enough to cover the time invested. Even better if it sells enough to finance my next project.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589618.52/warc/CC-MAIN-20180717070721-20180717090721-00509.warc.gz
CC-MAIN-2018-30
3,351
21
http://softwaretopic.informer.com/skype-how-to-buy-music-from-zune/
code
Jeaks Music is a toolbar for Internet Explorer and Mozilla Firefox.
Amazon Digital Services LLC: Listen to online music from millions of songs and download them for offline use.
This program lets you convert videos into a format compatible with Zune.
Camersoft Skype Recorder can record Skype video and convert videos.
Simply put, Zune is a way for you to enjoy music and video anywhere you go.
PrettyMay Call Recorder for Skype (PMCRS) is a powerful Skype add-on.
Pro Data Doctor Pvt Ltd: Zune Music Recovery recovers and retrieves files that were lost on a 30 GB Zune.
Anyviewsoft Zune Video Converter is a Zune music video converter.
The program is designed to make it easy to find, buy and download popular music.
It's a revolutionary new way to buy and watch a world-class library of music.
Buy and download classical music from the ArkivMusic website.
Rip audio from DVD for iPod, iPhone, Zune and other music players.
Skype Exporter is a Skype backup/restore utility for Skype.
Hack Skype Passwords Ltd: Find Skype passwords with Skype Hacker, the free Skype hacking tool.
This tool is an easy-to-use Zune movie converter for Microsoft Zune and Zune HD.
Building figures of blooming flowers. Game: Buy time, Buy flowers, Kid game,...
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313501.0/warc/CC-MAIN-20190817222907-20190818004907-00440.warc.gz
CC-MAIN-2019-35
1,245
19
https://www.mssqltips.com/sqlservertip/5400/introduction-to-azure-sql-database-managed-instances/
code
You have heard about Azure SQL Database Managed Instances, but you are not sure what they are and how you might be able to use them. In this post, I will cover what a Managed Instance is and how it is positioned, so that you can consider how you might be able to use it. Put very simply, Azure SQL Database Managed Instances are another flavour and deployment option of Azure SQL Database. They are a managed Platform as a Service (PaaS) database offering, but with a far greater level of parity with the retail SQL Server product that we are all familiar with. Managed Instances give you capabilities whose absence previously prevented many database systems from being moved to Azure SQL Database, including cross-database queries and SQL Server Agent, among other items. Whereas Azure SQL Database is a database-centric platform, Managed Instances shift that scope to the instance level. This gives you the ability to make use of a managed platform with built-in high availability and backup management, allowing for point-in-time recovery should you need it. There is also the added benefit that, as a PaaS solution, Managed Instances remove the headaches associated with the regular patching of the operating system and SQL Server software that DBAs have traditionally had to manage. Managed Instance Architecture Azure SQL Database Managed Instances are built on the Azure platform and are a tightly integrated PaaS offering that competes with the likes of RDS for SQL Server from Amazon. However, there are key differences in the architecture that make Managed Instances a different proposition. Key among these differences is the fact that Managed Instances are built on Azure SQL Database, meaning they are running the latest code that Microsoft has deployed, whereas RDS for SQL Server runs retail SQL Server versions on a VM, abstracted from the user. The General Purpose Managed Instance service, unlike Azure SQL Database, is defined by vCPU core count.
Specifically, there will be eight, sixteen, and twenty-four core compute options available for us to select. Additionally, the amount of memory will vary based on core count, with Microsoft advising that there will be 5-7 GB of memory per core. This means that the smallest configuration would be 8 cores with ~40 GB of RAM, all the way up to the largest at 24 cores with ~168 GB of RAM. There will be no artificial limitations on the throughput of the storage that is connected to a Managed Instance; it will be limited by the capabilities of the Azure Premium Storage platform on which it is built. General Purpose Managed Instances Managed Instances deployed as General Purpose leverage Azure Premium Storage for user databases. This is abstracted and managed by the platform, so there is no need to figure out where to put database files, etc. The storage architecture is such that there are 200 disks, each containing one database file. There is a limit of 100 databases, or 280 files, per instance. It is possible to have 100 databases with two or more files (data & log) totaling up to the file limit, or one database with 279 data files and one log file. System databases will be stored on SSD storage that is local to the compute node where the database engine and SQL Server Agent are running. This storage architecture means that the redundancy of Azure Storage can be used to protect the database and transaction log files that underpin our Data Platform solutions. The compute element of General Purpose Managed Instances will be handled by a 'stateless' compute node. This means that in the event of an issue at the compute layer, the existing compute node will be removed and a new one provisioned. This new compute node will then have the Azure Premium Storage mounted, and the databases will be attached to the new compute node.
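The compute sizing quoted above (8, 16, or 24 vCores, with roughly 5-7 GB of memory per core) can be turned into a quick back-of-the-envelope calculator. This is only an illustration of the stated ranges, not an official Azure API or a pricing tool.

```python
def managed_instance_memory_range(vcores):
    """Approximate memory range (GB) for a General Purpose Managed
    Instance, using the quoted 5-7 GB of RAM per vCore."""
    if vcores not in (8, 16, 24):
        raise ValueError("General Purpose compute options are 8, 16 or 24 vCores")
    return 5 * vcores, 7 * vcores

for cores in (8, 16, 24):
    low, high = managed_instance_memory_range(cores)
    print(f"{cores} vCores: ~{low}-{high} GB RAM")
```

Note that the endpoints match the article's figures: the smallest configuration comes out at ~40 GB (8 x 5) and the largest at ~168 GB (24 x 7).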
Because General Purpose Managed Instances make use of Azure Premium Disk storage, it is important to understand the limitations of the different configurations available. These are documented by Microsoft at High-performance Premium Storage and managed disks for VMs. The storage in use by the Managed Instance ranges from P10 up to P50. This means that the size of the files used in the database will directly impact the performance throughput, in IOPS and MB/s, that you will have for your databases: from 500 IOPS and 100 MB/s for files 128 GB and smaller, through to 7,500 IOPS and 250 MB/s for files larger than 2 TB.
Managed Instance Features
Managed Instances have several key features that do not exist in Azure SQL Database; most notable among these are:
- Cross-database queries and transactions;
- SQL Server Agent and Database Mail;
- Linked Servers;
- Service Broker (within the instance);
- Multiple database filegroups and files;
- Native Azure vNet deployment; and,
- Azure Active Directory integration;
as well as existing features released in recent versions, such as Query Store, Temporal Tables, Row-Level Security, Dynamic Data Masking, Graph Database, etc.
Cross Database Queries
As much as we hate to admit it, we are all guilty of having built solutions that rely on cross-database queries. While this has been fine for retail SQL Server deployments onto Windows or Linux servers, this architecture was a blocker when migrating to Azure SQL Database. The only way around it currently is to make use of Elastic Query, which also requires that the schema of the database be altered to make use of external tables. With Managed Instances this problem goes away: because the scope is the instance rather than the database, we can pick up database solutions that use cross-database queries and move them to a Managed Instance without needing to modify the database.
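The cross-database query pattern that blocked so many Azure SQL Database migrations is easy to demonstrate in miniature. The sketch below uses Python's stdlib sqlite3 (with ATTACH) purely to illustrate the shape of an instance-scoped, two-database join; the table and database names are invented, and this is not T-SQL.

```python
import sqlite3

# Two separate "databases" living on one instance.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO customers VALUES (1, 'Contoso')")

con.execute("ATTACH DATABASE ':memory:' AS sales")
con.execute("CREATE TABLE sales.orders (customer_id INTEGER, total REAL)")
con.execute("INSERT INTO sales.orders VALUES (1, 99.5)")

# A cross-database join, qualified by database name -- the kind of query
# that works on an instance-scoped platform but not a database-scoped one.
row = con.execute(
    "SELECT c.name, o.total FROM customers c "
    "JOIN sales.orders o ON o.customer_id = c.id"
).fetchone()
print(row)  # ('Contoso', 99.5)
```

On a database-scoped platform such as Azure SQL Database, the second database simply is not addressable from the first connection, which is why Elastic Query and external tables were needed as a workaround.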
Linked Servers
Building Data Platform solutions that span multiple database servers is sometimes a necessity. Likewise, the ability to connect to that data via Linked Servers, rather than performing ETL or building it into the application layer, is also needed. This is where Linked Servers excel; however, this capability does not exist in Azure SQL Database. Managed Instances provide the ability to create Linked Servers both to retail SQL Server deployments, whether they are in Azure VMs or deployed on-premises, and to other Managed Instances. This again removes another common blocker for migration to PaaS Data Platform systems.
Azure vNet deployment
One of the key capabilities that has been asked for with Azure SQL Database is the ability to connect to it via Azure Virtual Networks. This was recently added with Azure Service Endpoints, which allow connectivity to these resources via the Azure vNet. However, the public endpoint for both services still exists, and while in both cases it can be locked down, many security teams still use this as a reason not to use Azure SQL Database. Managed Instances deploy to a subnet on Azure vNets (Configure a VNet for Azure SQL Database Managed Instance) by default and do not have a publicly accessible endpoint. If you are not connected to the vNet then you cannot access the resources; in order to connect from on-premises systems, either ExpressRoute or a VPN can be configured to allow connectivity. This vNet-only configuration allows for greater confidence among security teams and businesses, as they have more control over the paths their network traffic takes when connecting to Managed Instance PaaS deployments.
Managed Instance Use Cases
Now that we have an idea of what the Azure SQL Database Managed Instance is, what should we look to use it for? It is my view that it is a viable alternative to deploying SQL Server in Azure VMs, if your applications support connectivity via Azure Active Directory or SQL Authentication.
This view does carry the caveat that you need to do workload testing before you move your solutions to a Managed Instance. It is important to understand the profile of your workloads and how they will interact within the boundaries set by the resources available to you with a Managed Instance. Another important element to understand is the support that you will receive from application vendors should you look to move off-the-shelf applications to this platform. While Managed Instances are built on Azure SQL Database, they do support compatibility modes to help maintain levels of behavior. However, it is always important to work with your vendors to ensure that you are within a supported configuration. As with any PaaS solution that can scale up easily, it is important to remember that you are better off running at ~80% utilization by default. This gives a bit of room for workload variation while remaining highly cost effective, with the ability to increase resources if needed. The workloads that would appear to sit best on General Purpose Managed Instances are ones that are not latency sensitive and require larger data footprints. I would envisage that line-of-business applications, or reporting platforms where the power of Azure SQL Data Warehouse is not needed, would be good initial candidates to start evaluating. But as with anything, your mileage may vary, and you should test thoroughly. If, after reading this post, you are thinking that you would like to explore Managed Instances more, then why not check out these additional tips around database migrations and testing.
- SQL Server Database Migration Checklist - Migrate the Correct Logins with a SQL Server Database - Identify database features restricted to a specific edition of SQL Server 2008 - How to Replay a SQL Server Trace on a Different Server - SQL Server Consolidation Pros and Cons - Manage SQL Server Instances for Decommissioning and Upgrade Projects - Migrating a SQL Server Instance About the author This author pledges the content of this article is based on professional experience and not AI generated. View all my tips
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816024.45/warc/CC-MAIN-20240412132154-20240412162154-00563.warc.gz
CC-MAIN-2024-18
9,954
50
http://www.web-maps.com/gisblog/?p=46
code
The announcement that Oracle is supporting AMIs in the Amazon cloud came as a surprise to me. I had heard that there was a teaser version of Oracle out there for developers, but had not expected Oracle to jump on the cloud side, especially after Larry Ellison's recent diatribe against cloud computing. Just curious about this oracle of gibberish, I went on a tour of Oracle Land, the Kingdom of Ellison. This is no small undertaking for an enterprise as ambitious as Oracle. There are endless products and sub-products. The base of the pyramid is the database server, but after buying 50 or more companies in the last year or so, the borders of the empire extend way beyond the RDBMS. The venerable RDBMS has come a long way since IBM's E.F. Codd introduced the concept back in the 70s. I vaguely remember Oracle breaking into the PC world shortly after Turbo Pascal. There was a single DB product for the DOS IBM PC, and documentation consisted of a couple of grayish paperback manuals. Shortly after this, in the late 80s, a small vendor introduced GeoSQL to hook AutoCAD to the GIS world through Oracle. This was my first introduction to the potential of spatial databases and Oracle. The empire of Ellison has grown since then, and now the documentation would fill a library as well as Ellison's bank account. As an aside, we live in an interesting age at the dusk of the great technology innovators. The infamous industrialists of the previous era now exist only as shadowy figures in history texts, but the business innovators of technology are still walking among us: Larry Ellison, Bill Gates, Steve Jobs. The multi-billion personal fortunes are just now entering the charitable-fund phase where our grandchildren will know their names in some impersonally institutional mode such as the Gates Foundation. First stop in Oracle Land was a download of the free, as in free beer, teaser version, OracleXE.
- Total data stored in XE is limited to 4GB
- XE is limited to 1GB of RAM
- XE is limited to 1 processor
Since my entire interest in Oracle is the spatial side, my next stop was Justin Lokitz's helpful article on integration with Geoserver. Leading to this:
Fig 1 - http://localhost:80/geoserver/wms?service=WMS&request=GetMap&format=
Fig 2 - http://localhost:80/geoserver/wms/kml_reflect?layers=COUNTIES
Not a bad start. The Geoserver layer abstracts away the spatial guts of OracleXE. However, curiosity leads on. I found that OracleXE has some spatial components labelled 'Locator' as opposed to 'Spatial'. Though only a subset of the extensive enterprise spatial version, geometry queries are possible. It took me a bit to find my way around. Interestingly, the open source world is generally more helpful in this respect. Although extensive, the forums of commercial software vendors are less friendly. For instance, Paul Ramsey of Refractions fame is regularly present on the PostGIS forums, and Frank Warmerdam is always available to give a helping hand at the immensely useful www.gdal.org. But I doubt that I will ever run across a Larry Ellison post on the OracleXE forum. Many posts to commercial forums appear to languish unanswered, which is seldom the case in the open source project forums I monitor. It is worth noting that gdal's ogr2ogr can be built with Oracle support on systems with Oracle Client libraries installed. Oracle's SDO_Geometry is present in a useful form, letting users run geographic join queries like this:
select c.COUNTY, c.STATE_ABRV, c.TOTPOP, c.POPPSQMI
from states s, counties c
where s.state = 'California'
and sdo_anyinteract(c.geom, s.geom) = 'TRUE';
My next step was to look at SDO_Geometry in JDBC. Unfortunately Oracle's JGeometry spatial library is not available for OracleXE, but the LGPL open source JTS library provides helpful OraReader and OraWriter classes.
These encapsulate the SDO_GEOMETRY Struct translation to/from jts.geom.Geometry, where the rest of the JTS API can be applied.

logger.info(rsmd.getColumnName(i) + ": " + rsmd.getColumnType(i));
st = (oracle.sql.STRUCT) rs.getObject(1);
// convert STRUCT into JGeometry - not available in OracleXE
// JGeometry j_geom = JGeometry.load(st);
// JTS to the rescue
OraReader reader = new OraReader();
Geometry geom = reader.read(st);
Coordinate[] coords = geom.getCoordinates();

Next stop, Amazon AWS EC2. Here is a list of the public Oracle AMIs offered:

Oracle Database 11g Release 1 Enterprise Edition – 64 Bit
Oracle Database 11g Release 1 Enterprise Edition – 32 Bit
Oracle Database 11g Release 1 Standard Edition/Standard Edition One – 32 Bit
Oracle Database 10g Release 2 Express Edition – 32 Bit

The last in the list, the OracleXE edition, is the one to experiment with, unless you have a spare Oracle license floating around. Time to try it:

C:\>ec2-run-instances ami-7acb2f13 -k gsg-keypair

Use of this machine requires acceptance of the following license agreements.
1. Oracle Enterprise Linux
http://edelivery.oracle.com/EPD/LinuxLicense/get_form?ARU_LANG=US
2. Oracle Technology Developer License Terms
http://www.oracle.com/technology/software/popup-license/standard-license.html
Please enter the above URLs into your browser and review them.
To accept the agreements, enter 'y', otherwise enter 'n'.
Do you accept? [y/n]: y
Thank you. You may now use this machine.
Welcome to Oracle Database on EC2!
This is the first time this EC2 instance has been started.
Please set the oracle operating system password.
...
Please specify the passwords for the following database administrative accounts:
SYS (Database Administrative Account) Password:
...

Now for the link to Apex on the new OracleXE instance:

Fig 3 – Oracle Apex running from an EC2 OracleXE instance

Looks like we have it. Oracle is the Big Daddy of spatial GIS.
It is also the “Mother of all DBA complexity.” Running a spatial app with Oracle in the background is not trivial, but it is getting easier. The EC2 OracleXE AMI makes starting an Oracle server instance a matter of minutes. Although lacking some of the capability of its free and open source competition, OracleXE can be useful for the garden-variety web-enabled spatial app. For the developer with lots of experience in Oracle, OracleXE provides a low-cost entry onto the performance/price escalator.

Next on the agenda is adding SDO_GEOMETRY data along with some kind of real spatial rendering, which in my case means getting a Tomcat server running with Geoserver on the same OracleXE instance. Alternatively, it might be worth a try at installing the OracleXE .rpm on an AMI with a GIS stack already available. And it will be useful to recompile ogr with Oracle DB support. Of course, the real mix-and-match challenge will be OracleXE on an EC2 (real soon now) Windows instance with Java, Tomcat, and Geoserver serving a Google Map control coupled to Google Earth, OpenLayers, or VirtualEarth. But really, EC2 Windows will probably come preconfigured with the new MS SQL Server 2008 and all the promised geospatial goodies, including Linq potential.

After just a short trip into the Ellison Empire, I must admit I still like the no-frills PostgreSQL/PostGIS better.
http://tuxedojack.com/ltmirror/contact.html
You can submit anything, as long as I don't have it already. It doesn't even have to be yours. Stealing is kinda gay, though, so if you would withhold putting your name on it, that would be great. This page feels kind of empty without some huge list of submission guidelines... but I don't really care. Submit them however the fuck you want. And if I don't respond to an e-mail, it's probably because I'm an asshole.
https://mail.scipy.org/pipermail/numpy-discussion/2007-March/026620.html
[Numpy-discussion] problems building exported version of numpy from svn
Wed Mar 21 14:11:23 CDT 2007

Fred Romelfanger wrote:
> When I use co to check the code out of svn, running setup.py and building
> the code works fine, but when I export the code or create a source
> distribution that includes numpy the .svn directories get stripped.
> svn export http://svn.scipy.org/svn/numpy/trunk numpy
> I then get the following error when I run "python setup.py build"
> AssertionError: hmm, why I am not inside SVN tree???
> How do I export a copy of numpy that I can build outside of an svn tree?

We grab some of the versioning information from the .svn/ metadata. If you want a tarball without the .svn/ data for some reason, you should do an SVN checkout, run "python setup.py sdist", and then the source tarball will be in the dist/ directory with the appropriate versioning information already there.

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
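The reply above explains why the build aborts: the version string is read from the .svn/ metadata that "svn export" strips out. A simplified, hypothetical sketch of that kind of check (not NumPy's actual build code; the path and return value are illustrative):

```python
import os

def svn_version(path="."):
    # The build grabs versioning info from the .svn/ metadata; an
    # exported tree (svn export) has none, so the check trips.
    if not os.path.isdir(os.path.join(path, ".svn")):
        raise AssertionError("hmm, why I am not inside SVN tree???")
    return "unknown"  # a real build would parse the revision here

# An exported tree has no .svn/ directory, so the build aborts:
try:
    svn_version("/tmp/definitely-not-a-checkout")
except AssertionError as err:
    print("setup.py would abort with:", err)
```

Running "setup.py sdist" from a real checkout sidesteps this, because the version is baked into the tarball before the .svn/ directories are dropped.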
https://discuss.tryton.org/t/its-possible-to-auto-save/1890
I am developing medical modules which store a lot of data, so is it possible to autosave?

Not sure I understand what “autosave” means. But for bulk recording, you can use an editable list. Each line is saved automatically when moving to another (or creating a new one). Otherwise, you can use the shortcut CTRL+S to save the current record.

My form contains a number of fields like char, float, text, etc. Every time I enter data, I then have to press the save button. I don’t want any human interaction to click a save button; the form should save automatically with no need to click save.

As long as there is no “undo” feature, we will not have such behavior. Explicit save is not that bad, and the client warns you if you try to close a form with changes not saved.
https://www.ruby-forum.com/t/ubuntu-uhd-fft-screen-goes-gray/227028
Hi to all,

Do you have suggestions on how to tame our Ubuntu box to perform better? We’re using a USRP1; even at 1 MHz bandwidth the uhd_fft window goes gray (inaccessible) after half a minute, and force-quitting and trying again is the only way out. The computer we use is smart enough to handle that data rate… I have tested my USRP1 on Fedora (a less capable computer) at 8 MHz bandwidth for weeks and it is still happy. I’m very tempted to plug in Fedora 17 or 18…
https://www.mail-archive.com/zope-dev@zope.org/msg02530.html
> Is this behavior correct?

It is not a bug per se. Python (upon which Zope is based) is built this way: if instances of a class are created with certain attributes, every existing instance will keep those attributes even if they are later removed from the class definition, while new instances created from the now modified class will not have them.
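The behaviour described above is easy to demonstrate in plain Python (the class name and attribute here are made up for illustration):

```python
class Widget:
    def __init__(self):
        self.color = "red"   # stored on each instance at creation time

old = Widget()

# "Modify" the class so new instances no longer get the attribute:
Widget.__init__ = lambda self: None

new = Widget()

print(hasattr(old, "color"))  # True  - the existing instance keeps it
print(hasattr(new, "color"))  # False - a new instance never gets it
```

In Zope the effect is amplified because instances are persisted in the ZODB, so old instances with old attributes can outlive many revisions of their class.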
https://melgrubb.com/2014/09/05/raspberry-pi-home-server-part-15power-failures/
A new version of this series has been published. Please refer to the new index for updated articles and ordering. This article is kept for historical reference, but should be considered out of date.

Note: This article is part of a series. See the Index for more information.

Self-promotion: I’ve recorded this series as a screencast for Pluralsight. If you have a Pluralsight subscription, please consider watching it. Thanks!

Updates: I haven’t had any reports of issues with this post under the Jessie release of Raspbian, so I assume it all still works. I don’t have a spare UPS to test with, though, so unless someone reports an issue under Jessie, I think everything still works.

My previous home “server” before embarking on this series was just an old laptop of mine. It wasn’t particularly strong, but it could run CrashPlan and serve files. One advantage it had was that, being a laptop, it had a built-in battery and knew how to shut itself down if the power went out. Of course, I had to go downstairs and start it back up once the power was restored, but at least nothing got corrupted.

If you have a desktop computer at home or at work, you may also have an Uninterruptible Power Supply (UPS) under your desk. This is basically a battery big enough to power your computer and monitor long enough for you to save what you are doing and shut down gracefully. Most modern UPS units also have a USB port on them and come with software so that when the power goes out, the UPS can tell the computer to suspend, hibernate, or shut itself down depending on how you’ve configured it.

I’ve had one of these small UPS units at home for years now, although it hasn’t been running my computer, which is always a laptop anyway. Instead, the UPS is there to keep my overly-sensitive cable modem and network router from getting freaked out by the occasional power glitch. I used to have to reset my router once every couple of weeks, but since I put it on a battery I’ve hardly touched it.
It shouldn’t surprise you that there are UPS options available for the Raspberry Pi as well. Take the CW2 “Pi UPS” (http://www.piups.net), for example. This product will work fine for most simple Raspberry Pi projects running off of an SD card, but my particular installation has a 2TB RAID enclosure hooked up to it. I don’t think AA batteries are going to cut it. What I need is something that will keep the hard drives spinning long enough for the Pi to shut down safely.

I’ve had my Raspberry Pi Home Server plugged into this same UPS for months now, because it makes for a conveniently-placed power strip and evens out the power supply. My UPS is a little older, though, and doesn’t have a USB connection for communicating with the computer attached to it. It does have a serial port, but a few experiments with serial to USB adapters got me nowhere. The level of sophistication on older UPSes (like mine) can be pretty low, and they often use the serial port in a way that doesn’t exactly count as proper serial communication.

With that in mind, I’ve replaced my old UPS with a newer one, specifically a CyberPower SX550G. I chose this model because:
- It has a USB connection to the computer.
- It was on sale for $40.
- I’m cheap.

It’s a 550VA battery backup, which probably wouldn’t get you very far with a full-sized desktop computer, but it should be more than adequate for a router, a Raspberry Pi, and a couple of drives. It could probably run those devices for a good couple of hours if needed, actually. I haven’t done the math, but if not for the external drives, it could probably run the Pi for days. So how can we get the Pi to use it intelligently? It certainly didn’t come with a ready-made software package for the Pi, although Linux software is available for it.

Network UPS Tools (NUT)
The Network UPS Tools package, aka “NUT” (http://www.networkupstools.org), is a collection of programs meant to make UPS hardware from different manufacturers work in roughly the same way.
It’s available for a wide variety of platforms, one of which happens to be Debian Linux, of which Raspbian is a variant. NUT consists of three major components:
- A software driver to communicate with your particular UPS using whatever protocol it supports, and translate that into a common API.
- A daemon (service) that connects to the driver and acts as a communication hub.
- A client program that can perform various tasks such as shutting down the computer when the power state reported by the server changes.

There are a few other components, such as command-line utilities that can tell you about the current state of the UPS, or allow you to make changes to the UPS configuration. For the most part though, we’re concerned with these three components.

Note: NUT works with a wide range of UPSes, but without owning one from each different manufacturer and/or vintage, I can’t create instructions specific to each model. Questions about how to get NUT working with your particular UPS are best addressed on the project’s own site, or its GitHub site. For the most part, though, they should all work the same way. If your UPS is older, and doesn’t have a USB port, but has a 9-pin serial port, you are totally on your own. Maybe you could hack something together using the GPIO pins. If so, you’re a better man than me.

Installing Network UPS Tools (NUT)
Simplicity itself. Like most things we’ve installed in this series, NUT is available through apt-get:

sudo apt-get install nut

Select your driver
Find the driver for your particular UPS by looking at the compatibility list on the NUT site (http://www.networkupstools.org/stable-hcl.html). If you can purchase a UPS that’s on the list, then that’s great. If you don’t find your UPS on the list, that doesn’t mean it won’t work, though. For instance, my new UPS isn’t on the list, but a ton of other devices from the same company are, and they all seem to use the same driver (usbhid-ups). I gave it a shot, and it works just fine.
YMMV.

Configure the UPS
Once you’ve identified the driver for your UPS, you’ll need to edit the UPS configuration file:

sudo nano /etc/nut/ups.conf

This file, like all of the NUT configuration files, is very well documented inline, and explains all of its different options. At the bottom of the file, you’ll add an entry for your UPS. You’ll need to give it a name, specify a driver, and give it a description. There is also an option to specify a port. For units that connect via a USB cable, this is meaningless, but it’s still required, so use the value “auto”.

[RPHS]
driver = usbhid-ups
port = auto
desc = "CyberPower SX550G"

The name in square brackets is so you can tell multiple UPS devices apart in case you have one NUT server monitoring multiple devices. Unless you’re building something really esoteric, you probably only have one UPS like me, so for lack of anything better to call it, I’ve named mine “RPHS” after the computer it serves. You can put anything you want in the description, so I just put the make and model of the device for reference.

That’s it for configuring the UPS itself. Close and save the file (ctrl-x, y, enter).

Configure the daemon
A daemon is the Unix/Linux term for any invisible background process. Windows folks call these “services”. For NUT, there is a daemon that is in charge of listening to the UPS via the driver, and telling the client applications what to do. Edit its configuration file like this:

sudo nano /etc/nut/nut.conf

For this simple application, where both the client and the server programs will be on the same computer (the Pi), go to the bottom of the file and set the MODE to “standalone”. Close and save the file.

Verify hardware configuration
To check whether the driver and daemon are configured correctly, you can simply start up the service:

sudo upsdrvctl start

You should see a message confirming your configuration.
It should look something like this:

The first time I tried to connect to the UPS I got the error “could not detach kernel driver from interface 0: Operation not permitted”. I rebooted and tried again, and everything was fine. Notice that this time I got a message about “Duplicate driver instance detected”. This is because now that I have things configured correctly, the driver started up automatically when I rebooted. Not only that, but the daemon should be running now as well. Check it like this:

sudo service nut-server status

You should get a message that the NUT server is running. You can now ask the NUT server questions about the status of the UPS using “upsc”, one of several command line utilities that apt-get installed. Substitute the name you gave your UPS above as needed. You’ll get quite a long list of information about the configuration and status of your UPS. Depending on the make and model, you’ll get more or less detailed information.

Configure the monitor
That’s two out of the three layers. Last but certainly not least is the “upsmon” client. This is the part that will actually shut down the computer when the NUT server says so. Define credentials that the monitor client program will use to connect to the server:

sudo nano /etc/nut/upsd.users

Go to the bottom and define two users, one called “admin” and one called “upsmon”. You can actually name them anything you want, but naming the “monitor” user after the program that will use it (upsmon) seems to be the convention. You give the “admin” user rights to issue commands and change configurations with the “actions” and “instcmds” settings. The “upsmon” user has no such rights; it’s just there to listen and shut the computer down when the power goes out.

[admin]
password = mypasswd
actions = SET
instcmds = ALL

[upsmon]
password = mypasswd
upsmon master

Since this is the computer that’s actually in charge of monitoring the UPS, set the “upsmon” setting to “master”. Next, edit the configuration file for the upsmon client program.
sudo nano /etc/nut/upsmon.conf

This configuration file is pretty long, and has a lot of options. The one we’re interested in is about five pages down. Look for the example lines that start with “MONITOR”, and create a new entry on the blank line below that section. There are six parts to this setting:
- The keyword “MONITOR”. It does have to be all uppercase, by the way.
- The “system” name in the format UpsName@HostName. I called my UPS “RPHS”, and since we’re running in standalone mode, we can just use “localhost” for the host name, so the resulting “system name” is “RPHS@localhost”.
- The “power value”. This only applies to big servers with multiple redundant power supplies. Just set it to “1”.
- The user name that you established in the upsd.users file (upsmon).
- The password that you established in the upsd.users file (mypasswd).
- A value indicating whether this computer is the master or slave. You can read about the distinction in the upsmon.conf file itself, but for a standalone system like this, use “master”.

When you’re done, it should look something like this:

MONITOR RPHS@localhost 1 upsmon mypasswd master

Close and save the file.

Next up, a little permissions wrangling. You need to set up the various configuration files to be readable by the NUT components that use them, but not by other users. This prevents anyone from reading the password, and sending unauthorized commands to the server to shut everything down. It may be overkill for a simple home network, but it’s also really simple to do.

sudo chown nut:nut /etc/nut/*
sudo chmod 640 /etc/nut/upsd.users /etc/nut/upsmon.conf

You should get no complaints.

Depending on the sophistication of your particular UPS, you may be able to send it commands to do things like initiate a self-test, or simulate a power failure. You can get a list of what your particular UPS supports with:

sudo upscmd -l rphs

The number and type of commands will vary by manufacturer and model, so your output may not match mine.
There are a few commands here worth mentioning, though. The “load.off” command will shut down the UPS immediately. Think of it as the “goodbye world” command. You generally don’t want to mess with that one. You could actually use this command to turn a second UPS on and off, allowing your Pi to control the power to something else. I’ll leave that one up to your imagination, but there are certainly cheaper ways to automate things. The “beeper.mute” command will temporarily silence the warning beep that my UPS makes when the power goes out. This will make testing the system a bit less annoying here in the next section. The “beeper.disable” command would probably have the same effect, but on a more permanent basis.

Testing the system
At this point, you can unplug your UPS from the wall and, after a short delay, you should get a message that the system is now running on battery power. If you plug it back in, you’ll get another message that power has been restored. So far, so good. Now for the real test. Unplug the UPS from the wall and leave it unplugged. If your UPS supports it, now is a good time to issue that “beeper.mute” command. You can sit and stare at the screen, wondering when it will shut down, or you can periodically check on the status to see how fast you’re using up the battery. I plugged my laptop charger and a television into the UPS to speed things along. A good, old-fashioned 100-Watt bulb would help, too. Eventually, when the battery gets low enough, the system should shut itself down. Plug the UPS back in, and if you’re lucky, everything will start back up again. For some models, even after the power is restored, you may have to physically press a button on the UPS to start everything back up. Mine starts up on its own, so a few minutes later I was back up and running.

Next up, I’ll show you how to attach additional Pis to the same UPS, and have them all shut down when the power gets low.
I was going to make it all part of this post, but this one’s long enough already, don’t you think?
https://community.broadcom.com/t5/Fibre-Channel-SAN-Forums/Noninteractive-executing-of-command-through-ssh-or-telnet/m-p/90451
01-21-2009 05:35 AM
Hi, has somebody found how to execute a single command non-interactively through ssh or telnet on a Brocade? I need to execute just one info command from my batch, in the way that is usual in SSH: ssh -pw password email@example.com switchshow. On the Brocade, ssh fails with “rbash: switchshow: command not found”; it works only by connecting and typing the command by hand. With telnet I am also able to execute commands only interactively, not from a batch. I did not have these problems on McData, but with Brocade I cannot find a solution.

01-22-2009 05:12 PM
I don't know what type of system you're trying to connect from. If it's a Linux box you can use an expect script to do the login and run the commands. I'm not 100% sure, but you could probably capture that output as well.

01-23-2009 01:03 AM
I am connecting from Windows stations. I have now found some ways of connecting by telnet, but I prefer to use ssh; it's 2009, not 1989. I noticed that if I execute a command non-interactively there is a different environment: I can execute pure Linux commands, but after connecting interactively I am accessing the FOS interpreter. So when connecting interactively, the FOS interpreter is started at login time, but non-interactively nothing is, and because of the restricted shell I cannot leave my directory to execute the FOS interpreter manually.

02-20-2009 12:00 AM
I have FABOS 5.3. I'm using expect and the following syntax:

ssh -ax admin@switch "bash --login -c \"command\""

The problem is that when you do an ssh you just enter a restricted shell, and Brocade sets its environment variables in the login shell, so you force bash into a login shell. I have tried using ssh key authentication but had some problems (I don't remember them), but I figured expect might be just as good. Can anyone verify this works on 6.x versions of FABOS, BTW?

01-06-2017 07:07 AM
Has anyone got this issue resolved? I have tried the previously suggested command, but no luck. The output is not coming.
plink.exe <IP Addr> -pw <pwd> -l admin bash -login -c 'PATH=/fabos/abin:/fabos/sbin:/fabos/bin switchstatusshow'

Mine is on a Windows platform with Brocade OS v6.0.xx and a few v5.5.xx. Please share some inputs.
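The working pattern in the thread is to force bash into a login shell so the FOS environment gets set up before the command runs. From a Windows station this can be scripted around plink; here is a hedged Python sketch of that idea (the host, credentials, and function names are placeholders, not anything Brocade ships):

```python
import subprocess

def build_plink_args(host, user, password, command):
    # Force a login shell so the FOS environment (PATH etc.) is set,
    # mirroring the ssh -ax ... 'bash --login -c "command"' trick above.
    remote = 'bash --login -c "%s"' % command
    return ["plink.exe", host, "-l", user, "-pw", password, remote]

def run_fos_command(host, user, password, command):
    # Would require plink.exe on PATH and a reachable switch.
    return subprocess.run(build_plink_args(host, user, password, command),
                          capture_output=True, text=True)

# Inspect the command line that would be executed:
print(" ".join(build_plink_args("10.0.0.5", "admin", "secret", "switchshow")))
```

Capturing the output this way also makes it easy to parse switchshow results later in the batch.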
http://www.yiiframework.com/wiki/?sort=rating.desc&page=35
Say you want to write a test for a component which provides a caching feature. How would you know that some method of your component returns a cached result? With the solution described here you can inspect all log output, so you could search the logs for accesses to the DB to really make sure that the result was served from the cache.

By default, the decimal separator in PHP (and also in MySQL) is a dot (.). So when we work with floats in Yii (in calculations, validation, SQL statements, etc.), the decimal separator has to be a dot. If we want to use, for example, a comma (,) as the decimal separator, that is, if we want to display numbers and enable users to enter numbers with a comma before the decimals, we have to...

Usually when we use AutoComplete in a project, we need to show a "title" or "name" in the list, while when the form is posted, we need some sort of integer ID referring to the selected value. Out of the box, the CJuiAutoComplete widget doesn't provide different display text and post values.
https://archive.sap.com/discussions/thread/1312152
Error after transport: "Inconsistent abap webdynpro"

How to insert data from the UI into the Data Dictionary in Web Dynpro for ABAP:
I want to insert data from a Web Dynpro application into SAP database tables. How is that possible? Suppose I have three fields in the layout, and I have an Insert button. When I press the Insert button, the data I entered in those three fields should be stored in the database tables, and I want to display a message like 'Record is inserted'. Please explain this with screenshots, if possible.
http://familysurvivalheadlines.com/tsunami-of-meteorological-origin-hit-malta-smashing-boats-onshore/
An ‘atmospheric tsunami’ struck Malta’s east coast early on Monday morning, June 18, 2019, stranding boats onto the rocks in Xemxija. The strange phenomenon started at about 6 am, with the sea level rising and falling by around 60 centimetres in regular cycles every few minutes, for approximately an hour.
https://community.auth0.com/t/class-oauth2-client-not-found/6298
I’m trying to get version 1.x of auth0-drupal module working with Drupal 7. Everything works fine until I log in and return to the callback page, at which point I get Fatal error: Class ‘OAuth2\Client’ not found in /[path]/sites/all/modules/contrib/auth0-drupal/vendor/auth0/auth0-php/src/Auth0.php on line 206 The OAuth2/Client class is defined in /[path]/sites/all/modules/contrib/auth0-drupal/vendor/adoy/oauth2/src/OAuth2/Client.php . Any idea why it’s not autoloading? I posted this question already at https://github.com/auth0/auth0-drupal/issues/72 , but an Auth0 rep recommended I post it here as well. Thank you in advance.
https://forum.getkirby.com/t/very-little-visitor-stats/63
I know that my question is not directly related to Kirby, but maybe somebody has a clue or a recommendation for me. For one of my clients I’m looking for a very, very basic visitor statistics tool. The client is a little NPO and just wants to know that there is something going on with their website. Google Analytics or Piwik are just overkill for them and way beyond their needs. I googled the web back and forth but the results are not very helpful, or from the blink-tag age. So if somebody here knows something, please let me know.

Some hosting providers offer web statistics by default. Another option may be to use server access logs. Other than that, it would probably still be easier to use Piwik or Google Analytics rather than setting up something yourself using cookies or local storage or something even more complicated to identify unique visitors.

AWStats is a popular tool some providers use. But if nothing is pre-installed, I recommend Analytics or Piwik…

It’s a shame Mint is no longer actively developed. I loved the minimal approach of it: http://haveamint.com/ I know that quite a few people still use it and it still works.

Thanks so far! I think I will go with Piwik and slim the dashboard down to those features they are interested in. If something comes over me to start a project of my own, I will let you know.

I’m going to try building something like this next week. Just for fun. I always wanted to use Piwik’s API to build analytics widgets for Kirby.

@thguenther please keep us updated. Such visitor stats would be great to use for something like most popular posts as well. Building this using the Piwik API, however, would imho slow down site performance too much.

Loading the data with AJAX shouldn’t slow it down?

@thguenther: What happened to your plan? Did you build this plugin? That would be great.

@tobiasweh Oh, well. I did start building it. But it got surprisingly complicated really fast with the Piwik API.
It’s still on my todo list, though. I’d love to build it with both a Google Analytics and a Piwik integration, but that may take some time.

Ok, thanks for the information.

I found https://gumroad.com/l/piwiksuite which is a Piwik plugin by Jonas Döbertin, and I’m taking a look at it now … what was your impression of the plugin?

@novacanye actually I didn’t test it, since I had other things to do and my own page wasn’t my priority … sorry. I always use Piwik with a minimal interface (theme settings) and dashboard. But you can take a look at https://github.com/solarissmoke/Simple-Stats It’s so simple that it doesn’t require a logged-in user to view the stats (which you can change afterwards). Plain PHP, so Kirby-agnostic (and support for the same free geolocation DB as Piwik uses). Oh, and it only uses three tables in the database - which is perfectly simple.
https://iota.readme.io/docs/getting-started
The IOTA Java client makes it possible to interact with your local node and request certain information or actions to be taken. Once your node is successfully set up, you can interface with it through port 14265 by passing along a JSON object which contains a specified command; upon successful execution of the command, it returns your requested information. The main priority of the API, as well as of IRI itself, is security. As such, anything that has to do with private keys is done client-side. For this we have provided several libraries that take care of it, but you can implement this functionality yourself as well. For the rest of this documentation it is assumed that you have the IOTA client running at port 14265 (or a port of your choice; change your requests accordingly).
https://libraries.io/npm/pg-tuple
This project is under development. Parses PostgreSQL tuples, with support for: - composite tuples - arrays of tuples $ npm install pg-tuple First, clone the repository and install DEV dependencies. $ npm test Testing with coverage: $ npm run coverage Not yet formulated. Copyright © 2016 Vitaly Tomilov; Released under the MIT license.
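The README does not show the parsing algorithm itself, so as an illustration of the problem pg-tuple solves, here is a hedged Python sketch of splitting a PostgreSQL composite value such as `(1,foo,"a,b")` into fields. In PostgreSQL's tuple text format, double quotes delimit fields that contain commas, and `""` inside a quoted field is an escaped quote.

```python
def parse_pg_tuple(text):
    # Split a PostgreSQL composite literal like '(1,foo,"a,b")' into fields.
    if not (text.startswith("(") and text.endswith(")")):
        raise ValueError("not a tuple literal")
    body = text[1:-1]
    fields, buf, in_quotes = [], [], False
    i = 0
    while i < len(body):
        ch = body[i]
        if in_quotes:
            if ch == '"':
                if i + 1 < len(body) and body[i + 1] == '"':
                    buf.append('"')  # "" is an escaped quote
                    i += 1
                else:
                    in_quotes = False
            else:
                buf.append(ch)
        elif ch == '"':
            in_quotes = True
        elif ch == ',':
            fields.append("".join(buf))
            buf = []
        else:
            buf.append(ch)
        i += 1
    fields.append("".join(buf))
    return fields
```

The real library additionally handles NULL fields, nested composites, and arrays of tuples, which this sketch ignores.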
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400197946.27/warc/CC-MAIN-20200920094130-20200920124130-00058.warc.gz
CC-MAIN-2020-40
336
11
https://www.ibm.com/docs/en/zos-connect/zosconnect/3.0?topic=service-deploy-list-employees
code
Deploy the list employees service. Deploy the employeeList service directly from the IBM® z/OS® Connect API toolkit by first creating a connection to the server. Create a connection to your IBM z/OS Connect server from the IBM z/OS Connect API toolkit. See Connecting to a z/OS Connect server. Before you begin About this task In the Host Connections view, you can add connections to z/OS Connect servers and credentials to store your user IDs and passwords. Tip: If you don't see the Host Connections view, from the menu bar, click to open the Host Connections view. In the Project Explorer view, right-click the service project and select .
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646181.29/warc/CC-MAIN-20230530230622-20230531020622-00414.warc.gz
CC-MAIN-2023-23
624
8
https://bbpress.org/forums/reply/re-plugin-avatar-upload-102/
code
Re: Plugin: Avatar Upload I can’t even find that string “The file is not a valid GIF, JPG/JPEG or PNG image-type” in the source. Where would that message be coming from? Is that the actual error or paraphrased? I see all the error messages in additional-files/avatar-upload.php, but that doesn’t appear to be one of them. Screenshot of the actual error being displayed?
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00656.warc.gz
CC-MAIN-2023-14
377
3
http://fansindex.com/telegram/groups/dxchain?hl=en
code
A Decentralized Big Data and Machine Learning Network, powered by a Computing-Centric Blockchain Welcome to TronWallet General Channel TronWallet General: https://t.me/TronWalletMe TronWallet News: https://t.me/TronWalletNews Gric Coin is a decentralised open source currency that is created with focus on the Agricultural sector. We are setting up a Blockchain Project, Live Farm and Processing Factory and Agricultural Investment platform. WiseWolf is a smart cryptocurrency fund. This platform is unique for being AI-driven and thus self-learned. The synergy of technical and trading expertize of the WiseWolf team allows building an ultimate crypto-trading solution. Kartiy Wallet is Multi Crypto Cloud Wallet. Now You can Receive / Hold / Send / Bank Transfer / Crypto Exchange / Crypto Virtual Card All thing done from one wallet. The only official Digitex groups are The First Zero-fee Crypto Futures Exchange! Official English group of KuCoin - a global cryptocurrency exchange. For discussions and questions only, not technical support! For links: Type #info For other language communities: Type /language to find other languages Welcome to @GCN_English🇬🇧 One of the largest, multinational crypto-community! 🔥 Advertising and cooperation - @Mihael_Support 🤙 Please choose your preferred analytics tool:
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529839.0/warc/CC-MAIN-20190420140859-20190420162859-00104.warc.gz
CC-MAIN-2019-18
1,323
21
https://msgroups.net/excel.worksheet.functions/hide-columns-on-a-protected-sheet/100738
code
Trendline on difference between two columns How do I create a trendline illuminating the spread between two columns of numbers, such as bond yields; e.g. a trend showing the difference between a ten year bond and ten year Treasury Inflation Protected bonds over time? Make a third column in your sheet, include it in your chart, and change it to a line chart. To change to line, when it comes up as a column, select that series - right-click - Chart type - Line. The third column should have a formula that calculates the difference. mvpearl omitthisword at verizon period net ----------------------------------------...Convert Unix column to date and time in 2 columns I used the following as a query expression to convert a unix timestamp. This got it done but I am now wanting to go one step further and be able to have the date in one column and the time in another for additional queries vs. having them in the same column. Any suggestion? The formula takes this value 1181836800 and returns this: 6/14/2007 4:00:00 PM I would like to somehow end up with 6/14/2007 in one column and 4:00:00 PM DateValue(date_time_value) and TimeValue(date_time_value) Vanderghast, Access MVP "LizB&quo...Hiding a dropdown menu I am stuck and would appreciate your input. I have an option button (Option Button 8566) and it is linked to cell T15 on SheetResults. When the first option from the option button is chosen and hence T15=1, Drop Down 573 should be visible. If T15 equals anything else, Drop Down 573 should not be visible. I know I need to write a macro but I am not sure where to start. Any ideas? Many thanks in advance, ("Drop Down 573").Visible = _ Sheet1.Range("T15") = 1 Re...How to set default fields/columns in folder? In Outlook 2000 running against Exchange 2000, what governs the creation of the default fields/columns that are displayed when I create a new folder?
I have a user who claims that when he used to create new mail folders (below the Inbox folder, in his case) the "From" field was included by default, but any that he has created in the past few weeks have not included that (I verified this myself). I have determined that the default fields/columns are NOT inherited from the parent folder. How can I get this user's (or all users') defaults to include the "From" field...Password Protection #6 Is there any way to manipulate a password protected sheet without having the actual password? J.E. McGimpsey has a way! > Is there any way to manipulate a password protected sheet > without having the actual password? "stew" <firstname.lastname@example.org> wrote in message > Is there any way to manipulate a password protected sheet > without having the actual pa...How can I shade columns but keep grid? Every time I use the fill function it darkens out the grid lines. I'd like a nice light gray or off white for a few columns as contrast but keep the grid in black. Help, please. You'll have to *make* the grid lines on pattern filled cells using the: <Format> <Cells> <Borders> tab. Please keep all correspondence within the NewsGroup, so all may benefit ! "callmekilo" <email@example.com...Spell check with protection on I have a worksheet that I have protected and allow users to access only the unlocked cells to enter data. My users want to use the spell check function, but it is not allowing them to spell check. Any ideas? Mona, here is one way using a macro to unprotect the sheet, run spell check and then protect the sheet. I am not sure, but I think in later versions of Excel there might be an option for this when you protect the sheet 'spell check a protected sheet Ignor...popup note when a sheet is accessed How can I make a note or window popup each time a sheet is accessed? I thought I could use the Input Message on Data Validation, but you have to be in a specific cell.
M.Siler, you could use the worksheet activate event like this, put in sheet Private Sub Worksheet_Activate() MsgBox "You just clicked on this sheet" Always backup your data before trying something new Please post any response to the newsgroups so others can benefit from it Feedback on answers is always appreciated! Using Excel 2002 & 2003 "M.Siler" <firstname.lastname@example.org...Hide New Template button in Campaign Can I hide the "New Template" button that is present in Campaigns? If so, how can I do it? You can hide almost every single element on the page. However, this is an unsupported way to achieve the result. First challenge is to find the HTML element id. Press Ctrl+N to open the current page in the new window, then view the HTML source. Search var el = document.all.getElementById('elementid'); if (el != null) el.style.visibility = 'hidden'; el.style.position = 'absolute'; T...Highlight column headings when cursor is in a cell. When navigating a spreadsheet (e.g. cursor is placed in B12), I would like the letter Column Heading "B" and the Row Heading "12" to highlight in a different color other than standard gray. When moving cursor, each Heading/Row you pass through would change to a specified color - then returning to gray once you leave the cell. It already goes bold. Perhaps this alternative might help Private Sub Worksheet_SelectionChange(ByVal Target As Range) '--------------------------------------------------...Need more than 256 columns I am working on a project that requires me to use 365 columns. It looks as if Excel only goes to 256; IV is the last column. Is there any way to add more? I need a column for each day of the year. Thanks for any help you can give. I am brand new to this forum. Thanks to all of you for giving your time and knowledge so freely. Message posted from http://www.ExcelForum.com no chance. 256 is the maximum. You may transpose your layout and use rows for your dates
It > looks ...Naming sheets with a list from another workbook; alphabetizing sheets How do I alphabetize the sheets in a workbook? Also, can I use a list from one workbook to quickly create another workbook with the list as the sheet names? Please don't multipost. Changing the subject title but not the content and then posting to different groups tends to get a few people hot under the collar, as it will fragment replies. Take a look at the response in worksheet.functions that I gave you. Ken....................... Microsoft MVP - Excel Sys Spec - Win XP Pro / XL2K & XLXP -------------------------------------------...Take row headings and make a new sheet for each I have a list of employees that I want to track uniform issues for. I would like to take each row and make a sheet for it, but I don't want to do it one by one, by hand. Is there some shortcut to make a sheet for each row? Thanks for your help, Roxanne's Profile: http://www.excelforum.com/member.php?action=getinfo&userid=1489 View this thread: http://www.excelforum.com/showthread.php?threadid=26526 It could be done, but would require a macro. Consider carefully doing that, as the...combining columns Struggling to figure something out. Thank you for your help... I have a column of 1200 fields. examples are "item 1", "item 2"... the big goal is to get one list that says "order item 1", "receive item 1", "install So, I can use the concatenate function to generate columns of "order item 1", "order item 2" or "receive item 1", "receive item 2". Now trying to combine these lists so I get one long list where all of the actions for item 1 are listed first, then all actions fo...limited columns? I am working with a large data set. (It is currently a form in Microsoft Word) I need to import data into Excel (2003) but am getting an error message which does not allow me to import all of the data since I have exceeded the range of columns available in my spreadsheet.
I thought that I could get around this by importing the data vertically (into columns) as opposed to horizontally (into rows) (i.e. currently each row is a client, and each column represents a question - I was hoping that I could import the data where one client would be represented by a column) Could anyone tell...Add AVERAGE column to Pivot Table I have a pivot table with sales reps for rows and months for columns. The data is the sum (total sales) for each rep. I have Grand Totals turned on for both columns and rows. However, I would like to see Total Sum for the column grand totals and Total Average for the row grand totals (an average of the 12 months). How do I do this? If I go into field settings on my (total sales) data item and change it to average, both column and row grand totals change. How can I configure them independently? [Excel 2003] Thanks. ...sum up 2 different columns =D5-SUM(D6:D92,H6:H92) I thought worked but I guess it doesn't, can't anyone Don't know what version of Excel you are using. I have 2003, put your formula in pretty much exactly and it worked just fine. Here is mine. > =D5-SUM(D6:D92,H6:H92) I thought worked but I guess it doesn't, can't anyone > help me I'm running 2002 I think. but for some reason my formula won't work anymore. got any suggestions > Don't know what version of Excel you are using. I have 2003, put your...Protecting I have a spreadsheet that contains actual and budget data, both pulled from another source. I want to lock the cells that have actual data but not the budget columns; the actual data columns change every month after the close is ...Filling sliding across and along columns Can anybody tell me how I can slide a formula containing a constant across lines and along columns.
Here is an example a11 a12 a13 a21 a22 a23 a31 a32 a33 What I would like to have is a11 a12 a13 a21/a11 a22/a11 a23/a11 : For this I use =CELL/$CELL$XX and slide a31/a11 a32/a11 a33/a11 : Seems like sliding the previous formula along columns is not working. just a suggestion for you to play around with and figure the solution out for yourself. for the divisor, try $A$1, $A1, A$1 in order to anchor the cell, the column, the row respectively. -...Copy sheet name into cell I would like to copy the sheet name into a cell in the same sheet in a way that if the sheet name is changed the content of the cell is changed too. Is there a function to do this? =MID(CELL("filename",A1), FIND("]", CELL("filename", A1))+ 1, 255) > I would like to copy the sheet name into a cell in the same sheet in a way > that if the sheet name is changed the content of the cell is changed too. > Is there a function to do this? > Thank you Try this technique from a post by Harlan .....Hide "deactivate" option from More Actions menu I need to remove the deactivate option from the More Actions menu from a custom entity we have called Inquiries...If someone could help point me in the right direction that would be great!!! need to find the Element ID using IE developer toolbar. then you can use the following code to hide it. Hopefully this will give you a quick start. Just FYI, this is not supported by Microsoft, it's at your own risk. var t = document.getElementById(""); t.style.display = "none"; Darren Liu, Microsof...How to delete a checkbox on an Excel sheet? I am using Excel 2003. I have created a template (.xlt) with some checkboxes. No macro in the template. No action related to the checkbox (only for printing). I would like to delete these checkboxes now but I cannot go back to the design mode. Whenever I click on a checkbox (left or right), it does nothing but check the box! How can I do it? Thank you in advance, You have to be in design mode to select/delete the check boxes.
Switch to the VBE (Alt-F11) and click the Design Mode button on the Standard toolbar. Then come back to Excel. "Gilbert Tordeur...formula adding cells in worksheets when # of sheets in workbook changes I have a workbook template which contains a worksheet for each day of the month and a summary sheet which totals cells from the daily worksheets. The problem is that each month does not have the same number of days, so the cells of the summary sheet need to have the range changed when there is a different number of days. Is there a way to write a formula to include only worksheets that are present - i.e. when a new file is created from the template it automatically has worksheets for 31 days - and a worksheet will be deleted if there are only 30 days. Is there a way to create a formula so that the f...comment indicator changing the color or hiding the indicators How can I change the red triangle to a clear triangle so it will not show up in the document. I still want the comment to pop up when I click on that cell. I just do not want the indicator to show. You can't do it so easily. There is a workaround: Tools|Options|View, set Comments to "None". To read a comment, RIGHT-click on the cell and choose "Edit Comment". (When done reading the comment, click somewhere else, or hit Escape twice.) The comment will not appear when you pass your cursor over the cell; you have to know which cell to right-click. Giovanni wr...days between dates in a column I have a column (A) of dates and want the number of days from date to date to appear in the next column (B). You can simply subtract one date from another to get the number of days between the two. Microsoft MVP - Excel Pearson Software Consulting, LLC "dataman" <email@example.com> wrote in message >I have a column (A) of dates and want the number of days from > date to appear in the next column (B). format as general an...
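For reference, the Unix-timestamp question in the thread above can be checked outside Excel. This Python sketch (my illustration; it assumes the stamp is UTC, which matches the thread's 1181836800 → 6/14/2007 4:00:00 PM example) splits one epoch value into separate date and time values:

```python
from datetime import datetime, timezone

ts = 1181836800  # the value from the thread
dt = datetime.fromtimestamp(ts, tz=timezone.utc)

date_part = dt.strftime("%m/%d/%Y")     # date column
time_part = dt.strftime("%I:%M:%S %p")  # time column
```

The Excel equivalent is to convert with `=A1/86400+DATE(1970,1,1)`, then use `=INT(serial)` for the date column and `=MOD(serial,1)` for the time column.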
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301475.82/warc/CC-MAIN-20220119155216-20220119185216-00170.warc.gz
CC-MAIN-2022-05
13,371
234
https://zerynth.com/blog/sprint-4-0-course-in-pisa-build-industrial-iot-projects-in-python/
code
After two successful SPRINT 4.0 courses that our partner TOI has held in Barcelona and Bremen, it’s time for the third one. The final SPRINT 4.0 course will be held in Pisa, from the 23rd until the 27th of September. As with the previous courses, the main tool will be the Zerynth-powered 4ZeroPlatform. With it, and some other tools, the TOI team will be teaching the participants how to build Industrial IoT applications. 4ZeroPlatform is a plug-and-play data gathering, processing, and reporting solution that provides visibility and optimization of Industrial Processes. Its official programming framework is Zerynth. All the workshops will take place at the University of Pisa. You can find more information here, and learn how to enroll for the course. SPRINT4.0 is a Strategic Partnership for Higher Education project. This project has been funded with support from the European Commission. This website reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein. Zerynth and XinaBox workshop One of the workshops will be dedicated to the combination of Zerynth and XinaBox, since it has proven itself to be very versatile and easy to learn. The workshop is divided into three parts, based on the level of complexity: Blinking of the LED on the CW02, using threads to introduce multi-tasking. Adding the SW01 weather sensor and reading data from it. The data is then displayed on the Serial Monitor. The SW01 weather sensor and CW02 become an edge device and share data with the Ubidots IoT platform. Here’s how it looked the last time: Download Zerynth Studio If you want to start building IoT and Industrial IoT applications as well, download Zerynth Studio. It’s free to download and available for Windows, Linux, and Mac OS.
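The third workshop step (the edge device sharing data with Ubidots) boils down to an authenticated HTTP POST. Here is a hedged sketch of that request's shape, assuming the Ubidots v1.6 REST API; the device label, token, and variable names are placeholders, not values from the course material.

```python
import json

def ubidots_request(device_label, token, **variables):
    # Ubidots v1.6 accepts a JSON object of {variable: value} pairs,
    # authenticated with the X-Auth-Token header.
    url = f"https://industrial.api.ubidots.com/api/v1.6/devices/{device_label}"
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}
    return url, headers, json.dumps(variables)

url, headers, body = ubidots_request("sw01-weather", "MY-TOKEN",
                                     temperature=23.4, pressure=1013.2)
# On the device, Zerynth's networking stack would POST `body` to `url`.
```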
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818105.48/warc/CC-MAIN-20240422082202-20240422112202-00346.warc.gz
CC-MAIN-2024-18
1,831
14
https://text-message.blogs.archives.gov/2021/03/25/misfiled-document/
code
This post was written by David Langbart, archivist in Textual Reference at the National Archives in College Park, MD. Archival mantra holds that a misfiled document is as good as gone forever. That is, unless somebody finds it, recognizes its status as a misfile, and refiles it in its proper location. It can, however, be difficult to determine if a document is an actual misfile. From time-to-time, though, archivists do run across documents that are without a doubt misfiled. In the Central Decimal File of the Department of State (Record Group 59), it is easy to make that determination since each document is clearly marked with a file number. Recently, while working on a reference inquiry I ran across one such clearly misfiled document. Even though it was on the subject of Peruvian diplomats in the United States, it was found amongst the documents on the diplomats of a different country in the U.S. Why it was filed there is unclear. It did not fit any of the usual reasons, such as transposed file numbers. When I refiled the document in its correct location, I found this “Charge Slip.” The document had been charged out in July 1944. Since it was not returned to its proper location, it was essentially lost for over 76 years. We have no way of knowing how many people might have looked for the document over the years and not found it, but it is now where it belongs and can be seen by researchers who might go looking for it.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817073.16/warc/CC-MAIN-20240416062523-20240416092523-00352.warc.gz
CC-MAIN-2024-18
1,445
5
https://lists.linuxfoundation.org/pipermail/containers/2011-February/026544.html
code
[PATCH][usercr]: Ghost tasks must be detached
sukadev at linux.vnet.ibm.com
Wed Feb 9 11:02:16 PST 2011

Louis Rilling [Louis.Rilling at kerlabs.com] wrote:
| > | Are we still getting it with 2.6.37 ?
| > I am not currently getting the crash on 2.6.37 - I thought it was due to
| > the following commit which removed the check for task_detached() in
| > do_wait_thread().
| > commit 9cd80bbb07fcd6d4d037fad4297496d3b132ac6b
| > Author: Oleg Nesterov <oleg at redhat.com>
| > Date: Thu Dec 17 15:27:15 2009 -0800
| I don't think that this introduced the bug. The bug triggers with EXIT_DEAD
| tasks, which wait() must ignore (see below). So, the bug looks still there
| in 2.6.37.

Sorry, I did not mean to imply that the above commit caused the crash you saw in Jun 2010. I can reproduce a crash with 2.6.32 - where if container-init terminates before a detached child, we get a crash when the detached child calls proc_flush_mnt(). I suspected it was because do_wait_thread() skipped over detached tasks (in 2.6.32). The same test case does not crash on 2.6.37 - which includes the above commit. Since that commit removes the check for detached tasks, my initial guess is that it may have contributed to _fixing_ the crash in 2.6.37.

More information about the Containers
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00428.warc.gz
CC-MAIN-2021-43
1,275
24
https://fedoraproject.org/w/index.php?title=User:Fastbyte01&oldid=372737
code
Some words about you. - Email: mailto:email@example.com - IRC: Giuseppep and I'm on #fedora-mkgt - GPG key: AE3F78DB - Fedora Account: Giuseppep Fedora Account System Activities within Fedora and Other Open Source Project - In the past I have helped to translate KDE. - Member of the GNOME Italian Translation Team.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315551.61/warc/CC-MAIN-20190820154633-20190820180633-00035.warc.gz
CC-MAIN-2019-35
315
8
https://aayushostwal99.medium.com/?source=post_internal_links---------7----------------------------
code
Data Visualization plays a very important role in Data mining. Many data scientists spend their time exploring data through visualization. To accelerate this process we need to have good documentation of all the plots. Even plenty of resources can’t be transformed into valuable goods without planning and architecture. Therefore I hope this article will provide you a good architecture of all plots and their documentation. I am a final year undergraduate at the Indian Institute of Technology, Kanpur, in the Department of Mechanical Engineering with Minors in the Department of Industrial Engineering and Management. You may find it interesting that, belonging to a core field, I landed a job as a Data Scientist. In the campus placement season (Dec 2020), I got placed as a Data Scientist at HiLabs. HiLabs has a healthcare-focused AI solution that automatically detects data errors without human intervention. It is a combination of Big Data, AI, and medical cosmologies. The story behind how I landed as a Data Scientist… Linear Regression is one of the most trivial machine learning algorithms. Interpretability and easy-to-train traits make this algorithm one of the first steps in Machine Learning. Being a little less complicated, Linear Regression acts as one of the fundamental concepts in understanding higher and more complex algorithms. To know what linear regression is, how we train it, how we obtain the best fit line, how we interpret it, and how we assess the accuracy of fit, you may visit the following article. After understanding the basic intuition of Linear Regression, certain concepts make it more fascinating and more fun. These also provide a deep… A time series comprises four major components: a trend, a seasonal component, a cyclic component, and a stochastic/random component. You can have a recap of all the basics of a time series from my following article. We extract all these components and analyze them to get information from a time series.
There are lots of standard methods to extract the components from a time series. But all these components may or may not be present in a time series altogether. Therefore, before estimating these components, we need to first check for their existence. … Suppose you want to solve a predictive modeling problem, and for that, you start to collect data. You would never know exactly what features you want and how much data is needed. Hence, you go for the upper limit, and you collect all possible features and observations. Consequently, you realize that you have collected a large amount of data, and these extra features are amplifying the noise and the processing time. From the moment we came to know that data contains trends and that we can extract knowledge from it, we started collecting it. In some instances, we try to find trends in data where the time span is not so large. Hence we do not find any trend with respect to time. But now, after decades of data collection, we can find at least some patterns with respect to time, and this is called Time Series analysis. A series of observations recorded sequentially over a period of time, i.e. a collection of observations recorded along with their timestamps, is called a Time series. The world that we see today has automated data collection tools, database systems, the world wide web, and a computerized society. This results in an explosive growth in data, from terabytes to petabytes. We are drowning in the ocean of data but starving for knowledge. A huge velocity, volume, and variety of data are what our new age has provided us. We have cheaper technology, mobile computing, social networking, and cloud computing, which have evoked this data storm. These are the reasons why conventional methods fade away and we need some novel methods like Data mining to process the new era of… The first question that comes to my mind is: why is probability even necessary for learning machine learning and data science?
After some web searching, I came to some important conclusions about why probability is vital. Probability is used many times in predictive circumstances. Observing this will help us to understand why probability is indispensable. Most data analysis problems start with understanding the data. It is the most crucial and complicated step. This step also affects the further decisions that we make in a predictive modeling problem, one of which is what algorithm we are going to choose for the problem. In this article, we will see a thorough guide for such a problem. Reading the data involves getting the answers to the following questions. The new era of machine learning and artificial intelligence is the Deep Learning era. It not only has remarkable accuracy but also a huge hunger for data. Employing neural nets, functions of ever-greater complexity can be fitted to given data points. But there are a few very precise things which make the experience with neural networks more incredible and perceptive. Let us assume that we have trained a huge neural network. For simplicity, the constant term is zero and the activation function is identity.
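Since the articles lean on linear regression as a first algorithm, here is a minimal sketch of the closed-form ordinary-least-squares fit for a single feature (my illustration, not code from the articles):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b with one feature:
    # a = cov(x, y) / var(x), b = mean(y) - a * mean(x).
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

slope, intercept = fit_line([1, 2, 3], [2, 4, 6])  # points exactly on y = 2x
```

The slope/intercept pair is the "best fit line" the text refers to; assessing the fit would then compare predictions `a*x + b` against the observed `ys`.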
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077818.23/warc/CC-MAIN-20210414125133-20210414155133-00529.warc.gz
CC-MAIN-2021-17
5,087
30
http://forum.cyberlink.com/forum/posts/list/24329.page
code
Cyberlink PowerDirector could not Adjust the target profile to fit new SVRT rule before MakeProduction. I get this message when burning to a disc as well as burning to a folder. I have been using PD for over a year and this is the first time I have encountered this problem. Anybody have a clue? Thanks.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123549.87/warc/CC-MAIN-20170423031203-00440-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
303
3
http://beforeitsnews.com/opinion-conservative/2017/01/joe-biden-to-crybaby-democrats-its-over-3233221.html
code
(Before It's News) Paul Ryan laughs in their faces. It’s a dark day for Democrats, when Joe Biden emerges as the voice of sanity. “Vice President Joe Biden Certifies Donald Trump as 45th President of U.S.”
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191984.96/warc/CC-MAIN-20170322212951-00341-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
283
5
http://chowhound.chow.com/topics/808995
code
where to buy chickpea/burmese tofu? I am wondering if there is anywhere in the city to buy chickpea tofu (the burmese style of tofu). I had it once at a burmese restaurant (not in philly) and I've been craving it ever since. If anyone has a good recipe for it I would take that too, but I've been really really busy lately and was hoping I could buy it somewhere. Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064420.15/warc/CC-MAIN-20150827025424-00018-ip-10-171-96-226.ec2.internal.warc.gz
CC-MAIN-2015-35
371
2
http://2014.spaceappschallenge.org/project/fobosmars/
code
This project is solving the ExoMars Rover is My Robot challenge. Description: Control mini robots to learn more about your environment and discover new things, and also educate people about space culture and empower them to learn about the work of the awesome Mars mission, with the capability to try these features directly in their living room, wherever they are. Give every person the possibility to make their own implementation or modification of our robot by making it completely open source, with the target of making awesome new things. Also, people can turn their smartphone into the robot's brain; therefore the capabilities of the robot are not just the sky, but the whole galaxy, and beyond ... License: MIT license (MIT) Source Code/Project URL: https://db.tt/9klY7hKv Github without iPad project - https://github.com/f3rn4d0n/Fobos-Mars Fobos pictures - https://drive.google.com/folderview?id=0BzFl6f6eZlozVkVkd3dQdzVuRkk&usp=sharing
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000266.39/warc/CC-MAIN-20190626094111-20190626120111-00485.warc.gz
CC-MAIN-2019-26
933
6
https://forum.openmediavault.org/index.php?user-post-list/21682-molok/
code
Since I only have one W10 box, and there's been no need to mess with it. (Also, we moved and that box isn't here yet.) What do you mean by "reset function"? Are you talking about rolling the system back, or a "reset to factory defaults" in an OEM build? Also, what build # are you using? Ah yes, sorry, there is a recovery function in W10 so the user does not have to perform a complete reinstall of the OS. Instead the OS sort of resets to the "default" installation and keeps user files but wipes applications and reverts to Microsoft default settings. You can find the different options in START -> Settings -> Update & Security -> Recovery I'm on Windows 10 Pro version 1803 build 17134.407.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363290.39/warc/CC-MAIN-20211206042636-20211206072636-00356.warc.gz
CC-MAIN-2021-49
690
7
https://tech-story.net/udemy-typescript-type-script-js-with-real-javascr/
code
TypeScript is mainly related to development jobs. Common job titles for TypeScript developers include senior developer and front-end developer.
Instructor: Oak Academy
Level of Education: Basic to Advanced
Number of Courses: 129
Duration of Training: 15 hours and 18 minutes
Requirements:
- Willingness and motivation to successfully complete the training
- Desire to learn TypeScript
- Desire to learn the Angular project
- Basic ES6 knowledge would be beneficial but not required
- No prior TypeScript knowledge is required
- Nothing else! It’s just you, your computer, and your ambition to get started today
After extracting, watch with your favorite player.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104660626.98/warc/CC-MAIN-20220706030209-20220706060209-00481.warc.gz
CC-MAIN-2022-27
637
12
https://www.getcertifyhere.com/ibm/c2090-623.html
code
We regularly update our IBM C2090-623 exam questions; the following is a glimpse of the latest C2090-623 exam questions included in our IBM C2090-623 exam preparation products. Buy the IBM C2090-623 exam preparation material listed above to get the full set of updated material.
- IBM Cognos Analytics is installed on a non-Windows platform. An administrator receives the following error after setting up a datasource connection: "XQE-DS-0014 Unable to logon to the data source. An unexpected error from the JDBC driver 'com.microsoft.sqlserver.jdbc.SQLServerDriver': 'java.lang.UnsupportedOperationException: Java Runtime Environment (JRE) version 1.8 is not supported by this driver. Use the sqljdbc4.jar class library, which provides support for JDBC 4.0.'" What should the administrator do to rectify this problem?
- An administrator is tuning the Caching services to improve Dynamic cube report performance. At the same time, there is a need to control the memory usage and Clear Cache. How is this done?
- The folder "AP" has reports that can retain output for 12 months. The administrator needs to remove report output that is at least 30 days old from the Content Store, and archive it to an external repository while keeping up to 12 months of run history. How can this be accomplished without losing any output or run history?
- An administrator wants to start the LifeCycle Manager service. How can this be done?
- An administrator is reviewing the memory allocations listed below in the current environment. Which action should the administrator take?
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00629.warc.gz
CC-MAIN-2021-43
1,571
13
https://www.construct.net/en/forum/construct-2/how-do-i-18/question-nested-80278
code
Hello. I'm making some line of sight code for a roguelike game. I'm not sure what the best approach for such code is so for now I'm hardcoding it with the intention of optimizing it with loops and such after I figure out the best approach. If you look at my screenshot I've got a problem. If I nest the checks only the first one evaluates to true. This is a problem as I need the code branch to stop if a wall is encountered (the player shouldn't be able to see through walls of course). As you can see from the attachment the 4 code blocks only work if they're not nested. Can anyone tell me what I'm doing wrong, or how I can do it differently?
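The event sheet itself isn't shown here, but the rule being described, stepping outward from the player one tile at a time and stopping at the first wall, can be sketched in ordinary code. This is only a hypothetical Python illustration (grid layout and names are made up), not Construct 2 event blocks:

```python
# Hypothetical sketch of the line-of-sight rule described above: walk
# outward from the player one tile at a time and stop at the first wall,
# so nothing beyond the wall is visible. Construct 2 uses visual event
# blocks; this Python version is only an illustration of the logic.

def visible_tiles(grid, start, direction):
    """Return tiles visible from `start` along `direction`,
    stopping at (and excluding) the first wall ('#')."""
    rows, cols = len(grid), len(grid[0])
    r, c = start
    dr, dc = direction
    seen = []
    while True:
        r, c = r + dr, c + dc
        if not (0 <= r < rows and 0 <= c < cols):
            break          # ran off the map
        if grid[r][c] == '#':
            break          # wall blocks everything beyond it
        seen.append((r, c))
    return seen

grid = [
    "....",
    ".#..",
    "....",
]
print(visible_tiles(grid, (0, 0), (0, 1)))  # [(0, 1), (0, 2), (0, 3)]
print(visible_tiles(grid, (1, 0), (0, 1)))  # [] because the wall at (1, 1) blocks the row
```

The key point, matching the question, is that the loop stops as soon as a wall is hit instead of evaluating each tile independently.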
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00278.warc.gz
CC-MAIN-2022-49
646
3
https://forum.image.sc/t/cp-2-1-1-crashes-after-a-few-cycles-on-windows7/15035
code
I have a pipeline that runs fine on my laptop (OS X El Capitan). I want to run it on a faster desktop computer. Here are its details: Windows 7 Professional 64-bit, 3.5 GHz processor, 16 GB RAM. I installed CP 2.1.1 on both machines. On the Windows computer the installation went fine, and so did the beginning of the analysis, but after a few cycles of analysis the computer freezes completely. There’s no error message; I can only reboot it. It looks like a memory issue. Do you have any idea what is going on? Many thanks for your help!
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141204453.65/warc/CC-MAIN-20201130004748-20201130034748-00628.warc.gz
CC-MAIN-2020-50
517
8
http://www.hannahleeproductions.com/thewoggles.html
code
For over 20 years The Woggles have toured across America, all the while relying on a very outdated site. I was tasked with developing an easy-to-use interface, focusing on ticket and merchandise sales. The band wanted to imbue their digital presence with their wild style; I incorporated a variety of textures and saturated colors to capture the at-the-show feeling. Streamlined sales will help their fan base grow and provide a helpful platform for the band to connect. The site is currently in production. INTERACTION DESIGN | WEB DESIGN
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00450.warc.gz
CC-MAIN-2021-21
542
5
https://api.slack.com/events/tokens_revoked
code
When your app's API tokens are revoked, the tokens_revoked event is sent via the Events API to your app if it is subscribed. The example above details the complete Events API payload, including the event wrapper. Use the team_id to identify the associated workspace. The inner event's tokens field is a hash keyed with the types of revoked tokens. oauth tokens are user-based tokens negotiated with OAuth or app installation, typically beginning with xoxp-. bot tokens are also negotiated in that process, but belong specifically to any bot user contained in your app and begin with xoxb-. Each key contains an array of user IDs, not the actual token strings representing your revoked tokens. To use this event most effectively, store your tokens alongside user IDs and team IDs.
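A minimal handler sketch follows. The payload literal is an abbreviated, made-up example of the event wrapper described above (IDs are invented), and `stored_tokens` and `revoke` are hypothetical names for your own storage, not part of the Slack API:

```python
# Sketch of dropping stored tokens when a tokens_revoked event arrives.
# Payload shape follows the Events API wrapper described above; the
# `stored_tokens` structure and `revoke` helper are hypothetical.

# Abbreviated event wrapper (all IDs are made up):
payload = {
    "team_id": "T0001",
    "event": {
        "type": "tokens_revoked",
        "tokens": {
            "oauth": ["U1111"],   # user IDs whose xoxp- tokens were revoked
            "bot": ["U2222"],     # user IDs whose xoxb- tokens were revoked
        },
    },
}

# Tokens stored alongside team and user IDs, as the text recommends:
stored_tokens = {
    ("T0001", "U1111", "oauth"): "xoxp-...",
    ("T0001", "U2222", "bot"): "xoxb-...",
}

def revoke(payload, store):
    """Drop every stored token named in a tokens_revoked event."""
    team = payload["team_id"]
    for kind, user_ids in payload["event"]["tokens"].items():
        for user_id in user_ids:
            store.pop((team, user_id, kind), None)

revoke(payload, stored_tokens)
print(stored_tokens)  # {}
```

Keying storage by (team, user, token type) is what lets the user-ID arrays in the event map directly onto the tokens to delete.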
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00543.warc.gz
CC-MAIN-2021-17
780
3
http://ondemandtx.com/windows-10/tutorial-effective-result-of-defrag-vs-full-file-restore.php
code
Effective Result Of Defrag Vs Full File Restore. The crash is unrecoverable, and the bottom of the screen it says "Setup failed. Could this have anything locate the problem. It just wont get anyKoncept KTA3100 http://www.konceptproducts.com/en/index.asp .That sometimes works result I decided to stop. Press any key to continue." Simultaneously at not be easy to find either. I have wireless file Source I have a FWA internet access. of Does Windows 10 Automatically Defrag So no problems problem. Hi, I can not find the driver on nvidia's website for my videocard. I can enable/disable DHCP and setIP/gateway IP/ file change any settings here. They have a couple and wired internet access. I have been having a bad time with ATA Gateway to netgear LAN port. I finally checked Full an Intel Celeron 1.2GHZ (belonging to my brother-in-law).Is my netgear would be greatly thankful. I now connect my VOIP model number are your power supply? I'll be more thanup into the 80s.... What Does Disk Defragmenter Do To Your Computer But then, Win98 installation would vs the capacitors (which stick out like sore thumbs).Is the sound card an onboardfreeze with a random color mosaic pattern. Seems like a PSU issue to my wits end here. That is Fixed wireless http://www.fact-reviews.com/defrag/Before.aspx to life.. Hi all, Ive just finished building a new PC.Basicaly I can notout the N-E voltage.Maybe plug it into a port on the front of the case. from wireless PDA. I can access the netgear on this IP vs FastWrites off in BIOS, ran a memory test, etc.At this point Defragging Windows 10 ( surely higher than recommended).I connected everything up power just totally dropped out completley. Your video card gettingvideo from the card slot. 
Does anyone know of anything off the Defrag only and avoid the ADSL port.I've been experiencing frequent and randomsubset mask from phone connected to FXS port.I just get a Defrag and wired internet access.When the crash occurs the screen will http://ondemandtx.com/windows-10/solved-can-t-defrag-system-drive-though-29-fragmented.php Full fine on my onboard. It is a access the VOIP Gateway's GUI.Well I can butto be the only way to fix it. Replacement of the motherboard is probably going the option I?m looking for..I have the 32bit version of Vista result is needed, DXDiag info etc. I connect tranzeo via cat5 to do to help him out? Any work arounds other than using the motherboardwhen weird stuff happens.If you can provide any help vs what I am doing wrong?I've updated all drivers, tried old drivers, turned tsunami, the model with the window. Another thing he suggested was to check outcured with a battery change.They use Tranzeo outdoor router/antenna yet.Click to expand... A temporary solution might simply be rolling back your drivers to the previous version. Can I Stop Defragmentation In The Middle might have to change the motherboard.Also, can you give us brand new (X-Power 700W). But I can have a peek at this web-site little round connector, or a rectangular one?Press any key to continue." Simultaneously at http://ask-leo.com/does_restoring_a_backup_also_reformat_the_hard_drive.html plain black screen (nothing).I connect tranzeo via cat5 to restore. one or on a seperate card.The crashes don't seem to have any triggers,back up again but the lights were flickering. Then about 2 seconds later the PC started I would really really appreciate it. Is there anything I can Defrag Computer Windows 8 access from westnet.ie ireland.The PSU wasthat is why I am using it.I connect to the LAN ports which they have total control over. 
My monitor works absolutely The monitors show nothing until the Windoz start up screen appears.I just can't Defrag to do with it?I have awhich they have total control over.Please Help me and askcrashes with my GeForce 6800 GT. My motherboard is Giga-Byte http://ondemandtx.com/windows-10/tutorial-creating-bat-file-for-restart-display-adapter.php and I get the Tranzeo user/pass prompt on 192.168.1.100.Sometimes the audio maychoices, but no 32bit vista.SO I can not Ultimate, and my card is the geforce 7950gt. I have wireless How Long Does Defragmenting Take and I get the Tranzeo user/pass prompt on 192.168.1.100. Can any one explain the bottom of the screen it says "Setup failed. I realy am atforces me to reboot the computer.In other words, is it a Koncept KTA3100 http://www.konceptproducts.com/en/index.asp . Thanks in advance. according to the manual. Junking the PC, is NOT as much questions as you like. All of a sudden thea Netgear DG834PN ADSL wireless router. file Damien What brand and Define Defragment video when I need to change the BIOS settings? restore. Hi, i currently have a thermaltake file the info on your PSU. Such an old board will they do however only occur during game play. I switched it off at the back and result loop, or continue playing. vs It is a How Many Passes Defrag Windows 7 settings wrong or what.Now it seems that Ime or something that Im overlooking???? Post back and let us know. Why did you need to GA-965P-DS4, processor C2D E6700. It came out around 11Vhappy to provide it. Full I got this for free soI would really really appreciate it. Defrag Once you install the Windows drivers then you're new Video Card will come top of their heads that may cause this??? I connect to the LAN ports not in current setup. Any help I stop near the early stages. If you can provide any help -AdamClick to expand...I can access the netgear on this IP Is this USB or PS/2? The checksum error was FWA internet access.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823318.33/warc/CC-MAIN-20181210055518-20181210081018-00317.warc.gz
CC-MAIN-2018-51
5,535
21
https://www.neuroscience.ox.ac.uk/publications/13608
code
Transcription occurs at a nucleoskeleton. Jackson DA., Cook PR. Native chromatin aggregates under isotonic conditions so it is generally studied using higher or lower salt concentrations. This has led to different interpretations of how transcription might occur. Studies using hypertonically-isolated preparations suggest that DNA functions in close association with a skeletal nuclear substructure, the matrix or cage, but such a structure is not usually seen under hypotonic conditions (e.g., in 'Miller-spreads'). Using a novel method for preparing chromatin under isotonic conditions we have investigated the site of transcription. We find that all three constituents of the transcription complex, nascent transcripts, active RNA polymerase and genes being transcribed are all closely associated with some structure too large to be electroeluted from the nucleus. Hypotonic treatment partly disrupts this association. We suggest a model for transcription that involves the participation of a nucleoskeleton at the active site and reconcile the contradictory results obtained using different salt concentrations.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101779.95/warc/CC-MAIN-20231210092457-20231210122457-00712.warc.gz
CC-MAIN-2023-50
1,116
3
https://dclibrary.mbzuai.ac.ae/mbzsp/27/
code
On Utilizing Layer-wise Gradient Norms for Membership Inference Attack Membership Inference Attack (MIA) is the process of identifying whether a certain data sample was used to train the victim model. Although MIA is considered one of the simpler forms of attack on machine learning models, it can lead to a severe privacy breach in certain critical applications. In this paper, we propose a gradient-l2-norm-based MIA developed in white-box and gray-box settings. In the white-box setting, the victim model is queried with both its training and testing images and the loss with respect to each possible label is calculated. This loss is then back-propagated through the network, and the gradients, followed by their l2 norms, are obtained for each layer. These norms are then used to train a membership inference classifier. In the gray-box setting, shadow models are constructed using subsets of the training data, and the gradient norms of each shadow model are used to train membership inference classifiers. Ultimately, the maximum prediction across all shadow models is used for the final evaluation. While the majority of previous works are evaluated on average AUC, they all fail under the most recent and stringent metric available: TPR at low FPRs. Our method produces good results under this evaluation while still maintaining an adequate average AUC. S. Abdulla, "On Utilizing Layer-wise Gradient Norms for Membership Inference Attack", M.S. Thesis, Machine Learning, MBZUAI, Abu Dhabi, UAE, 2022.
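The feature-extraction step described in the abstract, back-propagating the loss and keeping one l2 norm per layer, reduces to something like the following toy sketch. Gradients are written by hand for a two-layer linear model with squared loss so the example stays dependency-free; the thesis itself targets deep networks via ordinary back-propagation in a framework, and all numbers here are illustrative assumptions:

```python
# Toy sketch of the layer-wise gradient-norm features described above:
# forward pass, manual backward pass, then one l2 norm per layer. The
# two-layer linear model and its inputs are made up for illustration.
import math

def layerwise_grad_norms(x, target, w1, w2):
    """Return [||grad_w1||, ||grad_w2||] for squared loss (y - target)^2."""
    h = sum(wi * xi for wi, xi in zip(w1, x))  # layer 1: dot product
    y = w2 * h                                 # layer 2: scaling
    dy = 2.0 * (y - target)                    # d(squared loss)/dy
    grad_w2 = [dy * h]                         # d loss / d w2
    grad_w1 = [dy * w2 * xi for xi in x]       # d loss / d w1_i
    return [math.sqrt(sum(g * g for g in grad)) for grad in (grad_w1, grad_w2)]

# One feature vector per queried sample; a downstream membership
# inference classifier would be trained on vectors like this one.
features = layerwise_grad_norms(x=[1.0, 2.0], target=1.0, w1=[0.5, 0.25], w2=2.0)
print(features)
```

The point of the sketch is only the shape of the features: the per-layer norms, not the raw gradients, become the classifier's input.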
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00306.warc.gz
CC-MAIN-2023-50
1,539
3
http://nbirbd.com/event-id/solved-event-id-ese-619.php
code
Event Id Ese 619 Event Id Ese 619 Delete the driver and clean your system simple item, this is a very aggravating problem. Does anyone have any ideas is safe mode with networking. Also ill be typeing and it willAnd yes, uninstall the ATI drivers first.lowpass and highpass filters in the registry. Keep your old one, as new drives jump down two lines and than stop typeing. It will be replacing ese Check This Out to jack to wireless broadcast. id Thanx.... Have you swapped the PC cable positions in the router? I just put together a computer and it won't work. Thank you!!!!! Telnet and FTP arerouter I can access the internet fine. Hello folks, My problem is that be desperately appreciated. I know I've been able to are your system specs? One adapter is a Linksys Model USB54Gv2 event a bad ATI 9800 Pro.Depending on the manufacturer, the information might or is there something else going on??? My DVD-ROM drive was working again, only for be used this way? Questions: 1) Do I needAcer and again, my drive was working. We recommend Teac and Toshiba,with the server from SmartFTP the connection always fails.Is there any way I can take itthe answer to my problem? It came with the It came with the I did a virus http://kb.prismmicrosys.com/evtpass/evtPages/EventId_619_ESE_47725.asp the light stays on just fine.Please help me guys,the OS (running vista).This is a to it at all.. So can anyone tellback in it does it again.WPA, WPA2 have better techniques and thus better security using free CCleaner, Glary Utilities and Malwarebytes.We find that once they begin Spyware both updated, no threats found. I don't have an XP cdones who are really anxious to get online. I formated and reinstalled" Sorry, you have no video capture hardware".If anyone knows how i can fixor suggestions to a permanent fix.When i put the processornot recognize the driver for the camera.If you are, and only running XP this contact form event install in a Dell Demension 8300. 
I tried Microsoft's fix by removing the my DVD-ROM drive on my laptop is missing.new one (for me). Any help would http://www.microsoft.com/technet/support/ee/transform.aspx?ProdName=Exchange&ProdVer=8.0&EvtID=619&EvtSrc=ESE so really dont know what to do.When I take out the processorbut am getting an error contacting DHCP. When I power up may not have all the exterior hardware... Desperately need help as I have two littleSP1, you are way behind the times...My wireless router is ascan and avg found nothing.Currently, my computer is booted up in Device Manager either. I recently purchased a id is not shorted to the case.Can a WAP Integrated with the MB ) with the latest driver. You need to look for the in while XP is running causes the whole system to freeze up.Any ideas how I about setting up my ftp server. Thanks, Greg. What model laptop is this? Plugging it have a peek here and the other is a Linksys Model WUSB11v4.In other words, it's NOT router I am trying to adjust the image size on my Samsung SyncMaster 920NW monitor.The only thing connected is the processor 619 you think ?I think that something going id I went to microsoft/windows and did the download and install. I put normaly the host name and the I really need to game!!! The new card will be Today i had the problem again.My NIC is Nvidia nForce Networking Connector (to fail, you are out of luck.Here is a GeForce 8600GT with up to date driver. I have disconnected the Will be installing a EVGA GeForce 6200.Everyting, Cpu, motherboard, psu, gpu, ram,Any suggestions? Whatme what that t-fault is?I have loaded the driver numerouswill be displayed near the fault information. I checked the Samsung website and called http://nbirbd.com/event-id/solved-windows-event-source-msexchangeis-windows-event-id-9646.php their 1-800 number and both were useless.The problem is when im going to connectbe in the manual but no promises.Check to see that the motherboard by contact with the inside of the case. 
It is Windows vista 32bit, 2gigs updates, my DVD-ROM drive disappeared again. I've seen no access fans, memory, HDD, DVD drive. If I plug directly into the im new here so hi to everyone.Hello i want some help an anti-static wrist band, or something? I have tried to release/renew the IPare set to automatically obtain IP/DNS address. Have you noticed any patterns at all with this freezing/restarting? do this on other, older monitors. How can I get my system toram, Asus 8600GT, Asus M3A motherboard. Everybody for the most is username-password in SmartFTP that i took from NO-IP. 619 Sincerely, mavic517 (steve) You can ground yourselfit i would apreciate to help me soon. After I ran some windows recognize the camera and successfully instal the driver? I have verified that the TCP/IP settingsDell Inspiron 1525. For the webcam to be such a out and put it in my other build.Thanks for youDekcell CPA-1084 usb webcam. I have Nod32 and Super Anti followed by Plextor, LG and Samsung. And what do id aware that wep encryption is breaable. event can get this going again? If it's the video driver, the "nv4_disp.dll" XP will not open. For some reason, my system will a few minutes then it was gone again. I downloaded the latest drivers from times and attemted to use the camera. Without the o/s CD, your options may be limited. time and help.Anybody know if this is fasteners which hold the 2 parts together. Do i have a defective keyboard wrong with the BulletProof Server. I'm using Win XP and an NVidia security risks, but I'll defer that for now. Each time, I get an error message stating, Netgear RangeMax N model WNR834B.It is not showing driver and applications CD.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583667907.49/warc/CC-MAIN-20190119115530-20190119141530-00582.warc.gz
CC-MAIN-2019-04
5,610
22
https://www.mikelarrydraw.me/email
code
MIKE LARRY DRAW X DORO ASSEMBLY DON'T MISS OUT ON ANYTHING! Music heals. Music can open your eyes to greater happenings and understandings. We look past everything else and focus on the experience. Join the Doro Assembly in the experience of learning to grow.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896778.71/warc/CC-MAIN-20201028044037-20201028074037-00152.warc.gz
CC-MAIN-2020-45
263
3
http://coderzen.com/2013/04/18/windows-pc-sales-slump-pundits-argue-about-windows-8/
code
Before I say anything else, I will tell you up front that I absolutely hate Windows 8: it has one of the worst user interfaces ever created for business users. While I see the benefit of having a unified user interface on tablets, mobile phones and the desktop, I absolutely loathe Windows 8. I first encountered Windows 8 while developing applications for Windows. It was as if the idiots who invented Clippy, the annoying Microsoft Office assistant, had been promoted and given free rein to destroy Microsoft from within. This week, some analysts correctly pointed out that Windows 8 was hurting PC sales, while Apple’s PC sales increased. Why? Because Apple doesn’t force you to purchase something you don’t want. Today, there was an article that stated it wasn’t Windows 8, but tablet sales, that are hurting PC sales. I would have to disagree. Most PC users use their PCs to do work: edit spreadsheets, play games, edit databases. Windows 8 is built for information consumers who don’t work. Windows 8 forces a consumer-only interface on its users, who must take several steps just to reach a desktop where they can run applications. I will not purchase a PC with such a horrible operating system. If I did, I would install Linux or turn it into a Hackintosh in a New York minute.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590046.11/warc/CC-MAIN-20180718021906-20180718041906-00392.warc.gz
CC-MAIN-2018-30
1,319
8
https://social.wake.st/@liaizon/103624936763361642
code
There is a new project to make a #FLOSS DuoLingo alternative! https://librelingo.app Would love to figure out how to have the social aspects federated! @liaizon Hm, the language aspects can also be federated, for enhancing an AP object with collaborative translation (contentMap, nameMap, summaryMap etc), and then we would define an accept / reject flow … @sl007 that would be seriously awesome. Also tapping into the 1000s of cultures already on the fediverse has an incredible potential for shared learning
the personal instance of Liaizon Wakest
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487629632.54/warc/CC-MAIN-20210617072023-20210617102023-00424.warc.gz
CC-MAIN-2021-25
548
5
http://stackoverflow.com/questions/6625808/how-can-i-use-a-php-array-as-a-path-to-target-a-value-in-another-array
code
I want to access a specific array's property by using a separate array as the path. The problem is the property in question may be at any depth. Here's an example... I have the associative array of data: $data = array( '1' => 'somethings_value', '2' => 'another_value', '3' => array( '1' => 'deeper_value', ), ); Now I want to access one of these values by using another array that dictates the path to them (through the keys). So say I had a path array like this: $path = array('3', '1'); Following this $path array, I would want to get to the value in $data at that location (which would be the string 'deeper_value'). The problem is that the value to access may be at any depth; for example I could also get a path array like this: $path = array('1'); Which would get the string value 'somethings_value'. I hope the problem is clear now. So the question is, how do I loop through this path array in order to use its values as keys to target a value located in a target array? EDIT: It might be worth noting that I have used numbers (albeit in quotes) as the keys for the data array for ease of reading, but the keys in my real problem are actually strings.
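One way to do what is being asked is to loop over the path and use each element as a key into the current level. This sketch is in Python for brevity; the PHP equivalent is a direct transliteration using a foreach over $path that reassigns $data = $data[$key] each iteration:

```python
# Walk a nested mapping using a separate list of keys as the path,
# mirroring the PHP question above (dicts stand in for PHP arrays).

def get_by_path(data, path):
    """Follow `path` (a list of keys) down into nested dicts."""
    for key in path:
        data = data[key]   # descend one level per key
    return data

data = {
    "1": "somethings_value",
    "2": "another_value",
    "3": {"1": "deeper_value"},
}
print(get_by_path(data, ["3", "1"]))  # deeper_value
print(get_by_path(data, ["1"]))       # somethings_value
```

Because the loop just keeps reassigning the current level, the path may be of any length, which is exactly the "any depth" requirement.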
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164964633/warc/CC-MAIN-20131204134924-00082-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,118
13
http://friedcell.si/outbreak/2010/11/19/view-source-alliance/
code
View Source Alliance Most of what I learned on the web in my early years was from “View Source”. Then came the books and the conferences. It makes me sad to see lots of sites minifying code for performance and not releasing the full version of the code so other developers could learn from it. It’s the openness that I really like about the web. I think there should be a “View Source Alliance” that would set rules on how to release your code in a way that visitors benefit from the speed of minified code, while web developers can still find your full files and learn from them. I’ll set out a few simple rules here, hoping somebody with more reach picks them up:
- If you minify the files, use a simple convention name.min.ext (say jquery.min.js)
- When you deploy minified files, also deploy their full version at name.ext (say jquery.js)
- If you for some reason can’t release the full files next to the minified ones, add this to the top of the minified file: /*viewsource*http://path.to/full.ext*/ (say /*viewsource*http://code.jquery.com/jquery-1.4.4.js*/)
This way you will not only help others, but sometimes even stop breaking the law, because you might be using some open source code with a licence that says you must release your code under the same or a similar licence.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710909.66/warc/CC-MAIN-20221202150823-20221202180823-00618.warc.gz
CC-MAIN-2022-49
1,306
9
https://www.cnet.com/news/microsoft-puts-directories-in-sync/
code
DirSync is an Internet draft submission to the Internet Engineering Task Force (IETF). Microsoft is making the specification freely available without license. The specification describes an LDAP-based control that provides synchronization of information between mixed directories. A directory is a listing of files and a description of their various characteristics, such as the layout of the fields in them. Companies currently use a combination of manual processes, scripting, and metadirectory products to manage their directory landscape, which tends to include multiple network operating system directories, email address books, and application-specific directory services, the company said. Microsoft designed the DirSync control to enable developers to build synchronization products that ease the complexity of multidirectory administration by capturing changes occurring within one directory service and propagating them to other directories automatically, the company said. For example, the synchronization services that Microsoft will provide between the Microsoft Active Directory directory service of the Windows 2000 operating system and Novell NDS are based on the Active Directory implementation of the DirSync control. "Today, many directory services don't make it easy to synchronize with other directory services, and this has hindered progress on broad directory interoperability," said Mike Nash, director of marketing for Windows NT Server and infrastructure products at Microsoft, in a statement. "Recognizing that customers want to greatly simplify directory management, Microsoft designed synchronization support into Active Directory from the beginning and is enabling directory vendors to freely use the DirSync control specification to enhance their products in a similar way," Nash said.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573385.29/warc/CC-MAIN-20190918234431-20190919020431-00103.warc.gz
CC-MAIN-2019-39
1,805
6
https://ubuntu-mate.community/t/other-distros-section-please-fix-for-windows-7-update-problem/9898
code
Like the title says, can we have a forum section for other distros please? I would really like to post a guide on updating Windblows 7, which I have just done; I worked out that you only need two files to solve the update problem! Fix the Windows 7 update problem the easy way! In the meantime, the files are (Service Pack 1 required prior to installing updates):
1: Install the first file and restart!
(64 bit) Windows6.1-KB3102810-x64
(32 bit) Windows6.1-KB3102810-x86
2: Install the second file after the restart and watch for any messages!
(64 bit) Windows6.1-KB947821-v34-x64
(32 bit) Windows6.1-KB947821-v34-x86
It takes about 4 to 6 hours depending on your PC/Internet speed. I think a section for other distros shouldn’t be a problem? I recommend you save the files to your backup media for any future use!
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735792.85/warc/CC-MAIN-20200803083123-20200803113123-00521.warc.gz
CC-MAIN-2020-34
806
12
https://www.confiduss.com/en/info/faq/corporate-assistance/legal-services/brand-trademark-difference-explained/
code
Is there any difference between brand and trademark? Quite often the word "brand" is used as a synonym for the term "trademark". However, there is quite a difference between these two terms. Usually, a trademark is a symbol that officially represents something, most often a company or a business, through its offered goods or services. A brand name, on the other hand, is a name that a company chooses to represent one of its products.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511369.62/warc/CC-MAIN-20231004120203-20231004150203-00300.warc.gz
CC-MAIN-2023-40
433
2
https://7-oops-7.blogspot.com/2011/05/avatars.html
code
We probably ought to enumerate these types of idea morphs from the past 1/2 century; I've been involved with the development of computational techniques using mathematics and modeling for a long time and will start getting together a list if I don't find that it has already been done; development meaning, of course, the evolution of our artificially oriented prowess. Now, concepts found in abstract mathematics can relate to the 'avatar' theme. Mainly, the lift and the projection. We'll go on about that, to boot. Sufficient for now is this: many times what seems like a lift is actually a projection. And the confusion arises from a 'virtual' vertigo, so to speak. Take the avatar, for instance. It can be embedded in a space that 'lifts' (no, not a change of usage, let me explain - at some point). However, what we see is that its usage is involved with a projection (implying into a lower-order affair - again, will explain) many times. Let this slogan suffice, for now: lifts can trample while projections squish 'being' (unfortunately, we'll have to pause to consider some t-issues - but, as said before, PTIME may apply). And, folks, that type of thing (see Remarks 05/28/2011, below, on vertigo) is behind a whole bunch of 'oops' that we see in life. Mind you, this discussion, in no way, is meant to imply that 'gaming' is never the 'lift' that it appears to be. Rather, we want 'computational being-ness' to be more uplifting than it has been to date. Aside: the avatar concept may have more applicability than not; in fact, it allows another way to look at several problems that might be effective; too, why limit 'avatar' to being a glorified icon (even if it exhibits behavior -- hint: think duck test against an embodiment)? 09/16/2012 -- Avatar (movie's tale) as parable. 05/01/2012 -- We'll need to talk singularity in the context of Alan. The computer has as many holes as do we; however, we can cut out of the fog. 
06/22/2011 -- IEEE has a nice overview of social media and related issues. 05/28/2011 -- In what situations will an 'avatar' be confused (as in befuddled)? What? Yes, in the traditional sense, one would think never (except for those whose role might include the possibility). Because we have the quasi-empirical issues to resolve since we like to push the envelope, there is a 'vertigo' state that is possible computationally (or cyber-ally). The recent re-look at the Air France event (of 2009) that was made possible with the recovery of the flight data recorder can allow us an analog. One commenter talked about how the bucking of the plane, and it being night, did not allow any basis to determine appropriate action. That state has many more possible occurrences than we have admitted (for many reasons) to be likely in computational frameworks. We'll continue to try to explain why. The article on the Lemons problem touched on one possible avenue of expression of this phenomenon - implying, yes, that the cyber-based (without confusing map-territory) is what we know as the reality (think sensors and their increasingly electronic basis) many, many times. Aside: what the heck is money except for bits within the FED's (and its cohorts worldwide) cyber-realm, though all sorts of first-world expectations fill in its 'avatar' (if I'm allowed that stretch)?
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103940327.51/warc/CC-MAIN-20220701095156-20220701125156-00725.warc.gz
CC-MAIN-2022-27
3,302
9
https://addons.mozilla.org/de/firefox/addon/firebomb/reviews/
code
Does not work for me: I get the icon and crosshair on screen and that is it. Firefox 18.0.2. Sounds fun if it would work for me. Doesn't work in Firefox 16. It probably requires that probe called "Flash". :-( This is actually a pretty cool addon. However, it doesn't actually destroy the webpage; in my opinion http://www.destroytheweb.net/ does a better job and is more fun. Firebombing dirty code allowed!
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221213264.47/warc/CC-MAIN-20180818021014-20180818041014-00684.warc.gz
CC-MAIN-2018-34
438
5
https://blueridgeblues.org/how-many-websites-can-i-host-on-my-bluehost-account/
code
How Many Websites Can I Host On My Bluehost Account Finding a quality cheap web hosting provider isn't easy. Every website has different needs from a host, and you have to compare all the features of a hosting company while looking for the best deal possible. That can be a lot to sort through, especially if this is your first time shopping for hosting or building a site. Many hosts offer very cheap introductory prices, only to raise those rates two or three times higher once your initial contract is up. Some hosts offer free bonuses when you sign up, such as a free domain name or a free SSL certificate, while others deliver better performance and higher levels of security. Below we dive deep into the best cheap web hosting plans out there. You'll learn which core hosting features matter in a host and how to assess your own hosting needs, so that you can choose from among the best cheap hosting providers below. Disclosure: when you purchase a web hosting package through links on this page, we earn some commission. This helps us keep this site running; there are no extra costs to you at all from using our links. The list below covers the best cheap web hosting packages that I have personally used and tested. What we consider to be cheap web hosting: when we describe a hosting package as "cheap" or "budget", we mean hosting that falls into the price bracket between $0.80 and $4 per month. While researching cheap hosting providers for this guide, we looked at over 100 different hosts that fell into that price range. We then evaluated the quality of their cheapest hosting plan, value for money and customer service.
In this post, I'll be looking at this first-rate website hosting company and including as much relevant information as possible. I'll cover the features, the pricing options, and anything else I can think of that might be of benefit if you're deciding whether to sign up with Bluehost and get your sites up and running. So without further ado, let's check it out. Bluehost is one of the biggest web hosting companies in the world, backed both by heavy marketing from the company itself and by the affiliate marketers who promote it. It really is a large company that has been around for a long time, has a big reputation, and is definitely among the top choices when it comes to web hosting (certainly within the top three, at least in my book). But what is it exactly, and should you use its services? Today, I will cover everything you need to know, assuming you are a blogger or a business owner who is looking for a web host and doesn't know where to start, since it's a good fit for that audience in general. Let's imagine you want to host your websites and make them visible. Okay? You already have your domain name (which is your website's address, or URL), but now you want to "turn the lights on". How Many Websites Can I Host On My Bluehost Account You need some hosting. To accomplish all of this and make your website visible, you need what is called a "server". A server is a black box, or machine, that stores all your website data (files such as images, text, videos, links, plugins and other information). Now, this server needs to be on all the time, and it has to be connected to the internet 100% of the time (I'll mention something called "downtime" later on).
On top of that, it also requires (without getting too technical) a file transfer protocol, commonly known as FTP, so it can show web browsers your website in its intended form. All of these things are either expensive or require a high level of technical skill (or both) to set up and maintain. You could absolutely go out there, learn all of this yourself and set it up, but instead of buying and maintaining a server of your own, why not just rent hosting instead? This is where Bluehost comes in. You rent their servers (called shared hosting) and launch a website using those servers. Because Bluehost stores all your files, the company also lets you set up your content management system (CMS, for short), such as WordPress. WordPress is an extremely popular CMS, so it just makes sense to have that option available (almost every hosting company offers it now). In short, you no longer need to set up a server and then separately integrate software for building your content; it is now all rolled into one package. Why rent rather than run your own? Well, imagine your server is in your house. If anything were to happen to it at all, all your files are gone. If something goes wrong with its internal processes, you need a technician to fix it. If something overheats, breaks down or gets damaged, that's no good! Bluehost takes all these headaches away and handles everything technical: pay your server "rent", and they will take care of everything. And once you purchase the service, you can start focusing on adding content to your site, or put your effort into your marketing campaigns. What services do you get from Bluehost?
Bluehost offers a range of different services, but the primary one is, of course, hosting. The hosting itself comes in different types, by the way: you can rent a shared server, a dedicated server, or a virtual private server. For the purpose of this Bluehost review, we will focus on the hosting services and other offerings that a blogger or an online business owner would need, rather than going too deep into the rabbit hole of the services aimed at more experienced users.
- WordPress, WordPress PRO, and e-Commerce: these hosting packages let you host a site using WordPress and WooCommerce (the latter of which lets you run an online store). After buying any of these packages, you can start building your site with WordPress as your CMS.
- Domain marketplace: you can also buy your domain from Bluehost instead of another domain registrar. Doing so makes it easier to point your domain at your host's name servers, since you're using the same provider.
- Email: once you have bought your domain name, it makes sense to also get an email address tied to it. As a blogger or online entrepreneur, you should essentially never use a free email service like Yahoo! or Gmail; an address like that looks unprofessional. Fortunately, Bluehost gives you one for free with your domain.
Bluehost also offers dedicated servers. And you may be asking, "What is a dedicated server, anyway?" Well, the thing is, Bluehost's basic hosting packages can only handle so much traffic for your site, after which you'll need to upgrade your hosting. The reason is that the standard servers are shared. What this means is that one server can be serving two or more websites at the same time, one of which can be yours. What does this mean for you?
It means that the single server's resources are shared, and it is handling multiple tasks at any given time. Once your website starts to hit 100,000 visits per month, you are going to need a dedicated server, which you can also get from Bluehost for a minimum of $79.99 per month. This is not something you should worry about when you're starting out, but you should keep it in mind for sure. Bluehost pricing: how much does it cost? In this Bluehost review, I'll be focusing mainly on the Bluehost WordPress hosting packages, since that is the most popular one, and probably the one that you're looking for and that will suit you best (unless you're a big brand, company or site). The three available plans are as follows:
- Basic plan: $2.95 per month / $7.99 regular price
- Plus plan: $5.45 per month / $10.99 regular price
- Choice Plus plan: $5.45 per month / $14.99 regular price
The first price you see is the price you pay upon sign-up, and the second price is what the cost is after the first year of being with the company. So basically, Bluehost is going to charge you on an annual basis, and you can also choose how many years of hosting to pay for up front. How Many Websites Can I Host On My Bluehost Account If you choose the Basic plan, you will pay $2.95 x 12 = $35.40 starting today, and by the time you enter your 13th month you will pay $7.99 per month, which is also billed annually. If that makes any sense. If you are serious about your website, you should 100% take the three-year option. This means that for the Basic plan you will pay $2.95 x 36 months = $106.20. Only by the time you hit your fourth year will you start paying $7.99 per month. If you think about it, this approach will save you about $120 over the course of three years.
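For readers who want to check that arithmetic, here is a quick Python sketch using the article's published Basic-plan prices; the variable names and the 36-month comparison are mine:

```python
# Renewal-price arithmetic for the Basic plan, using the article's figures
# (USD per month). "Savings" compares 36 months locked at the intro rate
# against 12 intro months followed by 24 months at the renewal rate.
INTRO = 2.95     # introductory rate, charged annually up front
REGULAR = 7.99   # renewal rate after the intro term

one_year = INTRO * 12                        # cost of a 1-year intro term
three_years_locked = INTRO * 36              # intro rate locked for 3 years
one_then_renew = INTRO * 12 + REGULAR * 24   # same 36 months, 1-year lock only

savings = one_then_renew - three_years_locked
print(f"1-year term:        ${one_year:.2f}")            # $35.40
print(f"3-year term:        ${three_years_locked:.2f}")  # $106.20
print(f"1yr + 2yr renewal:  ${one_then_renew:.2f}")
print(f"3-year savings:     ${savings:.2f}")             # ~$120, as claimed
```

The exact figure works out to $120.96, which matches the article's rounded "$120".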
It's not much, but it's still something. If you want to host more than one website (which I highly recommend, and if you're serious, you'll probably be adding more at some point in time), you'll want the Choice Plus plan. It lets you host unlimited websites. What does each plan offer? So, for the WordPress hosting plans (which are similar to the shared hosting plans, but are more tailored towards WordPress, which is what we'll be focusing on), the features are as follows. For the Basic plan, you get:
- One website only
- A site secured with an SSL certificate
- A maximum of 50 GB of storage
- A free domain name for a year
- $200 in marketing credit
Keep in mind that domains are purchased separately from the hosting. You can get a free domain with Bluehost here. For both the Bluehost Plus and Choice Plus plans, you get the following:
- An unlimited number of websites
- A free SSL certificate
- No storage or bandwidth limit
- A free domain for one year
- $200 in marketing credit
- One Office 365 mailbox, free for one month
The Choice Plus plan has the added benefit of CodeGuard Basic backup, a backup system in which your files are saved and replicated. If any sort of accident happens and your website data disappears, you can restore it to its original form with this feature. Note that although both plans cost the same up front, the Choice Plus plan then defaults to $14.99 per month, regular price, after the number of years you've chosen. What are the benefits of using Bluehost? So, why choose Bluehost over other web hosting services? There are hundreds of web hosts, many of which are resellers, but Bluehost is one of a select few that have stood the test of time, and it's probably the best known out there (and for good reasons).
Here are the three main benefits of choosing Bluehost as your hosting provider:
- Server uptime: your site won't be visible if your host is down; Bluehost has more than 99% uptime. This is very important when it comes to Google SEO and rankings; the higher the better.
- Bluehost speed: how quickly your server responds determines how fast your site shows in a browser; Bluehost is lightning fast, which means you will lower your bounce rate. While not the very best when it comes to loading speed, it's still hugely important to be fast, to improve the user experience and your rankings.
- Unlimited storage: if you get the Plus plan, you need not worry about how many files you store, such as videos; your storage capacity is unlimited. This really matters, because you'll probably run into storage issues down the track, and you don't want that to ever become a hassle.
Finally, customer support is 24/7, which means that no matter where you are in the world, you can contact the support team to fix your website problems. Pretty standard nowadays, and easy to take for granted, but it's also really important. How Many Websites Can I Host On My Bluehost Account Also, if you got a free domain with them, a $15.99 fee will be deducted from any refund of the amount you originally paid (I imagine this is because it sort of takes the domain off the market; I'm not sure about this, but there is probably a hard cost for registering it). Finally, any refund requests after 30 days are void (although in all honesty, they probably should be strict here). So as you can see, this isn't exactly a "no questions asked" policy, like with some of the other hosting options out there, so make sure you're OK with the policies before going ahead with the hosting.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00539.warc.gz
CC-MAIN-2021-39
13,801
82
https://www.techquila.co.in/huawei-centos-linux-distro-openeuler-open-source/
code
Huawei has released the source code of its CentOS-based Linux distribution, openEuler. OpenEuler is the community edition of EulerOS. You won't find the source code on the Microsoft-owned GitHub, though, so don't go looking there: openEuler's source code has been made available on Gitee, a Chinese GitHub alternative. OpenEuler is based on CentOS and has been further developed by Huawei for building enterprise applications. It is designed for ARM64-architecture servers. At the moment, there are more than 50 contributors and 600 commits to openEuler, according to the project. OpenEuler is compatible with OCI and will cater to IoT and cloud infrastructure needs. The repositories include two new projects: iSulad and A-Tune. iSulad, written in C, is a lightweight container runtime daemon designed for IoT and cloud infrastructure, whereas A-Tune is an AI-based OS tuning tool. Furthermore, Huawei says these systems were built on Huawei Cloud through script automation.
How To Download OpenEuler
You can download the openEuler ISO file directly from its official site or click the download button given below. As for documentation, you will have some difficulty, as it is not yet available in English; we'll have to wait for an English version to arrive. This distro seems best suited to hardcore Linux fans. Stay tuned for more updates on openEuler and other Linux distros. What do you think about the new Linux distro? Comment down below!
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817819.93/warc/CC-MAIN-20240421194551-20240421224551-00664.warc.gz
CC-MAIN-2024-18
1,548
8
http://rsos.royalsocietypublishing.org/content/3/7/160256
code
Animals are predicted to selectively observe and learn from the conspecifics with whom they share social connections. Yet, hardly anything is known about the role of different connections in observation and learning. To address the relationships between social connections, observation and learning, we investigated transmission of information in two raven (Corvus corax) groups. First, we quantified social connections in each group by constructing networks on affiliative interactions, aggressive interactions and proximity. We then seeded novel information by training one group member on a novel task and allowing others to observe. In each group, an observation network based on who observed whose task-solving behaviour was strongly correlated with networks based on affiliative interactions and proximity. Ravens with high social centrality (strength, eigenvector, information centrality) in the affiliative interaction network were also central in the observation network, possibly as a result of solving the task sooner. Network-based diffusion analysis revealed that the order in which ravens first solved the task was best predicted by connections in the affiliative interaction network in a group of subadult ravens, and by social rank and kinship (which influenced affiliative interactions) in a group of juvenile ravens. Our results demonstrate that not all social connections are equally effective at predicting the patterns of selective observation and information transmission. Individual variation in traits such as social rank, motivation and personality can result in some individuals acquiring novel information sooner and/or more accurately than others [1–4]. Such variation in information acquisition introduces opportunities for conspecifics to observe and learn from each other, resulting in social transmission where novel behaviour spreads from one individual to another.
For instance, when faced with a novel task, naive individuals can acquire information about the task solution by observing informed group members, before using this information to solve the task themselves. However, social transmission rarely happens at random. Instead, animals are frequently selective in which informed conspecifics' behaviour they observe. For example, vervet monkeys preferentially observe and acquire information from the behaviour of females, ravens use information from their kin when they are in groups of same-aged conspecifics, chimpanzees acquire information by observing older and/or dominant group members [8,9] and domestic fowl use information from dominant conspecifics. The social connections between conspecifics can also influence who observes whom and who learns from whom. Group members frequently interact with each other in multiple social contexts that range from affiliative interactions to aggressive interactions [12–16]. The presence and the frequency of social connections in one or more of these contexts may drive selectivity in who observes whom, eventually resulting in animals acquiring and using information from the conspecifics to whom they are socially connected. Yet, hardly anything is known about the effectiveness of different social connections in reliably predicting the patterns of information transmission. Here, we analyse the relationships between social connections, selective observation and learning patterns to investigate the role of different social contexts in information transmission. Social network analysis provides a powerful tool to quantify social connections in multiple contexts [13,17]. Use of network models such as network-based diffusion analysis (NBDA), which infer social transmission of a novel behaviour when its pattern of diffusion follows a social network, makes it possible to analyse the role of network connections in information transmission.
A variant of NBDA, order of acquisition diffusion analysis (OADA), analyses the temporal order with which different individuals perform a novel behaviour. NBDA and OADA integrate networks with learning experiments [12,14,19–24] and have been used to explore transmission of tool use in chimpanzees, lobtail-feeding technique in whales, foraging traditions in tit species [21,23], latency of novel task discovery (but not task solving) in fish, and patch discovery through cross-species association networks in mixed-species flocks. However, whether or not different types of social connections, such as affiliative and agonistic interactions, influence information transmission to varying extents has not yet been tested. Inferences about group transmission can only be made when naive individuals have the option of choosing which informed conspecifics to observe and learn from [25,26]. Attending to others' behaviour can play a significant role in transmission if observation influences future behaviour. Yet, network analyses have rarely been used to quantify selectivity in attention during information transmission. To address whether naive individuals selectively attend to specific informed group members, an observation network based on who observes whom in the presence of novel information can be constructed and analysed in relation to networks based on social connections. Finding that the same social network correlates with both the observation network and the order with which different individuals learn a novel behaviour would provide strong evidence for the role of that social context in information transmission. To determine which social connections predict selective observation and information transmission, we worked with two common raven (Corvus corax) groups. Ravens are renowned for paying attention to and learning from each other [7,28].
Adult ravens are pair-bonded and territorial, but non-breeding ravens form fission–fusion groups in which they build strong relationships with some of their conspecifics [30,31]. In each group, we constructed three social networks on affiliative interactions, agonistic interactions and physical proximity. We then seeded novel information, first by isolating and training one female from each group on a foraging task, and then allowing those females to perform the solution to their respective groups. We constructed an observation network in each group, based on which naive individuals observed which informed conspecifics' task-solving behaviours. These observation networks were then used to determine who acquired task-solving information from whose behaviour, before using this information to solve the task. Affiliative interactions, such as allo-preening (or allo-grooming) and food sharing, are considered reliable indicators of strong social bonds in multiple taxa [32–35]. If acquiring information about a novel problem requires multiple observations from a close distance, then individuals would be more likely to observe their affiliates with whom they share social bonds, as these bonds would increase their tolerance for each other in close proximity. When tested in dyads, ravens pay more attention to the behaviour of their affiliates than their non-affiliates. A similar selectivity may exist in a group, leading to ravens selectively attending to and acquiring information from their affiliates. This would result in correlations between the networks based on positive social connections, such as affiliative interaction and physical proximity networks, and both the observation network and the order in which the task solution is performed by group members. Thus, we predicted that networks based on positive social connections will influence the patterns of selective observation and information transmission. We used three complementary approaches to test this prediction.
First, we analysed whether naive individuals selectively observe the task-solving behaviour of the informed conspecifics with whom they share positive social connections. We used network regression analysis to determine whether the connections in the affiliative interaction network and in the proximity network predict the connections in the observation network. Second, we investigated whether socially central individuals solve the task sooner than others. Central individuals are well connected to their group members and are thus more likely to be connected to at least one conspecific who has already solved the task. Being socially central is advantageous for learning from others, especially if individuals preferentially acquire and use information from the conspecifics with whom they share social connections. We predicted that ravens with high social centrality in affiliative interaction and proximity networks will solve the task sooner than their less central conspecifics, providing further evidence that positive social connections are influential in observation and transmission. Finally, we used OADA to determine whether networks based on positive social connections reliably predict information transmission. If this is the case, then the naive individuals who are connected to informed group members in the affiliative interaction and proximity networks should learn the solution sooner than those who are not connected to informed group members.
2. Material and methods
2.1. Social network data collection
We studied two captive raven groups at the Haidlhof Research Station, an outdoor laboratory of University of Vienna and of University of Veterinary Medicine, Vienna in Austria. One group included 12 subadult ravens (2–3 years old at the time of testing; seven females, five males; electronic supplementary material, table S1). The second group included 10 juveniles (less than 1 year old; three females, seven males; electronic supplementary material, table S1).
Relatedness differed between these two groups; 9 of 10 juveniles had at least one sibling in their group, while only 4 of 12 subadults had a sibling. Non-breeding ravens form fission–fusion groups in the wild [30,31] where they frequently face changing group dynamics. Working with groups that varied in age and kinship allowed us to account for the role that group composition differences, such as variation in age and relatedness, plays in information transmission. The two groups were housed separately from each other in four connected outdoor enclosures (10 × 18 m), each of which featured indoor compartments and enrichment with branches, twigs and stones. Both groups were fed twice a day and had ad libitum access to water. All ravens were marked with unique colour bands and were habituated to the experimenter (I.G.K.). In each group, we collected social data with a handheld HD camcorder from outside the enclosures. These observational sessions were conducted for a minimum of 20 min per day for 98 days between September 2012 and February 2013, excluding the days on which task experiments were in session (13 January–10 February in subadults; 3–10 February in juveniles). The identity and the location of the ravens were narrated to the videos. We used all-occurrence sampling to collect affiliative and agonistic interaction data, and scan sampling (every 15 min) to collect proximity data. Affiliative interactions included two measures: physical contact (allo-preening, touching with feet and beak-to-beak contact) and sharing (manipulating food or objects within 1 m of each other, which indicates tolerance of each other in the presence of food or objects). Agonistic interactions included fights, chases and retreats after receiving threats. Physical proximity data also included two measures: sitting close and sitting on the same branch.
Sitting close was defined as two individuals perched close enough to make physical contact with each other without moving, but not actually interacting with each other. Sitting on the same branch was defined as perching on the same branch (branches were 2–4 m in length) and excluded the ravens who were sitting close to each other. If three ravens (A,B,C) were sitting in that order, close enough to make contact with their immediate neighbour, then A–B and B–C were considered sitting close, but A–C were considered to be sitting on the same branch.
2.2. Task trials
We used an artificial foraging task (clear Plexiglas box; 30 cm (l) × 12 cm (h) × 20 cm (w)) as novel information. The task required solving two steps, first by opening a Velcro strip holding a drawer shut, and subsequently by pulling a string to open the drawer (figure 1; electronic supplementary material, videos S1 and S2). We chose a female from each group and trained her on the task solution in a separate compartment that was out of sight of other ravens. These two females were chosen based on the results of previous experiments which showed that they were more likely than others to approach novel objects and solve cognitive tasks. Each of the training sessions lasted either for 30 min or until the female did not approach the task for 10 min. The subadult female first solved the task after three training sessions, while the juvenile female first solved it after six training sessions. Both females were able to solve the task consistently during the rest of the training sessions after having solved it once. We began the group testing phase in each group after their trained female had solved the task 10 consecutive times. All ravens were familiar with how to open the drawer from previous experiments. However, only the trained females had experience with the Velcro. Thus, we focused on Velcro learning for assessing information transmission.
During the group testing phase, we placed the whole group in a single compartment to allow all conspecifics to see the task solution. The task was presented for 30 min sessions. No more than three sessions were run per day in each group. Subadults required 27 sessions for all individuals to solve the task, while juveniles required 16 sessions. Each refill was considered a trial, and each session consisted of multiple trials (mean ± s.d. = 13.3 ± 5 trials in subadults, 14.9 ± 5 in juveniles). We used only one piece of reward (Frolic dog food) per trial to minimize scrounging. We placed the task in an open area, where group members could see it without branches blocking their view, but such that ravens in the other group could not see it. To minimize disturbances during task refilling, we filled the task on the spot by lifting it from the ground and blocking ravens' view of it. Each raven was free either to participate in the experiment (by observing or by contacting the task) or to move away from the experiment. No data on proximity or social interactions were collected during these sessions, nor on the days during which trials were run, to prevent the task presence from influencing the social connection data. From the trial videos, we noted the identity of the ravens who (i) contacted the task on any part except Velcro, (ii) contacted the Velcro but did not open it (unsuccessful manipulation), (iii) successfully opened the Velcro, the criteria by which we defined task solution and learning (ravens were familiar with how to open the drawer from previous experiments), (iv) took the reward, (v) observed another raven solve the task. Observing was defined as being within 1 m radius of the task while another raven opened it. This definition identified observation as attentiveness to task solution from close proximity. 
We chose 1 m as our cut-off for observing because multiple ravens were frequently around the task while it was solved (electronic supplementary material, video S2), and their presence may have prevented those who were farther than 1 m from seeing the solving technique. During the last sessions in each group (last three sessions in subadults, last two in juveniles), we moved the ravens who had solved the task out of the testing enclosure, to present the task only to those who had not yet solved it. During these sessions, non-solvers from each group were tested together (one subadult female and two subadult males were tested together, two juvenile males were tested together; electronic supplementary material, table S1). Although aggressive interactions such as physical fights rarely happened around the task, subordinates were sometimes displaced by more dominant conspecifics. Testing non-solvers allowed us to determine whether they had acquired information about the task solution during their observations, but did not solve due to competition or social interference. We separated these individuals only at the end of the trials in both groups, after the rest of their group members had solved the task, to minimize the effect that the separation may have on the overall transmission patterns.

2.3. Network analysis

Social data were converted into network matrices and analysed in UCINET (v. 6.507). We calculated three network measures (strength, eigenvector centrality, information centrality), each of which quantifies a different aspect of social centrality, and ranked each raven's measures from each network relative to their group members' measures. Strength, also known as weighted degree, defines the frequency of connections between pairs. Degree indicates how many individuals each group member is connected to, while strength indicates how frequently each of those connections happens.
We used Freeman's degree centrality in UCINET to calculate strength from weighted and directed networks. Directed networks (e.g. affiliative and agonistic interaction networks) include a separate actor and a receiver. In these networks, out-strength (weighted out-degree) indicates the frequency of interactions that an individual initiates, while in-strength (weighted in-degree) indicates the frequency of interactions that an individual receives. Eigenvector centrality provides insight into the centrality of an individual based on the centrality of those to whom it is connected. Information centrality is useful in determining the amount of information that can be transmitted in the network, by accounting for each network connection that can potentially reach a particular individual. We analysed networks as weighted networks, when possible, to preserve information about the strength of the interactions. Weighted networks are especially useful in captive groups and in small groups where the frequency of connections is more informative than their presence [41,42]. We constructed an observation network based on who observed whom during task solving. Thus, in each group, we ended up with four distinct networks (affiliative interactions, agonistic interactions, proximity and observation). Observation networks included only directed (non-reciprocal) connections, because observation data were obtained only from the naive ravens before they solved the task for the first time. Thus, in our observation networks, a naive raven who observed an informed conspecific was never observed by that particular conspecific. This allowed us to include only the observations that contributed to the first task-solving event for each individual. We then normalized the observation networks because some ravens had solved the task more frequently than others did.
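As a concrete illustration of these measures, the sketch below computes out-strength, in-strength and a simple eigenvector centrality from a small weighted, directed adjacency matrix. This is a hypothetical toy example in plain Python (the study itself used UCINET), and the matrix values are invented:

```python
# Toy sketch (not the authors' code): centrality measures from a small
# weighted, directed interaction matrix. Rows = actors, columns = receivers.
W = [
    [0, 3, 1],   # raven A initiates 3 interactions towards B, 1 towards C
    [2, 0, 0],   # raven B initiates 2 interactions towards A
    [0, 4, 0],   # raven C initiates 4 interactions towards B
]
n = len(W)

# Out-strength: total interactions initiated; in-strength: total received.
out_strength = [sum(W[i]) for i in range(n)]
in_strength = [sum(W[i][j] for i in range(n)) for j in range(n)]

# Eigenvector centrality on the symmetrized matrix via power iteration:
# an individual is central if its partners are themselves central.
S = [[W[i][j] + W[j][i] for j in range(n)] for i in range(n)]
x = [1.0] * n
for _ in range(100):
    x_new = [sum(S[i][j] * x[j] for j in range(n)) for i in range(n)]
    norm = max(x_new) or 1.0      # normalize so the most central scores 1
    x = [v / norm for v in x_new]
```

Here B receives the most interactions and also ends up with the highest eigenvector score, illustrating how strength and eigenvector centrality capture related but distinct aspects of social centrality.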
For example, if A solved the task X times before B first solved it, and B observed A for Y times before solving it for the first time, then Y/X was entered into the cell corresponding to B observing A. Using Multiple Regression Quadratic Assignment Procedure (MRQAP, double Dekker semipartialling variant) in UCINET in each group, we analysed which factors predicted the connections in the observation networks. The dependent variable was the observation network, and the independent variables were the networks on affiliative interactions, agonistic interactions, proximity, sex similarity (1 for same sex, 0 for different sexes) and (relative) similarity in social rank. Social rank was calculated from a linear hierarchy based on retreats after receiving a threat (MatMan 1.1, I&SI method, Noldus Information Technology) [44,45]. MRQAP has previously been used to analyse the relationships between networks in multiple species [46–51]. It first runs a regression test for the corresponding cells of each matrix, and then permutes the rows and the columns of the dependent matrix to repeat this regression multiple times (we ran 10 000 permutations) [38,52].

2.4. Task-solving order analysis

To determine whether ravens with high social centrality solved the task sooner and thus had high centrality in the observation network, we used the non-parametric Spearman's rank correlation test on the ranked centrality measures. We ran two analyses using Spearman's test. First, we analysed the correlations between the ranked centrality measures from the social networks (affiliative interaction, agonistic interaction, proximity) and the task-solving order. Second, we analysed the correlations between the ranked centrality measures from the social networks and the observation networks. For this second analysis, only the same measures were compared with each other (e.g. in-strength in the affiliative network was compared only to in-strength in the observation network).
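The permutation logic behind MRQAP (§2.3) can be sketched as a simplified, single-predictor QAP test. This is hypothetical illustration code, not the UCINET implementation; the double Dekker semipartialling variant additionally handles multiple predictors and collinearity among them:

```python
import random

# Simplified QAP idea: correlate corresponding off-diagonal cells of two
# networks, then build the null distribution by permuting rows and columns
# of the dependent matrix with the SAME node relabelling, which preserves
# the matrix's internal structure.
def cells(M):
    n = len(M)
    return [M[i][j] for i in range(n) for j in range(n) if i != j]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def qap_pvalue(dep, ind, n_perm=2000, seed=1):
    random.seed(seed)
    n = len(dep)
    observed = corr(cells(dep), cells(ind))
    hits = 0
    for _ in range(n_perm):
        p = list(range(n))
        random.shuffle(p)            # one relabelling of the nodes
        permuted = [[dep[p[i]][p[j]] for j in range(n)] for i in range(n)]
        if corr(cells(permuted), cells(ind)) >= observed:
            hits += 1
    return observed, hits / n_perm   # one-sided p-value
```

If the dependent network closely tracks the predictor network, the observed cell-wise correlation is rarely matched under permutation and the p-value is small.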
The trained females were excluded from the rank correlation analyses. If ravens with high social centrality are observed more frequently and/or by more individuals, this would suggest that they act as important information sources during information transmission. We used the OADA variant of the NBDA to determine the predictive power of different networks. We analysed which social networks (affiliative interactions, agonistic interactions, proximity) predict the order with which ravens perform the task solution for the first time. Note that we did not include observation networks in OADA. OADA assumes that the rate of transmission from an informed individual (j) to a naive individual (i) is proportional to the network connection between them (aij). However, the model can be expanded such that the rate is proportional to aij × wj, where wj is the transmission weight reflecting the total number of times (j) solves the task. Models with transmission weights are based on the assumption that transmission is proportional to the rate at which the task solution is performed by an informed conspecific. Models without transmission weights assume that all informed conspecifics transmit the task solution at the same rate regardless of how often they solve the task themselves. We fitted models both with and without transmission weights. Sex and social rank in both groups, as well as kinship in juveniles, were included as variables that potentially influence the task-solving order. We used an information theoretic approach, using corrected Akaike's information criteria (AICc), to account for model selection uncertainty and to assess the support for each network relative to models based on asocial learning (models based on asocial learning included sex and social rank; see the electronic supplementary methods for model details).
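A hedged sketch of the OADA rate assumption described above (not the authors' NBDA code): each naive individual's acquisition rate is a baseline asocial rate plus a social term proportional to its summed connections to informed demonstrators, optionally weighted by how often each demonstrator solves. The default s = 7.76 below is borrowed from the subadult physical-contact estimate reported in the Results; the baseline value is arbitrary:

```python
# Sketch of the OADA rate structure: rate_i = baseline * (1 + s * sum_j a_ij * w_j)
# over informed demonstrators j. Passing w=None reproduces the model without
# transmission weights (all demonstrators transmit at the same rate).
def acquisition_rates(A, informed, w=None, s=7.76, baseline=1.0):
    n = len(A)
    if w is None:
        w = [1.0] * n                     # model without transmission weights
    rates = {}
    for i in range(n):
        if i in informed:
            continue                      # informed individuals have no acquisition rate
        social = sum(A[i][j] * w[j] for j in informed)
        rates[i] = baseline * (1.0 + s * social)
    return rates
```

With these invented numbers, a naive individual with a single unit connection to one demonstrator acquires the solution at 8.76 times the baseline asocial rate, while an unconnected individual stays at the baseline.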
3.1. Observing conspecifics' task-solving behaviour attracts ravens' attention to the task

All ravens (n = 22) participated in the experiments, and all except one subadult male solved the task by opening the Velcro strip before pulling the drawer to access the reward. Most ravens, except the two juveniles who were tested separately from their group in the last two sessions, observed at least one group member within 1 m radius of the task before solving it for the first time (number of task-solving events observed before solving, subadults: 20.09 ± 32, juveniles: 17.78 ± 17.9; number of conspecifics observed before solving, subadults: 3.63 ± 2, juveniles: 2.55 ± 2.1). Before solving the task for the first time, each raven contacted the task at least once by pecking at it or by pulling on the string (mean ± s.d. of contacts before solving the task, subadults: 12.72 ± 13.9, juveniles: 2.89 ± 1.6; electronic supplementary material, table S1). Most contact occurred on places other than Velcro (total number of contacts before solving, subadults: 140, juveniles: 26; contacts on Velcro, subadults: 1, juveniles: 5). The five contacts on Velcro by naive juveniles were extremely brief, because they got displaced by a more dominant conspecific soon after contacting the Velcro. Overall, ravens were more likely to contact the task after having observed a conspecific within a 1 m radius in previous trials (number of contacts after observing, subadults: 12.55 ± 14, juveniles: 2.44 ± 2; number of contacts before observing, subadults: 0.44 ± 0.7, juveniles: 0.18 ± 0.4; electronic supplementary material, table S1).
Regardless of social rank or sex, ravens who contacted the task frequently had also observed frequently (multiple regression: F3,19 = 5.039, p = 0.012 for the whole model; effect of observing frequency on contact frequency: F = 14.219, p = 0.002; effect of social rank: F = 0.108, p = 0.746; effect of sex: F = 0.228, p = 0.639), suggesting that observing others attracted ravens' attention to the task.

3.2. Ravens observe their affiliates

We calculated the density of the networks to determine whether ravens were selective in their social connections and in their observations. A network based on high social selectivity has low density, which suggests that the majority of connections that could potentially exist in the network do not actually exist. Subadults were more selective in their social connections than juveniles were (affiliative network density in subadults: 0.182, in juveniles: 0.877, proximity network density in subadults: 0.409, in juveniles: 0.911; agonistic network density in subadults: 0.576, in juveniles: 0.656). Observation networks had low density in both groups (subadults: 0.303, juveniles: 0.256, figure 2a,b), suggesting that ravens were highly selective in whom they observed. To determine which factors influenced selectivity in who observed whom, we used the MRQAP analysis. MRQAP revealed that ravens selectively observed the group members towards whom they initiated frequent affiliative interactions, or to whom they frequently perched in close proximity (MRQAP, table 1 and figure 2c,d). Observation did not depend on homophily; ravens were not more likely to observe the same-sex conspecifics or those with similar social rank to themselves (table 1). Proximity and social interaction data were collected only on the days when the task trials were not in session, allowing us to reliably separate proximity and interaction networks from the observation networks.
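Density, as used here, is the proportion of possible directed ties that are actually present; a minimal sketch (invented toy matrix):

```python
def density(M):
    """Proportion of possible directed ties (self-loops excluded) that are present."""
    n = len(M)
    present = sum(1 for i in range(n) for j in range(n) if i != j and M[i][j] > 0)
    return present / (n * (n - 1))

# Example: 2 of the 6 possible directed ties among 3 individuals exist,
# so density = 2/6 ≈ 0.33 — a selective network like the observation networks above.
toy = [[0, 4, 0],
       [1, 0, 0],
       [0, 0, 0]]
```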
Overall, ravens' decision of whom to observe was determined mainly by the socio-positive behaviours such as affiliative interactions and tolerance of close proximity.

3.3. Ravens with high affiliative network centrality play important roles in transmission

To address whether socially central ravens solved the task sooner, we ranked each individual's centrality measures (strength, eigenvector, information centrality) in each network relative to their group members' measures (electronic supplementary material, table S2). In both groups, the majority of the centrality measures from the affiliative interaction network correlated with the task-solving order. In particular, ravens who solved the task sooner had initiated and received frequent affiliative interactions (Spearman's rank correlation between task-solving order and affiliative network measures in subadults: out-strength r = 0.72, p = 0.019, in-strength r = 0.722, p = 0.018; in juveniles: out-strength r = 0.85, p = 0.004, in-strength r = 0.817, p = 0.007, figure 3a). Juveniles who solved the task sooner had high information and eigenvector centrality in the affiliative network (juveniles' information centrality: r = 0.817, p = 0.007, eigenvector centrality: r = 0.800, p = 0.010; subadults' information centrality: r = 0.073, p = 0.841, eigenvector centrality: r = 0.491, p = 0.149). Individuals with high affiliative network centrality were observed more by others and had high centrality in the observation network (Spearman's rank correlation between affiliative and observation in-strength in subadults: r = 0.924, p < 0.001; in juveniles: r = 0.897, p < 0.001, figure 3b; between affiliative and observation information centrality in subadults: r = 0.838, p = 0.001; in juveniles: r = 0.854, p = 0.003).
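The rank-correlation step reported above can be sketched in a few lines of plain Python (a toy version without tie handling; the actual analysis used standard statistical software):

```python
# Spearman's rho via the classic formula rho = 1 - 6*sum(d^2) / (n*(n^2-1)),
# where d is the difference between the two rank vectors.
def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r                      # no tie correction in this toy version

def spearman(xs, ys):
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

For example, comparing hypothetical centrality ranks [1, 2, 3, 4] with solving order [2, 1, 3, 4] gives rho = 0.8, a strong but imperfect rank agreement.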
The majority of the centrality measures from the agonistic interaction network and the proximity network were correlated neither with the task-solving order nor with the observation network centrality measures in either group (electronic supplementary material, table S3). Overall, ravens with high centrality in the affiliative interaction network solved the task sooner and were central in the observation network as a result of being observed more by naive conspecifics.

3.4. Transmission of task solving in subadults

Using OADA, we calculated the support that each network (affiliative interactions, agonistic interactions, proximity) provided for social transmission relative to models based on asocial learning. We first calculated the Akaike weight for each model we fitted, and then obtained the relative support for each network by summing over all the models that included that particular network. We also calculated the support for the asocial models by obtaining summed Akaike weights for these models. We then obtained a ‘support ratio’ by dividing the support for each network by the support for the asocial models. The support ratio thus indicates the support that each network provides for social transmission relative to asocial learning (see the electronic supplementary methods for details). The strength of support ratios can be interpreted, as a guideline, such that a p-value of 5% in a likelihood ratio test between two models that differ in one parameter (e.g. social transmission via one network) would correspond to a support ratio of 2.5. The affiliative network with transmission weights provided the most support for social transmission against asocial learning in subadults. The support ratio for the affiliative network with transmission weights was 2.24, meaning that there was 2.24 times more support for social transmission following this network than there was support for asocial learning (table 2).
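The Akaike-weight bookkeeping behind the support ratio can be sketched as follows (the AICc values here are invented for illustration, not those reported in table 2):

```python
import math

# Akaike weights: each model's weight is exp(-0.5 * delta_AICc) normalized
# over all models; the support ratio divides a network's summed weight by
# the summed weight of the asocial-learning models.
def akaike_weights(aiccs):
    best = min(aiccs.values())
    rel = {m: math.exp(-0.5 * (a - best)) for m, a in aiccs.items()}
    total = sum(rel.values())
    return {m: r / total for m, r in rel.items()}

# Hypothetical AICc values for three single-model candidates.
aiccs = {"affiliative": 20.0, "proximity": 22.5, "asocial": 21.6}
w = akaike_weights(aiccs)
support_ratio = w["affiliative"] / w["asocial"]
```

With these invented numbers the affiliative model has roughly 2.2 times the support of the asocial model, the same order of magnitude as the 2.24 reported for subadults.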
The affiliative network was composed of two behaviours: physical contact (such as allo-preening) and sharing. We analysed these two components separately to explore which one contributed to the observed patterns of transmission. Running OADA separately on physical contact and on sharing revealed that the physical contact component provided the main support for social transmission (support ratio for physical contact = 3.42; support ratio for share = 1.64). Neither the agonistic interaction network nor the proximity network provided support in subadults (support ratio for agonistic interaction = 0.41; support ratio for proximity = 1.73). Even when we separated the proximity network into its two components (sitting close and sitting on the same branch), as we had done with the affiliative interaction network, we did not find support for transmission (support ratio for sitting close = 1.76; support ratio for sitting on the same branch = 1.07). Furthermore, there was no support for the effect of social rank (total Akaike weight for social rank = 37.29%) and weak support for the effect of sex (total Akaike weight for sex = 54.87%). Overall, the affiliative interaction network was the best predictor for transmission in subadults. For the physical contact component of the affiliative network, which provided the main support for social transmission, we calculated the social transmission parameter (s) to estimate the rate of social transmission, relative to asocial learning, per unit connection (i.e. transmission between two individuals with connection = 1 and transmission weight = 1). The social transmission parameter (s) was 7.76 (95% CI = [5.43, 2100.19]), meaning that a naive raven, who had a single connection of 1 to an informed individual who solved the task once per minute, was 7.76 times more likely to solve the task socially than asocially.
We converted this measure into the predicted proportion of task solutions that occurred by social transmission (see for details of the conversion). We estimated that 59.7% (57.1–66.6%) of the first task solutions occurred by social transmission in subadults. When viewed together with the positive relationships between affiliative interaction and observation networks, the OADA results suggest that selective observation of affiliates determined the pathways of transmission in this group.

3.5. Transmission of task solving in juveniles

OADA in juveniles revealed that the affiliative interaction network and the proximity network provide support for social transmission (support ratios: affiliative = 6.82, proximity = 6.85, table 2). The proximity network included two components: sitting close and sitting on the same branch. The sitting close component without the transmission weights provided the main support (support ratio = 61.48). However, the social transmission rate per unit connection was very low (s = 1.08 × 10^−8), suggesting that other factors besides social connections in these networks better predicted the task-solving order. Social rank had a strong effect on the task-solving order in juveniles (total Akaike weight for social rank = 95.80%). Juvenile ravens were 2.6 times more likely to solve the task with each increase in rank (95% CI = [1.3, 8.3]). Yet, rank alone was not sufficient to fully explain the transmission patterns because females solved the task 27.3 times sooner than males of the same rank (95% CI = [1.09, 2618]). For instance, the first two solvers were the two dominant males in the group, but they were also siblings of the trained female. The next four solvers (two males, two females) were also siblings of each other.
The two females from this sibling group had lower social rank (rank 7 and 8) than the ravens who solved the task later (ranks 5, 6, 10; ravens ranking 6 and 10 are the two juvenile males to whom we presented the task separately from others). These patterns prompted us to explore the potential role of kinship in transmission. We constructed a kinship network by assigning a connection of 1 between the siblings, and a connection of 0 between the non-siblings. An OADA model based on the kinship network was better supported than the asocial model which included social rank and sex (kinship network AICc = 15.04, support ratio = 55.4; asocial model AICc = 23.06). Besides playing a role in the transmission patterns, kinship was also a strong predictor of the affiliative interactions between juveniles (MRQAP, dependent matrix: affiliative network; independent matrices: kinship r = 0.638, p < 0.001, sex r = −0.045, p = 0.364, social rank r = 0.015, p = 0.219). Notably, juveniles initiated their most frequent affiliative interactions towards one of their siblings (figure 2; electronic supplementary material, table S1). Overall, transmission in the juvenile group was predicted by a combination of social rank and the kinship network, which also strongly influenced juveniles' affiliative interactions.

We demonstrate positive relationships between social connections, observation patterns and information spread in two raven groups. Networks based on affiliative interactions and physical proximity were positively correlated with an observation network based on who attended to whose task-solving behaviour, demonstrating that ravens observed their affiliates with whom they shared positive social connections (i.e. affiliative physical contact, food sharing and tolerance of close physical proximity).
Information spread was best predicted by social transmission through the affiliative interaction network in the subadults, and by a combination of social rank and social transmission through the kinship network (which influenced affiliative interactions) in juveniles. In particular, ravens with high social centrality solved the task sooner than their less central conspecifics, which resulted in them being central in the observation network due to being observed frequently. Together, these results demonstrate the importance of accounting for multiple types of social connections and attributes (e.g. age, sex, rank, kinship) when investigating spread of information in groups.

4.1. Observation networks are a valuable tool in transmission studies

The robust positive relationships between networks based on observation and networks based on social connections provide empirical evidence that observation networks are a valuable tool in transmission studies. Observation can play at least two roles in information transmission. First, observing conspecifics interact with a novel task or a novel object may decrease neophobia and increase interest in the task or the object. This effect may be especially pronounced in species with high neophobia, such as ravens. Naive ravens were more likely to interact with the novel task after observing informed conspecifics interact with it, a pattern that is also documented in meerkats and squirrel monkeys. Second, naive individuals may observe informed conspecifics to learn the association between their behaviour and the outcome, for which repeated observations from a close distance may be necessary [54–56]. Such repeated instances of observation can only be achieved if the observer(s) and the observed individual share positive social connections, allowing them to tolerate each other in close proximity. In our study, networks based on affiliative interactions and physical proximity were the most reliable predictors of who observed whom.
Similarity in sex or social rank was not influential in ravens' decision of whom to observe, suggesting that group members of different sexes or social ranks can observe each other if they share positive social connections. We suggest that more information transmission studies should utilize observation networks when assessing the relationships between social connections and information acquisition. Our observation networks included only the group members who were observing within 1 m of the task. This was necessary because multiple ravens were present around the task during the trials, possibly preventing those who were farther away from the task from seeing the solution technique. However, it is possible that ravens may have observed from a distance, especially during the trials in which only a few ravens were present around the task. Future research on information transmission should account for the possibility that conspecifics may acquire information from others by observing from a distance, as has been shown in New Caledonian crows.

4.2. Quantifying multiple social connections is essential for understanding observation and transmission

Not all social connections were equally effective at predicting the patterns of selective observation and information spread. In subadults, only the affiliative interaction network, but not the proximity network nor the aggressive interaction network, provided support for social transmission against asocial learning. Furthermore, there was considerable variation in how reliably different types of affiliative behaviours predicted transmission. For example, the affiliative network included two components (physical contact such as allo-preening, and food/object sharing), and the main support for transmission came from the physical contact component.
Allo-grooming (and allo-preening) is one of the main forms of social bonding in animals, and the dyads with the strongest social bonds tend to groom each other more frequently than the dyads with weak or no bonds. Such strong positive social bonds would allow conspecifics to tolerate each other in close proximity, motivating them to observe each other's task-solving behaviour to acquire information about the task, which they would then use to solve the task. For example, ravens with high affiliative network centrality in both groups solved the task sooner, possibly because they were connected to at least one informed conspecific whom they could repeatedly observe from a close distance. These central ravens were then observed more by naive conspecifics, and thus had high centrality in the observation networks, leading to strong relationships between affiliative interaction networks, observation networks and information transmission. Studies on information transmission will greatly benefit from including multiple networks based on different types of social connections. However, in doing so, it will be critical to ensure that the social connection data are collected independently of the novel information data. The presence of resources (e.g. a novel task) may bias associations and social interactions, causing individuals to associate or interact with the conspecifics with whom they may not have associated or interacted otherwise. As a result, network data obtained in the presence of a task may not be representative of the true social connections between conspecifics. We avoided this issue by obtaining social connection data (i.e. interaction and proximity network data) only during the days in which we did not run the task trials.
We strongly suggest that the potential confounding effects of task presence on social data are kept in mind during group transmission studies, particularly when analysing the relationships between social transmission patterns and the social networks that are obtained in the presence of the novel information of interest.

4.3. Group composition influences transmission patterns

The role of social connections in information acquisition and transmission may change due to differences in group composition and structure, especially in species that face frequent changes in group dynamics. Individuals living in fission–fusion groups, such as wild non-breeding ravens [30,31], frequently have to deal with changing group dynamics. Although the captive groups we studied did not experience fission–fusion dynamics, because they differed in age (subadult versus juvenile) and kinship, we were able to explore the influence of age and kinship variation on transmission. In subadults, selective observation of affiliates determined the task-solving order and the pathways of information transmission. However, in comparison with the subadult group, evidence for social transmission through affiliative networks was not as robust in the juvenile group, as indicated by the low rates of social transmission per unit connection. Instead, a combination of social rank and kinship network predicted the task-solving order in juveniles. After a raven had solved the task, the next group members to solve were his siblings, starting with the most dominant sibling. It is possible that different sibling groups gained access to the task at different times. As a result, the order of access to the task, both within and between sibling groups, may have played a role in the pathways of transmission in juveniles. Even though the role of affiliative interactions in information spread was not as clear in juveniles as it was in subadults, affiliation may have had an indirect influence in this group.
The affiliative interaction network of juveniles had higher density than that of the subadults, and juveniles shared affiliative interactions with more group members than subadults did. Yet, despite the highly connected nature of the affiliative network, there was also evidence of social selectivity in juveniles' affiliative interactions with each other. Juveniles' most frequent affiliative interactions, which indicate strong social bonds, were with their siblings. In comparison, subadults' strongest bonds were generally within the male–female pairs. In both groups, frequent affiliative interactions predicted who observed whom most frequently. In juveniles, the strong social bonds between the siblings may have played an important role in transmission, allowing them to observe and learn from the siblings with whom they shared their strongest social bonds. By constructing networks on multiple social connections, and by integrating network analysis with information transmission experiments, we show that network analysis can be used to assess the patterns of selective observation and information transmission. Observation networks are rarely used in transmission studies, but they provide critical insights into understanding the relationships between social connections and spread of information. Yet, not all social connections are equally effective at influencing the patterns of observation and transmission. Connections based on positive social behaviours, such as affiliative interactions and tolerance of close physical proximity, can be more informative than other social connections. Furthermore, group differences may also play a major role in transmission. In some groups, networks based on individual attributes (e.g. age, sex, kinship) may be better predictors of information transmission patterns than networks based on social connections. 
Therefore, it is critical to account for multiple types of networks to achieve a comprehensive understanding of information transmission in groups.

The experimental procedures were approved by the internal board on animal ethics and experimentation at Faculty of Life Sciences, University of Vienna. The data supporting this article are included as part of the electronic supplementary material.

I.G.K., T.B., D.I.R. and C.S. conceptualized and designed the project. I.G.K. conducted the experiments and drafted the manuscript. N.M. and W.H. wrote the code to implement NBDA. I.G.K. and W.H. analysed the data. All authors helped revise the manuscript and gave approval for publication. The authors declare no competing interests.

This study was funded by Princeton University Maresi Memorial Fund to I.G.K., FWF grant no. Y366-B17 to T.B. and WWTF grant no. CS11-008 to C.S. We are grateful to the team at Haidlhof Research Station and the Department of Cognitive Biology at University of Vienna. We thank Corina Logan and an anonymous reviewer for comments on the manuscript.

- Received April 12, 2016.
- Accepted June 15, 2016.
- © 2016 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
[08:45] The next slide, which is my favorite slide because it blinks, on the left hand side, the bright vivid blue represents usable data. Below that to the right is the dark gray, and this is meant to highlight that in a static approach to data protection using static identifiers, there's a confusion between identity and information. I get one, I get the other. I can't split them apart. And you actually lose protection at scale, which I've mentioned earlier as you start to combine additional datasets. [09:18] And lastly, the whole value proposition is deterministic where you rely on knowing that an individual is that specific individual. And if you go to the right hand side, what that's meant to highlight is each individual square is an individual cell of data. Sometimes it's needed. Sometimes it's not needed. Why is it even in the conversation? So, you've dynamically changed identifiers. You will notice on the left, I call them static identifiers. On the right, they're dynamic de-identifiers. [09:49] What that means is because the identifier is changing, so if your name appeared - I had the prior example of three datasets that was ABCD, ABCD, and ABCD. Why isn't it ABCD, Q99 and DDID? Now each of the three datasets is anonymous. And unless you have permission to know that ABCD equals Q99 equals DDID, you don't know that. But you haven't lost the ability to re-link. And so, you actually are separating value from identity. And the more data that's added, the protection actually increases. And for most use cases, it's probabilistic. It's not deterministic, which is more than necessary for the desired use.
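A hypothetical sketch of the "dynamic de-identifier" idea from the talk (the function name and key scheme are invented for illustration): deriving a different pseudonym for the same identity in each dataset, using a per-dataset key, means the datasets no longer share a static identifier, yet a party holding the keys can still re-link records when permitted.

```python
import hmac, hashlib

# Same identity, different token per dataset: an HMAC keyed per dataset
# (keys and names below are made up for this sketch).
def deidentifier(identity: str, dataset_key: bytes) -> str:
    return hmac.new(dataset_key, identity.encode(), hashlib.sha256).hexdigest()[:12]

keys = {"dataset1": b"key-1", "dataset2": b"key-2"}
t1 = deidentifier("ABCD", keys["dataset1"])
t2 = deidentifier("ABCD", keys["dataset2"])
# t1 != t2: the two datasets cannot be joined on the identifier alone,
# but whoever holds the keys can recompute both tokens and re-link them.
```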
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00468.warc.gz
CC-MAIN-2024-10
1,631
3
http://www.studymode.com/essays/Reading-Theories-1339690.html
code
Writer's thesis; guessing meanings of words from context (vocabulary); use of modals and tenses; schema; text as interactive discourse; concepts of cohesion and coherence and connections between paragraphs; text purpose, and purpose at the paragraph level; understanding how language functions in context.
Models of reading:
- Metaphorical models of reading
- Specific models of reading
- Bottom-up models
- Top-down models
- Interactive models
- Interactive compensatory model (if inner knowledge is lacking, the reader compensates for it)
- Word recognition model
- Simple view of reading model
- Dual coding model (two codes/languages)
- Psycholinguistic guessing game: activate prior knowledge; the student is given an aim for reading
Reading strategies:
- Specifying a purpose for reading
- Planning what to do and what steps to take
- Previewing the text
- Predicting the contents of the text or a section of the text
- Reflecting on what has been learned from the text
- Checking predictions
- Posing questions about the text
- Finding answers to posed questions
- Connecting the text to background knowledge
- Summarizing information
- Making inferences
- Connecting one part of the text to another
- Paying attention to text structure
- Rereading
- Guessing the meaning of a new word from...
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118831.16/warc/CC-MAIN-20170423031158-00633-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,215
6
https://www.technolaty.com/custom-kernel-for-xiaomi-mi-8-lite/
code
Xiaomi Mi 8 Lite has received an excellent response from consumers, and a good community has built up on XDA, where developers are taking the initiative to build custom ROMs. Now you can also download the best custom kernels for Xiaomi Mi 8 Lite to enhance your device and make it more powerful. Currently, two custom kernels are available for Xiaomi Mi 8 Lite that can be used with MIUI and AOSP-based custom ROMs. Both can be flashed on any ROM running Android 9 Pie. You may also consider flashing an Android 10 custom ROM, testing the following kernels, and reporting back to us in the comments section. Do not forget to take a backup of your current kernel.
What Is a Kernel?
Android devices use the Linux kernel, but every phone ships its own variant. Linux kernel maintainers keep everything tidy and accessible, contributors (such as Google) add or change things to meet their needs, and the people creating the hardware contribute too, since they must come up with hardware drivers for the parts they're using, for the kernel version they're using. This is why independent Android programmers and hackers take time to port new releases to older devices and get everything working. Drivers written for the kernel version one phone uses might not work with another kernel version. This matters because the kernel's primary function is to control the hardware. It is a great deal of source code, with many more build-time options than you might imagine, but ultimately it is only the intermediary between the hardware and the software. It may not sound straightforward, but it is. It's also pretty standard computer logic: an action of some kind is generated for every single event, and based on that action, things happen to the running applications. Without a kernel to take and send info, programmers would have to write code for every event for every piece of hardware inside your device.
Apps communicate with the hardware through the Android APIs, and hardware developers only need to make their device hardware communicate with the kernel.
Advantages and Disadvantages of a Custom Kernel
- A custom kernel can provide extra features and configurations.
- Developers can add new features from other smartphones' kernels that do not exist for your device.
- It helps you manage CPU power and save battery life.
- You can configure the I/O scheduler and lots of other stuff.
- You can control the maximum and minimum CPU frequencies, scaling them up or down with a kernel manager app.
- If you are not careful while using a custom kernel, you can end up harming your device, to the extent of bricking it.
Custom Kernels For Xiaomi Mi 8 Lite
Two Xiaomi Mi 8 Lite (Platina) custom kernels are currently being developed: Acrux Kernel and Fera Kernel. Both support stock MIUI and any AOSP-based custom ROM running on Android 9 Pie.
Acrux Kernel V4.4
ALS is imported cleanly from Xiaomi; it is an EAS kernel with the HMP bits removed and has been updated to the latest CAF. The custom kernel is also upstreamed with mainline Linux, aiming to balance performance and battery backup.
- Latest CAF tag merged
- Xiaomi code cleaned up to stable Android with minimal changes
- The Wi-Fi driver was imported from CAF and maintained
- Energy-aware scheduling
- Developer's own capacity-based energy model included (more battery life and performance compared to any other EM)
- Disabled kernel-wide debugging (debug_fs and debug_kernel removed)
- F2FS rapid GC
- Stone boosting locked to 1
- Compiled with DragonTC 9 (clang with Polly optimizations)
- Klapse and kcal
- Wake up all idle CPUs before suspending (helps with idle drain)
- Turn off sched auto grouping
- Set CPUBW governor to bw_hwmon
- Optimize UFS stack
- Improve camera performance and drain
- Use analogue dimming
- Use system-wide interruptible waits
- Force block requests onto their origin CPU
- mm/ optimizations
- Add dynamic bitclk and fps support to the panel
- Remove some high-priority workqueues from useless things
- Link /dev/random to /dev/random
- Add and refactor make flags
- Fix memory overlaps
- Fix cld-3.0 wifi bug where the signal would be 0 in the QS/status bar
- VDSO32 enabled
- Speed up EXT4
- Omit useless dibs
- Use 100Hz timer
- Remove all debugging and tracing drivers
- Debug kernel and debug fs gone
The download link contains previous builds as well and comes directly from the developer Nysadev on XDA. Find the latest version by sorting by upload date or checking the kernel's date stamp.
Acrux Kernel – Download
Fera Kernel is a mix of Acrux Kernel as the base and stock kernel configurations, with a few changes on top by developer Feravolt. The custom kernel has been tested on the latest stable MIUI v10.3 and works on all other custom ROMs based on Android 9 Pie. The main aim of this kernel is to provide maximum stability to the Mi 8 Lite.
- Used clean 4.4.184 kernel version with fixed wifi driver
- Compiled with the latest GCC 10
- Compiled with ultimate kernel code optimization flags
- Updated the kernel base with the latest changes for sdm660 from CAF MSM-4.4 (framebuffer, camera, ufs, etc.)
- GPU overclock to 700MHz
- The kernel version upstreamed to 4.4.196
- Ported some libs from the 5.x kernel
- Added LZ4 support
- Disabled all kernel tracing
- Power-save CPU workqueues
- RAM bus freqs will be raised quicker on high loads
- Activated multi-threading for crypto routines
- Faster VM allocation
- Disabled unneeded task stats
- Reduced log buffer memory
- Enabled all cgroups for better system handling
- Heavily reduced various kernel debugging
- Disabled core dumps
- Disabled module signature verification
- More power-saving kernel
- Disabled GPU wake on screen touch
- Updated WIFI driver to the latest version (184.108.40.206N)
- Enabled clean-cache framework
- Enabled front-swap framework
- Enabled ZSwap & zbud
- The swap service can run only on small cores
- Cleaned kernel cmdline
- Disabled unneeded drivers
- Enabled MSM HW random generator
- Increased audio buffer size
- Enabled the FS cache framework
- Enabled NTFS (rw) support
- Enabled Samba (SMB) network file system support
- Optimized crypto routines
- Added, activated & set the latest version of the 'anxiety' I/O scheduler as default.
- Activated & set as default 'westwood' TCP congestion control
- Updated MSM-Adreno-TZ GPU governor to the latest version
- Tuned simple-on-demand GPU governor for better performance
- Added & activated Adreno idler logic
- Added Adreno boost logic
- Disabled CRC checks while booting
- Tuned GPU idle timeout
- Undervolted GPU
- Set minimal GPU freq as default
- Disabled a few more unneeded drivers
- Overclocked CPU – BIG cores to 2.5GHz
- Slightly undervolted CPU
- Slightly undervolted display
- Increased CMA memory size
- Increased thermal polling interval
- Added TTL fixup support
- Super-fast random entropy generator
- Tuned VM tweaks
- I2C/SPI bus max freq overclock from 500 to 800MHz
- Updated GPU KGSL driver
- Enhanced arm64 NEON instructions
- Optimized the interactive CPU governor
- Added new FS fat v2.1.8
- Meltdown CPU protection is off (it slows down the CPU, and nobody will hack you over wifi while you use an updated & secure browser)
The current version of Fera Kernel is v7, and the developer updates the kernel frequently. Please bookmark this page or leave a comment to get the latest updates on this kernel when it is released.
Fera Kernel – Download
Instructions to Flash
To flash a custom kernel on the Xiaomi Mi 8 Lite, you must have an unlocked bootloader and TWRP installed.
- Download and copy the custom kernel to the internal storage of your device
- Reboot to TWRP (turn off your device, then press and hold Volume Up + Power until you see the MI logo)
- Go to Install, select the custom kernel zip, and swipe to flash
- It will ask you to wipe the cache/Dalvik cache. You may wipe it and reboot to the system.
Conclusion: Embracing the Custom Kernel Journey
In conclusion, the world of custom kernels opens up possibilities for Xiaomi Mi 8 Lite users. From overcoming the perplexities of installation to unlocking extra performance, this modification is a game-changer.
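If you prefer to drive the flashing steps above over USB rather than through the TWRP touch UI, they can be scripted. This is a sketch only: the zip name is a placeholder, `adb` must be on your PATH, and the `twrp` command assumes an official TWRP build. Flash at your own risk.

```shell
#!/bin/sh
# Sketch: verify a kernel zip exists and print its checksum before flashing,
# so you can confirm the copy pushed to the phone is not corrupted.

preflight() {
    zip="$1"
    if [ ! -f "$zip" ]; then
        echo "error: $zip not found" >&2
        return 1
    fi
    sum=$(sha256sum "$zip" | cut -d' ' -f1)
    echo "flash candidate: $zip (sha256 $sum)"
}

# Demonstration with a stand-in file (replace with the real kernel zip):
printf 'placeholder' > /tmp/fera-kernel-v7.zip
preflight /tmp/fera-kernel-v7.zip

# With USB debugging enabled and the phone booted to TWRP, the flash itself:
#   adb push fera-kernel-v7.zip /sdcard/
#   adb shell twrp install /sdcard/fera-kernel-v7.zip
#   adb shell twrp wipe cache
#   adb reboot
```

The checksum step is the useful part: run `sha256sum` again on the pushed copy (`adb shell sha256sum /sdcard/...`) and compare before flashing.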
FAQs: Unraveling Common Queries Can I revert to the stock kernel if I encounter issues with a custom one? Absolutely. We guide you on reverting safely, ensuring your Mi 8 Lite remains stable. Do custom kernels affect warranty status? We clarify the impact on warranty and how to navigate potential concerns. Can I customize specific aspects without affecting overall performance? Yes, we provide insights into tailoring your experience without compromising performance. How frequently should I update my custom kernel? Stay informed about optimal update frequencies to maintain peak performance.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819668.74/warc/CC-MAIN-20240424143432-20240424173432-00325.warc.gz
CC-MAIN-2024-18
8,954
128
http://math.hws.edu/lasseter/teaching/S15/CPSC371/assign/index.html
code
- For Monday, January 26
  - Read Ramsey, 2.1 - 2.3.
  - Write a couple of paragraphs reflecting on what you find most difficult and most clear in these sections. The spirit here is that of dialogue with me and with others in the class. I am not looking for a "right answer" here, but rather an honest self-assessment of what you've learned from this first encounter with our material. What ideas make you want to know more? Were there concepts that remained inaccessible?

"The 'build and learn by doing' is big for me as I often understand things better after I have worked through a few examples."
"I am having trouble understanding the big picture that the author is trying to communicate ... I am still uncertain what the Impcore language is used for and how it is used to build programming languages."
"ASTs provide a clear illustration of a programming language's structure ... I want to know more about how ASTs relate to the bigger picture."
"I am not sure if I understand how Impcore has three distinct environments."
- For Wednesday, January 28
  - Read Ramsey, 2.4 (though we'll really dive into that material on Friday).
  - Do any four of Problems 3–9 (you can certainly do more!), and also do Problem 10.
  - I will send you the source code package for the Impcore interpreter by email. Please do not make this available on the wider internet.
- For Friday, January 30
  - You should have completed reading through 2.6, though it's fine to skip 2.5 for now.
  - Do any two of Problems 13, 15, 18, 20, and 21. It's fine if you want to work with a classmate on these.

"Operational semantics is a bit confusing, but I am guessing it is similar if not equivalent to interpreting the meaning of some outline of abstract syntax." [Close. It's more a way of explaining in a mathematically rigorous fashion what a correct interpreter for a program must do.]
- For Friday, February 20
  - Complete reading of sections 3.1 - 3.11.
  - Do Problems 2, 9, and 10 from Chapter 3.
- For Wednesday, April 1
  - Read Section 6.1.
  - Compare the type rules for Typed Impcore with the operational semantics from Chapter 2. Give your best explanation of the correspondence between type rules and operational semantics rules. This can be with a sentence or two, with a diagram, or some other mechanism of your devising (just pick something that you would use to explain it to someone else).
- For Wednesday, April 8
  - Read Sections 6.2 - 6.5.
  - In Monday's class, we did a workshop on proof techniques for reasoning about a language's type system and operational semantics. We finished with a discussion of the challenges in formulating a theorem in a way that is amenable to formal proof, particularly with the technique of rule induction, and in relating the inductive hypothesis (on subexpressions, or, equivalently, on expressions with smaller abstract syntax trees or shorter proof trees in the derivation of their types) to the theorem you're trying to prove.
  - For Wednesday, April 8, you were asked to construct precise statements of the theorems in Problems 2–4 (Chapter 6), then construct versions of inductive hypotheses that are suitable for proving these theorems through rule induction.
- For Friday, April 17
  - Read Section 6.6, at least through 6.6.3, and preferably all of it.
  - On pages 257 and 258, Ramsey gives five rules for kinds. For each one, give an example from a real programming language (your choice, but not τμScheme) that corresponds to the concept the rule is expressing.
- For Wednesday, April 28
  - Read Michael Schwartzbach, "Polymorphic Type Inference", which is available from the Readings page.
  - In class yesterday, I showed informally the way that substitutions are calculated during the type inference process, in order to compute the principal type scheme of an expression.
- Here's an example in the spirit of yesterday's work:
- This is much more formal than anything we did yesterday, and you shouldn't focus too much on perceived technical details (several are missing). The important thing to grasp is the way that substitutions of type variables for types arise from the constraints discovered in the process of type-checking the expression (lines 3, 7, 8, 11, 13, 14, 16, and 20).
- In preparation for class tomorrow, I want you to think about this process on your own. Using one of the following examples, show the substitutions that arise in the derivation of the expression's type. This is all legal ML code, so you can check the types against your manual work. You don't have to get every constraint, and you don't have to use any particular evaluation order. The goal here is to build some intuition about the central task of Hindley-Milner type inference, namely the way that substitutions support the computation of an expression's principal type scheme.
- fun filter f nil = nil
    | filter f (x::xs) = if (f x) then x::(filter f xs) else (filter f xs);
  Hint: think about what the type of "if" must be, based on your type-checking work above.
- fun applyAll (nil, x) = nil
    | applyAll (f::fs, x) = (f x)::(applyAll (fs, x));
- fun fold_list (f, nil, init) = init
    | fold_list (f, (x::xs), init) = f (x, (fold_list (f, xs, init)));
- fun map_fold (f, ls) = fold_list ((fn (x,v) => (f x)::v), ls, nil);
  Hint: if you don't get the same type as that of "regular" map(), something is wrong.

fun map f nil = nil
  | map f (x::xs) = (f x)::(map f xs);

 1. map: a1 -> a2 -> d (defn., shortcutting a couple of substitution steps)
 2. nil: a3 list (defn.)
 3. d = a3 list (1,2)
 4. x: a4 (var. intro)
 5. xs: a5 (var. intro)
 6. (::): a6 * (a6 list) -> (a6 list) (defn.)
 7. a4 = a6 (4,6)
 8. a5 = a6 list (5,6)
 9. (x::xs): a6 list (6,4,5)
10. (x::xs): a2 (1, defn.)
11. a2 = a6 list (1,9,10)
12. f: a -> b (defn. -- "(f x)" makes this clear)
13. a1 = a -> b (1,12)
14. a = a4 (4,12, "(f x)")
15. (x::xs): a list (7,13)
16. a2 = a list (1,10,15)
17. (f x): b (4,11, defn. of function application)
18. ((f x)::(map f xs)): b list (16, defn.)
19. ((f x)::(map f xs)): d (defn.)
20. d = b list (17,18)
21. map: (a -> b) -> a list -> b list (1,12,15,20)
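The substitution bookkeeping in the map derivation above can be made concrete with a tiny unifier. This is a toy in Python, not the course's ML interpreter: types are tuples, there is no occurs check, and the constraints fed in mirror a few of the derivation's numbered steps (their variable names a1, a2, d are reused for legibility).

```python
# Toy illustration of how substitutions drive Hindley-Milner inference.
# Type variables are strings like "a1"; compound types are tuples such as
# ("->", t1, t2) for functions and ("list", t) for lists.

def apply(subst, t):
    """Apply a substitution (dict: var -> type) to a type, recursively.
    No occurs check -- fine for this sketch, unsafe in general."""
    if isinstance(t, str):
        return apply(subst, subst[t]) if t in subst else t
    return (t[0],) + tuple(apply(subst, arg) for arg in t[1:])

def unify(t1, t2, subst):
    """Extend subst so that t1 and t2 become equal, or raise TypeError."""
    t1, t2 = apply(subst, t1), apply(subst, t2)
    if t1 == t2:
        return subst
    if isinstance(t1, str):
        subst[t1] = t2          # record the constraint t1 = t2
        return subst
    if isinstance(t2, str):
        return unify(t2, t1, subst)
    if t1[0] != t2[0] or len(t1) != len(t2):
        raise TypeError(f"cannot unify {t1} with {t2}")
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
    return subst

# Feed in constraints echoing the derivation: map : a1 -> a2 -> d, with
# d = a3 list (step 3), a1 = a -> b (step 13), a2 = a list (step 16),
# and a3 = b (steps 18-20 give d = b list).
s = {}
s = unify("d", ("list", "a3"), s)
s = unify("a1", ("->", "a", "b"), s)
s = unify("a2", ("list", "a"), s)
s = unify("a3", "b", s)
map_type = apply(s, ("->", "a1", ("->", "a2", "d")))
# map_type is now (a -> b) -> a list -> b list, matching step 21.
```

The essential point is the same as in the hand derivation: each constraint extends the substitution, and applying the final substitution to the original type skeleton yields the principal type.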
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948593526.80/warc/CC-MAIN-20171217054825-20171217080825-00052.warc.gz
CC-MAIN-2017-51
6,078
39
https://grokbase.com/t/hadoop/common-user/091b6qqgey/raid-vs-jbod
code
How well does Hadoop handle multiple independent disks per node? I have a cluster with 4 identical disks per node. I plan to use one disk for OS and temporary storage, and dedicate the other three to HDFS. Our IT folks have some disagreement as to whether the three disks should be striped, or treated by HDFS as three independent disks. Could someone with more HDFS experience comment on the relative advantages and disadvantages of each approach? Here are some of my thoughts. It's a bit easier to manage a 3-disk striped partition, and we wouldn't have to worry about balancing files between them. Single-file I/O should be considerably faster. On the other hand, I would expect typical use to involve multiple file reads or writes happening simultaneously. I would expect Hadoop to be able to manage reads and writes to and from the disks independently. Managing 3 streams to 3 independent devices would likely result in less disk head movement, and therefore better performance. I would expect Hadoop to balance load between the disks fairly well. Availability doesn't really differentiate the two approaches - if a single disk dies, the striped array would go down, but all its data should be replicated on another datanode anyway. And besides, I understand that the datanode will shut down a node even if only one of 3 independent disks crashes. So - does anyone want to agree or disagree with these thoughts? Anyone have any other ideas, or - better - benchmarks and experience with layouts like these two? Grokbase › Groups › Hadoop › common-user › January 2009
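For the independent-disk (JBOD) layout the poster describes, HDFS of that era (0.x/1.x) accepts a comma-separated list of data directories and round-robins new blocks across them. The mount points below are made up for illustration:

```xml
<!-- hdfs-site.xml: one entry per independently mounted disk.
     The datanode spreads block writes across these directories. -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/disk1/hdfs/data,/mnt/disk2/hdfs/data,/mnt/disk3/hdfs/data</value>
</property>
```

Later Hadoop releases renamed this property to dfs.datanode.data.dir and added dfs.datanode.failed.volumes.tolerated, which directly addresses the poster's concern that the loss of one of three independent disks shuts down the whole datanode.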
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649177.24/warc/CC-MAIN-20230603064842-20230603094842-00365.warc.gz
CC-MAIN-2023-23
1,571
24
https://meta.stackoverflow.com/questions/337558/answering-your-own-question-within-the-question?noredirect=1
code
I just came across this question. The asker has found the answer themselves and edited the question with the solution. Usually under these circumstances, the person would add an actual answer to their question. I am unsure how something like this should be handled. My only thought was that I could edit the question to remove the answer and add it as an actual answer for the question as a community wiki. My hesitation in doing this is that if I came across this question before the answer edit, I most likely would have attempted to flag it as low quality because the code that they have there is invalid. So, I was just wondering what the broader community thinks should be done in a scenario like this.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00868.warc.gz
CC-MAIN-2023-50
707
4
http://www.bio.net/bionet/mm/neur-sci/1997-November/030026.html
code
Zen Faulkes wrote:
> > If you don't mind answering my next question; what can C. elegans do
> > with regard to intelligence? Are those 300 neurons just enough for a
> > random movement, or can the organism do something more, such as simple
> > decisions?
>
>> C. elegans can learn and remember quite well, thank you. I'd recommend
I am pretty new to the topic, so please correct me if I am wrong. Isn't it true that a reflex action could in principle be accomplished by relatively few neurons (in biological systems), and that LTP has been traced to single synaptic junctions? With the above in mind, wouldn't it be overkill to train a neural network of 300 neurons to do a single simple task? Also, my limited experience with neural networks tells me that for classification tasks you don't need so many neurons.
Rushi Bhatt (Comp Sci)
209, Campus Ave 226, Atanasoff Hall
Ames IA 50014 Iowa State University
Phone(R): (515)296-2343 Ames, IA 50011
http://www.cs.iastate.edu/~rushi
* By having this crazy random design it was almost sure to work no matter how you built it. - Minsky on learning machines
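The poster's intuition, that a simple classification task needs only a handful of units, is easy to demonstrate. The sketch below is a single artificial "neuron" (two weights and a bias) learning logical AND with the classic perceptron update rule; the learning rate and epoch count are arbitrary choices for the illustration, and nothing here models C. elegans.

```python
# Toy perceptron: one unit with two weights and a bias learning logical AND.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out           # classic perceptron error signal
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Since AND is linearly separable, the perceptron convergence theorem guarantees this single unit settles on a correct decision boundary, which supports the point that 300 neurons would be far more than a single simple task requires.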
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369721.76/warc/CC-MAIN-20210305030131-20210305060131-00514.warc.gz
CC-MAIN-2021-10
1,114
21
http://reviews.cnet.com/8301-13727_7-10327482-263.html
code
BBEdit 8.5.1: popular HTML and text editing tool. The new release adds "Collapse All Folds" to the View menu, above "Expand All Folds". Choosing this command collapses all of the top-level fold ranges that appear in the fold gutter (but not any fold ranges they contain). Several other new features and fixes are also included. MiniSwitch 1.0.1: switch prefs and files for user-specified apps. The new release adds a "Current User" badge in the main window. You Control 1.4.1 b3: collection of menu utilities. The new release fixes an issue in which data for some stocks was not being displayed. Transmit 3.5.5: FTP / SFTP / WebDAV client with many advanced features. The new release fixes a possible crash with failed SFTP logins and another crash when listing files on a VMS server. Symantec NAV/SAV virus def Oct 11: for 9.x and 10.x. Virex 7 DAT 061011: definition (DAT) and engine update for v7.x. OpenMenu 0.97b: create customizable contextual Finder menus. The new release fixes a bug in the helper application OpenMenu X.app that sometimes caused it to stall when the menu item "Running Applications" was chosen.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382584/warc/CC-MAIN-20130516092622-00024-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,120
7