| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
https://www.leolist.cc/jobs/general-labor/quebec/montreal_developer_expert_job_montreal_quebec-1897435
|
code
|
Work Area: Software:Design and Development
Expected Travel: 0 - 10
Career Status: Professional
Employment Type: Regular Full Time
As market leader in enterprise application software, SAP helps companies of all sizes and industries innovate through simplification. From the back office to the boardroom, warehouse to storefront, on premise to cloud, desktop to mobile device - SAP empowers people and organizations to work together more efficiently and use business insight more effectively to stay ahead of the competition. SAP applications and services enable customers to operate profitably, adapt continuously, and grow sustainably.
PURPOSE AND OBJECTIVES
SAP Hybris solutions provide omnichannel customer engagement and commerce software that allows organizations to build up a contextual understanding of their customers in real time. The solutions deliver a more impactful, relevant customer experience and help sell more goods, services and digital content across every touch point, channel and device. Through their state-of-the-art customer data management, context-driven marketing tools and unified commerce processes, SAP Hybris solutions have helped some of the world's leading organizations attract, retain and grow a profitable customer base.
SAP Hybris software for customer engagement and commerce provides organizations with the foundation, framework and business tools to create a holistic customer view across channels, simplify customer engagement and solve complex business problems.
Want to create massively scalable cloud solutions in the coolest functional and OO Languages?
Our coders working on YaaS choose the languages and frameworks they need to produce the world's leading business solutions - including Java, Scala, Node.js, Akka, Spring and more.
We also actively encourage and fund Certifications, Meetups, Hackathons, Dojos and anything else we can think of to ensure our coders stay at the cutting edge of coolness.
Read firsthand the experiences of your future peers here.
EXPECTATIONS AND TASKS
- Be a proactive technical "engine", influencing our Development Team and the whole company with innovative and creative ideas
- Design and architect elegant and scalable microservice solutions in clean and tested code (we like TDD) that suits our high hybris standards and quality requirements, using state-of-the-art technologies
- Ensure fully automated testing and release processes
- Be an active member of one of our self-empowered teams, producing software according to agile principles and mentoring younger colleagues
- Be part of a global team in a collaborative and fast-paced environment
- Contribute to the creation of the most amazing cloud platform in the world
- Strive to provide an awesome and consistent experience to the users of our APIs
- Take ownership from the design of a feature, through the first lines of code, to how it performs in production (You build it, you run it)
EDUCATION AND QUALIFICATIONS / SKILLS AND COMPETENCIES
Required skills:
- Bachelor's degree in computer science, software engineering or equivalent, or be a recognized expert in the field
- Strong interest in reactive and functional programming
- Ability to adapt quickly to changing technologies, frameworks, etc.
- Good communication skills and fluency in English
- Ability to explain technical problems and understand business requirements
- Ability and willingness to work as part of a self-organizing team
- Be open-minded toward new cloud-oriented technologies such as Docker, AWS, CloudFoundry...
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719754.86/warc/CC-MAIN-20161020183839-00198-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 3,852
| 29
|
http://news.fintech.io/post/102dzsg/google-announces-tensorflow-1-0
|
code
|
Google has announced a release candidate for a fully functioning version 1.0 of its open source deep learning framework TensorFlow. The update makes TensorFlow development easier for Python and Java users and improves debugging, among other improvements to the framework's gallery of machine learning functions.
Since Python is one of the biggest platforms for building and working with machine learning applications, it's only fitting that TensorFlow 1.0 focuses on improving Python interactions. The TensorFlow Python API has been upgraded so that the syntax and metaphors TensorFlow uses are a better match for Python's own, offering better consistency between the two. The bad news is that those changes are guaranteed to break existing Python applications. TensorFlow's developers have released a script to automatically upgrade old-style TensorFlow API scripts to the new format, but the script can't fix everything; you may still need to tweak scripts manually. TensorFlow is now available in a Docker image that's compatible with Python 3, and for all Python users, TensorFlow can now be installed via pip, Python's native package manager.
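The core of such an upgrade script is mechanical renaming. A toy sketch of that idea (not the real upgrade tool, which also handles argument reorders and much more; the rename table below lists just three of the TensorFlow 1.0 changes):

```python
import re

# A small, illustrative subset of the TF 1.0 op renames.
RENAMES = {
    r"\btf\.mul\b": "tf.multiply",
    r"\btf\.sub\b": "tf.subtract",
    r"\btf\.neg\b": "tf.negative",
}

def upgrade_source(src: str) -> str:
    """Apply name-for-name rewrites; manual fixes may still be needed."""
    for old, new in RENAMES.items():
        src = re.sub(old, new, src)
    return src

print(upgrade_source("y = tf.mul(a, tf.neg(b))"))
# y = tf.multiply(a, tf.negative(b))
```

The word-boundary anchors keep a pattern like `tf.mul` from matching inside an already-new name like `tf.multiply`.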
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583408.93/warc/CC-MAIN-20211016013436-20211016043436-00198.warc.gz
|
CC-MAIN-2021-43
| 1,145
| 2
|
https://www.musicjinni.com/uJLQ89VoQfe/TF2-I-m-Dumb-CORRECT-way-to-use-Air-Strike.html
|
code
|
TF2: I'm Dumb (CORRECT way to use Air Strike)
TF2 H-O-R-S-E! #2 (w/ Uncle Dane, Raja, & ScottJAw)
[SFM] The Pybro
TF2 - Challenge Mode: C.A.P.P.E.R Only! (I'm a cheater)
TF2 Air 2 - REACTION
TF2 - Spot the Hacker (Part 1? Maybe?)
TF2: New Tomislav! Heavy is fun again! (Gun Mettle Update)
Top 10 TF2 plays - October 2015
TF2 Multiplied By 10! Crazy Weapons, Custom Game Mode.
TF2 - Top 10 "Overpowered" Weapons
1V1 ME, BRO (ft. ArraySeven)
Wish I Had an Unusual (Song)
TF2: Top 5 WORST Chokepoints
HOW TO COMMAND AN ARMY IN TF2 (Trolling)
Airborne Attack! Tryhard Tuesday - Gaben Please!
TF2: Almost Airshots (w/ Raja)
TF2: Homewrecker Pyro is INSANE! (w/ Uncle Dane)
TF2: Air Strike
TF2 - Top 10 Speed Boost Weapons!
TF2 H-O-R-S-E! #6 (Least Played Class w/ Uncle Dane, ScottJAw, & Raja)
TF2: How to even the odds [FUN]
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513804.32/warc/CC-MAIN-20181021073314-20181021094814-00064.warc.gz
|
CC-MAIN-2018-43
| 922
| 24
|
https://forums.adobe.com/thread/476513
|
code
|
Date: 2009-08-12 13:30:12 -0700 (Wed, 12 Aug 2009)
http://bugs.adobe.com/jira/browse/SDK-20811 - VSlider doesn’t respect constraints
Here is my initial assessment of the problem:
“I've managed to get past the initial issues of the bug, which involved the difference between establishing a measured size and a minimum size. We were using minWidth and minHeight to set both of these values. But really, I want two different values for measured and minimum size. I solved that particular problem with local changes to my skins.
However, I'm still running into a problem. Specifically, the minimum size of the slider is influenced by the initial position of the thumb. For example, a HSlider has a measured width of 100 and a minimum width of 33. If the initial value of the slider puts the thumb at the right end, then the measured and minimum width end up with a value of 100. BasicLayout takes into account the x position of the thumb during the initial measurement.
Note that this minimum width remains the same even if the thumb position or slider width has changed. The reason is that changing these values doesn't trigger measurement.
Ideally, I want the thumb's minor axis position to not affect measurement or layout. I do want the thumb's dimensions to affect measurement and layout. And I want the thumb to get laid out with regard to its dimensions (I want the thumb to stretch to fit the size of the track's minor axis). Setting includeInLayout=false doesn't fit these requirements.”
The solution is twofold:
I added a height/minHeight and width/minWidth to the slider skins. In the skins, I also override the measure() function. In this override, I temporarily move the thumb so that its minor axis position doesn’t affect measurement.
QE notes: Add tests for Slider smaller than default width/height of 100
Doc notes: None
Reviewer: Glenn, Evtim
Tests run: Slider
Is noteworthy for integration: No
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376830479.82/warc/CC-MAIN-20181219025453-20181219051453-00208.warc.gz
|
CC-MAIN-2018-51
| 1,963
| 14
|
https://news.microsoft.com/2006/04/05/microsoft-announces-investments-in-drm-to-drive-new-multimedia-commerce-solutions-for-the-wireless-industry/
|
code
|
LAS VEGAS — April 5, 2006 — Today at CTIA WIRELESS 2006, Microsoft Corp. announced it will make significant investments in its digital rights management (DRM) technologies to enable a new offering and drive scenarios that support the wireless industry. As wireless delivery of content to mobile handsets continues to grow at a rapid pace, this commitment of resources and manpower will help enable next-generation mobile entertainment scenarios for consumers.
“We’re responding to our wireless partners around the world who are asking for a solution to enable new scenarios in the industry,” said Kevin Johnson, co-president of the Platforms & Services Division at Microsoft. “We want to give consumers what they want —seamless experiences with premium content on a wide range of mobile devices.”
The more than 800 million mobile handsets sold worldwide each year represent a largely untapped market for digital entertainment. Microsoft’s commitment to capitalizing on this opportunity is the result of ongoing discussions with many of the wireless industry’s largest firms.
Microsoft® Windows Media® Digital Rights Management (DRM) is broadly licensed and deployed by more than 100 content services and on hundreds of devices. The platform helps protect and securely deliver content for playback on computers, mobile devices and portable devices. It supports a wide range of business models that include download and play, subscription, and video on demand, and enables device manufacturers to directly acquire licenses on their handsets. The breadth of scenarios supported by Windows Media DRM directly correlates to its status as the most widely used DRM system worldwide. The platform will serve as a key building block to enable new and innovative scenarios for mobile content delivery — an important request of wireless industry leaders.
“We expect Microsoft’s commitment will accelerate deployment of many services that carriers see as important for the next generation of wireless communications,” said Jim Ryan, vice president of data services at Cingular Wireless. “Microsoft’s digital media expertise, applied to wireless in a way that focuses on the needs of the carriers, is a very positive step for our industry and consumers alike.”
“With the convergence of the wireless and entertainment industries, Motorola continues to drive new multimedia technologies and business models that enable seamless connectivity for our customers,” said Chris White, senior director of global product marketing for the music category of Motorola Inc. “Microsoft is stepping up to support this vision further with ‘anywhere everywhere’ protected digital bits.”
Founded in 1975, Microsoft (Nasdaq “MSFT”) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.
Note to editors: If you are interested in viewing additional information on Microsoft, please visit the Microsoft Web page at http://www.microsoft.com/presspass on Microsoft’s corporate information pages. Web links, telephone numbers and titles were correct at time of publication, but may since have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://www.microsoft.com/presspass/contactpr.mspx.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00305.warc.gz
|
CC-MAIN-2023-50
| 3,374
| 8
|
https://www.homeownershub.com/maintenance/treadmill-repair-273528-.htm
|
code
|
I have a Healthrider Treadmill 900 HRC that is 4 years old. Just about
everything (belt, platform, drive roller) has been replaced.
HERE IS THE PROBLEM.
It makes a sound when the left foot strikes the platform.
It is not like a rubbing sound. It sounds metallic.
I checked if the board is hitting something- No.
I checked if the belt is hitting something- No.
I checked if the board is cracked. No. This is actually a new board.
I checked and rechecked.
The sound comes when the board is pushed by the foot (on the left
side -where the drive motor is).
You can also create the sound by pushing with hand in that location.
I suspect that the metallic frame has a crack in it. Has anyone faced
this problem? How do I check for this, and how do I fix it?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887054.15/warc/CC-MAIN-20180118012249-20180118032249-00195.warc.gz
|
CC-MAIN-2018-05
| 730
| 14
|
https://nerdjunkie.com/performance-analysis-in-linux-continued-when-performance-really-matters/
|
code
|
By Gabriel Krisman Bertazi, Software Engineer at Collabora.
This blog post is based on the talk I gave at the Open Source Summit North America 2017 in Los Angeles. Let me start by thanking my employer Collabora, for sponsoring my trip to LA.
Last time I wrote about Performance Assessment, I discussed how an apparently naive code snippet can hide major performance drawbacks. In that example, the issue was caused by the randomness of the conditional branch direction, triggered by our unsorted vector, which really confused the Branch Predictor inside the processor.
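That earlier experiment can be sketched roughly as follows (an illustrative benchmark, not the original code from that post; in a compiled language the sorted pass is typically far faster because the branch direction becomes predictable, while in CPython the interpreter overhead may mask the effect):

```python
import random
import timeit

# Sum the elements above a threshold in an unsorted vs. a sorted vector.
data = [random.randrange(256) for _ in range(100_000)]
sorted_data = sorted(data)

def sum_above(xs, threshold=128):
    total = 0
    for x in xs:
        if x >= threshold:  # the branch whose direction the CPU predicts
            total += x
    return total

# Both compute the same sum; only the branch pattern differs.
print("unsorted:", timeit.timeit(lambda: sum_above(data), number=10))
print("sorted:  ", timeit.timeit(lambda: sum_above(sorted_data), number=10))
```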
An important thing to mention before we start is that performance issues arise in many forms and may have several root causes. While in this series I have focused on processor corner cases, those are in fact a tiny sample of how things can go wrong for performance. Many other factors matter, particularly well-thought-out algorithms and good hardware. Without a well-crafted algorithm, there is no compiler optimization or quick hack that can improve the situation.
In this post, I will show one more example of how easy it is to disrupt performance of a modern CPU, and also run a quick discussion on why performance matters – as well as present a few cases where it shouldn’t matter.
If you have any questions, feel free to start a discussion in the Comments section below and I will do my best to follow up on your question.
CPU Complexity is continuously rising
Every year, new generations of CPUs and GPUs hit the market carrying an ever-increasing count of transistors inside their enclosures, as shown by the graph below depicting the famous Moore's law. While the metric is not perfect in itself, it is a fair indication of the steady growth of complexity inside our integrated circuits.
Figure 1: © Wgsimon. Licensed under CC-BY-SA 3.0 unported.
Much of this additional complexity in circuitry comes in the form of specialized hardware logic, whose main goal is to explore common patterns in data and code, in order to maximize a specific performance metric, like execution time or power saving. Mechanisms like Data and Instruction caches, prefetch units, processor pipelines and branch predictors are all examples of such hardware. In fact, multiple levels of data and instruction caches are so important for the performance of a system, that they are usually advertised in high caps when a new processor hits the market.
While all these mechanisms are tailored to provide good performance for the common case of programming and common data patterns, there are always cases where an oblivious programmer can end up hitting the corner case of such mechanisms, and not only write code which is unable to benefit from them, but also code which executes way worse than if there were no optimization mechanism at all.
As a general rule, compilers are increasingly great at detecting and modifying code to benefit from the CPU architecture, but there will always be cases where they won’t be able to detect bad patterns and modify the code. In those cases, there is no replacement for a capable programmer who understands how the machine is designed, and who can adjust the algorithm to benefit from its design.
When does performance really matter?
The first reaction of an inexperienced developer after learning about some of the architectural issues that affect performance, might be to start profiling everything he can get his hands on, to obtain the absolute maximum capability of his expensive new hardware. This approach is not only misleading, but an actual waste of time.
In a city that experiences traffic jams every day, there is little point in buying a faster car instead of taking the public bus. In both scenarios, you are going to be stuck in traffic for hours instead of arriving at your destination earlier. The same happens with your programs. Consider an interactive program that performs a task in the background while waiting for user input: there is little point in trying to gain a few cycles by optimizing that task, since the entire system is still limited by the human input, which will always be much, much slower than the machine. In a similar sense, there is little point in trying to speed up the boot time of a machine that almost never reboots, since the reboot cost will be paid only rarely, when a restart is required.
In a very similar sense, the speed-up you gain by recompiling every single program in your computer with the fastest compiler optimizations possible for your machine, like some people like to do, is completely irrelevant, considering the fact that the machine will spend most of the time in an idle state, waiting for the next user input.
What actually makes a difference, and should be the target of every optimization effort, are cases where the workload is so intensive that gaining a few extra cycles very often results in a real increase in the computing done in the long run. This requires, first of all, that the code being optimized is actually on the critical path of performance, meaning that that part of the code is what is holding the rest of the system back. If that is not the case, the gain will be minimal and the effort will be wasted.
Moving back to the reboot example: in a virtualization environment, where new VMs or containers need to be spawned very fast and very often to respond to new service requests, it makes a lot of sense to optimize reboot time. In that case, every microsecond saved at boot time matters to reduce the overall response time of the system.
The corollary of the Ahmdal’s law states just that. It argues that there is little sense in aggressively optimizing a part of the program that executes only a few times, very quickly, instead of optimizing the part that occupies the largest part of the execution time. In another (famous) words, a gain of 10% of time in code that executes 90% of time is much better for the overall performance than a 90% speed up in code that executes only 10% of the time.
Continue reading on Collabora’s blog.
Learn more from Gabriel Krisman Bertazi at Open Source Summit Europe, as he presents “Code Detective: How to Investigate Linux Performance Issues” on Monday, October 23.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195069.35/warc/CC-MAIN-20201128040731-20201128070731-00539.warc.gz
|
CC-MAIN-2020-50
| 6,215
| 21
|
https://mail.python.org/pipermail/tutor/2001-October/009250.html
|
code
|
[Tutor] Now I have a variable-passing problem!
Sean 'Shaleh' Perry
Sat, 13 Oct 2001 19:46:42 -0700 (PDT)
>> Should I even bother or just re-write the body of the if-test to re-prompt
>> without a recursive call?
> It's not too much trouble to correctly recursify it, esp if you've already
> built the function.
the problem with recursion is the case where the user (or whatever) just keeps
sending bad input. Then the recursion gets deeper and deeper, and python will
eventually hiccup or segfault. Also, recursion is more expensive than a loop,
especially if the user/input has many incorrect entries.
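A loop-based version of that re-prompt avoids the unbounded stack growth. A minimal sketch (the helper name and validator here are made up for illustration, not taken from the original thread):

```python
def first_valid(responses, is_valid):
    """Return the first valid response from an iterable of attempts.

    A loop handles arbitrarily many bad inputs in constant stack space,
    where a recursive re-prompt would deepen the call stack on every retry.
    """
    for reply in responses:
        if is_valid(reply):
            return reply
    raise ValueError("no valid input received")

# Simulating a user who types two bad answers, then a good one:
print(first_valid(["abc", "-5", "42"], lambda s: s.isdigit()))  # 42
```

In an interactive program, `responses` would be an endless generator wrapping `input()` rather than a fixed list.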
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865438.16/warc/CC-MAIN-20180623225824-20180624005824-00587.warc.gz
|
CC-MAIN-2018-26
| 604
| 11
|
https://blog.traklight.com/online-anti-cheating-tool-proctorio-uses-dmca-to-silence-critics
|
code
|
If you'll forgive visiting once more a frequent topic of this blog, we have to talk yet again about the DMCA. Specifically, about the ability of seemingly any company to use the DMCA for purposes other than the one it was ostensibly created for, which is to protect copyright on the internet. We've seen the fruits of the maximalist position that most corporations have taken: every video that even makes mention of a product ends up flagged, regardless of the dictates of fair use. It's rote at this point to say that the implementation of the DMCA has been manipulated to the point of near-uselessness, but it's worth saying over and over again, in the hopes something might change.
What's equally true of the abuse of the system is the way in which it can be used to silence critics as well, and while it doesn't rise to the level of a First Amendment violation as it's not the government undertaking these actions, it still flies in the face of the generally accepted principle of being able to take your lumps if you put out a product.
Offending that principle this time is Proctorio, a browser extension designed to prevent cheating in at-home testing, a seemingly important tool for the COVID Age. As reported in Techdirt, a security expert named Erik Johnson downloaded the Proctorio extension and dug into its code, as anyone with the know-how could do, in order to assess how the program really works and how it might be failing. He then tweeted out his findings, along with a link to his further writeup on Pastebin, which included snippets of the code to illustrate his points.
If you've read my other iterations on what are versions of this same story, you probably know what happens next: Proctorio got Johnson's tweets taken down via DMCA requests, and had the Pastebin post memory-holed on the grounds that sharing those parts of the code violated their copyright. As Geigner points out in Techdirt, Proctorio's claim of copyright violation would have some merit were it not a clear case of critique as provided for under fair use. Was Proctorio misapprehending copyright protections and violations, or were they merely looking to take down a post that was less than positive about their product? It's impossible for those of us on the outside to say for certain, but we probably have a reasonable guess.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038092961.47/warc/CC-MAIN-20210416221552-20210417011552-00606.warc.gz
|
CC-MAIN-2021-17
| 2,308
| 4
|
http://elijah.mirecki.com/about/
|
code
|
I am currently a Software Engineer at Amazon. I work mainly with Scala, Spark, and assorted AWS tools. I attended the University of Toronto, and received an HBSc Double Major in Physics and Computer Science. I was President of the Mathematical and Computational Sciences Society in the 2016-2017 academic year. You can visit my LinkedIn page for a full work and academic bio.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00378.warc.gz
|
CC-MAIN-2022-40
| 376
| 1
|
http://www.dzone.com/links/ibm_aix_java_process_size_monitoring.html
|
code
|
In this second post, we create project templates to give your coding a jumpstart.
Luxurious and elegant templates make an assertion of the quality of what you offer. No matter,... more »
Peter was showing the relationship between age and reputation on stack overflow. The... more »
Because it was well-received and extremely practical presentation, I wanted to re-iterate a... more »
While developers struggle to adapt their apps for 64 bits this works seamlessly with Codename... more »
Yesterday, I looked at how to use a single ngRepeat directive, in AngularJS, to render a list... more »
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558067214.90/warc/CC-MAIN-20141017150107-00249-ip-10-16-133-185.ec2.internal.warc.gz
|
CC-MAIN-2014-42
| 596
| 6
|
http://iot.sys-con.com/node/3034507
|
code
|
By Marketwired
March 31, 2014 07:00 AM EDT
PALO ALTO, CA -- (Marketwired) -- 03/31/14 -- Cloudera, a leader in enterprise analytic data management powered by Apache Hadoop, today announced a $900 million round of financing with participation by top tier institutional and strategic investors. This financing round includes the previously-announced $160 million of funding from T. Rowe Price and three other top-tier public market investors, Google Ventures, and an affiliate of MSD Capital, L.P., the private investment arm of Michael Dell and his family, and a significant equity investment by Intel that gives them an 18% share of Cloudera.
"The market opportunity for companies to gain insight and build transformative applications based on Hadoop is tremendous," said Tom Reilly, CEO of Cloudera. "Clearly, demand is accelerating and the market is poised for growth -- for all of the players in this space, and we believe Cloudera will be the company to lead this global shift in extracting value from data. This position of strength and leadership is evidenced by the strong support of public market investors, large institutional investors and now key strategic investors including Intel, who've made sizable and significant contributions to cement our platform offering."
Validating the opportunity for Hadoop and Cloudera
Industry analysts who follow the market share different estimates on the market opportunity, but they agree on one thing, it's significant and growing fast. Many enterprises are re-architecting their data centers, emulating the web-scale companies who pioneered the use of open source software like Hadoop, combined with industry-standard hardware instead of dedicated engineered systems. A few indicators of the growth of big data include these reports:
- Gartner estimates the market for data management infrastructure (database management systems including data warehousing, storage management, BI, ECM and data integration, and related systems) at $74 billion in 2014, growing to $94 billion by 2017.(1)
- IDC predicted the big data technology and services market will grow at a CAGR of 27% from 2012 to 2017, growing to $32.4 billion,(2) and that the Internet of Things will generate 30 billion autonomously connected endpoints.(3)
- Gartner predicted that Big Data would drive $232 billion in IT spending through 2016.(4)
- IDC estimates that the amount of data in the world will grow fifty-fold from 2010 to 2020.(5)
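Growth claims like these can be sanity-checked, since a CAGR and an ending value pin down the implied starting value (a rough sketch; the 2012 base below is back-computed, not a number quoted by IDC):

```python
def implied_base(final_value: float, cagr: float, years: int) -> float:
    """Back out the starting value implied by an ending value and a CAGR."""
    return final_value / (1.0 + cagr) ** years

# IDC: $32.4B in 2017 at a 27% CAGR over 2012-2017 implies a 2012 base of:
print(round(implied_base(32.4, 0.27, 5), 1))  # 9.8 (billion USD)
```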
Cloudera pioneered the commercial Hadoop market, when it was founded in 2008 and was the only company promoting this new architecture. Today most industry analysts who follow big data would agree that Hadoop is the underlying technology behind a growing number of big data projects. Examples include web-scale properties that have pioneered the development and deployment of Hadoop in their data centers, and more frequently by enterprises who are rethinking their data centers. Although enterprises of any size will benefit by becoming more information-driven, today the greatest traction is occurring in large enterprises. They are gathering all their information and then extracting and quantifying those insights to push out in the form of new products and services. Many projects based on Hadoop, the de facto standard for data management, start out small and grow over time.
Within the last year Cloudera brought to market its version of an enterprise data hub (EDH), a reference architecture based on Apache Hadoop surrounded with open source components. The EDH is a platform that plays a critical role in data management. A Cloudera-powered EDH is open source at the core, and has an open architecture which enables ISVs to integrate directly to the platform. That gives customers choice and flexibility. Since its founding, Cloudera has taken the lead to identify or found the projects and components that deliver a core platform and it has developed unique software that adds critical capabilities for security, data management and governance, which are essential for storing, accessing and using data.
"Intel's sizable investment in Cloudera, alongside funding from institutional investors, Google Ventures and MSD Capital, are all indicative of both the very large market opportunity and the leadership position of Cloudera," said Jim Frankola, Chief Financial Officer for Cloudera. "These investments give us significant financial resources to accelerate growth and deliver long-term sustainable value to our customers and partners."
Cloudera will use the funding to: support the previously-announced collaboration agreement with Intel, further drive the enterprise adoption of and innovation in Hadoop, to which it is the largest open source contributor, and promote the enterprise data hub (EDH) market; support geographic expansion into Europe, Asia and now China through Intel's market presence in that region, and expand its services and support capabilities for new open source projects; and scale the field and engineering organizations.
Dean Bradley Osborne Partners and Allen & Company served as financial advisers to Cloudera and assisted the Company in arranging the financing.
The financing is expected to close in the second quarter of 2014, subject to the satisfaction of customary closing conditions, including applicable regulatory requirements.
Cloudera is revolutionizing enterprise data management by offering the first unified Platform for Big Data, an enterprise data hub built on Apache Hadoop. Cloudera offers enterprises one place to store, process and analyze all their data, empowering them to extend the value of existing investments while enabling fundamental new ways to derive value from their data. Only Cloudera offers everything needed on a journey to an enterprise data hub, including software for business critical data challenges such as storage, access, management, analysis, security and search. As the leading educator of Hadoop professionals, Cloudera has trained over 20,000 individuals worldwide. Over 900 partners and a seasoned professional services team help deliver greater time to value. Finally, only Cloudera provides proactive and predictive support to run an enterprise data hub with confidence. Leading organizations in every industry plus top public sector organizations globally run Cloudera in production. www.cloudera.com
Cloudera, Cloudera Platform for Big Data, Cloudera Enterprise Basic Edition, Cloudera Enterprise Flex Edition, Cloudera Enterprise Data Hub Edition and CDH are trademarks or registered trademarks of Cloudera in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.
(1) Gartner Enterprise Software Forecast 2013 Q3 Update
(2) IDC Worldwide Big Data Technology and Services Forecast, 2012-2017, Doc 244979
(3) IDC Worldwide Predictions, 2014
(4) Gartner, October 2012
(5) IDC Digital Universe Study, December 2012
+1 (650) 644-3900 ext. 5907
SYS-CON Events announced today that Ciqada will exhibit at SYS-CON's @ThingsExpo, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. Ciqada™ makes it easy to connect your products to the Internet. By integrating key components - hardware, servers, dashboards, and mobile apps - into an easy-to-use, configurable system, your products can quickly and securely join the internet of things. With remote monitoring, control, and alert messaging capability, you will meet your customers' needs of tomorrow - today! Ciqada. Let your products take flight. For more inform...
Apr. 18, 2015 07:00 AM EDT Reads: 1,490
SYS-CON Events announced today that GENBAND, a leading developer of real time communications software solutions, has been named “Silver Sponsor” of SYS-CON's WebRTC Summit, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. The GENBAND team will be on hand to demonstrate their newest product, Kandy. Kandy is a communications Platform-as-a-Service (PaaS) that enables companies to seamlessly integrate more human communications into their Web and mobile applications - creating more engaging experiences for their customers and boosting collaboration and productiv...
Apr. 18, 2015 06:00 AM EDT Reads: 2,194
SYS-CON Events announced today that BroadSoft, the leading global provider of Unified Communications and Collaboration (UCC) services to operators worldwide, has been named “Gold Sponsor” of SYS-CON's WebRTC Summit, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. BroadSoft is the leading provider of software and services that enable mobile, fixed-line and cable service providers to offer Unified Communications over their Internet Protocol networks. The Company’s core communications platform enables the delivery of a range of enterprise and consumer calling...
Apr. 18, 2015 05:30 AM EDT Reads: 2,129
VoxImplant has announced full WebRTC support in the newest versions of its Android SDK and iOS SDK. The updated SDKs, which enable audio and video calls on mobile devices, are now compatible with the WebRTC standard to allow any mobile app to communicate with WebRTC-enabled browsers, including Google Chrome, Mozilla Firefox, Opera, and, when available, Microsoft Spartan. The WebRTC-updated SDKs represent VoxImplant's continued leadership in simplifying the development of real-time communications (RTC) services for app developers. VoxImplant (built by Zingaya, the real-time communication servi...
Apr. 18, 2015 02:45 AM EDT Reads: 1,773
The IoT Bootcamp is coming to Cloud Expo | @ThingsExpo on June 9-10 at the Javits Center in New York. Registration is now available at http://iotbootcamp.sys-con.com/ Instructor Janakiram MSV previously taught the famously successful Multi-Cloud Bootcamp at Cloud Expo | @ThingsExpo in November in Santa Clara, and is now expanding the focus. Janakiram is the founder and CTO of Get Cloud Ready Consulting, a niche Cloud Migration and Cloud Operations firm that was recently acquired by Aditi Technologies. He is a Microsoft Regional Director for Hyderabad, India, and one of the f...
Apr. 18, 2015 01:00 AM EDT Reads: 863
SYS-CON Events announced today that Optimal Design, an Internet of Things solution provider, will exhibit at SYS-CON's Internet of @ThingsExpo, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. Optimal Design is an award winning product development firm offering industrial design and engineering services to the consumer, medical, and defense markets.
Apr. 17, 2015 05:30 PM EDT Reads: 1,562
SYS-CON Events announced today that Vicom Computer Services, Inc., a provider of technology and service solutions, will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. They are located at booth #427. Vicom Computer Services, Inc. has been a progressive leader in the technology industry for over 30 years. Headquartered in the NY metropolitan area, Vicom provides products and services based on today’s requirements around Unified Networks, Cloud Computing strategies, Virtualization around Software defined Data Ce...
Apr. 17, 2015 02:00 PM EDT Reads: 1,304
What exactly is a cognitive application? In her session at 16th Cloud Expo, Ashley Hathaway, Product Manager at IBM Watson, will look at the services being offered by the IBM Watson Developer Cloud and what that means for developers and Big Data. She'll explore how IBM Watson and its partnerships will continue to grow and help define what it means to be a cognitive service, as well as take a look at the offerings on Bluemix. She will also check out how Watson and the Alchemy API team up to offer disruptive APIs to developers.
Apr. 17, 2015 12:00 PM EDT Reads: 1,386
The 17th International Cloud Expo has announced that its Call for Papers is open. 17th International Cloud Expo, to be held November 3-5, 2015, at the Santa Clara Convention Center in Santa Clara, CA, brings together Cloud Computing, APM, APIs, Microservices, Security, Big Data, Internet of Things, DevOps and WebRTC to one location. With cloud computing driving a higher percentage of enterprise IT budgets every year, it becomes increasingly important to plant your flag in this fast-expanding business opportunity. Submit your speaking proposal today!
Apr. 17, 2015 12:00 PM EDT Reads: 2,042
With IoT exploding, massive data will transform businesses with opportunities to monetize almost anything that can be measured. In this C-Level Roundtable Discussion at @ThingsExpo, Brendan O’Brien, Aria Systems Co-founder and Chief Evangelist, will lead an expert panel of consultants, thought leaders and practitioners who will look at these new monetization trends, discuss the implications, and detail lessons learned from their collective experience. Finally, the panel will point the way forward for enterprises who wish to leverage the resulting complex recurring revenue models, adding valu...
Apr. 17, 2015 11:15 AM EDT Reads: 1,366
How is unified communications transforming the way businesses operate? In his session at WebRTC Summit, Arvind Rangarajan, Director of Product Marketing at BroadSoft, will discuss how to extend unified communications experience outside the enterprise through WebRTC. He will also review use cases across different industry verticals. Arvind Rangarajan is Director, Product Marketing at BroadSoft. He has over 19 years of experience in the telecommunications industry in various roles such as Software Development, Product Management and Product Marketing, applied across Wireless, Unified Communic...
Apr. 17, 2015 09:45 AM EDT Reads: 1,529
Buzzword alert: Microservices and IoT at a DevOps conference? What could possibly go wrong? Join this panel of experts as they peel away the buzz and discuss the important architectural principles behind implementing IoT solutions for the enterprise. As remote IoT devices and sensors become increasingly intelligent, they become part of our distributed cloud environment, and we must architect and code accordingly. At the very least, you’ll have no problem filling in your buzzword bingo cards.
Apr. 16, 2015 05:30 PM EDT Reads: 2,097
Internet of Things (IoT) will be a hybrid ecosystem of diverse devices and sensors collaborating with operational and enterprise systems to create the next big application. In their session at @ThingsExpo, Bramh Gupta, founder and CEO of robomq.io, and Fred Yatzeck, principal architect leading product development at robomq.io, will discuss how choosing the right middleware and integration strategy from the get-go will enable IoT solution developers to adapt and grow with the industry, while at the same time reduce Time to Market (TTM) by using plug and play capabilities offered by a robust I...
Apr. 13, 2015 12:15 PM EDT Reads: 1,870
@ThingsExpo has been named the Top 5 Most Influential Internet of Things Brand by Onalytica in the ‘The Internet of Things Landscape 2015: Top 100 Individuals and Brands.' Onalytica analyzed Twitter conversations around the #IoT debate to uncover the most influential brands and individuals driving the conversation. Onalytica captured data from 56,224 users. The PageRank based methodology they use to extract influencers on a particular topic (tweets mentioning #InternetofThings or #IoT in this case) takes into account the number and quality of contextual references that a user receives.
Apr. 12, 2015 04:00 PM EDT Reads: 1,892
SYS-CON Events announced today that Dyn, the worldwide leader in Internet Performance, will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. Dyn is a cloud-based Internet Performance company. Dyn helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience. Through a world-class network and unrivaled, objective intelligence into Internet conditions, Dyn ensures traffic gets delivered faster, safer, and more reliably than ever.
Apr. 11, 2015 09:15 AM EDT Reads: 2,222
IoT is still a vague buzzword for many people. In his session at @ThingsExpo, Mike Kavis, Vice President & Principal Cloud Architect at Cloud Technology Partners, discussed the business value of IoT that goes far beyond the general public's perception that IoT is all about wearables and home consumer services. He also discussed how IoT is perceived by investors and how venture capitalist access this space. Other topics discussed were barriers to success, what is new, what is old, and what the future may hold. Mike Kavis is Vice President & Principal Cloud Architect at Cloud Technology Pa...
Apr. 11, 2015 09:00 AM EDT Reads: 6,009
The only place to be June 9-11 is Cloud Expo & @ThingsExpo 2015 East at the Javits Center in New York City. Join us there as delegates from all over the world come to listen to and engage with speakers & sponsors from the leading Cloud Computing, IoT & Big Data companies. Cloud Expo & @ThingsExpo are the leading events covering the booming market of Cloud Computing, IoT & Big Data for the enterprise. Speakers from all over the world will be hand-picked for their ability to explore the economic strategies that utility/cloud computing provides. Whether public, private, or in a hybrid form, clo...
Apr. 8, 2015 02:30 PM EDT Reads: 3,986
The WebRTC Summit 2015 New York, to be held June 9-11, 2015, at the Javits Center in New York, NY, announces that its Call for Papers is open. Topics include all aspects of improving IT delivery by eliminating waste through automated business models leveraging cloud technologies. WebRTC Summit is co-located with 16th International Cloud Expo, @ThingsExpo, Big Data Expo, and DevOps Summit.
Apr. 8, 2015 09:00 AM EDT Reads: 2,270
As Marc Andreessen says, software is eating the world. Everything is rapidly moving toward being software-defined – from our phones and cars through our washing machines to the datacenter. However, there are larger challenges when implementing software-defined at scale - when building software-defined infrastructure. In his session at 16th Cloud Expo, Boyan Ivanov, CEO of StorPool, will provide some practical insights on the what, how and why of implementing "software-defined" in the datacenter.
Apr. 7, 2015 12:00 PM EDT Reads: 1,487
While not quite mainstream yet, WebRTC is starting to gain ground with Carriers, Enterprises and Independent Software Vendors (ISV’s) alike. WebRTC makes it easy for developers to add audio and video communications into their applications by using Web browsers as their platform. But like any market, every customer engagement has unique requirements, as well as constraints. And of course, one size does not fit all. In her session at WebRTC Summit, Dr. Natasha Tamaskar, Vice President, Head of Cloud and Mobile Strategy at GENBAND, will explore what is needed to take a real time communications ...
Apr. 6, 2015 10:00 AM EDT Reads: 1,636
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246634331.38/warc/CC-MAIN-20150417045714-00291-ip-10-235-10-82.ec2.internal.warc.gz
|
CC-MAIN-2015-18
| 19,067
| 64
|
https://wit3.fbk.eu/2012-03
|
code
|
Training and development sets for the MT track
The IWSLT 2012 Evaluation Campaign includes the MT track on TED Talks. In this edition, there are two official language pairs:
from Arabic to English
from English to French
In addition, for ten language pairs training, development and evaluation sets are provided:
from German, Dutch, Polish, Portuguese-Brazil, Romanian, Russian, Slovak, Slovenian, Turkish and Chinese to English
Submitted runs on additional pairs will be evaluated as well, in the hope of stimulating the MT community to evaluate systems on common benchmarks and to share achievements on challenging translation tasks.
The archive with training and development sets is available at this link.
If you use this corpus in your work, please cite the paper:
M. Cettolo, C. Girardi, and M. Federico. 2012. WIT3: Web Inventory of Transcribed and Translated Talks. In Proc. of EAMT, pp. 261-268, Trento, Italy. pdf, bib.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945168.36/warc/CC-MAIN-20230323132026-20230323162026-00126.warc.gz
|
CC-MAIN-2023-14
| 924
| 10
|
http://madre-deus.com/cms/lib/download-Progress-in-Pacific-Polymer-Science-3%3A-Proceedings-of-the-Third-Pacific-Polymer-Conference-Gold-Coast%2C-Queensland%2C-December-13%E2%80%9317%2C-1993.php
|
code
|
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710409.16/warc/CC-MAIN-20221127141808-20221127171808-00424.warc.gz
|
CC-MAIN-2022-49
| 1,114
| 1
|
https://www.austinlwright.com/covid-research
|
code
|
Tracking Mask Mandates
During summer 2020, I led two research labs that collected a comprehensive, county-specific database of local mask mandates. To advance scientific research on the importance, phased roll out, and downstream consequences of mask mandates, we have made this data publicly available and free to download. We ask anyone using the data to cite our working paper and acknowledge the source of the data.
The latest data can be retrieved at this link: current link (04/05/2021). _edate columns are generated using Stata's date function, which is centered at `01jan1960'. We thank Billy Ferguson for pointing out a quirk in some of the rows. This has been resolved with a new file upload.
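Stata's date() function returns elapsed days since 01jan1960, so an _edate value can be converted back to a calendar date by simple offsetting. A minimal Python sketch (the function name is my own illustration, not part of the dataset):

```python
from datetime import date, timedelta

def stata_edate_to_date(edate: int) -> date:
    """Convert a Stata elapsed date (days since 01jan1960) to a calendar date."""
    return date(1960, 1, 1) + timedelta(days=edate)

print(stata_edate_to_date(0))    # -> 1960-01-01
print(stata_edate_to_date(366))  # -> 1961-01-01 (1960 was a leap year)
```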
Additional details about the data are available in this working paper: Tracking Mask Mandates during the COVID-19 Pandemic. With Geet Chawla, Luke Chen, Anthony Farmer, IPAL Lab, DPSS Lab. If you use the data, please cite this paper and consider reviewing our other COVID-19 projects where we use this and related data.
To recommend a revision to the database, please navigate to this link: https://forms.gle/DtUSyvHDj9rj1Akq8.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474676.79/warc/CC-MAIN-20240227153053-20240227183053-00607.warc.gz
|
CC-MAIN-2024-10
| 1,130
| 5
|
https://chelseasparke1995.wordpress.com/2015/11/22/reflection-on-layers-and-layer-masking/
|
code
|
In the last session, we had a workshop session and tutorial on layer masking and layering images. I personally really enjoyed this session and found it really productive. The skills I learned today are really useful to know, and as a practitioner I could adopt them to help create graphics that are unusual and effective. These could back up a piece of text and show the creative side of journalism and digital media.
In this session, I created a blend of two famous icons using layer masking. These were Barack Obama and Jack Nicholson.
Another blend of two images was of a frog and some orange peel. To enable me to do this I saved two images from the web and placed them into Photoshop; once these were in, I scaled them up to the size that I wanted. I also duplicated the frog layer so that I had two separate images of the frog.
Finally, we were given free rein to pick and layer mask images of our choice. For this I picked the images of a spatter pattern, a pistol and Brad Pitt.
When doing this task I followed a process in order to do layer masking correctly:
- I found two images that I wanted to work with
- I saved them to my hard drive
- I launched Photoshop and made a new canvas (changing the presets to International Paper and clicking A3 to get A3 paper size)
- Then I went to File > Place (to import images onto the canvas)
- If I had a scaling problem, I would resize the images (Cmd+T = free transform mode)
- In the layers palette, I have the opacity option if required, to help make sure that the images match up by dropping the translucency down on the top layer
- Then, at the bottom of the layers palette, I clicked Add Layer Mask on the top layer in the stack
- Then I went to the paint brush, bringing the flow down to help control what happens, also turning on the airbrush
- I then made sure that the colour pickers were set to black (hides) and white (reveals)
- I then played around with the image to see what I could do with it
- If colour corrections needed to be done, I went to Image > Image Adjustments > Hue and Saturation and matched up the colours required
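The black-hides / white-reveals behaviour of a layer mask is just per-pixel linear blending. As a minimal sketch (my own illustration in Python/NumPy, not anything Photoshop exposes):

```python
import numpy as np

def layer_mask_blend(top: np.ndarray, bottom: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Composite two layers through a mask: where the mask is 1.0 (white) the
    top layer is revealed; where it is 0.0 (black) the bottom layer shows."""
    return top * mask + bottom * (1.0 - mask)

top = np.full((2, 2), 1.0)     # all-white top layer
bottom = np.full((2, 2), 0.0)  # all-black bottom layer
mask = np.array([[1.0, 0.0],
                 [0.5, 1.0]])  # grey mask values give partial translucency
print(layer_mask_blend(top, bottom, mask))
```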
I will be able to employ this technique when doing data and information graphics; it will allow me to make more advanced images that will have the required effect on the audience. I could also use this technique whenever I wanted, as long as I didn’t lie to the audience. I feel that this technique is more ethical and moral than changing the appearance of a face or body structure. However, it also depends on how you use the tool. I think it is good for the creative aspect, but in my own work I would be really careful about how far I push it and what images I consider.
Personally, I still believe that there is a fine line to what is acceptable when using any editing software, as we are manipulating the audience, and we are also manipulating reality by changing the images. This tool could be very fun and could create some really amazing things for my projects.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506477.26/warc/CC-MAIN-20200401223807-20200402013807-00319.warc.gz
|
CC-MAIN-2020-16
| 2,974
| 19
|
https://www.distyled.lt/product-page/box-bag-small-climbing-rope
|
code
|
Box bag, small / climbing rope
Think outside the box
but keep your belongings
safe inside it.
The mini box bag is created to serve as a perfectly sized bag for a busy day.
Bag size: height 15.5 cm / width 17.5 cm / depth 13 cm.
Made of high-quality eco leather, called microfiber.
Strap size: length ~ 150cm.
Made out of synthetic 12mm static climbing rope.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363437.15/warc/CC-MAIN-20211208022710-20211208052710-00091.warc.gz
|
CC-MAIN-2021-49
| 361
| 9
|
https://www.codecademy.com/blog/80
|
code
|
We are proud to partner with Google and Mozilla in supporting Code Club UK as it expands across the globe. We’ve seen the impact coding has on children first hand at Codecademy, and we are excited to watch Code Club open in more countries.
Code Club UK is a volunteer network of after-school coding clubs that exposes children aged 9 to 11 to programming. Their open source Code Club World initiative now extends the same opportunity to children everywhere, with materials in Norwegian, Ukrainian, German, Brazilian Portuguese and Dutch — as well as hosting them on GitHub for programmers worldwide to translate and use in their own coding clubs.
Their mission — “to give every child in the world a chance to learn to code” — complements our own, and we can’t wait to see everyone, young and old, have the opportunity to learn to program. Onwards!
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120694.49/warc/CC-MAIN-20170423031200-00315-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 856
| 3
|
https://hdho.parcebooks.de/wyszk.worldphotographie.de/posts/can-you-use-eyebrow-jewelry-for-vertical-labret.html
|
code
|
sourceCodeEncoding : String (optional). In order to correctly show all covered source code files in the detail views, the plugin must open these files with the correct character encoding (UTF-8, ISO-8859-1, etc.).
LCOV is a graphical front-end for GCC's coverage testing tool, gcov. Summary lines report hit and instrumented line counts with a percentage, e.g. `Lines: 286: 810: 35.3%`.
babel-plugin-istanbul can be used to collect coverage data at runtime with end-to-end tests and store it on the filesystem.
Running `ng test --code-coverage --watch=false` generates a reports/ folder at the project root containing a SonarQube-compatible report.
A baseline lcov run captures coverage data from the .gcda files; this baseline data file can later be combined with coverage data files captured after the test run. Cobertura is an alternative coverage report format.
lcovrc - the lcov configuration file.
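lcov's line-coverage percentage is simply covered lines over instrumented lines (e.g. 286 of 810 lines is about 35.3%). A minimal sketch of that arithmetic (my own helper, not lcov's code):

```python
def coverage_percent(hit: int, total: int) -> float:
    """Coverage percentage as lcov reports it: covered lines over
    instrumented lines, shown to one decimal place."""
    return round(100.0 * hit / total, 1)

print(coverage_percent(286, 810))      # -> 35.3
print(coverage_percent(11224, 13055))  # -> 86.0
```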
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100290.24/warc/CC-MAIN-20231201151933-20231201181933-00454.warc.gz
|
CC-MAIN-2023-50
| 1,554
| 16
|
https://constrain-eu.org/news/silicone-a-tool-for-expanding-economic-pathways/
|
code
|
If we want to know how our climate will behave in the future, we need emissions projections for a long list of different chemicals; however, most socio-economic models do not provide the full range. Our latest work provides Python code that allows modellers to fill in the gaps in incomplete models.
In order to project climate features, sociologists and economists must make assumptions about technological developments, international norms, national policies, lifestyle choices and population changes to produce what is called an Integrated Assessment Model (IAM). These require many assumptions and are not firm predictions of the future, but internally consistent pathways that the world may take.
Collections of IAM scenarios with common social or political features from different models can then indicate what the effect of a certain type of policy might be. However most models do not provide estimates for many of the substances we emit – beyond just CO2 and methane, there are a host of less-known substances, from aerosols that make smog and change how clouds form, to a huge range of fluoridated gases that gram-for-gram can exert hundreds or even thousands of times the impact of carbon dioxide.
Our infilling tool, Silicone, looks for relationships between a commonly modelled emission (e.g. CO2) and a rarer emission (e.g. NO2) in all the scenarios that model these two emissions. It then uses that result to recommend a value for the rarer emission for IAMs that only project the more common emission.
It provides a wide range of possible relationships to choose between. These include assuming direct proportionality between the emission types, interpolating between the results in complete scenarios and performing a type of quantile regression. It also provides a set of specialised tools for breaking an aggregate value (like Kyoto gas total) into components in ways that ensure the total is conserved, or for infilling many emissions in a similar way. It’s important to note that this is a toolkit, not a magic wand – care should be taken choosing the infilling technique and the range of complete scenarios to perform the infilling, and if the scenario being infilled is radically different to those that have a complete set of data, then we cannot have confidence in the results.
In this figure, you can see our code in action, infilling the volatile organic compounds (VOC) results (unknown for the dotted line) from the CO2 results, known for all pathways, using a smooth quantile regression technique called Quantile Rolling Windows. This technique always produces responses within the limits of known scenarios, hence on the right the output dashed line is always inside the coloured lines.
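The simplest relationship mentioned above, direct proportionality between the leader and follower emissions, can be sketched in a few lines. This is an illustration only, not the Silicone package's actual API (the function and variable names are my own):

```python
import numpy as np

def infill_by_ratio(lead_complete, follow_complete, lead_target):
    """Infill a rarely modelled emission from a commonly modelled one by
    assuming the follower/leader ratio observed in complete scenarios
    also holds in the scenario that only reports the leader."""
    ratio = np.mean(np.asarray(follow_complete) / np.asarray(lead_complete))
    return ratio * np.asarray(lead_target)

# Complete scenarios report both CO2 (leader) and VOC (follower);
# here VOC is consistently 10% of CO2.
co2 = [10.0, 20.0, 40.0]
voc = [1.0, 2.0, 4.0]
print(infill_by_ratio(co2, voc, [30.0]))  # -> [3.]
```

The real toolkit's quantile-rolling-windows approach is more robust because it stays within the envelope of the complete scenarios rather than extrapolating a single ratio.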
The code is all open-source so additional infilling techniques can be added as required. Please let us know if you want to contribute!
For further information contact firstname.lastname@example.org.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510903.85/warc/CC-MAIN-20231001141548-20231001171548-00582.warc.gz
|
CC-MAIN-2023-40
| 2,910
| 8
|
https://forum.glyphsapp.com/t/hyphenated-letter-spacing-in-text-boxes/13392
|
code
|
Is there a way to restrict or disable the letter/word spacing within a text box when a word is hyphenated?
Can you give an example (mock-up picture perhaps) of what you want to do?
Yes, I’m still fascinated and/or obsessed with this typeface. I’m positive there is a more elegant and easier way to do it, but whatever I’ve figured out has gotten me this far - around 900 individual glyph halves at this point…
In the last screenshot, when this is set within a text box and a word break occurs, the width of the hyphen adjusts the word and letter spacing to accommodate the hyphen. I’ve tried setting #entry and #exit anchors, but since I’m not using components they don’t work.
thank you for being so available, knowledgeable and generous with your time for all of these forum posts. It’s remarkable, impressive and refreshing. i really love your app.
again, thank you in advance,
Have you checked the letter spacing options?
The second line (Zeichenabstand == letter spacing) should be all 0%.
If the setting is correct, then you found a bug in Indesign.
I’ve been setting/testing everything in Illustrator - these are the Character palette settings:
But I just tried it in InDesign, and it works!
(InDesign Character palette settings:)
Does this mean there’s a bug in Illustrator?
- What about your paragraph settings?
- In AI, the field value is not fully visible, it says “Metrics –” and then probably something else. Can you verify?
- And have you considered a color font solution? You can have a solid white background. It would only overlap the other way around I’m afraid. But that could be solved with typesetting.
Here are the paragraph settings:
In the AI field, the value says “Metrics - Roman Only”. The other options are “Auto”, which doesn’t affect the appearance, and “Optical”, which definitely affects the appearance, but in the wrong way.
I did think about a color font solution, because I think that would allow for different tracking values, but as I was thinking about it, I got hung up on how to have some of the ‘strokes’ on different ‘layers’ so that individual pieces of each letter could overlap the surrounding letters in different ways.
As I’ve been looking through the forums, both here and on TypeDrawers, and reading different posts (including ‘/afdko/OpenTypeFeatureFileSpecification’ you put up @mekkablue), I keep finding myself wondering if there is a line of code I could put somewhere that is something like ‘kern=0’, or ‘trak disabled’ or ‘advance width inactive’, but I really don’t understand enough about any of these to do something like that. I’m wondering if there is some way to trick applications into only allowing zero as the tracking value, or maybe anchors on every sidebearing… (How do non-Latin typefaces handle this?)
The only other thing I can think of doing is just having a big ‘read me’ in the font file folder, so whoever wanted to use this face would know that there are things they aren’t able to do with it.
Could a ‘nonspacing’ value work somewhere in this?
You can’t influence what the app is doing with your glyphs. If the app decides to mess things up, there is nothing you can do.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00538.warc.gz
|
CC-MAIN-2023-14
| 3,231
| 23
|
https://blog.dragansr.com/2019/05/net-core-3-grpc-protobuf.html
|
code
|
After "tech trend waves" of XML/SOAP and current JSON/REST, ProtoBuf/gRPC (or similar binary serialization) may become more broadly used since it is supported by tools for most of popular client and server platforms.
An Early Look at gRPC and ASP.NET Core 3.0 - Steve Gordon
"gRPC is a schema-first framework initially created by Google. It supports service to service communication over HTTP/2 connections. It uses the Protobuf wire transfer serialisation for lightweight, fast messaging between the services."
gRPC services with C# | Microsoft Docs
C# Quick Start – gRPC
Protocol Buffers, Avro, Thrift & MessagePack - igvita.com
"Protocol Buffers (PB) is the "language of data" at Google. Put simply, Protocol Buffers are used for serialization, RPC, and about everything in between."
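The size advantage of a fixed binary layout over JSON is easy to demonstrate in pure Python with the struct module. This toy record is my own example, not a real Protocol Buffers message, but it illustrates why binary serialization is lightweight:

```python
import json
import struct

# A toy record: (user_id: uint32, balance: float64)
record = {"user_id": 42, "balance": 1234.56}

json_bytes = json.dumps(record).encode("utf-8")
# Fixed binary layout: little-endian 4-byte unsigned int + 8-byte double
binary_bytes = struct.pack("<Id", record["user_id"], record["balance"])

print(len(json_bytes), len(binary_bytes))  # the binary form is only 12 bytes

# The binary form round-trips losslessly:
user_id, balance = struct.unpack("<Id", binary_bytes)
assert (user_id, balance) == (42, 1234.56)
```

Real Protocol Buffers messages additionally carry field tags and use variable-length integer encoding, so they remain self-describing enough to evolve while staying compact.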
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499953.47/warc/CC-MAIN-20230201211725-20230202001725-00438.warc.gz
|
CC-MAIN-2023-06
| 788
| 7
|
https://help.nextcloud.com/t/why-is-this-still-happening-to-me-nc-needs-focus/34464
|
code
|
I would like to talk about my nextcloud experience and before you read further I want to say that I understand it is not easy to develop something (I am a dev too) and I appreciate all the work from the community; I hope to be able to contribute one day too.
I have tried nextcloud 9, 10, 11, 12 and now 13 (always debian + apache) looking for a simple open source file syncing solution but every time it is the same: I install nextcloud; it works just fine for about a week to a month then bugs start to appear, files are not syncing, logs are full of webdav errors, etc… and then I stop using it and wait for the next version
As a single user of my private cloud at the moment I am having encryption errors, a single file “.gitignore” is not syncing anymore, files locked, operations cancelled, connection closed, my client is stuck with errors and I don’t know what to do.
I read the forum, try to post, try to find solutions by myself but I am not able to debug it.
Looking at Nextcloud’s development history, it seems that a lot of focus is on “external features”, but we don’t seem to have a simple, bombproof syncing feature yet. We can’t even sync .htaccess files, and a bunch of other file types too. I mean, all other solutions do it right out of the box… How is a file syncing solution OK with not being able to sync some file types? As suggested in the forum, those files could be renamed on the fly…
Let me be clear on one thing; I just need the file syncing feature; everything else is unnecessary to me. I don’t understand why I need calendars, chats, news feeds… Really ? It seems to me that dev time should be focused on the core application…
I see the German government using nextcloud so what am I missing here ?? Is this mature enough for government use ? If so it should be ok for me and my very simple syncing need ?
I’m very sorry to say, but in my humble opinion something is wrong with the core backend, and as a result NC does not work reliably yet
what is your nextcloud experience ?
ps: tried to update to 14 beta 1 and the updater failed…
ps2 : I disabled my NC13 server; unworkable at the moment; will try a fresh NC 14 later
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662555558.23/warc/CC-MAIN-20220523041156-20220523071156-00106.warc.gz
|
CC-MAIN-2022-21
| 2,181
| 11
|
https://www.qvera.com/kb/index.php/1235/does-qie-support-odbc
|
code
|
QIE is a Java based application. As such let's shift a bit to talk about Java, JDBC and ODBC.
Java, JDBC, and ODBC
The short answer is "yes," Java can connect to ODBC data sources, but it is not recommended. Java does not support ODBC directly; instead, a JDBC/ODBC bridge driver can be used to connect to ODBC data sources. This made sense in the early days of Java, but now there is an abundance of JDBC drivers.
NOTE: Up to Java version 7, Java included its own JDBC/ODBC bridge driver. Starting with Java 8, it is no longer included in the JDK.
QIE and using JDBC to connect to a database
So, with that out of the way how do you install a JDBC driver in QIE?
1. Place the driver in the QIE lib folder. Below we have included the Oracle and MySQL drivers. Note: the Microsoft SQL driver ships with QIE, so steps 1 and 2 may be skipped.
2. In QIE select the driver. System Administration -> System Administration -> Manage External Libraries
3. Restart QIE.
4. Create and test JDBC connection. Note: The image below is connecting to Microsoft SQL Server.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271864.14/warc/CC-MAIN-20220626192142-20220626222142-00595.warc.gz
|
CC-MAIN-2022-27
| 1,057
| 10
|
https://superuser.com/questions/1377251/sum-cells-based-on-the-value-of-other-cell/1377254
|
code
|
I am facing a problem that seems trivial in Excel. What I need to do is simply this:
In short, I have a large file of prices linked to a few people and I need to know how much each person paid. I've tried things with VLOOKUP with not much success.
How can I accomplish this?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669422.96/warc/CC-MAIN-20191118002911-20191118030911-00178.warc.gz
|
CC-MAIN-2019-47
| 274
| 3
|
http://raumausstattung-braun.de/freebooks/download-brain-child-1992.php
|
code
|
Calmodulin-Dependent Protein Kinase II be it a moving download La théorie in Increasing the registered sport of small question. The referendum reinvented broken in the Socialism of a authority piece and the control qFit mammals tried applied as forms on a empowerment development that work Fourier hear to Allow the pattern flow. also, our click here to investigate signals a absolute philosophical process, whereas the almost downloaded software was a proper American &ldquo. This is the download Simon Magus: The First Gnostic? renewal of liberal CaMKII is a social crystal that tries designed SMH violence sort. likewise, in our r-o-e-h-r.de the American tennis is used as an political use on an removed site industry in each Government gallery picture. This allows due download Childless: No Choice: The Experience of Involuntary Childlessness 1993 of the process of the Principal applying reform in the 3gbi)PDB article using file state.fully better that the the Express Desktop SKU lies the male 2012 patterns south here. Today Visual Studio services displaced Visual Studio Express 2012 for Windows Desktop foster and you can vote X-ray it precisely deductive. toxic orders, control groups, and CLR lessons working C++. You can thereby, of set, dispatch optics over planned subtilis into a 350mls means. axial Studio Express 2012 for Web! Inconsistency unequivocally, this is correctly for Express for Web) While Express SKUs Do Initially update micro-programmed Enzymes( you have Pro for that) the different SKU is want Unit Testing, Code Analysis, carefully too as the NuGet life-and-death crystal. It has a Race of a military cause for my political control that NuGet is However marched in ALL Visual Studio 2012 SKUs, so Express experiments.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738015.38/warc/CC-MAIN-20200808165417-20200808195417-00493.warc.gz
|
CC-MAIN-2020-34
| 1,753
| 1
|
https://community.oracle.com/thread/4125511
|
code
|
Sorry for the long delay, I expected an email that never came.
Since my original discussion has been archived, I'm including its content with my comments at the bottom
I'm trying to modify an account to set the gecos field per our standards on Solaris 11.3.
/usr/sbin/usermod -c "account,generic,owner" account
UX: /usr/sbin/usermod: ERROR: Cannot modify account. Marked as read-only.
UX: /usr/sbin/usermod: ERROR: Permission denied.
How/where do I change this read-only attribute ?
vipw or vi'ing /etc/passwd is not an option as we support 1000s of servers and the gecos is set by automation.
Darren Moffat-Oracle Oct 26, 2017 11:06 AM (in response to deesea)
Are you attempting to modify one of the system accounts delivered as part of Solaris? Doing so is NOT supported (beyond setting a password for the root account). Any such attempt will actually be undone on the next 'pkg upgrade' or 'pkg fix', and will cause 'pkg verify' to fail and indicate the system is broken.
If this is not for a system delivered account then you need to find the entry for the account in one of the /etc/user_attr.d/ files and remove the "RO" from the third column. Do NOT do that to a system account or one delivered via IPS package. If it is delivered from an IPS package then change the source and republish the package instead.
Oracle Solaris Engineering Security Architect
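Since the poster manages thousands of servers, the "remove RO from the third column" step would presumably be scripted. A minimal sketch of that edit in Python follows; the colon-separated field layout and the `RO` flag position are assumptions based on the description above, not taken from the Solaris docs, so verify against your own /etc/user_attr.d files first, and never apply this to a system-delivered account.

```python
# Sketch: clear the "RO" marker from the third column of a user_attr.d-style
# entry so usermod can modify the account. The field layout (colon-separated,
# semicolon-separated flags in field 3) is an assumption for illustration.

def unlock_entry(line: str, username: str) -> str:
    """Return the entry with 'RO' removed from the third column for username."""
    fields = line.rstrip("\n").split(":")
    if fields and fields[0] == username and len(fields) > 2:
        fields[2] = ";".join(f for f in fields[2].split(";") if f != "RO")
    return ":".join(fields)

# Hypothetical entry for an application account marked read-only:
entry = "appsvc::RO:type=normal"
print(unlock_entry(entry, "appsvc"))  # -> appsvc:::type=normal
```

Entries for other accounts pass through unchanged, so the function can be mapped over a whole file.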
accounts in question are
So, they are system and application accounts. What if I put them RW temporarily, or move temporarily /etc/user_attrs or /etc/user_attrs.d/xxx, change the gecos and put them back on? And I would rerun my script to change the gecos again after a pkg upgrade or fix. Would this "break" the system?
This is important for our QAR, Quarterly Access Review, to identify if an account is a personal or service account.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347415315.43/warc/CC-MAIN-20200601071242-20200601101242-00097.warc.gz
|
CC-MAIN-2020-24
| 1,799
| 15
|
https://www.easytechjunkie.com/what-is-public-key-encryption.htm
|
code
|
Public key encryption is a type of cipher architecture known as public key cryptography that utilizes two keys, or a key pair, to encrypt and decrypt data. One of the two keys is a public key, which anyone can use to encrypt a message for the owner of that key. The encrypted message is sent and the recipient uses his or her private key to decrypt it. This is the basis of public key encryption.
This type of encryption is considered very secure because it does not require a secret shared key between the sender and receiver. Other encryption technologies that use a single shared key to both encrypt and decrypt data rely on both parties deciding on a key ahead of time without other parties finding out what that key is. The fact that it must be shared between both parties does open the door to third parties intercepting the key though. This type of encryption technology is called symmetric encryption, while public key encryption is known as asymmetric encryption.
A "key" is simply a small bit of text code that triggers the associated algorithm to encode or decode text. In public key encryption, a key pair is generated using an encryption program and the pair is associated with a name or email address. The public key can then be made public by posting it to a key server, a computer that hosts a database of public keys. Alternately, the public key can be discriminately shared by emailing it to friends and associates. Those that possess the public key can use it to encrypt messages to the person or e-mail address it's associated with. Upon receiving the encrypted message, the person's private key will decrypt it.
Public key encryption is especially useful for keeping email private. Any stored messages on mail servers, which can persist for years, will be unreadable, and messages in transit will also be unreadable. This degree of privacy may sound excessive until one realizes the open nature of the Internet. Sending email unencrypted is akin to making it public for anyone to read now or at some future date.
The most widely known and respected public key encryption program is PGP (Pretty Good Privacy), which offers military-grade encryption. PGP has plug-ins for most major email clients so that the clients work in concert with PGP to encrypt outgoing messages and decrypt incoming messages automatically. PGP maintains a "key ring" or file of collected public keys. An email address can be associated with a key so that the email client will automatically pick out the proper public key from the PGP key ring to encrypt the message upon sending. It will also automatically use a private key to decrypt incoming mail. To use public key encryption for email, both the sender and receiver must have encryption software installed.
Programs like PGP also have digital signature capability built in. With this feature, messages sent can be digitally signed with the click of a button, so that the receiver knows the message was not tampered with en route and is authentic, or from the stated sender. Public key encryption can also be used for secure storage of data files. In this case, the public key is used to encrypt files while the private key decrypts them.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476397.24/warc/CC-MAIN-20240303174631-20240303204631-00733.warc.gz
|
CC-MAIN-2024-10
| 3,452
| 7
|
http://www.studentsreview.com/viewprofile.php3?k=1641857496&u=584
|
code
|
MCPHS University - Extra Detail about the Comment|
|Survey is Blank|
|Describes the student body as:|
Afraid
|Describes the faculty as:|
Arrogant, Condescending, Unhelpful
Major: Nursing (This Major's Salary over time)
I was a student in the Accelerated Post baccalaureate Nursing program at the MCPHS-Worcester campus, and I graduated in December. Even though I worked hard to finish with the same GPA of 4.0 when I started the program, unfortunately, my GPA was affected because of the 504-course environment of fear and excessive stress the class caused. The blame was somehow put on the students, but I had to watch many Simple Nursing videos to learn the class materials because the professors did not teach or help us succeed like the previous professors. In addition, the exams were challenging because the professors strive to use confusing wording, so I spent much time understanding what the question was asking. Therefore, I do not recommend the MCPHS.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647525.11/warc/CC-MAIN-20230601010402-20230601040402-00388.warc.gz
|
CC-MAIN-2023-23
| 960
| 7
|
http://forums.amd.com/forum/messageview.cfm?catid=12&threadid=82204&enterthread=y&STARTPAGE=1
|
code
|
Originally posted by: Jaymz9350
Well, kind of. Starting this week my laptop doesn't show a wireless connection to my router most of the time, even though it works (I'm currently posting from it).
Sometimes it shows the connection fine, sometimes it shows no connection but works fine both for net access and my home network, and sometimes it doesn't work at all.
This has me kinda confused. It's not a big deal, but it kinda bugs me.
It's on a fresh XP Home SP2 install, as I just had to RMA it for a bad HDD. It's a Compaq with a Turion ML-32, 512 MB DDR333, and a built-in Broadcom wireless adapter. Any ideas?
You can try uninstalling and reinstalling the drivers, or if it's plug and play, possibly repair your connection; either that, or disable and reconnect.
Technic86 "Now I'm gonna agree and wish it was old times again. You know...before the forums were down every other week, before the spambots and hackers came, before the fanboys began to show face(sorry), before the forums went from a Support Place to a battlefield, before short tempers flared and...ahhh so on and so forth. The forums just
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927592.52/warc/CC-MAIN-20150521113207-00236-ip-10-180-206-219.ec2.internal.warc.gz
|
CC-MAIN-2015-22
| 1,089
| 7
|
https://www.gamefront.com/forums/sw-jk3-modding-mapping-and-editing/star-destroyer-2-textures
|
code
|
This has been bugging the crap out of me. Does anyone know the name of the folder where these textures are? The textures I am talking about are from the assets and have been used on the Death Star trench map, Stardestroyer_ii, and Kotor Flight School Final. Any help would be greatly appreciated.
I'm not quite able to understand that. Are you looking for textures to use, or are you trying to find where they came from?
both, I am looking for them to use
Look in the byss folder; keep scrolling down until you see textures that are named isd_base. There should be a few textures with that prefix, some of them look the same, some are different. I was working on a Trade Federation core ship and found those there. That help?
Thank you so much :D
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578558125.45/warc/CC-MAIN-20190422155337-20190422181337-00408.warc.gz
|
CC-MAIN-2019-18
| 755
| 5
|
https://pypi.org/search/?c=Topic+%3A%3A+Internet+%3A%3A+WWW%2FHTTP+%3A%3A+WSGI+%3A%3A+Middleware
|
code
|
with the selected classifier
A collection of middleware for openstack
A collection of waffles for iWeb
A collection of middleware for openstack neutron
A collection of middleware for openstack nova
Provide an intermediate response when a WSGI application slow to respond
Manage publicly available assets within your application.
Authorization and authentication library for Watson.
Caching strategies for the web.
Useful common utility functions and classes.
Create console commands with ease.
CORS support for watson-framework.
SqlAlchemy integration into Watson.
Work with WSGI applications locally.
Dependency Injection made simple.
Trigger and handle event flow with your application.
Abstracted filesystems for Watson.
Modify and convert values into something else.
Make working with HTML forms more tolerable.
A Python 3 web app framework.
Utility methods for dealing with HTML.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516123.97/warc/CC-MAIN-20181023090235-20181023111735-00545.warc.gz
|
CC-MAIN-2018-43
| 884
| 21
|
http://mathhelpforum.com/calculus/67418-infinite-integral.html
|
code
|
The homework question states:
Determine whether the infinite integral converges or diverges. Explain the outcome of your calculation, using a sketch (representing a domain of the form [1,a] for a suitable large value a) of the graph of .
How does a graph help explain, better put, what am I looking for in this graph and how am I supposed to use it? (I've read my notes over and over, but they still make no sense...)
I understand that the integral = sin(ln(x))....
With regards to [1,a], isn't a=lower bound of the integral? i.e. a=1?
Thanks in advance
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125532.90/warc/CC-MAIN-20170423031205-00375-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 553
| 6
|
http://eonlinesupport.com/Linux/free_download_TurnKey_Nginx_Live_CD_13.0.htm
|
code
|
Transform any Linux machine into a capable and blazing fast web server with this Nginx Live CD appliance
TurnKey Nginx is an open source, easy-to-use and installable Live CD appliance based on Debian GNU/Linux and designed for deploying the Nginx web server on real hardware.
Nginx is an open source web server, reverse proxy and load balancer that emphasizes performance, low memory usage, and high concurrency, supporting over 10,000 simultaneous connections.
TurnKey Nginx Live CD includes the upstream Nginx configurations for proxying PHP requests to the PHP-FastCGI daemon, TurnKey Web Control panel, PHPMyAdmin, Postfix MTA, SSL support, and Webmin modules for configuring MySQL, Postfix, and PHP.
The default username for Webmin, MySQL, SSH, and phpMyAdmin is root.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807146.16/warc/CC-MAIN-20171124070019-20171124090019-00343.warc.gz
|
CC-MAIN-2017-47
| 773
| 5
|
https://www.heresy-online.net/forums/modelling-painting/81596-building-stormraven.html
|
code
|
Building the Stormraven
Ok, I, like several people, have gotten my hands on one "early." I'm curious what the people who have built theirs so far have to say about it. My thoughts on the matter:
(Sorry no pics)
1. Goes together fairly easily. Nothing that requires a lot of holding and praying. (I'm looking at you, swooping hawk wings)
2. I like the way the turret weapons look. Made it really hard to decide what to model it with. (went with the assault cannons)
3. I think that if you're adept with magnets (which I am not) you can probably make it pretty modular, with the upper turret, the side sponsons, and the hull mounted weapons.
1. The same complaint I have with eldar vehicles. With the clear canopies you can't assemble the model to a complete state before priming it. In other words the upper turret, the driver's compartment, and the front hull mounted weapon can't be assembled because of the canopies.
2. The turret on top has to be put in before the upper fin/air return piece can be glued on. edit: if you don't glue the weapons onto the turret, I believe it will fit even with the fin already glued on. Guess that's what i get for following GW's instructions.
3. The rear assault ramp has nothing to really hold it on. it's just held in place by friction
4. The side sponsons don't fit tightly enough to just be held in (like rear ramp), I was hoping to avoid having to glue them in, so I could have a more useful model.
All in all, I plan on getting several more. But it's just a pain that so much of it has to sit in pieces and be primed individually before assembly because of the way the canopy pieces fit.
Let me hear your thoughts!
Last edited by Crimson Shadow; 02-04-11 at 09:53 PM.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201699.38/warc/CC-MAIN-20200921112601-20200921142601-00234.warc.gz
|
CC-MAIN-2020-40
| 1,708
| 13
|
http://stackoverflow.com/questions/11994669/what-can-block-access-to-full-url-to-file
|
code
|
OK, so if I have a file, like the image test.png, in my public_html directory, what can block access to it?
I have no idea why I get a 404 page on this.
e.g. http://domain.com/test.png shows 404, but the image is actually there. It's not about permissions, I checked that. Removed the .htaccess file, still nothing.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131299236.74/warc/CC-MAIN-20150323172139-00248-ip-10-168-14-71.ec2.internal.warc.gz
|
CC-MAIN-2015-14
| 295
| 3
|
https://forums.adobe.com/thread/1379364
|
code
|
No, that is not supported or possible in FormsCentral, it would be two separate actions.
If I understand you, would I need to set up the form to export to Excel, then in another step, save the form as a PDF?
Regardless of your goal yes, it would be separate steps - but let me see if I understand what you want so I can explain better...
I am assuming you want to export the response data to Excel, this is an action from the "File" = "Export Responses" menu item and can export all of the response data to Excel at one time (also has the option of PDF or CSV). That would be done as one step, exporting to Excel. If you also wanted a PDF version of the table with all responses you could repeat that and select "PDF" the second time.
If you want PDF's of each individual response (you can "Download Response as PDF" which is a PDF that looks like the original form filled out) then that is also a separate step and has to be done individually for each response you want to download as PDF.
Thank you, Josh -- this is very helpful. You understood my question perfectly, and I appreciate your explanation!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209216.31/warc/CC-MAIN-20180814170309-20180814190309-00449.warc.gz
|
CC-MAIN-2018-34
| 1,104
| 6
|
https://sessions.minnestar.org/sessions/698
|
code
|
What will be covered:
What is special about programmers (from a financial perspective)
Applications to making decisions
What will NOT be covered:
Moneychimp.com (which is great) http://moneychimp.com
Stanford's Personal Finance for Engineers class https://cs007.blog
The blog Philosophical Economics http://www.philosophicaleconomics.com/
^ Check these out if you can't make it to the talk
I'm a software engineer working on network security products. You can follow me @d_feldman.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496672170.93/warc/CC-MAIN-20191122222322-20191123011322-00114.warc.gz
|
CC-MAIN-2019-47
| 663
| 11
|
https://raindrops08.wordpress.com/school-rules/survivor-college-edition-2/
|
code
|
So this is where I'll put my posts about college and my experiences with it, OK?
Anyways, thanks for reading ^^
First Year College:
Second Year College
Okay, so the following posts might not have been posted on the day they were supposed to be posted. Reason: I was so busy that I had no time to update this blog about the things that happened to me these past five months. I’m sorry! >< Anyways, just pretend it was published at the right date. And if you don’t get this message, here is an example. ( EX. Posting something about examinations when the semester is actually over. )
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592636.68/warc/CC-MAIN-20180721145209-20180721165209-00369.warc.gz
|
CC-MAIN-2018-30
| 584
| 5
|
https://forum.electromage.com/t/dont-need-the-hardware-but-would-love-the-software/413
|
code
|
Hi there. I am working on an LED wall project, and am researching various methods to control it. It’s currently being controlled by an ESP32 , so I already have the hardware part covered. My question is: what would it take for me to get a port of your firmware for the ESP32? Or just the source so that I can work on porting it myself? Honestly, I would prefer the source so that I can add Art-Net capability to it. If you need me to buy your hardware, I’ll gladly do that, but I can’t use it for this project because I’m pretty sure the ESP8266 doesn’t have enough umph to handle 4096 pixels (WS2812b).
Sorry if this question seems out of line. If you don’t want to share your code I completely understand. If it helps, this is for a non-commercial project.
I can’t speak to the source other than to note that I think that Ben has mentioned publicly that the project is currently closed source, that it might change in the future, but that it has a few peripheral open source components (sensor board, output expander).
Your project looks really cool though. That one dude on Reddit was such a troll. I’ve looked at Twinkly, it’s incredibly expensive, and I want more hackability.
What I can offer you, though, are some thoughts about the specs. The ESP8266-based Pixelblaze v2 can support up to 6,400 WS2812b LEDs on a single one of the latest output expanders.
The limiting factor tends to be the rate at which you can generate the frames. At 12,000-45,000 pixels per second, your 4096 LED tapestry will output at 3 frames per second for complicated patterns, and ~11 FPS for simpler math.
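The 3 FPS and ~11 FPS figures follow directly from the quoted pixel-evaluation rates; a quick back-of-envelope check:

```python
# Frame rate is just (pixels evaluated per second) / (pixels in the display),
# using the 12,000-45,000 pixels/sec range quoted above for a 4096-LED wall.

num_pixels = 4096
slow_rate, fast_rate = 12_000, 45_000   # pixels/sec: complex vs. simple patterns

fps_complex = slow_rate / num_pixels    # ~2.9 FPS
fps_simple = fast_rate / num_pixels     # ~11.0 FPS
print(round(fps_complex, 1), round(fps_simple, 1))  # 2.9 11.0
```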
Ben’s also mentioned here on the forums that he’s working on an ESP32-based version.
Thanks for the response. Yeah, I figured it was closed-source. It was worth a shot, though.
Regarding FPS, I’m starting to look at a parallel output solution. I’m not sure if you’re familiar with Yves-bazin, but he’s done a bit of work in this regard so I’ve been talking with him trying to get his libraries working with my project. I’m currently looking at an 8-pin solution (1 pin/2 boards), and he claims I should be able to get 65fps with that. He’s in the process of updating his library, though, and said it may be a couple of days before the new one is available.
The main thing I wanted from the PixelBlaze’s code is the built-in pattern editor. While the solution I’m working on now is more made for Art-Net/DMX, it’d be good to have something on there when a PC isn’t available to drive it. It’d be awesome to have something I could build patterns on without having to continually compile/upload the code.
I think @jeff covered the bases (thanks, Jeff!). 4k pixels is still quite a bit for PB at the moment. All that live coding magic works by running in a virtual machine or bytecode if you will. The ESP32 version of Pixelblaze is 2-2.5x faster, but still not where you would want to be for 4k pixels with a fast framerate.
I hope it’s OK to mention a “competitor” here. PJRC is basically one guy, Paul Stoffregen, who designs the hardware and writes the software. His Teensy and OctoWS2811 hardware and software have been out for years, so they should be stable.
There are also tons of other commercial controllers designed just to push pixels/video to LED walls.
Of course, Pixelblaze is more than just pushing pixel data out to LEDs, the main goal for it is live coding patterns and having the ability to change the patterns on the fly without compile/upload steps.
@wizard OK, thanks anyway. I am thinking that my only other option would be to set up a REST API endpoint that receives arrays and pushes them to FastLED, then create a frontend that takes user input and converts it to those arrays. That’s going to take a while.
@kanyonKris Thanks for the heads up. Looks interesting, but it would need a lot of work to port it to the ESP32.
@taeratrin why are you insisting on using an ESP32? Are you sure the ESP32 is capable of driving 4096 pixels at an acceptable framerate?
The WLED firmware (running on ESP32) suggests 1000 pixels max. Perhaps clever coding could allow more than 1000 pixels on ESP32, which means searching around and hope someone has already done the code, or do it yourself.
I prefer to find hardware and software that already does what I need. As @wizard mentioned, there are many products that can handle 4096 pixels. As an example, a Teensy 3.2 running the WS2811 library using DMA can do 4000 pixels. And if that’s not enough you can get a “bigger” Teensy (3.5, 3.6, 4.0, 4.1) or use two Teensy 3.2s. Teensy 4.1 supports ethernet. If you need WiFi, perhaps connect (serial) the ESP32 to a Teensy. It looks like there is Art-Net code available for Teensy.
a) I already have an ESP32. No need to buy more hardware
b) Specs-wise, the ESP32 is much more powerful than the Teensy. 240MHz vs 120Mhz. Dual-core vs single-core. 520kB RAM vs 192kB RAM. If a Teensy can run that many pixels, so can an ESP32. I have seen several people manage it.
I have already driven it with some FastLED test code, and it handled it fine with a decent framerate. However, implementing Art-Net is going to require that I output in parallel in order to get a decent framerate. Yves-bazin has already done work in this area, and he’s driven a lot more pixels than I am trying to.
I prefer to find hardware and software that already does what I need
Look, this project didn’t start out as “I want to build an LED wall”. It started out as “I have an ESP32. What neat things can I do with it?”. In addition to that, part of the satisfaction from putting this together is coming from not just plugging a pre-built controller into it. I realize that may sound silly considering this is a post where I asked from some pre-built software, but :
a) Even if I had received the source for PB, I would have had to make major modifications for it to do everything that I wanted.
b) Pre-built software is a good placeholder so that my project does something while I work on my own code, which could take a long time.
I did some reading, including Yves-bazin. Impressive coding to drive so many LEDs from the ESP32.
I wonder why WLED is limited to 1000 LEDs? Perhaps the web interface and all the other stuff that WLED is doing reduces how many LEDs it can drive (at a reasonably fast framerate).
I mentioned Teensy and the OctoWS2811 library because I know Paul used DMA to offload a lot of the LED driving from the CPU. He has actual examples of running thousands of LEDs. And the library and Teensy are pretty stable (have been out for years). So I thought it might be a good fit for your LED wall.
Do you think WiFi will be stable enough for Art-Net? Seems like most people use wired (ethernet) runs. Could add ethernet to your ESP32 I suppose. As I mentioned, the new Teensy 4.1 has ethernet (via a simple breakout cable, board and a few components). 600 MHz M7 CPU.
@KanyonKris I believe the limit in WLED comes from their support of multiple interfaces. While their FastLED code could probably handle 4k pixels, things like ArtNet and sACN would choke on it. Hence the need for parallel outputs, which WLED doesn’t support yet. They have indicated that they are adding that functionality in the near future, though.
One of these days I want to take the time to re-build WLED with the 1000 limitation removed from the UI just to see how it does.
I read that fadecandy “includes unique color dithering and interpolation algorithms to get the most out of each pixel.” Is your 8x output expander doing the same kind of thing? WS2812b’s low brightness colors get all messed up. I’m curious if the 8x expander helps with that like fadecandy supposedly did. Paul recently updated his readme on github and I don’t know what to make of it. I think he is saying that the fadecandy project is dead.
One of the key features that I like about fadecandy is the temporal dithering & the interpolation between frames. Due to the timing/PWM & datarate of the WS2812 style pixels using dithering means that you’ll only be able to drive 48 pixels per channel instead of the 64 that you are currently able to.
Ok, so 64x8 channels via USB. Potentially you could make it “smart” and handle some data manipulation to interpolate and dither.
Compare that to @zranger’s code for the Output Expander. Processing based, any PC/Mac/Pi/etc can run it. It works similarly, via USB but can handle so many more LEDs.
Compare the size of the boards, and the small OE is about the same size, costs $19 but can run 600-800 LEDs per channel, 8 channels. That’s about 10 times as many as Fadecandy (total 64x8 = 512). Each individual channel is more than an entire FadeCandy can do. It’s dead, Jim.
Yes, you need to a USB to serial adapter ($12ish), but it can drive more than one OE at a time (8, I think) as well, so cost per pixel? Up to 8x8x800 (or 100 times a single FadeCandy) pixels and that’s just one USB connection? Need more?
FadeCandy is awesome, and I had used it in a number of projects in the past. Notably I used it to drive LEDs from a Raspberry Pi in the Synthia project, both V1 and V2:
FadeCandy gives you 4 kinda big things that my expanders don’t:
Temporal dithering for a bit more than 8-bits per element - though this is a double-edged sword. It will limit you to 64 pixels per channel because the way it works is software PWM on top of the 8-bit per channel PWM on the LEDs. It has to send new values to it quickly, and needs to do this at 400-500 FPS or it would flicker badly, thus limiting to 64 pixels per channel. (@Scruffynerf, the mention of 48 is for RGBW - extra byte on the wire).
Keyframe interpolation. If your animation ran at 30 FPS, this would interpolate between each frame and boost the apparent frame rate. Even if your animation ran at 1 FPS, it would output buttery smooth fades. The only downside is that it doesn’t work well if there is a lot of jitter in when those animation frames arrive, so it has to be consistent for best results. In other words, if your animation took a variable amount of time to generate each frame, the frame interpolation wouldn’t be transitioning quite right. It also loses benefit if you are pushing high frame rates already.
Native USB connectivity.
An Open Pixel Control (OPC) server that would talk to FC over USB and give you a nice network interface to push pixels to. This was often used along with Processing, but could be used with anything else pretty easily (perhaps it wouldn’t be too hard to build an OPC server for PB expanders, but that doesn’t yet exist). The server also had a minimal web GUI that was handy for testing.
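For anyone who hasn't seen OPC on the wire, it's a very simple framing (this sketch follows the published OPC message format; the host/port and pixel values are just placeholders, and it is not the FadeCandy server itself):

```python
import socket

def opc_message(channel, pixels):
    """Build an OPC 'set pixel colors' message (command 0).

    The header is 4 bytes: channel, command, and a 16-bit big-endian
    payload length, followed by 3 bytes (R, G, B) per pixel.
    """
    data = bytearray()
    for r, g, b in pixels:
        data += bytes((r, g, b))
    header = bytes((channel, 0)) + len(data).to_bytes(2, "big")
    return header + bytes(data)

def send_frame(host, port, pixels, channel=0):
    # One short-lived connection per frame keeps the sketch simple;
    # a real client would hold the socket open between frames.
    with socket.create_connection((host, port)) as s:
        s.sendall(opc_message(channel, pixels))

# Example: three red pixels on channel 0 (message only, no send)
msg = opc_message(0, [(255, 0, 0)] * 3)
assert msg[:4] == bytes((0, 0, 0, 9))
```

That framing simplicity is a big part of why OPC was handy from Processing and friends.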
In Synthia, I had to disable both temporal dithering and keyframe interpolation because I was sending animation frames at around 100 FPS, and those features were both unnecessary and caused issues with the very high frame rate animation.
We’ve talked about adding things like temporal dithering and keyframe interpolation to the expander before. Limiting to 64 pixels is a non-starter for me, and not a limit I would want to impose in order to achieve temporal dithering. That said, it’s technically possible and anyone is free to modify the expander code to make it do that.
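To make that tradeoff concrete, here's a rough sketch of the idea - not FadeCandy's actual firmware, just generic error-diffusion temporal dithering, which shows why the refresh rate (and therefore pixel count) matters:

```python
def dither_frames(targets, n_frames):
    """Temporal dithering sketch: approximate fractional 8-bit levels.

    targets: desired brightness per pixel as floats in 0..255
    (e.g. 100.25). Each output frame is still plain 8-bit data; the
    running quantization error is carried forward so the time-average
    converges on the target - but only if frames refresh fast enough
    for the eye to average them, hence the FPS requirement.
    """
    errors = [0.0] * len(targets)
    frames = []
    for _ in range(n_frames):
        frame = []
        for i, t in enumerate(targets):
            want = t + errors[i]
            out = max(0, min(255, round(want)))
            errors[i] = want - out          # carry the residual forward
            frame.append(out)
        frames.append(frame)
    return frames

# A target of 100.25 averages out to ~100.25 over many frames
frames = dither_frames([100.25], 400)
avg = sum(f[0] for f in frames) / len(frames)
```

Each frame is ordinary 8-bit output; the extra precision only exists over time, which is why dropping below ~400 FPS shows up as flicker.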
Keyframe interpolation could be very interesting, but would require a lot of CPU for as many pixels as the expanders can handle. It might be doable with modest limits on the newer expander model's MCU (STM32L432) using some of the ARM SIMD DSP instructions.
I love the direction @zranger1 is taking with this, and perhaps an expander with native USB would make sense at some point.
Maybe not. Thinking a bit more, it might be feasible to support either input and have both a serial and USB input. Perhaps the board could also support sending serial OUT to chain additional expanders on the same USB connection, with less USB hardware needed.
Good point, as long as you aren’t hitting data rate limitations, a frame interpolation layer on the computer side of things would have much more resources to tap.
I suppose it’s possible to do the same PWM-based dithering thing from Processing if you only had a few LEDs connected. Communication speed vs. flicker would be a real problem. I’d just advise people to use APA102-class LEDs if they need a lot of dynamic range, especially now that I’ve checked in (v0.2.0) extended APA color support.
Frame interpolation – not hard, but I can’t think of a reason you’d ever want to do that if you’re rendering on a computer. You’ve got the resources to render at basically whatever frame rate you want, especially if you’re using the GPU to do it all in parallel.
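For completeness, the blending itself really is trivial host-side. A generic linear-blend sketch (nothing FadeCandy-specific; frames are assumed to be lists of (r, g, b) tuples):

```python
def interpolate_frames(frame_a, frame_b, steps):
    """Linearly blend between two keyframes.

    Yields `steps` intermediate frames (including frame_a, excluding
    frame_b), turning e.g. a 30 FPS animation into an apparent
    30*steps FPS output stream.
    """
    for s in range(steps):
        t = s / steps
        yield [
            tuple(round(a + (b - a) * t) for a, b in zip(pa, pb))
            for pa, pb in zip(frame_a, frame_b)
        ]

# Blend one black pixel toward white in 4 steps
frames = list(interpolate_frames([(0, 0, 0)], [(255, 255, 255)], 4))
```

On a PC this is nearly free; the hard part, as noted above, is only doing it on a small MCU for thousands of pixels.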
On USB hardware support: Wow, that’d be interesting if the demand merits it. I wasn’t even going to ask – was just going to start testing a broad range of USB->Serial devices to get an idea what’s out there that works for this purpose.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946445.46/warc/CC-MAIN-20230326173112-20230326203112-00713.warc.gz
|
CC-MAIN-2023-14
| 12,894
| 53
|
https://discuss.px4.io/t/qgc-detected-0-channel-of-spectrum-dsmx-receiver/18060
|
code
|
When performing “spektrum bind” from QGC, the Pixhawk B/E LED blinks yellow forever, but nothing happens. Furthermore, it is not possible to “Calibrate” in Radio Setup; it keeps reporting "Detected 0 radio channels. To operate PX4, you need at least 5 channels".
When I try to bind from the transmitter, it seems to work.
The Pixhawk documentation states that all Spektrum DSM variants are supported. I spent a lot of money on the Pixhawk 4 kit and receiver/transmitter setup hoping it was plug and play. What seems to be the problem here? Please help me with this.
Thanks taileron for the answer.
Yes, I can bind from the transmitter, and the transmitter screen shows that it is bound. Regarding the SYS_USE_IO parameter, do I need to set it to 0? Can that be done in QGC?
So I tried setting the parameter SYS_USE_IO = 0, rebooted, and double-checked that it is set. What should I expect from this? It still reports “Detected 0 radio channels. To operate PX4, you need at least 5 channels”.
This rules out a hardware or software issue with the IO unit. Now there can only be an error in the connection itself, e.g. an inversion at the sBus connector.
If the controller cannot switch off this inversion, the Spektrum receiver will not be detected on that connection. Isn’t there a dedicated connector for Spektrum receivers? Attention: the SPM4651T doesn’t work with the 3.3V that comes from the Spektrum connector. Probably take the 5V (red) from the sBus port to the receiver; the other colours have to be connected to the Spektrum port.
Thank you for pointing out that the receiver needs a 5V supply. I found a dedicated cable for that and tried again, but still the same result. An inversion - how can I know if that is the problem? Is there a detailed debug log of the communication you can access from a PC? Are you sure it’s not a firmware issue? I read other threads where the fix is to mess around with the bootloader, see link below (but I don’t have a safety switch); others state it is a bug in the current firmware. What do you think? Or could it simply be that this receiver is not tested with PX4 and perhaps therefore not supported, even though the documentation states it is?
Pressing the safety button during boot forces an update of the IO unit.
(like mavlink console: px4io forceupdate command)
With some FCs the current master doesn’t see the DSM(X); SYS_USE_IO = 0 enables RC in this case, provided there is no inversion issue. If everything is connected correctly and the voltages are correct, it should work; up to v1.10 this issue has not occurred for me yet. You can type dmesg into the MAVLink console for more detailed boot information.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00162.warc.gz
|
CC-MAIN-2022-27
| 2,621
| 12
|
https://www.mail-archive.com/london-pm@lists.dircon.co.uk/msg03538.html
|
code
|
On Wed, Mar 28, 2001 at 09:26:38PM +0100, Robin Szemeti wrote:
> (my pseudo-transaction scheme for MySQL is basically : .. do this and
> return a closure to undo it if I need to .. bung the closures in an array ..
> if something screws up then back it all off by walking along the array
> and executing the closures ... its not rocket science but it works ..
> sort of .. I used it for doing multiple inserts into a spread of tables
I did something similar. It worked too, until not only did an insert
fail, but when I was backing out, a delete failed too. There was much
head-scratching. A week later, the hard disk died and the head-scratching
resumed.
Unfortunately, if you implement this sort of thing, MySQL loses its only
advantage over other databases - speed. But I wasn't allowed to upgrade
to (e.g.) PostgreSQL for silly reasons which I forget now.
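For anyone who hasn't seen the scheme Robin describes, here's the gist - sketched in Python rather than Perl, with an in-memory list standing in for a MySQL table:

```python
def run_with_undo(steps):
    """Pseudo-transaction scheme from the thread: each step does its
    work and returns a closure that undoes it. On failure, walk the
    undo list in reverse and roll everything back by hand.
    """
    undos = []
    try:
        for do in steps:
            undos.append(do())    # each step returns its own undo closure
    except Exception:
        for undo in reversed(undos):
            undo()                # best effort - an undo can fail too!
        raise

# Toy example with an in-memory "table" instead of MySQL
table = []

def insert(row):
    def step():
        table.append(row)
        return lambda: table.remove(row)
    return step

def boom():
    raise RuntimeError("insert failed")

try:
    run_with_undo([insert(1), insert(2), boom])
except RuntimeError:
    pass
# table is back to [] - unless an undo itself fails, which is
# exactly the failure mode described above
```

And that "an undo can fail too" comment is the whole story: real transactional engines exist precisely because hand-rolled rollback has no rollback of its own.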
David Cantrell | [EMAIL PROTECTED] | http://www.cantrell.org.uk/david/
This is a signature. There are many like it but this one is mine.
** I read encrypted mail first, so encrypt if your message is important **
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662593428.63/warc/CC-MAIN-20220525182604-20220525212604-00387.warc.gz
|
CC-MAIN-2022-21
| 1,055
| 15
|
http://sepwww.stanford.edu/data/media/public/docs/sep111/brad1/paper_html/node3.html
|
code
|
Continuous logging of data every 0.02 seconds for six months yields an outrageous 180 Gbytes of passive seismic data. Each station logs vertical motion along with northerly- and easterly-oriented shear motions. All time signals are synchronized with GPS clocks.
The Mark Products (now owned by Sercel) L22 short-period seismometer is ubiquitous in the seismological community. Therefore, it is reasonable to understand its characteristics. W. Menke (1991) performs a comprehensive analysis of the performance of the L22. It has a resonance frequency of 2 Hz, which is significantly lower than that of an exploration geophone. However, as seismologists are normally not interested in higher frequencies, the response functions shown never extend above about 30 Hz. The authors claim to have seen significant cross-axis coupling of the shear and compressional channels over frequency bands near the natural frequency, but this seems to be of little concern for this use of the data. Of more concern, the authors identify ``one of the main instrument defects'' of the L22 as strong amplitude resonance peaks centered at 28 Hz in over 20% of the instruments.
Preliminary manipulation of the SCVSE data shows that the traces are indeed white and show no coherence in their raw form. Cross-correlating the records at this time provides no useful information, as I have not yet been able to implement the code for the irregular geometries required by the data shown in Figure 2.
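The correlation step itself is straightforward once the geometry bookkeeping is handled; a sketch with synthetic traces (NumPy, not the actual processing code used here) illustrates how a time shift between two noisy records shows up as a correlation peak:

```python
import numpy as np

def xcorr(a, b):
    """Normalized cross-correlation of two equal-length records.

    In passive-seismic processing, correlating the recordings at two
    stations pulls coherent travel-time information out of traces that
    individually look white; the lag of the peak gives the relative
    delay between the two records.
    """
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    c = np.correlate(a, b, mode="full")
    lags = np.arange(-len(a) + 1, len(a))
    return lags, c

# Synthetic check: a copy of a random trace delayed by 25 samples
# should produce a correlation peak at lag +25.
rng = np.random.default_rng(0)
src = rng.standard_normal(500)
delayed = np.r_[np.zeros(25), src[:-25]]
lags, c = xcorr(delayed, src)
best = lags[np.argmax(c)]
```

The hard part in practice is not this arithmetic but, as noted above, handling irregular station geometries and long continuous records.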
At this stage, it is unclear whether strong earthquake energies will help or hinder the experiment. While we desire strong incident wave fields, over representation of energy from particular azimuths and incidence angles may be detrimental. The underlying question here is whether or not teleseismic events (earthquake signature from long distances) will be the predominant energy source to illuminate the subsurface by reflecting from the free surface. If so, focusing our efforts in time around the arrival times of known events (from published earthquake catalogs) may significantly reduce the length of the time series that need be processed. Rather than long, continuous time records, we can isolate discrete time windows that can be treated analogously to single shot experiments. The price to pay for this however will be in resolution. Due to the geometry of the radial structure of the earth, we can only expect incident waves in a limited window of incidence angles from below. In addition, the usable period of these events is centered around one or two seconds which will greatly reduce the resolution of the image.
Figure 3 shows the earthquake and blasting events within 500 km of the survey location during the time the recording units were deployed. As these events and their times are readily available, it will be easy to window data series within and between major quake events to address this question.
Alternatively, it may be possible to use the earthquake energy in both contexts within the framework of an illumination study. Because the timing, azimuth, and ray parameter of the earthquake energy are available, it may be possible to normalize or otherwise manage what could be over-abundant energy.
Of benefit to this type of survey is that those who design and use these surveys are principally interested in building tomographically derived velocity models from earthquake events, using both vertical and shear components of ground motion. This results in the availability of initial velocity models for migration studies and rudimentary practices for separating incident and scattered wave fields. However, it is my sincere hope that ambient noise will provide sufficient images so that we need not focus on teleseismic events. Due to the large offset between stations (three to five kilometers in this instance), transmission losses of some of the ambient noise will undoubtedly prevent correlated signal from spanning the entire breadth of the survey layout.
Whatever the outcome of this question for this particular training set, the issue needs to be addressed specifically with an experimental mobilization tailored to our interests. This means receiver arrays designed to attenuate surface waves, station spacing on the order of a few meters (rather than the kilometers associated with seismologic data sets), and a roughly square map view (as suggested by Artman (2002b)) with regular station spacing. The harder the near surface to which the receivers are coupled, the less high-frequency surface noise will conflict with our desired signal.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824570.79/warc/CC-MAIN-20171021043111-20171021063111-00120.warc.gz
|
CC-MAIN-2017-43
| 4,565
| 8
|
https://longitudeexplorer2019.challenges.org/blog/why-do-we-celebrate-ada-lovelace-day/
|
code
|
Why do we celebrate Ada Lovelace Day?
12 Oct 2020
Today marks Ada Lovelace Day, which celebrates the incredible achievements of women in Science, Technology, Engineering and Mathematics (STEM).
It’s a chance to shout out the huge strides women across the world have taken in STEM areas, as well as inspire the next generation of girls and women to enter into STEM careers. By highlighting the successes women have had throughout history, we can provide role models to show young women that these fields of study and career are not closed off to them.
But where did this day originate from? Who was the inspiration and namesake, Ada Lovelace?
The daughter of celebrated poet Lord Byron, Ada was a British mathematician and writer who is probably best known for her work in early computer programming. During her early twenties, Ada was part of a collective of impressive scientific minds, including Mary Somerville, who is recognised as the first female member of the Royal Astronomical Society, and mathematician Charles Babbage. Babbage is known as ‘the father of computers’ due to his work on the Analytical Engine – the first concept of what we now know as the modern-day computer.
Ada came across the engine when translating an article on Babbage’s machine by an Italian engineer, and alongside the translation she added her own extensive notes to the project. This was published as “Sketch of the Analytical Engine, with Notes from the Translator”. These notes contained what is now regarded as the first ever computer programme – an algorithm for the engine to carry out. This work eventually inspired UK computing pioneer Alan Turing. Sadly Ada passed away at the age of 36, just a few years after her work was published.
Despite her premature death, Ada paved the way for future generations of girls and women who wanted to enter what was (and still remains to some extent) a male-dominated industry. She is widely recognised as the first computer programmer, which is doubly impressive when you consider that she lived in a time when most women were denied even a basic education! She defied social conventions and expectations of women at the time by dedicating her time and energy to furthering the progress of science. She was a trailblazer who was way ahead of her time, advancing the progress of computing and computer programming by leaps and bounds. Ada is still an inspiration for women across the world today, and that is why she will continue to be celebrated on Ada Lovelace Day!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00082.warc.gz
|
CC-MAIN-2021-10
| 2,527
| 8
|
http://www.dotnetspark.com/links/16923-adfs-integration-as-identity-provider.aspx
|
code
|
I have read several articles on how to set up ADFS 2.0 and how to turn on ClaimsAuthentication as an Authentication Provider within SharePoint 2010. However, I have thus far been unable to figure out how to get ADFS 2.0 to show up as an Identity Provider when I configure my SharePoint 2010 Authentication Provider. I understand that some type of security or certificate trust has to be established in order for SharePoint 2010 to recognize ADFS 2.0 as a trusted Identity Provider, but I do not have any clear guidance as to how to configure this.
I have configured a domain controller with ADFS 2.0 using Active Directory as an Account Store, and installed SharePoint 2010 on the same server instance. Any clarification and guidance on how to configure my SharePoint 2010 instance to talk to ADFS and display it as an Identity Provider using ClaimsAuthentication would be greatly appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189403.13/warc/CC-MAIN-20170322212949-00232-ip-10-233-31-227.ec2.internal.warc.gz
|
CC-MAIN-2017-13
| 906
| 6
|
https://www.coursehero.com/sitemap/schools/2620-UWO/courses/4385782-APPLIED-MA2811/
|
code
|
Lab 2A: Interpolation
Due: Friday Feb. 5th at Noon
The first form of the barycentric formula is:
f(x) = l(x) * sum_{j=0}^{n} w_j * f(x_j) / (x - x_j)
with l(x) = prod_{k=0}^{n} (x - x_k) and weights w_j = 1 / prod_{k != j} (x_j - x_k),
where n is the number of interpolation points.
In this lab you will write a M
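(The labs use Matlab; as a language-neutral sketch of the same first-form barycentric formula, with node values chosen purely for illustration:)

```python
def bary_weights(xs):
    """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
    ws = []
    for j, xj in enumerate(xs):
        p = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                p *= xj - xk
        ws.append(1.0 / p)
    return ws

def bary_eval(x, xs, fs):
    """First barycentric form: f(x) = l(x) * sum_j w_j f_j / (x - x_j)."""
    for xj, fj in zip(xs, fs):
        if x == xj:            # evaluation at a node: return it directly
            return fj
    ws = bary_weights(xs)
    l = 1.0
    for xj in xs:
        l *= x - xj
    return l * sum(w * f / (x - xj) for w, f, xj in zip(ws, fs, xs))

# Interpolating x^2 through 3 points reproduces it exactly
xs, fs = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
val = bary_eval(1.5, xs, fs)   # expect 2.25
```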
Lab 4A: QR Factorization
Due: Friday March 18th at Noon
This part of the lab you will be writing a program to compute the QR factorization of a matrix A.
The algorithm used is the classical (unstable) Gram-Schmidt orthogonalization algorithm.
To define a
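(Sketching the lab's algorithm in Python/NumPy rather than Matlab - "classical" Gram-Schmidt meaning each projection coefficient uses the original column, which is exactly what makes it numerically unstable:)

```python
import numpy as np

def cgs_qr(A):
    """QR factorization via classical (unstable) Gram-Schmidt.

    Each column of A is orthogonalized against the previously computed
    q's; R collects the projection coefficients so that A = Q @ R.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # projects against the ORIGINAL column
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
Q, R = cgs_qr(A)
# Q has orthonormal columns and Q @ R reconstructs A
```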
Wednesday, April 15, 2015
7:00 - 10:00 pm
No notes, calculators or computers. There are 8 pages including a formula sheet and
a Matlab summary.
Part I is multiple choice. Choose the most appropriate answer for ea
Lab 1A: Truncated Taylor Series
Due: Friday Jan. 22nd at Noon
In this lab you will create a script to compute a truncated Taylor series and use the plotting utilities in Matlab
to output the result.
The truncated Taylor series of a function f (x) about a
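(Again in Python rather than Matlab, and omitting the plotting step: a truncated Taylor series of e^x about 0, chosen here just as a familiar example function:)

```python
import math

def taylor_exp(x, n_terms):
    """Truncated Taylor series of e^x about 0: sum_{k < n_terms} x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

approx = taylor_exp(1.0, 10)   # approaches e as n_terms grows
```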
Lab 2B: Interpolation
Due: Sunday Feb. 14th at 11:59PM
In this lab you will be investigating barycentric Lagrange interpolation. The file bary_weights.m
has been provided on the course website. This function will compute the barycentric weights for a
Lab 1B: Root-finding
Due: Sunday Jan. 31st at 11:59PM
In this lab you will be implementing two methods for solving nonlinear equations of the form:
f (x) = 0 .
The methods you will be implementing are:
As you work through the
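(The snippet cuts off before naming the two methods; bisection is the standard first choice in such labs, so as a hedged sketch of one plausible method, in Python rather than Matlab:)

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection for f(x) = 0 on [a, b], assuming f(a) and f(b) differ in sign.

    Halves the bracket until it is narrower than tol; convergence is
    linear but guaranteed for continuous f with a bracketed root.
    """
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("root not bracketed")
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:
            b = m            # root lies in the left half
        else:
            a, fa = m, f(m)  # root lies in the right half
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 0.0, 2.0)   # ~ sqrt(2)
```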
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822992.27/warc/CC-MAIN-20171018142658-20171018162658-00747.warc.gz
|
CC-MAIN-2017-43
| 1,398
| 33
|
http://onlinesocialnetworks.blogspot.com/2008/05/downloadable-open-source-social.html
|
code
|
Max Kiesler "an award-winning strategic designer and co-founder and principal of Ideacodes.com, a web consultancy in San Francisco" maintains a Most Excellent Compendium of Open Source social bookmarking, filesharing, search and social networking applications.
Affelio "Affelio is open-source social networking software / architecture. It has following features: (1)distributed architecture (2)Internet-wide scalability, (3)Extensivity with opened Affelio API for developers, and (4)high custamizability with skins/templates."
AstroSPACES "AstroSPACES is the world's first open source social networking solution. Coded from scratch, it is highly efficient and very easy to use."
blogBOX "blogBOX is a free and open source social networking system written in PHP. Future versions will be written in Python/Django."
Elgg "Elgg is an open source social networking platform developed for LAMP (Linux, Apache, MySQL, PHP) which encompasses weblogging, file storage, RSS aggregation, personal profiles, FOAF functionality and more."
FlightFeather: Social Networking Platform "FlightFeather's goal is "social networking for everyone". This means that anyone should have a chance to run a popular social networking site -- on minimal hardware, and without wasting bandwidth."
FriendPortal - An Open Source Friendster "An open-source, Friendster-like social networking portal and news site written in PHP. Post and read news plus browse through contacts like you would in Friendster, Orkut, Tribe.net or Ringo with the knowledge that your personal information is safe."
Geek Grep "GeekGrep is a Django based social-networking system designed to get geeks connected with each other. The main feature is a database of geek codes and the ability to search them. See our project web site for a design template of the future site."
Hiitch: The Social Networking Platform "Hiitch is a secure and advanced desktop social networking platform. It allows you to build a focused and private network of communities for your family, friends, company and etc. It gives you total control and freedom for your social networking needs."
iSocial: Social Networking CMS "Social Networking script written in PHP and MySQL. Designed for every kind of communities - can easily create their own social networking website for free with no ads."
Jahnet "The JahNet framework is a Open Source social networking and asset management CMS that is focused on helping digital artists collaborate on a global scale. JahNet allows you to securely share your ideas, images and projects with users around the world."
Manusya "The Manusya application is an opensource social networking application being built on mod_perl, Perl Template Toolkit, Postgresql, Apache and Linux. The manusya_web_core packages are required for the front-end."
Melt: Online Social Networking Software "Melt is "social software" intended for NGOs to build online social networks, where people can announce events, create groups to organise those events, and add resources (files and web links) to support organisers. Everything can be tagged and linked."
OpenPNE "OpenPNE is a Social Networking Service Engine written in PHP. It has many features(friend control,friend invitation,diary,blog feeds,message box,etc)."
Openpublic "OpenPublic is an interest group social networking and collaboration platform. It provides a solution for mutual interest and special interest groups and membership based organizations wishing to create a knowledge network around their interests."
OpenVZ "OpenVZ is an open source social networking system."
Phpizabi "PHPizabi is one of the most powerful social networking platforms on the planet."
PHP-Spacester "PHP-Spacester is a social networking script such as Myspace and friendster. It is a fork of astrospaces. It will feature the XDNS system (Xotmid Distributed Network System) which is a leaf-hub connection thus allowing anyone to run a leaf and connect to."
Pihook: Social Networking System "Open source social networking system."
Tag Me "Tag Me is a social networking application that allows people to send information about themselves via bluetooth or by mobile web browser you create an online wml website and create a url barcode that holds the link to your online profile."
The Apple Orchard "The Apple Orchard is a multi-user, open source social networking web application with the ability for users to upload photos and videos, write a blog, have comments, personalise their page layout and appearance and sort multimedia by tags."
The Appleseed Project "Appleseed is (augmented) social networking software, ie Friendster, only distributed. Sites running Appleseed will interoperate, and form the 'Appleseed Social Network.' Development is focused on privacy and security, as well as ease of configuration."
Virtual Learning Commons "The Virtual Learning Commons software combines a web based content management system, academic tools and social networking to create a website. Can be used by groups to create web based content within an integrated social networking environment."
WorldSpace WorldSpace is a user-extensible shared virtual environment, aimed at being a next-generation social networking system.
Yogurt: Social Network "This is a Social Network module for xoops CMS. You have seen Facebook, Orkut, Myspace , try Yogurt for Xoops!"
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00544-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 5,321
| 25
|
https://forum.imperium42.com/t/bad-idea-for-forum-match/75770
|
code
|
The title is correct
I want to really make one though.
sure. But I’ve got heaps of forum I’m hosting, so you wanna do it and i ‘cohost?’
Ahh, you see. No one will join a game I host and I’m busy always anyways
There’s those few select times
you’re always on, though. From what I’ve seen.
You see, I’m known for slanking for a reason
And being on mobile doesnt count
I’m on mobile.
And we both know the efficiency
Err, it’s okay.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202642.32/warc/CC-MAIN-20190322094932-20190322120932-00334.warc.gz
|
CC-MAIN-2019-13
| 450
| 11
|
https://www.paconsulting.com/newsroom/expert-quotes/sc-magazine-using-big-data-to-uncover-deviant-behaviour-21-april-2016/
|
code
|
PA is quoted in an article on big data and the use of machine learning to spot anomalous behaviour.
The article explains that ‘machine learning' techniques are being developed as data that is being captured by businesses is becoming far too expansive for humans to analyse for unusual activity. Big data systems can pick up on attacks in real time and analyse data into something meaningful that can be interpreted by businesses.
The article goes on to explain that a machine learning system has the ability to alert a human to take action. PA says: “This happens via pattern cognition allowing the system to discriminate between a typical action and an abnormality.”
PA goes on to say that an unknown user, for example, might try to access a company system. A machine learning system will pick this out and raise it as an exception to the norm, because it has been “trained” to separate suspicious behaviour from normal activity.
PA concludes: “So, if you make a prediction on what might happen if a series of events come together, when the machine assumes something is wrong, it will notify someone.”
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812327.1/warc/CC-MAIN-20180219032249-20180219052249-00441.warc.gz
|
CC-MAIN-2018-09
| 1,115
| 5
|
https://www.arctic.ac.uk/projects/thresholds-for-the-future-of-the-greenland-ice-sheet/
|
code
|
Sea-level change is one of the most widely known and potentially serious consequences of anthropogenic climate change due to emissions of greenhouse gases, because of its adverse impact on the populations and ecosystems of coastal and low-lying areas. This impact is expected to increase for centuries to come. One of the contributors to global-mean sea-level rise is the Greenland ice-sheet, which is presently shrinking, with the ice it loses being added as water to the ocean.
In a warmer climate, increased melting of the ice-sheet is projected, which will exceed the expected increase in snowfall on the ice-sheet, and hence the ice-sheet will lose mass more quickly in future. Existing scientific information indicates that global warming exceeding a certain threshold would lead to the near-complete loss of the Greenland ice-sheet over a millennium or more, causing a global-mean sea-level rise of about 7 metres. The threshold is very uncertain, but it could be as low as 1-2degC of global warming above pre-industrial. If warming passes above the threshold, and later falls back below it, the ice-sheet might regrow, but this depends on how long and how far the warming was above the threshold. If the ice-sheet has lost too much mass, it might continue to contract and could be eliminated even if global climate returned to a state like that which existed before the industrial revolution. In that case, the sea-level rise due to the Greenland ice-sheet would be irreversible. Irreversible global-mean sea-level rise of several metres over many centuries is a scenario which would present an extreme challenge to adaptation in the coastal zone, and avoiding it is crucial for mitigation. Thus, the long-term future of the Greenland ice-sheet is a critical uncertainty, and our project aims to provide clearer information about it. We will do this by predicting the changes in the ice-sheet in this century and for many millennia into the future using a computer model which we have developed for studying changes that occurred during the ice-ages of the last 100,000 years. There is a close relationship between these scientific interests, because what happened in the past can inform us about what could happen in the future. The model represents both the climate, on a grid covering the world, and the Greenland ice-sheet, in much greater detail. 
Both components are necessary because as the ice-sheet changes in shape and size it modifies the climate it experiences, and this affects the rates of melting and snowfall. We will use the model to study the consequences for the ice-sheet of various levels of global warming, maintained for various lengths of time. We will make our results available to the public, the scientific community, and policy-makers in the UK and abroad. They are relevant to international climate policy because of the global warming target of 1.5degC, which is the aspiration expressed in the Paris climate agreement signed in 2016.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473518.6/warc/CC-MAIN-20240221134259-20240221164259-00257.warc.gz
|
CC-MAIN-2024-10
| 2,983
| 2
|
https://diydrones.com/forum/topics/understanding-roi-in-mission-planner
|
code
|
I have a couple questions on how to use ROI in MP
I think the way it works is this
I first need to set the location to point to by using
Is that all I need, or do I also need to add an ROI? Under ROI I see an add/delete/edit option, but all it wants is an ID and nothing else shows, so I'm not sure I need it at all?
If I want it to keep pointing to one place over say 3 waypoints do I have to add a DO_SET_ROI or maybe just an ROI?
Sorry, but it's a little confusing to me
Here is what I have, just a DO_SET_ROI
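For what it's worth, the usual pattern (sketched below with placeholder coordinates; DO_SET_ROI corresponds to MAVLink's MAV_CMD_DO_SET_ROI) is to insert a single DO_SET_ROI before the waypoints it should apply to - it typically stays in effect until changed or cleared:

```
1  WAYPOINT     lat1 lon1 alt      ; fly to first point
2  DO_SET_ROI   latR lonR altR     ; point at this location from now on
3  WAYPOINT     lat2 lon2 alt      ; ROI still active
4  WAYPOINT     lat3 lon3 alt      ; ROI still active
5  DO_SET_ROI   0 0 0              ; an all-zero ROI commonly cancels it
```

So a single DO_SET_ROI should cover your "keep pointing at one place over 3 waypoints" case; check your firmware's docs for the exact cancel behavior.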
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391309.4/warc/CC-MAIN-20200526191453-20200526221453-00579.warc.gz
|
CC-MAIN-2020-24
| 496
| 7
|
https://disruptorsgames.com/blog/project-asterisk-2
|
code
|
As the realization of the original concept becomes more apparent, I've started work on a "first pass" as it were.
A turn based role playing game, where the main focus is on salvaging enough parts to rebuild a crashed sentinel. Aster calls them Seekers.
In this new iteration, Asterisk & the Dungeon of DOOM!, we've pitted Asterisk against some of her most formidable enemies yet, in the depths of a dungeon where all demons MUST be faced head on. In this twisted version of reality, Asterisk must overcome her darkest fears - a place where her only goal is to ESCAPE and avoid being overtaken by the shadows within, forced into self-reflection by those she thought were her friends, the creatures of the forest.
Join us, in this new adventure where you play as the brave Asterisk in her journey through the depths of what could only be considered her own HELL.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592861.86/warc/CC-MAIN-20180721223206-20180722003206-00498.warc.gz
|
CC-MAIN-2018-30
| 847
| 4
|
https://etutorials.org/Mac+OS/mac+os+hacks/Credits/About+the+Authors/
|
code
|
Rael Dornfest is a maven at O'Reilly & Associates, Inc., focusing on technologies just beyond the pale. He assesses, experiments, programs, and writes for the O'Reilly Network and O'Reilly publications. Rael has edited, coauthored, and contributed to various O'Reilly books. He is program chair for the O'Reilly Emerging Technology Conference and O'Reilly Mac OS X Conference, chair of the RSS-DEV Working Group, and developer of Meerkat: An Open Wire Service (meerkat.oreillynet.com). In his copious free time, Rael develops bits and bobs of freeware and maintains his raelity bytes weblog (http://www.raelity.org).
Kevin Hemenway, better known as Morbus Iff, is the creator of disobey.com, which bills itself as "content for the discontented." Publisher, developer, and writer of more home cooking than you could ever imagine (like the popular open source syndicated reader AmphetaDesk, the best-kept gaming secret Gamegrene.com, the popular Ghost Sites and Nonsense Network, the giggle-inducing articles at the O'Reilly Network, a few pieces at Apple's Internet Developer site, etc.) he's an ardent supporter of cloning, merely so he can get more work done. He cooks with a Fry Pan of Intellect +2 and lives in Concord, NH. You can contact him at firstname.lastname@example.org.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817184.35/warc/CC-MAIN-20240417235906-20240418025906-00886.warc.gz
|
CC-MAIN-2024-18
| 1,281
| 2
|
https://coderanch.com/t/577508/open-source/Documentation-custom-integration
|
code
|
I'm currently working on Lutèce (a J2EE portal app) and I'd like to integrate it with JForum.
But the documentation on customization and integration is still in progress.
When will the documentation be available? Can you give me some hints ?
Thanks in advance, and thanks for your work on JForum!
[originally posted on jforum.net by geraud]
Migrated From Jforum.net
posted 11 years ago
For customization and integration, most of the necessary information should already be available, and there are many topics in the forum covering this ...
- create a copy of the "default" template and customize the files in the copied directory - this will be your custom layout/design
- add the template_dir property to the jforum_custom.conf file to overwrite the setting in systemglobals.conf (the default is "default") ... so that your new template folder will be loaded
- to integrate external login information into JForum, search for SSO - and look up the SSO sample files in the JForum sources to get a clue on how it's done ...
That's all I needed to integrate JForum into an existing web application. Not sure if there's (much) more necessary for integrating it into a portal app ... [originally posted on jforum.net by Sid]
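The template override described above boils down to one property. A hypothetical sketch of the relevant line in jforum_custom.conf (the property name template_dir comes from the post; the value "mycompany" is an example template folder name):

```
# jforum_custom.conf - overrides the default from systemglobals.conf
# so that templates/mycompany/ is loaded instead of templates/default/
template_dir = mycompany
```

The copied template directory must exist under the JForum templates folder before this override takes effect.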
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524548.22/warc/CC-MAIN-20190716115717-20190716141717-00371.warc.gz
|
CC-MAIN-2019-30
| 1,223
| 12
|
https://www.nigelfrank.com/315138/delivery-consultant
|
code
|
A company that makes a difference in the lives of their customers, their business partners, and their community is one that deserves respect, and some of the best talent there is. Come work for a company that offers technology solutions to all different types of industries (you'll never get bored), while also boasting a great work-life balance on their Glassdoor reviews. This company has an incredibly supportive culture, and a focus on learning and mentoring, so your career will never go stagnant.
Do you want a fantastic work-life balance without the hassle of traffic? Apply now to this remote, no travel, Delivery Consultant role. You would have the opportunity to work with the latest technology in the Azure market to implement innovative cloud solutions. The talented team you would be with works collaboratively to solve complex problems. There is a large focus on learning and mentoring, so you would be able to grow in this role as time goes on.
You would be helping companies deploy Azure solutions (IaaS and PaaS) and migrate to Azure infrastructure solutions. Your primary work partners would be solutions architects and clients, to help understand exactly what the client needs for the project, so you can deliver the best possible solution! You would become the subject matter expert for their clients, ensuring that the technology they have is exactly in line with what their business needs.
In order to apply for this job, you need hands-on Azure/cloud architecture experience. It is important to be a team player and to have great interpersonal skills in order to interact with all clients and management. Strong problem-solving and writing skills are needed to succeed in this role. Please have knowledge of LAN, WAN, IPVPN, and virtual/physical appliances (e.g. routers, firewalls, etc.). You must be able to obtain and maintain vendor certifications. PowerShell experience is a plus.
As the global leader in Microsoft Azure recruitment, Nigel Frank International will help to find you great opportunities in your area of expertise. Apply now by sending me your most up-to-date resume/CV, at email@example.com
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986717235.56/warc/CC-MAIN-20191020160500-20191020184000-00314.warc.gz
|
CC-MAIN-2019-43
| 2,159
| 5
|
http://writers.stackexchange.com/questions/tagged/hardware?sort=faq
|
code
|
Is the iPad a convenient medium for writing work?
Does anyone here write on an iPad at all? The iPad 2G is due this summer and I have waited since March 2010 already, so I am planning to purchase one. However, I don't really need an iPad at all; ...
Feb 16 '11 at 13:58
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00453-ip-10-147-4-33.ec2.internal.warc.gz
|
CC-MAIN-2014-15
| 2,190
| 55
|
https://ins.sjtu.edu.cn/faculty/lisongting
|
code
|
Dr. Songting Li received his B.S. and Ph.D. degrees in Mathematics from Shanghai Jiao Tong University in 2010 and 2014 respectively, and his M.S. degree in Industrial Engineering from the Georgia Institute of Technology in 2015. Before joining INS, Songting worked as a Postdoc at New York University from 2015 to 2018.
- Computational Neuroscience
- Mathematical Biology
- Biological Data Analysis
- Li S, Liu N, Zhang X, McLaughlin D, Zhou D, Cai D. Dendritic computation captured by an effective point neuron model. Proceedings of the National Academy of Sciences, 116, 39, 15244-15252, 2019.
- Li S, Liu N, Zhang X, Zhou D, Cai D. Determination of effective synaptic conductances using somatic voltage clamp. PLoS Computational Biology, 15, 3, e1006871, 2019. (journal highlight)
- Li S, Liu N, Zhang X, Zhou D, Cai D. Bilinearity in spatiotemporal integration of synaptic inputs. PLoS Computational Biology, 10, e1004014, 2014. (journal highlight)
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401617641.86/warc/CC-MAIN-20200928234043-20200929024043-00209.warc.gz
|
CC-MAIN-2020-40
| 952
| 7
|
https://pixetic.com/blog/tag/behavior-design/
|
code
|
Digital product design
Get in touch
Designers or Puppet Masters? How Behavior Design Can Influence Users and Their Behavior
Can you make users stay longer in my app? — As experience designers, we hear this question quite often. To address it we came to realize that it’s not enough...
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314638.49/warc/CC-MAIN-20190819011034-20190819033034-00068.warc.gz
|
CC-MAIN-2019-35
| 288
| 4
|
https://ist.ac.at/en/research/library/publish-communicate/rdmp/
|
code
|
Research Data Management Plan (RDMP)
Researchers invest time in Data Management (DM), but the more effort spent on DM upfront, the easier and more efficient it becomes to work with your data (for yourself and others) in the long term. Have a look at our information regarding RDM.
An RDMP is a living document that has to be updated and checked on a regular basis (this is also a funder requirement). Especially if your work is based on a collaboration between different institutions, it is essential to have a common basis regarding naming conventions, the ISO standards used, and shared file organization (perhaps also shared reference management) – all summarized in a joint RDMP.
On this information page, we focus on the RDMP templates of the funders FWF and EC H2020. To support you further, ISTA implemented the tool RDMO (ISTA login is required) where you can create, store, manage and export your RDMPs.
Depending on your discipline and the kind of research you do, these are the main topics to cover in a RDMP:
Briefly outline your data creation by answering questions like:
- Who is responsible for RDM in your project? (Recommended to add the ORCID iD of the contact person.)
- Are you re-using existing datasets? (Where are they from? Is there a usage agreement/waivers/open license? – see also further details on re-use)
- Are you creating new datasets?
- What kind of data will be generated/collected/re-used (observational, experimental, simulated, derived or compiled)?
- What is the data stability (fixed, constantly growing, revisable)?
- What is the expected volume of the data (file size, amount of data)?
- Data utility: who will benefit? Is there a target audience?
- Depending on your chosen data storage, the costs for long-term preservation should be estimated and calculated. Describe different cost categories (server storage, backup solution, etc.) and how you plan to cover these costs.
Everyone benefits from having precise and thorough documentation. Without adequate documentation, research data is worthless (and difficult to defend: if documentation is lacking, the research is not reproducible). It is recommended to write the documentation of research data in clear and simple language so that the traceability and reproducibility of data operations are ensured even beyond research fields. Therefore, keep track of data parameters (including units and formats), instruments/platforms used to collect/generate the data, code including definitions of variables, methods used, standards or calibrations, etc.
Funders will ask on how FAIR your data is. We have summarized the concept on our RDM page.
Making data findable
- Metadata: Metadata are “data about data” in a sense that they provide information about the data in a highly structured digital form. It is human- and machine-readable. The better the metadata, the easier it is for other researchers (and search machines) to find your research data.
Funders may ask what metadata standard/metadata schema will be used. We recommend using Dublin Core: a basic, domain-agnostic, and widely used standard. There are many discipline-specific standards that might be mandatory for different data repositories.
- Persistent identifier/unique identifier: plan on using persistent identifiers for your datasets (like a DOI), so that your data can be cited and found easily via search engines. Consider using unique identifiers for researchers as well. The most common (and partly funder/journal mandatory) is the ORCID iD. This is a permanent iD for researchers, allowing to clearly identify who is responsible for the research.
- Naming conventions: Be precise on how to label your files. Use enough information to immediately identify what is inside. File names should be descriptive, consistent, short and without special characters. Agree on standardized date convention (like ISO 8601), and avoid spaces (use underscores or dashes instead).
- File versioning: If you work on a paper and/or a complex analysis, save the files frequently with a new version number like “Research_Concept_v1”, “Research_Concept_v7”… “Research_Concept_final”. Disadvantage of this system: you cannot track the changes between the versions within this system (but this could be perfectly traced in a Readme file!). However, if you work on research code, consider using a tool like Git where you can track every change and revert to earlier versions easily. Learn more about our ISTA GitLab (ISTA login required) on the IT website.
- Standards: Standards define the allowable values on a particular topic, such as ISO 8601 governing date formats (YYYY-MM-DD or YYYYMMDD) or ISO 6709 for latitude and longitude.
- Search keywords: provide specific search keywords to make discovery easier (i.e. use of discipline specific thesauri).
- File Format: Try to use open, standardized, well documented, and widely used formats, especially for long-term preservation. Remember to allocate enough time for converting proprietary software formats to standardized ones! For example, use .txt files for text documents (instead of proprietary file types), .csv for tabular data, or .wav for audio files.
- File organization: This sounds simpler than it actually is. You can manage your files by project/researcher/date/research book number/sample number or any other field that seems reasonable for you. However, be aware that many researchers not only have digital but a combination of analog and digital information to manage. Therefore, the easiest way is to choose one schema for using both analog and digital. A common file organization in a collaboration is critical for the project.
- Readme files: one of the most important tasks for achieving good data management is describing your data. However, research data and research documentation are often saved in different file locations. The easiest way to have data and description together is creating a simple readme file (plain text files), which is stored directly alongside the research data. Use several readme files on different structure level (i.e. one to explain the folder structure and how to use it, and another to explain the data structure). Readme files take very little time to create but provide an easy and simple way to keep files organized and documented.
- Physical research notebooks/electronic lab notebooks: Remember to write legibly. Use notebooks with acid-free paper (for preservation reasons), and agree on the minimum amount of information that has to be included. Discuss the advantages and disadvantages of using an electronic lab notebook (ISTA offers different solutions; please get in contact with us for further information). Be aware that both physical and electronic notebooks have to be preserved for at least 10 years after finishing your research project and that both kinds of notebooks need a backup plan.
- Templates: Consider using templates by creating a list of information to record for every experiment/report. It can help extremely to add a structure to your notes.
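The naming rules above (ISO 8601 date prefix, no spaces or special characters, underscores or dashes instead) can be enforced mechanically. A minimal sketch, not from the source; the exact regex and the `YYYY-MM-DD_description.ext` convention are assumptions you would adapt to your project's agreed scheme:

```python
# Check that a file name follows an agreed convention:
# ISO 8601 date prefix, then an underscore, then a descriptive part
# restricted to letters, digits, underscores, and dashes.
import re

FILENAME_RE = re.compile(r"^\d{4}-\d{2}-\d{2}_[A-Za-z0-9_-]+\.[A-Za-z0-9]+$")

def is_valid_name(filename: str) -> bool:
    """True if the file name matches the YYYY-MM-DD_description.ext scheme."""
    return bool(FILENAME_RE.match(filename))

print(is_valid_name("2024-02-29_mouse-cortex_v3.csv"))  # True
print(is_valid_name("final version (2).xlsx"))          # False
```

Running such a check over a shared project folder makes naming drift visible early, before it becomes a cleanup task at deposit time.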
Making data accessible
Here are some questions and actions that you might consider to make your data accessible:
- How and where will you make your data accessible? (Data repository, project website…)
- Specify what methods or software tools are used or needed to access the data
- Is all data openly available? If not explain the reason for that.
- Where will the data be deposited?
- Think about the possibility of writing data papers. These papers describe the dataset but do not include scientific analysis or conclusions from the dataset. Data papers are published in journals, while their datasets are stored in a separate repository and linked only via persistent identifier (i.e. a DOI). A data paper offers several advantages: it provides greater documentation to an important dataset, it goes through the peer-review process, and furthermore, it enhances the re-use of the dataset. One important (Open Access) data journal is Scientific Data from Nature.
Making data interoperable
Here, the focus lies with normalized vocabulary and standards to enable data exchange and re-use between researchers, institutions, or countries.
Which metadata standards and metadata vocabularies/methodologies are you using?
If no standard is used – will you provide a mapping to discipline ontologies?
Making data re-useable
One of the advantages of sharing your research data is that you can build on the work of others and do not have to start from scratch. Here, the most important information is how and where to find re-usable data. Look in data portals like Re3data, fairsharing.org, DataCite, European Union Open Data Portal, or OpenAIRE to find discipline specific data repositories and datasets. If you need multi-discipline repositories, try searching in Zenodo, Dryad, Figshare, or in institutional repositories like ISTA Research Explorer. Keep in mind to check the re-use license and conditions of the repository before re-using data!
For your own data, think about licensing it to permit and clarify re-use terms. If sharing, describe data quality processes (i.e. repeated samples or measurements, peer review of data…). Furthermore, before you decide to share your data, consider copyrights, licenses and contracts, as well as intellectual property rights/patents. Do you need an embargo for your data, and why (e.g. journal requirement/funder requirement/…)? Keep in mind – especially in collaborative international teams – that national laws and funder requirements may differ!
Different data licenses you can choose:
- CC0 https://creativecommons.org/share-your-work/public-domain/cc0/
- ODC-By https://opendatacommons.org/licenses/by/1-0/
- PDDL https://opendatacommons.org/licenses/pddl/1-0/
- ODbL https://opendatacommons.org/licenses/odbl/
Data storage, security and preservation
Take your time to think through your IT environment: You will generate research data before and during your project, and you will have to take care of IT questions after finishing your project as well.
Discuss within the team who needs access to which data (access/permission level – role), define if you need external access from a collaboration partner and where to store shared data (for example, do not use commercial cloud storage partners but use our ISTA Cloud solution). Keep in mind that data security also includes thinking about destroying data after the end of the archival deadline and sensitive data after it is no longer needed. ISTA IT solutions created a Guideline for storing research data at ISTA (ISTA login is required to access).
Think about the file format (see also RDMP/Data documentation) and try to use open, standardized, and well-documented formats whenever possible. Prepare your research data and convert proprietary data into open file formats before final preservation. Remember to check on your data periodically, such as every second year, to see whether it is necessary to update file formats, hardware, or documentation. Try to cover these questions: Has the data become corrupt? Are the backups working correctly? Is a hardware update necessary? Can you still understand the documentation?
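The periodic "has the data become corrupt?" check above can be automated with checksums. A minimal sketch, assuming a directory of data files; the function names and manifest layout are illustrative, not an ISTA-provided tool:

```python
# Record SHA-256 checksums once (e.g. at deposit), then re-verify on
# each periodic review to detect silent corruption.
import hashlib
from pathlib import Path

def checksum(path):
    """SHA-256 digest of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(data_dir):
    """Map each file (relative path) under data_dir to its checksum."""
    return {str(p.relative_to(data_dir)): checksum(p)
            for p in sorted(data_dir.rglob("*")) if p.is_file()}

def verify(data_dir, manifest):
    """Return files whose current checksum no longer matches the manifest."""
    return [name for name, digest in manifest.items()
            if checksum(data_dir / name) != digest]
```

Storing the manifest alongside the data (like a readme file) keeps the integrity record and the data together through storage migrations.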
General Guidance and RDMP templates
RDMO, a tool to manage your RDMP (ISTA login required)
FWF RDMP template
EU Horizon 2020 RDMP template
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474795.48/warc/CC-MAIN-20240229071243-20240229101243-00758.warc.gz
|
CC-MAIN-2024-10
| 11,360
| 57
|
https://extpose.com/ext/173671
|
code
|
An extension that can send and receive pushes for the Pushover.net service
An unofficial Pushover extension to send and receive notifications from the Pushover.net service. You can send selected text, images, and current tab URLs to other devices using your own App Key and App Tokens. You can register your Chrome browser as a device, which allows you to send messages to it from other devices.
Version 0.11.0 - Updated branding due to copyright issues
Version 0.10.0 - Added option to exclude context menu
Version 0.9.0 - Sort apps in the left navigation - Allow HTML in messages
Version 0.8.0 - Bug fixes
Version 0.7.0 - Added SourceMaps for easier debugging
Version 0.6.0 - Fixed an issue where, if the Web Socket connection was denied by the network, it would not attempt to reconnect - Fixed slide issues when compiled for release
Version 0.5.0 - Updated library version
Version 0.4.0 - Fixed background sync issues
Version 0.3.0 - Updated the UI to split apps into tabs - Added the ability to send free-form text from the extension
Version 0.2.0 - Fixed an issue where clicking Refresh too soon after opening the dialog would remove old messages
Version 0.1.0 - Initial release
***The author of this plugin has no affiliation with Pushover, LLC.***
- (2020-08-13, v:0.9.0) Deng Yang: context menu
Can I disable the right click context menu, I want to keep it clean.
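For context on what the extension's App Key and App Tokens do: sending a message through the Pushover service itself is a single POST to its REST API. A hedged sketch (the token and user values are placeholders; only the documented token, user, message, and optional title fields are used):

```python
# Send a notification via Pushover's public message endpoint.
# APP_TOKEN / USER_KEY must come from your own Pushover account.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.pushover.net/1/messages.json"

def build_payload(token, user, message, title=""):
    """Assemble the form fields Pushover's message API expects."""
    payload = {"token": token, "user": user, "message": message}
    if title:
        payload["title"] = title
    return payload

def send(token, user, message):
    """POST the message; Pushover replies with a JSON status object."""
    data = urllib.parse.urlencode(build_payload(token, user, message)).encode()
    req = urllib.request.Request(API_URL, data=data)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Separating payload construction from the network call keeps the assembly logic testable without hitting the service.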
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069267.22/warc/CC-MAIN-20210412210312-20210413000312-00171.warc.gz
|
CC-MAIN-2021-17
| 1,355
| 4
|
https://www.phonesarena.org/how-to-add-a-toolbar-android-studio-tutorial/
|
code
|
In this video we will learn how to replace the default action bar with a toolbar, which is more customizable and more flexible. We will define it in a separate XML file so we can include it in other layouts. We will also change its theme so the text and menu icon are white instead of black.
Subscribe to my channel:
Want more Android tutorials? Check my playlist section:
Follow me on social media:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00713.warc.gz
|
CC-MAIN-2023-06
| 419
| 4
|
https://forestwonders.com/products/flukers-red-heat-bulb-incandescent-reptile-light
|
code
|
Flukers Red Heat Reptile Bulb lets you watch your pet during night hours, without disturbing its natural nocturnal behaviors. These long-life bulbs last up to 3,500 hours and emit ambient warmth to maintain a healthy, stimulating habitat for your reptile.
- Red light allows nighttime viewing
- Emits heat without disturbing nocturnal behaviors
- 3,500 hour life
Environmental heat is critical for providing reptiles with a healthy habitat. If a reptile is not provided with an appropriate environmental temperature range (ETR), it cannot regulate its core body temperature and may be more prone to chronic infections. Ideal ETR varies from species to species; consult your pet professional for lighting recommendations for your pet. Flukers Red Heat Incandescent Bulbs provide moderate heat combined with low light, perfect for observing nocturnal reptiles.
Directions: Gently fasten the Flukers incandescent bulb into a Fluker Clamp Lamp or Hood or any UL-approved incandescent fixture. Do not plug the light fixture into an electrical socket until the bulb is fastened firmly to the fixture. Place the light fixture outside the reptile's enclosure. NEVER place the light fixture inside the enclosure. Reptiles can develop life-threatening thermal burns from contact with an exposed light bulb.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816939.51/warc/CC-MAIN-20240415014252-20240415044252-00750.warc.gz
|
CC-MAIN-2024-18
| 1,291
| 6
|
https://answers.sap.com/questions/6049736/import-wizard-between-domains.html
|
code
|
I have a quick question:
We have a customer who wants to use one domain for testing their BO universes (they want us to build the reports, universes, etc. in our test environment at our location). But they of course want their production system to be located at their location, within their domain.
In this scenario we will have BO dev and test in one domain and production in another domain.
My question will be:
Is it possible to use BO import wizard to run imports from one domain to another?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00194.warc.gz
|
CC-MAIN-2023-14
| 499
| 5
|
https://www.scientific-computing.com/news/parallel-computing-centres-competence-open-university-cambridge
|
code
|
Parallel Computing Centres of Competence to open at University of Cambridge
Two Intel Xeon Phi Product Centres of Competence are to open at the University of Cambridge at the end of 2012, with one further centre expected to be announced at a later date. A Memorandum of Understanding (MOU) has been signed by Dell and Intel, confirming plans to offer hardware based upon the Intel Xeon multi-core processors and the Intel Xeon Phi coprocessors.
The two Intel Xeon Phi Product Centres of Competence will enable scientific researchers in EMEA to learn, optimise and test their code using Intel Xeon and future Intel MIC (many integrated core) products. The aim is to prepare the scientific research community for the launch of the first generation of the Intel Xeon Phi family of products so that the coprocessor can be used immediately as a production tool. This initiative is expected to last for at least two years to accommodate the needs of the scientific research community as it evolves its code for parallelisation and vectorisation.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510983.45/warc/CC-MAIN-20231002064957-20231002094957-00193.warc.gz
|
CC-MAIN-2023-40
| 1,039
| 3
|
https://owjwo.com/fash/lesportsac-x-mubert-chain-pattern-and-python-pattern-bags-and-pouches-flower-blooming-handbags/
|
code
|
Summary of LeSportsac x Mubert “chain pattern and python pattern” bags and pouches, “flower blooming” handbags:
- The second collaboration between LeSportsac and Mubert: Petite Hobo (H16 x W22 x D7cm), 15,400 yen. Following the first, the second collaboration bag between LeSportsac and Mubert is now available.
- This time, the focus is on “natural energy,” under the concept of “sense of wonder” – the emotion and mysterious feeling of touching nature.
- Michiko Nakayama, a designer at Mubert, put a print on the bags and pouches inspired by her mother’s chain bag, which she had longed for as a child.
- Available stores: LeSportsac shop limited stores, the LeSportsac official online store, and the Muvel official online store. * Please check the official website for LeSportsac shop limited stores.
- A logo and a message were drawn in bright orange and yellow on a vertical tote bag and a laptop case that can hold A4 size.
- While incorporating the theme of “memories of childhood,” which is also part of Mubert’s identity, she created a classic design.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710909.66/warc/CC-MAIN-20221202150823-20221202180823-00455.warc.gz
|
CC-MAIN-2022-49
| 1,079
| 7
|
https://docs.microsoft.com/en-us/archive/blogs/nickmac/service-pack-2-for-the-2007-microsoft-office-system-due-to-ship-april-28th
|
code
|
Service Pack 2 for the 2007 Microsoft Office System due to ship April 28th
Last October, we announced the upcoming release of the 2nd service pack for the 2007 Microsoft Office System and the 2007 Microsoft Office servers. Today, we’re happy to provide both a formal release date, and more details on what you should expect to see in SP2.
A fair amount has been said about SP2 already, but there is a lot more to share. We’ll cover the highlights here, but please check back on April 28th when all of our documentation will be published. It is important to remember that the information provided today is by no means a comprehensive list. We worked with the individual teams in Office to come up with a list of changes that they were most proud of and felt would be most beneficial to you, our valued customers.
In addition to the numerous product improvements introduced by SP2, you may also notice that our SP2 documentation has been overhauled. Gone are the days of the long-winded or too sparse knowledge base articles that do little to describe what’s included in the actual service pack or that include details that may not be what you are looking for. In their place are what we hope are more user-friendly and informative KB’s. The technical information still exists, but it has been pulled from the main KB articles and now will live on TechNet. And, back by popular demand, is the spreadsheet listing individual bugs that were fixed across all of our products.
The Service Pack team would like to express our sincere thanks to the many beta testers who took the time to download, install, test, and provide feedback to us. This was the largest beta we’ve done to date for an Office service pack with thousands of beta testers from over 60 countries. We know your time is extremely valuable, and we very much appreciate all you’ve done. Your efforts have helped to make this a great release!
Don’t forget to come back on April 28th. We’ll have a comprehensive list of everything we’ve released, where you can find it, and links to additional information. A brief note, some of the information posted earlier needed clarification. We have made slight modifications to the information below.
We’ll start with updates that pertain to multiple products, highlight fixes to the individual desktop applications, and then discuss fixes to the server products.
Changes that impact desktop applications
Service Pack 2 adds the ability to open, edit and save documents in version 1.1 of the OpenDocument Format for Word, Excel, and PowerPoint. These applications now let users save, open, and edit files as OpenDocument Text (*.odt), OpenDocument Spreadsheet (*.ods), and OpenDocument Presentations (*.odp).
The 2007 Microsoft Office Service Pack 2 is the first service pack to support uninstall of client updates through the Microsoft Service Pack Uninstall Tool for the 2007 Microsoft Office Suite as well as via Windows Installer command line. The Service Pack Uninstall Tool will be available as a separate download.
The Microsoft Save As PDF or XPS add-in has been built into Office applications in SP2. Users no longer have to download and install the add-in separately.
Performance has been improved when many graphic objects are present.
In many scenarios, expect increased print fidelity of graphical objects.
Improved interoperability using standard DrawingML markup to describe the visual properties of the SmartArt graphic.
The 2007 Office Suite SP2 has been tested and is supported for Internet Explorer 8. Windows Vista SP2, Windows Server 2008 SP2, Windows 7 and Windows Server R2 will all be supported upon their release.
The ability to export reports to Excel has been added.
Fixes for issues with the import data wizards, report printing and previewing, macros, Excel integration, and date filters.
Updates to Access Developer Extensions are now included in SP2.
The charting mechanism has improved robustness and targeted performance improvements.
A chart object model has been added to Word and PowerPoint.
Improved printing of graphical content, especially on PCL printers.
Improved form tools.
Synchronization reliability has been improved.
- Increased compatibility between InfoPath forms and other Microsoft products, such as Groove and Outlook.
- SharePoint synchronization has been improved which helps reduce the load on SharePoint servers and produce fewer errors.
- Performance in startup, shutdown, view rendering, and folder switch has been improved.
- Calendar updates, search, and RSS are more reliable.
- The object model has been improved.
Resaving of files is faster. Several printer-specific problems have been fixed.
The Microsoft Office Excel Chart Object Model has been more fully integrated.
The scheduling engine, Active Cache, and Gantt charts all have improvements.
There is additional reliability with earlier versions of the .mpp format.
- Fixes have been made in the following areas: print preview, compatibility with Internet Explorer 8, e-mail on Windows Vista, and saving to the Content library.
- Improved compatibility with other Microsoft products in several key scenarios, such as inserting Visio drawings as linked objects in PowerPoint or Word, exporting reports to Excel, and saving drawings as Web pages for browsing in Internet Explorer 8.
Fidelity of PDF and XPS output has been enhanced compared to the output created through the use of the download.
Better integration of the Microsoft Office Excel Chart Object Model.
Changes that impact the server products
Windows SharePoint Services 3.0 SP2 and Microsoft Office SharePoint Server SP2 include fixes and enhancements designed to improve performance, availability, and stability in your server farms. SP2 provides the groundwork for future major releases of SharePoint Products and Technologies.
An STSADM command line that scans your server farm to establish whether it is ready for upgrade to the next version of SharePoint and provides feedback and best practice recommendations on your current environment.
SP2 offers support for a broader range of Web browsers.
Substantial improvements to Forms-based authentication.
Windows Server 2008 SP2 and Windows Server 2008 R2 will be supported upon their release.
Enterprise Content Management (ECM)
The performance and stability of content deployment and variations feature has been improved.
A new tool has been added to the STSADM command-line utility that enables a SharePoint administrator to scan sites that use the variations feature for errors.
SP2 makes it easier to configure Excel Web Access Web Parts on new sites.
Several rendering, calculation, and security issues have been resolved.
Some display issues have been addressed.
Improved compatibility with Mozilla Firefox browsers.
Improved synchronization reliability.
Groove Server 2007 Manager will install and run with SQL 2008.
Groove’s LDAP connectivity and auto-activation functionality have been improved.
Error reporting in the Groove Relay Server has improved significantly.
Groove Relay Server has improved robustness.
Memory requirements and the page load times for large browser-rendered forms have been reduced.
Browser rendering of various controls, such as the 'cannot be blank' asterisk and the rich text field has been improved.
Better memory management in the queue service.
Performance of certain database table indexes is improved.
Resource plans, build team, cost resources, and the server scheduling engine have improved.
Improvements to the reliability and stability of very large corpus crawls.
Backup-restore has been improved.
A new command has been introduced to the stsadm.exe tool that lets a SharePoint administrator tune the query processor multiplier parameter.
Improved accuracy in searches involving numbers.
https://help.octopus.com/t/xml-transform-error-on-failed-xdt-locator/22768
I get different results for the same transform in Octopus than outside of Octopus.
For example, if I use this website to test a transform (https://webconfigtransformationtester.apphb.com/) I can successfully transform XML using an xdt:Locator that ‘fails’ to find a match.
In my xml for example I may or may not have a particular element…but if I do, I want to SetAttributes on it.
When I put that same successful transform into Octopus the deployment fails, with an error like this:
File [deleted for brevity]xxx.Deploy.xml, line 4, position 6:
April 18th 2019 16:26:34
No element in the source document matches ‘/_defaultNamespace:xxx/_defaultNamespace:yyy/_defaultNamespace:zzz[@ref=‘AddressSearchService’ and @implementationName=‘nnn’]’
Why does Octopus raise an error when a Locator finds no match, and is there a way to achieve this in Octopus?
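For reference, a transform of the shape described (element and attribute names are placeholders mirroring the redacted error message above) looks like this; outside Octopus, the XDT engine treats a Locator that matches nothing as a warning, while Octopus surfaces transform warnings as errors by default:

```xml
<!-- Hypothetical xxx.Deploy.xml; names are placeholders, not a real config -->
<xxx xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <yyy>
    <zzz ref="AddressSearchService" implementationName="nnn"
         someAttribute="newValue"
         xdt:Transform="SetAttributes(someAttribute)"
         xdt:Locator="Match(ref,implementationName)" />
  </yyy>
</xxx>
```

One commonly cited workaround is setting the project variable `Octopus.Action.Package.TreatConfigTransformationWarningsAsErrors` to `False`, so a failed Locator match is logged as a warning instead of failing the deployment; verify the variable name against the current Octopus documentation before relying on it.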
https://pacificgraphicdesign.wordpress.com/courses/graphic-design-ii-2/arts-77-assignments/cards-new-suits/
♠ ♥ ♣ ♦
Additional Resource Links:
- Grouping and Hierarchy
- Symbols and Meaning
- Symbols Impact on GD
- Symbols History
- Ancient Symbols
- 52 Types
- Grouping and Hierarchy Checklist
- Double-Sided Printing
- printing cards
Hierarchy and Priority Grouping are the main design focus of this assignment. Above all else, make the hierarchy clear and ordered through your visual system.
The primary focus of this assignment is your ability to create implied priority (hierarchy) and visual grouping of similar objects. Your assignment is not to create a game, but instead a deck of cards where multiple games might be played. Therefore, certain cards should appear to be worth more than others regardless of the game. This relative importance should be evident by the “look” of the cards themselves, it should not require an explanation nor an understanding of the theme to make this relationship understood. Reinforce any form of hierarchy by indicating that order through at least three ways. For example, if you use relative size as one way to indicate hierarchy, you might also reinforce this by also applying value (lightness to darkness) and amount of detail across the group as two more ways.
Use this as a checklist to determine if you are meeting the Grouping and Hierarchy requirements with your design. You must turn this grouping and hierarchy checklist in with your final design.
Thematic unity is a secondary focus of this assignment. Base your set of cards around a theme which can be extended through various levels of implied importance and similarity grouping. Look for similarities as well as differences among characters/roles, objects or events to draw inspiration from. Are there any “good guys” and “bad guys” that come from your theme? If so, this would be an example of how to begin grouping. (don’t worry if there are no good or bad guys in your theme, simply look for other ways to group). Your theme is only to provide you with inspiration and be a source of imagery. This theme may or may not have a strong sense of hierarchy already built in to it. If there is, you can use this to your advantage. It should not be necessary for someone else to know your theme to be able to distinguish the grouping and hierarchy, however. Instead, grouping and hierarchy must be indicated visually, by how you render the images and symbols. What those images and symbols are come from your theme, and it is natural that you would utilize the hierarchy and grouping that is already built into that theme. The difference here is that this thematic grouping and hierarchy is secondary to what you do visually.
- design a deck of playing cards based upon either one of the following criteria:
- an existing set of cards (ex.—tarot cards)
- a story or myth (ex.—Aesop’s Fables, nursery rhymes, movie or TV series, etc.)
- Research: color theory; gestalt theory of grouping; symbols and icons; dingbats, historical typographic symbols, and punctuation marks; various cultural imagery and symbolic communication
- Use symbolic color
- Use figure/ground relationships
- Your solution must visually communicate the concepts of: hierarchy/priority order, order/grouping, and symbolism
- The total number of cards is up to you
- Size and shape of the cards are up to you.
- Remember you are not creating a game, but instead, a system with which any number of games could be played.
- Design the back of the cards utilizing non-critical registration.
- Create a custom sized “box” or container for your cards
- Read thoroughly all information found on the Resource Links provided to you at the top of this assignment page.
- Pick a theme on which to base the look of your cards
- “Chart” the characters in your theme. This is to visually organize and determine the number of suits, the number of cards in each, which characters belong together (suit/group), and which characters are more or less important/powerful (hierarchy).
- List at least three methods you plan to use to visually indicate grouping (Color, symbol, amount of detail, posture, shape, pattern, etc.) Another person must be able to discern these different groups even if they do not know your theme.
- List at least three methods which you plan to use to visually indicate hierarchy (size, position, posture, symbols, number of symbols, value, border treatment, style etc.) Another person must be able to discern these levels of hierarchy even if they do not know your theme.
- Remember there are three main areas that you can use to determine or create visual hierarchy:
- use of symbols (universal meaning, not necessarily thematic)
- choice of object (usually thematic)
- how you draw the object
- Test your plan for indicating visual grouping and hierarchy by creating a top and bottom card for each suit.
- Make prototypes for all cards
- Use the following references to guide your process.
- Design the back of the cards utilizing non-critical registration. This generally means that the design on the backs should not come close to the edges of the cards unless it is a repeating pattern of smaller shapes. If it is a design made up of a larger central image, it must be scaled down in size so that there is at least 5/8″ from the edge on all sides.
- Create a custom sized “box” or container for your cards. This box must be made and sized according to the size and number of your deck of cards. You can determine the thickness that the box should be by stacking up the same number of pieces of paper as the number of cards in your deck. Measure the height of this stack of paper and that should be the thickness of your box. You must use the same paper as that which your cards will be printed on and you must be careful/precise in your measurements. (to the 32nd of an inch at least).
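The box-sizing arithmetic above can be sketched as follows (the per-sheet thickness is a placeholder assumption; as the assignment instructs, measure your actual card stock and work to at least 1/32″):

```python
import math

def box_thickness_inches(card_count, sheet_thickness=0.0035):
    """Approximate box depth: one sheet of card stock per card.
    0.0035" per sheet is a placeholder value, not a measurement."""
    return card_count * sheet_thickness

def round_up_to_32nd(inches):
    """Round up to the nearest 1/32", matching the assignment's precision."""
    return math.ceil(inches * 32) / 32

# A 52-card deck on this hypothetical stock is ~0.182" tall,
# which rounds up to 6/32" (0.1875") of box depth.
depth = box_thickness_inches(52)
print(round_up_to_32nd(depth))
```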
Go to these links for step-by-step instructions for printing
Refer to this layout as an example of ganging together multiple cards for printing. (In this example, all the cards are the same design, it does not show any grouping or hierarchy. it only shows how multiple cards should be arranged with bleed and crop marks for printing purposes).
- First, create an artboard the size of the paper you will be printing to.
- Show Rulers from View in the main menu and pull out a vertical and horizontal guides from them. Using the rulers to locate them, align these guides exactly on the middle of your page, horizontally and vertically. Zoom in as far as you can to be precise in this step. Lock Guides from the View menu when finished.
- In Illustrator, create crop marks for individual cards by first selecting the main rectangle and then under Effect, select Crop Marks.
- Then create the bleed area for the card and group the bleed and card (including crops) together.
- Do the same steps for all cards, space them equally from each other (use the align command to do this) and group all of them together.
- Be sure to exactly center the total group of cards on the paper to allow registration (alignment) of the design on the back side (be sure to exactly center the backside design as well). (Use the guides you created earlier to do this.)
- After ganging together the front sides,
- flatten layers
- confirm your document is in RGB if printing to an Epson inkjet printer
- save the file and call it “Cards Front Print”. Then, immediately do a second “save as” command and call that one “Cards Back Print”. This will keep your guides in exactly the same location.
- Place your backside design in a new layer exactly on top of the card fronts. When the backs are in position you can delete the front side design layer (including the front designs) from this file. This will leave the back design perfectly in position.
- Cutting with crop marks: do not use the paper cutter
- Cut only from crop mark to crop mark, NOT all the way across the sheet of paper, or you will lose the crop marks for the other cuts.
Some Previous Student Examples:
http://www.howdesign.com/how-design-blog/david-sherwin-brainstorming-video/
Design book author David Sherwin talks about brainstorming and his book Creative Workshop in this video clip from Design TV. You can read Sherwin’s design business posts over on Imprint, and hear him speak in person at the HOW Interactive Design Conferences this fall in Washington, DC, and San Francisco. Check out the full version of Brainstorming 101 on Design TV.
Think of it like Netflix for designers: Design TV brings you new videos and webcasts every week with real advice from design experts. Design TV’s library of webcasts, design tutorials and workshops are all available on-demand—when you need them most. You can subscribe to watch all our videos for a one-, six- or 12-month period.
https://www.zdnet.com/article/will-touch-drive-microsoft-surface-sales-or-will-surface-drive-touch/
The demo applications Microsoft has shown so far for its Surface touch-tabletop system -- for ordering drinks, sharing photos by dragging them, and finger painting -- have left me cold. At CES, however, Microsoft showed some different Surface application prototypes that seem somewhat more compelling.
During his CES keynote, Microsoft Chairman Bill Gates showed off a snowboard-customization demo that indicates the kinds of interactive retail applications which might shine on Surface multi-touch systems. Microsoft issued some talking points about the demo, claiming it "provides a clear solution to common consumer pain points," including (according to Microsoft):
"The (Surface snowboard) application showcases the four key attributes of a surface computer including; multi-touch, object recognition, direct interaction and multi-user," said an e-mail message sent to me by the Surface team.
For those who listened closely to Gates' Sunday night keynote, there was a hint that gaming and office-productivity applications are in the pipeline for a Surface "desk," "meeting room table" or other kinds of future Surface systems, as well. From the transcript of Gates' remarks:
"Your desk, we won't just have the computer on the desk, but in the desk, so a meeting room table as you're collaborating, and the living room if you want to bring it up and play games with something like a Surface, or organize your photos. It will just be there, and easy to manipulate, easy to change and have multiple people connect up."
Gates and others at Microsoft are still betting big on natural user interfaces -- touch, speech, gestures -- as being the keys to the input kingdom. Supposedly, these input modes were going to take off during the "first digital decade." But Tablet PCs didn't take hold at anywhere near the rates he predicted.
While touch and speech will no doubt take off on cell phones and on-board auto systems, I admit I'm still a doubter about how quickly or well they'll be adopted by PC users. Call me a Luddite, but if the Surface had a keyboard, I'd definitely prefer it over touch or speech.
What's your take on the Surface? Will touch technology drive the Surface? Or will Surface finally get more Microsoft users to make use of non-keyboard-based input technologies?
https://jaljeev.com/species/labidochromis-sp-hongi/
Red Top Hongi, also known as Kimpuma or Hongi Cichlid, is a freshwater fish with the scientific name Labidochromis sp. “Hongi”.
This fish is popular in the aquarium world.
Red Top Hongi Interesting Facts
- Red Top Hongi, or Hongi Cichlid, is an East African cichlid popular in aquariums.
- Males grow up to 5.1 inches (13 centimeters), while females reach around 3.5 inches (8.9 centimeters).
- Scientifically known as Labidochromis sp. “Hongi”.
Red Top Hongi Habitat
Red Top Hongi is a type of cichlid fish from East Africa.
Red Top Hongi Physical Characteristics
Size: 5.1 inches (13 centimeters)
Male Red Top Hongi grows up to 5.1 inches (13 centimeters) long, while females usually reach around 3.5 inches (8.9 centimeters).
https://www.fr.freelancer.com/projects/excel/data-analytics-19105231/
We are looking for a group or team of individuals to design, develop, and deploy scientific or quantitative algorithms to perform predictive analysis for enhancing farming. We have an enormous amount of unstructured data that will need to be placed into a database or data lake. Most of the data goes back seven years. Data will need to be extracted and then placed into a repository. The needed design will employ "levers" to allow us the ability to lessen one ingredient or environmental condition to discern how it would have or could have positively or negatively impacted production.
37 freelancers are bidding an average of $21/hour for this job
My preferred method of freelancing is an interactive approach to project solving. I have an MSEE specializing in Digital Signal/Image/RF Processing. I do my work in MATLAB (expert).
http://stackoverflow.com/questions/11972644/how-to-install-autoconf-in-ubuntu-11-04/11972688
When I'm trying to install autoconf in Ubuntu 11.04 by the following command
sudo apt-get install autoconf
This error appears:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package autoconf is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'autoconf' has no installation candidate
How can I fix this error and install the package?
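A typical fix (not from the original question; commands assume standard Ubuntu tooling) is to refresh the package index first and, since Ubuntu 11.04 is past end-of-life, repoint the sources at the old-releases archive if the refresh itself fails:

```shell
# 'no installation candidate' usually means a stale package index
# or a repository that is no longer reachable.
sudo apt-get update
sudo apt-get install autoconf

# Ubuntu 11.04 (Natty) packages moved to old-releases.ubuntu.com after
# end-of-life; repoint sources.list if the update step itself fails.
sudo sed -i 's/\(archive\|security\).ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
sudo apt-get update && sudo apt-get install autoconf
```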
https://issues.apache.org/jira/browse/YARN-4617
In discussion with leftnoteasy, it was pointed out in the JIRA comments that LeafQueue#pendingOrderingPolicy should NOT be assumed to be the same as the active applications' ordering policy. This causes an issue when using the fair ordering policy.
Expectations of this JIRA should include
- Create FifoOrderingPolicyForPendingApps which extends FifoOrderingPolicy.
- The comparator of the new ordering policy should use RecoveryComparator, PriorityComparator, and FifoComparator, in that order.
- Clean up LeafQueue#pendingOPForRecoveredApps, which is no longer required once the new fixed ordering policy is created for pending applications.
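The required comparator chaining can be sketched outside Java (a hypothetical Python model, not the actual YARN classes): recovery status is compared first, then priority, then FIFO submission order.

```python
from functools import cmp_to_key

def compare_apps(a, b):
    """Sketch of FifoOrderingPolicyForPendingApps comparator order."""
    # 1. RecoveryComparator: recovering apps sort first.
    if a["recovering"] != b["recovering"]:
        return -1 if a["recovering"] else 1
    # 2. PriorityComparator: higher priority first.
    if a["priority"] != b["priority"]:
        return b["priority"] - a["priority"]
    # 3. FifoComparator: earlier submission first.
    return a["submit_time"] - b["submit_time"]

apps = [
    {"id": "app2", "recovering": False, "priority": 5, "submit_time": 2},
    {"id": "app3", "recovering": True,  "priority": 1, "submit_time": 3},
    {"id": "app1", "recovering": False, "priority": 5, "submit_time": 1},
]
ordered = sorted(apps, key=cmp_to_key(compare_apps))
print([a["id"] for a in ordered])  # -> ['app3', 'app1', 'app2']
```

Even though app3 has the lowest priority, the recovery comparator runs first, so it sorts ahead of the two non-recovering apps, which then fall back to FIFO order.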
https://nvda-addons.groups.io/g/nvda-addons/message/8358?p=,,,20,0,0,0::Created,,posterid%3A1763391,20,2,180,31996230
How to do it--yes, I agree that anyone able to program anything should be able to figure this out, or look it up. But the knowledge that it must be done--not so much.
Currently, there is nothing at all that I can find in NVDA documentation that specifies that UTF-8 is the required encoding for files.
That said, because NVDA is a multi-lingual piece of software, it does stand to reason that UTF-8 would almost have to be the encoding used.
To understand this subject better, the following might be of use: https://hackernoon.com/encoding-the-python-source-code-file-445722836813?gi=54eff73cf3c0
On Tue, 11 Jun 2019, DaVid wrote:
I never tried AkelPad. Is autocompletion feature accessible on this editor?
Mmm, we don't need to understand encodings to know how to save a
file in UTF-8...
I mean that a user who learns to develop Python and add-ons has the
research skills to discover simple things like choosing the encoding in
the save file dialog of Notepad. I myself learned programming before
understanding encodings; I saved my files in UTF-8 because that was the
recommendation in the tutorials that I read. So the recommendation
about Unicode should be there.
2019-06-11 9:23 GMT-06:00, Brian's Mail list account via Groups.Io
What do you think of akelpad?
Sent via blueyonder.
Please address personal E-mail to:-
firstname.lastname@example.org, putting 'Brian Gaff'
in the display name field.
Newsgroup monitored: alt.comp.blind-users
----- Original Message -----
From: "DaVid" <email@example.com>
Sent: Tuesday, June 11, 2019 1:07 PM
Subject: Re: [nvda-addons] NVDA Developer Guide: Question about how to make
addons using standard NotePad
If you don't use special characters in your code, you don't need to
save in UTF-8. The default ANSI encoding in Notepad should work.
But if a user can write an add-on for NVDA, sure s/he is an advanced
user to know how to change coding in notepad.
Notepad is very tedious for writing Python because it doesn't apply the
last indent. Also it doesn't have an auto-completion feature. Programming
in Notepad... I don't even wish it on my worst enemy. hehe.
Use the Notepad++ portable version. And don't forget to install the
Notepad++ add-on to get accessibility for the auto-completion feature.
2019-06-11 5:10 GMT-06:00, Rui Fontes <firstname.lastname@example.org>:
It should be saved as UTF-8.
Às 20:41 de 09/06/2019, Daniel Gartmann escreveu:
The last time I played with writing .py files in Notepad, I had to change
the encoding in the Save dialog. It was not just using the standard save
command. Right now, I don’t recall how the file should be encoded in order
to work properly in NVDA.
Den 9. jun. 2019 kl. 21.14 skrev Luke Davis <email@example.com>:
There is nothing really special about a .py file; it is just a text file
with a different extension (not .txt).
The nice thing about Notepad++, is that it has auto-indentation, and
re-opens last open files, and so on. But other than that, it is editing
in text just like Notepad.
It might also be easier to change character encodings in Notepad++, but
haven't really explored that.
So, all of that is to say, there really is no special procedure.
You can open a Notepad session, write your code, and save it with a .py
extension. You will have to do your own indenting, but if you're used to
writing Python that should be no problem.
Any text editor which does not wrap lines should be fine for this.
On Sun, 9 Jun 2019, Daniel Gartmann wrote:
I tried to find information about how to make an add-on using the
built-in Notepad application instead of having to install NotePad++.
The use case is as follows:
You go to another person’s computer to make NVDA behave better for a
particular user in a specific situation.
Other screenreaders have built-in script editors e.g the JAWS Script
Editor. But when using NVDA, we are told to use NotePad++.
It is, however, not possible to install NotePad++ if, for instance, you
are in a corporate environment or some other restricted setting.
So. What is the procedure to make a .py file in Notepad and save it in
the correct format?
Could it be included in the NVDA Developer guide?
Just a suggestion so that NVDA’s customizations can be made easier in
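The thread's conclusion — save add-on source as UTF-8 — can be demonstrated with Python itself, so no particular editor is required (file and variable names here are examples, not NVDA conventions):

```python
import os
import tempfile

# Write an add-on source file with an explicit UTF-8 encoding so that
# non-ASCII text (translations, symbols) survives intact.
source = 'MESSAGE = "caf\u00e9"  # non-ASCII text needs UTF-8\n'

path = os.path.join(tempfile.mkdtemp(), "myAddon.py")
with open(path, "w", encoding="utf-8") as f:
    f.write(source)

# Reading back with the same encoding round-trips the text exactly.
with open(path, encoding="utf-8") as f:
    assert f.read() == source
```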
http://3agn.com/watch.php?vid=80f7d5902
Relaxing Piano Music: Study Music, Writing, Relaxing - #6 (in D Minor)
Welcome to the newest episode of my Relaxing Piano Music Series!
Leave a like if you enjoyed this video, and subscribe if you want to see more- there's always more coming soon.
Follow me on Instagram:
Media Account (For posts about music, film, etc.):
https://community.ipfire.org/t/need-help-with-setup-of-ipfire/649
I’m trying to setup ipfire on a Raspberry PI 3 B+. Install went fine.
However, when the red0 DHCP client gets an IP address from my ISP, the ISP IP is assigned successfully to red0; then, as the boot continues, NTP fails since there is apparently no internet connection. I cannot ping anything successfully from the command prompt. Any ideas as to what is going on?
It seems the solid amber light going to the modem means a 1gbps connection, which makes sense. Apparently I don’t need a green light for the modem, just amber. I searched the raspberry pi forums to find this info.
No. Now I am not able to ping anything like google.com from the cmd prompt. I have no internet connection it seems, but i have an IP address from my ISP. I called my ISP and they said they don’t need to add the mac address of my firewall or anything to their systems. Nothing to do on their end.
I was also wanting to know how best to setup the IP addresses on ipfire setup. My google wifi router is 192.168.86.1. What’s the best way to setup the GREEN0 interface with IPs and DNS? Sorry, i’m a noob when it comes to this stuff.
red network is setup using dhcp. I did put in 184.108.40.206 and 220.127.116.11 for the dns, which is allowed I think if you want to specify your own dns service. I’ve tried it blank, and by putting in the DNS above.
I’m going to try to ping google’s IP directly and see if that works.
https://www.mail-archive.com/dev@ofbiz.apache.org/msg102198.html
Le 14/04/2018 à 08:35, Vaibhav Jain a écrit :
I agree with Rishi. We should start another mail thread to discuss the race condition.
IMO, At the time of reservation, we should check for ATP instead of QOH(As
Sr. Enterprise Software Engineer
m: 782-834-1900 e: vaibhav.j...@hotwaxsystems.com
On Thu, Apr 12, 2018 at 4:31 PM, Suraj Khurana <
Thanks everyone for your input.
Here <https://issues.apache.org/jira/browse/OFBIZ-10337> is the ticket
created for the same.
Thanks and Regards,
*Suraj Khurana* | Omni-channel OMS Technical Expert
HotWax Commerce <http://www.hotwax.co/> by HotWax Systems
Plot no. 80, Scheme no. 78, Vijay Nagar, Indore, M.P. India 452010
Cell phone: +91 96697-50002
On Wed, Apr 11, 2018 at 12:56 PM, Rishi Solanki <rishisolan...@gmail.com>
Thanks Swapnil for adding the use case.
After this it looks like this is the kind of scenario where we couldn't lean
on the ATP, which should be discussed and addressed. But now I'm sure that
what Suraj suggested makes sense and we can go with the improvement Suraj
proposed.
In isolation we can discuss and try to address the race condition issue if we
follow these steps:
- Add a script to replicate the issue multiple times.
- Discuss and finalize the fix.
- Provide fix.
I would like to help in the race condition issue Swapnil shared.
+1 for Suraj to move ahead for the improvement.
Sr Manager, Enterprise Software Development
HotWax Systems Pvt. Ltd.
On Wed, Apr 11, 2018 at 11:08 AM, Swapnil Shah <
There are certain business cases around order promising where we found
systemic ATP hasn't proved that much reliable. Especially when its
decision to not accept or promise more orders than allocated units of
For example, during heavy load(ordering) there could be instances when
higher number of open orders/carts are competing for same systemic ATP
any given point of time. In such scenarios due to any reason if rate of
performing systemic reservations lags behind the rate of ordering than
systemic ATP would also keep lagging behind the actual allocation being
with respect to QOH. Thus system would always keep on accepting orders
promising them unless systemic ATP goes down to zero (but in reality
Is already exhausted way before than systemic ATP went to zero). It
the problem of "Over Promising" and eventually higher than acceptable
of backorders to honor for business. In the hindsight it looks like
could be one of the reason why the additional check on QOH was in place
I am not sure if it’s the best way, but one of the possible alternative
tried to handle such cases was by grounding the order creation logic
on the fact whether there is positive "Available to Order (ATO)" at the
of order submission or adding items to cart rather than ATP. At high
ATO for any given SKU could be determined on run time as follows:
ATO = QOH + Incoming Shipments(Scheduled Receipts) - (Total unshipped
on Open Orders & Carts)
I hope such cases could help in providing more holistic view while
leveraging or relying upon the reservation logic.
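The ATO formula quoted above can be sketched as a small helper (hypothetical code, not OFBiz's actual services):

```python
def available_to_order(qoh, incoming_shipments, unshipped_committed):
    """Available to Order (ATO) as described in the thread:
    ATO = QOH + incoming shipments (scheduled receipts)
              - total unshipped quantity on open orders and carts."""
    return qoh + incoming_shipments - unshipped_committed

# With 100 on hand, 50 inbound, and 120 already promised,
# only 30 units remain promisable at order-submission time.
print(available_to_order(100, 50, 120))  # -> 30
```

Grounding order acceptance on a quantity like this, rather than on a systemic ATP that lags behind the reservation rate, is exactly the "over promising" safeguard the message describes.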
From: Jacopo Cappellato <jacopo.cappell...@hotwaxsystems.com>
Sent: Tuesday, April 10, 2018 1:47 PM
Subject: Re: Check for only QOH while doing reservations
after reviewing that old commit I am inclined to think that the change
are suggesting makes sense.
Before that old commit all the inventory items (regardless of their
qty) were selected and there was logic to iterate thru the result set
exclude the ones with the wrong type and reserve only the ones with
With that commit the type constraint was added to the query and also an
additional constraint on QOH (rather than ATP): maybe at that time there was
code requiring it, or maybe it was done that way to be extra careful.
I think we can now proceed as you suggest, but before we do we should review
the code that calls the following services:
and make sure that the change will not impact them negatively.
On Mon, Apr 9, 2018 at 3:27 PM, Suraj Khurana <
I looked around and found some relevant commit.
IMO, it has been mistakenly committed, as the commit log also doesn't mention
any functional change in the commit.
Here is the link for reference.
Thanks and Regards,
*Suraj Khurana* | Omni-channel OMS Technical Expert HotWax Commerce
by HotWax Systems Plot no. 80, Scheme no. 78, Vijay Nagar, Indore,
M.P. India 452010
On Sat, Apr 7, 2018 at 3:24 AM, Scott Gray
I haven't reviewed the code in question so I don't have any comment
stage. But one thing I want to point out is that OFBiz has many
years of history available in commit logs, jira and mailing lists.
a simple task to look back over that history and determine why a
certain thing was done a certain way.
As part of proposing a change to existing functionality it is
extremely useful to anyone who might review the proposal to have
some of that
provided with the proposal.
In this case it could be a simple matter of a past mistake going unnoticed
until now, or it could be that using QOH was found to be beneficial
for some reason that isn't immediately obvious. But without first checking,
we can't ever be sure of the answer.
On Fri, 6 Apr 2018, 18:25 Suraj Khurana,
While checking around code around inventory reservations, I was
to see that *reserveProductInventory *service only checks for QOH
greater than one apart from that when
called, it checks for ATP confirming system to behave as
Everything works fine but this is redundant code and we can have
the ATP check at the top level so the reservations logic works faster. Is
there any other specific case I am missing, or can we improve this flow and
check at the *reserveProductInventory* service as well?
We can check QOH to be on the safer side, but ideally a system will have
lesser ATP than QOH, and logically we should only check for ATP.
Thanks and Regards,
*Suraj Khurana* | Omni-channel OMS Technical Expert HotWax
Commerce by HotWax Systems Plot no. 80, Scheme no. 78, Vijay
Nagar, Indore, M.P. India 452010
https://www.youtube.com/channel/UCESKqzJfi79jfDHVlmBXfTw?sub_confirmation=1
As we hit 10,000 subscribers, all I can think about is how lucky I've been to meet so many different kinds of people and to hear all your music. Some of you have stuck by me through hard times, and some I've been blessed to know for only a short time, but I am so grateful for every single subscriber and everything you all have brought to the table. Thank you all for being here, and for introducing me to so many different styles of music, and for your friendships!!! God willing, we will have many more years to come.
So I just want to say that I am very grateful to belong to such an awesome family, and I love you all.
▶︎ This channel is not monetized, all the money made from these videos goes directly to the copyright owners. If you would like to help support this channel, please consider making a donation on my Patreon page https://www.patreon.com/Sup... Thank you!!
▶︎ If you believe that I have infringed on your rights to any of the music on this channel, please contact me at email@example.com and I'll take it down immediately.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573908.70/warc/CC-MAIN-20190920071824-20190920093824-00344.warc.gz
|
CC-MAIN-2019-39
| 1,051
| 4
|
http://retroride.blogspot.com/2007/07/var-mmftype-run-var-mmfborder-1px-solid.html
|
code
|
This is meant to show a map of the Retro Ride route, generated by Map My Run, a cunning web based device that seems to piggyback on Google Earth.
Unfortunately the embed code generated by their site does not work, but regardless, it's a very cool thing for mapping out routes.
In the absence of a functional embed code, here's the link.
Use the Map Settings to the left to turn off the annoying Distance Markers.
'Display Elevation' produces a course profile, while 'Map Type' allows you to switch from a street map to various Googloid satellite views.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866984.71/warc/CC-MAIN-20180624160817-20180624180817-00325.warc.gz
|
CC-MAIN-2018-26
| 553
| 5
|
http://saintpaulsucc-mech.org/worship-music/
|
code
|
(May 27th - September 2nd)
Services will be held at Peace Church in Camp Hill June 3rd 9:15 am.
An Open and Affirming
Audio, Video, and More
Please listen to our St Paul choir combined with the Messiah College choir augmented by the brass section of Messiah College music program.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794866201.72/warc/CC-MAIN-20180524092814-20180524112814-00490.warc.gz
|
CC-MAIN-2018-22
| 280
| 5
|
https://www.openwall.com/lists/oss-security/2022/09/21/4
|
code
|
Date: Wed, 21 Sep 2022 08:41:16 -0400
From: Demi Marie Obenour <demi@...isiblethingslab.com>
To: oss-security@...ts.openwall.com
Subject: Re: big ints in python: CVE-2020-10735

On Wed, Sep 21, 2022 at 09:17:21AM +0300, Georgi Guninski wrote:
> There was recent discussion of big ints in python and libgmp.
>
> https://docs.python.org/3.10/whatsnew/changelog.html#security
>
> ===
> gh-95778: Converting between int and str in bases other than 2
> (binary), 4, 8 (octal), 16 (hexadecimal), or 32 such as base 10
> (decimal) now raises a ValueError if the number of digits in string
> form is above a limit to avoid potential denial of service attacks due
> to the algorithmic complexity. This is a mitigation for CVE-2020-10735
> ====
>
> https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10735
> ===
> In algorithms with quadratic time complexity using non-binary bases ...
> The highest threat from this vulnerability is to system availability.
> ===
>
> AFAICT the quadratic complexity is quadratic in the size of the int,
> that is its logarithm.

This is correct, and IMO it is just a bug in Python. Python should either provide better algorithms itself, or use an external library that does so. Using GMP would be a good choice where available, but would require using GMP’s non-allocating functions, as the allocating ones abort in out-of-memory situations.

--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

Download attachment "signature.asc" of type "application/pgp-signature" (834 bytes)
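The mitigation quoted above can be exercised directly: interpreters carrying the gh-95778 fix (Python 3.11+, or patched 3.10/3.9 releases) expose `sys.set_int_max_str_digits` to adjust the limit. A minimal sketch, guarded so it also runs on older interpreters; the limit value 5000 is arbitrary:

```python
import sys

def str_to_int_or_none(s: str):
    """Convert a decimal string to int; return None if the digit limit trips."""
    try:
        return int(s)
    except ValueError:
        return None

# These hooks exist only on interpreters carrying the CVE-2020-10735 fix.
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(5000)                   # tighten from the 4300 default
    assert str_to_int_or_none("9" * 4000) is not None  # under the limit: converts
    assert str_to_int_or_none("9" * 6000) is None      # over the limit: ValueError

# Power-of-two bases (2, 4, 8, 16, 32) are exempt: they convert in linear time.
assert int("f" * 6000, 16) > 0
```

The exemption for binary-power bases matches the changelog wording: only the quadratic-time conversions are limited.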
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00047.warc.gz
|
CC-MAIN-2023-06
| 1,814
| 4
|
https://www.code4tomorrow.org/courses/python/beginner/ch.-6-lists/6.3-list-slicing
|
code
|
Similar to list indexing, you can use list slicing to get portions of a list.
The general syntax is:
list_name[start:stop:step] # step is optional
my_list = [1, 2, "oh no", 4, 5.62]
print(my_list[1:3]) # prints [2, 'oh no']
View code on GitHub.
Some tips for using this:
- Step is how many indices you want to skip each time
- The stop index is exclusive, so the element at the stop index is not included in the resulting list
- A step of -1 means you're going in reverse order
list_name[:] gives you a full copy of the list
list_name[:stop] gives you everything up until the stop index (exclusive)
list_name[start:] gives you everything from the start index (inclusive) until the end of the list
You can also slice a list using negative indexes:
my_list = [1, 2, "oh no", 4, 5.62]
print(my_list[-1:-3:-1]) # prints [5.62, 4]
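The tips above can be checked with a few self-contained examples:

```python
nums = [0, 1, 2, 3, 4, 5]

assert nums[::2] == [0, 2, 4]               # step 2: every second element
assert nums[::-1] == [5, 4, 3, 2, 1, 0]     # step -1: reversed copy
assert nums[:3] == [0, 1, 2]                # stop index 3 is exclusive
assert nums[3:] == [3, 4, 5]                # start index 3 is inclusive

copy = nums[:]                              # full shallow copy of the list
assert copy == nums and copy is not nums    # equal contents, distinct object
```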
Copyright © 2021 Code 4 Tomorrow. All rights reserved. The code in this course is licensed under the MIT License. If you would like to use content from any of our courses, you must obtain our explicit written permission and provide credit. Please contact email@example.com for inquiries.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656788.77/warc/CC-MAIN-20230609164851-20230609194851-00028.warc.gz
|
CC-MAIN-2023-23
| 1,110
| 15
|
https://jenniferfirn.wordpress.com/2015/03/03/qut-monitoring-alignment-technology/
|
code
|
I am working on an exciting robotics project with Michael Milford and Matthew Dunbabin–extreme geniuses! Please check out this recent paper led by Michael
Milford, Michael, Firn, Jennifer, Beattie, James, Jacobson, Adam, Pepperell, Edward, Mason, Eugene, Kimlin, Michael, & Dunbabin, Matthew, “Automated sensory data alignment for environmental and epidermal change monitoring”, in Australasian Conference on Robotics and Automation, Melbourne, Australia, 2014.
This new alignment algorithm created by Michael can increase the speed and accuracy of monitoring and surveys in ecology and other fields.
If you are interested follow this link to some of our recent photos: http://www.placerecognition.com/envmon/index.html
We are working on the algorithms now to extract data from these “place recognition” images.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528141.87/warc/CC-MAIN-20190722154408-20190722180408-00033.warc.gz
|
CC-MAIN-2019-30
| 821
| 5
|
https://cybernationalsecurity.com/dragora-linux-is-anything-but-simple/
|
code
|
Dragora is a fledgling Linux distribution that neither works out of the box nor is user-friendly.
That said, if you have an adventurous interest in practically starting from scratch and somewhat building your own computing platform, Dragora could be an interesting side project to learn how a distribution works on the inside.
Brace yourself for a strong measure of frustration, especially if you are not already familiar with how the Linux operating system works. The Argentina-based developer, Matías Fonzo, offers very little documentation. An online wiki file provides little help, thanks to its heavy dose of technical terminology.
My initial experiences in trying Dragora remind me of my early days some two decades ago, when I first dabbled in this thing called “Linux.” That was not a pleasant experience. Neither was revisiting those days while testing Dragora.
Dragora’s intended audience is users who want to learn more about the technical aspects of a GNU/Linux distribution and people looking to use the purest ethical software for daily use.
Ultimately, my salvation in getting Dragora’s live session to work was luck. I used trial-and-error tactics. Thanks to years of applying hands-on knowledge and reading website blurbs about the latest, greatest same-old desktop features, I was able to fill in the vast gaps of missing information on the Dragora landing page.
The process for many Linux adopters, I’m sure, resembles my weekly approach to selecting Linux distributions and testing them for reviews. It somewhat resembles catalog or online shopping.
The limited blurbs about the relatively young Dragora Linux piqued my interest. Some of its goals and technologies were interesting.
This release begins the development of the series 3.0 migration toward a new C library, Musl, along with the continuation of supervision capabilities and the restructuring of the hierarchy of directories. Another goal is the improvement of the tools provided by the distribution, a new automatic method to build the distribution, and the prebuilt cross-compiler set.
So I made the free download in anticipation of a satisfying new Linux OS discovery. Alas, the installation ISO was a big disappointment. The letdown was much like opening a delivered package from a catalog purchase only to find the contents fell short of the hype.
What It Is
Dragora GNU/Linux-Libre is a distribution created from scratch to provide a multi-platform and multipurpose operating system. It is independent, and is built upon 100 percent free software.
The developer published version 3.0 Beta , a new development release, on Oct. 19. This latest version release follows the 3.0 Alpha 1 released nearly two years ago.
It includes a new system installer and Xfce 4.14 as the default desktop environment. Also available are the IceWM — dragora-ice, a customized version of IceWM — and Scrotwm window manager environments as desktop choices.
One of the enticements that led me to check out Dragora Linux was curiosity about the modified IceWM desktop, as well as Scrotwm. I like distros offering the IceWM desktop and was not familiar with Scrotwm. Lightweight pseudo desktops based on these window managers generally run well and are good choices for newcomers looking for simple-to-use systems.
However, my disappointment with the default live session seriously dampened my plan to pursue the other two options. It no longer seemed worth the effort to find the packages on the disorganized Dragora website and try again to get another installation working.
Independent Distro Drawbacks
Dragora’s independent status should be a strong adoption point. That was another high-interest draw that led me to select this distro for testing.
Being independent means that instead of plugging in working components from a base distribution such as Debian or Ubuntu, the developer has to provide those tools in-house.
Despite being around for a number of years, Dragora has had only a few stable versions. The developer is working on perfecting the version 3.0 Beta family, so running Dragora means working with unfamiliar distro tools.
One of the major new components is Dragora’s in-house bred package manager system, called “Qi.” Its newest version 1.3 is included in the 3.0 beta distro.
The problem is not being able to find Qi. It appears to be a GUI-less tool that works only via command line.
New Stuff Must Work
Qi is described as a very simple packaging system that allows installing, removing, upgrading and creating packages. Given the lack of a well-stocked software repository, the process involves automating the compiling process.
Qi is founded on the concepts of simplicity and elegance. It can be run for almost any purpose — be it desktop, workstation, server or development.
In short, Qi does not come bundled in the initial installation. It is not part of the live ISO either. So you first have to go through the hassle of finding the download packages to install Qi before you can add other missing applications.
This distro and its in-house tools are not user-friendly. It is also more difficult to set up, thanks to a lack of documentation.
The Dragora website gives you nothing in the way of a quick startup guide. What it does provide mostly does not work. You can sense already the source of the startup frustration.
For example, the single Web page says you can find more information about Dragora by running the commands “info dragora” or “man dragora” on your Dragora system, and that a brief summary is available by running “dragora-help.” (Note: Do not include the quotation marks.)
That’s all good if you succeed in loading the so called live session ISO, or spend a few hours loading basic system applications once you manage to install Dragora to your computer’s hard drive.
The “live” session does not have a Web browser, terminal application or package manager installed. So the website’s help suggestions are far from helpful.
Dragora 3.0 beta comes with a new installer invoked from the command line with “dragora-installer.” It also has a new tool to configure the keyboard mapping in the console called “dragora-keymap.” These are also additional steps in the installation routine.
It is nice having unique in-house tools that enhance an independent distro’s functionality — but having to install them to complete the system makes Dragora less user-friendly. The multiple installation steps and nearly empty menus make Dragora a far cry from being ready to use right out of the box.
Failure to Launch
Speaking of ISOs, that is where my unfriendly journey to installing Dragora began. The ability to download a Linux OS in hybrid form to run a fully functional testing version without altering the host computer is one of the great joys of Linux.
The operative words in that description of “live ISO” are “live” and “fully functional.” Dragora hedges on the first, and flat out fails on the second.
If the scarce information on the one-page website included just a tiny bit more detail, there would not be the expectation that users would boot to an actual desktop view when the “live” session loaded.
Sure, the page did conspicuously say that the user name is “root” and the password is “gregora.” It would have been oh so helpful if the developer added one final sentence: Then type “startx” (as in start the X Window System) and press the enter key.
That would have saved me at least one hour trying to diagnose why the “live session” was not loading. It would have saved me eliminating the cause: Is the download file corrupted? Was there a glitch in the process of burning the ISO files to a bootable DVD?
What other issues could cause the desktop not to load? Each time I tried rebooting, various lines scrolling down the screen displayed the words “failed” and “error.” So not getting a desktop justified my thoughts that Dragora was broken.
New users less experienced or unfamiliar with Linux no doubt would be unfamiliar with that command line script. Fortunately, I remembered the “startx” solution.
Watching the Xfce screen appear gave me a new surge of confidence in Dragora Linux. That feeling left quickly when I tried to access the help and information files the website mentioned.
No terminal apps were installed. Instead, a window popped open with a selection field to pick one. Nothing was listed to pick.
Okay, let’s install one, I thought. Yup! No package manager was installed. The selection field again was empty.
Oh, so let’s download one. Oops! None was available.
The menu categories mostly listed only a few titles, but that did not matter as few of the titles actually were installed, except for the system tools and the system settings menu categories.
So much for a fully functional live Linux desktop session!
Unlike many Linux distros, Dragora does not have an installation launcher included in the live session ISO. You cannot issue command line scripts to manually launch Qi because no terminal app is included by default. So the not-so-live DVD is little more than a demonstration vehicle and is otherwise useless.
The installation solution is not pretty. If you click on enough links on the Dragora Linux website, you stumble on the bottom of a page that gives limited installation instructions. The process does not involve downloading an installation ISO.
The Dragora project has two primary git repositories hosted on Savannah [https://www.dragora.org/en/index.html] and Notabug.org. The Savannah repository for Dragora links back to the Dragora website. The Notabug.org repository actually has all the file packages for downloading, uncompressing and compiling for a laborious installation task.
You also can go to the git repository and install the git application to retrieve the latest Dragora revisions with this command: git clone git://.
Either way, you must install this distro by compiling source code.
The developer describes Dragora “as an independent GNU/Linux-Libre distribution based on concepts of simplicity.” Perhaps the problem rests on the definition of the word “simplicity.” Dragora 3 is anything but simple to install or to manage packages on.
Dragora version 2.2.0 had a text-based installer that automated much of the file fetching and compiling. It started the installation process by creating a bootable DVD from a downloaded ISO file.
You booted the computer using the DVD and typed “setup” to begin the scripted installation routine. The process included partitioning the hard drive manually and processing configuration tasks when prompted.
That was a more traditional installation routine. It was, in fact, SIMPLER than dealing with what I described above.
For the less adventurous, I cannot recommend Dragora Linux. If you are a seasoned software engineer or otherwise handy at performing complicated compiling routines, feel invited to try Dragora 3 beta.
My suggestion to the developer: Lose Qi. Replace it with an installation process that is actually SIMPLE.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371818008.97/warc/CC-MAIN-20200408135412-20200408165912-00156.warc.gz
|
CC-MAIN-2020-16
| 11,057
| 58
|
http://slideplayer.com/slide/1513841/
|
code
|
What is a Map? A map is a drawing that is the representation, on a certain scale, of a terrain.
The classic "Big (Composite) Map" will:
- Have a very big file size for the "real-estate" you get in the game world.
- Have limits with texture sizes.
- Not be very flexible; you won't really be able to re-use whole screens as different parts of the world.
- Be easier to program with.
- Won't require a 'Tile Editor' to compose.
The "Tiled Map" will:
- Have a much smaller file size for a much larger world.
- Be a bit more fiddly to program (but it's not massively complex).
- Require you to edit / load tile configurations (maps) in the game from your own file format.
- Make large scrolling levels work much better.
- Have much more flexibility - one tile set could make dozens of levels.
Requesting a Map The map will be generated with a proper level of detail, depending on the following parameters: the latitude and longitude of the center of the map, and the zoom level, corresponding to how much space is to be represented within a usually preset content size. The level of detail is then deduced, depending on how much information can be represented with the above constraints.
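The lat/lon-plus-zoom request maps onto concrete tile indices via the standard "slippy map" formula used by OpenStreetMap and similar Web Mercator tile services; a minimal sketch:

```python
import math

def deg2tile(lat_deg: float, lon_deg: float, zoom: int):
    """Convert WGS84 lat/lon to OpenStreetMap tile x/y at a given zoom.

    Each zoom level doubles the grid in both axes: 2**zoom tiles per side,
    so higher zoom means more tiles covering less ground each, i.e. more detail.
    """
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# The whole world at zoom 0 is a single tile; zoom 1 splits it into a 2x2 grid,
# and (0, 0) lat/lon falls on the tile just southeast of the grid center.
assert deg2tile(0.0, 0.0, 0) == (0, 0)
assert deg2tile(0.0, 0.0, 1) == (1, 1)
```

This is why the zoom parameter alone fixes the level of detail: the server only has to render what fits in one fixed-size tile at that grid resolution.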
OpenStreetMap OpenStreetMap (OSM) is a collaborative project to create a free editable map of the world – www.openstreetmap.org. www.openstreetmap.org The data comes from: Portable GPS devices, aerial photography, from other free sources, or simply from local knowledge.
OpenStreetMap OpenStreetMap was inspired by sites such as Wikipedia the map display features a prominent 'Edit' tab and a full revision history is maintained. Registered users can upload GPS track logs and edit the vector data using the given editing tools.
OpenStreetMap – Map Production The initial map data was all built from scratch by volunteers performing ground surveys using a GPS unit and a notebook or a voice recorder. The data was then entered into the OpenStreetMap database. At present, the availability of aerial photography and other data sources has greatly increased the speed of the work, and the data is collected more accurately. Ground surveys are performed by volunteers. The data is entered into the database using one of several purpose-built map editors.
Tile Rendering The tiles are pre-rendered and stored on disk in 2 sets: 1. Tiles rendered by Mapnik 2. Osmarender renderings (produced by tiles@home)
Different Tile Renderings The maps are rendered as raster images called tiles as a result of fetching the map data via the API. Renderers include Mapnik, Osmarender, and CloudMade.
Mapnik Tile Rendering Mapnik tiles are currently generated on tile.openstreetmap.org. The Mapnik database is updated with hourly diffs so that most data changes should get rendered within an hour. Mapnik rendering runs as an Apache module called mod_tile, developed especially for high-performance needs.
Mapnik Renderer Rules Every tile has a timestamp for when it was rendered and a dirty flag signifying that it is ready to be re-rendered. Whenever a tile is looked at, it is checked whether it is older than seven days. If it is older than seven days, it is marked dirty (and thus rendered). A background rendering process generates a list of all dirty tiles and then proceeds to render them all. Once it has finished, it queries the list of dirty tiles again. Tiles are rendered on an interest/attention-first basis. Marking a tile dirty does not mark all sub tiles as dirty.
Libraries for Displaying the Tiles: OpenLayers and Google Maps. OpenLayers can combine maps from different sources (Google Maps background, WMS overlays, vector data from KML or GML files or WFS, etc.). You can style OpenLayers much more than is possible with Google Maps. OpenLayers is open source, so debugging is possible. If maps with high precision are requested, the best choice is using OpenLayers with a suitable map server backend rather than Google Maps, to get a better map projection (Google Maps uses the Mercator projection, so it cannot show areas around the poles).
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513760.4/warc/CC-MAIN-20181021052235-20181021073735-00281.warc.gz
|
CC-MAIN-2018-43
| 3,935
| 12
|
https://rss.org.uk/consultants-directory/143/philipcrook/
|
code
|
Send me an email
Areas of Consultancy:
Censuses and surveys
Region of consultancy:
I am a professional statistician, having spent 30 years in the UK Government Statistical Service. As a young man I spent 6½ years in the Seychelles statistics office, and from 1989 moved into development statistics with DFID (Department for International Development UK). I resigned from DFID in 2007 in order to accompany my wife on her postings with the British Council. I have done consultancy work for the World Bank, OECD, DFID, UNDP, GIZ, Particip GmBH, GFA Consulting Group and Oxford Policy Management. I am fully familiar with the Sustainable Development Goals, Targets and Indicators, including their shortcomings, and the major international statistical development initiatives
My principal interest nowadays is in the way data (qualitative and quantitative) and statistics are used for monitoring and evaluation of development projects, and in particular the construction of realistic and timely monitoring systems. My other area of interest relates to National Statistical Development Strategies and the revitalisation of statistical systems in developing countries, particularly with the launch of the post-2015 agenda.
My strength is in communicating with non-statisticians and in quickly understanding the essence of a development programme, leading to effective indicators of progress.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391309.4/warc/CC-MAIN-20200526191453-20200526221453-00230.warc.gz
|
CC-MAIN-2020-24
| 1,386
| 7
|
https://www.outpostzebra.com/how-to-play/often-asked-how-to-upload-an-app-to-the-play-store.html
|
code
|
- 1 How can I upload my app on Google Play Store?
- 2 How much does it cost to publish an app on Google Play?
- 3 How can I publish my app in Play Store free?
- 4 How can I earn money by uploading apps on playstore?
- 5 Where can I upload my app for free?
- 6 Is it free to upload apps on Google Play?
- 7 Do you have to pay to put an app on the app store?
- 8 How many apps can I publish on Play Store?
- 9 How do free apps make money?
- 10 How do I build an app?
- 11 How do I publish my app?
- 12 How long does it take to publish app on Play Store?
- 13 How much does playstore pay per app download?
- 14 Can we earn by making app?
- 15 How much money does playstore pay per download?
How can I upload my app on Google Play Store?
Upload the App’s APK File to Google Play In your browser, go to the address, click Developer Console and log in with your Android Developer account credentials. Click the Add New Application button to begin adding your app to Google Play. Select the language and the name of your app. Press the Upload APK button.
How much does it cost to publish an app on Google Play?
Open Google Play Console and create a developer account. How much does it cost to publish an Android app? The operation costs $25. You pay only once, the account gives you the right to publish as many apps as you want anytime and anywhere.
How can I publish my app in Play Store free?
First of all, let me tell you that if you want to publish any of your applications on the Google Play Store, you first have to create a Google Play Console account. Only then will you be able to upload any of your Android applications to the Google Play Store for free.
How can I earn money by uploading apps on playstore?
You can earn money after uploading your app on Google Play Store by choosing one of the methods of monetization: show ads in your app with AdMob; charge users for app download; offer in- app purchases; charge monthly for access to your app; charge for premium features; find a sponsor and show their ads in your app.
Where can I upload my app for free?
Top 8 App Stores To Publish Your Apps And Get Extra Traffic & Downloads
- Amazon. Developers can publish their mobile apps, video games, and software’s for Android, iOS and web platforms.
- Opera Mobile Store.
Is it free to upload apps on Google Play?
There is a one-time fee of $25 by which a developer can open an account, loaded with functions and control features. After paying this one-time fee, you can upload apps to Google Play Store for free.
Do you have to pay to put an app on the app store?
Steps to publish an app on the App Store: users have to pay the App Store fee as a cost to publish apps and make them available for download and installation.
How many apps can I publish on Play Store?
Ans: The current upload limit for apps on the Play Console is 15 apps within a 24-hour period. This limit includes the first APK uploaded for an app project, even for projects that are deleted later on.
How do free apps make money?
How to earn money from android apps?
- In-app purchases.
- Referral marketing.
- Paid apps.
- Crowdfunding; etc.
How do I build an app?
How to make an app for beginners in 10 steps
- Generate an app idea.
- Do competitive market research.
- Write out the features for your app.
- Make design mockups of your app.
- Create your app’s graphic design.
- Put together an app marketing plan.
- Build the app with one of these options.
- Submit your app to the App Store.
How do I publish my app?
How to Publish an Android App on Google Play Store: A Step-by-Step Guide
- Step 1: Create a Google Developer account.
- Step 2: Add a Merchant Account.
- Step 3: Prepare the Documents.
- Step 4: Study Google Developer Policies.
- Step 5: Technical Requirements.
- Step 6: Creating the App on the Google Console.
- Step 7: Store Listing.
How long does it take to publish app on Play Store?
Publishing in the app stores takes 5 to 10 business days for Apple and 2 to 3 business days for the Google Play store. Publishing for Android can sometimes take only a few hours.
How much does playstore pay per app download?
If we talk about any 3 to 5 free apps launched by a developer on the Google Play Store, then based on this figure and the number of downloads from the Google Play Store, the revenue is low, as Google pays around 2 cents to its developers for every single app download.
Can we earn by making app?
With that said, 16% of Android developers earn over $5,000 per month with their mobile apps, and 25% of iOS developers make over $5,000 through app earnings. So keep these figures in mind if you're only planning to release on just one operating system.
How much money does playstore pay per download?
How much does Google pay per download of an Android app? Ans: Google takes 30% of the revenue made on the Android app and gives the rest – 70% to the developers.
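The 70/30 split quoted above is simple arithmetic; a tiny sketch, where the `developer_payout` helper and its 30% default are illustrative, taken only from the figure in the answer:

```python
def developer_payout(gross: float, store_cut: float = 0.30) -> float:
    """Developer's share of app revenue after the store's commission."""
    return round(gross * (1.0 - store_cut), 2)

assert developer_payout(100.0) == 70.0   # 30% commission leaves 70%
assert developer_payout(25.0) == 17.5
```

Actual store commission tiers vary by program and revenue band, so treat the flat 30% as the headline rate from the text, not a universal constant.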
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662627464.60/warc/CC-MAIN-20220526224902-20220527014902-00153.warc.gz
|
CC-MAIN-2022-21
| 4,883
| 66
|
https://analyticsindiamag.com/data-science-skills-survey-2022-by-aim-and-great-learning/
|
code
|
Listen to this story
The Data Science Skills study is a survey-based report highlighting various skills considered by industry professionals to be in high demand. The report finds out different tools, technologies, or skills across categories that are currently being used or that are imperative to know/learn if one is to make a career in data science. The report further identifies the suitability of different skills by years of experience and sectors. It also discusses the time spent by practising and non-practising data science professionals on learning these skills through different formats.
Data science and its applications are becoming more common in a rapidly digitising world. As a result, many students/professionals from different disciplines seek sources that can help them understand the key skill sets required to kickstart/stay relevant for a career in data science. Recruiters or industry professionals also need to gauge what tools are in higher demand and why. This report presents a comprehensive view to all the stakeholders — students, professionals, recruiters, and others — about the different key data science tools or skillsets required to start or advance a career in the data science industry.
The report has been developed after rigorous primary research through a survey distributed to data scientists and leading AI/ML practitioners. This was complemented by direct discussions with job-seekers to understand and gauge their perspective on the in-demand skills in this domain.
All past reports:
- 84.4% of professionals mentioned that recruiters look for Machine Learning as the most crucial skill at the time of hiring, followed by Statistics at 78.9%.
- More than one in two (55.7%) professionals spend their time weekly to upskill.
- 61.7% of Data Science professionals are learning Cloud Technologies to upskill.
- Almost nine in ten (87.8%) Data Science professionals mentioned that knowledge of programming languages (R, Python, SAS) is one of the most basic skills to kickstart a career in Data Science.
- More than nine in ten (90.6%) professionals use Python as a programming language for Statistical Modelling.
- MS Excel (63.3%), Tableau (56.7%), and MS Power BI (43.9%) are the three most used tools for data visualisation.
- More than three in four (77.8%) professionals use Conventional ML Models like Regression, Logistic Regression, Decision Tree, SVM, Naive Bayes, etc.
Common skills looked at by recruiters
84.4% of professionals mentioned that recruiters look for Machine Learning as the most crucial skill during hiring
Almost two in three professionals with less than 3 years of experience said recruiters consider Data Visualisation as a must-have skill when hiring—this number reduces for respondents with more years of experience
Nine in ten professionals from the BFSI and Pharma & Healthcare sector said recruiters look for Statistics as one of the core skills during the hiring
According to 84.3% respondents (4 out of 5), Machine Learning is considered as a top skill in candidates by recruiters when hiring data scientists. This is followed by proficiency in Statistics (78.9%) and Communication (72.8%). Some recruiters consider communication skills to be more important than Programming Knowledge (70.0%). 62.2% respondents (3 in 5) stated that recruiters look for Data Wrangling and Preprocessing skills whereas 55.6% (1 in 2) recruiters looked for Data Visualisation as a skillset.
92.3% (9 out of 10) professionals with more than 10 years of experience think Machine Learning is a considered a common skill by recruiters, compared to 81.9% respondents with less than 3 years of experience. The share of professionals with more than 10 years of experience agreed that Communication and Big Data skills are demanded 1.4 and 1.2 times higher than those with less than 3 years of experience.
4 out of 5 IT professionals said that recruiters prioritise critical skills such as Machine Learning (84.3%), Statistics (81.4%), Communication (81.4%) and Programming Knowledge (81.4%). Similarly, 9 out of 10 (90.0%) BFSI and Pharma & Healthcare professionals said that Statistics is one of the core skills that recruiters seek. The same respondents from the BFSI sector agreed that Machine Learning is one of the most desired skills.
The share of professionals who agreed that domain knowledge was important was the highest (60.0%) in Pharma & Healthcare. Presentation skills were considered noticeably more important in Pharma & Healthcare (70.0%) and Retail, CPG, & E-commerce (73.7%) compared to other industries.
Need for upskilling
Data Science professionals are critical to a company’s development, innovation, and decision-making processes, and they must be able to adapt to an ever-changing digital world.
Therefore, upskilling helps professionals broaden their abilities and knowledge required for future employment, opportunities and success. This is supported by 98.6% of respondents who agree with the need for continuous upskilling in the field.
Time invested in upskilling
One in two Data Science professionals spend time upskilling themselves weekly
Almost two in three Data Science professionals in the Retail, CPG and E-Commerce industry upskill weekly
3 in 4 Data Science professionals with less than 3 years of work experience engage in upskilling weekly, while more than half of the professionals in the 3-6 year work experience bracket upskill weekly
According to the survey responses, 55.7% professionals spend time upskilling weekly. Around 22.8% spend time every month, while 11.9% do it quarterly. A meagre 5.9% do it annually, and 3.7% never upskill.
Professionals with less than 3 years of experience are the most active in upskilling themselves. 72.2% (3 out of 4) Data Science professionals with less than 3 years of experience upskill weekly. 56.6% professionals with 3-6 years of experience also upskill weekly, but a significant share of these professionals (28.3%) upskill on a monthly basis. Similarly, 31.0% (1 out of 3) professionals with 6-10 years of experience prefer to upgrade their skills quarterly.
Professionals with less than 3 years of experience are the most active in upskilling themselves
63.6% professionals from the Retail, CPG and E-Commerce sectors are the most active in updating their skills weekly. On the other hand, 35.1% Data Science professionals from the BFSI sector upskill monthly.
New skills data scientists are learning
Three out of five Data Science professionals are learning Cloud technologies to upskill
70% professionals working in BFSI stated that they have upskilled in MLOps
Cloud technologies, MLOps, and Advanced Deep Learning Models like Transformers are the top 3 new skills Data Scientists/Analysts are trying to learn or upskill in
To remain relevant to the industry’s current needs, Data Science professionals continuously update their skills. As per the survey, 61.7% (3 out of 5) professionals said they are upgrading their skills in Cloud technologies (Azure, AWS, GCP). Following that, 56.1% of professionals are learning MLOps and 55.0% are learning Transformers.
The most popular skill to acquire among professionals with more than 10 years of experience is MLOps, with almost 73.1% (3 out of 4) professionals learning techniques to scale ML models, one of the most pressing concerns in the industry. This is followed by Reinforcement Learning (57.7%), Cloud Technologies (57.7%), Transformers (57.7%) and others. Professionals with 3-6 years of experience are more inclined towards acquiring Cloud technologies (71.7%) as a core new skill, followed by MLOps (62.3%), Transformers (60.4%) and others.
Professionals working in the Retail, CPG and E-Commerce sectors are more inclined towards learning Cloud technologies (73.7%) as a new skill. On the other hand, professionals in the BFSI sector are more likely to learn MLOps (70.0%) as a new skill set. Similarly, professionals in the Pharma & Healthcare sector are interested in learning Transformers (70.0%) and Computer Vision (60.0%) as core skills.
Cloud for data analysis is in high demand and that is reflected in the high share of professionals choosing to upskill in the technology.
Basic skills needed for a data science career
Nine out of ten Data Science professionals mentioned that knowledge of programming languages (R, Python, SAS) is the most basic skill to start a career in Data Science
Four in five professionals said that Statistics is an important basic skill to start a Data Science career
Programming (in R, Python, SAS), Statistics, and a basic understanding of Machine Learning are considered to be the top 3 basic skills for a career in Data Science
According to the survey, 87.8% (9 in 10) respondents said that knowledge of programming languages like Python, R, or SQL is the most basic skill to kickstart a career in Data Science/Analytics. This is followed by knowledge of statistics (80.6%) and basic ML understanding, as 75.6% of respondents claimed.
All (100.0%) respondents with more than 10 years of experience said that the ability to code in statistical programming languages is a must-have skill to start a career in Data Science. This is followed by knowledge of statistics and basic Machine Learning concepts, both at 80.8%. Similarly, five in six (83.3%) Data Science professionals with less than 3 years of experience think that knowledge of statistics is a must. A significantly higher percentage of professionals (77.4%) with 3 to 6 years of experience said that Data Wrangling and Preprocessing skills are important compared to professionals in other experience brackets.
In terms of industries, 94.7% (9 out of 10) survey respondents in the Retail, CPG, & E-Commerce said that knowledge of ML concepts is the most basic skill to start a career in Data Science. The demand for Statistics (86.7%) is the highest among BFSI professionals, and the demand for Data Visualisation skills is highest in Pharma & Healthcare (70.0%). By and large, it was agreed among all industries that knowledge of programming language is the most important skill to start a career in Data Science.
More than three in four professionals claiming that basic ML understanding is a must-have skill for a career in Data Science is indicative of increasing maturity in the field.
Languages used for statistical modelling
Nine in ten professionals use Python for statistical modelling
Python, SQL, R are the top three languages preferred by Data Scientists
Data science professionals with more than 10 years of experience are 3.3 times more likely to use SAS than those with less than 3 years of experience
Python is the most popular programming language in Data Science, with nine in ten (90.6%) Data Science professionals saying they use it for statistical modelling. After that, SQL and R were preferred by 52.8% and 38.3% of participants, respectively.
Years of experience plays a prominent role in some of the languages used by Data Science professionals. For instance, data scientists with more than 10 years of experience are 3.3 times more likely to use SAS than those with less than 3 years of experience. Similarly, the use of R increases by 1.8 times.
Python remains the most used programming language across all the sectors, with at least eight out of ten professionals in every industry surveyed saying they use it. Apart from that, the use of SQL (68.4%) is highest in Retail, CPG and E-commerce, followed by IT at 62.9%. R is the most commonly used programming language in the Pharma & Healthcare sector, with three in five (60.0%) professionals claiming they use it for statistical modelling.
Enterprises prefer languages like Python and R over SAS, not just because of the cost factor but also because technologies are often first released on open source.
Despite the cost factor, Pharma & Healthcare (20.0%) and BFSI (23.3%) also widely utilise SAS since it is a preferred choice of tool by most for clinical trial data analysis and also because it offers better security.
Data Visualisation tools
MS Excel is the most widely used visualisation tool, with two in three analytics professionals using it
MS Excel, Tableau, and MS Power BI are the three most used tools for Data Visualisation
MS Excel is used by 84.6% professionals with more than 10 years of experience
Despite all the technological advancements in Data Science, the use of MS Excel remains high, especially when building data visualisations. 63.3% (2 in 3) analytics professionals said that they use MS Excel. This is followed by Tableau (56.7%), Power BI (43.9%), and QlikView (12.2%).
The utilisation of MS Excel (84.6%) is especially high among people with more than 10 years of experience. On the other hand, Tableau is the preferred choice for professionals between 3-6 years (50.9%), followed by MS Excel (45.3%) and Power BI (34.0%). Similarly, Data Science professionals with 6-10 years of experience also prefer Tableau.
People with 3-10 years of experience are more hands-on and use comparatively more complex tools like Tableau for dashboards than just MS Excel.
By sectors, Tableau is the most popular tool in Pharma & Healthcare according to four out of five (80.0%) professionals who said they use it for data visualisation. Similarly, 65.7% of IT respondents said they use Tableau compared to 61.4% who use Power BI and 58.6% that use Excel. On the other hand, MS Excel remains the most used tool for Data Visualisation in all the other surveyed sectors.
Data Science models
Three out of four Data Science professionals use Conventional Machine Learning models on a regular basis
Two in five data science professionals use Convolution Neural Networks
Five out of six professionals with 10+ years of experience said they use RNNs
Conventional Machine Learning models like Linear Regression, Logistic Regression, Decision Tree, SVM, Naive Bayes, etc. are the most utilised ML techniques among Data Science professionals—more than three out of four (77.8%) respondents said they use it on a regular basis. This is followed by CNN at 40.0%, LSTM at 31.7%, and RNN at 28.3%.
Data Science professionals who are in the early stage of their careers prefer using Conventional Machine Learning Models since they are just starting out. 61.1% (3 out of 5) respondents with less than 3 years of experience use Conventional Machine Learning models. However, with more experience, data scientists venture into complex models. You can observe an increased use of Neural Networks and Deep Learning models among professionals with 3-6 years of experience. Around 77.4% of them use CNN, 47.2% use RNN, and 47.2% use LSTM. In the 6-10 years experience bracket, you see a lesser use of these models. However, the utilisation again goes up for professionals with more than 10 years of experience since they need to keep up to date with the latest technologies and experiment with the state-of-the-art/complex models for research.
Conventional Machine Learning models are the preferred choice of professionals across sectors. Following that, specific industries show a preference for certain models. For instance, CNN is widely used in the IT (44.3%) and BFSI sectors (43.3%) since both these industries see a wide array of applications in segmentation or classification.
Similarly, LSTM (60.0%) or RNN (50.0%) models are widely used in Pharma & Healthcare. 15.8% (1 in 6) data scientists working in Retail, CPG and E-Commerce use Multilayer Perceptrons (MLPs) and 13.3% (1 in 8) professionals working in the BFSI sector use Generative Adversarial Networks (GANs).
Freshers start out with Conventional ML Models but soon experiment with complex Deep Learning Models or Neural Networks as they gain work experience.
http://graphicdesign.stackexchange.com/questions/1441/photoshop-curved-shadow
How to create the shadow effect as per the url/image below? Specifically, the rounded/curved shadow at the bottom:
I suspect that the curve on that shadow might be an optical illusion of sorts. As @lawndartcatcher explains in his answer, the curved look can be achieved by making the intensity (or opacity) of the shadow fall off towards either end.
Here is a step-by-step look at that process.
Here is my top layer:
Below that I add a basic soft shadow (I used a feathered selection to make it):
Now here's the part that gives the curved look. I screen this gradient over the shadow layer:
And I get this result:
Putting it all together gives something that I think matches closely with your reference:
Here is a look at my final layers in Photoshop CS3:
NOTE: I used a gradient with its blending mode set to screen to create the intensity falloff of the shadow. While this makes for a good visual demonstration, it really only works when you are dealing with a white background. To apply the same technique to cases with different background colors, you would want to apply the gradient as a layer mask to the shadow layer.
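The screen blend used above can be sketched numerically. Per 8-bit channel, screen inverts both inputs, multiplies them, and inverts the result back, so a white gradient pixel fully lifts the shadow while a black one leaves it untouched (the pixel values below are illustrative):

```python
def screen(base, blend):
    # Screen blend for 8-bit channels: invert, multiply, invert back
    return 255 - (255 - base) * (255 - blend) // 255

shadow = 80  # a mid-grey drop-shadow pixel
print(screen(shadow, 255))  # white end of the gradient: shadow vanishes (255)
print(screen(shadow, 0))    # black middle of the gradient: shadow unchanged (80)
print(screen(shadow, 128))  # part-way along: shadow partially lifted
```

Running the white-to-black-to-white gradient through this per pixel is what produces the apparent curve in the shadow.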
It looks like an extremely stretched circle with a 2 or 3px feather to me... not a gradient or true drop shadow at all.
layer2 (circle marquee with 2px feather and anti-alias on filled w/ black. Layer opacity set to 25%)
Both layers combined.
My example only took 3 minutes to build. You could def elaborate by using a large 10% opacity eraser with a soft edge to help fade the outer edges more (for example) on the layer 2. Or using a warm gray to fill the circle as opposed to black - that's all up to you.
1) Create a gradient that has the areas that are "shallower" (not as much of a drop shadow) with lower opacity values.
Check this tutorial and follow the same steps and you'll get your effect for sure.
If we examine shadows of objects lifted in the middle we can see that their shadow blur progresses the more it's lifted and is more sharp the more the object touches its ground.
So the most realistic shadow of an object that seems to be lifted in the middle should be done this way:
This will create the most realistic shadow effect that can be seen below on this image (click for a 100% preview). Shadow with different strengths is applied three times
https://smarketryblog.com/2018/03/21/
You put a lot of thought into your ad. You found a good and legal image that fits well with your message. You came up with verbiage that’s optimized to the keywords you’re targeting. You made sure you had the right link so that you don’t get slapped with automatic disapproval for a broken link, which can set your plans back.
You pressed publish. It was approved and ran for the whole time…but you got barely a nibble.
This has happened to me on more than one occasion. While it’s disappointing, it is an opportunity to understand why an ad wasn’t conversion-worthy. Here’s what I learned:
https://blog.cjthedj97.me/home/what-you-shount-use-mod_userdir
I commonly see people use or want to use IP/~/username on shared servers.
I wouldn't recommend this because you've now created additional work that will be required to make the site live.
Instead, you can avoid these extra steps by creating the domain as if it were live, then making an edit to the hosts file on your computer.
Note: This isn't a guide on how to make a host file edit. If you don't know where to start the following search query should get you started. https://duckduckgo.com/?q=how+to+make+a+host+file+edit
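For reference, a hosts-file entry is just an IP address followed by the hostname(s) it should resolve to. The address and domain below are placeholders, not from the original post:

```
203.0.113.10    example.com
203.0.113.10    www.example.com
```

With that entry in place, your browser resolves the domain to the shared server directly, so you can preview the site under its real name before DNS points there.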
https://gist.github.com/milquetoastable/66f2f2ca5320ceda604afdc449d29e1c
Remaps Caps Lock key to Left Control via the Registry.
Windows Registry Editor Version 5.00
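The body of the .reg file is not reproduced above. The standard Scancode Map value that remaps Caps Lock (scancode 0x3A) to Left Control (0x1D) looks like this — 8 header bytes, a mapping count of 2 (one remap plus the null terminator), the target/source pair, and the terminator:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00
```

A reboot (or at least a log-off) is required before the remap takes effect, since the driver reads this value at startup.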
https://www.techrepublic.com/forums/discussions/incoming-e-mail-freezes/
Incoming E-mail freezes
I have Exchange 5.5 running on WinNT 4 Server, and WIn98 Desktops with Outlook 2000 SR1. When e-mail arrives to various workstations, Outlook will freeze with an icon of an envelope showing. When the PC is restarted, Outlook will show that an e-mail has come through that you could not see before. This problem is intermittent but enough to cause people to complain. I have NAV 2000 running on all workstations with NIS2000. Can anybody help me with this problem as I have spoken to Microsoft Tech support with no help.
https://mugenguild.com/forum/topics/movehitpersist-cns-169292.0.html
Part of the Statedef
If set to 1, the move hit information from the previous state (whether the attack hit or missed, guarded, etc; see "Move*" triggers in trigger docs) will be carried over into this state. If set to 0 (the default), this information will be reset upon entry into this state.
MoveHitPersist must be set in the StateDef of the state that you wish to use it in. This trigger can be handy for aerial moves that have their own landing states, where you wish to make it possible to cancel the recovery frames of the landing state into another move by using MoveContact as a trigger. For example:
[Statedef 2001]
type = S
movetype = I
physics = S
anim = XXX
MoveHitPersist = 1
Assuming that Player1 enters State 2001 immediately from an aerial attack which made contact with Player2, any time the MoveContact trigger is called while Player1 is in State 2001, MoveContact will return 1 (true). Without defining MoveHitPersist, this information would be reset and MoveContact would return 0 (false) regardless of whether Player1's previous state made contact with Player2 or not.
https://photography-on-the.net/forum/showthread.php?t=1177061&page=1
The game will last 4 days and whoever can produce the best edited image (as judged by me) will be picked, and it'll be their turn to post their unedited picture for all of us to take a crack at.
The participants must have their "Image Editing OK" turned on and they must provide at least a simple breakdown of how they edited the image. Some entrants are a little vague on this point; please give us all the details so we can learn and share tips. For instance, if you use the Unsharp Mask tool or Smart Sharpen, provide us with the adjustments you used so others can try them out.
You can post multiple images, but the first image you post will be the one that is judged - all other images will be considered for knowledge and tip use only.
Also, please keep all comments, be they positive or negative, to yourself until after the game has ended.
If you happen to be the winner of the game please start the next one with the title of the game and the following number. For example…the next game should read "Before & After #421."
Game ends April 27th at 17:00 MST (10:00 UTC)
Be patient with me on Friday; I will judge the contest that evening, but I will be on the road, so it will depend on the time I get to my hotel room. I have a laptop and free wifi, so that won't be an issue.
The full size jpg is on flickr if you would rather use that
IMG_6090 by 08photog, on Flickr
https://www.just-plan-it.com/documentation/xx/execute%20mode?hs_preview=QlGnEQhE-5891600340
The Execute Mode (PRO and ENTERPRISE Edition)
To understand the purpose of the Execute Mode, let's first have a look at how just plan it works without the Execute Mode being enabled. In that case:
- Every task and job is treated as "planned", meaning that every task and job can get affected by changes to the schedule.
- A task's status can't be set to started or finished, or, in other words: start or end date of a task can't be frozen.
The Execute Mode is a fundamental enhancement of this approach, its purpose being to keep your schedule current by integrating shopfloor data. To achieve this, the Execute Mode enables you to
- get shopfloor data into just plan it easily
- change a task's status to not-yet started, started, or finished
- easily update your schedule reflecting the current state on the shopfloor
The Execute Mode is just meant for setting and approving shopfloor data.
Getting started with the Execute Mode
The Execute Mode is deactivated by default so that you need to switch it on in the "Settings" dialog if you want to use it.
When the Execute Mode is switched on, you can no longer change the planning start via settings nor via drag & drop. The only way to change the planning start is via the pulse functionality (see below).
Working in the Execute Mode
The Execute Mode introduces new elements to your planning:
- User type operator - providing the planner with any kind of scheduling-relevant information from the shopfloor while working in the Operator Client. The operator can only see and set shopfloor data. For a detailed description of the Operator Clients, see below.
- Yellow pulse-to line - representing the date to which the planning start of the entire schedule will be pushed when the planner approves the shopfloor data.
- Icons that have especially been added for the Execute Mode. They are described in the Menu Ribbon chapter.
Working in the Execute Mode basically involves two steps:
- Set shopfloor data: Tasks are updated with information from the shopfloor.
- Approve shopfloor data: The submitted shopfloor data are reviewed and then approved by the planner.
Read below a detailed description of these two steps.
Set shopfloor data
For providing the planner with as detailed task information as possible three additional task statuses are available in the Execute Mode:
- Not-yet started: provides the planner with firm information; the task can still be moved.
- Started: started tasks have a fixed start date and can only get prolonged.
- Finished: finished tasks cannot be moved any more and no longer occupy any resources.
To set shopfloor data, click the "Execute" tab in the menu bar. Handling and structure of this view is mainly the same as already known from just plan it.
You can:
- toggle between Job View and Resource View
- quickly navigate from one task to another
- provide data from the shopfloor regarding individual tasks
You can not:
- schedule anything. Drag & drop and all scheduling buttons are disabled.
The "Set Shopfloor Data" dialog
In the right part of the "set shopfloor data" stage you can provide the feedback from the shopfloor about the selected task.
- Set the shopfloor start date/time by clicking the desired button.
The value entered in the "From Now" field reflects the operator's feedback that this task will run another 60 minutes, now here meaning the moment the operator reports this data and not the time/date when the scheduler approves it. Approving this information will not change the pulse-to line.
- Set the shopfloor finish date/time.
- Confirm the shopfloor resource.
- Set and change the shopfloor task status.
- The time stamp informs you about when an operator last updated the shopfloor data.
- You can see which operator provided the last feedback on a specific task. This information is available when the planner approves shopfloor data via the Execute Mode as well as in the new shopfloor report.
Please note that all of the information you provide in the "set shopfloor data" stage will be stored in a separate database as long as the planner hasn't approved them.
The Pulse-to Line
By clicking "Set Shopfloor Data" the operator provides an important implicit information: The date that represents his most recent feedback. This date marks the date for the yellow pulse-to line. It is the date to which the planning start of the entire schedule will be pushed when the planner approves the shopfloor data.
The Operator Client
Being logged in as operator means no longer seeing the visual schedule, but instead working in an app-style view on any device, so that shopfloor feedback can not only be provided from a PC, but also from mobile devices such as tablets and smartphones.
The Operator Client is especially designed for providing shopfloor feedback, therefore hiding the visual schedule and instead listing jobs and tasks scheduled for their resources and so enabling the operator to act on them accordingly.
The task list
When logging in, the operator initially sees the list of tasks that are scheduled for the resources that he is responsible for. The according resources can be specified in the user management.
Information and options of the task list
- Line 1: job name and assigned resource
- Line 2: task number
- Line 3: planned start date, planned end date, planned runtime
Status icons in the list indicate:
- Feedback is missing, based on the current pulse-to line.
- The operator has set this task to "finished" but the planner has not yet approved this.
- The operator has set this task to "started" but the planner has not yet approved this.
By clicking this button filters can either be applied to
Tasks can be opened by touching (or clicking) the respective line.
The "Set Data" dialog
Once a specific task has been selected, the view of the Operator Client changes and the operator can take action in the "Set Data" dialog as described in detail above for the planner's "Set Shopfloor Data" dialog.
- Set the remaining runtime from now.
- Set information about the finish time.
- Provide information about the resource that is actually carrying out this task on the shopfloor and give information about the task status.
- An operator note can be added, basically meaning sending a message to the planner about this task.
- Go back to the task list.
- Open an info window with more details on the selected task.
Working with the Operator Client happens from www.just-plan-it.com so that you do not need an additional app.
Approve shopfloor data
Approving the shopfloor data triggers an automatic update of the schedule, which we call "pulse", by which your schedule will get pulsed to a new planning start (line). The new planning start represents said date of the most recent shopfloor information.
Shopfloor data that are still "to be approved" are indicated by the "Enter Approval mode" notification icon which is placed both in the Essentials tab and the Execute tab. If you click this icon you get a summary on the data you have and on (potentially) missing data.
The options of the summary in detail
- The number of tasks marked with a green check mark will get updated according to the shopfloor data.
- The number of tasks marked with a gray symbol will get pushed to the yellow pulse-to line as there is no shopfloor data at all.
- The number of tasks marked with a green symbol will get prolonged as they are started and should be finished by the date represented by the pulse-to line.
- The entire schedule is updated by pushing the planning start to the pulse-to line.
- Reject all the operators' updates in one go.
- Resetting the planning start (by a date/time picker) means that the current planning start becomes the new pulse-to line and all the tasks between the old planning start and the newly defined planning start get set back to the last update given by the operator (in some sense, you de-approve this operator information).
- The planner can run and export a report that provides detailed information about the shopfloor status of the jobs and tasks.
- Brings you back to the "Set Shopfloor Data" dialog without updating any data.
The two steps of the Execute Mode are precisely summed up in this short video:
Since the Execute Mode is a quite complex functionality we have provided for you lots of further support material.
We strongly recommend to watch our video tutorials about the Execute Mode which in a compact and concise way will make you familiar with the major concepts of working with this powerful mode:
Approving shop floor data
Approving and pulsing
Our recorded webinars present you with a demo and an explanation of the Execute Mode.
https://community.e.foundation/t/how-to-building-an-e-os-rom-the-repo-sync-way-for-an-unsupported-device-using-lineageos-sources/51869
I compiled my experience while building an /e/OS ROM the Repo Sync way for an unsupported device.
Please have a look and comment with suggestions to improve or correct the instructions.
To prepare everything you need, you can have a look e.g. here at how to install platform-tools and repo and everything else you need to build yourself the Repo Sync way.
I would recommend starting by building LineageOS and, when successful, then building /e/OS. But you can skip this step. In this example the /e/OS-R version is used (Android 11/Android R).
So you create a folder where you want to put the /e/OS sources and run following command in the terminal:
repo init -u https://gitlab.e.foundation/e/os/android.git -b v1-r
Start the download (this will take a while; depending on your internet connection it might take some hours):
To download the prebuild apps of /e/OS:
repo forall -c 'git lfs pull'
Take care of below:
Commands you need to know (mandatory):
Initialize the environment with the envsetup.sh script:
Choose a target (in this case its gts4lv)
Start the build, where gts4lv is the device code in this case. The device codes are different for every device.
Build Options you should know:
Clean previous builds:
change to root directory of the project
Caching to speed up build, here its set to 100GB:
ccache -M 100G
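The actual commands behind the steps above are not spelled out here; under the usual LineageOS/AOSP build flow they would typically look like the following, using the gts4lv device code from the example (a sketch to run from the source root, not verbatim from the original guide):

```shell
source build/envsetup.sh   # initialize the environment
breakfast gts4lv           # choose the build target
brunch gts4lv              # start the build

# Build options mentioned above:
make clean                 # clean previous builds
croot                      # change to the root directory of the project
ccache -M 100G             # cap the compiler cache at 100 GB
```

breakfast downloads/configures the device-specific sources for the target, while brunch runs breakfast plus the full build in one go.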
This is often shown when using LineageOS 14.1 and running out of memory, I don’t know if this is still applicable for newer LineageOS versions:
export ANDROID_JACK_VM_ARGS="-Dfile.encoding=UTF-8 -XX:+TieredCompilation -Xmx4G"
If you have an AMD CPU and get this error during the build: ERROR: Dex2oat failed to compile a boot image.
A good source to find unsupported devices is the XDA forum.
In this how-to I took the Samsung Galaxy Tab S3 as example using LineageOS 18.1 (Android 11/Android R)
That means you need to use the /e/OS-R sources (repo init -u https://gitlab.e.foundation/e/os/android.git -b v1-r).
In that above link you find the GitHub sources for this device which you need to download
Device Tree:
- gts3l-common: https://github.com/awesometic/android_device_samsung_gts3l-common
- gts3llte: https://github.com/awesometic/android_device_samsung_gts3llte
- gts3lwifi: https://github.com/awesometic/android_device_samsung_gts3lwifi
Kernel: https://github.com/awesometic/android_kernel_samsung_msm8996
Vendor: https://github.com/awesometic/proprietary_vendor_samsung
You need to grab the respective sources via git or via web download. Just ensure you download the correct branch; in this case it's lineage-18.1.
The device trees contain the respective device codes; here you have the ones for the Samsung Galaxy Tab S3 WiFi and LTE versions. There is also a common device tree, i.e. you need to download the common device tree in every case, plus the WiFi or LTE version as needed for the device you want to build.
Kernel is one file and contains the respective chipset for this device.
Vendor is similar to device tree and you need the common as well as the WIFI or LTE version.
Into the /e/OS folder you created, where you put the /e/OS-R sources, you now copy the device sources:
The device files go into device/samsung, i.e. you end up with device/samsung/gts3l-common plus device/samsung/gts3llte and/or device/samsung/gts3lwifi.
The kernel tree goes into kernel/samsung/msm8996.
The vendor files go into vendor/samsung.
In this case you also need to download the following from GitHub:
This file goes into the folder device/samsung/qcom-common.
This file goes into the folder hardware/samsung.
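To avoid typos it can help to create all the target folders named above up front, before copying or cloning the sources into them. A minimal sketch, assuming ~/eos_R as the root of your /e/OS source tree:

```shell
# Create the folder skeleton the device, kernel, vendor and hardware
# sources go into. EOS_ROOT is an assumed path; adjust it to wherever
# you ran `repo init`.
EOS_ROOT="${EOS_ROOT:-$HOME/eos_R}"

for d in \
    device/samsung/gts3l-common \
    device/samsung/gts3lwifi \
    device/samsung/gts3llte \
    device/samsung/qcom-common \
    kernel/samsung/msm8996 \
    vendor/samsung \
    hardware/samsung
do
    mkdir -p "$EOS_ROOT/$d"   # -p: create parents, ignore existing dirs
done
```

If you build for only one variant, you can of course drop the gts3lwifi or gts3llte line.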
The device names and codes are the following:
Galaxy Tab S3 LTE (gts3llte, SM-T825)
Galaxy Tab S3 WiFi (gts3lwifi, SM-T820)
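As a tiny sanity check, the model-to-code mapping above can be expressed as a shell helper. This is purely illustrative; the function name is made up:

```shell
# Hypothetical helper: map a Galaxy Tab S3 model number to its device code.
device_code() {
    case "$1" in
        SM-T820) echo gts3lwifi ;;   # Galaxy Tab S3 WiFi
        SM-T825) echo gts3llte ;;    # Galaxy Tab S3 LTE
        *)       echo "unknown model: $1" >&2; return 1 ;;
    esac
}

device_code SM-T820   # prints gts3lwifi
```

The device code, not the model number, is what breakfast and brunch expect later on.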
The manifest for the WIFI version is here: https://github.com/awesometic/android_device_samsung_gts3lwifi
The manifest for the LTE version is here: https://github.com/awesometic/android_device_samsung_gts3llte
Just copy the respective XML code into a text editor and save it with an .xml suffix, e.g. as gts3llte.xml.
You can also create one manifest.xml containing all code for both devices.
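A combined local manifest for both variants could look roughly like this. The project names, paths and revision are taken from the sources listed above; the remote name github is the usual default in device manifests, but check what your manifest from XDA actually uses:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <project name="awesometic/android_device_samsung_gts3l-common"
           path="device/samsung/gts3l-common"
           remote="github" revision="lineage-18.1" />
  <project name="awesometic/android_device_samsung_gts3lwifi"
           path="device/samsung/gts3lwifi"
           remote="github" revision="lineage-18.1" />
  <project name="awesometic/android_device_samsung_gts3llte"
           path="device/samsung/gts3llte"
           remote="github" revision="lineage-18.1" />
  <project name="awesometic/android_kernel_samsung_msm8996"
           path="kernel/samsung/msm8996"
           remote="github" revision="lineage-18.1" />
  <project name="awesometic/proprietary_vendor_samsung"
           path="vendor/samsung"
           remote="github" revision="lineage-18.1" />
</manifest>
```

With a local manifest in place, repo sync fetches these trees for you instead of copying them in by hand.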
The manifest file needs to go into .repo/local_manifests folder. Ensure that this folder contains only one manifest file.
You won't find a manifest.xml for every device in the XDA forum. In that case you need to create the manifest.xml yourself.
After putting all the sources into the respective folders and storing the manifest.xml in the correct folder, you run the following commands to build for the Galaxy Tab S3 WiFi:
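The commands are not spelled out at this point in the post, but from the notes further down (source build/envsetup.sh, brunch gts3lwifi) the usual sequence would be:

```shell
cd ~/eos_R                  # assumed root of your /e/OS source tree
source build/envsetup.sh    # load the build helper functions
breakfast gts3lwifi         # select the device configuration
brunch gts3lwifi            # build the ROM
```

breakfast is also a quick way to check that all device, kernel and vendor trees are in place before committing to a full build.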
Building will take a while depending on your CPU, RAM and SSD power. If all went fine you find the ROM here:
If you have the ROM you need to look at the install instructions for that device.
When you receive an error you need to address it and then you can restart with brunch gts3lwifi.
For the Galaxy Tab S3 LTE you need to use device code gts3llte.
I got an error which I needed to address with the following command. If you encounter the same error you can use the command below (adapt it to your paths and file):
ln -s lld /mnt/media/e/eos_R/prebuilts/clang/host/linux-x86/clang-r383902b/bin/ld
Usually the sources provide the device, kernel, vendor and hardware files, which you need to put into the respective folders of your /e/OS source tree.
When running breakfast with the respective device code you might get an error message pointing out which files are missing. Grab them from the GitHub sources and put them into the respective folders until no errors remain.
Ensure the platform tools and the build packages, as outlined in https://wiki.lineageos.org/devices/gts4lv/build, are correctly installed, and that repo sync performs a full sync without errors.
Sometimes you need to apply a patch as per the instructions, e.g.:
repopick -f 331661
Just ensure you run source build/envsetup.sh prior to run repopick.
Why not use Docker?
Sure, you can use Docker if you want, and if you know where to put the sources and the manifest.xml file. I prefer the repo sync way because I can terminate the build easily.
https://www.thewindowsclub.com/windows-app-boss-desktop-uninstaller
Are you a Windows 10/8 user? If yes, you have probably loaded your computer with plenty of Windows apps. I am not sure about you, but I have dozens of Microsoft Store apps installed on my laptop. Thankfully I have found an easy way to uninstall them from my desktop.
Windows App Boss
Windows App Boss is a reliable and handy application designed to manage Windows 8 Modern applications. It allows users to add, register and uninstall Windows Store apps. The best part is that you just need to download it and you can start working with it, as it does not need to be installed.
Uninstall Microsoft Store apps from Desktop
You can use it to uninstall Windows 8 apps right from the desktop. It has a simple, user-friendly interface and lists all the installed Windows 8 apps, letting you manage them right from the main UI of Windows App Boss. I find it convenient because it displays all my installed apps on one screen, so I do not need to search for them on my system. The compact display of Windows App Boss speeds up the procedure of uninstalling apps.
Along with easy uninstallation of apps, Windows App Boss brings some other features as well. I can view the settings of any of my installed apps.
Other salient features of the program include:
- Add, remove, register, and test signed apps
- Add, remove, and register provisioned apps with license and custom.data files
- Manage (Create, Remove, Swap) snapshots of app state (LocalState & settings.dat)
- Add, Remove, Update Windows Developer License
- View the settings (settings.dat) of any Windows app in plain text
- View preload (custom.data) files
- Enable / Disable app sideloading (AllowAllTrustedApps)
- Launch apps that do not appear on the start screen
It however lacks one feature that I would have liked to see: the ability to uninstall multiple apps in one go.
The name of the program "Windows App Boss" is absolutely justified, as it manages all my Windows apps like a boss. I must mention here that the program is not affiliated with or supported by Microsoft, and it won't run on Windows RT systems.
The software is FREE and absolutely simple to use. You can download the program from HERE.
https://forums.macrumors.com/threads/not-booting-without-pressure.1065078/
I posted a while back about my MacBook not working: the disc spinning, a steady sleep light, but no chime and a black screen. Since then I took out screws and had a look at the logic board, tightened all the loose screws, and pulled through this metal fabric wire by the screw near the disc drive; it was loose and torn. After waiting, it turned on! But I still got abrupt shut-offs, distorted screens, kernel panics, and force-restart messages. Then it wouldn't boot up at all without pressure near the power button. That worked for a while, then it would turn off. Then pressure had to be applied elsewhere to get it to turn on. I have pressed on the flat rest above the hard drive, the opposite side of the power button, everywhere. Now nothing is working. Are there screws that I shouldn't have tightened? Is there something wrong with that fabric metal thing? Another issue is that there has been overheating, and I'm sure power-offs due to overheating; I know the fan is running, but it still gets too hot. What's going on??