url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://blog.aspose.com/2007/10/30/aspose-barcode-documentation-gets-a-new-look/
code
The Aspose.BarCode documentation has recently gotten a makeover. Here is a list of some of the things that have changed:
- New content and links to useful resources have been added
- Code snippets have been tested, and more descriptive comments have been added where needed
- The latest screenshots have been added, which are visually clearer and convey the right message
- Special attention was paid to the language, taking care to avoid grammatical errors and spelling mistakes
- The documentation was converted into two more useful and user-friendly formats, namely WebHelp and CHM
The new documentation is available at http://www.aspose.com/products/Aspose.BarCode/Doc/WebHelp/
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00418.warc.gz
CC-MAIN-2021-43
699
7
https://www.pluralsight.com/courses/microsoft-azure-cognitive-services-custom-vision-api
code
The Custom Vision Service is a machine learning algorithm, provided as a service, supporting image classification for the subject of your choice. This course will show you how to train a model and integrate it into a variety of applications. The Microsoft Cognitive Services suite, hosted in Azure, provides a range of APIs that support developers in integrating artificial intelligence (AI) features within their applications. In this course, Microsoft Azure Cognitive Services: Custom Vision API, you will gain the ability to work with the Custom Vision Service, understand its benefits and limitations, create, train, and improve an image classification model, and work with it within a range of applications. First, you will learn about the problem of image classification that the Custom Vision Service is designed to solve, and how you can create a model for image recognition dedicated to your subject of interest. Next, you will discover, via the service APIs, how you can integrate image prediction and model training functionality into your own web applications. Finally, you will explore how to export the model for use in offline contexts, such as within a desktop application or embedded in a mobile app. When you are finished with this course, you will have the skills and knowledge of working with the Custom Vision Service needed to develop and use your own classification models to provide image recognition. Andy is a developer and architect, working at a digital agency with a primary technical focus on using .NET and Azure to deliver solutions for clients. Future interests include .NET, Azure, Azure Functions, Bot Framework, Alexa, Umbraco, Sitecore, and EPiServer. Section Introduction Transcripts Course Overview Hi everyone. My name is Andy Butland, and welcome to my course on the Custom Vision Service, part of the Microsoft Azure Cognitive Services. 
I've been working as a software developer for 20 years, primarily on the web, but focusing on .NET and Azure. In recent years, the availability of machine learning algorithms as services has transformed our ability as developers to add artificial intelligence features to our applications. Functionality that was once the _____ of hardcore AI experts can now be accessed by anyone with some development experience and a cloud subscription. In this course, we're going to investigate one such service, the Custom Vision service, available from Microsoft and hosted on Azure. Some of the major topics we will cover include understanding image classification, the problem the service is designed to solve; training a model via the Azure portal and service APIs; integrating the prediction API into a web application; and exporting a model for offline use within mobile and desktop apps. By the end of this course, you will know how to create and train effective Custom Vision service models and be able to integrate them within a range of applications. Before beginning this course, you should be familiar with the C# programming language. I hope you'll join me on this journey to learn about the Custom Vision service with the Microsoft Azure Cognitive Services: Custom Vision API course, at Pluralsight. Introducing the Custom Vision Service Hello. My name is Andy, and welcome to this course introducing the Custom Vision API, part of the suite of artificial intelligence offerings provided by Microsoft under the banner of Cognitive Services. In this module of the course, we'll introduce the service and discuss the types of problems it's designed to solve. By the end of the module, you'll have a good understanding of where it fits within Microsoft's AI offerings and how we might use it to add features to our applications. Here's an overview of the topics we'll cover in this module. 
Firstly, I'll briefly introduce the Microsoft Cognitive Services and the range of intelligent algorithms provided that we can use within our websites and apps. We'll discuss the differences between Custom Vision and the related Computer Vision service so we can see which one might be most appropriate for our needs. Lastly, we'll look at the problem that the Custom Vision service is looking to solve, that of image classification. We'll review the process of building and training a model and using it to make predictions. Although not necessary for utilizing the service or API, we'll also take what I hope will be a short but interesting diversion into the theory of image classification problems and how algorithms can be used to solve them. Building and Training a Custom Vision Service Model Welcome back to this course introducing the Custom Vision service available through Microsoft Azure. In the last module, we took a little time to introduce the service and the problem of image classification that it's designed to solve. Here we'll be taking a more practical focus, stepping through how we create a Custom Vision service project and build, train, and test a model within it, all via Azure portal interfaces. By the end of this module, you'll understand how to carry out these tasks to create an image classification model dedicated to your chosen subject or domain. Although touched on in the previous module, in this one we'll introduce more fully the example problem domain, model, and application we're going to build throughout this course, that of a garden birds classifier. We're then going to see how, given a preexisting set of source images already classified with the garden bird they show, we can create a model to predict which bird is seen in further pictures. We'll do this in a walkthrough from start to finish using the interfaces provided via the Azure portal. 
Working with the Custom Vision Service Prediction API Thanks for joining me in this next module on working with the Custom Vision service, part of the Cognitive Services suite from Microsoft Azure. In previous modules we've looked at the image classification problem and how the Custom Vision service can help us with solving it using a model trained on a set of images drawn from a single domain. We've seen how we can create, train, and evaluate a model using the Azure portal interfaces. In this module, we're going to step away from the portal and into code, demonstrating how we can incorporate the model's predictive results within a web application. Within this module, we'll demonstrate the web application we're going to build, which will incorporate image classification features provided by the Custom Vision service model. We'll cover the information we need to obtain from the portal in order to access the Prediction API, which we'll also introduce. After that, we'll carry out a detailed walkthrough of the architecture and code used to build the application, focusing in particular on how we integrate with the Prediction API. Working with the Custom Vision Service Training API Welcome back to a further module in this Pluralsight course on working with the Azure Custom Vision service. We've seen so far how we can work with the service via the portal interfaces to build and train a model, and, using the Prediction API, how we can integrate the results of image classification operations using that model into our applications. In addition, the service offers a second API, known as the Training API, and that's going to be the subject of this module. In a previous module, we looked at how we can use the Azure Custom Vision portal interface to carry out the various tasks needed in building and training an image classification model. 
This has involved creating tags, uploading images and assigning them to those tags, and training the model, creating new iterations of hopefully improving accuracy as we do so. All those actions are supported by an API offered by the service, which means we have the option of integrating model-building features into our own applications. Whilst this won't always be useful, and we can of course continue to use the portal for such tasks, there could be cases where maintaining a model becomes part of a business process, and as such it makes sense to provide such features within a custom application. In the upcoming module, we'll see a web application in which some of these features have been implemented, which will serve to demonstrate how we can build an equivalent means of carrying out the model training tasks that are available via the portal. As before, we'll then review the application architecture and step through the source code to show how this has been carried out. Using a Custom Vision Service Model Offline Thanks for joining me in this final module on the subject of the Azure Custom Vision service. We've looked in previous modules at how we can create and train an image classification model via the Azure portal and APIs, as well as how to integrate predictions from the model into a web application by calling the Prediction API over HTTP. Sometimes, though, a real-time request over the internet isn't feasible or appropriate, perhaps for a mobile or desktop application where network connectivity may not be permanently available. In this module, we'll look at how we can handle offline scenarios like this by exporting the model into a file that we can embed and use within our applications. We've previously looked at integrating the image classification features offered by the Custom Vision service using the Prediction API, which requires real-time, REST-based HTTP calls. 
To support offline use of the model, we can export it into a format that can be packaged up and distributed with our application. We'll first look at the portal interface steps we need to go through to prepare and export an iteration of our model into the appropriate format for use in different types of applications. Once we have an exported model, we'll then look at how we can use it on three different platforms. Firstly, a Windows 10 UWP application, then a Xamarin-based Android mobile app, and finally, a Python script.
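As a small taste of the Prediction API integration the course covers, here is a minimal sketch in Python (the course itself uses C#). The endpoint shape and `Prediction-Key` header follow Microsoft's published REST interface, but treat the URL, project ID, and key below as placeholders to verify against the "Prediction URL" dialog in your own Custom Vision portal:

```python
# Sketch of calling the Custom Vision Prediction API and reading the result.
# PREDICTION_URL and PREDICTION_KEY are placeholders -- copy the real values
# from your Custom Vision resource in the Azure portal.
PREDICTION_URL = "https://<region>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<projectId>/classify/iterations/<iteration>/image"
PREDICTION_KEY = "<your-prediction-key>"

def top_prediction(response_json):
    """Return the (tagName, probability) pair with the highest probability."""
    best = max(response_json["predictions"], key=lambda p: p["probability"])
    return best["tagName"], best["probability"]

# The real HTTP call would look like:
#   r = requests.post(PREDICTION_URL, data=image_bytes,
#                     headers={"Prediction-Key": PREDICTION_KEY,
#                              "Content-Type": "application/octet-stream"})
#   tag, prob = top_prediction(r.json())
# Here we exercise the parsing on a canned response, using tags from the
# course's garden-birds example:
sample = {"predictions": [{"tagName": "robin", "probability": 0.91},
                          {"tagName": "blue tit", "probability": 0.07}]}
print(top_prediction(sample))  # ('robin', 0.91)
```

The same `predictions` array shape comes back whether you post raw image bytes or an image URL, so the parsing helper works for both variants.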
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999620.99/warc/CC-MAIN-20190624171058-20190624193058-00419.warc.gz
CC-MAIN-2019-26
9,981
11
https://waynehoover.com/resume
code
I am a full stack software engineer with 10 years' experience building on the web. I take pride in creating maintainable and scalable web apps that produce results. My focus is on Ruby on Rails and Node. Expert at creating beautiful, rich user experiences. Eight years of experience with Ruby on Rails. Also highly proficient with Redis, Postgres, and ElasticSearch. Node, VueJS, Angular, React Wayne Hoover — email@example.com — (424) 229-2414
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578548241.22/warc/CC-MAIN-20190422075601-20190422101601-00296.warc.gz
CC-MAIN-2019-18
446
7
https://www.hotscripts.com/listing/myuploader-62070/
code
MyUploader is an easy-to-use Java applet for uploading files and folders to an ASP server using the HTTP protocol (RFC 1867). No Java knowledge is needed. Uploading files bigger than 1 gigabyte is not a problem. With drag and drop you can easily upload hundreds of files within seconds. During file upload a progress bar is visible.
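For context, an RFC 1867 upload is an ordinary multipart/form-data POST, which is why any HTTP server can receive it. A minimal Python sketch of how such a request body is assembled (the field name, file name, and boundary below are made up for illustration):

```python
def build_multipart(field_name, filename, content,
                    boundary="----myuploader-example-boundary"):
    """Assemble a minimal RFC 1867 multipart/form-data body for one file."""
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n"
        f"\r\n{content}\r\n"
        f"--{boundary}--\r\n"
    )
    headers = {"Content-Type": f"multipart/form-data; boundary={boundary}"}
    return headers, body.encode("utf-8")

headers, body = build_multipart("userfile", "report.txt", "hello")
print(headers["Content-Type"])
```

A real uploader streams the file bytes rather than building the body in memory, which is what makes multi-gigabyte uploads feasible.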
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00368.warc.gz
CC-MAIN-2022-40
332
1
https://blog.openstreetmap.org/2016/05/02/registration-is-now-open-for-state-of-the-map/
code
Early bird tickets for State of the Map 2016 are now on sale! Come register for this international gathering of the OpenStreetMap community. Move fast. We might sell out! 😉 State of the Map offers value for anyone excited about open location data. Our main conference days will feature nearly 50 talks, open spaces for gatherings, and exhibition areas where individuals and organizations can meet. Hundreds of OpenStreetMap community members are expected to attend and we want you there! Register now or if you need financial assistance, apply for a scholarship. Here are your links: The early bird catches the worm. Or “Le monde appartient à ceux qui se lèvent tôt.” The world of OpenStreetMap belongs to those who get (tickets) early.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056348.59/warc/CC-MAIN-20210918062845-20210918092845-00657.warc.gz
CC-MAIN-2021-39
745
4
https://timpanys.com/collections/most-wanted/products/tom-ford-pink-natalia-python-crossbody-handbag
code
£1,300.00 Tom Ford Pink Natalia Python Crossbody Handbag This gorgeous handbag from Tom Ford comes in a baby pink python skin complemented with gold hardware. Wear it across the body or on the shoulder; a fabulous handbag to wear every day or in the evening. In good pre-owned condition; doesn't come with a dust bag or box. Measurements: L 21 x H 15 x W 2 cm. Or from £700.00 today & 5 weekly interest-free payments of £120.00.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528220.95/warc/CC-MAIN-20190722201122-20190722223122-00212.warc.gz
CC-MAIN-2019-30
474
5
https://www.computerhope.com/jargon/s/ssd-mounting-bracket.htm
code
SSD mounting bracket In the past (and even now), many users have utilized an HDD (hard disk drive) as their primary storage method. However, as the prices of SSDs (solid-state drives) have come down, many consumers have transitioned to them. Older cases, though, still use 3.5" drive bays to house drives, which may be problematic because SSDs are much smaller. To fix this issue, an SSD mounting bracket (pictured) can be used to create a more fitting support for the smaller device in the larger bay. Although an SSD mounting bracket is helpful, most SSDs can be secured to a larger drive bay using only screws due to their light weight.
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986653216.3/warc/CC-MAIN-20191014101303-20191014124303-00210.warc.gz
CC-MAIN-2019-43
655
3
http://stackoverflow.com/questions/13407959/is-it-possible-for-a-sinatra-app-to-use-2-databases
code
We have an API in Sinatra that serves both a staging environment and a production environment. The API should talk to the staging database if the request comes from a staging server. It should talk to the production database if the request comes from a production server. All apps are deployed on Heroku. We can use env['HTTP_HOST'] to find out whether the request is coming from staging or production, and then set the appropriate database URL. However, the problem is the ActiveRecord init code that runs to connect to the db:

    db = URI.parse db_url
    ActiveRecord::Base.establish_connection(
      :adapter  => db.scheme == 'postgres' ? 'postgresql' : db.scheme,
      :host     => db.host,
      :port     => db.port,
      :username => db.user,
      :password => db.password,
      :database => db.path[1..-1],
      :encoding => 'utf8'
    )

Does it make sense to run this code before each request? That would probably be slow... Another solution is to run two instances of the API. But then we need to deploy the same code twice... Is there a better way to do this?
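One common pattern for this kind of problem (not necessarily the accepted answer here): establish one connection per environment once at boot, then pick the right handle per request by host, instead of re-running connection setup on every request. A hedged Python sketch of the pattern, since the app itself is Ruby/ActiveRecord and the connection objects below are just stand-ins:

```python
# Generic sketch: connect once per environment at startup, select per request.
# In the Sinatra app these would be two pre-established ActiveRecord
# connection pools; here connect() returns a placeholder string.
CONNECTIONS = {}

def connect(env):
    return f"connection-to-{env}-db"  # placeholder for a real DB handle

def init():
    """Run once at boot: one connection per environment."""
    for env in ("staging", "production"):
        CONNECTIONS[env] = connect(env)

def connection_for(host):
    """Per-request: map the request host to the right connection."""
    env = "staging" if "staging" in host else "production"
    return CONNECTIONS[env]

init()
print(connection_for("api-staging.example.com"))  # connection-to-staging-db
print(connection_for("api.example.com"))          # connection-to-production-db
```

The per-request work is then a dictionary lookup, not a connection handshake, which avoids the slowness the question worries about.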
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159376.39/warc/CC-MAIN-20160205193919-00147-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
993
9
https://www.physicsforums.com/threads/modelling-dcm-flyback-converter-with-matlab-simulink.662883/
code
I have an actual flyback converter on a chip and I'm thinking of having a simulation platform to model the circuit on MATLAB. This is because somewhere down the road I intend to modify the inductor by biasing it with a permanent magnet in order to prevent premature saturation. It is hoped that this allows for the use of a smaller core in the transformer which will reduce the overall physical dimensions of my converter. My flyback converter operates in DCM. In any case, I have trawled the web and come across the following DCM buck-boost converter which can be modified slightly for my purposes. It basically compiles the state-space equations in a u-->y block and plots the graphs out. I have an oscilloscope so I was thinking of comparing my readings to the simulation graphs as a first pass. Barring things like losses I hope that my simulation graphs will closely match my actual graphs. My questions are: -I know for a fact that my transformer turns ratio is 64:5, how do I incorporate this info in my buck-boost simulation to accurately simulate my turns ratio? -A slightly more challenging part is incorporating the physics of my biased inductor in the simulation: I intend to use FEMM to get flux-current relationships for my biased inductor -- any idea how to incorporate this in the model? Any suggestions/responses would be great, thanks.
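On the turns-ratio question, a common first approximation when reusing a buck-boost model for a flyback is to reflect the secondary-side quantities to the primary using n = Np/Ns (here 64:5). A quick sketch of the standard reflection relations (not specific to any particular Simulink model; the numeric values are illustrative):

```python
def reflect_to_primary(n_p, n_s, v_sec, i_sec, l_sec):
    """Reflect secondary-side voltage, current, and inductance to the
    primary side of an ideal transformer with n = Np/Ns:
      V' = n * V,  I' = I / n,  L' = n^2 * L
    """
    n = n_p / n_s
    return n * v_sec, i_sec / n, n ** 2 * l_sec

# Example with the 64:5 turns ratio from the post and illustrative
# secondary values (5 V, 2 A, 10 uH):
v, i, l = reflect_to_primary(64, 5, 5.0, 2.0, 10e-6)
print(v, i, l)  # 64.0 V, 0.15625 A, 0.0016384 H
```

With everything reflected to one side, the single-inductor buck-boost state-space model can be driven with the primary-referred values; the FEMM-derived flux-current curve would then replace the constant inductance, which this ideal-transformer sketch does not capture.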
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813431.5/warc/CC-MAIN-20180221044156-20180221064156-00206.warc.gz
CC-MAIN-2018-09
1,356
1
https://aws.darcy-it.com/what-is-cloudformation-a-tool-for-building-aws-environments-with-source-code/
code
What is CloudFormation?
- A managed service to build an AWS environment from source code
- Uses templates provided by AWS
- Templates should be written in YAML or JSON format (there are YAML/JSON conversion services on the net)
- What you describe in the template is the AWS environment to build
- Building can be automated by loading templates into CloudFormation
- This eliminates the need to create resources one by one in the management console

What can we do? AWS CloudFormation can be used to describe infrastructure configurations using AWS resources in a template as code, which can then be launched and configured as a single stack at one time. This is Infrastructure as Code: cloud provisioning (server construction) can be accelerated with it. CloudFormation can provide the output values of one stack to another stack, allowing infrastructure to be configured by interlocking between stacks. To share information between stacks, export stack output values; to export the output values of a stack, use the Export field in the Outputs section of the stack template. The tool is provided and made by AWS. The CloudFormation StackSets feature allows you to define AWS resource configurations in CloudFormation templates and then deploy them horizontally across multiple AWS accounts and/or regions with a few clicks. This capability can be leveraged for baseline-level setup of AWS features that resolve cross-account and cross-region scenarios. Once set up, it can be easily deployed to additional accounts and regions. It is also possible to import existing resources into a CloudFormation stack. Cross-stack references are used when one stack consumes another stack's exported output values. Change sets are a mechanism for making changes to CloudFormation template content. 
If you need to update your stack, you can do so with confidence, knowing that you understand the impact on your running resources before implementing the changes. Use change sets to see how proposed changes to the stack might affect running resources (e.g., whether the changes will remove or replace critical resources). AWS SAM is a deployment tool for building serverless applications. Use YAML to model Lambda functions, APIs, databases, and event source mappings for serverless applications. AWS SAM works with CloudFormation to deploy serverless applications: SAM converts and extends the SAM syntax into AWS CloudFormation syntax to accelerate the building of serverless applications. Keep this in mind, as it will be useful when creating and designing network configuration diagrams.
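The export/import flow described above can be sketched as two minimal template fragments (the export name, resource names, and CIDR below are illustrative, not from any real stack):

```yaml
# Stack A: export a value via the Export field in the Outputs section
Outputs:
  VpcId:
    Value: !Ref MyVpc
    Export:
      Name: network-vpc-id

# Stack B: consume the exported value with Fn::ImportValue
Resources:
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !ImportValue network-vpc-id
      CidrBlock: 10.0.1.0/24
```

Note that an exported value cannot be changed or removed while another stack still imports it, which is part of what makes cross-stack references safe for interlocking stacks.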
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653764.55/warc/CC-MAIN-20230607111017-20230607141017-00495.warc.gz
CC-MAIN-2023-23
2,850
34
https://www.sciencecheerleader.com/2009/02/10_questions_for_ray_kurzweil_reader_input/
code
A couple of days ago, I invited readers to submit questions they’d like Ray Kurzweil to answer. The Science Cheerleader and Bartacus will be interviewing Ray Kurzweil (Artificial Intelligence expert; king of the Singularity effort) in the coming weeks. Details are here. As a reminder, the deadline to submit questions is midnight, Monday 2/16. Here are some terrific questions from Science Cheerleader readers: Jon: Singularity University is clearly aimed at helping to shape the Singularity and hasten its arrival. Do exponential trends really need help and, if so, can we really expect to shape them? Paul: 1) What is/will be the relationship between ethics and The Singularity? The rapid growth of science/knowledge leads to many advancements via engineering, but how can/will ethics be applied when mankind can no longer keep pace. Or will this be a problem? 2) What to do about scientific literacy so that everyone can understand, to at least a basic level, the rapidly advancing technology? Given the slow and erratic progress in AI over the past 40 years, what makes Kurzweil so confident that machines will become intelligent (in the commonly understood sense) in the next 40? Or perhaps I should ask the flip side of the question: Suppose that things continue in much the way they are now, with increasingly powerful and miniaturized wireless devices making information available wherever we want it. Does that count as a “singularity”? It is easy for me to imagine, for instance, a brain implant that allows me to conduct Google searches purely by the power of thought–but that merging of biological and digital intelligence seems distinctly different from what Kurzweil means by singularity.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668416.11/warc/CC-MAIN-20191114104329-20191114132329-00378.warc.gz
CC-MAIN-2019-47
1,711
5
https://www.what-song.com/Movies/Soundtrack/103608/Lion
code
Lion (2016) Soundtrack (20 Jan 2016). Lion, released on 20 Jan 2016, consists of a playlist of 7 credited songs from various artists including Sia, Salma Agha and Mondo Rock. The original score is composed by Dustin O'Halloran.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153791.41/warc/CC-MAIN-20210728185528-20210728215528-00485.warc.gz
CC-MAIN-2021-31
311
5
https://access.redhat.com/solutions/1325063
code
- The requirement is to proxy a REST endpoint URL using JBoss ESB, capture the response received from the endpoint, and display it to the browser from which the request message is sent to the ESB application. However, there seems to be something wrong with the encoding in the response.
- For example, ä is displayed instead of the expected character, and ü is displayed instead of another.
- If the REST endpoint is sent the same request directly, then everything works fine. The problem only occurs if the user makes a request via the ESB.
- The issue also persists if the user takes the help of the HttpResponse API to set the Encoding properties manually for the outgoing HTTP Response messages (which the ESB service is delivering to the client), by implementing the suggested code in a custom ESB action class in the action pipeline.
- How to solve this issue?
- Red Hat JBoss SOA Platform (SOA-P)
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738816.7/warc/CC-MAIN-20200811150134-20200811180134-00203.warc.gz
CC-MAIN-2020-34
959
19
https://top-10-list.org/tag/violent-sports/
code
Top 10 Jarring NFL Scandals of All Time Behind every scandal there are facts, opinions and speculations. I tried to run through the largest scandals in NFL history. Picking out the facts, here is what I came up with. Please comment if some of this is news to you.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00212.warc.gz
CC-MAIN-2023-14
268
2
https://upge.wn.com/?from=chiefvolunteers.com&pagenum=2&language_id=1&template=cheetah-photo-search%2Findex.txt&query=chief_volunteers
code
- published: 04 Jun 2020 - views: 1488 Former Minneapolis city leaders are speaking out after the shooting of George Floyd. They describe problems with the police department and alleged systemic racism. Jamie Yuccas reports from Minneapolis. Chief Erika Shields said the two officers were fired because she felt that is what had to be done after watching the video. In a letter obtained by 11Alive and confirmed by the Atlanta Police Department, Shields said they planned to conduct an internal investigation into the other officers on the scene. Two had previously been fired. More: https://www.11alive.com/article/news/crime/atlanta-police-chief-calls-charges-against-officers-political/85-b5980e90-f3bb-453b-9f53-f01304981a6d "There's your police chief!" Howard County Police Chief joined a protest in Columbia, Maryland holding a sign that read, "Silence is complicity." Los Angeles Police Chief Michel Moore said Monday that 700 people had been arrested Sunday during mass protests. About 10% of those arrests were connected to burglary or looting, he said. Seattle mayor Jenny Durkan and Police Chief Carmen Best met with protesters to answer questions and discuss policy changes. 
Chief Art Acevedo said Floyd died in a “manner that was inexcusable.” The Los Angeles Police Commission opened the floor to residents with comments on LAPD’s actions during protests and Chief Michel Moore’s comments. Leaders in Minneapolis' Black community are angry at the thought of an investigation into the Minneapolis Police Department and its first Black chief by an agency they believe has no interest in what's best for the community. Reg Chapman reports (1:58). WCCO 4 News at 6 – June 2, 2020 Kentucky's governor on Monday called for the release of police video from a deadly shooting in Louisville that took place while police officers and National Guard soldiers were enforcing a curfew amid waves of protests in the city over a previous police shooting. The city's police chief said the man was killed early Monday while police officers and National Guard soldiers returned fire after someone in a large group fired at them first. A witness said the group had nothing to do with the protests, and was shocked to see soldiers arrive to disrupt their gathering. Gov. 
Andy Beshear said there's "significant camera footage, body camera and otherwise" from the shooting and pressed Louisville police to release the video as soon as possible. Louisville Mayor Greg Fischer identified the shoot...
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107916776.80/warc/CC-MAIN-20201031062721-20201031092721-00217.warc.gz
CC-MAIN-2020-45
4,028
12
https://premium.wpmudev.org/forums/topic/allowing-pop-up-to-open-multiple-times
code
My site contains a decision tree with eight possible results. For each result, I have a separate popup that is triggered when a user clicks on a text link. Everything works fine the first time the text link is clicked--the popup box appears just like it should. Once the user closes the popup, there is no way to re-open the popup. Is there a specific setting that will allow the user to close the popup box, then click the link again to relaunch the popup box? If this option is available, would you mind walking me through the steps to implement it?
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823442.17/warc/CC-MAIN-20181210191406-20181210212906-00269.warc.gz
CC-MAIN-2018-51
549
3
http://www.whathifi.com/forum/accessories/2-toslink-cables-1-input
code
2 Toslink cables, 1 input... Hello all, I'm hoping someone can point me towards a simple item or solution. Summed up, I have an Xbox 360 and a PS3. Both have digital audio out, in Toslink format. My amp has only one digital in. So I have to dig around swapping cables. Not ideal. Can anyone point me at a simple switch / box that will accept multiple Toslink inputs and one out, to my amp?
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707436332/warc/CC-MAIN-20130516123036-00070-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
385
4
https://www.svasoftware.com/bvq/bvq-for-vmware-vsan/
code
The BVQ VMware platform offers the same functions as the BVQ Storage Platform for analysis, monitoring, management and optimization of the utilization of the VMware environment, including vSAN. Both platforms (VMware and Storage) can be integrated together into one single-pane monitor in BVQ, but they can also be monitored independently of each other. The combination of both platforms in BVQ gives you a single, easy-to-monitor central point. What You Can Have Now: i.e. the path from a VM down to Datastore, Storwize/SVC volume, storage system or even array or drive. Capacity utilization view: i.e. a Datastore can be evaluated at the most current state of the VMware configuration at any time, and unused capacities become visible. Performance metrics are also collected and evaluated, which significantly facilitates the optimization of storage capacity and the finding of issues. All other BVQ functions are available for the VMware Platform, such as the Analytic Expert GUI, web dashboards, web reporting, alerting, accounting, the REST interface and many more. - Extension packages supported: Brocade SAN Integration package (Link)
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737178.6/warc/CC-MAIN-20200807113613-20200807143613-00432.warc.gz
CC-MAIN-2020-34
1,124
10
https://www.freelancer.com.ru/projects/python/detect-behavior-historical-data/
code
Please, I really need a professional who understands the data first (the peaks, the trends, etc.) before applying machine learning to predict the next 5 years. I found many developers with machine learning experience, but the result does not reflect what is happening in the historical data. I have 3 historical files that contain the following: 1. Two data columns; 2. The first column, qtdTitulos, contains the number of notes the database receives daily; 3. Date is the day of the week; Note: The system does not receive entries in the database on weekends, only from Monday to Friday. I need a specialist to do the following: detect the pattern in the files and generate a forecast for the next 5 years. Initially I will send 2 files and the third one I will send later. The structure of all files will be the same. 31 freelancers are bidding on average $216 for this job. This is a time series prediction problem, and I have worked on a similar problem; it was for the internet traffic of a mobile operator. I have used Python (ARIMA, recurrent neural networks). Hello. I have big experience with forecasts and forecasting methods using historical data. We have to discuss the details of the project, then I have to examine the data. If we pass these two stages I'll take the job. I'm not a well-experienced Machine Learning expert, but I have been learning machine learning for the past two years. I will try my best if hired; I need some doubts cleared, please contact me. Hello. I have good skills in Python and R programming. I have read your project description carefully and am very interested. Contact me please. Thanks.
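Before any ARIMA or neural-network model, a dataset like this (business-day counts only) deserves a simple baseline to compare against. A minimal sketch, assuming the file has been parsed into (date, count) pairs; the function names and sample data are hypothetical:

```python
# Hypothetical baseline: forecast the daily qtdTitulos count from the
# historical mean of each weekday, skipping weekends as in the source data.
from datetime import date, timedelta
from statistics import mean

def weekday_baseline(history):
    """history: iterable of (date, count) pairs for business days."""
    by_weekday = {}
    for d, count in history:
        by_weekday.setdefault(d.weekday(), []).append(count)
    overall = mean(c for _, c in history)  # fallback for unseen weekdays
    return {wd: mean(v) for wd, v in by_weekday.items()}, overall

def forecast(history, start, days):
    model, overall = weekday_baseline(history)
    out, d = [], start
    while len(out) < days:
        if d.weekday() < 5:  # Monday..Friday only, as in the database
            out.append((d, model.get(d.weekday(), overall)))
        d += timedelta(days=1)
    return out

history = [(date(2024, 1, 1), 100), (date(2024, 1, 2), 120),
           (date(2024, 1, 8), 110), (date(2024, 1, 9), 130)]
print(forecast(history, date(2024, 1, 15), 2))  # per-weekday means
```

Any ML model a hired developer produces should beat this baseline on held-out data; if it doesn't, it has not learned the weekly pattern.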
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203168.70/warc/CC-MAIN-20190324022143-20190324044143-00284.warc.gz
CC-MAIN-2019-13
1,672
15
https://www.freelancer.com.ru/jobs/nodejs?contest=true
code
Using Freelancer.com to recruit Node.js developers highly skilled at developing web applications is a great way of getting the job done quickly and efficiently. Here are some projects that our expert Node.js developers made real: - Creating websites with complex functionalities - Hosting applications on cloud environments - Extracting text from scanned documents or images - Building web scrapers - Setting up a chatbot integrated with the OpenAI API - Developing user interfaces for websites - Developing APIs for servers/clients - Setting up projects on a PC If you're searching for a Node.js developer to join your team, hiring an experienced freelancer on Freelancer.com would be an ideal choice for completing your project quickly and efficiently with high-quality results. It takes only minutes to post a project; after that you will be able to connect with potential freelancers who have the skills and experience needed for your projects. Hire a freelancer on Freelancer now! Based on 264,426 client reviews, Node.js developers are rated 4.85 out of 5 stars. Hire Node.js developers
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00209.warc.gz
CC-MAIN-2023-50
1,157
12
http://idlermag.com/author/mvdysphonia/
code
Profile: In the past 16 years I have worked in a record store, a record store, a bookstore, and a record store. I’ve also hosted radio shows, covered radio shows and hosted morning radio shows. Presently I teach at-risk 4 year olds in a state-funded readiness program that is 25 miles away from where I live. I have one wife, two children, four cats, and thousands of CDs. There is an old saying about those who work in record stores: they either form bands or become critics. I can’t play an instrument or sing, let’s see if I can write!
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696383156/warc/CC-MAIN-20130516092623-00013-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
544
1
https://www.dynamicsuser.net/t/create-and-run-a-batch-via-cmd-vs-direct-execution/10282
code
Does anybody know what is the advantage of executing a batch file via SHELL('cmd.exe /c file.bat') instead of simply SHELL('file.bat')? If you know the answer straight away you can stop reading now, otherwise below is a little background for my question: What I currently need is to run an external program, where the external program is specified in setup. Both in standard Navision and posted several times in this forum I've seen examples of creating a batch file and then executing that batch file via cmd.exe. The structure I've used so far to solve my task is taken from EFT File Transmitting in the North American version of EFT (COD10090/10091), function TransmitExportedFile. However, since I need to run this in a Citrix environment I can't create the batch file in the client installation directory with [file.CREATE(FileName.Bat)], since there is no access to the installation folder unless you are a server admin. Adding a valid path to the filename creates the batch file where I want it… but that gives me a problem when I try to execute via CommandProcessor := ENVIRON('COMSPEC'); IF SHELL(CommandProcessor,'/c',drive:Path\FileName) = 0 THEN ; since the shell opens in the same installation folder as above and can't find the batch file where I put it, even though the path is specified. I don't get an error message, but I can tell the batch file is not run. Calling the batch directly in the shell instead of via cmd.exe seems to work, but an itch in my left knee tells me that there must be a reason why the less straightforward cmd.exe approach is usually used. Thanks Jens It is an odd thing. I know that in earlier versions of Navision there were a lot of problems with this, and in fact I thought it didn't even work as a direct command. Is it possible that this is something that changed in later versions, but that just due to “history” we still do it the old way? 
Does anybody know what is the advantage of executing a batch file via SHELL('cmd.exe /c file.bat') instead of simply SHELL('file.bat')? There's no reason to use the cmd.exe /c option when calling a .bat file (or any other program file for that matter). This option is primarily (if not solely) used for DOS commands, e.g. SHELL('CMD.EXE', '/C', DIR) - this opens a new instance of the command interpreter, carries out the command and then terminates. If you use /K instead of /C the command window remains. Yes Steffan, I think it's one of those “left over” things. I know that I had an interface developed once that required Navision to pull import files off a Unix system, and file.OPEN kept getting file-locked errors (not very often, maybe 2-3 times a week), so I eventually used a batch to copy the files to a local FAT drive first. This was very unstable if I just called the bat directly, but with cmd /c added it worked fine, running every 60 seconds 24/7. I for one just got into the habit of doing it that way. Unfortunately, old habits die hard. My knee is less itchy now, so I'll carry on calling the batch file directly with SHELL(); Thanks a lot for the input, most appreciated. Just an additional one: it is easier to call the batch file using Windows Scripting Host than doing it from the SHELL command in Navision, as (at least since 4.0) the system asks for confirmation for running the SHELL command at least once per machine. I'm not quite sure how it behaves on Citrix and where Navision stores the answer to the CONFIRM for running the SHELL command.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711108.34/warc/CC-MAIN-20221206124909-20221206154909-00045.warc.gz
CC-MAIN-2022-49
3,500
7
https://community.spiceworks.com/topic/2257980-how-do-i-locate-where-my-openvpn-server-is-located
code
Hello, our company is using OpenVPN as our VPN service. I am trying to locate where the OpenVPN server is located so that I can grant additional users access. I am stuck as to where to look. There is a text document briefly outlining the process of adding an additional user to the VPN, however it only says "Create 'Mikrotik' VPN account in winbox". I've had a look at the MikroTik setup but I cannot find anything that would indicate there being an existing VPN. When I use Winbox to view the MikroTik router and click Quick Set, the VPN area is not enabled. I know that there is some sort of user authentication that exists because I have a username and password. I am in the process of finding out if there is also hardware authentication via the MAC address. Any steps to troubleshoot or gain additional information would be greatly appreciated. Edit: If I go under the interface list I have found that I have a dynamic running connection into the MikroTik router. Does anyone have any ideas as to how I can create another one of these connections?
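If the OpenVPN endpoint really is the MikroTik itself, the usual places to look in a RouterOS terminal (rather than Quick Set) are the interface list, the OVPN server settings, and the PPP secrets, which hold the per-user VPN credentials. A hedged sketch — the command names below are standard RouterOS, but the account values are placeholders, not taken from the question:

```
/interface print                      # look for a dynamic ovpn-in / ovpn-out interface
/interface ovpn-server server print   # is the OVPN server enabled on this router?
/ppp secret print                     # existing VPN user accounts ("'Mikrotik' VPN accounts")
/ppp secret add name=newuser password=... service=ovpn   # grant an additional user
```

If `/interface ovpn-server server print` shows the server disabled, the dynamic running interface you found is more likely an OVPN client connection to a server hosted elsewhere, and the accounts would have to be created on that remote server instead.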
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500095.4/warc/CC-MAIN-20230204075436-20230204105436-00654.warc.gz
CC-MAIN-2023-06
1,048
4
http://gnaural.sourceforge.net/help/schedule_examples_oldformat.html
code
Two example files for Gnaural version 3 and earlier - Basically just a reworking of the default schedule in Gnaural, to have finer resolution and a bit higher peaks. - My first systematic attempt to find some frequencies that sustain "high wakefulness." I ended up with it by manually setting Gnaural's beat frequency to enhance the mental state I am in when I'm, say, programming Gnaural! Between 9.4 and 10.4 seems to be the magic region; and yet I am pretty sure my brain in eyes-open vigilant mode is probably closer to 20 Hz. My theory on this: 20 Hz binaural beats are sort of irritating, so my brain prefers locking on at 2:1 to a slower (and thus less irritating) stimulus. BTW, the occasional drops in frequency were added because I found that occasionally slowing the beats down suddenly every few minutes seems to force my brain to re-sync to the beat, which feels a bit like a jolt of adrenaline, refocusing my attention.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864544.25/warc/CC-MAIN-20180521200606-20180521220606-00582.warc.gz
CC-MAIN-2018-22
928
16
https://bagoscoop.github.io/videos/watch/best-settings-to-give-you-a-huge-pvp-advantage-albion-online-settings
code
Albion Online will load much faster between zones with these best settings that will give you a huge PvP advantage. If you load a zone faster than gankers chasing you, it will give you a huge head start when running. This is the most optimal settings guide for Albion Online. It's raw, unedited, uncut, pure information. Just follow along as I explain what you should have your settings at and I guarantee it will increase your ability to play this at a higher level. If you want to give this game a try, here's the link:https://albiononline.com/?ref=35XAU9EB6D 💸Donate here: https://streamlabs.com/swolebenji (Text to speech for $1 or more.) (I don't have access till I can afford a phone.) I might resort to using https://www.twitter.com/Swoleben1 if I can't get the other unbanned 💬 Discord: https://discord.gg/ZUuSHDa Bursic: $500 (OMG!) Marcus: (Super Chat) A$50 ($33) ⛔️RULES: Just be a chill gamer bro. Title: BEST Settings To Give You A HUGE PvP Advantage! Albion Online Settings Guide. #MMORPG #Albion #AlbionOnline
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488257796.77/warc/CC-MAIN-20210620205203-20210620235203-00047.warc.gz
CC-MAIN-2021-25
1,037
13
https://community.mendix.com/link/space/app-development/questions/87876
code
Hi! As I currently have two active Mendix accounts (neither is an 'old' account), can I merge my accounts? What does merging my accounts do? For example, will the projects I work on from one account remain invisible for employees of the company of my other account? Will I still be able to work on projects from the account I would merge into the other, will I be able to start new projects? Thanks in advance for your answer! Jeroen van Asten When I merged my old account into my new account, my points etc got transferred, but I did not have access to my old project anymore. So it seems like only limited info is copied into your new account. As to the question should you, that's up to you I guess :-) Adnan Ramlawi - TIMESERIES You can merge your accounts via your new Community Profile. Click the 'Account Settings' link on your Profile:
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817289.27/warc/CC-MAIN-20240419043820-20240419073820-00462.warc.gz
CC-MAIN-2024-18
843
6
https://maugryph.com/2013/04/27/building-by-a-oasis-environment-speed-painting-5/
code
Environment Speedpaint 5. It's funny. I didn't have time to draw the oasis because I got too involved with the Roman-ish building, so the left side feels kind of empty. It would have been best for me to look up a reference instead of drawing the building out of my head; however, the building is somewhat sound at least. I love to put in people and crows to indicate perspective. Photoshop: 1 hour. (C) 2013 Maugryph
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00113.warc.gz
CC-MAIN-2021-31
414
2
http://www.websitetemplates.bz/faq/info/CMS_Templates-c22/faqs_description-a_595.html
code
CMS Templates - WordPress - WordPress Categories Each of your posts is filed into various categories. Information displayed in Category blocks is presented in the post meta data section in the sidebar, under the heading title and near your posts. However, various WordPress themes place the post meta data section in various areas. You are able to choose how your categories will be displayed via the template tag the_category(). In order to find this template tag, browse through your index.php files and look for the <?php the_category() ?> tag. By default, the list of categories is displayed with spaces between each category. If you want to change this, you should add parameters inside the tag. You may separate categories with commas, arrows, bullets, pipes and others: <?php the_category(',') ?> and your categories will be separated with commas. Replace the comma with an arrow to separate your categories with arrows: <?php the_category(' > ') ?> And with this code <?php the_category(' | ') ?> you will get your categories separated with pipes. Nevertheless, you can also separate your categories with some text or words: <p>Find related subjects in the <?php the_category(' and ') ?> categories.</p> In this way, your categories will be separated with 'and'. If you replace 'and' with 'or', your categories will be separated with the word 'or'.
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661640.68/warc/CC-MAIN-20160924173741-00090-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
1,338
10
http://www.photopost.com/forum/showthread.php?t=109823
code
I just upgraded from 2.5.2 to 2.8.2 . I use vB3 and have integrated it. I've activated search engine friendly urls. The review home page looks fine. When I click any link to view a review, the vBulletin navbar doesn't display properly. The triangular images that are displayed with the drop menus do not appear. I edited the navbar template to hardcode the links to the images. However, it still doesn't work. For example, the hardcoded link is http://www.mysite.com/forum/images/misc/menu_open.gif But the link that appears when using Search engine friendly urls shows up as I appreciate any input.
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999655160/warc/CC-MAIN-20140305060735-00010-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
599
5
http://ep-dep-sft.web.cern.ch/contributed-event/147660
code
Again an important commitment in the field of education: one third of the scientific program of the topical CERN School of Computing. Here you can find a description of the lectures that will be given: Modern and performant C++ - The ability to design and implement high-throughput scientific applications leveraging the features of a modern programming language is crucial. In this lecture we focus on C++ and in particular on its latest standard, C++11. Starting from real-life and concrete examples, we review the newly introduced semantics and constructs relevant for achieving top-performance parallel implementations. Software design principles that allow such implementations to be seamlessly accommodated are discussed. High-level tools for measuring software performance are introduced as well. Expressing parallelism pragmatically - This lecture focuses on the problem of expressing parallelism when adapting existing scientific software and designing future applications. Design principles aiming at the formulation of parallel programs and data processing frameworks are presented. The concept of task-oriented parallelism is introduced, as well as the difficulties of the related work partitioning. The features which the C++ language and a selection of libraries offer for concurrent programming are explored. Resource protection and thread safety - A fundamental aspect of concurrent programming is the protection of thread-unsafe resources. With arguments relying on concrete use cases, the issue of thread safety and its possible solutions are treated. Designs aiming to avoid contention are characterised, offering veritable patterns applicable also in different disciplines. Different resource protection strategies are evaluated by discussing concrete examples.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316150.53/warc/CC-MAIN-20190821174152-20190821200152-00183.warc.gz
CC-MAIN-2019-35
1,750
24
https://users.rust-lang.org/t/guy-steeles-four-solutions-to-a-trivial-problem-talk-about-sequential-vs-parallel-code/12750
code
I recently came across the ‘Four Solutions to a Trivial Problem’ talk by Guy L. Steele. It talks about different styles of programming to solve a problem, with the important conclusion mostly being that when you write a high-level ‘divide and conquer’ approach to solve a problem, the compiler can decide to optimize it for sequential or parallel computing (or a blend thereof). A simple example would be to say ‘I want to sum this list of numbers’, without specifying how, meaning that the compiler can decide whether to perform a sequential accumulation (linear time, constant space) or a parallel tree-combining approach (logarithmic time, linear space). In the talk, Fortress is mostly used as the language to present the concepts in. However, I do think that many of the concepts can easily be transferred to Rust. So I am wondering: are there existing approaches/libraries in Rust that implement concepts such as the Monoid Cached Tree and the parallel prefix/parallel suffix operations in this way?
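As a concrete illustration of the talk's central point in Rust terms (a hand-written sketch, not an existing library): the programmer's intent is just "sum these numbers", and the same specification admits either a sequential fold or a tree-shaped parallel reduction, here spelled out with scoped threads from the standard library:

```rust
// Sketch: one "sum" intent, two evaluation strategies.
use std::thread;

fn sum_sequential(xs: &[i64]) -> i64 {
    xs.iter().sum() // linear time, constant space
}

fn sum_parallel(xs: &[i64]) -> i64 {
    // tree-shaped divide and conquer: split, sum the halves concurrently, combine.
    // Correct because (i64, +, 0) is a monoid, the talk's key requirement.
    if xs.len() <= 1024 {
        return sum_sequential(xs);
    }
    let (lo, hi) = xs.split_at(xs.len() / 2);
    thread::scope(|s| {
        let h = s.spawn(|| sum_parallel(hi));
        sum_parallel(lo) + h.join().unwrap()
    })
}

fn main() {
    let xs: Vec<i64> = (1..=10_000).collect();
    assert_eq!(sum_sequential(&xs), sum_parallel(&xs));
    println!("{}", sum_sequential(&xs)); // 50005000
}
```

In practice the rayon crate's par_iter().sum() plays the role of the compiler/library choosing the strategy; the sketch above just makes the tree shape explicit.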
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484772.43/warc/CC-MAIN-20190218074121-20190218100121-00593.warc.gz
CC-MAIN-2019-09
1,026
5
https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSMimeHdrFieldAppend.en.html
code
Attaches a MIME field to a header. The header is represented by the bufp and hdr arguments, which should have been obtained by a call to TSHttpTxnClientReqGet() or similar. If the field in the field argument was created by calling TSMimeHdrFieldCreateNamed(), the same bufp and hdr passed to that call should be passed to this function. Returns TS_SUCCESS if the field was attached to the header, TS_ERROR if it was not. Fields cannot be attached to read-only headers.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816863.40/warc/CC-MAIN-20240414002233-20240414032233-00062.warc.gz
CC-MAIN-2024-18
440
9
http://forums.codeguru.com/search.php?s=74336e99bda991fecdce664feb813555&searchid=9131709
code
Type: Posts; User: WeePecky Search took 0.01 seconds. February 10th, 2010, 01:51 PM Cheers Nelo. Simple and sorted. February 9th, 2010, 09:47 PM I am using Visual Studio 2008 SP1, MVC.NET and the .NET Framework 3.5. I have added two public properties to the Global.asax class which I want to access from the \Views\Home\Index.aspx page... You're absolutely correct: I was trying to find the solution to this problem in the Code behind (C# code). I stumbled on another discussion which indicated that the default response of programmers... Well how's that... found the problem...after 4 months looking... You don't want the code ... really you don't... Answer might be obvious to those initiated in .NET Removed the following line... I have written an application that sends pager messages to pagers. The application is running on IIS6, is written in C#, and uses the Microsoft ASP.NET 2.0 AJAX Extensions v 1.x. February 7th, 2008, 01:33 AM For those who are interested in this, it is possible to do this, and I found the solution out there on the net (see links below) after a bit of effort. My solution was in VB.NET which is easier... January 28th, 2008, 06:51 PM I have solved my problem - with a few mods which I will document here for anyone who may need a solution... I found that there is no method to get to the Module code in Access from VB except via... January 22nd, 2008, 04:33 PM Should have stated, they are all Microsoft Access mdb's. January 22nd, 2008, 04:18 PM Hey thanks everyone - I have got the application running now. But I have a new problem! Can anyone advise of an alternative way to open the target database? Some of the Access databases have... January 21st, 2008, 07:25 PM Yes, I got the properties. I want to search the code in the modules and find a connection string. We are trying to identify all Access apps that use data from an application that is about to be retired.... 
January 21st, 2008, 06:37 PM Added reference to Microsoft Access Objects and used the Module datatype. January 21st, 2008, 02:59 PM I am trying to write some VB6 code that will seek out a specific string in an Access Code Module. I can get to the module's name but cannot find properties or methods which will give me a... August 6th, 2007, 05:51 AM I'll let you know how I get on... August 6th, 2007, 05:02 AM Thanks for that dude, I was looking for a C# solution if one is out there. Possibly overloaded class or something... August 5th, 2007, 09:24 PM Is it possible to create a double-click handler for an asp.net ListBox control? If so, are you able to provide a link to a how to? Ok, I'm a wally. I have found examples galore in this discussion site and others out in cyberspace... But, I cannot find an example of how to check for an existing contact in AD (that is not a... Sorry if this has already been answered elsewhere, I did search but did not find an answer ... I am required to write some client side code that will automatically update our Active... I have found the function in a file called winuser.h Doco says include Windows.h - which I have done. I have installed and am using Microsoft Platform SDK for Windows XP SP2. Thanks for your reply, Am I doing something dicky here? I have set the MS Visual Studio to include winspool.lib and the project links to the correct include and library directories. I have tried with an #include <winspool.h> and without. or at least that's the way this seems/feels. I am new to Visual C++ (MS VS 6). I want to print using the winapi to a DC passed from the FaxStartPrintJob method. How to do this has eluded me.... February 23rd, 2005, 01:53 PM Thanks for your previous help. I have the method FaxStartPrintJob behaving nicely, and the export is much more friendly! I have one more question, because while the above method... February 7th, 2005, 09:48 PM Noted with thanks. Will start again. 
February 7th, 2005, 09:32 PM The values are all hard coded and translate ok. I have used a "Typical Hello World" application and added to the code in the //TODO of the WinMain function. The Call...
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824146.3/warc/CC-MAIN-20160723071024-00196-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
4,107
71
https://danpierce.org/2015/02/15/common-sense-killer-the-choice-to-cheat/
code
Common Sense Killer: The Choice to Cheat Cheating has a negative connotation. There comes a point where healthy innovation turns into real cheating. Innovation happens within the rules or changes the rules to make a better future based on learning. Innovation is the context of the old saying of, “if you’re not cheating, you’re not trying”. True cheating is a shortcut that gains a short-term reward at the cost of integrity. It rewards only in the short-term. There is no substitute for hard work and effort over time. That hard work usually includes a best practice cheaters are trying to avoid. Cheaters always discount the future for immediate gratification.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572063.65/warc/CC-MAIN-20220814173832-20220814203832-00282.warc.gz
CC-MAIN-2022-33
671
2
https://ubc.net/content/janne-schaffer-my-music-story
code
There will be a lot of music etc. about the time with Ted Gärdestad, Abba and Björn Json Lindh - and much more - with Jonas Gideon, keyboard and vocals, and Forsa Voices. I am very proud of the Gradelius Scholarship. There will also be many memories from all the years I played on Gotland, including Gåsemora, which I come from. More information: https://gotland.com/events/janne-schaffer-my-music-story/
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710808.72/warc/CC-MAIN-20221201085558-20221201115558-00516.warc.gz
CC-MAIN-2022-49
407
2
http://dione.lib.unipi.gr/xmlui/handle/unipi/9027
code
Ανάπτυξη διαδικτυακής εφαρμογής στη μεριά του εξυπηρετητή, συμβατής με desktop, mobile και tablet συσκευών, για τη διαχείριση αθλητικών δεδομένων αθλητών στίβου, με χρήση της μεθοδολογίας Scrum Agile για την ανάπτυξη λογισμικού Implementation of a cross-device server-side web application for managing and analyzing personal performances for track and field, using Scrum Agile software development methodology This thesis is about developing a web application that keeps statistics about track and field athletes. For the development, the nowadays popular software engineering methodology, agile, will be used, and more specifically scrum. Scrum promotes teamwork, so I will develop the application with another programmer. I will write the code on the server side, create and manage the schema of the database and handle the API. The other programmer will work on the front-end of the application. Agile will be used in every part of the development process - coding, decisions, feedback. The development process will be described in parts, called sprints, according to scrum. Every sprint will be the description of a week’s work. The first part of the sprint will be the sprint planning, where the goals of the sprint will be set, and there will be a short description of the desired outcome that we should have by the end of the week. The second part is called stand up, which is a sum of all the daily stand ups. This is where I declare what I did each day, while working on the application. Usually some decisions that I took, along with the other programmer, are mentioned. Stand ups are also used in order to describe the difficulties that are faced. The third part of the sprints is the sprint review, where there is a report of what worked and what didn’t during the sprint. The outcome is presented and the plans of the next sprint may be revealed. 
By using technologies which are not very familiar to us, developing an application from scratch, and getting feedback from users, we expect agile to be suitable for our case, making communication the main aspect that will help us release a full product. After finishing the development of the application, it is proven that indeed agile greatly helped in the development process. The application got developed in time, all the developed features are useful for its users, and the quality of the service is very good. The small iterations that had a working application as the outcome, every time, helped us get feedback early on, and improve fast. Overall, the agile methodology proved to be appropriate for our case, and quite possibly, much more suitable than any plan-driven process.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250616186.38/warc/CC-MAIN-20200124070934-20200124095934-00077.warc.gz
CC-MAIN-2020-05
2,814
3
https://moe.cat/@ShadowRZ/102969581907012944
code
I'm using a desktop Mastodon app called 'Tootle'. It's a bit... dense? Some padding around the 'toots' would make it look a lot prettier. @omgubuntu isn't it meant to be run on elementary OS? @tromino @omgubuntu I think so as it was designed for that OS. Have fun and play together~
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655882634.5/warc/CC-MAIN-20200703153451-20200703183451-00302.warc.gz
CC-MAIN-2020-29
282
4
https://skunkworks.kangaroopunch.com/explore/projects/starred
code
Cross-Platform Game Development Library Tools for the Foenix F256 line of computers. A retro graphical interactive fiction engine based on JoeyLib, JZip, and Inform. Cross-platform, open-source, self-hostable, Hamachi clone. Build Server for JoeyLib projects. Used by the JoeyDev IDE. A 6502-based computer inside your browser. You can create your own software, save it, share it with others, and generally have a grand time mucking about with one of history's most popular processors. Sokoban for JoeyLib! Simple (and weird) stack-based VM designed to be used with JoeyLib games. DOS MultiPlayer Networking Service. https://discord.gg/AMA8zbYwQD Roo/E, the Kangaroo Punch Portable GUI Toolkit Somewhat Interactive Nostalgic Game Engine
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817033.56/warc/CC-MAIN-20240415205332-20240415235332-00018.warc.gz
CC-MAIN-2024-18
736
11
https://os.mbed.com/questions/3074/Is-there-a-code-for-WS2811neopixel-leds-/
code
Important changes to forums and questions All forums and questions are now archived. To start a new conversation or read the latest updates go to forums.mbed.com. 7 years, 5 months ago. Is there code for WS2811/NeoPixel LEDs that will work with GCC offline? I have found several libraries for the WS2811 LEDs, but they all require assembly code due to strict timing requirements. The GCC assembler won't work with these .s files as they are Keil-syntax files. Has anyone created a straight C/C++ version or a version that will work with the GCC assembler? 7 years, 2 months ago. Depending on your mbed board, my library might work for you. However, it depends on the KL25Z board's DMA peripheral (though the code may work on other Freescale boards; I don't know yet). Library allowing up to 16 strings of 60 WS2811 or WS2812 LEDs to be driven from a single FRDM-KL25Z board. Uses hardware DMA to achieve the full 800 kHz rate without much CPU burden.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056974.30/warc/CC-MAIN-20210920010331-20210920040331-00678.warc.gz
CC-MAIN-2021-39
946
8
https://adamauckland.com/posts/a-completely-unscientific-glance-at-virtualization/
code
TechEmpower recently ran an interesting study comparing speeds of different web frameworks over here I'm going to ignore the framework comparisons. They've been discussed at length on the Web elsewhere. I want to draw your attention to the numbers of the highest performing libraries, on both an Amazon EC2 instance and a dedicated server. Amazon EC2 VM: 37,717 Dedicated server: 213,322 Amazon EC2 VM: 8,838 Dedicated Server: 96,542 Holy cheap hardware, Batman! I was aware virtualization came with performance penalties but by my beer-mat calculations, the dedicated server handled over 5 times as many requests in the first test and over 10 times as many in the second! This is completely unscientific and very finger-in-the-air, so further tests are in order. As a software developer I use virtual machines all the time. They are cheap and convenient, however it appears they may not be ideally suited for heavy production servers. Next time I'm working on releasing a project, I'll be sure to check the expected traffic volume. It might be that the cost of running 10 Amazon EC2 instances is higher than a single dedicated server. In summary, running your web application on dedicated hardware can handle between 5 and 10 times the number of concurrent users as a VM.
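The beer-mat ratios do check out; a quick sketch using only the figures quoted in the post:

```python
# Requests/second figures quoted above (TechEmpower round cited in the post).
ec2_test1, dedicated_test1 = 37_717, 213_322
ec2_test2, dedicated_test2 = 8_838, 96_542

ratio_test1 = dedicated_test1 / ec2_test1
ratio_test2 = dedicated_test2 / ec2_test2
print(round(ratio_test1, 1))  # → 5.7
print(round(ratio_test2, 1))  # → 10.9
```

So "over 5 times" and "over 10 times" are exactly what the raw numbers give, before any controlling for instance size or hardware generation.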
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100909.82/warc/CC-MAIN-20231209103523-20231209133523-00195.warc.gz
CC-MAIN-2023-50
1,272
8
https://www.ruby-forum.com/t/gotchya-with-case-sensativity-and-habtm/53171
code
Here’s a gotcha that cost me a couple of hours. Maybe this will help. I had a model :User in a controller. This was code that pre-dated not needing the model line for most cases. The problem was that elsewhere it was loaded as :user. Apparently, Rails does a case-sensitive compare against what has already been loaded. Because User != user in this comparison, it then loads User.rb. On Windows, anyway, user.rb and User.rb are the same file, so the file is loaded twice. It had a habtm association. habtm renames a method and creates a new method which calls the old one. The second time this happens, the old and new methods will infinite-loop on each other. So watch out for case issues, or files may be loaded multiple times. Hope this helps someone.
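A stripped-down illustration (plain Ruby, not the actual habtm internals) of why applying the alias-and-wrap trick twice produces that infinite loop:

```ruby
# Wrap `save` by aliasing the old method and defining a new one that
# calls the alias by name. Done once, this is fine; done twice (e.g.
# because the file was loaded twice), the alias ends up pointing back
# at the wrapper and the two methods call each other forever.
def wrap_save(klass)
  klass.send(:alias_method, :save_without_wrap, :save)
  klass.send(:define_method, :save) { save_without_wrap }
end

class User
  def save
    :saved
  end
end

wrap_save(User)
puts User.new.save.inspect   # => :saved -- one wrap works

wrap_save(User)              # the "file loaded twice" case
begin
  User.new.save
rescue SystemStackError
  puts "save and save_without_wrap now call each other forever"
end
```

The key detail is that the alias is looked up by name at call time, so re-aliasing makes `save_without_wrap` point at the wrapper itself.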
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541309137.92/warc/CC-MAIN-20191215173718-20191215201718-00392.warc.gz
CC-MAIN-2019-51
723
11
http://www.freeemailtutorials.com/microsoftOutlook2003/emailAccountsOptions/connectionSettings.php
code
Outlook 2003's Connection Settings determine how Outlook will connect to the Internet to send and receive emails, check for updates, etc. The possible connections Outlook will use are those defined in your Windows Control Panel's Network Connections settings. To customize your email accounts in Outlook 2003, go to Tools > Options. Select the Mail Setup tab from the Options dialog, and click E-mail Accounts. Select the email account and click Next. By default, Outlook 2003 will use whatever connection to the Internet is available, but this behavior is configurable, and Outlook can even establish an Internet connection itself. When the mail server is internal (as is often the case in corporate settings), you do not need a direct Internet connection to retrieve your emails from the server - the incoming and outgoing mail servers are themselves connected to the Internet. The default option, Connect using my local area network (LAN), nearly always works, since it instructs Outlook 2003 to follow your Windows settings for connecting to the Internet. This configuration has been set by your System Administrator, or by yourself following your Internet Service Provider's (ISP) instructions. Checking the Connect via modem when Outlook is offline checkbox authorizes Outlook 2003 to automatically establish a connection to the Internet if it finds no available connection. Since it is safer to be offline than online, you should retain control of the Internet connection state by not allowing applications to establish connections on their own. The Connect using my phone line option concerns dial-up connections to the Internet: Outlook 2003 can automatically dial your ISP to establish an Internet connection when it doesn't find one. The same remarks as above apply, and we recommend that you leave this option aside. If enabled, the Connect using Internet Explorer's or a 3rd party dialer option authorizes Outlook 2003 to use a dialer to establish an Internet connection.
If you don't know what a dialer is, you are probably not using one. The Modem section of the Connection tab allows you to choose which modem to use, and edit its global properties (otherwise available from the Windows Control Panel). The Modem section of Outlook's Connection Settings will be disabled (grayed out) unless you checked Connect via modem when Outlook is offline, or have enabled Connect using my phone line. In that case, you will be able to configure Outlook 2003's Modem settings. The Use the following Dial-Up Networking connection drop-down menu displays all Internet connections set up in Windows that Outlook can use to go online. The Properties button opens the selected network connection's Properties window, otherwise accessible from Windows' Control Panel, under Network Connections. The Add button will open the Windows New Connection Wizard, and allow you to configure a new connection to the Internet.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572515.15/warc/CC-MAIN-20220816181215-20220816211215-00638.warc.gz
CC-MAIN-2022-33
2,896
17
http://richardwellum.com/skills-and-technologies/
code
Here’s some information on things I’ve used while helping deliver software projects. It’s by no means an exhaustive list, and will of course change over time. I’m selecting what gets included/excluded based on my perception of what people tend to look for or ask about; if something’s not in here it doesn’t necessarily mean I’m not familiar with it or haven’t used it as part of a project – just that I have to draw a line somewhere, and a list of every tool or NPM package I’ve ever used would be a bit crazy. I also strongly believe in continuing to learn, and that the ability and willingness to learn new things is far more valuable than the current sum total of knowledge. When selecting projects to work on I look for a balance between things I’ve already worked with (and found useful and worthy of working with again), and things that I’ve perhaps not used before but am interested in. I try to stay near the “sharp edge” of the technology curve, experimenting with new platforms and emerging technologies, and combining these where appropriate with more established things based on how they fit the task at hand.

- Microsoft Azure
- Cloud Services
- Blob Storage
- Table Storage
- Azure Service Bus
- Amazon Web Services
- S3 Storage
- SQS Queues
- Microsoft Azure
- ASP.Net MVC
- ASP.Net SignalR
- ASP.Net WebAPI
- Web/Desktop Client
- Octopus Deploy
- SQL Server
- Entity Framework
- Key-value storage
- SQL Server

Strongly Typed Languages

The language I currently enjoy working in the most, and feel most productive in, remains C#. This is especially so since the cross-platform .Net Standard, and latest round of language enhancements. I enjoy working in F#, and am a little envious of some of the language features F# has that haven’t made it across into C# yet – particularly Discriminated Union types, but it feels like C# is evolving in a very healthy direction.
I’ve been delivering projects in C# since .Net 2.0 and all the language revisions in that time, and the single change in that evolution that has made the biggest difference in my mind is the introduction of Lambdas and LINQ; a close second more recently is the async/await model, and I think the next big winner will be the newly introduced pattern matching. The Roslyn compiler is also a major advance for the .Net ecosystem. It has made it far easier for developers to customise their tooling, with analysers and compiler services functionality. This, in combination with the powerful meta-programming in the framework itself, makes it possible to elegantly handle cross-cutting concerns via techniques such as Aspect Oriented Programming. I still use the ReSharper plugin and Visual Studio, but feel it may get edged out by the advances in Roslyn and in other editors, notably VSCode and Atom. I still consider C# my “primary” language, and all the above is only scratching the surface of the things that are possible with the language. My preference is swinging towards functional programming over object orientation, but C# is a multi-paradigm language that handles both well. I’d like to see more support for immutability by default (although there are ways to achieve this with a little effort), and I’d like to see (more) support for different threading/process models – I’d love to see an Erlang-like lightweight process and concurrency model and runtime underneath the C# language. The DotNetCore frameworks represent a major landmark for the .Net framework, and I’m enjoying working with them. The context in which I’m using these in production applications is in AWS Lambda functions. The launch of DotNetCore hasn’t necessarily been the smoothest, with a lot of churn in the release candidates, but I wouldn’t hesitate to go straight for DotNetCore in any new project now. The F# language is Microsoft’s functional language on the .Net Common Language Runtime.
I’m using it in my own personal projects, but haven’t yet had a client project where it’s been used – although I’d like to, in the right circumstances! I’ve been using the Project Euler challenges as a set of problems to solve (just for fun) in F#, which are a nice fit as they are algorithm-heavy. Where F# is ahead of C# is in writing very expressive code, and its approach of immutability by default. The way that dependencies are modelled in F# makes complex or circular dependencies impossible, and the language places a high value on simplicity and being easy to reason about. I also really like the F# type system, especially Discriminated Union types, which make very rich domain models possible, where invalid states of the application can be prevented at compile time. F# makes entire categories of error impossible, which I think makes it very valuable. I think I would introduce F# into a C# environment as a way of modelling the core domain; I’d probably stick to C# for infrastructure code and wireup because the tooling support is better at the moment. It would be nice to see Microsoft treat F# as an equal citizen in the .Net ecosystem; at the moment it is a bit of an outsider, with most tools being developed by the community. I am continuing to build personal projects in F#, particularly for things like modelling board games, which can have intricate rules. I find that F# offers nice ways to express these. The big win in using TypeScript is in codebases that are of significant enough size – which in practice will mean any serious codebase with multiple contributors. It catches a lot of bugs without even having to run code, by type-checking and performing static analysis. It requires a little investment to set up, but delivers a lot of value in enabling teams to work on the codebase with confidence.
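As a sketch of the kind of bug the type system catches (a hypothetical example; the names here are invented for illustration): a tagged union checked exhaustively by the compiler means a misspelled tag or an unhandled variant fails at compile time rather than at runtime.

```typescript
// A discriminated ("tagged") union: the `kind` field tells the
// compiler which shape it is looking at, and the switch below is
// narrowed accordingly.
type Payment =
  | { kind: "card"; last4: string }
  | { kind: "paypal"; email: string };

function describe(p: Payment): string {
  switch (p.kind) {
    case "card":
      return `Card ending ${p.last4}`;
    case "paypal":
      return `PayPal (${p.email})`;
  }
}

// Passing { kind: "crad", ... } or reading p.email in the "card"
// branch would both be compile-time errors.
console.log(describe({ kind: "card", last4: "4242" }));
```

Adding a third variant to `Payment` would then surface every switch that fails to handle it, which is exactly the "bugs caught without running code" benefit described above.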
ES2015/ES2016 and beyond

MicroServices/Service Oriented Architecture

Service orientation and microservices are all about being able to sub-divide a larger system into smaller components that represent logical boundaries within the application. I’ve been working with various applications in this style now, and I’ve seen a lot of stuff that’s worked and probably more that hasn’t; one of the ways that I engage with teams is as a consultant to help with projects in this style that have encountered problems as they evolve. The key to successful delivery of a microservices project is to constantly assess whether the boundaries between components are correct; it is very easy to get these wrong and pay a large penalty for it. Keeping boundaries well-defined, and managing the coupling across these boundaries to a minimum, via strict patterns, is vital. The first service oriented system I was exposed to was a large e-commerce system, handling the processing of customer orders from purchase through to dispatch. One thing that was good about this particular system implementation was that the services that were in place when I started on the project modelled autonomous business divisions that already existed; this meant that the service boundaries and the contracts for interaction between the services were already quite well-defined by the real-world processes the system was enabling. One of the weaknesses, however, was an over-reliance on Commands over Events, which led to an artificial coupling and leakage of domain knowledge between boundaries. I worked with this team in alleviating some of the worst effects of this issue, by replacing Command-centric flows with Event driven alternatives. I enjoy working with service oriented or microservice architectures, but there is definitely an anti-pattern in industry of adopting these where they are simply not needed.
I strongly agree with Martin Fowler’s first law of distributed computing – “don’t do it unless you really know you need to”. I think microservice architectures can emerge naturally with boundaries that are reasonable by building a monolith application first, but with an obsessive focus on minimising coupling, and evolving as and when there are clear components that can be split out. Trying to retrofit service boundaries to a monolith that has not been designed with minimal coupling from day one is far harder, and may not be viable. The other mistake I see is the belief that separately deployed components are automatically not coupled; this fallacy can lead to major problems in the development flow of a team, as the coupling tends to be harder to detect and manage, and the result is instability. When building services I believe in giving each service total autonomy, from data upwards. Communication between services is limited to asynchronous messaging, with no runtime dependency on any other service. This ideal isn’t always possible, and there may be limited exceptions where the pattern has to be broken, but aggressively targeting this ideal and keeping these exceptions to a strict minimum is needed. I think for a team to successfully deliver an application with this architecture, all team members have to have a deep enough understanding of the paradigm to recognise when something will break it, and whenever this is the case the team should huddle to discuss the proposed change. The main technologies I’ve used to deliver microservices architectures have been NServiceBus and Azure Service Bus. I believe in keeping these firmly as infrastructure concerns, and in keeping the core domain pure. It should be possible to easily swap out the framework of an application with minimal change to the application’s core.
I like to use queues (such as MSMQ, AMQP, Azure Storage Queues or Amazon SQS) or streams (such as Amazon Kinesis or a simple feed) as the underlying transit for event communication between services, and HTTP as the transit for commands.

Domain Driven Design

An understanding of Domain Driven Design is almost a pre-requisite to successfully build an application in a service oriented or microservice architecture. Modelling a domain in terms of entities and value objects, aggregates and events is an effective tool for identifying subdomains, which may be candidates for service boundaries – or bounded contexts. One of the main things that is learned/unlearned in gaining an understanding of domain driven design is to flip from thinking about systems with commands, i.e. a linear workflow of each thing telling the next what to do, to events, i.e. things responding to things that have happened. This helps to limit the implied knowledge components have of other components in the system, and hence the coupling, leading to boundaries that are clearer and more autonomous.

Event Driven Architecture

Event driven architecture is a style in which interactions in a system are modelled primarily as rich events, rather than primarily as state. The events in a system describe changes that have happened over time, which roll up into a current state; however it is also conceptually possible to reconstruct previous states of the system from the events. Events represent a particularly nice model for interaction between services, because they are far more decoupled in nature than commands. Events are multi-cast, i.e. many consumers can subscribe to an event, and the publisher of an event does not need to logically know about the identity of the consumers or how they consume the events; it is an infrastructure concern to handle the transit of an event.
If events are distributed via queues, in a bus or broker pattern, the infrastructure will hold a directory of subscribers, but in an event stream, the publisher is completely disconnected from consumers – this has become my preferred approach now when architecting this type of system. I’ve implemented several event-driven service-oriented systems now, including a large e-commerce platform for a major retailer, a healthcare system, and the back-end services of a AAA game title. There is also a parallel between event-driven service architecture and the reactive style of client development, which is a design style I’ve had success with in writing both web and desktop clients.

Object Oriented Programming

The object oriented model is the de-facto standard for the majority of projects, but is often not deeply understood. Object orientation can cause complexity challenges, especially if the wrong abstractions are applied. Designing objects according to well-known principles, such as the SOLID principles, goes some way to mitigating this. One of the criticisms of object orientation is that it is designed by definition to encapsulate, and hence obscure, state, while at its core all programming is about the manipulation of state. It is interesting to me to observe both a shift in popularity towards functional languages, and a shift in the design of traditionally object oriented languages to include more features from functional languages. To make an object oriented design work well, it is critical to separate components cleanly between logical (domain) concerns and infrastructure concerns. Examples of this would be things like the “thin controller” in MVC architectures, or the “ports and adapters” pattern. The job of infrastructure or adapters is simply to marshal raw requests and responses, and transform these into calls into the core domain. This isolates these from each other, and makes for far easier testing.
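The "thin adapter" split can be sketched in a few lines (a hypothetical example in TypeScript; the function names are invented for illustration):

```typescript
// Core domain: pure logic, no knowledge of HTTP, JSON or frameworks.
function quoteShipping(weightKg: number, express: boolean): number {
  const base = 3 + 2 * weightKg; // flat fee + per-kilo rate (made up)
  return express ? base * 2 : base;
}

// Adapter: only marshals the raw payload into a domain call and the
// result back out. No business rules live here.
function handleRequest(rawBody: string): string {
  const { weightKg, express } = JSON.parse(rawBody);
  const price = quoteShipping(weightKg, express);
  return JSON.stringify({ price });
}

console.log(handleRequest('{"weightKg": 5, "express": false}'));
```

Because `quoteShipping` takes and returns plain values, it can be tested directly in memory; the adapter can be swapped for a message handler or CLI without touching the domain.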
One of the most common problems I help teams to solve in object oriented codebases is leaking of infrastructure into the domain. Every project I’ve worked on has used object oriented languages and paradigms for at least some part of the codebase. In contrast to object orientation, functional programming elevates functions to be first-class components. This is often combined with language features like immutable data structures, and makes parallel computing much easier because code is guaranteed to have no side-effects, and functions can be passed around in the same way as data. The separation of state and data feels like the logical continuation of separation of concerns and the single responsibility principle. Pure functions are described as those that have no side-effects or external dependencies, and act solely to transform the data that comes in into data that is passed out. Writing the core domain in particular in this style offers significant benefits; it is much easier to reason about the effects of such functions, and they are almost trivial to test. The trick is to tease out pure functions, pushing infrastructure or side effects as far towards the boundaries as possible, so that the pure area is as large as possible. I believe that with the trends in hardware and platform development towards parallelism and scale, functional programming will continue to grow, and I look forward to embracing it more in the projects I’m involved with. The reactive manifesto is about designing systems that are composed of components that subscribe and respond to events, rather than the conventional model of sequential coordination. There are advantages and disadvantages to this approach – as with any. A disadvantage is that the mental model is different, and it can be harder to directly see workflows – and so reactive architectures work best when there are natural boundaries and subdivisions, rather than simple workflows.
The advantage is that components can be split out, and freed of coupling more effectively than with some other techniques. There is a set of Reactive libraries, across many languages, that offer a standardised version of the patterns involved, the most common of which is the observer pattern. The Reactive libraries deal with the publish/subscribe behaviour nicely, and offer a ready-to-go implementation. They also offer a very rich Linq-style way of interacting with event streams, and this is where the real power kicks in. The Reactive paradigm can be applied nicely to UI design, as an alternative to manually coordinating threads and callbacks. I’ve built a responsive client application in this style, on top of the ReactiveUI library, which offers a way to treat UI events as streams in the model, giving clean separation from the view in a way that is easy to reason about. The Reactive model also gives a nice declarative way of coordinating asynchronous work, which can be used in place of the Task Parallel Library in a C# or .Net context. The advantage is that the abstraction is at a higher level, and the details of coordination and the actions being performed are kept separate. I’ve also used the Reactive paradigm to build a server-side long-polling system, similar to SignalR. The publish/subscribe behaviour of the observer pattern allows a clean way to deal with the connection lifecycles of clients, and with background changes that are to be published. This gives something close to a real-time data response, by broadcasting the event in HTTP responses as it arrives. The actor model is not a new idea by any stretch, but one that is enjoying a renaissance at the moment, as trends in hardware are towards parallelism rather than raw speed. The actor model gives each “actor” its own process model, where it is completely isolated from all others, and can communicate only by messages.
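The mailbox guarantee at the heart of the actor model can be sketched very simply (a hypothetical, deliberately minimal example; a real actor framework adds scheduling, supervision and distribution on top):

```typescript
// A toy actor: messages go into a mailbox and are handled strictly
// one at a time, so the private state is never touched concurrently.
type Message = { amount: number };

class CounterActor {
  private mailbox: Message[] = [];
  private processing = false;
  private total = 0;

  tell(msg: Message): void {
    this.mailbox.push(msg);
    this.drain();
  }

  private drain(): void {
    if (this.processing) return; // never re-entrant
    this.processing = true;
    while (this.mailbox.length > 0) {
      const msg = this.mailbox.shift()!;
      this.total += msg.amount; // state mutated by one handler at a time
    }
    this.processing = false;
  }

  get value(): number {
    return this.total;
  }
}

const actor = new CounterActor();
actor.tell({ amount: 2 });
actor.tell({ amount: 3 });
console.log(actor.value); // prints 5
```

Because all state changes funnel through the mailbox, callers never share mutable state with the actor, which is exactly what rules out the race hazards described above.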
Popular examples include Akka (Java), Akka.Net (.Net port of Akka), Orleans (cloud-based virtual actors) and of course Erlang and Elixir, where the Erlang runtime is built from the ground up to be actor based. Actors are all about concurrency, in that every actor is guaranteed to be single-threaded in handling its messages. This means that entire categories of race-hazard and mutable state problems are simply not able to happen. My main commercial experience with the actor model is in designing and implementing an Orleans-based system. Orleans is a .Net/Azure technology that was famously used in the Halo games for managing online lobbies. The beauty of Orleans is that it gives a familiar programming experience, but offers the very powerful actor model. Each Orleans “grain” (actor) is kept on a server, but the management of those servers and their addresses is managed by the framework. Actors can keep their own state, which they hold both in-memory and in storage. Orleans keeps “warm” actors in-memory – if they have been used recently – and saves their state to wherever you specify when they have not been used for a while, freeing up the compute resources.

SOLID Design Principles

The SOLID principles describe a set of good practices for object-oriented design. They are principles rather than rules, in that there are cases where it makes sense not to follow some of the rules 100% strictly. The SOLID principles tend to produce clarity in design of components, and to result in testable code. The principles are Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion. Each of these has nuances to it, which are frequently misunderstood – perhaps Dependency Inversion is the most commonly misunderstood, because of the prevalence of Dependency Injection libraries and frameworks that don’t always encourage the principle to be applied correctly.
The single responsibility principle is about giving components a single reason for change. By extension, when a single change is made, the number of components that need change to accommodate that change should also be minimised. The open-closed principle is about managing the extension of modules; one way to embrace it is via inheritance, where a class that can be extended by inheritance is closed to modification (the original is unaffected) but open to extension (the inheriting class can customise or add behaviour). The Liskov substitution principle is that implementations of a base-class or interface should be interchangeable. One way to break this principle would be by not implementing some of the methods, or by requiring the calling component to have knowledge about which implementation it is calling into. The interface segregation principle encourages the use of many small discrete interfaces over big interfaces. This is linked to the single responsibility principle in the push for small simple components, but is a little more subtle. If interfaces are broken up to be as small as possible, a class might then implement a combination of these interfaces. The counter-example would be something like the ASP.Net 2.0 Membership class, with lots of methods that any implementer is burdened with, most of which aren’t used. The dependency inversion principle is linked to the dependency injection technique, but often used interchangeably with it – which is incorrect. Dependency inversion is about how dependencies are structured rather than how they are resolved, with the emphasis being on yielding the decisions about dependencies to the host application. Dependency inversion encourages coupling by abstraction rather than implementation – that is, a class that depends on another component should depend on an interface that could be implemented in many different ways, rather than on an implementation directly.
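Dependency inversion in miniature might look like this (a hypothetical example; the interface and class names are invented for illustration):

```typescript
// The domain owns the abstraction it depends on...
interface Notifier {
  send(message: string): void;
}

class OrderService {
  constructor(private notifier: Notifier) {}

  placeOrder(id: string): void {
    // ...domain logic would go here...
    this.notifier.send(`Order ${id} placed`);
  }
}

// ...and the host application decides which implementation to supply.
class RecordingNotifier implements Notifier {
  sent: string[] = [];
  send(message: string): void {
    this.sent.push(message);
    console.log(message);
  }
}

const notifier = new RecordingNotifier();
new OrderService(notifier).placeOrder("42");
```

`OrderService` never names a concrete sender, so swapping email for SMS, or a test double for either, requires no change to the domain class – which is the "yield decisions to the host application" point above.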
There’s a witty observation that while Object-Oriented programming relies on these principles, the same problems are all addressed very simply in the functional programming paradigm by the use of higher-order functions! It’s certainly true that without skill and discipline, and understanding and applying principles such as SOLID, the design of object-oriented systems can be prone to problems. One of the ways I particularly enjoy helping teams is when I get the opportunity to help a team with a project that would normally be out of their comfort zone. For example, I often work with teams whose background is developing desktop applications, who are now tasked with writing a web-based system, or teams who are new to things like service oriented architecture/microservices, or new to a different data paradigm. In these teams, my role is as much coach or mentor as technical, and I treat it as a primary outcome to make sure there is a healthy culture of learning and sharing knowledge in the team. For some teams, this can be a scary thing – they might have slipped into a “comfort zone” for some time, and be out of the habit of learning. In some company cultures, driven by pay rises and fear of redundancy, admitting a knowledge gap can be dangerous, in perception, reality or both. I work with the leadership of these teams to encourage the right mindset of helping the team to learn and grow in a safe environment, driven by curiosity and enthusiasm over fear. In some teams there can be a dysfunctional “hierarchy” that the individuals try to climb, where knowledge is used as a means to “leapfrog” others – and it is essential to break down the motivators behind these unhealthy behaviours and replace them with cooperation and a team that are more than the sum of their parts. An example of a tactic I might deploy in such a team would be to introduce knowledge sharing sessions.
This could be a series of presentations, where I might lead by presenting a topic that will help the team deliver the project, and then the rest of the team take it in turns to present a topic to the team, either of their own choosing or from a list. This is especially effective in teams that aren’t used to sharing knowledge, as it lets each team member be the “specialist” for an area, and share that specialism with others, while there are also clearly members of the team who are sharing their specialism with them. I also enjoy mentoring others, to help them with their career growth. Working with developers who are at an earlier stage in their growth is a lot of fun, because they often have a lot of enthusiasm and desire to learn, and a knack of asking questions that make me re-evaluate my knowledge of a subject. Design patterns are a way for software engineers to label common ways of solving common problems, such that the solution is recognisable by name. Knowledge of design patterns, in the context of when they are applicable, helps engineers to both re-use solutions that are already tried-and-tested, and to create solutions that are easy for others to recognise and understand. Some common examples of design patterns might include the factory pattern, observer pattern, or singleton pattern. From these names alone, most engineers would implicitly understand both the problem and solution approach. The problem that can come from design patterns is their over-use and abuse; when some engineers first become exposed to these patterns there can be a tendency to try to use these patterns for the sake of it, resulting in adding complexity where none is needed.

Test Driven Development

Test Driven Development (TDD) is the discipline of writing code only to satisfy a failing test, and using these tests to drive the development process. The workflow is to start by writing a test, which should fail, and only then writing code to make the test pass.
At this point the code can be refactored, i.e. it can be changed in ways that do not affect its behaviour; as the behaviour does not change, the tests should continue to pass. Test Driven Development doesn’t require any particular style of testing, although it is most effective with tests at component level, with tests that test logic rather than infrastructure. I have successfully introduced TDD to teams who have not used the practice before, by designing and running workshops with the teams based on familiar problems. I enjoy coaching the technique, as a way of changing the way developers think about writing software. In order to apply TDD, it is necessary to think first about how an outcome can be verified, and only then to start implementing it. This change in thinking is for me the largest benefit of TDD, although having a healthy suite of tests that validate the behaviour, and also serve to document it, is also very valuable. Some describe TDD as “training wheels for object-oriented design”, in that the SOLID design principles and testable code are inextricably linked. The SOLID design principles place a high value on well-organised code, with modules with clear separation of concerns, and inversion of dependencies (especially around infrastructure). These principles lead to code that is easy to test, and writing code by writing tests first will guide developers towards following these principles (whether knowingly or otherwise). Test Driven Development offers the most value when applied to logic – in its classical form it is far less useful in scenarios such as user interface development, and infrastructure-heavy development. That said, the TDD mindset of first defining acceptance criteria can always be applied, even if the means of validating those criteria is different.
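A tiny red-green-refactor example of that workflow (hypothetical; the function is invented purely to show the sequence):

```typescript
// Step 1 (red): the assertion at the bottom was written first and
// failed, because basketTotal did not yet exist.
// Step 2 (green): the simplest implementation that makes it pass.
function basketTotal(prices: number[]): number {
  return prices.reduce((sum, price) => sum + price, 0);
}

// Step 3 (refactor): with the test green, the internals can be
// reshaped freely; the test guards the behaviour.
if (basketTotal([3, 4, 5]) !== 12) {
  throw new Error("basketTotal should sum the prices");
}
console.log("green");
```

The point is not the trivial sum but the ordering: the verification existed before the implementation, which forces thinking about the outcome first.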
In a more traditional team model, with dedicated “developers” and “testers”, developers were enabled to treat testing as someone else’s responsibility, and in many programmers the ability to think about outcomes and how to verify them can be under-developed. Learning this skill addresses that deficit. My personal preference is to combine TDD (writing tests before production code) with BDD (writing tests that describe behaviour in the form of scenarios that map to the business domain). I believe in a fairly loose definition of “unit” in “unit testing”, opting for “unit of behaviour” rather than an implementation detail like a class or method. By applying this style, I can collaboratively write scenarios with a business expert, as a way of making sure we have a shared understanding of the problem being solved. NCrunch is a continuous test runner for .Net languages. While other test tools interrupt the workflow so that you can run the tests, NCrunch (and others that are on the market now such as ReSharper’s Continuous Test sessions and Visual Studio 2017’s test features) works by running the tests automatically when changes are detected. NCrunch provides a visual indicator at the bottom of the screen that shows whether any tests are currently failing, and allows quick ways of navigating to these tests. Beyond this, every line is marked by red dots (covered by failing tests), green dots (covered by passing tests) and black dots (not covered by any tests), to show for any component exactly what level of coverage is there. It can process this into coverage statistics, which refresh constantly as code is modified. What I particularly like about NCrunch – especially as a coaching tool when working with developers who are new to TDD – is the workflow it enables. By continuously running tests, the feedback loop is very short – within less than a second there is feedback showing the impact of a change.
This also encourages tests at a sensible level, especially where there is a tendency to mix logic and infrastructure and fall back to integration or automation testing, which results in a visibly worse feedback loop.

Behaviour Driven Development

Behaviour Driven Development (BDD) is a style of test-writing, where tests are expressed as scenarios that describe the behaviour of the system in a way that enables close collaboration with business experts, with no requirement to understand the implementation of the tests. It can be combined with other test-writing techniques such as TDD, and allows closer integration of the specification of features and their implementation.

A nice team workflow is for BDD features to be written collaboratively, with developers, business analysts and quality analysts all providing input. This helps to leverage all the skills of the team, and to build shared understanding. Many times, a team will find that bugs are caught (and fixed) on paper, simply by finding contradictions in the scenarios without a line of code even being written.

There is sometimes a preconception that because automation tests are often written in a BDD style, BDD is a style most applicable to automation testing; however this is simply not true. Behaviour Driven Development is merely a style for expressing the specification of behaviour, which can be used to validate the implementation by any means available.

One thing I like to do is to write BDD tests in such a way that the specifications themselves are re-usable in different tests; only the implementations of the steps change. The scenario does not change, but in an automation test the step is implemented by driving the user interface, while in a component (unit) test the step is implemented by in-memory method invocations (often with mocks or stubs at the boundaries).
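A scenario written in this style might look like the following (the domain here is invented for illustration). The same scenario can back either a component-level test or an automation test; only the step implementations differ:

```gherkin
Feature: Account withdrawal

  Scenario: Withdrawal within the available balance
    Given an account with a balance of 100
    When the customer withdraws 40
    Then the remaining balance is 60
```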
SpecFlow is a .Net BDD framework for writing specifications in the Gherkin (Cucumber) syntax and generating .Net classes to execute these specifications. The SpecFlow library parses the .feature files, generating C# code against a test runner of choice (typically NUnit). It offers almost all of the features of the Gherkin specification, such as tables, tags and the like.

I have used SpecFlow for many years, but keep an eye out for alternatives. The trait I would like to see is a run-time parser of .feature files rather than a compile-time generator; this would mean that .feature files are easier for non-developers to edit outside a code editor, and easier to use because there is a more indirect link between the .feature file and any code. One such example is the TickSpec library, which offers a nice way to do this in either C# or F#.

The patterns I adopt when using SpecFlow in a larger scale project are to tightly scope step definitions, opting for a 1:1 mapping of feature to step set rather than widespread re-use. In my experience, trying too hard to re-use bindings leads to steps that are written around the implementation rather than the requirements, and unnecessary complexity; tightly scoped steps allow more flexibility and avoid having a large library of similarly named steps. Where re-use is desirable, the approach I take is to write a helper class, and re-use this rather than step bindings. The extreme case I've seen is a suite of 3000+ automation-style tests all using a single set of step definitions, at which point breaking this up into manageable chunks had become a very difficult exercise because of the level of coupling between tests.

Load testing is the practice of placing high demands on a system's capability to measure and improve its ability to withstand such conditions. The goal might be to establish whether there are bottlenecks preventing scaling, to understand the scaling behaviour (e.g.
how many users can 1 server support), or to see whether there are any particular problems under heavy load (e.g. deadlocks, timeouts). Performance testing meanwhile is all about measuring the latency experienced by a single consumer, with or without background load. The two are often linked (it is very common for a system's performance to degrade under load), but they can be treated as separate.

Both these types of testing are relatively expensive; they require a deployed environment, and significant time both to implement and to run the tests. They are often neglected completely, which introduces risk to projects, because they are then only considered when it is too late and there is an issue that needs resolution. However, the risk can be partly mitigated by monitoring, to ensure that trends in the system performance from real-life usage are seen and addressed early. The cost of performance testing as a standalone exercise can then be avoided.

A challenge in implementing these types of testing is in defining what outcomes will be measured and how, and in defining what scenarios are to be exercised. If a scenario is too contrived, the result can be meaningless, and mask an actual error. A good example to illustrate this is the "rippled load" scenario: say there is a system divided into separately deployed components, one of which handles user login, and another handles lobby matchmaking for an online game. A real-world event could be a promotion (a press release, or a special offer of some kind) at an announced time. The user flow here would always be to log in, and then to go to a lobby. While these components support automatic scale-out, what will happen is that the login service will respond to its spiked demand, take a while to respond, and fail under inbound load, after which the lobby service will go through the same cycle.
Performance tests that treat these as isolated components and ignore the overarching real-world user workflow will mask this issue. The basic components of these tests are a test client, which generates load and simulates a user interaction scenario, and monitoring, to capture output from the test clients. In order to generate adequate load, these will tend to be cloud-hosted. There are a growing number of off-the-shelf components available, and it is a case of establishing whether any of these can be used rather than resorting to writing something bespoke.

Automation tests rely on the structure of the user interface to find elements to interact with, and one challenge is that every change to the user interface risks breaking these tests. A way round this can be to introduce extra markup solely for the benefit of the automation framework, which has no styling responsibilities, and is managed to be more stable.

In a traditional team set-up, quality analysts or testers and developers form quite separate silos, and automation tests can be sub-siloed to a group of automation testing specialists within the tester group. This results in a set of tests that only a small group of people understand, and only a small group of people pay attention to. Automation testing in this climate becomes a very high-cost, low-value exercise. All members of a team, regardless of specialism, should be able to access and understand the automation tests, including understanding the output. Because of the brittle nature of these tests, they will tend to have a higher failure rate than other tests, and the resolution will often be to fix the tests themselves, but these tests can also be the only way of catching some classes of system-level issue.

The goal in designing a suite of automation tests should be a good return on investment, with low maintenance and implementation costs and high enough coverage.
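One common way of providing that automation-only markup is a dedicated attribute with no styling meaning; the attribute name and selector below are illustrative, not taken from any particular framework:

```html
<!-- data-test-id exists solely for the automation tests; the CSS
     classes remain free to change without breaking any selectors. -->
<button class="btn btn-primary" data-test-id="checkout-submit">
  Place order
</button>
```

A Selenium-style test would then locate the element with a selector such as `[data-test-id='checkout-submit']`, which survives any restyling of the button.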
It is not sensible or viable to try to cover the logical behaviour of the system with automation tests; this can be done far more economically with component-level tests, given an adequate design of the components involved. The aim for automation tests should be to cover a simple journey through each feature area, testing mainly that the components in the workflow are connected together correctly at runtime. This is especially true for distributed systems, where the boundaries between components can be a weak point. The question that should always be asked before writing an automation test is what the intent of the test is, and whether that could be achieved in a better way.

Smoke testing is the lightweight testing of a deployment to ensure that the deployed instance is operational. This is often done via automation, as an end-to-end flow that establishes quickly whether any component is not deployed or configured correctly. Smoke test scenarios should be designed to hit every part of the infrastructure; for example, a smoke test that simply hits a hard-coded HTTP response will tell you whether the HTTP server and network are configured, but nothing about the health of databases or other infrastructure. Smoke tests can either be run after a deployment to give a one-off health indicator of whether the deployment was successful, or regularly to provide a heartbeat for the system. In conjunction with application-level monitoring, a heartbeat can tell you at a glance whether a whole system is offline, or whether there are some partitions that are unavailable.

Azure Service Bus
Amazon Web Services

For scripting in a Windows environment, my preferred choice is PowerShell. I've used PowerShell heavily on various projects, for writing build scripts that can be run both locally and on the build server. In some cases, particularly working with Azure, this extends to deployment scripting.
The Azure PowerShell SDK offers all the commands needed to provision and deploy entire environments; on one particular project one of my focus areas was writing PowerShell scripts to set up and configure all the resources needed from scratch. The result was a set of scripts that could be run on an empty Azure subscription, to create and deploy the full solution against an environment's definition.

Git has become the de-facto standard for source control in recent years, at least partly due to the success of the GitHub platform. Git offers a fully distributed version control system; unlike alternatives such as SVN, there is no central server that is significantly different to other nodes. A big advantage of this is that git is fully functional in an offline environment; the only things that can't be done when offline are pushing or pulling to remote instances, whereas in other version control systems things like branching may need network access.

I've successfully introduced git to several teams who had previously been using tools like SVN or Microsoft's TFS. While the mental model is a little different, and can take some getting used to, it is easy to learn and be productive. I normally choose to use a GUI for most day-to-day operations, because I'm often working with people who are less familiar and showing them a visualisation of what is going on is helpful, and I drop back to the command line for troubleshooting or more advanced tasks.

As a branching/commit strategy, my preference is a continuous integration setup, where all developers work against the main branch most of the time. This preference is only valid if the right circumstances are there to support it; for example, the test suite must be good enough that broken builds can be at least detected, if not prevented.
I often see big problems in teams who try to adopt continuous integration without the prerequisites in place, which leads to chaos; in these instances I switch the team to feature branches or a gitflow model as a temporary solution, to alleviate pressure on the main branch until an adequate test suite and other prerequisites can be put in place.

As a build server, my preference is TeamCity; it is simple enough to use, but powerful enough when customisation is needed. I like to write builds in such a way that the build server itself is a dumb runner of scripts that are included with the source code and can also be run locally; that way there are no surprises, and if anything goes wrong it can be debugged on a developer's machine with no magic.

I often take on setting up the build systems in teams, and I try to apply a lean approach, building only what is needed, but making sure it is built as soon as it is needed. I've seen teams struggle by deferring build pipelines for too long, by which time the solution and the build needed for it have become too complicated. By working on the build pipeline alongside regular code, it can grow incrementally. This also discourages people from making design decisions that place a heavy burden on the build system!

Octopus Deploy is a purpose-built deployment management tool, with good integration with build tools like TeamCity. It is good at managing which versions of components are deployed where, with a dashboard for visibility. Many teams start out by deploying via the build tool, but this can become unwieldy: each environment needs its own set of builds and configuration, and deployment takes up build agents for sometimes quite long periods of time. Octopus Deploy works especially well with .Net projects, with tooling such as OctoPack for building a .Net project into a deployable package.
The most common use cases (installing Windows services, setting up IIS etc.) are catered for by scripts bundled with Octopus Deploy, but by including install scripts inside the Octopus package it is possible to have bespoke deployments relatively easily.

Nuget is the package management system for .Net. It is a way of bundling components into a package format, for sharing either on the public Nuget feed or on internal feeds for custom packages. It is a valuable tool if following a service-oriented approach, as a means of sharing and versioning the contracts between components, and for sharing cross-cutting libraries. Nuget packages are also the input format for tools like Octopus Deploy; it is a well-established format with a means of providing both content, with conventions for things like assemblies, and scripts that run at different times in the package install lifecycle.

I've worked with various forms of SQL databases on almost all projects in some form or another. While NoSQL stores have their place, I think there is very definitely still a place for SQL in many solutions. While NoSQL stores often have quite "spiky" capability sets (they are very good at some things, but cannot do others), SQL is a very solid all-rounder, and it is so well known that any problems have probably been solved before.

SQL is the canonical relational database, built on tables, rows and columns, with tables joined by primary/foreign key. Modelling domains into tables is often done by applying the normalisation approach, aiming to represent each piece of data once in joined tables rather than having duplicated data. The reverse technique, denormalisation, can sometimes be useful in cases where the performance of a join would be unfavourable.

I've mostly worked with Microsoft's SQL Server, with various editions of the main product and the Azure offering. All are based on the standardised language, but there can be nuances; Azure's SQL in particular omits some features.
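The normalisation trade-off described above can be sketched as a pair of joined tables; the schema here is invented for the example:

```sql
-- Normalised: each customer is stored once, and referenced by key.
CREATE TABLE Customer (
    CustomerId INT PRIMARY KEY,
    Name       NVARCHAR(100) NOT NULL
);

CREATE TABLE [Order] (
    OrderId    INT PRIMARY KEY,
    CustomerId INT NOT NULL REFERENCES Customer (CustomerId),
    Total      DECIMAL(10, 2) NOT NULL
);

-- The data is reassembled with a join; denormalisation would instead
-- copy the customer name onto each order row to avoid this join.
SELECT c.Name, o.Total
FROM [Order] o
JOIN Customer c ON c.CustomerId = o.CustomerId;
```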
I’ve also worked with Amazon’s RDS implementation, and with things like SQLite, a minimised version that can run cross-platform on phone hardware for application-level local storage. I enjoy working with the SQL language; it is a very rich and declarative language, very well suited to working with set based logic, and very mature. Sometimes on projects it can be faster to abstract away SQL by using ORM tools, but I quite like using light-weight libraries such as Dapper combined with hand-written queries. Entity Framework is Microsoft’s data access platform; it offers a powerful way of working with data in quite a well-abstracted way, leveraging the power of Linq. I’ve worked with all versions from the first release, with the database-first, model-first and code-first paradigms. What I like about Entity Framework is how quickly functionality can be built out. The performance is reasonable, but normally a hand-written query can be faster. I find it effective to use Entity Framework, and replace specific components with hand-written queries where there is a need (normally for performance). RavenDb is a document store written in .Net. I’ve used versions of RavenDb on several projects, since its early releases. RavenDb is an example of a database that has “right ways” and “wrong ways to work with it, where it can be very nice to work with in its sweet spots but cause problems if it is shoe-horned in to a problem it’s not designed to solve. The key to working with RavenDb is to respect its paradigms. Treating transaction boundaries as documents – rather than trying to build a conventional domain model around entities – and leveraging the powerful indexing capabilities are a good start in getting the best from it. A nice aspect of RavenDb is the developer experience, with things like in-memory test implementations for integration testing of indexes available out-of-the-box. 
I’ve worked with RavenDb as both a primary store, as the main store for business data, and as a secondary store of things like short-lived workflow data. ElasticSearch is a document store with powerful search capabilities provided by indexing the documents, normally in Lucene. The documents being written to the database are analysed at the point of write, so that searching later is against these highly optimised indexes rather than raw data. I’ve worked with ElasticSearch several times as a logging store – in conjunction with things like Kibana. This allows log data to be collected, aggregated, and searched quite effectively, even with quite high rates of data collection. I’ve also worked with ElasticSearch as the backing store for a data search product, where data was ingested from a write-friendly SQL data store into the read-friendly ElasticSearch store. The data structures for these split data stores can then be optimised for the task at hand, giving the best of both. The trade-off in a separate store for reads is that there is a data synchronisation process needed, which can add complexity and cause the data in the read store to be “eventually consistent” – i.e. there can be a delay before writes are shown. Graph databases are very powerful for modelling relationships between data, and traversing those relationships. Whereas a relational database operates with tables and keys, the relationships in a graph are a first-class concern. Neo4J is an example of a graph database that I’ve used in side projects, but not yet used in a commercial project. An advantage of graph databases over relational stores is that the speed of a graph query is related to the number of connections in to/out of each individual node, whereas the performance of a table join can be influenced by the total size of data in the table, and so the performance for a graph query against a large data set that goes through many relationships can be much faster and stay more stable with size. 
Modelling domains with a graph database can be a more natural fit than modelling tables, especially as both the nodes (objects) and edges (connections) can have properties. Connections are also named, and directed, which can lead to a more intuitive and richer data model.

Key-value storage is a highly read-optimised way of dealing with data, so long as that data can be associated with a single key. Document databases are often a specialism of key-value stores, where the value is a richer document rather than a simple value. The trade-off is that aggregation, or operating on more than one value at a time, is significantly more expensive. An example of a key-value store would be something like Redis.

Key-value stores can make effective caches or view model stores, where the value would be expensive to calculate on-the-fly, but can be pre-calculated and stored against a single key for quick retrieval. I've used key-value stores mostly for storing things like user profile data in Azure Table Storage, where there is no requirement to aggregate the data, the data is slow-changing, and the data is always accessed by key (the user's identity).
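The cache/view-model pattern described above can be sketched as a read-through cache; the store interface here is a simplification standing in for something like Redis or Azure Table Storage, and all names are invented for the example:

```csharp
using System;

// A minimal key-value abstraction; real stores such as Redis or Azure
// Table Storage expose an equivalent get/put-by-key surface.
public interface IKeyValueStore
{
    string Get(string key);          // returns null when the key is absent
    void Put(string key, string value);
}

public class ProfileCache
{
    private readonly IKeyValueStore _store;
    private readonly Func<string, string> _buildProfile;

    public ProfileCache(IKeyValueStore store, Func<string, string> buildProfile)
    {
        _store = store;
        _buildProfile = buildProfile;
    }

    // The expensive calculation happens once per key; every subsequent
    // read is a single lookup against the pre-calculated value.
    public string GetProfile(string userId)
    {
        var key = "profile:" + userId;
        var cached = _store.Get(key);
        if (cached != null) return cached;

        var built = _buildProfile(userId);
        _store.Put(key, built);
        return built;
    }
}
```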
Use proper title, summary and description for resources

There are a large number of resources posted on our site, but a big majority of them are not tuned to attract any search engine traffic. What is the use in writing a great resource if no one reads it? It is very important to optimize your resource to attract maximum search engine traffic. The most important points for attracting search engine traffic to your resource are:

1. Title
2. Summary
3. Description
4. Proper use of keywords

The title is the most important part of search engine optimization. Google looks at the title first before it decides whether a page needs to be displayed in a search result or not. Typically, if the terms that people search for are not part of the title, the page will not be shown in search results at all.

- Use a long, descriptive title so that it can contain multiple terms that can be searched by various people. Try to make the title at least 10 words.
- Use a proper sentence as the title. Do not use a few words separated by commas, pipe symbols (|) etc.
- Think about the keywords in your resource that people will usually search for. Try to include that keyword/term/phrase in a meaningful manner in the title.

Take a look at the below image, which is a Google search result page. In the image, you can see that under the title a short summary is displayed (marked in a red box). If you provide a good summary in your resource, that summary will appear in the Google result page. If there is no proper summary, then Google will randomly pick some portion of text and display it there, which may or may not be interesting for the viewers. If the summary is not relevant or important, people may not bother to click on that result and may move on to some other results, losing a visitor to our resource. So, provide a good summary for all of your resources.

* The summary should be at least 2 sentences (3 is ideal).
* The summary can be something like this: In this article, I will explain ......
Also, I have mentioned (topic1, topic2, topic3 etc).

Let us take the resource "How to disable Aero in Windows 7" as an example. A good summary can be like this: "In this article, I will explain how to disable Aero in Windows 7. Even though it is a great feature, sometimes you may want to disable it for performance reasons."

Another example. Article title: "Which is the best Antivirus?" Summary: "In this article, I will explain some of the best antivirus software available in the market. Antivirus is very important to keep your Windows PC safe from malware."

In most cases, you can start the summary with the words "In this article, I will explain ....".

* Do not write everything in capital letters.
* Use proper grammar and spelling.
* Do not use a few comma-separated words as the summary. The summary should be meaningful, complete sentences.

There are hundreds of code snippets posted in our resources section with no description. Such code snippets are very useful for people, but there is no way anyone is going to find them, since Google will not show them in the search results. So, a valid description is very important for any resource you submit. Even in the case of code snippets, there must be 3-4 lines of description explaining what the code does. Remember to include the keywords/terms/phrases a few times in a meaningful manner in the description. You may use the <b></b> tags to make the keywords bold.
Python for Decision Makers and Business Leaders Transcripts
Chapter: Python vs.
Lecture: Python vs. C#

0:00 First step into the ring is Python versus C# and .NET. We're going to take Python and compare it to C# and the overall .NET ecosystem. 0:09 It just so happens I did professional .NET development for, I don't know, 12, 15 years, a very very long time actually. 0:16 And I still do a little bit of C# work on our mobile apps. So we're going to pick on .NET here. This is probably a pretty good stand-in for Java. 0:24 It's not exactly the same, Java and .NET are fairly good competitors. They're kind of on equal footing and in a lot of ways, no, they're not the same 0:32 but they're similar. So if you're also thinking about Java this is probably the closest comparison you're going to get. 0:38 Let's put 'em side by side, go down some of the features that I think are important and compare them. Is .NET open source? Yes. Wait, no. Yes, sort of. 0:47 It turns out some parts of .NET are open source, some of them are not. For example the main .NET Framework, I believe, is not. 0:54 But something called .NET Core, which is a newer version that's cross platform but doesn't do as much, is in fact open source. 1:01 ASP.NET is open source and so on. So some of it is, some of it isn't, it's a bit of a mixed bag. 1:07 Python we've already seen, straight across the board: you can go to GitHub and just get it. It's open source. Is it compiled? 1:14 Sometimes that's an advantage, sometimes that's a disadvantage. .NET, yes, it's compiled, and further JIT compiled. Python is, not really. 1:23 Technically if you look at the internals there's something that would look like compilation too. But it is not in the sense that we're meaning here. 1:29 Is this technology owned and controlled by a company? .NET, yes: Microsoft. Python, no. There is the Python Software Foundation, the PSF.
1:39 They kind of sort of own it and control it. But it's more like that's a legal structure in place 1:43 to be the steward of Python, not in the same sense that, you know, a commercial entity using it as part of their business. 1:51 Is there a strong base class library, or standard library? Yes, .NET and Python both have incredible base class libraries. 1:58 There's definitely one of those. What about building web apps? Are they good at that? There, I would call that mostly a tie. 2:04 .NET has ASP.NET, so not so much variety, but it's really good at building web apps. And Python, we've already seen. There's so many options, 2:12 it's very good, that Flask app we built was great. Any work with databases? Yes, both of these have extremely strong support. 2:19 .NET has Entity Framework. Python has SQLAlchemy. Those are actually very, very similar to each other. And yeah, really great story there. 2:27 Mobile capabilities, .NET is actually extremely strong here. And this is one of the places where Python gets a red mark on its record. 2:34 Python is actually very poor at building mobile apps. It's not that it's impossible, it's just very, very immature. And well, let's just say it's practically 2:45 not something you would choose. On the other hand .NET has something called Xamarin which allows you to write mobile applications 2:52 in .NET that work both on iOS and Android. And that's what we use for our mobile apps. Desktop applications, can you build those in .NET? 2:58 Yeah, there's a couple good options there. WPF, it's good at it, I just don't love the technology that much. 3:04 There's also Windows Forms, which is pretty good. Python, it has Tkinter built in, that's kind of an old, out-of-date mode. Probably the best is Qt, 3:14 that's a really great way to build apps. But it's not as well supported as .NET. Getting to its rank, this one Python definitely shines. 3:23 .NET is number 4 of the languages on Stack Overflow.
That's that graph I showed you, the incredible growth of Python. 3:30 At the beginning .NET is 4, Python number 1. And, leaving the others in the dust. TIOBE, another way that ranks the usage of languages. 3:39 I believe this one has kind of a longer tail effect, in the sense that stuff that was written 15 years ago happens to be written in a language 3:46 that still counts towards this rank. So it's got a much slower leading edge to pick up the changes. Anyway, on the TIOBE programming language rank 3:56 we have 5 for .NET and 3 for Python. And Python is going up there. I believe .NET is, actually, as well. Price, both of these are free. 4:04 You don't have to pay anybody anything for them. You can use .NET and the associated tooling like Visual Studio Code or 4:10 Visual Studio Community edition for free. Same thing, Python, obviously depending on the tools you pick you might buy a paid tool or not, but yeah. 4:18 Python itself is absolutely free. Another important distinction is, is this a general purpose programming language? 4:24 Some things, like R and Julia, Matlab, they are not general purpose languages. You would never go and build YouTube in R. 4:33 But in this case .NET and Python are both very much on par here and they are absolutely general purpose programming languages. 4:39 Finally, the scientific computing level. This is the Jupyter Notebook type of work that we just explored earlier in the data science section. 4:47 Until very recently .NET would have gotten a fail here, I've got it as poor. And Python would of course be very, very strong, as it continues to be. 4:55 But .NET now recently added the capability to have C# and F# type of code in Jupyter Notebooks, so in my world 5:04 that brings them up a little bit. But that does not bring in all these incredible libraries. Like Astropy and the 800 biology libraries that we found on 5:14 PyPI, that is still, I would believe, mostly missing on the .NET side.
So Python definitely wins here but .NET is not as bad as some others.
I'm looking for someone who can manufacture fully custom LED displays - like a seven segment display, only bigger - a single part with custom divisions and LED placements. Something like: Can anyone recommend such a company? I've found some, but I'm hoping someone has had experience with such a company. And it would be preferable to have a low (100-1000) minimum order size.
To check which editor is set as default in your Debian distro you can do two things:

- in your favorite shell press CTRL+X+CTRL+E, which will launch your default editor, or
- use the command update-alternatives --list editor to list installed editors, and you can expect a result similar to this one:

# update-alternatives --list editor

To set, for example, vim.basic as your default editor you can use the following command:

# update-alternatives --set editor /usr/bin/vim.basic
update-alternatives: using /usr/bin/vim.basic to provide /usr/bin/editor (editor) in manual mode

Now if you again quick-launch your editor with CTRL+X+CTRL+E you will notice that vim is invoked.

I do not use vim macros often, but they can be useful from time to time. When I need them I always have to re-learn the shortcuts, so now I will summarize them in a simple example which will create a numbered list from 1. to 50. Start your vim editor by typing:

I always forget how to do this and I need to write it down: Turn on syntax highlighting with: If highlighting looks too dark, then you are probably using a dark background, so set it accordingly to achieve better readability: To make these two changes permanent, edit the ~/.vimrc file and enter: To turn off syntax highlighting do: To go back to a light background do: or delete the lines from the ~/.vimrc file.
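The individual commands the post refers to appear to have been lost from the page; the standard vim commands for each step would most likely be the following (a best guess, not recovered from the original):

```vim
" Turn syntax highlighting on, and tell vim the background is dark:
:syntax on
:set background=dark

" The two lines to add to ~/.vimrc to make the changes permanent:
syntax on
set background=dark

" Turning syntax highlighting off, and going back to a light background:
:syntax off
:set background=light
```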
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662531762.30/warc/CC-MAIN-20220520061824-20220520091824-00561.warc.gz
CC-MAIN-2022-21
1,395
18
https://www.team-bhp.com/forum/gadgets-computers-software/31260-digital-camera-thread-questions-discussions-etc-271.html
code
Originally Posted by navin I know the landmark by what it is called. The Singapore Zoo is the Singapore Zoo. I don't need GPS to tell me that. I assume the "kunzum pass" will be marked too. Or is it that the "kunzum pass" will be called something else or is not marked correctly? I am still confused. I just used that as an example. Let's say you are driving from village A to village B, which are 60 kms apart. In between there are 3 mountain passes. As you drive, you keep clicking pics. With GPS tagging, a year later I will know which pic was taken where. As for Kunzum pass, there is a signboard, but what about pics 100 meters before the signboard, and 100 meters after the signboard? A zoo, a theater etc. is very different. I never geotag such pics. For example, if on my trip I visit a zoo, only the 1 pic taken outside the zoo will have a geotag. Once I am out of my vehicle, and moving around inside the zoo, there will be no geotags. However, even there, if I have a pocket GPS, I can take pics of the animals, and later, each pic will show the approximate location of each animal enclosure! Moreover, this information is hidden in the EXIF. The user is not shown that information unless the user actually goes through the IPTC or EXIF completely. I respect TSK (he knows that), hence I wonder what he knows that I don't. I hate to use technology just for the sake of it (also because I am not very good with technology - at least not as good as the youngsters here). It has to solve a real world problem, otherwise it just slows me down. No slowing down here. All I need to do is run the program, and it does the photo tagging. Very little effort on my part. Ok, 3 questions. a. How does your camera know, and where does your camera log, this information? Is the information appended to the EXIF information? b. When I upload pics from my camera I get a serial number, not a name. How do I get it to say "Singapore Zoo 17/8/10 12:10:07" instead of "DSC-0001"? c. Why would a user want to go to Google Maps?
If I label a pic as "Chushul and Rezang-la War memorials" I assume the user would take that at face value, na? a. The camera only knows the time and date the picture is taken. So before taking pics, I align my camera clock to exactly 5.5 hours ahead of GMT as shown by my GPS clock. b. I also get just a number. You have to look at the EXIF information to see the date and time the picture was taken. EXIF is embedded inside the JPEG (or RAW). Most software programs like Photoshop/Gimp/Canon DPP show complete EXIF. Many other free software programs also show complete EXIF. c. Yes, I can label that. But what if, while driving from Chushul to Rezangla, I climb a narrow uphill unmarked road, and click a pic of an interesting plain? If a user wants to know the exact location I clicked that pic from, the GPS information embedded inside the EXIF will tell the user that. Again, to explain what exactly geotagging is. There are two devices. 1. The Camera - Whenever the camera clicks a pic, it embeds the date and time taken. For example, let's say I took a pic at 2009-09-01T03:35:55Z. This means 1 September 2009 03:35:55 UTC, which is 9:05:55am India time. 2. The GPS - It keeps dumping "Latitude/Longitude/Altitude" with a time stamp into a file. Here is a snippet: <trkpt lat="32.779725000" lon="78.974696700"> <trkpt lat="32.779610000" lon="78.974735000"> <trkpt lat="32.779481700" lon="78.974776700"> <trkpt lat="32.779295000" lon="78.974845000"> Now the software program will load this file and look at my picture. It will see the time. Then it will search for the trkpt which is nearest in time to the moment the picture was taken. That is the GPS location where the picture was taken. At my end, all the extra work I have to do is: 1. Download all pictures to my PC - this we all do. 2. Download the GPX track from my GPS to my PC. 3. Run GPICSYNC, giving it the path to the pictures folder and the GPX file. All pictures are embedded with the GPS information whenever the nearest trackpoint is within 300 seconds of the picture's timestamp.
The 300 sec thingy can be tuned in software. If you want more accuracy, then you can use 50 seconds.
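The matching step described above — find the trackpoint nearest in time to the picture and use it only if it falls within the tolerance window — can be sketched in Python. The data and function names here are illustrative, not gpicsync's actual code; in a real run the trackpoints would be parsed from the GPX file's <trkpt> elements.

```python
from datetime import datetime, timedelta

# Illustrative trackpoints: (timestamp, lat, lon) -- in a real GPX file these
# come from <trkpt> elements with <time> children.
trackpoints = [
    (datetime(2009, 9, 1, 3, 35, 50), 32.779725, 78.974697),
    (datetime(2009, 9, 1, 3, 36, 0), 32.779610, 78.974735),
    (datetime(2009, 9, 1, 3, 36, 10), 32.779482, 78.974777),
]

def geotag(photo_time, points, max_gap=timedelta(seconds=300)):
    """Return the (lat, lon) of the trackpoint nearest in time to the photo,
    or None if no trackpoint lies within max_gap (the tunable 300 sec window)."""
    nearest = min(points, key=lambda p: abs(p[0] - photo_time))
    if abs(nearest[0] - photo_time) > max_gap:
        return None
    return nearest[1], nearest[2]

# Photo taken at 03:35:53 UTC -> nearest trackpoint is the first one (3 s away)
print(geotag(datetime(2009, 9, 1, 3, 35, 53), trackpoints))
```

A photo timestamped well outside the track (say, 04:00:00) returns None, which is why syncing the camera clock to the GPS clock beforehand matters.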
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649343.34/warc/CC-MAIN-20230603201228-20230603231228-00699.warc.gz
CC-MAIN-2023-23
4,085
34
https://www.codetriage.com/akveo/ng2-smart-table
code
When you volunteer to triage issues, you'll receive an email each day with a link to an open issue that needs help in this project. You'll also receive instructions on how to triage issues. Receive a documented method or class from your favorite GitHub repos in your inbox every day. If you're really pro, receive undocumented methods or classes and supercharge your commit history.TypeScript not yet supported Add a CodeTriage badge to ng2-smart-table - renderComponent to include column name - Provide an alternative to change the width of column when the table becomes horizontally scrollable. We have to use :host ::ng-deep and apply width on every column we want to change width. - How can I get access to Ng2SmartTableComponent component inside my class - I want to emit an event on clicking cancel button in ng2-smart table. I cant Find a way to bind - im using ng2-smart-table and i want that cols are linked like <a>!!!!</a> in my smat table ? hw can i change the html of this component plz! - how to insert an nb-icon into a custom cell? - Minimize lodash dependency - how to count the total value of a column, only for the rows which are visible - strict filtering for lists - feat: add custom action renderComponent - TypeScript not yet supported
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330907.46/warc/CC-MAIN-20190825215958-20190826001958-00057.warc.gz
CC-MAIN-2019-35
1,258
14
https://github.com/ednapiranha/the-great-brain/tree/master
code
The Great Brain Project What it is The Great Brain is an experimental project to test out a basic network system using asynchronous calls on Node.js and Express. How it works Currently, the goal is to have as many contributors provide their nodes via basic API calls that send/receive either a 0 or a 1 (boolean). Depending on which value is sent to them, they return HTML content in the format of text, an image tag, a video tag or an audio tag. External Node Requirements Nodes can be in any language and/or framework that the contributor wants to use - the only requirement is that they return JSONP responses for HTML content and boolean requests. To be determined ...
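The node contract described above — receive a boolean, answer with HTML content wrapped as JSONP — might look roughly like this in Python. The payload shape, field name, and default callback name are assumptions for illustration; the project only specifies "JSONP responses for HTML content and boolean requests".

```python
import json

def node_response(value, callback="callback"):
    """Sketch of one Great Brain node: given a 0 or 1, return a JSONP
    string wrapping an HTML fragment. Field names are hypothetical."""
    # A node could equally return an image, video, or audio tag here.
    html = "<p>on</p>" if value else "<p>off</p>"
    return f"{callback}({json.dumps({'html': html})})"

print(node_response(1))                    # -> callback({"html": "<p>on</p>"})
print(node_response(0, callback="brain"))  # -> brain({"html": "<p>off</p>"})
```

Because the contract is just JSONP over HTTP, a node written this way could be served from any language or framework, which matches the project's only stated requirement.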
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107490.42/warc/CC-MAIN-20170821041654-20170821061654-00563.warc.gz
CC-MAIN-2017-34
672
8
https://www.oxxmirrors.eu/en/spoxx-instruction-mirror/oxx-assembly/
code
SPOXX special, VW, OPEL, BMW, etc. Installation instructions for the set-up mirror (Oxx universal). The set-up mirror (oxx universal) may be mounted in the center of the original mirror glass as well as toward the outside of the original mirror, but not beyond the edge of the original mirror. The clamps of the set-up mirror (oxx universal) should be tightened evenly with the supplied Allen key. On a VW Golf or Opel Astra you must press the oxx mirror firmly in place while tightening, otherwise the oxx mirror slides off; using the rubbers is not recommended for this type of vehicle. We strongly advise against using automated and/or power tools here. For
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818072.58/warc/CC-MAIN-20240422020223-20240422050223-00198.warc.gz
CC-MAIN-2024-18
652
5
https://forum.qorvo.com/t/dwm1004c-code-without-dev-board/9261
code
My friends and I are working on a university project. Our main purpose is to implement the Angle of Arrival (AoA) concept using two UWB modules. Hence we have ordered and received 10 DWM1004C modules. However, the modules do not have a development board of their own. The sample code for the DWM1004C involves the DWM1001 dev board, which we do not have. This means we cannot even start the project. My question is: Is there a way to program the DWM1004C using the SWD pins of the processor? Of course, assuming that we will connect those pins to some controller, like an Arduino or Raspberry Pi. This is probably too late to help you, but you should be able to solder a connection directly to the pins of the STM32 named: NRST, SPI, CLK, UART_Tx, and UART_Rx, of course while connected to power and ground. Then you should be able to use software to flash through the connections. The UART pins help to read serial data to verify the flashing worked.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572077.62/warc/CC-MAIN-20220814204141-20220814234141-00282.warc.gz
CC-MAIN-2022-33
932
2
https://mechanicsofmagic.com/2023/06/07/final-class-reflection-ore/
code
CS247g Final Reflection When I was a kid, all I played was video games. I had a burning passion for the medium. When I was in middle school, I pointed my mom to an image of MIT, and told her I had chosen this school, so I could learn coding to one day become the greatest video game designer of all time. That dream got sidetracked quite a bit and I ended up starting my junior year of college burnt out after two lackluster summer software engineering internships. I then had a friend, Umar, whom I had met at Oxford, introduce me to this class. Before this class, I knew when I had felt a good game. I had played the Indie game greats from Outer Wilds to Undertale to Hollow Knight to Super Metroid to Terraria to Fallout New Vegas and Xenoblade (no joke, I have 5,000 hours in my Steam account on Terraria). I had no idea how these worlds were made, but I knew how they made me feel. I sort of thought of game design as a movie director frantically trying to sew together a thousand moving parts into something beautiful right before the player's eyes. I thought of play mostly through the lens of more difficult games like Dark Souls. I wanted to know the thought processes behind greatness, and this class delivered. The class really drew me in from the start. It was awesome, for the first time in my Stanford experience, getting to really connect with other passionate gamers and people interested in getting into games. The first small games we made, like the monster battling, were very fun even though so little initial effort was put in, and I was drawn to the power of the magic circle. The theory readings on Narrativologists vs. Ludologists also stuck with me, as I thought back to my own experiences with games like Earthbound or Xenoblade where most of the game happened through cutscenes. I initially experienced some issues getting into a flow working on a team, especially as our P1 project threw us into a randomly selected team creating a random game design concept.
Luckily, my team was full of hard working students and so we managed to make something excellent. The simple introduction of a creepy soundtrack to our P2 Testable Core immediately scared every single player who set their hands on it. They were transported to the manor despite the limited graphics and design thrown together a couple of hours before the deadline, and players ran around in a frenzy simply because of a royalty free audio clip. This, and juicy week really revealed to me the power of art to affect players and put them into altered states, as well as the responsibility we as game designers have on making sure we steward that ability. I didn’t realize how huge the power of onboarding was. The lecture on Plants Versus Zombies made it click. As I saw our (horror-ish) game playtested by a diverse audience of people who had little to no experience with the genre, it was exciting seeing them yelp, and say aha!, in all the same ways I did when I first started playing games. It made me super nostalgic and misty-eyed and it makes me want to commit to making my games easy to at least pick up and play. I really grew in my ability to prototype my ideas, communicate them with others and collaborate to bring something into the world. I also felt good about the risks I took as I had little-to-no Unity experience when I started coding, and turned that into a full-fledged game. While I was happy being in a supportive role in P1, P2 gave me the opportunity to really see how hard I could work on a career that was a dream since childhood. There’s no cooler feeling than seeing a player struggle and overcome in a world you and a couple of others have spent countless hours trying to bring to life. To see them strategize with their friends or be brought to lows or highs based on concepts you invented. In the future, I want to make more games with cool stories, and inventive mechanics. 
I feel I have the confidence to strike out on my own make things, and am grateful for this class to give me the opportunity to take risks in a safe environment. Thank you so much!
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510671.0/warc/CC-MAIN-20230930082033-20230930112033-00897.warc.gz
CC-MAIN-2023-40
4,080
12
http://textadventures.co.uk/forum/general/topic/6kqfh9kai0i5ml7qsrchug/how-to-change-username-or-delete-account
code
I have decided to start creating my own stories and would like to change my username to the name I usually write under elsewhere. Is there a way to change my username? If not, how can I delete my current account to start over? I think you have to use the main "Contact us" thingy and ask for your account to be deleted. Either way, you'll have to create a new account with the desired username. So, I'd advise just doing that and leaving your current account alone, unless you just want it deleted. (I'm pretty sure you can't change the username on your current account, but I may be mistaken.)
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529839.0/warc/CC-MAIN-20190420140859-20190420162859-00265.warc.gz
CC-MAIN-2019-18
593
5
https://demos.smu.ca/demos/waves/98-standing-waves-and-the-strobe-effect
code
A strobe light can make a vibrating string appear bent and unmoving in midair! WATCH THE VIDEO Other Demos of Interest: Normal Modes of Standing Waves Standing Waves and Resonance - Strobe Effect This demonstration shows the strobe effect. The strobe effect occurs whenever an object in continuous motion - like the vibrating string in this demo - is represented by a series of small sections, much like how videos are just many pictures strung together. In order for the strobe effect to be apparent, the rate at which these 'pictures' are being taken must be close to the rate at which the continuous motion is happening, like the strobe light flashing at the same rate as the string is vibrating. In this demonstration, a standing wave is formed along the string. By vibrating at the right frequency, the waves sent down the string will interfere with each other to form the familiar wave pattern seen in the video. In the video the n = 4 mode was shown, and the frequency was 56.1 Hz. When the strobe light flashes at the same frequency, the string looks like it is not moving anymore. This is because when the strobe light flashes, the string is at the same point of oscillation each time, causing the string to look like it is stationary. When the frequency of the strobe light is slightly lower than the frequency of the string, the strobe light begins to light up later and later sections of the string's oscillation period each time it flashes. This results in the string looking like it is slowly oscillating. In contrast, when the frequency of the strobe light is slightly higher than the frequency of the string, the strobe begins to light up the string at earlier parts of its oscillation, causing the string to look like it is slowly oscillating in reverse. - Taut length of string - Sine wave generator - Mechanical wave driver - Strobe light with adjustable frequency - Set up the apparatus like it was seen in the video.
Using the wave generator, produce a standing wave along the string. - Turn on the strobe light and adjust the frequency until the string appears to be unmoving, bent around in midair. - Try adjusting the frequency slightly up or down. This should result in the string appearing to oscillate in midair very slowly. - Try experimenting some more by changing the frequency of the strobe light and see what happens!
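The apparent slow motion can be quantified: each flash samples the string after f_string/f_strobe oscillation cycles, and only the fractional part of that number is visible to the eye. A small Python sketch (using the 56.1 Hz figure from the video; the folding convention is my own choice of sign for "forward" vs. "reverse"):

```python
def apparent_frequency(f_string, f_strobe):
    """Aliased oscillation frequency seen under the strobe: the fractional
    number of string cycles elapsed between flashes, folded into
    [-0.5, 0.5) cycles per flash, times the flash rate."""
    cycles_per_flash = f_string / f_strobe
    frac = (cycles_per_flash + 0.5) % 1.0 - 0.5   # fold to [-0.5, 0.5)
    return frac * f_strobe

print(apparent_frequency(56.1, 56.1))  # 0.0 Hz  -> string looks frozen
print(apparent_frequency(56.1, 55.9))  # ~+0.2 Hz -> slow forward motion
print(apparent_frequency(56.1, 56.3))  # ~-0.2 Hz -> slow reverse motion
```

This matches the demo: strobe matched to the string freezes it, a slightly lower strobe frequency gives a slow forward drift, and a slightly higher one gives a slow drift in reverse.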
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510888.64/warc/CC-MAIN-20231001105617-20231001135617-00177.warc.gz
CC-MAIN-2023-40
2,344
17
http://www.aikiweb.com/forums/archive/index.php/t-444.html
code
12-10-2000, 02:17 PM I am considering opening a Dojo after the first of the year. This will be my first and I would like to create a list and use it to check and double-check everything before making the next step. For those of you who have Dojos or had them in the past, I would like to ask you for a list of things you had to consider when you started your own Dojos? Thanks for your help in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511122.49/warc/CC-MAIN-20181017090419-20181017111919-00085.warc.gz
CC-MAIN-2018-43
403
3
http://superuser.com/questions/391163/sharing-a-access-database-on-the-web-not-using-sharepoint
code
Is there any other way to have an Access Database shared on the web but not using Sharepoint? I have been looking to figure out if this is a possibility or not. Thanks for your help What do you mean shared on the web? I think Office365 has a net version of Access 2010, called Access Services. You can always hook it up to a web app front end, in whatever tool you know how to program in.
s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738087479.95/warc/CC-MAIN-20151001222127-00128-ip-10-137-6-227.ec2.internal.warc.gz
CC-MAIN-2015-40
388
4
https://valohai.com/blog/author/ari-bajo/
code
What did I Learn about CI/CD for Machine Learning Most software development teams have adopted continuous integration and delivery (CI/CD) to iterate faster. However, a machine learning model depends not only on the code but also the data and hyperparameters. Releasing a new machine learning model in production is more complex than traditional software development. Classifying 4M Reddit posts in 4k subreddits: an end-to-end machine learning pipeline Finding the right subreddit to submit your post can be tricky, especially for people new to Reddit. There are thousands of active subreddits with overlapping content. If it is no easy task for a human, I didn’t expect it to be easier for a machine. Currently, redditors can ask for suitable subreddits in a special subreddit: r/findareddit. Production Machine Learning Pipeline for Text Classification with fastText When doing machine learning in production, the choice of the model is just one of the many important criteria. Equally important are the definition of the problem, gathering high-quality data and the architecture of the machine learning pipeline. Scaling Apache Airflow for Machine Learning Workflows Apache Airflow is a popular platform to create, schedule and monitor workflows in Python. It has more than 15k stars on Github and it’s used by data engineers at companies like Twitter, Airbnb and Spotify.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.74/warc/CC-MAIN-20230322180852-20230322210852-00149.warc.gz
CC-MAIN-2023-14
1,380
8
https://brutalistwebsites.com/intosunrise.co/
code
IntoSunrise (Rebecca Nuvoletta) Q: Why do you have a Brutalist Website? A: My heart has a soft spot for the 1990s intersection of internet and techno, especially the simple black-and-white rave listing style. Around 2011 I got into producing underground events in Brooklyn, where I fell in love with the thrill of the hunt for the hidden. IntoSunrise is a side project I started to fund my interactive art. Q: Who designed the website? A: I designed this Q: Who coded the website? A: I did; I coded this in much the same way I did when I was a teenage girl stuck on an Alaskan island in the late 90s with a computer and a modem. Q: With what kind of editor?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00182.warc.gz
CC-MAIN-2024-18
650
8
http://news.sys-con.com/node/3103111
code
By Marketwired | June 10, 2014 10:00 AM EDT WATERLOO, ON -- (Marketwired) -- 06/10/14 -- Thalmic Labs today reveals the final design for its gesture control device, the Myo armband. The state-of-the-art industrial design is a thin, expandable band, which is nearly half the thickness of the Myo Alpha units that were given to select developers and partners over the past six months. At less than 95 grams, the final Myo armband weighs less than the average male wristwatch. As the device requires direct contact with your upper forearm, this design allows you to wear it comfortably under clothing, all day long. "Our team has put countless hours into creating the sleek design we're showing today, as well as the technology inside of it," said Stephen Lake, CEO and Co-founder of Thalmic Labs. "This final design is rugged, while also being lightweight, making it easy for the Myo armband to become a part of our everyday lives." Product development of the Myo armband began in the spring of 2012. Over the past two years, the look of the product has evolved from early 3D-printed prototypes to the production design shown today. The technology has advanced at a rapid rate as well. With an incredibly talented team of engineers brought together from across the world, Thalmic Labs developed a new type of muscle activity sensor from the ground up, made countless advances in gesture recognition algorithms, and developed a robust one-size-fits-all industrial design that will accommodate ages twelve and above. The Myo armband is the first gesture-control technology of its kind. Instead of relying on cameras or voice control, the Myo armband measures electrical activity in your muscles, giving you the ability to wirelessly control and interact with computers and other digital consumer products around you using simple, natural movements. The device also has a highly sensitive 9-axis motion sensor to detect all of the motions and rotations of your forearm.
Thalmic Labs' developer program has piqued the interest of over 10,000 developers, who have applied to create applications that are compatible with the Myo armband. Using the Myo API, third-party developers can configure the Myo armband for an endless number of applications using a Bluetooth connection. Among other things, the Myo armband can be used as a controller for home automation and entertainment, to seamlessly replace presentation remotes, to interact with video games and heads-up displays, and as a tool for controlling robotic devices and toys. Thalmic Labs will begin shipping pre-orders of the Myo Developer Kit next month, with the remaining, non-Developer-Kit, pre-orders shipping in September. All pre-order units will have final hardware. The product will then become widely available for real-time purchase in the fall of 2014, in time for the holiday season. The Myo armband is currently available for pre-order for US $149 at thalmic.com/myo/preorder/. Uproar PR for Thalmic Labs 321-236-0102 x 228
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285315.77/warc/CC-MAIN-20170116095125-00105-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
10,419
41
http://search.sys-con.com/node/1275501
code
By Yakov Fain | February 6, 2010 01:00 PM EST Yesterday, I finished my dinner in a French restaurant with the traditional crème brulee. This time I also ordered a small glass of Sauternes wine. Then we went to our friend's house to follow it with some good old port. But no matter what software developers drink or eat in February 2010, one way or the other the conversation will slide into a No-Flash-Player-on-iPad discussion. Apple pretends that they will never allow Flash Player on Steve's OS (SOS), because it's buggy. Adobe's CTO, Kevin Lynch, states that Apple doesn't cooperate. After the third round, I made a statement that when the dust settles, everyone will thank Steve Jobs for forcing Adobe to make Flash Player better and faster, which is a win-win situation for all application developers. My drinking buddy responded that Adobe has a tiny group of hard-core developers who work on Flash Player, have a deep understanding of its internals, have the status of sacred cows, and Kevin Lynch can't put pressure on them regardless of what Steve says or wants. When I hear about any prima donnas in IT, I get easily excited. I believe that if any developer on any IT team starts exhibiting the prima donna symptoms, there's only one solution to this disease: s/he has to be fired. My opponent was not so sure and replied, "You can't fire the entire team". Don't get me wrong, I'm not saying that the Flash Player team has prima donnas nor that Adobe's management can't control them... Actually, can you give a better explanation than this for why the fix for the bug that caused Flash Player crashes was not deployed in production for more than a year? Does it take Steve Jobs to have a product manager openly admit that they didn't pay enough attention to Flash Player bugs? Will it be different from now on? Anyway, after a couple of old ports it was interesting to dig into this direction a bit deeper. I told my friends a story that happened with my friend Gregory ten years ago.
Back then he had several gas stations in our state of New Jersey. You may not know this, but NJ drivers are not allowed to pump gas themselves. You just pull up to a pump, the gas attendant stops by, and you say, "Fill up, Regular please". At least I've said the same phrase for the last fifteen years - I lease cars and don't buy premium gasoline. Gregory had about 20 attendants working for him. All of them were relatives from some Asian country. They were self-managed, low-maintenance, and hard-working people. One day, the leader of the clan came to Greg and demanded raises for all of them. Greg refused. Then the envoy said, "If you won't raise our pay, we'll all quit." Greg quietly responded, "Go back and tell everyone that all of you are fired as of this very moment."

Greg had to temporarily lock his gas stations - he went to South Jersey, where the pay was lower, and hired and relocated 20 new gas attendants. Greg has balls. Yes, he lost money, but he didn't bend to blackmailers who believed they were irreplaceable. You'd be surprised, but the situation in the job market for gas attendants is very similar to what I see in IT. It's a pretty small world, all the local recruiters know you, and employers require references from your previous place of work. Two weeks later, the blackmailer came back to Greg begging to be hired back, but it was a little too late.

No, I don't think that developing Flash Player is as easy as pumping gas. But the source code of the latest build of Flash Player is safely stored in a central repository, and if, for any hypothetical reason, Adobe executives needed to replace the entire team, they could do it within a month or so. There are so many brilliant programmers in this country, you wouldn't believe it. Sorry, Flash Player folks, for using your team to illustrate my attitude toward prima donnas in IT. I believe that you did a great job with this VM (trust me, I have something to compare it with).
But our conversation about your team did take place yesterday, and I've openly shared it with my readers. Yes, there is always room for improvement, but I'm sure there are plenty of non-technical reasons for the current situation on Mac OS and SOS. I simply don't like prima donnas. Plus Sauternes. Plus the old port...

|Yakov Fain 03/21/10 04:53:00 PM EDT|

@roche It looks like you didn't get my message and analogies. I know that Apple simply doesn't want Flash Player on the iPhone regardless of how good or bad the product is. I also know that Adobe has good engineers, but I don't see that they have much support from management. By support I mean providing enough resources to deliver software of superb quality. Your statement about "internal assessments of Adobe's management by its own engineers" is great, but show me the money. Why in the world does it take two years to release the next version of Flex? Why did Adobe substantially raise the licensing cost of LCDS, leaving in the dust those IT shops that had started using it? Being a democratic and cool executive is nice, but not good enough. They need to make the right decisions to earn better external assessments too.

|roche 02/10/10 05:24:00 AM EST|

Perhaps the greatest problem with the Apple/Adobe conflict is how many people grant Apple the high ground in the discussion. Adobe isn't being deprived of access to Apple products because of quality. It's being deprived of access to Apple products because that's what Apple does. First of all, consider the business diplomacy issues. Adobe wants access to Apple's platform, so it cannot be forthcoming with its retorts. If you read between the lines, Adobe's response is always "our quality isn't an issue, and our customers are asking for access". This is very much a guarded statement, staying polite and ambivalently taking the higher ground. Now, consider Apple's track record. Apple isn't a software company; it's a hardware company that runs proprietary software.
To save some reading, suffice it to say that Apple has never enabled an OEM to install its OS or products (save iTunes & Safari), reaping the benefit of a constrained support base. Compare that to Microsoft. As maligned as their products are, you can install Windows XP on any machine from a multi-processor server down to a netbook. It supports everything. Apple's game is to keep the hw/sw relationship tightly controlled. Taking that knowledge to their iPod/iPhone/iPad family of products, consider what else they fail to support. Anything available on PC/Mac via browser plug-ins is not supported in the iP*'s Safari. Java, Flash, etc... None of it is supported. Now consider the balance of Apple's business. There was a time when QuickTime took the bulk of online video market share away from Real and MS. Then came Flash. Now, QuickTime is a piece of history rather than the authoritative online video platform, and Apple hates that. The iP*'s video is all QT; they've even painstakingly ported it for YouTube streaming consumption. Those scars are still relatively fresh, and this is the first high ground Apple has had over Adobe since.

So, the bottom line is that Flash isn't supported on Apple's portable products because Apple wants it that way. Because it makes business sense to stay polite, Adobe is just reiterating indisputable facts about customer demand and its own bug stats. Apple is at fault for making this a shooting match. With due respect, your article documents a tactic that might have worked with gas station attendants, but not with the architects, implementers, and testers of a hugely popular platform with a fully functional API. I'd question your perspective on management and professionalism, given your "burn it down" policy toward what you perceive as prima donna engineering teams. Read glassdoor.com's internal assessments of Adobe's management by its own engineers.
People are harping on them for not making Flash a bigger product. It seems that no one with first-hand perspective or an empathetic mindset would feel the need to destroy an engineering organization to prove a point about salaries or process.
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869647.75/warc/CC-MAIN-20150124161109-00201-ip-10-180-212-252.ec2.internal.warc.gz
CC-MAIN-2015-06
19,611
71
https://link.springer.com/chapter/10.1007/978-1-4419-7251-4_41
code
General Conditions for Interpretable Results

In principle, any technique that separates bound from unbound ligand, or occupied from unoccupied protein, can be used to monitor binding. As a general rule, the highest accuracy is achieved when the concentration of the protein in binding studies is approximately equal to K d (within about one order of magnitude either way); this makes it easier to determine the concentration difference between bound and total ligand. Since the supply of protein is usually a limiting factor in research, binding studies are expensive and the sample size must be reduced as much as possible. Considerable progress has been made in this direction over the years.
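The "protein concentration ≈ K d" rule can be made concrete with the standard 1:1 equilibrium P + L ⇌ PL. A minimal sketch (the function name and the illustrative concentrations are mine, in arbitrary units):

```python
import math

def bound_ligand(p_total, l_total, kd):
    """Equilibrium [PL] for 1:1 binding: the physically meaningful root of
    Kd = [P][L]/[PL] combined with mass balance on protein and ligand."""
    s = p_total + l_total + kd
    return (s - math.sqrt(s * s - 4.0 * p_total * l_total)) / 2.0

kd = 1.0
l_total = 1.0
for p_total in (0.01, 1.0, 100.0):   # protein far below, at, and far above Kd
    frac = bound_ligand(p_total, l_total, kd) / l_total
    print(p_total, round(frac, 3))
```

Near p_total ≈ Kd the bound fraction sits mid-range (about 0.38 here), so the difference between bound and total ligand is largest and easiest to measure; far below or above Kd it saturates toward 0 or 1 and small measurement errors dominate.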
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210463.30/warc/CC-MAIN-20180816054453-20180816074453-00642.warc.gz
CC-MAIN-2018-34
690
2
http://mathforum.org/kb/message.jspa?messageID=9389110
code
Edric M Ellis <email@example.com> wrote in message <firstname.lastname@example.org>...
> "Christian " <email@example.com> writes:
>
> > Path: news.mathworks.com!not-for-mail
> > Newsgroups: comp.soft-sys.matlab
> > Subject: histc for gpu
> > Date: Sat, 25 Jan 2014 21:28:07 +0000 (UTC)
> > Organization: The MathWorks, Inc.
> >
> > Hi,
> >
> > I hope to use histc on CUDA, but it's not a supported function. So I tried a workaround because I'm actually just interested in the second output argument:
> >
> > [~,ind] = histc(x,x_grid);
> >
> > is more or less equivalent to the following code (except for points outside the grid).
> >
> > ind=ones(size(x));
> > for j=1:length(x_grid)-1
> >     ind(x>=x_grid(j) & x<x_grid(j+1)) = j;
> > end
> >
> > But this latter code is much slower than histc, probably due to the loop. For instance, for my inputs the first code requires 0.03 seconds, the latter needs 1.8 seconds.
> >
> > Is it possible to speed up the second code so that I can still run it
> > on CUDA?
>
> I haven't profiled this, but you can run just the binning part of HISTC
> on the GPU using arrayfun and up-level variables to access the bins:
>
> %%%%%%%%%%%%%%%%%%%%
> function test()
>
> % Test inputs
> q = gpuArray.randi([1 10], 1, 1000);
> edges = 1:8;
>
> % Implementation
> nEdges = numel(edges);
>     function b = bindata(x)
>         b = 0;
>         if x == edges(nEdges)
>             b = nEdges;
>         else
>             for idx = 1:(nEdges-1)
>                 if x >= edges(idx) && x < edges(idx + 1)
>                     b = idx;
>                     break;
>                 end
>             end
>         end
>     end
> b = arrayfun(@bindata, q);
>
> % Check against the CPU
> [~, bcheck] = histc(gather(q), edges);
> assert(isequal(b, bcheck));
> end
> %%%%%%%%%%%%%%%%%%%%
>
> Cheers,
>
> Edric.

Thanks, Edric. I have some trouble getting your code running.
I've set up a function file:

function b = bindata(x,edges)
nEdges = 100;
b = 0;
if x == edges(end)
    b = nEdges;
else
    for idx = 1:(nEdges-1)
        if x >= edges(idx) && x < edges(idx + 1)
            b = idx;
            break;
        end
    end
end
end

I changed nEdges=numel(edges) to just the size of edges because the arrayfun command didn't like numel. But still, when running the code from my main file using

indx = arrayfun(@bindata, xpgpu, x_gridgpu);

I receive the message

Error using gpuArray/arrayfun
Indexing is not supported.

for lines 4 and 8.
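For reference outside MATLAB, the semantics the thread is reproducing — histc's second output: a 1-based bin index for each sample, 0 for out-of-range values, with the last edge mapping into the final bin — can be sketched in plain Python (the function name and sample data here are mine):

```python
def histc_bins(x, edges):
    """MATLAB-style histc bin indices: 1-based, 0 for out-of-range samples."""
    ind = []
    for v in x:
        b = 0
        if v == edges[-1]:
            b = len(edges)                  # x == edges(end) maps to the last index
        else:
            for j in range(len(edges) - 1):
                if edges[j] <= v < edges[j + 1]:
                    b = j + 1               # MATLAB indices start at 1
                    break
        ind.append(b)
    return ind

print(histc_bins([0.5, 1.0, 3.7, 8.0, 9.2], [1, 2, 3, 4, 5, 6, 7, 8]))
# [0, 1, 3, 8, 0]: 0.5 and 9.2 fall outside the grid, 8.0 equals the last edge
```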
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945222.55/warc/CC-MAIN-20180421125711-20180421145711-00163.warc.gz
CC-MAIN-2018-17
2,262
6
https://www.101apps.co.za/index.php/itemlist/tag/Compare
code
I was busy doing some Android coding and needed to compare two dates. The result of the comparison had to show whether the dates were the same or whether one was greater than the other (or smaller). Having stored the dates as integers in a SQLite database, I extracted and converted them into long objects. It was these long objects that I had to compare. As the long objects were accurate timestamps in milliseconds (including hours, minutes, seconds, and milliseconds), this posed a problem: two timestamps from the same day would not register as equal if they differed by even a millisecond! The solution was to first convert them into a format that only represented the day, month and year.
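The post is about Android/Java, but the fix is language-independent: drop the time-of-day component before comparing. A sketch of the idea in Python (the timestamps are made-up examples):

```python
from datetime import datetime, timezone

def to_day(epoch_millis):
    """Truncate an epoch-millisecond timestamp to its calendar date (UTC)."""
    return datetime.fromtimestamp(epoch_millis / 1000, tz=timezone.utc).date()

morning = 1571650200000   # 2019-10-21 09:30 UTC
evening = 1571690100000   # 2019-10-21 20:35 UTC -- same calendar day
print(to_day(morning) == to_day(evening))   # True once the time of day is dropped
```

In the Android/Java setting the same truncation can be done by zeroing the hour, minute, second, and millisecond fields of a Calendar, or via java.time.LocalDate on newer API levels.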
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987769323.92/warc/CC-MAIN-20191021093533-20191021121033-00220.warc.gz
CC-MAIN-2019-43
712
2
http://www.osnews.com/permalink?437963
code
Servers need 99% boilerplate code to be executed? I don't think so. Simple configuration wins. How would you love to have full control over the processes spawned, via control groups? How about automatic service recovery? A fully detailed report of why the service failed, if it did? And yes, the systemd developers are not recommending that everyone convert. Even in Fedora 14, only a few default packages ship with native systemd service files and the rest of the system continues to use sysv init scripts. If anyone wants to try it out, they are free to include both. Exclusivity is not needed.
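The capabilities mentioned (cgroup-based process control, automatic recovery, declarative configuration) show up directly in a unit file. A hypothetical minimal example — the service name and binary path are invented, and exact directive support depends on the systemd version:

```ini
# /etc/systemd/system/example.service -- a hypothetical sketch; names are illustrative
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/bin/example-daemon
# Automatic service recovery: restart the process if it exits abnormally
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Because systemd runs each service in its own control group, every process the service spawns stays tracked, and `systemctl status` can report exactly why a failed service died.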
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218193284.93/warc/CC-MAIN-20170322212953-00040-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
610
2
http://filesharingtalk.com/threads/63166-Overnet-Lite-Pro
code
How come you see far fewer sources with Overnet than with eMule? Is there any way to change that? Can't Overnet connect to the eMule network? I've noticed that although there are far fewer sources, the speed is much better. But for some files I have no sources at all, whereas I had some on eMule. And, as you've probably realized, those downloads are then not fast at all - actually they don't download at all.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988717963.49/warc/CC-MAIN-20161020183837-00455-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
364
4
https://www.bleepingcomputer.com/forums/t/274335/vista-x64-thumb-drive-insert-bsod;-now-cant-connect-internet/
code
Situation: I inserted a thumb drive into the front USB port on my PC, and then the computer crashed. After resetting the computer, it would no longer access the internet. The network connections screen shows that I am connected to the internet; however, IE, FF, and Thunderbird will not connect. I attempted to update NOD32, but no dice. The weird thing was that Windows Update worked. Please help.

Failed attempts to fix:
- reset computer
- system restored to a few hours prior
- reset router
- went into Device Manager, discovered an issue with the SM Bus Controller; downloaded and reinstalled the chipset drivers from Dell
- uninstalled the network card, then reinstalled it
- completed a Windows update
- uninstalled Java & Flash (these guys have caused me more problems over the years than I care to recount)
- ran Dell diagnostics from the boot menu

I could be wrong, but because of the way it happened I suspect some sort of hardware issue; unfortunately I am unable to figure out what it could be. While waiting for a reply I will run NOD32 just in case, though. Any advice would be appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510853.25/warc/CC-MAIN-20181016155643-20181016181143-00389.warc.gz
CC-MAIN-2018-43
1,050
11
https://fornoob.com/how-many-moles-of-cao-form-when-98-60-g-caco3-decompose/
code
How many moles of CaO form when 98.60 g CaCO3 decompose?

CaCO3 = CaO + CO2. We are given the amount of CaCO3 to decompose when heated. This will be our starting point.
98.60 g CaCO3 (1 mol CaCO3 / 100.09 g CaCO3) (1 mol CaO / 1 mol CaCO3) (56.08 g CaO / 1 mol CaO) = 55.25 g CaO

0.9851 mol. This is the right answer, just did it.

1) CaCO3 ---> CaO + CO2
2) Molar ratios: 1 mol CaCO3 : 1 mol CaO : 1 mol CO2
3) Number of moles of reactant: 98.60 g CaCO3
3.1) Calculate the molar mass of CaCO3: 40.08 g/mol + 12.00 g/mol + 3*16.00 g/mol = 100.08 g/mol
3.2) Calculate the number of moles using the formula: number of moles = mass in grams / molar mass
=> number of moles = 98.60 g / 100.08 g/mol = 0.9852 mol of CaCO3
4) Using the theoretical molar ratios, you know 1 mole of CaCO3 produces 1 mol of each product, so with 0.9852 mol of CaCO3 you will obtain 0.9852 mol of CaO, which rounded to 3 significant figures is 0.985 mol.

0.9852 moles of CaO.
Reaction equation for the decomposition of CaCO₃: CaCO₃ → CaO + CO₂
The question asks how many moles of CaO form when 98.60 g of CaCO₃ decompose. We can see from the reaction equation that for every mole of CaCO₃, one mole of CaO will be produced (molar ratio 1:1). So first we need to calculate how many moles the 98.60 g of CaCO₃ are:
Molar mass of CaCO₃ = molar mass Ca + molar mass C + 3 * molar mass O = 40.078 + 12.011 + 3 * 15.999 = 100.086 g/mol
Moles of CaCO₃ = mass CaCO₃ / molar mass CaCO₃
Moles of CaCO₃ = 98.60 g / 100.086 g/mol = 0.9852 moles CaCO₃
As we said before, for every mole of CaCO₃, one mole of CaO is produced. So the decomposition of 0.9852 moles of CaCO₃ will produce 0.9852 moles of CaO.

Hey, I think it is going to be 0.985 moles, because the number of moles of CaCO3 that decompose is equal to the number of moles of CaO produced. Now it's just a matter of finding the number of moles of CaCO3: number of moles = mass / relative molecular mass.
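The arithmetic in the answers above can be checked in a few lines; the atomic masses follow the values used in the fourth answer:

```python
# CaCO3 -> CaO + CO2: moles of CaO from 98.60 g of CaCO3 (1:1 molar ratio)
molar_mass_caco3 = 40.078 + 12.011 + 3 * 15.999   # Ca + C + 3*O, in g/mol

mass_caco3 = 98.60                                 # grams given
moles_caco3 = mass_caco3 / molar_mass_caco3
moles_cao = moles_caco3                            # one mole of CaO per mole of CaCO3
print(round(moles_cao, 4))                         # 0.9852
```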
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583423.96/warc/CC-MAIN-20211016043926-20211016073926-00034.warc.gz
CC-MAIN-2021-43
1,929
28
https://blogs.infosupport.com/presentation-and-demo-materials-sdc-sessions/
code
Over the past few days I gave some sessions at the Software Developers Conference. As I promised the audience, I'm posting the presentations and the demos on my blog, so here goes:

The demo files can be found here. To run the demos, just unzip the file and open the .sln files found in the directories. The folders are numbered in demo sequence so you can match each demo with the presentation.

Follow my new blog on http://fluentbytes.com
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00649.warc.gz
CC-MAIN-2023-50
447
5
https://www.slashgear.com/ubuntu-linux-maker-canonical-targets-smartphones-tablets-and-smarttv-31191959/
code
The smartphone world is dominated by Apple and Android, with Windows Phone playing a distant third and all others far behind the top three. Canonical, the maker of Ubuntu Linux, is now saying that it will take on the top three firms in the smartphone and smart TV realm with a version of its Linux software. I don't see any win in this for Linux. We have seen repeatedly that iOS and Android are very entrenched, and even Windows Phone is having a very hard time competing. I see very little hope for another OS in the market we have today. The tip that Ubuntu is heading to the new platforms comes from Mark Shuttleworth, the founder of Canonical and Ubuntu. The move to mobile devices will come after the latest 12.04 version of Ubuntu and the Unity desktop environment is stable, polished, and ready for home and business users. Then Ubuntu will target other platforms. Do any of you see a chance for Ubuntu in the smartphone and tablet market?
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363437.15/warc/CC-MAIN-20211208022710-20211208052710-00561.warc.gz
CC-MAIN-2021-49
960
3
http://virtualization.sys-con.com/node/2457907
code
|By Marketwired .|| |November 26, 2012 03:01 AM EST|| WASHINGTON, DC -- (Marketwire) -- 11/26/12 -- FASTCASE, the case for speed, has unveiled a new case that dramatically improves the connection speed and performance of the iPad®. Consumers and executives use the iPad, the most popular tablet in the world, in a host of new scenarios. FASTCASE has developed patented technology and a unique design that have been tested and proven in labs certified by the Federal Communications Commission (FCC) to enable the iPad to run as much as 5X faster while optimizing signal strength. Consumers and executives now utilize the iPad for both everyday functions and business needs. So they require faster, more consistent performance to access the Internet, and to upload and download cloud-based apps. Enterprise executives now have a new opportunity to get ahead of the game in today's real-time business environment. Verified by third-party research including FCC-certified labs and WIRED® magazine, FASTCASE offers the following benefits: - Improves cellular signal strength by up to 10X over any other case, including the Apple® SmartCase® - Enables up to 6.6X better Wi-Fi reception compared to other leading cases - Can improve range on both cellular and Wi-Fi networks - Allows for more consistent connections with less dropped data - Speeds Wi-Fi uploads by up to 4X and downloads by up to 5X - Reduces wireless radiation exposure by up to 88 percent below the FCC safety limit. "Tablets are now fully integrated into our lives whether at work or play," said Drew Wilkins, Vice President of Marketing, FASTCASE. "A connection that is both fast and steady is essential for consumers because most iPad Web apps rely on it. FASTCASE is the only case on the market today that addresses the need for on-the-go productivity and consistent speed. Anyone who needs iPad apps to perform quickly will find that FASTCASE is an indispensable tool." 
The 'Human Sensor' Performance Problem

iPads with cellular technology contain a 'human sensor' designed to power down the iPad's cellular antenna when it gets too close to the human body. Apple designed this solution to comply with FCC safety limits for cell phone radiation exposure. The problem for consumers, however, is that the iPad's sensor cannot distinguish a case from the human body or any other solid. So all other cases on the market (except FASTCASE) trigger this sensor and cause iPads to limit their transmitted power by up to 90%. The result: any iPad with a protective case is limited to running at as low as 10% of optimal power and performance. FASTCASE is uniquely designed to be compatible with the human sensor, allowing the iPad's performance to be fully realized while reducing wireless radiation exposure by up to 88 percent below the FCC safety limit. In order to function properly, iPad applications such as Gmail®, Facebook® and Skype® depend on a strong signal and steady connection. When iPad apps work faster, users can work more productively. Poor reception, however, can interfere with all kinds of important activities -- from on-the-go presentations to the ability to share files or check email.

Your iPad Can Go Faster

FASTCASE allows the iPad to work at optimal signal strength, so that executives can work faster. Engineered for signal intelligence, FASTCASE is embedded with passive antennas that can improve connection, signal, and reception. For once, iPad users who want a protective case can use their tablets the way that Apple intended. FASTCASE is manufactured by Pong Research Corporation, the pioneer in radiation-controlling mobile cases. Pong cases are renowned for their ability to redirect, diffuse, and redistribute potentially harmful wireless radiation away from users' bodies. Pong's cases also improve device connectivity, and can boost performance and conserve battery life.
FASTCASE is specifically designed for the iPad Wi-Fi + Cellular series, enabling faster device speed and performance while reducing exposure to radiation. To learn more about FASTCASE, please visit http://fastcasenow.com/. FASTCASE engineers its patented cases to optimize the signal strength and increase the speed and efficiency of the iPad while reducing users' wireless radiation exposure. Designed by scientists and proven in FCC-certified labs, FASTCASE products are the new solution for executives needing 24/7, fast access to their cloud-based apps. Visit us at http://fastcasenow.com/, or find us on Facebook at www.facebook.com/fastcasenow.
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257832942.23/warc/CC-MAIN-20160723071032-00257-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
14,534
60
https://blogs.msdn.microsoft.com/alimaz/tag/mcm/
code
As you might know, the Microsoft SharePoint documentation team is offering a series of dedicated design sessions throughout the week of October 3rd during the SharePoint Conference 2011 in Anaheim, CA. I am very privileged to drive these sessions along with other well-known folks in the community such as Spence Harbar and the rest of… Read more

Azure, Open Source and ... As you might know, I've been working with Spence and the rest of the SharePoint 2010 MCM delivery team on creating the material. We had a very successful upgrade rotation back in May and the first full rotation will start on July 12 (good luck to all the attendees). I should also mention that Brett… Read more

I just wrapped up an intensive three-week MCM training up in Redmond and wanted to share a couple of posts on the program from colleagues who attended the R2 (Beta), for folks who are interested: Master Training: Are You Ready? My Microsoft Certified Master experience. Certified Master for SharePoint 2007 R2. As soon as I… Read more
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863949.27/warc/CC-MAIN-20180521043741-20180521063741-00501.warc.gz
CC-MAIN-2018-22
1,009
4
https://stackoverflow.com/questions/31378763/hide-the-left-sidebar-of-derek-eders-fusion-table-template?noredirect=1
code
How to hide the left sidebar of the Fusion table template? At mobile widths, the left sidebar shows up at the top of the map. How do I hide it inside the collapsible navbar? I have a couple of custom filters, and it would be nice to be able to hide them on mobile. Otherwise, you need to scroll down to see the map, which is not user friendly. Thanks! Ok, but that only solves part of the problem. The left sidebar should show up inside *div class="collapse navbar-collapse"*, so that when the user clicks the navbar, it pops up. Like this. I just found out that this website has the same functionality, that is, it shows the left sidebar in the desktop version and collapses the sidebar in the mobile version. And the related stackoverflow question. But the accepted answer only collapses the sidebar without putting it at the top in the mobile version; instead, it puts it at the left. I am not sure whether it is trivial to collapse the left sidebar and show it at the top in the mobile version?
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143455.25/warc/CC-MAIN-20200217235417-20200218025417-00227.warc.gz
CC-MAIN-2020-10
976
7
http://rcrebels.eu/pca-face-recognition-ppt.html
code
Turk and Pentland, Face Recognition Using Eigenfaces, CVPR 1991. Fisherfaces: Belhumeur et al., PAMI 1997. Presented by Miss Chayanut Petpairote.

PCA is a general dimensionality-reduction technique. It preserves most of the variance with a much more compact representation, which means lower storage requirements (the eigenvectors plus a few numbers per face) and faster matching. The motivating question: given m points in an n-dimensional space, for large n, how does one project onto a low-dimensional space while preserving broad trends in the data and allowing it to be visualized? For faces there are simply too many possible appearances to store them all.

The trick is to rotate the coordinate axes. Suppose we have a population measured on p random variables X1, ..., Xp. From the k original variables x1, x2, ..., xk, PCA produces k new variables y1, y2, ..., yk:

y1 = a11 x1 + a12 x2 + ... + a1k xk
y2 = a21 x1 + a22 x2 + ... + a2k xk
...
yk = ak1 x1 + ak2 x2 + ... + akk xk

such that the yk are uncorrelated (orthogonal), y1 explains as much as possible of the original variance in the data set, y2 explains as much as possible of the remaining variance, and so on. The new basis vectors (the new directions onto which the data is projected) are the eigenvectors of the covariance matrix. Note that in a change of basis the vector (1, 1) is longer than (1, 0) or (0, 1), hence the corresponding coefficient is smaller.

PCA method, step by step: (1) take the data; (2) subtract the mean to obtain zero-mean data; (3) calculate the covariance matrix (in the worked two-variable example, the non-diagonal elements of the covariance matrix are nonzero, so x and y vary together); (4) calculate the eigenvectors and eigenvalues of the covariance matrix, Av = lambda v, where lambda is an eigenvalue and v an eigenvector. The eigenvector with the largest eigenvalue gives the direction of largest variance. The variance in each eigenvector direction is lambda_i, so we keep the top k directions and require that their summed variance surpasses, say, 90% of the original variation, representing the data with K components where K << N.

Key property of the eigenspace representation: given two images that are used to construct the eigenspace and their eigenspace projections, distance in eigenspace is approximately equal to the correlation between the two images. Recognition therefore finds the closest labeled face in the database, a nearest-neighbor search in the K-dimensional space.

Recognition with eigenfaces: process the labeled training images; find the mean and the covariance matrix S; find the k principal components (eigenvectors of S) u1, ..., uk; project each training image xi onto the subspace spanned by the principal components, (wi1, ..., wik) = (u1^T xi, ..., uk^T xi). Given a novel image x, project it onto the same subspace: (w1, ..., wk) = (u1^T x, ..., uk^T x).

Testing steps: preprocess the testing image and transform it into its eigenface components; the weights describe the contribution of each eigenface in representing the testing image.

Problem: the size of the covariance matrix A. Suppose each data point is N-dimensional (N pixels); then A is N x N and the number of eigenfaces is N. For N = 256 x 256 pixels, A becomes impractically large, so the matrix cannot be formed explicitly.

Eigenfaces summary in words: eigenfaces are the eigenvectors of the covariance matrix of the probability distribution of the vector space of human faces; they are standardized face ingredients derived from the statistical analysis of many pictures of human faces, and a human face may be represented as a weighted combination of them.

Reconstruction: with P = 4, 200 and 400 components, where the eigenfaces are computed using the 400 face images from the ORL face database.

Limitations: as a global appearance method it is not robust at all to misalignment and not very robust to background variation or scale. The background can be de-emphasized, e.g., by multiplying the input image by a 2D Gaussian window centered on the face. If the scale changes, recognition performance is very bad, and face images must be fairly clear and unoccluded. Open question: how to predict, synthesize or match with novel views?

Applications and uses: data visualization and data reduction.
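The PCA recipe in these slides (zero-mean the data, form the covariance matrix, take its eigenvectors, project) can be sketched in plain Python. The toy two-variable data below is a standard tutorial data set, not the deck's own numbers, and the closed-form eigendecomposition used here applies only to a 2x2 covariance matrix:

```python
import math

# Toy 2-D data (illustrative values, standing in for the slides' x/y table).
data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
        (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1),
        (1.5, 1.6), (1.1, 0.9)]

# Step 1: subtract the mean so the data is zero-centred.
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n
centred = [(x - mx, y - my) for x, y in data]

# Step 2: covariance matrix [[a, b], [b, c]] of the centred data.
a = sum(x * x for x, _ in centred) / (n - 1)
b = sum(x * y for x, y in centred) / (n - 1)
c = sum(y * y for _, y in centred) / (n - 1)

# Step 3: eigenvalues of a symmetric 2x2 matrix, in closed form.
disc = math.sqrt((a - c) ** 2 + 4 * b * b)
lam1 = (a + c + disc) / 2   # largest eigenvalue = variance along PC1
lam2 = (a + c - disc) / 2

# Eigenvector for lam1 is proportional to (b, lam1 - a); normalise it.
vx, vy = b, lam1 - a
norm = math.hypot(vx, vy)
vx, vy = vx / norm, vy / norm

# Step 4: project each point onto the first principal component
# (the scores y1 = a11 x1 + a12 x2 from the slides).
scores = [x * vx + y * vy for x, y in centred]

# PC1 should capture most of the total variance.
explained = lam1 / (lam1 + lam2)
print(round(explained, 3))  # prints 0.963
```

The 90%-of-variance rule from the slides corresponds to checking `explained` against 0.9 before discarding the remaining components.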
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867904.94/warc/CC-MAIN-20180526210057-20180526230057-00188.warc.gz
CC-MAIN-2018-22
4,100
6
https://quasima.com/chronotracker
code
Quasima Chrono Tracker is a time tracking application for the Windows desktop. It allows you to keep track of multiple projects and categorize how much time you spent working on each of them.

Is Quasima Chrono Tracker a new application on the market? No. Quasima Chrono Tracker is the successor of applications formerly known as MapleXp and Quasima Time Tracker. All users of those applications are kindly invited to install Chrono Tracker, which is now in constant development and brings new features from version to version.

Do I need any license to install and use Quasima Chrono Tracker? No. Chrono Tracker is a free application for both personal and commercial use.

What are the system requirements for running Chrono Tracker? Chrono Tracker runs on top of Microsoft .NET Framework 4. The minimal version of Windows which allows this framework to be installed is Windows XP with Service Pack 3.

I have a question or problem with Chrono Tracker. Can you help? Definitely! Do not hesitate to write to us and ask further questions or share your ideas.

Please use the provided web installer to install Quasima Chrono Tracker. All necessary components will be acquired from the Internet, so you have to stay connected until this process completes. You can also download the portable version, which does not require installation and can run from a removable drive. After it has been installed, Chrono Tracker will be updated automatically as soon as new versions are released. Depending on your system security settings, Windows may ask you to confirm the installation process.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370520039.50/warc/CC-MAIN-20200404042338-20200404072338-00192.warc.gz
CC-MAIN-2020-16
1,556
14
https://www.lg.com/ca_en/support/product-help/CT20098071-1403583756397-buttons-do-not-respond
code
UNRESPONSIVE BUTTONS – CHILD LOCK

Is the Lock (Child Lock) activated?
- It is normal for "CL" to be displayed in the control panel if the Child Lock function has been turned "on" (Child Lock: lock function).
- When the Child Lock is activated, buttons will be unresponsive. To operate the product, the function must be deactivated.
- Press and hold the button(s) to that effect for 3 seconds to lock or unlock.

How to use the Child Lock
※ A preventive measure to keep children from playing with the product, accidentally turning it "on" or even starting a cycle.
▶ Press both buttons which have a lock symbol between them for 3 seconds.
▶ "CL" is displayed.
▶ To unlock, press them again for 3 seconds.
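As a rough illustration, the lock behaviour described above amounts to a small state machine: a 3-second hold on the lock buttons toggles the lock, and every other button is ignored while it is active. The class and method names here are invented for the sketch and do not correspond to any real LG firmware:

```python
class ChildLock:
    """Toy model of the control-panel Child Lock described above."""

    HOLD_SECONDS = 3  # press-and-hold time from the manual

    def __init__(self):
        self.locked = False

    def display(self):
        # "CL" is shown in the control panel while the lock is active.
        return "CL" if self.locked else ""

    def press_button(self, name):
        # While locked, ordinary buttons are unresponsive.
        if self.locked:
            return "ignored"
        return f"{name} pressed"

    def hold_lock_buttons(self, seconds):
        # Holding both lock-symbol buttons for >= 3 s toggles the lock.
        if seconds >= self.HOLD_SECONDS:
            self.locked = not self.locked

panel = ChildLock()
panel.hold_lock_buttons(3)           # activate Child Lock
print(panel.display())               # CL
print(panel.press_button("Start"))   # ignored
panel.hold_lock_buttons(3)           # hold again to unlock
print(panel.press_button("Start"))   # Start pressed
```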
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517682.16/warc/CC-MAIN-20190418141430-20190418163430-00321.warc.gz
CC-MAIN-2019-18
669
12
https://www.internations.org/munich-expats/forum/looking-for-flat-or-room-or-roommates-718424
code
Looking for a flat or room or roommates.. :) (Munich) I'm looking for a flat or room at a reasonable price in Munich, preferably the northern part (Schwabing, Freimann, etc.) or in the center (hahaha). I'm looking for something long term. For the moment I live a bit outside of München, which is ok, but I'd like to move into the city. Or if you are also one of those poor guys who can't find a proper place, then let's join our forces and search for a bigger flat together! :)
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591837.34/warc/CC-MAIN-20180720213434-20180720233434-00005.warc.gz
CC-MAIN-2018-30
461
4
https://fssnip.net/authors/Erling+Hellen%C3%A4s
code
The Microsoft tutorial Walkthrough: Creating an Asynchronous HTTP Handler did not describe how to use IHttpAsyncHandler from F#. It was also a bit complicated, because it did not show how to do it from Visual Studio. Here is the Visual Studio F# version.

1. Create an empty ASP.NET Web Application. Call it FSharpHttpAsyncHandler.
2. Add an F# library project to the solution. Call it FSharpHttpAsyncHandler.Lib.
3. Add the following code to Library1.fs in FSharpHttpAsyncHandler.Lib.
4. Add a reference to System.Web in FSharpHttpAsyncHandler.Lib.
5. Add a reference to FSharpHttpAsyncHandler.Lib in FSharpHttpAsyncHandler.
6. Add the following to Web.config in FSharpHttpAsyncHandler.
7. In the Web tab of the project properties of FSharpHttpAsyncHandler, set Start url to http://localhost:

Posted: 4 years ago by Erling Hellenäs
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103324665.17/warc/CC-MAIN-20220627012807-20220627042807-00714.warc.gz
CC-MAIN-2022-27
826
3
http://my2iu.blogspot.com/2006/05/exporting-transparent-gifs-from-java.html
code
After spending a while trying to create paletted PNG files with transparency that work in Internet Explorer (after much trying, I couldn't coax ImageMagick to create such a file, though I was able to use GIMP to do it), I decided that GIFs were the only practical file format to be used. But it's hard to get a Java program to output GIF files. Because of the whole patent licensing issue, there aren't really any good, full-featured, unencumbered GIF exporters for Java. But there are various good bits and pieces on the 'net, which can be pieced together to make something reasonable. First, you need some code that can actually encode the GIF file. The US government has some code that can do this (offered as part of NIH's ImageJ). All you have to do is download ImageJ and grab their ij.io.GifEncoder class. The ImageJ code only accepts paletted data as input though. Since most images are in 24-bit colour, you need to perform a colour reduction somehow. Fortunately, someone went and took the colour quantizer used in ImageMagick and rewrote it for use in Java. It uses an octree for its colour quantization, which is a technique described in much detail elsewhere on the Internet. The quantization code needs to be reworked a bit to properly handle transparent pixels, but fixing that isn't too much work. A more important fix though is that the code performs an unnecessary optimization to save memory, which results in poor quality quantization. In the constructor for Cube, it sets the tree depth to log_4 of max_colors. Instead, the tree depth should just be set to the maximum depth of 8. Although the original ImageMagick did limit the tree depth to log_4 of max_colors, they used another optimization elsewhere in the code to reduce the effect of that limitation. Once you combine these two pieces of code together (you have to change some of the APIs to handle transparency and other things), then you end up with a very reasonable GIF exporter.
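The depth fix described above is easier to see with a sketch of how an octree quantizer indexes a colour: each tree level consumes one bit from each of R, G and B, so capping the depth at log_4(max_colors) discards low-order bits that a full depth-8 tree would keep, and nearby colours collapse into the same node. This is an illustrative model of the indexing scheme, not the actual ImageJ or ImageMagick code:

```python
def octree_path(r, g, b, depth=8):
    """Octree child indices for an 8-bit RGB colour: one bit of each
    channel per level, most significant bit first (standard octree
    colour-quantization indexing)."""
    path = []
    for level in range(depth):
        bit = 7 - level
        child = (((r >> bit) & 1) << 2) | (((g >> bit) & 1) << 1) | ((b >> bit) & 1)
        path.append(child)  # child index is always in 0..7
    return path

# Two colours that differ only in their low-order bits...
c1, c2 = (200, 120, 33), (201, 121, 32)

# ...collide in a shallow tree (depth 4 = log_4(256) for 256 colours),
print(octree_path(*c1, 4) == octree_path(*c2, 4))  # True

# ...but a full depth-8 tree still tells them apart.
print(octree_path(*c1, 8) == octree_path(*c2, 8))  # False
```

Setting the tree depth to the maximum of 8, as the post recommends, keeps such colours in distinct leaves until the tree is explicitly pruned back to the palette size.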
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660258.36/warc/CC-MAIN-20190118172438-20190118194438-00270.warc.gz
CC-MAIN-2019-04
1,961
5
http://cmu-crowd.blogspot.com/2013/
code
Title: Leveraging Crowds to Inject Perception-oriented Feedback into the Visual Design Workflow
Speaker: Brian Bailey
Date: Wednesday, December 4
Room: GHC 8102

There is rapidly growing interest in leveraging crowds as part of individual creative workflows. In this talk, I will describe the concept, implementation, and evaluation of Voyant, a system that leverages a non-expert crowd to generate perception-oriented feedback from a selected audience as part of the visual design workflow. The system generates feedback related to the elements seen in a design, the order in which elements are noticed, impressions formed when the design is first viewed, and interpretation of the design relative to guidelines in the domain and the user's stated goals. An evaluation of the system showed that users were able to leverage the generated feedback to develop insight and discover previously unknown problems with their designs. This type of system has the potential to tighten feedback cycles in design practice and contributes to the growing movement of data-driven design methods. The talk will conclude by outlining intriguing pathways for future work and by highlighting some challenges of using crowds to build end-user applications.

Brian Bailey is an Associate Professor in the Department of Computer Science at the University of Illinois, where he has been on the faculty since 2002. He conducts research and teaches graduate and undergraduate courses on user interface design and human-computer interaction. Dr. Bailey was a visiting researcher at Microsoft Research in 2008-2009. He earned a B.S. in Computer Science from Purdue University in 1993 and an M.S. and Ph.D. from the University of Minnesota in 1997 and 2002, respectively. His research interests include creativity support tools, design studies, crowdsourcing, and attention management. He holds affiliate academic appointments in Human Factors, the Beckman Institute, and the Graduate School of Library and Information Science.
Dr. Bailey received the NSF CAREER award in 2007. His research has been supported by the NSF, Microsoft, Google, and Ricoh Innovations.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347387155.10/warc/CC-MAIN-20200525001747-20200525031747-00051.warc.gz
CC-MAIN-2020-24
2,135
5
https://ebooks.stackexchange.com/questions/8920/drm-ing-mobi-files-amazons-way
code
I cooperate as a sysadmin with a publishing house. They publish standard books and ebooks. Ebooks are watermarked. Now they want to add a library service (lending ebooks for a short time period). They asked me to investigate the possibility of DRM-ing mobi files exactly the way Amazon does it (to their proprietary azw format). I know there's a way to remove Amazon DRM (I checked it and it works great), so my question is whether it is possible to "protect" mobi files that way (without Amazon's participation). Or maybe you know if Amazon has an extra API service for protecting ebooks (only protecting, without selling)? I know perfectly well what you think of DRM in general :), I'm asking because I promised to research the topic. Thanks for any hints!
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510707.90/warc/CC-MAIN-20230930181852-20230930211852-00540.warc.gz
CC-MAIN-2023-40
754
5
https://subscription.packtpub.com/book/programming/9781849518048/1/ch01lvl1sec11/installing-on-linux-with-code-blocks
code
You can install openFrameworks not only on Ubuntu, but also on the Debian and Fedora versions of Linux. See the installation guides at http://www.openframeworks.cc/setup/linux-codeblocks/. Also, you can use the Eclipse development environment instead of Code::Blocks. See http://www.openframeworks.cc/setup/linux-eclipse/. The installation steps are as follows:

1. Install Code::Blocks. In the main menu in Ubuntu, click on the Dash home icon, search for Ubuntu Software Center, and open it by selecting the Ubuntu Software Center icon. Search for Code Blocks there. The Code::Blocks program should be the first item listed. Click on the Install button and follow the instructions.
2. Download the openFrameworks archive. Go to http://www.openframeworks.cc/download/ and download openFrameworks for Code::Blocks (Linux), for a 32- or 64-bit operating system. The downloaded archive should be named something like of_v0.8.0_linux64_release.tar.
3. Unpack the downloaded file; the result will be a folder containing openFrameworks. Move the folder to any location on your computer.
4. Now install openFrameworks by running some scripts from the Terminal. Please refer to http://www.openframeworks.cc/setup/linux-codeblocks/ for detailed instructions.
5. Follow steps 4, 5, 6, and 7 from the Code::Blocks (Windows) section for running an openFrameworks example.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473824.13/warc/CC-MAIN-20240222161802-20240222191802-00541.warc.gz
CC-MAIN-2024-10
1,345
8
https://docs.paloaltonetworks.com/traps/4-2/traps-endpoint-security-manager-admin/forensics/forensics-overview/forensic-data-types.html
code
When a security event occurs on an endpoint, Traps can collect the following types of forensic data:

- Contents of memory locations captured at the time of an event.
- Files that are loaded in memory under the attacked process, for in-depth event inspection, including: DLLs (including their paths), relevant files from the Temporary Internet Files folder, and open files (executables and non-executables).
- PE image files that are loaded on the system at the time of a security event.
- Network resources that were accessed at the time of the security event, and uniform resource identifiers (URIs). The Traps agent can collect accessed URIs from Internet Explorer and Firefox browsers only; when an event occurs that is related to other browsers (for example, Microsoft Edge), you will not be able to access URI data for further analysis.
- URIs, including hidden links and frames of the relevant attacked threads.
- Java applet source URIs, filenames and paths, including parents, grandparents, and child processes.
- Collection of URI calls from browser plug-ins, media players, and mail-client software.
- Information about ancestry processes (from browsers, non-browsers, and Java applet child processes) at the time of a security event, including separate sources and destinations for thread injection.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00102.warc.gz
CC-MAIN-2021-39
1,288
29
https://www.freelancer.com/job-search/css-html/
code
Discover the ultimate list of free design resources online with more than 120 tools and websites that offer free design resources.

I am designer already customize a theme using HTMl CSS Bootstrap website look like flipkrt I need a developer who can do everything except payment gateway . it's start up project looking developer who from India Kolkata

Hello, I'm looking for a PHP expert for my web site: [login to view URL] I want to change my site's some of parts. The person who will do the work must know HTML, CSS and PHP at least. Please send me your offers. Thank you.

...using the WP Bakery Page Builder after you are finished, and that your code does not break down due to activities in the WP Bakery Page Builder. You have to be skilled in HTML, CSS, PHP and Wordpress. I will have a look at the websites you have built and when your bid is accepted you will receive the requirement specification from me. Looking forward

...using vuejs You must Vue Expert at least 3 years experiences. Totally 63 pages please check attachment You are only designing the front end and converting it into functional HTML. No CMS is needed. You are doing nothing on the back end. Backend will be done after ui/ux design. If you have experience in Laravel or Express as well, You will have a greater

...maya-gaia subdomain hosted at [login to view URL] are written in HTML5/CSS. All 100 or so web pages in the 3 subdomains hosted at [login to view URL] plus the webpages in one subdomain of my [login to view URL] - are written in 'old-school' HTML. The developer is responsible to acquire a SSL certification and convert

CSS/HTML JS/Jquery work on Wordpress Reviews ([login to view URL] and [login to view URL]) using Google Materialize design and also to incorporate: Category summary at top based upon total reviews (categories to be provided) Search reviews functionality Slider 0 to 10 instead

Make our website mobile friendly with CSS media query Pages to work on : shared Header/Footers [login to view URL] [login to view URL]@/Rembrandt-Van-Rijn [login to view URL]@@/8YDJKC-Night-Watch [login to view URL] [login to view URL][Champ20_EN]+contains+%22Books%22

I have design a chrome extension with: - 3 main screen: Login/Account/History - 3 popup screen : Call/ Keypad/ ...Keypad/ Forward Call Width of every screen is 300px, Height depend on content Requirement: Transform those design into html/css with: - 3 main screen: can using bootstrap - 3 popup screen: have to make your own css file and include in html

Looking for an expert who can convert HTML to fully dynamic and responsive for all browsers and all devices Wordpress website.. Strictly no In-Line CSS nor Hard codings not any Debug . Very neat and clean webiste expert only keep in mind i need one Student and Writer Dashboard too where student can check how's their orders are going on and writer's

...some functionalities) for our startup in Moscow, Russia. Preferred programming platform: Python, Django We do have a front-end developer. The front end will be provided on HTML/CSS/JS. WIll need a backend integration, though. Project description: 1. Similar to [login to view URL] 2. Our services will have ONLY 12-15 categories - not as much

Seeking a graphic designer for a festival style wedding. The wedding has a black and white theme with an ampersand '&' symbol for the invite/rsvp etc. ...reuse. Vector files to be supplied upon completion in common format (.ai, .psd, .pdf, .svg, .eps).

I am a web developer and can do the conversion of images to the mailchimp html/css template myself.

...agency, - She worked as an assistant art director, - Computer, Photoshop, Illustrator, Indesign, Flyer, Brochure, Layout Design, programs that can use the computer, - Good HTML 5 / CSS / PHP - Experienced in developing creative ideas, developing visual design and content, - Good knowledge of printing technologies and printing in printing processes, - Good

...to. The new page must match the existing style. This means that navigation and mobile responsiveness must be identical. You will need to design the pages and code them in HTML/CSS and I will provide the content. We will need a Services index page and pages for each specific service. We will also be designing an about page. Please contact me for more

We need someone to add our template to a Django based website. It has to be done in 24 hours.

We are moving our Magento1 store to a Magento2 . Some of our static blocks and pages are not showing well. I need a freelancer to recreate these pages as fast as possible. It is important for you to be able to get this done is a very tight schedule. only bid if you are able to deliver fast. There are about 100 pages.

Using Flask on top of Google Cloud Should be able to display html+java-script graphs Website should include: CSS design the will look nice and have the following content: 1. basic description of project 2. graphs, visualizations (we have those just need to add to the website) 3. description of team members

we have been tasked with creating 7 pages in html/css, that can then go into the wordpress site, and be responsive the 7 pages designs are here - [login to view URL] there is no mobile design, all pages need to be mobile responsive the submission forms will be gravity forms, which we will do once

Web design for RESPONSIVE FUNNEL PAGE. Page content is ready - see: [login to view URL] I need a SINGLE PAGE (HTML) and separate CSS page.
that can entice my clients to schedule an appointment. NO BLIND QUOTES: Read the page first, then bid. The project is worth $30 USD Thank you Specification for freelancer changes to homepage at [login to view URL] Freelancer is responsible for all debug and must supply his/her own debug platform and submit code to the client for install. Under no circumstance will freelancer have ftp access to the client’s host platform. No payment until all requirements are met with a single milestone payment, as follows: 1) Eliminate the arro... hi we have 7 pages they're all in wordpress but we've been asked to code them in html/css, and then apply to a genesis child theme we have build them, but now we need to make the code responsive, and fix the final layout issues i expect for the right person, its 2-3 hours of work I want 2 developers to help so we can get this finished quicker if you Looking for help on a task in a project related to java, spring web flows, jsp, servlets, html, css. Regarding building a report in jsp. Please reach out asap. I can pay in milestones. There are 5 parts. ...specifications PHP and Code-Igniter based code development and debugging Perform unit/integration testing on forms, reports, interfaces etc. Preferred exposure to HTML and CSS and customization About Company: As a company, our core mission is to be the best in the IT marketplaces. To do this we need to be delivering highly engaging products that Create a HTML/CSS markup for a div so that it looks like a original Tweet from Twitter. There is a way to embed a Tweet on a website - I'm need for the same look, but with done 'manually'. Bootstrap 3 is ok for this job. Should not exceed a max size even if an image is larger. [login to view URL] I've an image and I want someone to convert the image into html using custom html/css. And should be perfect responsive. NO framework like bootstrap should be used here. NOTE: This is not a mock up but just an image. I need someone who can do immediately. 
BUDGET is 500INR. Will not pay much. ...creating which will be used on my website www.shrimpwebdesign.co.uk. You can see the current images I use at the bottom of my homepage above the footer, there is a wordpress, html, css, woocommerce logo. I would like ONE image creating which will replace all images in this section so that I dont have to add 6 individual images. I would like the logo icons ...commencement.) 5. CSS shall include cascading backgrounds, animated logos, animated navigation bar, etc. 6. Dynamic page layout, to adjust neatly to different screen sizes and devices. 7. Bonus points if you quote the word "Pizza" in your response, so I know you've read the whole brief! 8. Comprehensive comments in html/css coding, so I can make I would like a Music website like SoundCloud in PHP, MySQL, HTML, and CSS. But the difference is that only the admin will be the one to upload the songs into their profiles. Description: The first page is a welcome page and to be able to have access to the songs, the person needs to log in first or to register. After according to the profile, they Project is a website that allows teachers to create a maze for students to solve. Front end to be html/css/possibly php, backend to be php/mysql. Maze will be tiles, students will solve problem on first tile, locate adjacent tile with correct answer, solve the problem on that tile and repeat until they reach the "finish" tile. See attached requirements PLEASE DON'T BID IF YOU'RE NOT AN EXPERT This project requires great knowledge of Shopify & liquid. I want to customize my product page with some logic. So it's not just and HTML, CSS work. You should be able to convert the logic in Shopify. Thanks ...needs to modify based on the data. Website and back coding looks should be very professional and " SEO and Keywords " optimization needed. Have to achieve these things by HTML,CSS,Bootstrap,Nodejs,Mysql The website should be comfortable with all kind of devices like mobile and tab view. 
Have to support until website gets a solid state. sample web
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514708.24/warc/CC-MAIN-20181022050544-20181022072044-00101.warc.gz
CC-MAIN-2018-43
9,490
31
https://sspinnovations.com/blog/performance-analysis-your-gis-editors/
code
This is another topic that has garnered a lot of interest from various utilities that currently use either Telvent ArcFM™ or Telvent Designer™. The goal has long been to capture some objective key performance indicators (KPI’s) on the work performed by your editors within the GIS that would allow management to gauge how well they are performing in general and to show trends over time. If you are using ArcFM™ Session Manager or Designer™ Workflow Manager you are already familiar with working in a “Session” or a “Design”. Sessions and Designs allow users to perform edits within an Esri versioned geodatabase without having to know about how versioning works behind the scenes. They instead focus the user on the business requirements behind the work that has to occur within the geodatabase. Sessions are used for general data cleanup, as-builting ad-hoc or blanket work orders or any other types of general editing. Designs are used to enter planned work requests into the GIS to establish a design drawing and a cost estimate for the future work. In both cases, there is a workflow associated with these edit sessions. For the purposes of this article we will focus on sessions going forward, but the concept is the same for both sessions and designs. The following screenshot shows Session Manager with a few sample Sessions. Each of these corresponds to an Esri Version in the geodatabase where the edits are performed: At the simplest level, a Session will have a workflow similar to this where the gray boxes represent the statuses of the Session as it moves through its lifecycle: So a new Session is created by an editor, edits are performed and then the Session is submitted to an approval officer who will review the edits and either approve and post the edits or reject the edits back to the editor for correction. Each time any of these transitions occur within Session Manager, the history of the transaction is saved to the database. 
This transactional data is typically kept with the Session until the Session is posted (completed) and then all of the data is cleaned out of the database. SSP saw an opportunity to harvest this transactional data to drive several editing KPI’s: - Type of Work Performed: When a new Session is created the editor designates the Session Type as Electric, Gas, Water, Land, Fiber, etc (a configurable list of types.) We can use this data to statistically track the type of work a user performs over time as compared to their assigned work. - Average Days Worked per Session: Based on the transactional data we can extract how long each session is being worked by the editor before being submitted to the approval officer. Most utilities have benchmarks set for the turnaround of their sessions (as-built work orders) so this can be a very useful statistic. - Number of QA Rejections: Each time a Session is rejected back to the editor for corrections, the rejection is logged as a transaction. With this data we can display the number of rejections per Session and more importantly, the number of rejections over a period of time in relation to the number of sessions worked. - Quality Performance Rating: We can then combine the above statistics into a single rating that combines: the total number of sessions completed, how quickly an editor is completing the work and how many times the work is being rejected. This provides an overall quality rating for the editor who has performed the work. This is all numeric statistical data, but as with many numbers like this, the value to the organization goes up if it is displayed in a graphical and interactive format. Like many of our tools, we put this into a web control format which can be plugged into most configurable websites, because that maximizes the availability of the data to the organization. The interface starts with allowing the manager to choose a specific user and the timeframe to extract the KPI’s. 
This would typically be monthly, quarterly, or annually, depending on the goals of the analysis. The system then performs the analysis and renders the data using a combination of text, graphs, and dials. Preset ranges identify the target (green), warning (orange), and poor (red) performance brackets on the charts.

The bottom panel includes all of the raw data on the Sessions that were included in the analysis, which is very useful if further inspection is needed. The detail includes each Session ID, the description of the work performed, the days worked, the number of QA rejections, and the full transactional history for the Session. The detail is organized in a collapsible hierarchy to make it easy to review. Managers will often compare different time periods of work for the same editor to establish trends in the quality of work for that user, i.e., whether they're improving, holding steady, or decreasing. They may also run it on different users during the same time period for comparison. The KPI's collected from this report give management an objective view of individual editors and the editing organization as a whole, and can help them to address quality issues through further training, goal setting, and/or other corrective actions.

To collect this data over time, our solution is deployed with a few custom Telvent PX Subtasks that take care of capturing and archiving this data to a set of data warehousing tables. This allows trends to be established over many years while not clogging up the underlying PX tables used by the Telvent software. As you might imagine, the KPI's are only limited by the workflow and data kept within your organization. As you add additional fields to your Session or Design screens within the Telvent software, or add additional statuses and transitions to the workflow, the performance analysis module can be easily updated to account for almost any valuable KPI you can think of.
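The session-level KPI's described above can be computed straight from the archived transactional data. The sketch below is illustrative only: the field names and status values are hypothetical stand-ins for whatever your PX data warehousing tables actually contain.

```python
from datetime import datetime
from statistics import mean

def sessions_from_log(transactions):
    """Group status-change transactions by session ID, sorted by time.

    Each transaction is assumed to be a dict with hypothetical keys:
    session_id, status, timestamp.
    """
    sessions = {}
    for t in transactions:
        sessions.setdefault(t["session_id"], []).append(t)
    for events in sessions.values():
        events.sort(key=lambda t: t["timestamp"])
    return sessions

def days_worked(events):
    """Days between a session's first and last recorded transition."""
    return (events[-1]["timestamp"] - events[0]["timestamp"]).days

def rejection_count(events):
    """Number of QA rejections logged for one session."""
    return sum(1 for t in events if t["status"] == "Rejected")

def kpi_report(transactions):
    """Roll the raw transaction log up into the KPI's discussed above."""
    sessions = sessions_from_log(transactions)
    days = [days_worked(ev) for ev in sessions.values()]
    rejections = sum(rejection_count(ev) for ev in sessions.values())
    return {
        "sessions_completed": len(sessions),
        "avg_days_per_session": mean(days),
        "qa_rejections": rejections,
    }
```

A real implementation would read the transactions from the warehouse tables and add whatever quality-rating weighting your organization has agreed on for the combined score.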
Plug the KPI’s into a web dashboard and you’ll have more management visibility to the performance of your GIS group than you’ve ever had before. I hope this article has sparked some thoughts on how you can extract some new KPI’s out of your editing group. We’ve looked at doing a similar implementation on top of Esri WFM (previously JTX) but have not had the push from a customer as of yet. Almost all of our utility customers use Telvent Session Manager and Designer™ Workflow Manager so this has been the focus thus far. As always, if you have thoughts on the topic we’d love to hear from you.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814140.9/warc/CC-MAIN-20180222160706-20180222180706-00074.warc.gz
CC-MAIN-2018-09
6,501
20
https://www.datacareer.de/job/9292/student-job-data-science-f-m-div/
code
Do you have an affinity for data and data science topics? Do you possess strong analytical skills, and are you able to quickly grasp and implement analytical and creative problem-solving techniques? Are you keen to bridge the gap between theory and implementation in a manufacturing environment? Then we have a great opportunity for you! In this student job you will support our team in shaping a more effective and efficient engineering and manufacturing environment. Does that sound interesting? We are looking forward to your application!

In your new role you will:
- Work closely with Data and Business Engineers to get the data required for image classification
- Prepare datasets for machine learning with the help of business experts
- Evaluate, prototype, and develop artificial intelligence (AI) models for image classification
- Build proofs of concept (PoC) to demonstrate the success or failure of the hypotheses
- Analyze, monitor, and report the results

You are best equipped for this task if you:
- Are currently studying Computer Science, Mathematics, or a similar field of study
- Have first hands-on experience in using Python, Keras, and TensorFlow
- Are familiar with neural networks and deep learning (experience in building models is an advantage)
- Have the ability to clearly communicate complex aspects of machine learning to non machine learning experts
- Have strong self-learning skills, can work collaboratively, and stay product focused
- Can communicate freely in English and German

Please attach the following documents to your application:
- CV in English
- Certificate of enrollment at university
- Latest grades transcript
- High school report

Additionally, the following requisites apply for student jobs:
- You have to be enrolled in either your Bachelor's or your Master's studies to be eligible for a student job.
- In the current situation, we particularly emphasize the health and safety of our employees, which is why we support working from home.
Nevertheless, you should not live more than 150 km away from the place of work, so that the location is easily accessible for you.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363149.85/warc/CC-MAIN-20211205065810-20211205095810-00595.warc.gz
CC-MAIN-2021-49
2,184
23
https://learn.microsoft.com/en-us/azure/architecture/example-scenario/digital-health/patient-data-ohdsi-omop-cdm?WT.mc_id=AZ-MVP-5003408
code
Observational Health Data Sciences and Informatics (OHDSI) created and maintains the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM) standard and associated OHDSI software tools to visualize and analyze clinical health data. These tools facilitate the design and execution of analyses on standardized, patient-level, observational data. OHDSI on Azure allows organizations that want to use the OMOP CDM and the associated analytical tools to easily deploy and operate the solution on the Azure platform. "Terraform" is either a registered trademark or a trademark of HashiCorp in the United States and/or other countries. No endorsement by HashiCorp is implied by the use of this mark. Download a Visio file of this architecture. The preceding diagram illustrates the solution architecture at a high level. The solution is made up of two major resource groups: - Bootstrap resource group. Contains a foundational set of Azure resources that support the deployment of the OMOP resource group. - OMOP resource group. Contains the OHDSI-specific Azure resources. Azure Pipelines orchestrates all deployment automation. This article is primarily intended for DevOps engineering teams. If you plan to deploy this scenario, you should have experience with the Azure portal and Azure DevOps. - Deploy the Bootstrap resource group to support the resources and permissions needed for deployment of the OHDSI resources. - Deploy the OMOP resource group for the OHDSI-specific Azure resources. This step should complete your infrastructure-related setup. - Provision the OMOP CDM and vocabularies to deploy the data model and populate the OMOP controlled vocabularies into the CDM in Azure SQL. - Deploy the OHDSI applications: - Set up the Atlas UI and WebAPI by using the BroadSea WebTools image. Atlas is a web UI that integrates features from various OHDSI applications. It's supported by the WebAPI layer. - Set up Achilles and Synthea by using the BroadSea Methods image. 
Achilles is an R-based script that runs data characterization and quality assessments on the OMOP CDM. The Synthea ETL script is an optional tool that enables users to load synthetic patient data into the OMOP CDM. - Azure Active Directory (Azure AD) is a multitenant cloud-based directory and identity management service. Azure AD is used to manage permissions for environment deployment. - Azure Pipelines automatically builds and tests code projects. This Azure DevOps service combines continuous integration (CI) and continuous delivery (CD). Azure Pipelines uses these practices to constantly and consistently test and build code and ship it to any target. Pipelines define and run this deployment approach for OHDSI on Azure. - Azure Virtual Machine Scale Sets enable you to create and manage a group of heterogeneous load-balanced virtual machines (VMs). These VMs coordinate the deployment of the environment. - Azure Blob Storage is a storage service that's optimized for storing massive amounts of unstructured data. Blob Storage is used to store the Terraform state file and the raw OMOP vocabulary files (before ingestion into the CDM). - Azure Key Vault is an Azure service for storing and accessing secrets, keys, and certificates with improved security. Key Vault provides HSM-backed security and audited access through role-based access controls that are integrated with Azure AD. In this architecture, Key Vault stores all secrets, including API keys, passwords, cryptographic keys, and certificates. - Azure SQL Database is a fully managed platform as a service (PaaS) database engine. SQL Database handles database management functions like upgrading, patching, backups, and monitoring. This service houses the OMOP CDM and all associated relational data. - Azure Web Application Firewall helps protect applications from common web-based attacks like OWASP vulnerabilities, SQL injection, and cross-site scripting. This technology is cloud native. 
It doesn't require licensing and is pay-as-you-go. - Azure Container Registry enables you to build, store, and manage container images and artifacts in a private registry for all types of container deployments. In this solution, it stores OHDSI application images (BroadSea WebTools and BroadSea Methods) for deployment into Azure App Service. - Azure App Service is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. This service supports the OHDSI WebAPI and Atlas applications. If you require more scalability or control, consider these alternatives: - Azure Kubernetes Service (AKS) or Azure Container Apps instead of App Service. - Azure Synapse instead of SQL Database. The ability to federate, harmonize, visualize, segment, and analyze clinical patient data has rapidly become a popular use case in the healthcare industry. Many organizations, including academic institutions, government agencies, and organizations in the private sector, are looking for ways to use their patient health data to accelerate research and development. Unfortunately, most IT teams struggle to collaborate effectively with researchers to provide a work environment where researchers can feel productive and empowered. OHDSI is an initiative that includes thousands of collaborators in over 70 countries/regions. It offers one of the few available solutions in an open-source format for researchers. OHDSI created and maintains the OMOP CDM standard and associated OHDSI software tools to visualize and analyze clinical health data. Potential use cases Several types of healthcare organizations can benefit from this solution, including: - Academic institutions that want to enable scientific researchers to tackle observational cohort studies by using clinical data. - Governmental agencies that want to federate large amounts of disparate data sources to accelerate scientific discovery. 
- Private sector companies that want to streamline the identification of potential patients for clinical trials. These considerations implement the pillars of the Azure Well-Architected Framework, which is a set of guiding tenets that you can use to improve the quality of a workload. For more information, see Microsoft Azure Well-Architected Framework. Reliability ensures your application can meet the commitments you make to your customers. For more information, see Overview of the reliability pillar. SQL Database includes zone-redundant databases, failover groups, geo-replication, and automatic backups. These features allow your application to continue running if there's a maintenance event or outage. For more information, see Azure SQL Database availability capabilities. You might want to consider using Application Insights to monitor the health of your application. With Application Insights, you can generate alerts and respond to performance problems that affect the customer experience. For more information, see What is Application Insights?. For more information about reliability, see Designing reliable Azure applications. Security provides assurances against deliberate attacks and the abuse of your valuable data and systems. For more information, see Overview of the security pillar. This scenario uses Managed identities for Azure resources, which provide an identity for an application to use when it connects to resources that support Azure AD authentication. Managed identities eliminate the need to manage secrets and credentials for each Azure resource. SQL Database uses a layered approach to help protect customer data. It covers network security, access management, threat protection, and information protection. For more information on SQL Database security, see Azure SQL Database security and compliance. If high-security networking is a critical requirement, consider using Azure Private Link to connect App Service to Azure SQL. 
Doing so removes public internet access to the SQL database, which is a commonly used attack vector. You can also use private endpoints for Azure Storage to access data over an Azure private link with increased security. These implementations aren't currently included in the solution, but you can add them if you need to. For general guidance on designing secure solutions, see the Azure Security documentation. Cost optimization is about reducing unnecessary expenses and improving operational efficiencies. For more information, see Overview of the cost optimization pillar. To better understand the cost of running this scenario on Azure, use the Azure pricing calculator. This estimate uses the default configuration of all Azure resources deployed via infrastructure as code. These cost estimates can change based on the size of your data and because of other resources in your organization that might be shared, like Azure AD or Azure DevOps. Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. For more information, see Performance efficiency pillar overview. This scenario uses App Service, which you can optionally use to automatically scale the number of instances that support the Atlas UI. This functionality allows you to keep up with end-user demand. For more information about autoscaling, see Autoscaling best practices. For more information, see Performance efficiency checklist. Deploy this scenario See these resources for more information on deploying an OHDSI tool suite and for additional detailed documentation: This article is maintained by Microsoft. It was originally written by the following contributors. 
- Andy Gee | Senior Software Engineer
- Kaipo Lucas | Senior Program Manager
- Yvonne Radsmikham | Senior Software Engineer
- Cory Stevenson | Senior Specialist
- Mick Alberts | Technical Writer
- Heather Camm | Senior Program Manager
- Gayatri Jaiswal | Program Manager

- Azure AD documentation
- What is Azure Pipelines?
- What is Azure DevOps?
- What is Azure SQL Database?
- OHDSI home page
- OHDSI Atlas demo environment
- OHDSI GitHub
- OHDSI YouTube channel
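Once the CDM and vocabularies have been provisioned into Azure SQL, a quick sanity check is to count the rows in the OMOP `person` table. The sketch below uses pyodbc; the server, database, and credential values are placeholders and are not part of the reference deployment.

```python
def connection_string(server, database, user, password):
    """Build an Azure SQL ODBC connection string (placeholder values)."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server=tcp:{server}.database.windows.net,1433;"
        f"Database={database};"
        f"Uid={user};Pwd={password};"
        "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
    )

# person is one of the core clinical data tables in the OMOP CDM
PERSON_COUNT_SQL = "SELECT COUNT(*) FROM person;"

def person_count(conn_str):
    """Run the count query; needs `pip install pyodbc` plus the ODBC driver."""
    import pyodbc  # imported here so the rest of the module has no hard dependency
    with pyodbc.connect(conn_str) as conn:
        return conn.cursor().execute(PERSON_COUNT_SQL).fetchone()[0]

if __name__ == "__main__":
    cs = connection_string("my-omop-server", "omop_cdm", "omop_admin", "<password>")
    print(person_count(cs))
```

A managed identity or Key Vault-stored secret would replace the inline password in any deployment that follows the security guidance above.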
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00811.warc.gz
CC-MAIN-2023-06
10,071
68
https://vibraclean.ca/ihp-technology-history/
code
United States Department of Defense declassified and patented. Ionized Hydrogen Peroxide (iHP) was developed as a direct response to weaponized anthrax attacks, as an invention funded by a grant from the U.S. Defense Advanced Research Projects Agency (DARPA). DARPA is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the US military.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358074.14/warc/CC-MAIN-20211126224056-20211127014056-00098.warc.gz
CC-MAIN-2021-49
482
5
https://blogs.warwick.ac.uk/aktakkar/entry/leadership_for_me/
code
Leadership for me

A thought struck my mind while I was thinking about my entrepreneurial venture (dissertation topic), by which I mean a very small-scale organization to begin with. The question I asked myself was, 'What do I want my role to be?'

In search of an answer, the first thing that came to my mind was me working around 12 hours a day, files and folders around me. Then this suddenly led me to think: is this what I imagined when I wanted to be an entrepreneur!? NO! So then I realized I cannot generally be a part of the daily activities and running projects. This is too much of an underutilization of my potential. What I need to do is look at things from a distance, which means looking at where we're heading, if at all we are, and where we wanted to head to in the first place. And this is from where the roots of continuous improvement will be laid in the organization. Even if it is a small-scale organization, I cannot allow it to be static over the years, and to break out of this static state, this is one thing I have to be sure of. I cannot let myself fall into the trap of solving only the urgent and important and forgetting about the not-urgent and important!

This is probably the essence of an MBE over any other standard business management masters course. Either of them would teach you to manage people and day-to-day activities. But this is something that most organizations wouldn't even realize they are lacking without having lost a number of years! Halfway through the course, I can now see a bit of clarity in my mind about how I am going to put the wisdom (it is worth being called that) that I gain from this course into practice. The beauty is, I have something to begin with!
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710711.7/warc/CC-MAIN-20221129200438-20221129230438-00341.warc.gz
CC-MAIN-2022-49
1,712
6
https://objectgraph.com/geomeasure.html
code
See in Action
Super easy to measure distances and areas. Easy as tap, tap, tap: search for a location and place points to calculate an area or a distance.
MEASURE DISTANCES / AREAS
A simple-to-use area and distance measurement tool for maps. Have you ever wondered "How much acreage is that farm?" or "What is the distance between your house and the subway station?" Are you curious to find out who has the most property in your neighborhood? We have an app for that. We are pleased to announce the launch of our new app for Windows 10 devices. This is a great tool for anyone who wants to find geographical areas or distances. Use the app for finding distances between faraway places, or just calculate the bike path before you actually attempt it. This app is universal, meaning the same binary will run on both Windows Phone and Windows Desktop.
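For the curious, the great-circle distance an app like this reports between two map points is conventionally computed with the haversine formula. The Python sketch below illustrates that standard formula; it is not the app's actual code.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points
    given in decimal degrees, using the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))
```

Area measurement would use the analogous spherical polygon area technique rather than simple planar geometry, since map points live on a sphere.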
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510516.56/warc/CC-MAIN-20230929122500-20230929152500-00339.warc.gz
CC-MAIN-2023-40
876
11
http://www.geekstogo.com/forum/topic/33086-home-network-problem/
code
I have 2 Desktop PCs. Both run XP (does it matter which version?). Mine is connected to the internet. I have a hub and the proper cable. For some reason, the "OTHER" computer can't use the internet "THROUGH" mine. Both computers can ping each other... but the second one still can't get online. I'm not sure what other information anyone might need to help, but I'd be glad to provide it. Thanks in advance.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512504.64/warc/CC-MAIN-20181020034552-20181020060052-00350.warc.gz
CC-MAIN-2018-43
385
3
https://blog.payara.fish/5-ways-to-improve-your-java-ee-applications-in-reactive-way-ondrej-mihalyi-at-geecon-2017
code
Have you ever wondered how you can improve the performance of your applications under high load? You've probably heard that reactive design can help you meet better response times and make your applications more flexible. In this presentation, I will show that you don't need to rewrite your Java EE applications from scratch to achieve that! We'll go through 5 ways to reuse the knowledge of Java EE and Java 8 to improve your existing applications with a reactive design. We'll apply them to a big production-like application step by step, walking through the code and demonstrating live. In the end, we'll compare how much the performance and user experience can be improved. All that without learning a new framework or library, and while limiting the amount of changes in the application source code to a bare minimum! This talk was presented at the GeeCON conference in Krakow, Poland, on the 18th of May 2017.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00514.warc.gz
CC-MAIN-2022-27
920
3
https://www.nintendolife.com/companies/picogram/news
code
Oh, that's so much better
People have very, very strong feelings about the icons that appear on the Nintendo Switch dashboard. Most icons are surprisingly busy, with titles displayed prominently, and it turns out that's exactly what people want — and yet, every now and again, a game will release with the kind of icon you'd expect for a phone game:... Garden Story, an adorable RPG with a protagonist who is a grape, was surprise-released during the Nintendo Indie World showcase last week, and has since been winning over the hearts of everyone (including us, though we're still working on the review!). It does have a few minor issues that players have been experiencing, but the patches... We've been keeping one eye peeled for Garden Story for quite some time now, and suddenly, just like that, our wait is finally over. The adorable fruit-themed RPG just got a surprise drop on the Switch eShop, where it will be a timed console exclusive. As the newly-appointed Guardian of the Grove, tiny grape hero Concord is tasked...
Fight of the Concord
Announced for Switch in the August 2020 Nintendo Indie World Showcase, top-down action-RPG Garden Story is confirmed to be bringing its green fingers to Switch in 'Summer 2021'. Hang on, we're practically in 'Summer 2021' right now — it can't be far away! As revealed in the Day of the Dev's livestream...
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587606.8/warc/CC-MAIN-20211024204628-20211024234628-00702.warc.gz
CC-MAIN-2021-43
1,355
6
https://lists.opensuse.org/archives/list/project@lists.opensuse.org/thread/UCY7EC3T2KJQOPRIAMZLQNJVXYU4MVT7/
code
here's the status of Action Items after this week's meeting.

New Action Items:
#343444 - bugzilla wizard to qualify bugreports
There are several not-so-good bug reports (something like "my computer doesn't work" - OK, this is exaggerated ;-) and the developers have to "waste" their time asking for details and logs. The idea is to create a wizard (like kde.org does) for people new to bug reporting that guides them through the process, searches for duplicates, and maybe could even force attaching y2logs when reporting YaST bugs, etc. Klaas got the AI to check if and how this is possible. Note that this wizard would be an alternative to the current bug reporting page, not a replacement.

Work in progress:
#173961 - openSUSE merchandising
rlihm and coolo set up a shop @spreadshirt and are in the process of getting approval on the design by Novell's corporate marketing (ETA
Coolo ordered the first test shirts. For those who want to see the beta ;-) shop: (URL might change)
#223290 - better wishlist handling using FATE
klaas finally did a short presentation on fate in the wiki: For comparison: the wiki way from Francis
AI Coolo to start a discussion about what to use.
#328611 - Try out wishlist handling / feature tracking in the wiki
Basically a duplicate of #223290 (better wishlist handling). Some feedback from the GNOME team on how it worked out would be interesting. See also http://en.opensuse.org/GNOME/FeaturePlanning for pros and cons of different methods.
#238350 - unmaintained wikis (is, es, vi)
Only vi is in bad shape and there is a banner on the frontpage. is and es are maintained again.
#238355 - status of cn wiki
Notlocalhorst is working on it, should be ready next week
#267437 - community committee, @opensuse.org mail addresses etc.
As you (should) know, the community committee (aka board) was announced. Therefore the summary of this AI changed to
#267437 - @opensuse.org mail addresses
Mail addresses are still not available
#339796 - graphical headline on help.opensuse.org
rlihm is working on it (english is done, other languages will follow)

The AIs without news:
#229213 - clarify bugzilla usage for packages in build service
AdrianS didn't attend the meeting.
#164761 - build service trust/rating system
Blocked (no time and resources, ETA 2008)
#293726 - Creation of Babel wiki
Blocked (No time)

Things that were done:
# - add something about sustainability to the translation howto
While discussing unmaintained wikis and how to avoid them, it was proposed to add a note to the translation howto that wiki translation is not a one-time job, etc. Martin did this instantly, therefore no bugzilla :-)
#328613 - Prepare explanation of Fate features and processes in the
Klaas finally did a short presentation on fate in the wiki
#328622 - help.opensuse.org is now translated into lots of languages. Thanks to everybody who helped with this! (If your language is missing, feel free to reopen the AI and attach it. Or contact AdrianS for an SVN account ;-)

Let me put it this way: GUIs? We have none. But two of them. [Ratti in suse-linux]
To unsubscribe, e-mail: opensuse-project+unsubscribe(a)opensuse.org
For additional commands, e-mail: opensuse-project+help(a)opensuse.org
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00522.warc.gz
CC-MAIN-2021-49
3,216
67
https://upbaz.com/projects/
code
What I'm currently working on. Founder @ Writing Startup (since Nov 2020) A modern content management system for writers - link What I used to work on. Founder @ 200 Words a Day (Nov 2018 - Nov 2020) A writing community (5000 signups, 30,000 blog posts, 100+ customers, $100 in MRR) - link Co-creator @ Sipreads (Oct 2019 - Oct 2020) Book summaries (3000 signups, 1M impressions with a 1.4% CTR, < $100 in revenues) - link Co-founder & CTO @ Justinien (January 2018 - November 2018) A legaltech startup (The Family alumni, $1000 in revenues in 2 months)
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00457.warc.gz
CC-MAIN-2021-10
553
10
http://chronicles.blog.ryanrampersad.com/2012/04/spring-advising/
code
Just finished my advisor meeting for next fall. The recommendations were two math courses, Linear Algebra and Statistics with Calculus, one technical course, Machine Architecture and Organization, and some liberal education. I just finished meeting with my academic advisor for classes next fall. His recommendations were two math courses, one technical course and one liberal education course:
- Stat 3021 Applied Statistics
- CSci 2033 Elementary Computational Linear Algebra
- CSci 2021 Machine Architecture and Organization
- Either Geog 1403 or HSCI
So that was a great meeting. Oh, and I can also apply to my major. Today.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703533863.67/warc/CC-MAIN-20210123032629-20210123062629-00353.warc.gz
CC-MAIN-2021-04
628
8
http://www.coderanch.com/t/78343/Websphere/Referring-variables-set-file-War
code
I have two WAR files on the WebSphere server, say A and B. Now, from a JSP file in A, I want to send some variable values to a JSP file in B. When I do the following in the JSP file in A: String referURL = "http://url.com/folder/fileInB.jsp?getValue="+getValue; And in the JSP file in B: String getValue=(String)request.getParameter("getValue"); It gives getValue as null. Also, setting the variables in the session doesn't help. Any idea how to get the values of variables set in a JSP file in WAR A to a JSP file in WAR B on the same WebSphere version? ...and then appending the path of the JSP file present in the other WAR file, whose path I have referred to via the getContextPath() API. But when I try to refer to those values, it returns null. For the time being, I have moved the JSP file to the same WAR file where I want to refer to it. But I was just wondering: is it not possible to pass values from one JSP in one WAR to another JSP in another WAR? I'm sorry, I cannot completely follow your last post. But the code snippets I put in my previous post can be in different WARs or even different servers.
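One common cause of a null parameter in this scenario is the raw string concatenation in the question: if the value contains spaces, '&' or '=', the query string breaks. A minimal sketch of building the link safely (class name and URL are illustrative, not from the original thread):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class CrossWarLink {
    // Build a link from a JSP in WAR A to a JSP in WAR B.
    // Query-string values must be URL-encoded; raw concatenation of a
    // value containing spaces or '&' arrives truncated or null on the
    // receiving side's request.getParameter() call.
    public static String buildUrl(String base, String param, String value) {
        return base + "?" + URLEncoder.encode(param, StandardCharsets.UTF_8)
                + "=" + URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```

Sessions, by contrast, are scoped per web application, which is why stashing the value in the session in WAR A is not visible from WAR B unless the server is configured to share session context.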
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164019123/warc/CC-MAIN-20131204133339-00049-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,120
9
http://espiya.net/forum/index.php?topic=174277.msg1432041
code
Personal advice? This is a forum-related concern, so why is it here? Anyway, you can't access the recycle bin. The reason threads are there is that they are marked for deletion; usually the topics are not allowed here, so the entire thread is moved there. Or the thread starter themselves requested that their thread be placed there.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145910.53/warc/CC-MAIN-20200224071540-20200224101540-00309.warc.gz
CC-MAIN-2020-10
310
2
https://support.wpvideorobot.com/ticket/unable-to-login-at-support-wpvr-co-after-registration/
code
I signed up successfully but I misspelled my "envato" username. So I tried to log out and sign in again, but whatever I do, I get a wrong-password error. I also tried to reset my password but I didn't get any email to reset it. So now I used my live account ID to sign up again with all the correct information. This time I opened https://support.wpvideorobot.com/ in Chrome and also in IE. I found that it's not allowing me to log in with the correct password again in either IE or Chrome. Please help. The email IDs I used are "[email protected]" (used for the first sign-up) and "[email protected]" (used the second time, with all the correct information). I am new to WordPress and need help.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879673.14/warc/CC-MAIN-20201022141106-20201022171106-00446.warc.gz
CC-MAIN-2020-45
737
5
http://www.terraforums.com/forums/tropical-pitcher-plants-nepenthes-/125055-critical-care-nepenthes-id-climate-zone.html
code
I have had a number of neps that I got from a "no tag" $5 sale last year. Two died immediately, two hung on, and one is doing well, though slow. The one that I'm most concerned about is this one: While one of the two "TLC" plants liked its move to the LL tank, this one clearly DIDN'T. I can only assume it's an HL plant. Should I put it in an intermediate area, like a windowsill, before putting it back in the greenhouse? It's kinda cold out there right now. It actually did try to pitcher earlier, during early summer (a weird, cold "summer" it was!). I didn't have a good camera then, and couldn't get a picture. It was small, and covered in little red freckles. Just for fun, here's the one that's doing well: It has a thin, short tendril that comes off the tip of the lid, a gap at the back of the peristome, and a small boss where the lid joins the peristome. If that info helps.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718426.35/warc/CC-MAIN-20161020183838-00376-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
886
4