url: string (lengths 13 to 4.35k)
tag: string (1 class)
text: string (lengths 109 to 628k)
file_path: string (lengths 109 to 155)
dump: string (96 classes)
file_size_in_byte: int64 (112 to 630k)
line_count: int64 (1 to 3.76k)
https://simplymaya.com/forum/showthread.php?s=&threadid=8748
code
hmm, I would contact the book publisher and ask them. They might need to do a recall. And after calming me down with some orange slices and some fetal spooning, E.T. revealed to me his singular purpose. --TOOL, 10,000 Days---
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00123-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
224
4
http://stackoverflow.com/questions/10561700/skill-matching-algorithm
code
I need to implement a skill matching feature similar to http://venturocket.com - a candidate enters a list of skills and rates their proficiency for each. You can then search by again entering some skills and the level of expertise you are looking for. The result is a list of candidates ordered by how well their skills match your search. For example: candidate 1 enters the skill Java (proficiency 90) and candidate 2 enters Java (50). When I search for Java (60), candidate 2 is the closer match. This should also work with multiple skills. What I'm looking for are pointers to technologies or algorithms that would help me achieve this. My current approach would be to do a range query in a database (e.g. look for Java skills between 45 and 75) and then sort on the client, but that wouldn't be very fast.
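A minimal in-memory sketch of the ranking idea (the data layout and the penalty for a missing skill are assumptions, not the asker's actual schema): score each candidate by the distance between their proficiencies and the searched levels, then sort ascending.

```python
# Hypothetical sketch: score candidates by how close their proficiency is
# to the searched level for each skill; lower score = better match.

MISSING_PENALTY = 100  # assumed penalty when a candidate lacks a searched skill

def match_score(candidate_skills: dict[str, int], query: dict[str, int]) -> int:
    score = 0
    for skill, wanted_level in query.items():
        if skill in candidate_skills:
            score += abs(candidate_skills[skill] - wanted_level)
        else:
            score += MISSING_PENALTY
    return score

candidates = {
    "candidate1": {"Java": 90},
    "candidate2": {"Java": 50},
}
query = {"Java": 60}

ranked = sorted(candidates, key=lambda name: match_score(candidates[name], query))
print(ranked)  # ['candidate2', 'candidate1'] -- candidate 2 is the closer match
```

For large candidate sets, a score like this would be precomputed or pushed into the database or a search engine rather than sorted on the client, which is exactly the performance concern raised above.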
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157472.18/warc/CC-MAIN-20160205193917-00118-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
787
4
https://lundman.net/wiki/index.php/User:Gaston
code
As you can probably figure out from the links below, I have a vision. I want my audio system, connected to my sources, to be able to reproduce whatever the producer intended for me to hear. There are several steps to the process. Most are easy to fix with cash and moderate WAF. In order to increase WAF and reduce cash I have been investigating some other possibilities. Some links to stuff that I'm involved/interested in: Swedish forums on various audio stuff. This is where I put stuff worth keeping from the discussion page.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00204.warc.gz
CC-MAIN-2021-43
519
5
https://community.gamedev.tv/t/low-poly-terrain-nebulus-collector-project/146763
code
Just want to share some current work progress on my level design for the Argon Assault part. I created some low-poly spaceships in Blender (I was on vacation and took a week's break after finishing Project Boost; I spent this time learning the basics of Blender, and after that I created those spaceships). Since the spaceships are low poly, I started to think about creating low-poly terrain for this project. I spent the first part of the day creating a map layout prototype, using the terrain as a very useful tool. And here is the result: after that I exported the terrain as an .obj file using a script found on the internet, then I used Blender to decimate the terrain polygons and UV-painted the terrain (a sketch of the decimation step follows below). Hope you like it
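As a hypothetical illustration of the decimation step mentioned above (the active-object selection and the 0.1 ratio are assumptions; the poster used a separate .obj export script), Blender's Python API could be driven like this:

```python
import bpy

# Assumes the imported terrain is the active object in the scene.
obj = bpy.context.active_object

# Add a Decimate modifier and reduce the polygon count to ~10%.
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.1

# Apply the modifier so the low-poly geometry becomes permanent
# before UV painting or re-export.
bpy.ops.object.modifier_apply(modifier=mod.name)
```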
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100800.25/warc/CC-MAIN-20231209040008-20231209070008-00656.warc.gz
CC-MAIN-2023-50
675
6
https://www.toptal.com/resume/sean-wu
code
Data Modeler, 2018 - PRESENT, Blue Cross Blue Shield NC
Technologies: Star Schema, Entity-Relationship Model (ERM), Teradata, Excel 2016, Windows, C#, Tableau, Python, SQL
- Designed and created database models for enterprise-wide projects.
- Created tables and views.
- Performed data analysis.
- Completed R&D for new technology.
SQL Developer, 2019 - 2020, McKeil Marine (via Toptal)
Technologies: Entity-Relationship Model (ERM), Azure SQL, Excel 2016, Windows, SQL
- Developed a business-oriented database on Microsoft Azure from scratch.
- Created all tables based on business needs.
- Created complex stored procedures to meet business requirements.
- Created complex views.
- Created triggers.
Database Developer and Administrator, 2010 - 2018, CCL
Technologies: MacOS, Software Development, .NET, Microsoft Access, Excel 2016, Microsoft Visual Studio, C#.NET, SQL Server 2016/2014/2008, Windows, Snowflake, Star Schema, Entity-Relationship Model (ERM), Visio, Data & Backup Management, Microsoft SQL Server
- Designed, created, and managed primary objects such as tables, triggers, database links, indexes, and privileges based on a logical design model.
- Developed all the required stored procedures, user-defined functions, and dataset results in Reporting Services to reduce code complexity and optimize performance.
- Developed applications for manufacturing efficiency improvement.
- Created conceptual, logical, and physical data models for creating the schema, using MS Visio and ER/star/snowflake schemas for data marts.
- Applied technical skills using SQL and Excel in data collection, data analysis, and reporting to procure data from database structures and provide solutions to client requests in a timely manner.
- Contributed to building business dashboards to conduct trend analysis on sales metrics, especially relating to driving sales revenue, customer retention, and production.
- Tuned, troubleshot, evaluated, and recommended database hardware and software, including support tools, to increase performance.
- Provided support to developers for integrating the development environment (C#) with databases.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00789.warc.gz
CC-MAIN-2023-14
2,148
23
https://speakerdeck.com/riggaroo/android-studio-whats-new-2020
code
In this talk, we will take a look at the new Android Studio 4.0 and the tooling that has been added in this release. We will look at how we can now inspect databases, use the new Motion Editor, and explore other small treats included in the latest release. Join to learn about all the new goodies you can play around with in Android Studio! Rebecca is a Principal Android Engineer at Over (acquired by GoDaddy); she has been passionately building Android apps for over 8 years now. She loves making beautiful Android apps and teaching others all the tricks she learns along the way. She is a Google Developer Expert for Android and blogs regularly at https://riggaroo.dev. When she isn't coding or teaching, she can be found baking up a storm in her kitchen.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00253.warc.gz
CC-MAIN-2022-27
754
2
https://www.reddit.com/r/productivity/comments/bqdi2/webolodeon_stop_firefox_procrastination/
code
This is a greasemonkey script that will display a popup every 5 minutes or so, asking why you are still browsing the internet. The response isn't stored anywhere, but it is very helpful to put what I am doing into words. I was suggesting that Reddit adopt a similar feature. Approve. When I was working on my thesis I used a program to remove the network interfaces for a pre-set duration. It could only be bypassed by rebooting the machine (or doing OS X Terminal ifconfig stuff, which is simple on one hand, but makes it obvious that you're cheating on the other). Tips and tricks for being more productive
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210058.26/warc/CC-MAIN-20180815102653-20180815122653-00234.warc.gz
CC-MAIN-2018-34
611
4
https://coreawhoden.firebaseapp.com/1081.html
code
This time the Linux Mint Maya Xfce RC comes with some awesome features. Download Linux Mint 10 RC right now from Softpedia. Linux Mint Xfce RC (2011-04), introduction to Linux Mint Xfce: Linux Mint Xfce is rolling on top of a Debian testing package base and uses the same repositories as LMDE. I work on Windows servers mostly, but usually I would connect to them using my local Linux Mint workstation. The team is proud to announce the release candidate of Linux Mint Xfce. Linux Mint is an Ubuntu-based distribution whose goal is to provide a more complete out-of-the-box experience by including browser plugins, media codecs, support for DVD playback, Java and other components. The team is proud to announce the release of Linux Mint 10 LXDE RC. The team is proud to announce the release of Linux Mint 11 LXDE RC2. It was a little hard to comprehend if this was indeed a question. Changes: the team is proud to announce the release of Linux Mint 12 LXDE RC. Linuxid is capable of identifying your distro, getting what it's based on, and every detail related to it. The Software Manager: UI improvements, new splash screen, fonts category, more accurate package information, more application icons by default, more accurate search by default. The Update Manager: performance boosts, improved dependencies. Each release was given a new version number and a code name, using a female first name starting with the letter whose alphabetical index corresponds to the version number and ending with the letter "a". Linux Mint 10 LXDE, code-named Julia, was released as an RC a week ago, with a lot of improvements. This offers the following advantages to Linux Mint Xfce. Linux Mint 10 LXDE comes with updated software and brings refinements. With hybrid images, you can simply use the dd command or a graphical frontend to make a bootable USB stick with no effort, which acts exactly like a live DVD. The Linux Mint project announced the release and general availability of the Linux Mint Debian Edition 4 operating system, a major series that brings lots of new features and enhancements. Coming one and a half years after LMDE 3 "Cindy", the Linux Mint Debian Edition 4 "Debbie" release is here to provide the Linux Mint community with up-to-date installation media. Xfce, 32-bit/64-bit: an edition featuring the Xfce desktop. Thank you for using Linux Mint and have a lot of fun testing the release candidate. Linux Mint 10 Julia - Tony Wijaya's official web blog. LXDE hybrid ISO images, search engines, upstream components: for a complete overview and to see screenshots of the new features, visit. Why go Debian/Ubuntu/Mint/Cinnamon, when you can go Debian? I also considered at the time that it was inappropriate for the LXDE environment, but today there is very little price difference between a CD and a DVD; if you can download 650 MB you can also download 900 MB and burn it to a DVD. If you run Ubuntu with LXDE, for example, and you run Mint with LXDE too, then you don't see any difference between the two distros. So it could print Linux Mint 16 Petra, and then all its details and its base OS. Unofficial Linux Mint LXDE: an unofficial version of Linux Mint with the LXDE desktop, brought to you by. Linux Mint 9 RC is a free download and should boot from most Intel-based hardware. Released: Inoffizielles (unofficial) LMDE 4 LXDE 32-bit DE, 2020-03-24, RC1.
This package is a metapackage that depends on the core components and recommended components of LXDE. Linux Mint is one of the popular Linux-based OSes targeted at desktop users. The only difference that I can see is the desktop environment. Solved: changing wallpaper in LXDE 9 RC (Linux Mint forums). Remember that this is a testing release and it should not be installed on production machines. If the boot sequence only shows dots and no logo, you can make it look better by following these instructions. Changing wallpaper in LXDE 9 RC (solved), post by aljoriz, Sat Jul 10, 2010. Linux Mint software downloads - Download32 software archive. I have been playing around with Ubuntu and Mint for a while. From Linux Mint to LXLE - confessions of a technophobe. "How to install Linux Mint via USB" by Clem has been read, copied onto paper for easy reference, and utilized. Linux Mint is free of charge thanks to your donations and adverts on the website, and we hope you'll enjoy it. "The only limiting factor of the Linux operating system is its user." - Linus Torvalds. It also adds a custom desktop and menus, several unique configuration tools, and a web-based package installation interface. Linux Mint 9 RC backs up your data and application choices. If you just want to pick and choose the LXDE components, then feel free to remove this package. They measure the upload speed and calculate an ETA. Inoffizielles (unofficial) Linux Mint Debian 3 LXDE - Linux Mint users. Linux Mint comes with the latest Adobe Flash ("Square"), running in full 32-bit or 64-bit native mode depending on your edition of Linux Mint. Linux Mint Debian Edition 4 "Debbie" released. Remote desktop to a Windows server from Ubuntu/Linux Mint. Clement Lefebvre has announced the release of the Linux Mint 12 LXDE edition, a fast and lightweight variant of the popular Ubuntu-based distribution. However, if you want to have Debian with Cinnamon as a desktop environment, you can download it. Up to 2014 there had been two Linux Mint releases per year, about one month after the Ubuntu releases they were based on. The final release of Linux Mint 9 KDE is available for download. It includes lxde-core, lxappearance, lxinput, lxsession-edit, gpicview, lxterminal, lxrandr, galculator, leafpad and xarchiver. Starting with this release candidate we will try to present the latest developments of this Ubuntu-based Linux distro, which turns out to have many fans: Linux Mint 10 RC, based on Ubuntu. MATE, 32-bit/64-bit: an edition featuring the MATE desktop. Linux Mint 9 Xfce RC is available for download here on Softpedia. If you're not sure which one is right for you, the Cinnamon 64-bit edition is the most popular. If you want to access their source code, you can use the apt-get source command. In this article I will show you how to install LXDE on Linux Mint 15 Olivia and Ubuntu. In this post I will quickly run through how I connect to the Windows servers using the RDP protocol. Linux Mint is an elegant, easy to use, up to date and comfortable GNU/Linux desktop distribution. According to its website, this is the first release of Linux Mint using hybrid ISO images. Likely because Mint 10 LXDE is still an RC, I am afraid I have to wait for the official release.
In a Fluxbox session I want to change the GTK theme (not styles) and download GTK themes, but lxappearance works only in the LXDE session with Openbox. This time I want to discuss the Linux Mint 10 Julia release candidate. Traditionally, tools such as Startup Disk Creator or UNetbootin were needed to install Linux Mint via USB. Because Linux Mint 15 Olivia comes by default only with Cinnamon and MATE, some users may want to install new desktop environments. Moonlight was removed from Linux Mint because of a bug that made Firefox crash. Why would you want a Debian/Ubuntu/Mint/Cinnamon stack, when you can have Debian/Cinnamon? Remember that this is a development release and it should not be installed on production machines. Linux Mint 14 Nadia release candidate, based on Ubuntu 12.10. Linux Mint followed up on several requests, both through comments and messages we received on Facebook. Starting with this release candidate: the latest developments of the Ubuntu-based Linux distro that turns out to have many fans, Linux Mint 10 RC, based on Ubuntu 10.10. If my idea is approved, I think it would be great if Linux Mint LXDE weighed a little more. Since Linux Mint is open source as well as free, you can download and try it now. Some of the packages we distribute are under the GPL. The upload dialogs were improved and now look similar to the Firefox download dialogs. The purpose of Linux Mint is to produce a modern, elegant and comfortable operating system which is both powerful and easy to use. To upgrade from Linux Mint 10 LXDE RC, simply apply any level 1 and 2 updates (if any) available in the Update Manager.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00498.warc.gz
CC-MAIN-2022-40
8,787
11
https://www.upwork.com/o/profiles/browse/skill/json/?page=63
code
I have 6+ years of experience in installation/configuration/debugging of CMSes and frameworks. - I am a master of PHP/MySQL scripting and an expert in "WordPress CMS" coding conventions and configurations. - I am very good with mobile-optimized website design and development. - I have developed many Android applications and I have 4+ years of experience in mobile development. - I have successfully delivered GPS-driven apps using Google Maps / OpenStreetMap, social media, login/utility apps, 2D games, enterprise apps, gallery browsing, recording, QR coder etc. using native Xcode / Eclipse. - I have vast experience in OpenCart, UberCart, PrestaShop, osCommerce & VirtueMart, along with major CMSes like WordPress, Joomla, Drupal, and Magento, and am confident in resolving issues. I have skills in the following CMSes: Skills: CSS3, HTML5, PHP, WordPress, ... Affiliated With: Dark Horse Services. Android skills: - jQuery, HTML5, CSS3 - JSON and XML, REST API, push / local notifications - Facebook and Twitter API, file transfer functionality (images, video, audio), Google Maps API, Osmand Map API - In-app purchase, payment gateways - Chat applications, file sharing platforms, online event booking platforms, image management/rating applications - Taxi booking applications, kids-oriented applications, invoice apps - CRM apps, online food ordering apps, retailer-buyer apps, video/image sharing apps.
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042981525.10/warc/CC-MAIN-20150728002301-00181-ip-10-236-191-2.ec2.internal.warc.gz
CC-MAIN-2015-32
1,414
18
https://www.amazon.jobs/en/jobs/1790410/software-development-engineer-i?cmpid=SPLICX0248M
code
Do you want to own cutting-edge technology, solve new problems that didn't exist before, and have the ability to see the impact of your successes? Amazon is growing, and we need SDEs who move fast, are capable of breaking down and solving complex problems, and have a strong will to get things done. SDEs at Amazon work on real-world problems on a global scale, own their systems end to end, and influence the direction of our technology, which impacts hundreds of millions of customers around the world. At Amazon an SDE can expect to design flexible and scalable solutions and work on some of the most complex challenges in large-scale computing, utilizing skills in data structures, algorithms, and object-oriented programming. Coming to Amazon gives you the opportunity to work on a small development team in one of our many organizations: Amazon Web Services, eCommerce Services, Kindle, Marketplace, Operations, Platform Technologies, and Retail.
- Programming experience with at least one modern language such as Java, C++, or C#, including object-oriented design
- Bachelor's degree in computer science, computer engineering, or a related technical discipline
- Strong object-oriented design and coding skills (C/C++ and/or Java, preferably on a UNIX or Linux platform)
- Knowledge of Perl or other scripting languages a plus
- Experience with distributed (multi-tiered) systems, algorithms, and relational databases
- Experience in optimization mathematics (linear programming, nonlinear optimization)
- Ability to effectively articulate technical challenges and solutions
- Deal well with ambiguous/undefined problems; ability to think abstractly
- Previous technical internship(s) preferred
- Graduate degree a plus
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00638.warc.gz
CC-MAIN-2022-05
1,776
15
https://help.quickbase.com/api-guide/signout.html
code
This call is for use by API client implementations that make use of the ticket cookie rather than the <ticket> parameter. Invoking this call returns a null ticket cookie (with the name TICKET). In some cases, invoking API_SignOut results in applications at the local machine (the API client) being unable to access Quick Base applications until API_Authenticate is called for a new ticket cookie. This call does not invalidate any tickets, log off the caller from any Quick Base applications, or prevent further access to Quick Base applications. If the caller has saved a valid ticket, that caller can continue to use that ticket even after API_SignOut is called.
Request parameter:
- udata: A string value that you want returned. It will not be handled by Quick Base, but it will be returned in the response.
Response fields:
- action: The originating request, for example, API_SignOut.
- errcode: Identifies the error code, if any. (See the Error Codes appendix for a list of possible error codes.) 0 indicates that no error was encountered.
- errtext: Text that explains the error code. "No error" indicates that no error was encountered.
- udata: Optional. Contains any udata value supplied in the request.
POST https://target_domain/db/main HTTP/1.0
where target_domain is the domain against which you are invoking this call, for example, quickbase.com. <?xml version="1.0" ?>
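For illustration, a minimal Python sketch of invoking the call; the QUICKBASE-ACTION header and the <qdbapi> request envelope follow the usual Quick Base XML API conventions and are assumptions not spelled out in this excerpt:

```python
import requests

# Hypothetical target domain; replace with your own realm.
URL = "https://target_domain/db/main"

# The <qdbapi> envelope and QUICKBASE-ACTION header are assumed
# from the general Quick Base XML API conventions.
body = "<qdbapi><udata>mydata</udata></qdbapi>"
headers = {
    "Content-Type": "application/xml",
    "QUICKBASE-ACTION": "API_SignOut",
}

resp = requests.post(URL, data=body, headers=headers)
print(resp.text)  # expect errcode 0 ("No error") and the echoed udata
```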
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00553.warc.gz
CC-MAIN-2021-04
1,427
15
https://gitter.im/sbt/sbt-native-packager?source=explore
code
mymodule/docker:publishLocal multiple times? I mean <none> images that are not pruned even if the Docker image tag remains the same:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
my-image dev-SNAPSHOT c927fa471f18 6 seconds ago 740MB
my-image latest c927fa471f18 6 seconds ago 740MB
<none> <none> 7a46221591b3 3 minutes ago 740MB
<none> <none> 6e7c34fdb71d 4 minutes ago 740MB
dockerAutoremoveMultiStageIntermediateImages is set to true by default - however, it seems that it's not taken into account. Without such auto-removal, something like:
docker rmi `docker images -f "dangling=true" -q`
is needed to prune those dangling images. FYI - I solved this issue by adding a custom label to every Docker image created with sbt-native-packager (via build.sbt) and then by adding:
docker rmi $( docker images \
  --filter "label=com.acme.docker-image=my-module" \
  --filter "dangling=true" \
  --quiet )
to the script that's responsible for Docker-related setup in my project. DockerPlugin is about building a Dockerfile and the corresponding build context - and that should be more or less the same for all Docker build tools (I think). This can be factored out into a separate plugin (DockerBuildContextPlugin or whatever). You'll have to reimplement building, publishing (including local) and cleaning - but that should be doable if you're familiar with the tool you're wrapping. Containerfile instead (not a big deal now, as it seems most tools read the Dockerfile as is as well). The trickiest bit seems to be the fact that each tool seems to come with its own fancy features. It seems that podman could be the simplest one to adopt (it's daemon-less and the command line is pretty much the same as Docker), but I am wondering whether, if deciding on splitting the logic in the DockerPlugin, the best approach is to have something like a ContainerPackaging archetype that generates the Containerfile, and then the actual tool wrapper plugin (i.e. The trickiest bit seems to be the fact that each tool seems to come with its own fancy features. I'll take your word for it, since I haven't really dug into those tools. But Dockerfile support seems to be standard enough to warrant reuse. archetype that generates the Dockerfile Yes, that's what I meant with DockerBuildContextPlugin (same as your ContainerPackaging, if I understood correctly). Thanks @TheCodingPenguin / @andrewgee. I hadn't noticed that Bintray is going down?! https://jfrog.com/blog/into-the-sunset-bintray-jcenter-gocenter-and-chartcenter/ Essentially sbt-native-packager is published to a general-purpose sbt plugins repository maintained by sbt itself. So sbt-native-packager will move wherever sbt moves. publish or similar) rather than Docker / publish. The error log probably has more details - most importantly, the name of the failing task. org, it should be there, but nothing. https://repo1.maven.org/maven2/com/typesafe/sbt/ Hi all! I ran into a problem while making a Docker image with the sbt-native-packager Docker plugin. My Docker image internally writes its log in a location where the demiourgos728 user does not have write permission, so an AccessDenied error occurs. How can I add permissions? spark-submit. I think it is achievable through configuration, but is there already a "best" way to do it? Hello, following https://www.scala-sbt.org/sbt-native-packager/formats/windows.html, I tried to build an MSI package. I did it eventually, but I don't get why my first try did not work.
I have a multi-project SBT build definition and I have set the maintainer key at the beginning of the file with ThisBuild / Windows / maintainer := "TOTO". It does not work. But if I put:
.settings(
  Windows / maintainer := "TOTO",
)
on the relevant project, then it works properly and I get my package. I have inspected the key to look at the delegation order. I don't understand; I think I have an issue with scope but I don't get what it is. Thanks for your feedback. maintainer at the project level: https://github.com/sbt/sbt-native-packager/blob/2268343362812bbca6e49f15e05586821a101bc7/src/main/scala/com/typesafe/sbt/PackagerPlugin.scala#L95. Any settings with a larger scope (ThisBuild / maintainer, Global / maintainer) are overridden by this setting. When I add this: graalVMNativeImageGraalVersion := Some("19.1.1") to my build.sbt, it does not find the Docker container. When I add this: containerBuildImage := Some("graalvm/graalvm-ce:21.1.0")
[warn] multiple main classes detected: run 'show discoveredMainClasses' to see the list
[error] stack trace is suppressed; run last Graalvm-native-image / packageBin for the full output
[error] (Graalvm-native-image / packageBin) Could not find a main class.
[error] Total time: 0 s, completed 18.06.2021 15:11:32
What am I doing wrong?
/home/X/Projects/X/build.sbt:32: error: type mismatch;
 found   : sbt.Def.Initialize[sbt.Task[Option[String]]]
 required: Option[String]
containerBuildImage := GraalVMNativeImagePlugin.generateContainerBuildImage("ghcr.io/graalvm/graalvm-ce:latest")
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488538041.86/warc/CC-MAIN-20210623103524-20210623133524-00060.warc.gz
CC-MAIN-2021-25
4,962
62
https://torrent-mac.ru/en-n-4607-next-rc-flight-simulator-1702.html
code
Next - RC Flight Simulator 1.702 for macOS - Download Torrent
- Category: macOS Games
- Name: Next - RC Flight Simulator 1.702
- Size: 429.89 MB
Mac platform: Intel. OS version: 10.12+. Processor type(s) & speed: Intel 64-bit. Install game & crack (read howtoinstall.rtf). Languages: English, French, German, Chinese, Portuguese, Spanish, Italian. Version: Official Website v1.702. The RC flight simulator bears the name neXt because it heralds the next evolution in the model flight simulator industry. The simulator contains 26 flight sceneries, 86 helicopters, 14 multicopters and 16 fixed-wing models. More scenes, models and functions will be added in updates. Thanks to the highly effective programming beneath the hood of the graphics engine, you'll get the highest quality not only on the latest hardware, but also smooth performance on older computer systems. More information: Official Website
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00670.warc.gz
CC-MAIN-2022-05
918
15
https://www.catalyzex.com/paper/arxiv:1611.08657
code
Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regression-based approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector -- Convolutional Experts Network (CEN) -- that brings together the advantages of neural architectures and mixtures of experts in an end-to-end framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as local detectors. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin on four publicly-available datasets. Our approach is especially accurate and robust on challenging profile images.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00638.warc.gz
CC-MAIN-2022-05
987
1
https://access.redhat.com/documentation/en-us/red_hat_process_automation_manager/7.12/html/deploying_and_managing_red_hat_process_automation_manager_services/assets_types_ref
code
Chapter 15. Types of assets
Anything that can be versioned in the Business Central repository is an asset. A project can contain rules, packages, business processes, decision tables, fact models, domain specific languages (DSLs) or any other assets that are specific to your project's requirements. The following image shows the available assets in Red Hat Process Automation Manager 7.12. Case Management (Preview) and Case Definition asset types are only available in case projects. The following sections describe each asset type in Red Hat Process Automation Manager 7.12. Business processes are diagrams that describe the steps necessary to achieve business goals. Case Management (Preview): Case management is an extension of Business Process Management (BPM) that enables you to manage adaptable business processes. Case management provides problem resolution for non-repeatable, unpredictable processes, as opposed to the efficiency-oriented approach of BPM for routine, predictable tasks. It manages one-off situations when the process cannot be predicted in advance. Important: The business process application example includes features that are Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and are not recommended for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Cases are designed using the Case definition process designer in Business Central. The case design is the basis of case management and sets out the specific goals and tasks for each case. The case flow can be modified dynamically during run time by adding dynamic tasks or processes. Data objects are the building blocks for the rule assets that you create. Data objects are custom data types implemented as Java objects in specified packages of your project. For example, you might create a Person object with data fields Name, Address, and Date of Birth to specify personal details for loan application rules. These custom data types determine what data your assets and your decision service are based on. Decision Table (Spreadsheet): Decision tables are collections of rules stored in either a spreadsheet or in the Red Hat Decision Manager user interface as guided decision tables. After you define your rules in an external XLS or XLSX file, you can upload the file as a decision table in your project in Business Central. Important: You should typically upload only one spreadsheet of decision tables, containing all necessary RuleTable definitions, per rule package in Business Central. You can upload separate decision table spreadsheets for separate packages, but uploading multiple spreadsheets in the same package can cause compilation errors from conflicting RuleTable attributes and is therefore not recommended. Decision Model and Notation (DMN) creates a standardized bridge for the gap between business decision design and decision implementation. You can use the DMN designer in Business Central to design DMN decision requirements diagrams (DRDs) and define decision logic for a complete and functional DMN decision model. A rule file is typically a file with a .drl extension. In a DRL file you can have multiple rules, queries and functions, as well as some resource declarations like imports, globals and attributes that are assigned and used by your rules and queries.
However, you are also able to spread your rules across multiple rule files (in that case, the extension .rule is suggested, but not required); spreading rules across files can help with managing large numbers of rules. A DRL file is simply a text file. Domain Specific Languages (DSLs) are a way of creating a rule language that is dedicated to your problem domain. A set of DSL definitions consists of transformations from DSL "sentences" to DRL constructs, which lets you use all of the underlying rule language and decision engine features. Data enumerations are an optional asset type that can be configured to provide drop-down lists for the guided designer. They are stored and edited just like any other asset, and apply to the package that they belong to. Forms are used for collecting user data for business processes. Business Central provides the option to automatically generate forms, which can then be edited to meet specific business process requirements. Global variables are used to make application objects available to the rules. Typically, they are used to provide data or services that the rules use (especially application services used in rule consequences), to return data from the rules (like logs or values added in rule consequences), or for the rules to interact with the application by doing callbacks. Guided Decision Table: Decision tables are collections of rules stored in either a spreadsheet or in the Red Hat Decision Manager user interface as guided decision tables. Guided Decision Table Graph: A guided decision table graph is a collection of related guided decision tables that are displayed within a single designer. You can use this designer to better visualize and work with various related decision tables in one location. Additionally, when a condition or an action in one table uses the same data type as a condition or an action in another table, the tables will be physically linked with a line in the table graph designer. For example, if one decision table determines a loan application rate and another table uses the application rate to determine some other action, then the two decision tables are linked in a guided decision table graph. Rules provide the logic for the decision engine to execute against. A rule includes a name, attributes, a when statement on the left-hand side of the rule, and a then statement on the right-hand side of the rule. Guided Rule Template: Guided rule templates provide a reusable rule structure for multiple rules that are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project. All assets are contained in packages in Business Central. A package is a folder for rules and also serves as a "namespace". A Solver configuration is created by the Solver designer and can be run in the Execution Solver or in plain Java code after the KJAR is deployed. You can edit and create Solver configurations in Business Central. Test scenarios in Red Hat Process Automation Manager enable you to validate the functionality of rules, models, and events before deploying them into production. A test scenario uses data for conditions that resemble an instance of your fact or project model. This data is matched against a given set of rules, and if the expected results match the actual results, the test is successful. If the expected results do not match the actual results, then the test fails.
Test Scenario (Legacy): Red Hat Process Automation Manager 7.12 includes support for the legacy Test Scenario because the default Test Scenario asset is still in development. Work Item definition: A work item definition defines how a custom task is presented, for example, the task name, icon, parameters, and similar attributes.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818312.80/warc/CC-MAIN-20240422144517-20240422174517-00356.warc.gz
CC-MAIN-2024-18
7,243
39
https://forum.freetronics.com/viewtopic.php?f=44&t=6725
code
Expand the capabilities of your Raspberry Pi quickly and easily using expansion boards. Yes, I now have two I2C buses. After I cut the solder-pad cut-tracks, I soldered a header to the prototyping area, connected up the pads going to the Pi (via the level translators) and hooked up a 20x4 LCD display with an I2C piggyback board. Running a Python demo program produced a message on the display - all good!!!
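A minimal sketch of the kind of Python demo that exercises both buses (the bus numbers and the 0x27 backpack address are assumptions, not details from the post):

```python
from smbus2 import SMBus

# Two I2C buses are now available; the bus numbers are an assumption
# (check with `i2cdetect -l` on the Pi).
for bus_num in (0, 1):
    with SMBus(bus_num) as bus:
        try:
            # Poke a typical PCF8574 LCD backpack address (0x27 assumed).
            bus.write_byte(0x27, 0x00)
            print(f"Device responded on i2c-{bus_num}")
        except OSError:
            print(f"No device at 0x27 on i2c-{bus_num}")
```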
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00232.warc.gz
CC-MAIN-2021-43
443
3
https://mirceaulinic.net/2019-01-09-do-we-need-network-automation/
code
In my mind, especially after seeing how automation massively helped one of the largest global networks (Cloudflare - my current employer), I simply cannot conceive that a network can possibly run reliably without a form of automation. However, there are still plenty of examples of networks running (often with major outages) without any automation at all, yet reluctant to start adopting automation methodologies. I have debated the subject at many conferences and meet-ups, and I have heard a variety of weak arguments against automation, or a form of anxiety caused by false assumptions. In today's post I would like to share my views on some of the most frequent myths I've heard, and hopefully bust them.
What is automation, actually?
One of the laws of thought states that, in order to ensure that we're speaking the same language, we must define the terms. I have searched for several definitions of automation, and here's what I found: - "The technique, method, or system of operating or controlling a process by highly automatic means, as by electronic devices, reducing human intervention to a minimum." - "The technique of making an apparatus, a process, or a system operate automatically.", where automatically means "Having a self-acting or self-regulating mechanism". On the other hand, automation is often (mis)understood as just configuration management. Needless to say, configuration management is indeed a major factor, but definitely not the end goal. What matters most is whatever is most painful to you and most boring for the engineers on your team. In simpler words: consider automating whatever is most painful to you, or causing the most issues to your organisation. Start automating what you hate doing the most. These are easy wins that will bring excitement to your team, seeing that automation actually works, and equally create more time for you to automate more. The goal is, of course, to automate everything possible, but it's always good to see early results.
What does "everything possible" mean?
We now have so many tools that provide you with enough information about what happens in your network (either developed or extended internally, e.g., napalm-logs, Prometheus metrics, etc., or commercial products, e.g., ThousandEyes, etc.), so the question is: what do you do with all this data? "Watch a display and when an event occurs, manually execute a command to apply a configuration change" is not the right answer - not only does it conflict with the definitions I shared above, but this process would also rely on you seeing the event at the right time and acting on it before your customers are impacted; sometimes, this might be too late. (And that is assuming the configuration change you deploy manually is correct and won't impact your network even more negatively.) In my opinion, one should aim for a self-healing system that, when it detects an event, also applies the necessary changes. But there's more to it than auto-remediation: what about the boring notifications you need to write manually (i.e., in case of BGP session flapping, interface flapping, massive packet loss caused by your transit providers, etc.)? Additionally, the system won't always be capable of fixing the issue by itself: in this case, it can create the notifications for humans to investigate the issues further, for example by raising a ticket.
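As a toy sketch of that detect-then-remediate loop (all event names and actions here are placeholders; a real deployment would consume structured events, e.g. from napalm-logs, and push changes through an automation framework such as Salt):

```python
# Toy event loop illustrating: detect -> try to remediate -> escalate.
# Every name below is a placeholder, not a real integration.

def remediate(event: dict) -> bool:
    """Try an automatic fix; return True on success."""
    if event["type"] == "BGP_PREFIX_LIMIT_EXCEEDED":
        # e.g. raise the neighbour's prefix limit by 25% (placeholder logic)
        new_limit = int(event["limit"] * 1.25)
        print(f"Raising prefix limit on {event['neighbor']} to {new_limit}")
        return True
    return False  # unknown event: cannot auto-fix

def open_ticket(event: dict) -> None:
    # Stand-in for raising a Jira ticket or paging a human.
    print(f"Escalating to humans: {event}")

def handle(event: dict) -> None:
    if not remediate(event):
        open_ticket(event)

handle({"type": "BGP_PREFIX_LIMIT_EXCEEDED", "neighbor": "10.0.0.1", "limit": 1000})
handle({"type": "INTERFACE_FLAP", "interface": "xe-0/0/0"})
```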
At RIPE 77 I had a talk that might help you see what I mean: Three years of automating large scale networks using Salt presents some good examples (the list can be nearly infinite) of network automation beyond configuration management triggered by running a command manually, i.e., automatic BGP prefix limit update when the neighbours breach the existing limits, automatic Jira tickets raised when the BGP password is incorrect, automatic emails sent to transit providers on high service degradation due to packet loss, etc. You can similarly implement and automate all of these, and many, many others for more reliable, stable, and self-resilient networks. This is what network automation is all about. Automation is a just a fancy thing to be in-line with the rest of the tech Managing networks comes with a very high cost as both in terms of equipment and human resource; if the company you’re working for decided to make this investment, it probably means that the network plays a critical role within the organisation. With this in mind, it is probably safe to assume that the reliability and the performances of this company highly depend on the network. In other words, the better your infrastructure, and implicitly the network, the better regarded is your company going to be, and the customers are certainly going to notice that. I can give an example from the company I am currently working for, Cloudflare: before I joined, customers, for good reasons, were always complaining about the quality of service and frequently experiencing service degradation. Even though this was due to external causes (in particular, extremely poor performance of the transit providers), customers don’t care about that: they pay you to offer them good services, otherwise they’ll go to your competitors, whatever would be your reasons. In our case, the reasoning was the low speed of reaction and the scale to manually perform configuration changes when having to deal with external factors. Building an automation logic that intelligently reroutes the traffic, and applies various other configuration changes as the business logic requires, immediately after the external factors are detected. This is something that humans aren’t able to perform manually, especially when the configuration changes have to be applied in tens of places simultaneously. In fact, we’ve seen the results very quickly, and the number of customers on-boarded took off, while the amount of support tickets due to network issues just dropped. Disclaimer: I am not speaking in the name of my employer; similarly, I have not been told / paid / whatever to write these: I’m trying to use this as an example out of my own experience: to me, it was an incredible experience and opportunity to give a helping hand with this, and seeing the results and the positive impact on the business, as in terms of revenue, customer satisfaction, etc. Nevertheless, there are many other factors as well, but that’s beyond the purpose of this post. The more reliable and flexible is your network, the more customers are going to trust your company. Currently that’s not that case with most networks: in fact, when something goes wrong, the famous “it’s always the network” tends to be accurate. We need to do better than this, we can do better than this, we have all the resources to do so. Everyone needs to learn to code This is one of the weakest arguments against automation I’ve heard. No, not everyone has to learn to code. 
At most, your toolset might change and might be exposed to new a slightly new world. Eventually, instead of CLI command X, you might execute the CLI command Y, and that’s pretty much it – but the effect of what the command does is the key here: it goes without saying that there’ll forever be a requirement for engineers that have to deeply understand the effects of deploying a change - be it local or global across your network - from a networking perspective. We need an even stronger background when you think that the new command Y does a lot more then the previous command X (from my previous example). Providing access to someone that doesn’t fully understand the implications, may lead to disastrous results. We inherit most of the methodologies from the system side. If you look into the structure of the system administration teams, you’ll find out that they are usually divided (although not always a hard split) into engineers that continuously write code, and the others that are (fully) dedicated to operations. There is no reason why people would assume that the existing network operations teams would migrate over night and everyone would start writing code. Yes, there is a high demand of people writing code for networking; but, as I mentioned in the previous paragraph, there’s an even higher demand of engineers that understand networking. A soft delimitation between developers and operational engineers that actively collaborate in a continuous feedback loop, is going to win on the long term - in my opinion. I am also saying this out of the experience I had in the last two teams I worked with. Operational engineers don’t have to write code. If they want however, they must be encouraged, their initiative is laudable, but this cannot possibly and it will never be enforced. To sum this up: I don’t think it’s feasible to assume that writing code is ever going be a hard requirement. I do expect however small changes in the day to day operations, but these are completely normal. Besides, network engineers are smart and have always been able to adapt and learn new technologies. At the same time, I would always encourage everyone to dive into writing code - at least for fun. It’s surely a plus, at the end of the day, it’s an investment in your own skills, and widening your view, and who knows when you might actually give be able to give a helping hand and pleasantly surprise your colleagues, or land a better (paid) job. :-) A vast majority of the networking tooling is written in Python. If you are interested, I would recommend a few good resources, but not limited to: - Kirk Byers periodically runs a nice Python course for beginners. - Mark Lutz’s Learning Python: it is the first book I read about Python. A beautiful book, I totally enjoyed reading. - Matt Harrison’s courses at the O’Reilly online learning platform I will similarly make time to put down some notes in this direction, sharing some tricks to show everyone how easy it is today to build something around the existing tooling, without requiring advanced background and a bit of will. We’ll lose our jobs No, we won’t. Nobody will. In fact, all the companies that embraced automation struggle to hire: there aren’t as many candidates as open roles. This excuse is somewhat related to the previous myth regarding the requirement of writing code. Many people fear that automation would restrict everyone to having to learn and write code, therefore they would be replaced. 
Well, I hope that with the thoughts I shared previously, I’ve been able to clarify that this scenario is surely impossible. With the risk of being pedantic, I must confess that I have experienced that myself too: at the very beginning, I felt some engineers slightly anxious - perhaps due to the same reason; but after some time, seeing the potential of automation, how much it simplifies their job and exploits more their networking skills rather than their speed-typing skills, how easy is to deploy a configuration change on hundreds of devices instantly, and how the network auto-detects issues before they become a real concern, they started to love automation. If you still don’t believe in this, just give it. If you are an engineer struggling to have automation adopted by your team, look into a different approach and offer your colleagues quick and easy wins: start by offering solutions / tools the most painful issues you’re dealing with frequently - taking that mass out of their shoulders and putting it onto the computer to deal with it, is surely going to be a winner. Given your business use-case, try to make it clear that automation is not about replacing engineers. In fact that’s not event the point. I once had the chance to be listening to Tim O’Reilly speaking at APRICOT 2017 on this exact topic. His keynote is luckily recorded and I recommend you to watch it: it is part of the opening ceremony, the actual keynote starting at 1:47. A slightly different version of the same is available here - a better quality of the recording, however the APRICOT talk slightly covered some topics more specific to the networking industry. If you don’t have the time right now, don’t skip it. At the very least, bookmark it for later. The key takeaway of this speech is that you should see the power of automation as an opportunity for more meaningful and exciting jobs. Along the history, there are plenty of examples of how automation transformed the world. Did we run out of jobs? On the contrary - we still struggle to hire as many engineers as we would require (think about this statement from the perspective of the amount of work, I wouldn’t want to diverge into a management/administrative point of view, which would be a different argument, beyond the scope of this post). Another interesting outcome to remember is the human and machine hybrid. A good example is the aviation industry: it is a well known fact that pilots no longer fly modern planes manually; instead, they are assisted by computers. I once had this argument and I have been told: “yes, but before introducing computers in aviation, there were 6 pilots versus only 2 or 3 now”. This is true if you limit your view to a single plane only. But let’s zoom out a bit: globally, how many pilots are there nowdays compared to 50 years ago? Millions probably versus a few thousands (rough approximation) 50 years ago. I think this speaks for itself, it’s a matter of perspective. And - most importantly - that could not have been the case without computers: this is the very reason why aviation is so reliable; in result, more and more people feel more confident to fly, and continuously increasing demand can only create more and more jobs. With all these mobile apps and services connected through the Internet it is no surprise that the traffic levels are increasing much faster than ever before. I’m not telling you a secret with this, you probably know these details better than me; I’m taking this chance to emphasise our role in this entire machinery. 
It’s clear that more traffic automatically implies bigger networks, when translates to more network devices to manage. Scaling out the human resources in order to match the gap by continuing to operate the network manually is only going to exponentially increase the number of human mistakes. But scaling the teams intelligently in order to operate that network more reliably through a form of automation is a completely different discussion. One good example of massive continuous growth of the network size is inside the data center. Not long ago I read Dinesh Dutt’s BGP in the data center: as Dinesh pointed out, managing the data center network becomes possible only through automation. I think you see the parallel I am drawing here: my belief is that networks managed by humans assisted by computers will only enable for more stable and reliable networks, which will definitely lead to more and more job demand. Mentioned in the previous paragraph, I would like to talk again about automation by auto-remediation: it’s a given that the machine is never actually going to auto-remediate everything, but only a part of the problems, the rest of them being sent to a reporting system when unable or unsure what action to take. There is no standard where to draw the line between these, it mainly depends on the complexity of the business logic, and a variety of other environmental factors. But one thing is for sure: they will both co-exist, and enable us to focus on the real issues, that the machine is unable to resolve, allowing engineers to practice engineer work. Another fundamentally false assumption is that jobs in the networking space would eventually evolve in such a way that only experts in both networking and software simultaneously would have their place. With the risk of being terribly brutal, I find this assumption ridiculous. Out of experience, it’s incredibly hard to do both networking and software at the highest levels, at the same time - it’s close to impossible. This points somehow again to the “everyone needs to learn to code” problem which I hope I managed to clarify already. :-) We’ll lose our jobs after automation is done The truth is that there’s no such thing as “automation is done”. If anyone tells you that their network is fully automated, take that with a pinch of salt. It’s extremely unlikely that anyone got that far yet - as of January 2019, I’m not aware of anyone that has that, and never heard anyone even remotely close to that; they may have automated configuration management fully in place - very good, great start - but remember: automation is so much more than just configuration management (I have already expanded on this topic above). Automation is a continuous process that is never going to end: not only that your network is growing, but business requirements change and expansion of the services offered by your company are at the heart of a healthy business. To put this in a different way, you’ll never be done, you will always have to adjust / change the automation logic you put in place sooner or later (of course, probably not entirely, but replace old pieces of the puzzle with newer ones). It’s a never ending game. I will refer again to the system side: they call this “DevOps”; they did this for many years already - are they done yet? No. In fact, the number of openings is now higher than ever, specifically because there’s so much more to automate. 
The CLI is dead I am not sure what are the origins of this myth - perhaps vendors trying to sell new products, or just the same old features branded under a fancy label, perhaps overly excited fanboys, but hear me out: the CLI is not dead - I am still using it, you are still using it, we will continue using it. I have initially understood this sentence as a metaphor interpreted as “we are not going to depend massively on the native CLI” - which potentially, ideally, would be true. But I was wrong: I was surprised to find out that the projected “expectations” should be that future devices would eventually be delivered without any CLI at all. We inherit the automation methodologies from the server side, we are barely following what they did decades ago. Did you hear any story about Debian, OpenBSD, or another Unix distribution dropping their CLI because there are automation tools allowing remote execution without requiring CLI? You probably didn’t, simply because that’s not going to ever happen. :-) I expect us - and hope - that we’re going to use less and less the CLI, and steadily migrate to the automaton tools we’ll eventually have in place. But, between this and assuming that we’ll suddenly get rid of the CLI completely, is just a fairytale with unicorns. That’s even more ridiculous when one of the vendors largely trumpeting this out, Cisco, still doesn’t provide a reliable API, particularly on some platforms such as Cisco IOS, and the CLI remains the only option you can actually use - also for automation, sadly. I will start with an example from the real world: when starting to build a new house, do you expect to move in immediately after starting to build it? The same goes with automation: think about it as a construction site - you may not see the results and the benefits immediately, but when it’s done, it’s so much better to stay inside than outside. Besides, you can actually start moving in before it’s 100% ready! ;-) Please be patient, invest time, hire more people that have experience with writing software - even though they may not have much experience in the networking space, they’ll learn. At the same time, your network engineers might be interested to learn software; give them time, invest in them, sign them up to trainings and start with the programming basics. Even though it may take a long time, or simply they’ll never write hardcore software, if they have an interest in this direction, it’s good to have a background and an understanding of what’s happening under the hood. Waiting for the “best” tool to be built Are you waiting for them to build themselves? WE build the tools, and by we I’m including you too. Besides: there’s no such thing as “best” tool - there are simply tools that are good to solve a particular set of challenges, and others that perfectly resolve a different set of challenges - and they may eventually overlap (or maybe not). This is not a discussion about the tools, the most important is for automation to happen! (Note: I have initially worded the phrase above as “the most important is for automation to happen, in whatever way”, but I wasn’t happy with this: no, not in any form, it’s important to get things right, and, in your own interest and sanity don’t reinvent the wheel. My recommendation is to use a widely adopted framework. Personally, I have a bias towards Salt, as it’s by far the most complete and flexible I’ve worked with, but you should use whatever makes your environment happy, i.e., solves all your requirements.) 
None of the existing tools will ever fit your own environment perfectly and entirely, and solve all your needs overnight. I’m sorry if that surprises you, but that’s not the case today and it never will be: you will have to extend their capabilities and adapt them to your own needs; eventually, whenever possible, it would be very nice to give back to the community and open source bits of your work. This is the way that is proven to produce the quickest results for yourself, and to help drive the community efforts at the same time. Have an extensive meeting with your team and evaluate your needs; put together a list of requirements, then investigate which automation framework would suit your needs best. Spend time with that, analyse carefully, and always listen to your network. It doesn’t matter that I’m always telling you how great Salt is, and it doesn’t matter if your best friend is an Ansible fan: all that matters is which one suits you best.

Besides the obvious gains in terms of speed and reliability of the configuration changes, there are a number of other benefits, including:
- Easy auditing of changes, and of the actual configuration the devices are running. If your company is interested in PCI compliance, this is a big plus.
- Peer review: a change doesn’t get in without being reviewed by multiple pairs of eyes.
- In line with the above, you can set up a CI/CD pipeline to automatically check and validate your changes.
- History: you can keep track of the changes, and more easily follow, incrementally, what has changed, when, and why. This is also a big win in tracking down the root cause of an issue introduced by a particular change. It is true that some platforms such as Junos offer this natively; however, it comes with some limitations in terms of the number of steps you can look back into the history, the description (the reasoning) of the change, and accessing that information locally vs. globally (i.e., you need to log into each and every device to check this information, while through an automated system this information is centralised and immediately available).
- Reuse of code, and of existing tooling already available.
- Yet again, it’s much more than just configuration management. It’s about making your life easier and your job more reliable, from any perspective.

Please make it happen

As 2019 has just begun, I hope this post is going to help you be less afraid of automation. If you have any other concerns, or disagree with what I said, please leave a comment or drop me an email and I will be happy to discuss. Similarly, if you have heard other weak arguments against automation which I didn’t cover, I would love to hear them. In the end, I would like to share a video from NANOG 71, where together with Scott Lowe, Kirk Byers, David Barroso, Jeremy Stretch, and Jathan McCollum, we put together a panel on network automation:
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658928.22/warc/CC-MAIN-20190117102635-20190117124635-00045.warc.gz
CC-MAIN-2019-04
23,933
60
https://colinwren.medium.com/building-a-midi-to-lsdj-web-app-in-flask-react-f6b5fd704b3f
code
Continuing on from the work I’d done turning a MIDI file into LSDJ commands, I set myself a goal to fix some of the issues I had with printing the entities to the CLI. My main bugbear with the CLI was the fact that it was hard to keep track of which phrase/chain you were on, and if there were many phrases you could easily make mistakes due to how clunky things looked. Another restriction I found with the CLI approach was the fact that it required someone to download Python and the package to run it. In order to allow the tool to be used by more people and make it easier to visualise its output, I decided to build a web app to handle a MIDI file upload and return a series of screens representing the phrases, chains and song that make up the LSDJ version of that song. In order to display the LSDJ entities in a web app I first needed to refactor the existing CLI so that it could provide a serialised version of the LSDJ structure I used to create the CLI output. I started by breaking out the original script into multiple modules that would allow me to have a common MIDI to LSDJ translation ‘core’ and then pass that data structure into different ‘presentation’ layers. One of these presentation layers would be the existing CLI and the other would be a dict representing all the data for the song, which could then be serialised into JSON for a consuming system or app to use. To help with the design of the JSON format, and to allow others to understand how to consume it, I created an OpenAPI 3 document that details the endpoints and the data structure they’ll return. I then implemented the API itself using Flask, as it’s a lightweight Python-based server and I wrote the original script in Python. Eventually I’ll add the ability to save the LSDJ JSON into a document store such as CouchDB, at which point I may just have the user…
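A rough sketch of the shape this design takes - the function and endpoint names here are invented for illustration, since the post doesn’t include its actual code:

```python
# Hypothetical sketch of the design described above: a shared MIDI-to-LSDJ
# "core" returns a plain dict, and a thin Flask layer serialises it to JSON.
from flask import Flask, jsonify, request

app = Flask(__name__)

def translate_midi(midi_bytes):
    # Stand-in for the core translation module: parse the MIDI bytes and
    # return the LSDJ phrases, chains and song arrangement as a dict.
    return {"phrases": [], "chains": [], "song": []}

@app.route("/song", methods=["POST"])
def upload_song():
    midi_file = request.files["file"]        # multipart/form-data upload
    song = translate_midi(midi_file.read())  # presentation-agnostic dict
    return jsonify(song)                     # JSON for the consuming app
```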
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679102469.83/warc/CC-MAIN-20231210123756-20231210153756-00287.warc.gz
CC-MAIN-2023-50
1,876
10
https://nodecore.mine.nu/w/Running
code
Type: Player Action (Passive)
Trigger: Walking forward for >3 seconds
Walking forward will trigger a gradual, but increasing, boost in speed after 3 seconds of continuous walking. This effect increases to a maximum speed at ~10 seconds of walking, by which time the player can be assumed to be in full sprint. This effect will persist until it is cancelled or until the player stops holding down the forward key. Running can be cancelled by the following events:
- Pressing the "down" arrow key while moving forward.
- Releasing the "up" arrow key.
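Taken literally, the ramp described above could be modelled like this - a sketch only, since the page specifies neither the shape of the curve nor the top-speed multiplier:

```python
# Sketch of the described speed ramp: no boost for the first 3 seconds,
# then a gradual increase up to full sprint at ~10 seconds of walking.
# The linear shape and the 2.0x cap are assumed, not taken from the page.
def speed_multiplier(seconds_walking_forward):
    TRIGGER, CAP, MAX_BOOST = 3.0, 10.0, 2.0
    if seconds_walking_forward <= TRIGGER:
        return 1.0                            # normal walking speed
    t = min(seconds_walking_forward, CAP)
    frac = (t - TRIGGER) / (CAP - TRIGGER)    # 0.0 at 3 s, 1.0 at 10 s
    return 1.0 + frac * (MAX_BOOST - 1.0)
```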
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100427.59/warc/CC-MAIN-20231202140407-20231202170407-00601.warc.gz
CC-MAIN-2023-50
548
7
http://forums.adobe.com/message/4415246?tstart=0
code
In the past I was able to add a Google map via a Google API key. I'm having an issue finding this. Any suggestions? The goal is to add the code to my website for those that are looking for a store location from the database of stores on my website. Thanks in advance. Are you a Kiyuco member? I have used this http://kiyuco.com/tutorials/map-locations-using-the-built-in-web-app-google-maps-module and it worked well for me. Yes, I'd recently joined. At the time of my question, I wasn't a member yet. But I did look at the tutorial. The issue was getting the API key. Unless I read it wrong on Google, things appear to have changed on how to get access to it. Yes, Google has changed things and you now don't need an API key. The good old boys at Kiyuco have a later Web App Google Map tutorial here http://kiyuco.com/tutorials/update-maps-to-google-map-api-v3 which explains how to set it up. Hope that helps
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701562534/warc/CC-MAIN-20130516105242-00016-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
939
6
https://fmv.jku.at/projects/index.html
code
This page lists proposals for practical projects and thesis work in both the computational engineering specialization of the computer science master as well as the master in artificial intelligence. Most of them have a more theoretical background, but more often than not they also require good low-level implementation skills. Satisfiability checking (SAT) is a fast-moving technology and is used in many applications, from hardware verification to device driver verification. We are particularly interested in combining structural and high-level reasoning with SAT solving, such as parity reasoning for formulas with many XOR constraints, as well as algebraic techniques from computer algebra. Related to this are different forms of parallel SAT solving (GPU, threaded, cluster, cloud). Even though binary decision diagrams (BDDs) played an important role in the late 20th century, they fell out of fashion due to the SAT revolution we have witnessed in the last 20 years. There are, however, certain problems for which BDD-based techniques seem to be a better fit, and thus we want to revisit BDDs and related formalisms, such as ZDDs, for both verification and optimization. Decision procedures for Quantified Boolean Formulas (QBF) can be used to handle various verification problems. Recently these procedures have become much more powerful, though benchmarking in the real world and adapting QBF solvers to real problem domains has not been investigated much.
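For readers new to the area, here is a minimal taste of what a SAT instance looks like programmatically, using the python-sat package - purely illustrative, since the projects above would typically work on solver internals rather than through this API:

```python
# Tiny SAT example with the python-sat package (pip install python-sat).
# Encodes (x1 or x2) and (not x1 or x2) and (not x2 or x1), which forces
# x1 == x2 with at least one of them true, then asks for a model.
from pysat.solvers import Glucose3

with Glucose3() as solver:
    solver.add_clause([1, 2])       # x1 or x2
    solver.add_clause([-1, 2])      # not x1 or x2
    solver.add_clause([-2, 1])      # not x2 or x1
    if solver.solve():
        print(solver.get_model())   # [1, 2]: both variables true
```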
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679101282.74/warc/CC-MAIN-20231210060949-20231210090949-00701.warc.gz
CC-MAIN-2023-50
1,551
5
https://textbooks.cs.ksu.edu/cc315/ii-trees/5-binary-trees/8-balance/embed.html
code
We have the same nodes but our root is now 12, whereas before it was 14. This is a valid binary tree. We call this a balanced binary tree. A balanced binary tree looks visually even between the left and right subtrees in terms of the number of nodes. Note: Balancing is not necessary for a valid binary tree. It is, however, important in terms of time efficiency to have a balanced tree. For example, the number of actions when inserting an element is about the same as the number of levels in the tree. If we tried to add the value 11 into the unbalanced tree, we would traverse 5 nodes. If we tried to add the value 11 into the balanced tree, we would traverse just 3 nodes. We believe that balancing binary trees is out of the scope of this course. If you are interested in how we might balance a tree, feel free to check out these videos by Dr. Joshua Weese.
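To make the traversal counts concrete, here is a small sketch; the exact node values are illustrative rather than copied from the lesson's figures:

```python
# Count how many nodes an insertion visits in a binary search tree, to
# compare a degenerate (unbalanced) shape with a balanced one.
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def insert(node, value):
    visited = 1                       # count the node we are looking at
    while True:
        side = "left" if value < node.value else "right"
        child = getattr(node, side)
        if child is None:
            setattr(node, side, Node(value))
            return visited            # levels traversed to find a slot
        node = child
        visited += 1

unbalanced = Node(2)                  # 2 -> 4 -> 8 -> 10 -> 14, one long chain
for v in (4, 8, 10, 14):
    insert(unbalanced, v)

balanced = Node(12)                   # 12 with subtrees (8: 4, 10) and (14)
for v in (8, 14, 4, 10):
    insert(balanced, v)

print(insert(unbalanced, 11))         # 5 nodes traversed
print(insert(balanced, 11))           # 3 nodes traversed
```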
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473738.92/warc/CC-MAIN-20240222093910-20240222123910-00186.warc.gz
CC-MAIN-2024-10
855
4
https://dmitrysotnikov.wordpress.com/2008/06/13/breakthrough-product-of-teched-2008/
code
At the TechEd 2008 in Orlando PowerGUI has received the highest award of the show – The Breakthrough Product of the show. Here’s the award description: This award is for the best single product of the Tech•Ed 2008 IT Professionals, and could be from any IT Pro award sub-category. I am super-excited. This is an incredible achievement for the team, and frankly for the whole PowerGUI and PowerShell community. Without you guys providing all the feedback and feature requests, contributing your PowerPacks to the library, localizing us to every language in the world, and simply spreading the word we would not have been where we got. Love you all! 🙂 You can find the list of winners in various subcategories here.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122621.35/warc/CC-MAIN-20170423031202-00310-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
722
5
https://communities.mentor.com/thread/17005
code
The Windows commands are universal and already in the .vbs file.
Cut - Ctrl-x
Copy - Ctrl-c
Paste - Ctrl-v
Undo - Ctrl-z
Print - Ctrl-p
If an object is selected, the above operations are valid. However, once the enhanced font toolbar is activated, the keyboard commands are deactivated. Editing the .vbs file will have no further effect. Thanks for your reply! I still cannot get Copy - Ctrl-c to work when I highlight text in the schematic. This is the case even if I do not right-click and see the Cut/Copy/Paste menu as shown in the image I attached. Also, if I just left-click on the text (instead of double-clicking and highlighting the text), the Ctrl-c Windows command still does not work. Could you please let me know how to get this working? This is the case for the version of xDX Designer that I am running. Here are the version details: If you continue to have issues with this, I would suggest filing a Service Request at: Issues like this can occur due to your configuration and will need to be looked into more closely. Something that typically cannot be done through the Communities site.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912204736.6/warc/CC-MAIN-20190325234449-20190326020449-00158.warc.gz
CC-MAIN-2019-13
1,103
11
https://northeme.com/support/case/sorting-order-for-home-screen-sections-posts-3347
code
How to set the sorting order for Home Screen sections? They seem to appear randomly and I found no settings for sorting them, e.g. sorting by post date. Welcome to Northeme Support Center & Knowledgebase. Theme support covers installation & getting set up, trouble with using theme features, and bug fixes. Our goal is to reply to your questions within one business day. Theme support is only available for our premium themes. Please sign in to access support forums and submit your queries.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00578.warc.gz
CC-MAIN-2023-50
484
5
https://shearinglayers.com/focus/estimation-and-cadence/
code
The first time you did something like this, you pretty much made it up as you went along. Over time, as you worked through these situations again, you spent more and more time on safe ground. To a casual observer, it looked like you knew what you were doing. Shockingly, considering where you started on the arc, an inside observer would agree. Blowing away the specifics, what you really learned through iteration and paying attention … estimation and cadence. You learned how to look at a situation and take a pretty good swing at how much work was involved and the steps you need to take along the way. The other thing: how, in the moment, to prioritise – so you could fit this piece of work in alongside all your other commitments. Your next job: help your team work this stuff out too. Skippy strategy: It takes time to work out how to manage time. Get a daily nudge by subscribing to email updates.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510967.73/warc/CC-MAIN-20231002033129-20231002063129-00706.warc.gz
CC-MAIN-2023-40
891
6
http://dirtdirectory.org/tags/map
code
DH Press (originally called diPH) is a toolkit conceived as an easy-to-use WordPress plugin which allows potentially every kind of user to visualise and mash up historic and geographic information, documents and various types of multimedia content to develop digital humanities projects. A key benefit is that once a developer has loaded Mapstraction, s/he can switch from one map API to another quickly and easily - often only needing to change a few lines of code. Mapstraction displays points, lines, polygons and markers on the maps, and also allows developers to add base maps and overlays. SepiaTown is a cultural history project that aims to provide a window to the past by merging photography, geography, and technology, acting as a forum for institutions and individuals to share and map historical images. Once they have signed up, users may upload historic images to the site individually or in batches. Each image is given a title, description and keywords, as well as a spatial location represented by a marker on a map.
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049288709.66/warc/CC-MAIN-20160524002128-00205-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
1,037
5
https://community.frontity.org/t/cannot-read-properties-of-undefined-reading-link/6217
code
Hi folks. I’m going through the official tutorial and got stuck at adding CPT support. I have tweaked the Frontity settings and added support for the destinations CPT. Then the tutorial says: Frontity now knows about this CPT and will just work with it. Try it! Enter localhost:3000/destinations into your browser’s address bar and you should see a listing of our favourite travel destinations. Click on one and it displays using the…
So I have entered localhost:3000/destinations into the browser’s address bar and suddenly stumbled upon this text: Cannot read properties of undefined (reading 'link')
Meanwhile, in the console there is an error: Failed to load resource: the server responded with a status of 500 (Internal Server Error)
Please let me know what I am missing here.
My system info:
## System:
- OS: Linux 5.4 Linux Mint 20.3 (Una)
- CPU: (8) x64 Intel(R) Core(TM) i5-10300H CPU @ 2.50GHz
- Memory: 10.97 GB / 15.35 GB
- Shell: 5.0.17 - /bin/bash
## Binaries:
- Node: 16.15.1 - ~/.nvm/versions/node/v16.15.1/bin/node
- npm: 8.11.0 - ~/.nvm/versions/node/v16.15.1/bin/npm
## Browsers:
- Chrome: 103.0.5060.134
- Firefox: 102.0.1
## npmPackages:
- @frontity/core: ^1.16.0 => 1.16.0
- @frontity/html2react: ^1.7.0 => 1.7.0
- @frontity/mars-theme: ./packages/mars-theme => 1.6.2
- @frontity/tiny-router: ^1.4.4 => 1.4.4
- @frontity/wp-source: ^1.11.7 => 1.11.7
- dayjs: ^1.11.4 => 1.11.4
- frontity: ^1.17.2 => 1.17.2
- my-first-theme: file:packages/my-first-theme => 1.0.0
## npmGlobalPackages:
- frontity: Not Found
- npx: Not Found
link to repo
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00310.warc.gz
CC-MAIN-2022-33
1,542
13
https://career.luxoft.com/job-opportunities/?countryID%5B0%5D=776&arrFilter_pf%5Bcities%5D%5B0%5D=39445&set_filter=Y
code
|Specialization|Position / Title|Location|
|---|---|---|
|Agile|Senior Agile Coach|Muenchen, DE|
|Software - Other|Navigation Practice Head|Muenchen, DE|
|QML (Qt)|Experienced Qt graphics, rendering and systems developer|Muenchen, DE|
|Software - Other|System Test Practice Head|Muenchen, DE|
|QA automation|Senior Agile Test Engineer|Muenchen, DE|
|Engineering Program Management|Program manager for leading projects/programs in autonomous drive|Muenchen, DE|
|Engineering Project Management|Project Manager|Muenchen, DE|
|QML (Qt)|Experienced Qt and Qt Quick developer with passion for design|Muenchen, DE|
|Engineering Program Management|Program Manager for leading Projects/Programs in Automotive Connected Mobility|Muenchen, DE|
Hot jobs for a Referral Bonus
Jump at the Opportunity to Grow!
When you mix passion and excellence, you will enjoy coming to work each day and success will follow easily. We love our work, and that’s why we create software developer jobs that give you opportunities to improve your IT skills. For Luxoft, innovation is the key to success, and it’s why we consider ourselves the next generation of solution providers. Why not join one of the TOP companies for software developers?
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583716358.66/warc/CC-MAIN-20190120123138-20190120145138-00489.warc.gz
CC-MAIN-2019-04
1,261
13
http://www.rage3d.com/articles/gaming/zeno_clash_2_tech_review/
code
ACE Team, hailing from Chile, released their terribly unique first-person brawler / adventure game Zeno Clash a few years ago, and it proved popular among independent game lovers. Released this week on PC is Zeno Clash 2, a follow-up that brings improved combat, RPG elements, and drop-in/out co-op. If you're curious how the game fares on PC, read on for a light technical analysis.
Processor: Intel Core i5 3570K 3.4GHz (Stock)
Memory: Corsair XMS3 8GB DDR3-1600 (1333 MHz)
Storage: Western Digital Caviar Black 640GB 3.5" 7200RPM / OCZ Agility 3 6Gb/s (Cache)
Video Card: EVGA GeForce GTX 680 2GB (Stock)
Input: Logitech G400 mouse @ 1900dpi, Leopold Tenkeyless Linear Touch Mechanical Keyboard
OS: Windows XP, Windows Vista, Windows 7
Processor: Intel Core 2 Duo 2.4 GHz or AMD Athlon X2 4800+
Memory: 2 GB
Video Card: ATI 3850HD 512 MB or NVIDIA GeForce 8800 GT 512MB
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122886.86/warc/CC-MAIN-20170423031202-00159-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
872
11
https://zenithtrove.in/products/6861_steel_water_bottle_1000ml
code
6861 Stainless Steel Water Bottle, Fridge Water Bottle, Stainless Steel Water Bottle Leak Proof, Rust Proof, Hot & Cold Drinks, Gym Sipper BPA Free Food Grade Quality Silver Color, Steel Fridge Bottle for Office/Gym/School 1000ml
Ideal Usage: It is used to keep water in the fridge, but apart from that you can use it anywhere, such as the office, home, or kitchen. Easy to carry in your backpack and also in your handbag when you travel.
Care Instructions: When not in use, it is suggested that the lid be kept open to avoid odor developing. Use a mild detergent to clean it, and do not use it to store carbonated drinks.
Special Feature: This bottle is specially designed with food-grade stainless steel, which is rust proof and BPA free.
Design: 1000ml Stainless Steel Water Bottle (Pack of 1), Wide Mouth, Leak Proof, Rust Proof Bottle.
PREMIUM MATERIAL: The water bottles are made of high-quality food-grade 18/8 stainless steel, with a durable, rust-proof and sweat-free design; don't worry about breaking them. You can pour in any drink you like, such as coffee, wine, champagne, cocktails, juices, sodas or any other drink you like.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818999.68/warc/CC-MAIN-20240424014618-20240424044618-00422.warc.gz
CC-MAIN-2024-18
1,114
6
https://community.ivanti.com/thread/19281
code
We are setting up our development project, with the aim of going live with LDSD in 2-3 months' time. We have recently set up separate Live, Test and Development versions of LDSD. We would like to be able to quickly identify which instance we are currently using. We are interested in learning how others have tackled this problem. Some ideas that we have had are listed below:
a) Change the login screen for the console and the portal - either add large text such as "(Test Version)" or "(Development Version)", or change the background colour of the login.
b) Change other screens similarly - but how many screens do we need to change, and what kind of changes work best?
This is as much a psychological question as a technical question, so all replies gratefully received!
University of Cambridge
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203991.44/warc/CC-MAIN-20190325133117-20190325155117-00230.warc.gz
CC-MAIN-2019-13
798
7
https://java.net/jira/browse/WSIT-1616?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel
code
1) From SOAP 1.1 spec:
4.2.3 SOAP mustUnderstand Attribute
The SOAP mustUnderstand global attribute can be used to indicate whether a header entry is mandatory or optional for the recipient to process. The recipient of a header entry is defined by the SOAP actor attribute (see section 4.2.2). The value of the mustUnderstand attribute is either "1" or "0". The absence of the SOAP mustUnderstand attribute is semantically equivalent to its presence with the value "0".
2) From SOAP 1.2 spec:
5.2.3 SOAP mustUnderstand Attribute
The SOAP mustUnderstand attribute information item is used to indicate whether the processing of a SOAP header block is mandatory or optional (see 2.4 Understanding SOAP Header Blocks). The mustUnderstand attribute information item has the following XML infoset properties:
- A [local name] of mustUnderstand.
- A [namespace name] of "http://www.w3.org/2003/05/soap-envelope".
- A [specified] property with a value of "true".
The type of the mustUnderstand attribute information item is xs:boolean. Omitting this attribute information item is defined as being semantically equivalent to including it with a value of "false". SOAP senders SHOULD NOT generate, but SOAP receivers MUST accept, the SOAP mustUnderstand attribute information item with a value of "false" or "0". If generating a SOAP mustUnderstand attribute information item, a SOAP sender SHOULD use the canonical representation "true" of the attribute value (see XML Schema [XML Schema Part 2]). A SOAP receiver MUST accept any valid lexical representation of the attribute value. If relaying the message, a SOAP intermediary MAY substitute "true" for the value "1", or "false" for "0". In addition, a SOAP intermediary MAY omit a SOAP mustUnderstand attribute information item if its value is "false" (see 2.7 Relaying SOAP Messages). A SOAP sender generating a SOAP message SHOULD use the mustUnderstand attribute information item only on SOAP header blocks. A SOAP receiver MUST ignore this attribute information item if it appears on descendants of a SOAP header block or on a SOAP body child element information item (or its descendants).
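For context, here is a small sketch that builds a SOAP 1.2 envelope whose header block carries the attribute with its canonical value "true"; the header block itself is a made-up example:

```python
# Builds a minimal SOAP 1.2 envelope whose (made-up) header block carries
# env:mustUnderstand="true" - the canonical form per section 5.2.3 above.
import xml.etree.ElementTree as ET

ENV = "http://www.w3.org/2003/05/soap-envelope"
ET.register_namespace("env", ENV)

envelope = ET.Element(f"{{{ENV}}}Envelope")
header = ET.SubElement(envelope, f"{{{ENV}}}Header")

block = ET.SubElement(header, "ExampleBlock")      # any application block
block.set(f"{{{ENV}}}mustUnderstand", "true")      # "true", not "1"

ET.SubElement(envelope, f"{{{ENV}}}Body")
print(ET.tostring(envelope, encoding="unicode"))
```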
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720737.84/warc/CC-MAIN-20161020183840-00179-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
2,130
16
https://resourcefulman.net/2015/05/26/turn-tracking-protection-on-in-firefox-to-load-pages-44-faster/
code
Turn Tracking Protection on in Firefox to Load Pages 44% Faster
Even if you don’t care about the privacy implications of tracking cookies and other technologies that sites use to identify us on the Internet, you can still turn on Tracking Protection in Firefox to potentially improve speed significantly. Former Mozilla software engineer Monica Chew and computer science researcher Georgios Kontaxis reviewed the top 200 news sites (as measured by Alexa) and found an average 44% reduction in page load times and a 39% reduction in data usage when Tracking Protection was enabled. Tracking Protection actively blocks domains known to track users. You may not see huge performance benefits across all sites, depending on how much each site relies on third-party content and similar add-ons from tracking domains. However, with a range of 20% to 90% faster page load times - and better privacy controls - it’s worth a try, according to the research.
To enable Tracking Protection in Firefox:
- Type about:config in the address bar and press Enter.
- You will see a warning that changing these settings might void your warranty. Hit “I’ll be careful, I promise!” to continue.
- Search for privacy.trackingprotection.enabled.
- Double-click on it to change the value to true.
You can read the researchers’ article (PDF) here.
Tracking Protection for Firefox at Web 2.0 Security and Privacy 2015 | Monica at Mozilla via Venture Beat and Boing Boing
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710488.2/warc/CC-MAIN-20221128070816-20221128100816-00074.warc.gz
CC-MAIN-2022-49
1,446
11
https://aronlaszka.com/author/taha-eghtesad/
code
Taha Eghtesad joined the department of Computer Science at the University of Houston as a Ph.D. student in Fall 2018. He received a B.S. degree in Computer Engineering with a minor in Software Engineering from Shahid Beheshti University, Tehran, Iran. Currently, he is working as a research assistant under the supervision of Dr. Aron Laszka. He is interested in working at the intersection of machine learning, game theory, cyber-physical systems, and systems security. Specifically, he aims to build more secure cyber-physical systems using artificial intelligence and machine learning.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510225.44/warc/CC-MAIN-20230926211344-20230927001344-00859.warc.gz
CC-MAIN-2023-40
581
1
http://hmnkonsult.se/diagram/fuse-diagram-for-2004-xc90.html
code
What is a UML diagram? UML is a way of visualizing a software system using a collection of diagrams. The notation has evolved from the work of Grady Booch, James Rumbaugh, Ivar Jacobson, and the Rational Software Corporation for use in object-oriented design, but it has since been extended to cover a wider variety of software engineering projects. Today, UML is accepted by the Object Management Group (OMG) as the standard for modeling software development.
UML 2.x brought improved integration between structural models like class diagrams and behavior models like activity diagrams, and added the ability to define a hierarchy and decompose a software system into components and sub-components. The original UML specified nine diagrams; UML 2.x brings that number up to thirteen. The four new diagrams are called: communication diagram, composite structure diagram, interaction overview diagram, and timing diagram. It also renamed statechart diagrams to state machine diagrams, also known as state diagrams.
UML Diagram Tutorial
The key to making a UML diagram is connecting shapes that represent an object or class with other shapes to illustrate relationships and the flow of data and information.
Types of UML Diagrams
The current UML standards call for thirteen different types of diagrams: class, activity, object, use case, sequence, package, state, component, communication, composite structure, interaction overview, timing, and deployment. These diagrams are organized into two distinct groups: structural diagrams and behavioral (or interaction) diagrams. Structural UML diagrams include the class, package, object, component, composite structure, and deployment diagrams; behavioral UML diagrams include the use case, activity, sequence, state, communication, interaction overview, and timing diagrams.
Class Diagram: Class diagrams are the backbone of almost every object-oriented method, including UML. They describe the static structure of a system.
Package Diagram: Package diagrams are a subset of class diagrams, but developers sometimes treat them as a separate technique. Package diagrams organize the elements of a system into related groups to minimize dependencies between packages.
Object Diagram: Object diagrams describe the static structure of a system at a particular time. They can be used to test class diagrams for accuracy.
Composite Structure Diagram: Composite structure diagrams show the internal structure of a class.
Use Case Diagram: Use case diagrams model the functionality of a system using actors and use cases.
Activity Diagram: Activity diagrams illustrate the dynamic nature of a system by modeling the flow of control from activity to activity. An activity represents an operation on some class in the system that results in a change in the state of the system. Typically, activity diagrams are used to model workflow or business processes and internal operation.
Sequence Diagram: Sequence diagrams describe interactions among classes in terms of an exchange of messages over time.
Interaction Overview Diagram: Interaction overview diagrams are a combination of activity and sequence diagrams.
They model a sequence of actions and let you deconstruct more complex interactions into manageable occurrences. You use the same notation on interaction overview diagrams that you would see on an activity diagram.
Timing Diagram: A timing diagram is a type of behavioral or interaction UML diagram that focuses on processes that take place during a specific period of time. Timing diagrams are a special instance of a sequence diagram, except that time is shown to increase from left to right instead of top down.
Communication Diagram: Communication diagrams model the interactions between objects in sequence. They describe both the static structure and the dynamic behavior of a system. In many ways, a communication diagram is a simplified version of the collaboration diagram introduced in UML 2.0.
State Diagram: Statechart diagrams, now known as state machine diagrams or simply state diagrams, describe the dynamic behavior of a system in response to external stimuli. State diagrams are especially useful in modeling reactive objects whose states are triggered by specific events.
Component Diagram: Component diagrams describe the organization of physical software components, including source code, run-time (binary) code, and executables.
Deployment Diagram: Deployment diagrams depict the physical resources in a system, including nodes, components, and connections.
UML Diagram Symbols
There are many different types of UML diagrams, and each has a somewhat different symbol set. Class diagrams are perhaps one of the most commonly used UML diagrams, and class diagram symbols center on defining the attributes of a class. For example, there are symbols for active classes and interfaces. A class symbol can also be divided to show a class's operations, attributes, and responsibilities.
Visualizing user interactions, processes, and the structure of the system you're trying to build can help save time down the line and make sure everyone on the team is on the same page.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500426.22/warc/CC-MAIN-20200331084941-20200331114941-00415.warc.gz
CC-MAIN-2020-16
5,641
48
http://wrdingham.co.uk/cybalist/msg/174/93.html
code
--- In firstname.lastname@example.org , João Simões Lopes Filho
> What is the origin of Medieval dragons? These dragons are like giant snakes or lizards, but with claws, ears, horns, like a composite animal. Greek Draco:n (<derk- "to see"), like Kadmos' foe or Python or Ladon, was a giant snake. Medieval dragons' heads sometimes recall a lion, horse or dog. This dog/snake trait is present in some Greek monsters like Kerberos, Orthros and Hydra (described as having a dog-body). Chinese dragons are thought to be fantastic depictions of South Asian crocodiles.
> Why did dragons become so popular in Medieval Europe? Oriental origin?
> Joao SL
I've been puzzling over this one and I haven't come up with an answer that I like much. The dragon is such a wide-spread concept that it doesn't surprise me that it is hard to pin down. I think one possibility could be Mesopotamian and Anatolian composite monsters. For instance, what about the chimera? Lion/Snake/Goat, fire-breathing, slain by a hero. The myth may be Greek as we know it, but the Chimera lived in Anatolian Lycia. Of course, we don't see a lot of goat characteristics in dragons (possibly the horn), and the goat component seemed to be pretty important in the case of the Chimera (I have heard that an etymology of chimera is she-goat. Is this believable?). Bellerophon killed the chimera by shooting it with metal arrows that melted in its mouth, a familiar element in dragon stories. As far as Greek myth goes, Ladon and the dragon that Jason killed guarded treasure, another Medieval motif. Interesting to note that the treasure was in a tree - recalling the Midgard serpent and Genesis. Perhaps an eclectic blend of Greek and Near Eastern motifs?
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510352.43/warc/CC-MAIN-20200403061648-20200403091648-00127.warc.gz
CC-MAIN-2020-16
1,699
28
http://jabsto.com/Tutorial/topic-84/SharePoint-2013-48.html
code
Figure 1-38. SSAS Data Source View Wizard
To save the data source view, provide a valid name (AW Sales Data View) and click Finish. Notice that Design View now provides a database diagram showing all the selected tables and the relations between each of them. Now you can create either a dimension or a cube, but it’s always preferable to create dimensions first and then the cube, because the cube needs the dimensions to be ready. Right-click Dimensions and choose New Dimension, which brings up the Dimension Wizard. Click Next. On the Select Creation Method screen, choose “Use an existing table” to create the dimension source, and then click Next. On the next screen, choose the available data source view. For the Main table, select Dim_DateTime and then click Next. The next screen displays the available attributes and types (see Figure 1-39). Notice the attribute types that are labeled “Regular.” By default, only the primary keys of the tables are selected in the attributes for each dimension. You need to manually select the other attributes. This step is important because these are the values you query in MDX along with the available measures.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583515564.94/warc/CC-MAIN-20181023002817-20181023024317-00248.warc.gz
CC-MAIN-2018-43
1,235
19
https://www.probir.info/home
code
Email: probirr at umich dot edu
Office: 230 CIS, 4901 Evergreen Road, Dearborn, MI 48128
[CV] (Last update: 08/30/2019)
My research interests include program analysis, high-performance computing, and operating systems. I develop tools and techniques to improve software performance and developer productivity. "I am looking for self-motivated students who are interested in systems research. Please send me an email."
- Our proposal on "Towards Efficient Cloud Services" has been funded! Thanks, NSF!!
[CGO'18] "Lightweight Detection of Cache Conflicts", Probir Roy, Shuaiwen Leon Song, Sriram Krishnamoorthy and Xu Liu, The 2018 International Symposium on Code Generation and Optimization, Feb 24-28, 2018, Vienna, Austria. Acceptance ratio: 28%.
[TACO'18] "NUMA-Caffe: NUMA-Aware Deep Learning Neural Networks", Probir Roy, Shuaiwen Leon Song, Sriram Krishnamoorthy, Abhinav Vishnu, Dipanjan Sengupta, Xu Liu, ACM Transactions on Architecture and Code Optimization, 2018.
[TPDS'18] "LWPTool: A Lightweight Profiler to Guide Data Layout Optimization", Chao Yu, Probir Roy, Yuebin Bai, Hailong Yang, Xu Liu, IEEE Transactions on Parallel and Distributed Systems, 2018.
[HPDC'16] "SMT-Aware Instantaneous Footprint Optimization", Probir Roy, Xu Liu and Shuaiwen Leon Song, The 25th ACM International Symposium on High-Performance and Distributed Computing, May 31 - Jun 4, 2016, Kyoto, Japan. Acceptance ratio: 15.5% (20/129).
[CGO'16] "StructSlim: A Lightweight Profiler to Guide Structure Splitting", Probir Roy and Xu Liu, The 2016 International Symposium on Code Generation and Optimization, Mar 12-18, 2016, Barcelona, Spain. Acceptance ratio: 23%.
CIS/ECE 578 Advanced Operating System [Winter 2020]
CIS 310 Computer Organization & Assembly Language [Fall 2020] [Fall 2019]
- Program Committee Member:
- Journal reviewer:
- Conference sub-reviewer: ICPADS, ICPP, HIPS, CGO
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735812.88/warc/CC-MAIN-20200803140840-20200803170840-00271.warc.gz
CC-MAIN-2020-34
1,880
16
http://themillennial-y.com/sg-lewis-puts-another-single-titled-coming/
code
Last Friday SG Lewis teased us with another single, titled “Coming Up”, off his upcoming three-part album. Its production immediately captivates you with its enchanting synth loops. It was announced that the first part of this three-part album, Dusk, is available for pre-order now. For now, all we have is the lyric video to this beautiful tune. Have a listen below. SG Lewis – Coming Up (Lyric Video)
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.86/warc/CC-MAIN-20190320190324-20190320212105-00072.warc.gz
CC-MAIN-2019-13
557
3
https://icssindia.in/hack-using-scapy-packet-crafting-tool/
code
How to Hack: Using Scapy Packet Crafting Tools
Scapy is a tool that enables the user to craft, sniff and forge network packets. In other words, it is a powerful interactive packet manipulation tool written in Python by Philippe Biondi. It can easily handle most tasks like scanning, tracerouting, probing, attacks or network discovery on a network. It can replace hping, arpspoof, arping, and even some parts of Nmap, tcpdump, and tshark. Scapy mainly operates in two scenarios: sending packets and receiving packets. You will get an interactive session when you run the command in a terminal. Now let’s create a packet via the Scapy tool in the terminal.
In Fig. 3, a.show() is used to show the fields of the packet. Now, let’s manipulate the packet. Scapy tries to use sensible default values for all packet fields. If not overridden:
- IP source is chosen according to destination and routing table
- Checksum is computed
- Source MAC is chosen according to the output interface
- Ethernet type and IP protocol are determined by the upper layer
Other fields’ default values are chosen to be the most useful ones:
- The TCP source port is 20, the destination port is 80.
- UDP source and destination ports are 53.
- ICMP type is echo request.
Now, to check that all the fields are set, we can give the command as shown in Fig. 5. Now that we know how to manipulate a packet, let’s see how to send one. The send() function will send the packets, as shown in Fig. 6. To send a packet more than once you can give the command as shown in Fig. 7. As you can see, the packet has been crafted and now we can send it. From the above figures, you can see the results that we have got. For more in-depth information on Scapy you can also refer to the documentation by Philippe Biondi. We can do a lot using Scapy functions and modules.
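Since the figures themselves are not reproduced here, the following sketch shows roughly what the commands in them look like; the destination address is a placeholder:

```python
# Roughly what the figures illustrate: craft an IP/TCP packet, inspect its
# fields, then send it (sending raw packets typically requires root).
from scapy.all import IP, TCP, send

a = IP(dst="192.0.2.10") / TCP(dport=80)  # other fields take sane defaults
a.show()                                  # dump every field, as in Fig. 3

send(a)            # send the packet once, as in Fig. 6
send(a, count=3)   # send the same packet three times, as in Fig. 7
```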
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057479.26/warc/CC-MAIN-20210923225758-20210924015758-00466.warc.gz
CC-MAIN-2021-39
1,848
22
https://www.freelancer.pk/projects/graphic-design/html-css-animation-senior-designer/
code
Looking for experts only who can make custom changes to the theme. Must be able to create a completely responsive site, perfectly aligned on all screen sizes. The work is very urgent and needs to start immediately. Start your application with the word design. The theme used for the website will be: [login to view URL]
16 freelancers are bidding on average ₹7057 for this job
Design. I saw the theme and can create a completely responsive site perfectly aligned on all screen sizes. Be rest assured that I will provide you a quality design. Let's discuss and get started.
I have a team of experienced developers and a long list of satisfied clients... offering you a very cheap rate with sure results... so please give us a chance to start our business relationship.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527396.78/warc/CC-MAIN-20190721225759-20190722011759-00549.warc.gz
CC-MAIN-2019-30
742
7
https://docs.infor.com/help_m3_coreit_13.4/topic/com.lawson.help.installa/com.infor.help.m3coreig_ibmi_13.4.0/xqb1493403311431.html
code
Full access to all AD LDS partitions created during LifeCycle Manager server installation is by default restricted to the user who installed the LifeCycle Manager server. Therefore, by default, only this user has permission to create new users and groups in the LCMADAM instance. If for some reason the original install user can't log into the server, the LCMADAM instance can't be managed. Therefore, Infor recommends that you grant this permission to other users.
Adding User Management Permissions to the LCMADAM Instance
Once this is done, the members of the groups/users added to the Configuration Naming Context Administrators role can manage the AD LDS LCMADAM instance.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401583556.73/warc/CC-MAIN-20200928010415-20200928040415-00003.warc.gz
CC-MAIN-2020-40
660
4
https://www.motorhomefun.co.uk/forum/threads/avatar.4593/
code
Hi Graham, sorry you are having problems. This non-animation problem is normally because the file is too big. But we just upped the allowable size, so unless the file is very large it should work. Bryan is good at shoe-horning them in; I will ask him if he can have a look. I just right-clicked and looked at the properties of your avatar and it is tiny. I think you are uploading just one frame of the animation rather than the whole file. It's worth checking that while we are waiting for Bryan, our avatar expert, to show.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571847.45/warc/CC-MAIN-20220812230927-20220813020927-00454.warc.gz
CC-MAIN-2022-33
517
2
http://www.fermanaghomagh.com/your-council/open-data/
code
Open data is about increasing transparency and sharing the information that we hold. It is machine-readable structured data, that can be freely shared, used and built on. The Open Data Charter, which was agreed at the G8 summit held in Fermanagh in June 2013, promotes the publication of public data as open data. In Northern Ireland, the promoting and enabling of open data publications, has been driven by the ‘Open Data Strategy for Northern Ireland 2015-18’ which was endorsed by the Northern Ireland Executive and published in February 2015. The data on this page is published under the Open Government Licence and is available for you to re-use as you wish, including for commercial and research activities. Our Open Data To view these datasets click on the links below: - Bowling Pavilions - Car Park locations - Car Park Tariffs - Community Centres - Leisure Centres - Recycling Banks - Recycling Centres - Sports Pitches - Senior Officer Salaries - Tree Preservation Orders Other sources of Open Data - You can view all of our datasets and a wide range of datasets from across the public sector in Northern Ireland on the open data portal OpenDataNI. - The Detail Data Portal is a publicly-accessible data catalogue for open data from voluntary and community, public, private and academic sectors in Northern Ireland. The portal also hosts a broad range of blogs and features showcasing the use of open data in Northern Ireland.
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804019.50/warc/CC-MAIN-20171117223659-20171118003659-00575.warc.gz
CC-MAIN-2017-47
1,441
18
https://wiki.eclipse.org/HBX_Screen_Scrape
code
HBX Screen Scrape
HBX has a very basic kind of screen scraping capability. If a Relying Party Agent (site) page follows certain HTML conventions, and if the Higgins server supporting HBX happens to have a "form map" for a dummy <form> element that is used to identify the page, and if the schema of the Higgins Context associated with the RPA site happens to contain the properties for the fields in the form, then HBX can "capture" or "scrape" data from the page and store it (overwriting current values) as the values of the appropriate properties of the Context.
When HBX requests a form map from the Higgins server, it identifies the form map by concatenating:
- host, the host site
- name, the name attribute of the form
- id = "rpformcapture"
The page content to be scraped must be contained within a dummy <form>...</form> structure. The form MUST have an id attribute whose value MUST be "rpformcapture". The form MUST have a name attribute; its value is used to identify the block of content that is to be scraped (captured on the broker server).
- <form name="idmashup_profile" id="rpformcapture">
Every individual element within the form MUST have an id attribute. For example:
- <a id="existKeywords"
- Incorrect: <label>City: </label>Washington
- Correct: <label>City: </label><span id="city">Washington</span>
The tags used in the above examples are not important; all that matters is that the tag have an id attribute.
See also HBX Form Fill
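A sketch of the form-map lookup key described above; note that the wiki doesn't specify the separator or exact ordering, so the "." join here is an assumption:

```python
# Sketch of the form-map identifier described above: host + form name +
# the fixed id. The "." separator is an assumption - the wiki only says
# the three parts are concatenated.
def form_map_key(host, form_name):
    return ".".join([host, form_name, "rpformcapture"])

print(form_map_key("example.org", "idmashup_profile"))
# -> example.org.idmashup_profile.rpformcapture
```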
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00582.warc.gz
CC-MAIN-2022-40
1,435
14
https://oemdrivers.com/rs232-tera-term
code
Tera Term is an open source Windows terminal emulator that enables port connections to Telnet and SSH enabled devices. As far as drivers go, if you are connecting to a physical device via RS232 with a USB cable, there may be some drivers required. Here are the pages for specific USB to RS232 devices; many use the same chipset, so check the device ID in the Device Manager.
Prolific USB to Serial Driver Windows 10
Keyspan USA-19HS USB to Serial Adapter Driver
Also, there is a download below for the latest version of Tera Term, which is 4.1.
- Serial port connections over UART.
- TCP/IP (telnet, SSH1, SSH2) connections.
- Log replaying.
- Named pipe connection.
- IPv6 communication.
- VT100 emulation and selected VT200/VT300 emulation.
- Tek4010 emulation.
- File transfer protocols (Kermit, XMODEM, YMODEM, ZMODEM, B-PLUS and Quick-VAN).
- Scripts using the "Tera Term Language".
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00548.warc.gz
CC-MAIN-2023-14
875
14
http://www.taoofgini.com/2007/12/happiest-place-on-earth.html
code
So, no, I am not talking about my home - Yes, at times, it is pretty happy around here. But I am talking about the famous "Happiest Place on Earth". Disney World. We have had the fortunate opportunity twice to visit with the kids and have THE best vacation ever. It truly is a place to forget all that is going on in your life, let your hair down, and just have fun!!! I don't know of another place like it. I'm not so HAPPY about it right now, though. Why? Because Bill is there right now for work. WORK. Yup, just talked to him and he was getting up around 10:15 his time. (ET) Taking his time getting showered to get ready for GOLF. After golf, he's hangin' out, until... Yup, I said work. Here is what he has sent us - Day 1 of his trip: a bit blurry - but you get the picture. We sure did! Oh, how about rubbing it in?? We are truly jealous, my love.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525829.33/warc/CC-MAIN-20190718211312-20190718233312-00322.warc.gz
CC-MAIN-2019-30
855
9
https://techcommunity.microsoft.com/t5/sharepoint/spo-list-lookup-value-cannot-be-edited-in-linked-access-table/td-p/1623413
code
I have a list in SharePoint Online with a lookup table. The list, including the lookup table, works fine in SPO. The list is also linked to Access on my desktop. Access links not only to the list containing the lookup field but also to the list containing the values. The values in the lookup field cannot be edited in Access. The Design View option to edit the values in a lookup field is not available; a help popup notes that the design of the lookup field cannot be changed (to allow or disallow editing) in a linked table. Is that the end of the story? Is there no way to be able to change values from the choices in the dropdown menu, which exist in Access as in SPO?
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363598.57/warc/CC-MAIN-20211208205849-20211208235849-00402.warc.gz
CC-MAIN-2021-49
673
1
http://hypebeast.com/forums/off-topic/134555/page/6?topic_page=123
code
QuoteOriginally posted by DandyDan swag on my dick http://crownmeking.tumblr.com/i refollow <3 Kayv: pull bitches, not triggers. futurebass : my mom said life is like a box of kimchi. chopzz.tumblr.com QuoteOriginally posted by zilla fuck tumblr QuoteOriginally posted by Food michaelbayday.tumblr.comthe tumblrs purpose suits my name... Diamond Supply Co., Crooks & Castles, and more (XL-3XL) http://hypebeast.com/forums/apparel/136442/ | http://www.flickr.com/photos/hodrick QuoteOriginally posted by filth http://filthavenue.tumblr.com/ QuoteOriginally posted by ANTHONEEE Goddamn everyone has a tumblr. Idk alot of whats posted on tumblr's seem repetitive and shit.
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540915.89/warc/CC-MAIN-20161202170900-00440-ip-10-31-129-80.ec2.internal.warc.gz
CC-MAIN-2016-50
728
8
https://www.pulsar-neighborhood.io/articles/apache-pulsar-multi-tenancy-explained/
code
Multi-tenancy is a software architecture that allows a single instance of running software to serve multiple tenants, customers, or teams within an organization. In this context, a tenant represents a group of users, and a multi-tenant software platform represents multiple users of an organization — or several organizations — sharing the system resources of a common software platform. Thus, multi-tenancy falls under the shared design principle. In a multi-tenancy architecture, even though infrastructure resources are shared, each tenant can have their own policies defined to establish a specific governance strategy. Tenants can also be isolated from each other to meet metrics like service level agreements (SLAs) or security requirements. Apache Pulsar was built from scratch with multi-tenancy as a founding principle. To manage multi-tenancy aspects within a Pulsar instance, Pulsar supports a concept called tenants. In this article, we will cover:
- How multi-tenancy oriented aspects are implemented in Pulsar
- How organizations can use the multi-tenancy feature of Apache Pulsar
- Use cases that would benefit from multi-tenancy
- Benefits of adopting multi-tenancy in an Apache Pulsar instance
Deep Dive into Multi-Tenancy
In Apache Pulsar, an instance can have multiple clusters. Each cluster consists of several computing resources and storage media. In the multi-tenancy hierarchy, a tenant — which can span multiple clusters in an Apache Pulsar instance — forms the topmost level. Next to the tenant is the namespace. Together, the tenant and the namespace form the concept of multi-tenancy in Apache Pulsar. To help understand the concept of multi-tenancy in Apache Pulsar, take a look at the following diagram.
- All Cluster 1 components are purple and Cluster 2 components are green.
- All components/resources related to Tenant 1 are bolded.
- Tenant 2-related components are in default format.
Let’s say there’s an organization that has decided to use Apache Pulsar for its event processing needs. This organization consists of several teams, each of which has ownership of a different application serving its business goals. With the multi-tenancy capability of Apache Pulsar, they can save costs, optimize resource utilization, and gain productivity through reduced administration tasks. At the same time, they don’t need to lose flexibility in controlling tenant-based policies and security governance because of the architecture. With Apache Pulsar’s multi-tenant capabilities, the architecture can meet these requirements. So, how is this done in Apache Pulsar? One approach is that the Pulsar administrator creates tenants for each application and configures the resource capacity, along with a specific authentication and authorization schema, for each tenant. For instance, Tenant-1 could use an OAuth2-based authorization mechanism, while Tenant-2 uses a Kerberos-based authorization mechanism. Also, Pulsar provides flexibility to the admin so that among the “N” clusters in a Pulsar instance, only “M” clusters carry a specific, tenant-based policy. The remaining “N-M” clusters can have a different policy setup. Within each application, the tenant admin can create namespaces representing the administrative aspect of a tenant. If an application represents a tenant, modules within the application form a namespace. For instance, if an e-commerce application is a tenant, then a shopping cart can be one namespace, product inventory another, and so on.
Underneath the namespaces in Apache Pulsar are the topics. All the configuration policies set at the namespace level also apply to its topics. All tenants, namespaces, and their topics are identified in Apache Pulsar by name.
Create Tenant, Namespace, and Topic Using Pulsar Admin
Here are some examples of how to create these entities in Apache Pulsar through the command-line utility, pulsar-admin (the tenant, namespace, and topic names below are illustrative). Note: Similar admin operations can be performed by using the Admin APIs of Apache Pulsar.
Command to create a tenant:
pulsar-admin tenants create tenant-1
Command to create a namespace in tenant-1:
pulsar-admin namespaces create tenant-1/namespace-1
Command to create a topic in tenant-1/namespace-1:
pulsar-admin topics create persistent://tenant-1/namespace-1/topic-1
By default, Apache Pulsar has an out-of-the-box tenant named public, and namespaces are named default within that default public tenant. Also, in Pulsar it’s not necessary to create a topic in advance, as Pulsar can create the topic dynamically.
Policies and Resource Quotas
When it comes to controlling resource usage or establishing some control over a namespace, you can explore and set different policies and resource quotas in Apache Pulsar. You can set policies and establish control over backlog quotas, time-to-live settings for messages, retention periods, the dispatch rate of messages from topics in a namespace, and so on, at the namespace level.
To get the default policy set on a namespace, execute the command below:
pulsar-admin namespaces policies tenant-1/namespace-1
Let’s say you need to limit backlog storage to 10 GB in tenant-1. You can issue the following backlog quota policy command:
pulsar-admin namespaces set-backlog-quota tenant-1/namespace-1 --limit 10G --policy producer_request_hold
Resource quotas provide options to:
- Limit inbound bandwidth and outbound bandwidth in terms of bytes/second for a namespace.
- Limit inbound and outbound message rate (messages/second) for a namespace.
- Limit memory usage (megabytes) for a namespace.
The following command sets the resource quota for a namespace bundle (the values are illustrative), where:
- msgRateIn — number of inbound messages per second
- msgRateOut — number of outbound messages per second
- bandwidthIn — inbound bytes/second
- bandwidthOut — outbound bytes/second
- memory — memory usage in megabytes
pulsar-admin resource-quotas set --namespace tenant-1/namespace-1 --bundle 0x00000000_0xffffffff --msgRateIn 1000 --msgRateOut 2000 --bandwidthIn 1000000 --bandwidthOut 2000000 --memory 100
Benefits of Multi-Tenancy
With platforms based on a multi-tenant architecture, infrastructure resources are shared. That means the number of instances required to operate is reduced — along with the associated cost factor. There are also fewer administration activities such as OS-level patching, fewer application software or antivirus upgrades, and fewer monitoring agents that need to be deployed. These are some of the key advantages of a multi-tenancy platform compared to a dedicated tenant or non-multi-tenancy platform. Without a multi-tenancy approach, you must deal with an isolated set of instances for each tenant, leading to a large number of instances, increased cost, and more admin operations.
Use Cases of Multi-Tenancy
For an organization with a SaaS platform, Apache Pulsar’s multi-tenancy capability offers many benefits for optimizing resources and reducing the cost of microservices applications. If you want to achieve a low footprint in infrastructure resources, then multi-tenancy is the way to go. With multi-tenancy platforms such as Pulsar, you can build cloud computing-based application architectures based on containers, RESTful services, and pub-sub patterns. Also, by setting policies and quotas in Apache Pulsar, the SLAs of client applications are met for each tenant. This means that no single tenant can jeopardize the Pulsar instance or the dependent applications’ performance. Apache Pulsar has emerged as a powerful, open-source, distributed messaging and streaming platform.
With its powerful multi-tenancy feature, Pulsar enables enterprises to create and manage their infrastructure according to shared design principles. At the same time, users don't have to compromise on things like data access, data security, topic isolation, scalability, or control of resource usage. Although both computing resources and storage have become cheaper in recent years, they are not free. Every organization and infrastructure team is looking for options to keep the budget under control, so Apache Pulsar's multi-tenancy features are most welcome.
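To see how this hierarchy surfaces on the client side, here is a minimal sketch (not from the original article) that produces and consumes on a topic scoped to a tenant and namespace. It assumes the pulsar-client Python package, a broker reachable at localhost:6650, and the tenant-1/namespace-1/topic-1 names used in the examples above:

import pulsar

# Topic names embed the tenant and namespace:
# persistent://<tenant>/<namespace>/<topic>
client = pulsar.Client('pulsar://localhost:6650')

producer = client.create_producer('persistent://tenant-1/namespace-1/topic-1')
producer.send(b'hello from tenant-1')

consumer = client.subscribe('persistent://tenant-1/namespace-1/topic-1',
                            subscription_name='demo-sub')
msg = consumer.receive()
print(msg.data())
consumer.acknowledge(msg)
client.close()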
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00651.warc.gz
CC-MAIN-2023-06
7,599
49
http://finegamedesign.com/data_mining_article.htm
code
by David Ethan Kennerly
April 22, 2003
Players spend millions of man-hours selecting optimum strategies in a massive multiplayer online game (MMOG). They are getting the best return on investment (ROI) from your MMOG. Are you? In this article, I will show you how data mining can improve game design in general, and then I will present four practical applications:
1. Balance the economy.
2. Catch cheaters.
3. Cut production costs.
4. Increase customer renewal.
Although this article is written for massive multiplayer online games, you will find that most of these techniques can also be adapted to multiplayer and single-player games. I will give several examples using fantasy MMORPG terms, since they are common knowledge. However, these techniques apply to most MMO genres; I have even used them to improve an online trivia game show. But before we learn the techniques, let's understand why data mining is a good tool for these jobs. Because players lie. Player feedback alone provides a poor diagnosis of game design. The picture their verbal feedback paints is not even an approximate guide. It is a distorted portrait of psychological and social forces. Players do not accurately report their own behavior in surveys or customer feedback. They may say one thing but do another instead. For example, Dr. William Rathje, an anthropologist, surveyed the amount of beer people drank in a household and then went through their garbage. The garbage revealed twice as much consumption as the surveys had. This method was more insightful than surveys, which had been the traditional method of data collection. As psychological and social creatures, players (and developers) subconsciously revise their self-reports.
Figure 1. Which gives you the clearest picture of your game? Surveys or logs?
As political creatures, players (and developers) also revise their reports. Players belong to special interest groups, which bias their reports. Political ganging, a human trait, exists in online communities, too. Wherever a MMOG has guilds, classes, or any social organizations, it has special interest groups. The members of these groups put their own group's interests before those of the entire community. Each claims that it is the victim of poor game balance. But the players that actually suffer the most from poor game balance are the most silent. The greatest victims are ending their days in your game in quiet desperation. To many players the time spent online in your game is an investment. They expect their investment to perform well. They become upset if, despite their skill and time commitment, someone who happened to pick the better class, item, or other option in your game surpasses them. Data mining begins with accurate, empirical data. With this the game designer can make informed decisions. He can identify the victims of poor game balance, and he can correct it so that all players have an equal opportunity to achieve maximum performance. Data mining also builds better theories. It gives the game designer insight into how players use and abuse the game. It broadens perspective, proves or disproves hypotheses, and substitutes facts in place of opinions. With the increasing specialization of game development, a game designer no longer sees the big picture. It is all too common for any game developer to acquire a skewed view of the nature of his game. Disinformation, best-case scenarios, and a dose of self-hypnosis distort our theories.
But if we can see the big picture, we can begin to challenge our own misinformed opinions. Let's learn how to scan this big picture. In the beginning there may have been the Design, but let's start the cycle where data mining begins, so we can discover how to recycle old data into new design:
Figure 2. Recycle old data into new design.
1. Live: Scoop up lots of raw data in the live service.
2. Archive: From here, clean it up and store it for safe keeping in an archive.
3. Statistics: Sift through the data to create statistics, which are more informative than the raw data.
4. Analysis: Then apply the actual mining, which yields knowledge about player performance.
5. Hypothesis: Propose hypotheses about how to tune the game.
6. Test: Test each hypothesis and then introduce the new design into the live service.
The final step closes the loop. Each iteration of this cycle evolves game balance. Let's dive into the details. A massive multiplayer game has thousands of game assets, or more. Every class, item, monster, quest, skill, zone, or any other game object is a game asset. In the data these game assets are dead; in the live service these assets come to life. It is the players that animate them. Player behavior generates rich information about game balance, so scoop up as much data as possible. Collect a large sample. Like any other statistical data collection, the sample should be random or otherwise representative of the actual proportions of the player population. The larger the sample, the clearer the picture becomes. In a perfect game, an infinite number of players would render a perfect portrait of player behavior. On the other extreme, a small or biased sample generates no meaningful statistics. Given that this is a server-based game, collecting data is convenient. The data is already on your server. When should data be collected? Temporal cycles, such as the season, the day of the week, and the time of day, complicate data collection. The most basic and instructive of these cycles is the weekly cycle. Once you understand the week, you can grasp the effect of a month, season, or holiday. Players cannot play as often as they wish on all days of the week. They have real-world schedules. So their playing volume varies depending on which day of the week it is. A graph depicts when most players participate. For a given player demographic it might be higher on some days of the week and at some times of the day. For example, usage might peak on Saturdays, Sundays, and Friday evenings.
Figure 3. Player behavior is a function of the day of the week.
As well as the quantity, the quality of play differs depending on the day of the week. Some players might go on an extended adventure when they have more hours to spend. They might just stop in to keep in touch with friends when they have little time to spend. So, to avoid daily variation, collect player performance data once per week. This provides you with the average behavior for the whole week. Be sure to measure at exactly the same day and time of the week. You should automate this process, such as with "crontab" in the Unix environment, or whatever scheduling tools your database management software supports. When you measure once per week instead of once per day, you achieve three ends simultaneously: you eliminate weekday variation, you cut the data collection workload to one-seventh, and you cut the required archive storage space to one-seventh as well.
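To make the weekly schedule concrete, here is one possible crontab entry (a sketch; the snapshot script name is hypothetical) that runs every Sunday at 00:00:

# minute hour day-of-month month day-of-week  command
0 0 * * 0 /usr/local/bin/weekly_snapshot.sh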
If you are measuring data other than average player performance, then you may need to collect more often. But that is beyond the scope of this introduction. After scooping up the raw data, let's make it easier to analyze. Like processing a raw mineral, there are several steps that will prepare your data for mining. Many alternate methods can do this. Here is a simple method that economizes storage space and reduces mining computation. This preprocess has five general steps:
1. Take a snapshot of the database.
2. Validate that the data is clean and appropriate for analysis.
3. Integrate the data into a central archive.
4. Reduce the data down to just the fields you need.
5. Transform the reduced data into a form that is easy to analyze for player performance.
The details depend on the system's configuration. This example explains each step in a simple system:
Figure 4. Prepare the raw data for mining.
Suppose you are operating a fantasy MMORPG during its commercial service.
1. Start at the accounts database. This will be the first step to economy, since the accounts database has the ID of every record that you want information on. Schedule an automated snapshot of the user data at 00:00 on Sunday morning.
2. Validate which data is relevant and clean. This eliminates garbage as soon as possible, so that you are not storing or analyzing unusable data. Starting at the accounts database, exclude unregistered accounts and administration accounts. For example, exclude test and admin characters that have artificial attributes. For each valid character in an account, query for activity in the log database. If the character has not been active during the previous week, then its record contains no player performance information.
3. Back up valid user, log, and accounts records into an archive database. This will be a useful warehouse that you may return to in the future to mine for data you have not considered yet. Treat this backup preciously; if you were an archaeologist, this would be your find; if you were a detective, this would be your forensic sample.
4. You are now overwhelmed with a deluge of data. There is much more than you need to analyze a particular problem, such as the amount of experience points earned per hour of play. So reduce the data down to the fields you need. In this example, select the character ID, level, class, experience points, and number of hours played. Create a table of these values:
ID, level, class, exp, time
5. Transform this reduced data to make it easier to analyze. Since this archive has weekly versions of the data, use last week's data to create new information. Get the difference of the experience points and the difference of the time played. Append these columns to the table. If this is the character's first week, then there will be no information from the previous week. If the character has not played in a while, then search backward through each prior week's archive.
Δ exp = exp1 – exp0
Δ time = time1 – time0
ID, level, class, exp, time, Δ exp, Δ time
Figure 5. Archive a table of player performance data in terms of EPH.
Basic statistics can extract information from this fresh, well-prepared data. Since there is too much raw data to draw conclusions from, categorize or aggregate this data. For a simple example, let's categorize the data by one of four fantasy player classes: fighter, priest, rogue, or wizard. We will attempt to measure performance. Do not be misled by the popularity of each category.
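As an illustration of the reduce and transform steps just described, here is a short sketch (not from the original article) in Python with pandas; the file names and column names are hypothetical and mirror the table above:

import pandas as pd

# Two weekly snapshots with columns: id, level, class, exp, time (hours played).
this_week = pd.read_csv('snapshot_week1.csv')
last_week = pd.read_csv('snapshot_week0.csv')

# Join on character ID and compute the weekly differences.
merged = this_week.merge(last_week[['id', 'exp', 'time']],
                         on='id', suffixes=('', '_prev'))
merged['d_exp'] = merged['exp'] - merged['exp_prev']
merged['d_time'] = merged['time'] - merged['time_prev']

# Keep only characters that actually played this week.
table = merged.loc[merged['d_time'] > 0,
                   ['id', 'level', 'class', 'exp', 'time', 'd_exp', 'd_time']]
table.to_csv('performance_week1.csv', index=False)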
The number of characters that fit into a certain class or choose a strategy in the game depends on many variables irrelevant to optimum performance. Cultural preferences, aesthetics, fads, rumors, and other trends sway players' choices. Chasing popularity as a measure of performance leads to a vicious circle. Like a cat chasing its own tail, balance would never be achieved. Measure rates instead of instantaneous values. High performance is not any particular value. It is a measure of change from a low value to a high value in a short period of time. The period of time to measure is the week. As noted earlier, the week is more stable than the day. Let's take experience points per hour versus level for each class as an example. "Experience points per hour" is such a useful indicator that I will abbreviate it as EPH. Like a car's MPH (miles per hour), a player's EPH indicates his speed or rate of progress. Count the experience points, which are a performance indicator, instead of the population of a class. Count the change in experience points from one week to the next. Count the time that the character actually played, instead of the total amount of time that has passed. For example, if the character played twenty hours in a week then use this value, instead of the 168 hours in a week. This gives the following derivative:
Figure 6. Like a car's MPH indicates speed, a player's EPH indicates rate of advancement.
EPH = Δ exp / Δ time
Let's graph the results. On the vertical axis is the EPH. On the horizontal axis is the level range. If there are too few samples per level, then group nearby levels together.
Figure 7. Compare player performance between various strategies in the game.
Then plot each category as a data series. In this example each series is a player class: fighter, priest, rogue, or wizard. Along the horizontal axis we can see the difference between the heights of each class' performance. If the difference is small, then it is statistically insignificant. If the difference is large, then it is statistically significant. Based on the size of the sample and other qualities of the data, statistics defines the minimum gap that indicates significantly low performance. In this example, the most significant gap is between the high-level fighter and the other three high-level classes. So statistics discovered that the high-level fighter segment of the player population suffered from low performance during that week. The core of data mining begins where statistics ends. Here we can extract golden knowledge from the raw mineral that we began with. Several techniques can be applied, most of them particular to the data and the purpose. Here is a simple set of techniques. Calculate the maximum and minimum performance values. Do this for performance rate and performance growth. In this example EPH is a derivative of the experience points, and the EPH itself can be viewed as a function of class and level:
EPH = f(level)
Calculus provides the derivative:
EPH' = f'(level)
Because of the finite sample size, the precise limit and derivative do not exist. However, the approximate derivative will provide insight into the game balance. At the maximum derivative players rapidly advance. At the minimum derivative players suffer stagnation. They play for hours with little advancement. Each of these will help isolate low-performance segments of the player population. Comparing a previous and a subsequent period can identify a trend.
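To make the class-by-level comparison concrete, here is a pandas sketch (hypothetical file and column names, continuing the table built earlier):

import pandas as pd

# Weekly performance table with columns: id, level, class, d_exp, d_time.
perf = pd.read_csv('performance_week1.csv')
perf['eph'] = perf['d_exp'] / perf['d_time']

# Group nearby levels together so each bucket has enough samples.
perf['level_band'] = (perf['level'] // 10) * 10

# Mean EPH per class and level band: one row per band, one column per class.
summary = perf.groupby(['class', 'level_band'])['eph'].mean().unstack('class')
print(summary)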
In this example, the EPH can be subtracted from its value last week, creating a new function:
Δ EPH = f1(level) – f0(level)
Where the change is significantly positive, that segment of players is performing better than it had been the previous week. This helps isolate the effect of a modification to a game's design. Players' adjustment to the modification delays its full impact. Usually only early adopters will use the new feature at first. If it outperforms an old substitute, then most players will migrate. After migration, the empirical comparison between the two features stabilizes. Both of the above techniques can be combined to isolate and track a specific low-performance segment. For example, tracking the change in high-level fighters from one week to the next indicates whether their performance is improving or not.
Δ EPH = Fighter1(80%) – Fighter0(80%)
Comparing this value to the other class values indicates the relative change. As the values converge, the classes are becoming balanced.
Figure 8. Top-down meets bottom-up when you analyze strategies as clusters of game assets.
Data mining can combine top-down analysis techniques with bottom-up analysis techniques. From the bottom up our game may appear to be a galaxy of game assets with no hierarchical organization. From the top down the same game may appear to be rigid containers of game assets. Cluster analysis might improve class or strategy design, since it generates clusters from the bottom up by mapping differences between individual game assets. This can compare similar assets in different categories. As well, cluster analysis can identify assets that multiple strategies share. If you are interested, the books at the end of this article explain techniques for cluster analysis. It is dangerous for a game designer to assume that he knows his game. The analysis should inspire the hypothesis, since analyzing player behavior can prove or disprove a good hypothesis about game assets. The kind of hypothesis mentioned here meets two criteria:
1. It explains existing trends of game assets.
2. It predicts the result of modifying, inserting, or removing a set of game assets.
Here are two examples of game asset hypotheses:
1. In EverQuest, players prefer pretty races.
2. In Dark Ages, a trap skill will increase mid-level rogue performance.
The domain delimits where the hypothesis applies. In this case the domain is a particular MMORPG, Sony Online Entertainment's EverQuest or Nexon's Dark Ages. Define the domain, or scope, to which the knowledge you believe you are discovering applies.
Figure 9. Is player preference skin deep? (SOE's EverQuest)
Suppose when you discuss the appearance of races in an MMORPG with artists, the team divides into two camps. One camp argues for an equal number of game assets for gruesome player races as well as beautiful races. The other camp argues that many more players will choose beautiful races, so almost all assets should be devoted to the more beautiful races. Nick Yee provides survey data in his EverQuest research paper "Norrathian Scrolls" that may inspire this hypothesis. EverQuest players prefer Elves, in general, about 10-to-1 compared to the two least popular and, arguably, ugliest races, Trolls and Ogres (http://www.nickyee.com/eqt/metachar.html#4). To make the hypothesis rigorous, the actual player population and the race performances should be analyzed, because, as noted earlier, data mining more accurately depicts player behavior than a survey does.
Figure 10. How can you balance group members but still keep the group together?
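A week-over-week trend like Δ EPH is a one-line subtraction once the weekly summaries exist. Here is a pandas sketch (hypothetical file names, same columns as the summary built above):

import pandas as pd

# Per-class mean EPH by level band for two consecutive weeks.
week0 = pd.read_csv('eph_summary_week0.csv', index_col=['class', 'level_band'])
week1 = pd.read_csv('eph_summary_week1.csv', index_col=['class', 'level_band'])

# Δ EPH = f1(level) - f0(level); positive means that segment improved.
delta_eph = week1['eph'] - week0['eph']
print(delta_eph.sort_values())  # most stagnant segments first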
(Nexon's Dark Ages) In the second example, suppose you have analyzed player performance in Dark Ages. You note that mid-level, but not high-level, rogues have low performance in terms of measured EPH when compared to the other four classes. In 1999 this was one of the decisions that I faced. I hypothesized that inserting a set of mid-level trap skills would improve performance by improving their damage ratio. Then I used techniques in this article to test my hypothesis. During the transition, some players, especially non-rogues, argued about the performance of rogues. But the experiment succeeded. Within a month mid-level rogues had balanced EPH. Testing is the most rigorous, sensitive, and critical step in the cycle. Although it feels good to hold a gem of wisdom, it feels bad to realize your treasured hypothesis is a false gem. So it is tempting, and sadly common, to halt the cycle before the testing stage. Test each hypothesis. If it is correct, it will survive with its value proven. If it is incorrect, then please conserve the team's resources by discarding it. A good test has two and only two possible outcomes: the hypothesis is true, or the hypothesis is false. A good test rarely yields an inconclusive result, which would mean the test needs to be repeated or modified to yield a definite true or false. This cycle is an elaboration of a basic idea: trial and error. Since testing detects error, it improves a game's design.
Figure 11. Measure test results to validate or invalidate the hypothesis.
In the earlier example, high-level fighters suffered from low EPH. Suppose someone suggests a new game asset, a new skill to increase the fighter's combat effectiveness. You design "Sword Mastery" to do this. After collecting data on the test server you compare the old and new EPH for each class in order to conclude whether the skill improved high-level fighter EPH and what other results it may have had. In the test, mirror actual conditions as much as possible. Just like an ideal point, or a limit, identical conditions do not exist, yet you can approximate them. Test with an identical configuration, build version, and feature set, at the same day of week and time of day. Additionally, the population will be smaller, which means results will be less precise. But the most uncontrollable factor of the test is the players. Your test player population is not going to be a random sample. It will be a self-selected sample whose average motivations and behavior will be biased. So the test contains error. Worse than this, discovering the direction of the bias may be an intractable problem. Although a perfect test is impossible, a test that contains experimental error may still improve your game's balance tenfold, because this process is iterative. If a single iteration cuts game imbalance in half, two iterations will quarter game imbalance, and so on. This improvement is far better than no improvement or, worse, than designing with disinformation, such as feedback motivated by competing special interest groups. After a new design passes this test, feed the design back into the live service. The process is iterative, so for best results, repeat monthly. We have glossed over the general process. Let's now step back and consider a healthy scope for data mining. Data mining provides answers that other methods of evolutionary game design cannot. However, it is not a panacea. Data mining takes numbers, processes them, and makes new numbers. These numbers cannot tell you how each player feels.
The player may be misinformed or biased about the balance of the game, but she is always right about how she feels. Some players' feelings may be immature, and some players may have contradictory responses. Yet the paradox is that they are all right. Every player’s emotional response is valid. The data also does a poor job of revealing how players feel about each game asset. It does not indicate which asset has beautiful modeling, expressive animation, or a compelling story. Figure 12. Preemptive data mining employs your staff to harass customers. A healthy scope excludes preemptive or preventive data mining, which attempts to identify and prevent cheating, harassment, or sabotage. This equates to profiling and an invasion of privacy. Besides being unethical, preemptive data mining is disastrous. Data mining cannot establish cooperation or culpability. Not only is it prone to random error and false positives, but also it creates a new source of player harassment. This source of harassment is hard to discover, impossible to eliminate, and much more costly: Harassment by your own staff upon your customers. Data mining is also called knowledge discovery. While you can mine knowledge from data, you cannot mine wisdom. You have to prioritize results and decide which game imbalances should be left alone. Data mining automates a process within your overall evolutionary design cycle. It amplifies an efficient design process and multiplies the problems in a poor process. Now that you have seen the general process, let’s apply it to some common MMOG problems. Here are four practical applications of data mining: 1. Balance the economy. 2. Catch cheaters. 3. Cut production costs. 4. Increase customer renewal. Each game asset that passes hands between players is a commodity or currency. These tradable game assets define the game’s economy. The commodities and currencies need not be limited to money and property. For example, in Nexon’s Dark Ages, I designed and implemented a labor currency, a political currency, and a religious currency. Figure 13. Religion, politics, and labor can also become currencies in an MMORPG. (Nexon's Dark Ages) Be careful when measuring individual character gains and losses. Account for transactions that exchange one commodity or currency for another. For example, a character could have less money after one week but have more wealth. He may have exchanged his money for other commodities of greater value. Track the game’s macro-economic indicators. See if the supply of currency is increasing or decreasing. Like a real-world money supply this tells you about the inflation rate of the currency. Measure key performance indicators and generate hypotheses of how to improve game balance. One simple balance technique you can use is to change the price of a game asset. Players are more receptive to price changes than they are to other attribute changes. For example, in 2002 when Stewart Steel noticed low admittance rate for wizards in Nexon’s Nexus: The Kingdom of the Winds, he increased the rate by increasing the starting items of that class. In effect, this increased the price that an NPC paid to the player for choosing the wizard career. Figure 14. Players tolerate price adjustments more than other changes. (Nexon's The Kingdom of the Winds) After testing the hypothesis, repeat the cycle each month. Each modification, although seemingly insignificant, can have a huge ripple effect on the rest of the economy. 
In the same example, if there were a higher starting value but a poor prospectus for the career of a wizard, then retention rate among wizards might drop. While balancing the strategies, such as player classes in a fantasy setting, ensure that each strategy remains unique. Keep the clusters in strategic space from converging. Let’s return to the original example. The low-performing high-level fighters have several unique and shared assets. When adding a new asset to balance their performance, it might be better not to give a fighter “Poison Tolerance.” If the Priest class has an ability to cure poison, then this would be redundant. It would reduce the group’s demand for Priests and begin to merge the two classes. Instead it might be better to provide “Sword Mastery” if no other class has this kind of ability. This controls the supply of assets so that each cluster of assets retains its unique niche in the game. Figure 15. Balance each strategy's performance yet keep each strategy unique. A cheater in a MMOG does not just cheat himself. He performs an injustice to all honest players. Cheating short-circuits gameplay, so it achieves exceptionally high performance. Players adopt high performance strategies, whether intended by the designers or not. Cheating also penalizes the relative performance of all non-cheaters. If not corrected quickly, the cheating will spread like wildfire. In a matter of weeks or even days a cheat can flood the game’s economy. These techniques can help catch cheating before it ruins the economy. Start at the table that preprocessing generated. This lists each character ID and their performance. Sort the list by the performance column. Now, at the top of the list is the most suspicious character ID. Investigate his exceptional performance. Figure 16. Investigate suspicious player performance starting at the top. Let's sort the example table by the EPH column. The character at the top is the most suspicious. Even though he has a lower total experience gain, he has a higher rate, since he accumulated the experience during fewer hours. Investigate the logs to discover how he performed so well. The answer will enlighten you as to how players use and abuse your game. The answer does not indicate the player’s intention. The player may have been using a legitimate feature of the game. In fact, a player may argue that unless he modified the software, all of his behavior is a legitimate use. He played the game as it was given to him. Regardless of the motive, deleting a cheater cannot solve the problem. System imbalances breed cheaters, so the design itself can prevent cheating. Each game asset took some amount of programming, art, design, testing, and customer service to develop and maintain. Yet some of these classes, items, monsters, quests, skills, zones, and other objects in your game are being wasted. This lowers developer morale. Low-performance ROI -> 0% Game assets with low performance have a return-on-investment value that approaches zero. Players make decisions and optimize their decisions over time. Communication with other players accelerates migration to an optimal strategy. They quickly adopt the highest performing game assets available. They discard low performance assets. In terms of competition these assets are liabilities, so they become obsolete. For an obvious example, if there are two nearly equivalent weapons, except one has a higher damage rate, the other weapon is obsolete. 
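Before continuing, here is a sketch of the cheat-screening sort described above, again in pandas (the file name is hypothetical, and the three-sigma flag is one simple choice of threshold, not a rule from the article):

import pandas as pd

# Weekly performance table with columns: id, class, level, d_exp, d_time.
perf = pd.read_csv('performance_week1.csv')
perf['eph'] = perf['d_exp'] / perf['d_time']

# Sort by EPH and investigate the top of the list first.
suspects = perf.sort_values('eph', ascending=False)

# Flag characters more than three standard deviations above the mean EPH.
threshold = perf['eph'].mean() + 3 * perf['eph'].std()
print(suspects[suspects['eph'] > threshold])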
In a game, this kind of decision between obsolete assets and newer assets creates fat. It means there is some fraction of your game that might as well not exist, because no one uses it. Imagine having to break this news to an artist: "Thank you for the long nights you spent making this new graveyard that we specified, but no one hunts there. Sorry about that."
Figure 17. Are players using all of your game's assets? (Nexon's Dark Ages)
It does not have to be this way. A wise patch can put the assets back into the players' list of options. Recycle the artists', programmers', and testers' hard work as much as possible. Create new, well-balanced instances. Measure and prove their balance in terms of performance. Do not change the values of existing instances. Let them remain as they are. Recycle the art with modest modifications so those man-hours are not lost. But only recycle obsolete assets. Players do not tolerate recycling of assets that they do not consider obsolete. They demand fresh assets. If the player cannot or does not realize how to improve his performance with the choices he has already made, he is doomed. For example, if all high-level fighters perform worse than average high-level characters, all fighters are doomed. The players' sense of doom will become the developer's death knell unless you act fast.
Figure 18. If a player cannot improve her performance you may lose her.
When a player suffers from poor performance in a single-player game, he suffers alone. But in a massive multiplayer game his whole team suffers. Unfortunately, a good choice for the team to increase its performance is to exclude low performers. When low performance is not the player's fault, this breeds frustration. Suppose a group of players can increase its EPH 20% by excluding low-performance players. Sadly, many groups will. Suppose the excluded low performer is unable to alter his EPH liability. Like an endangered species that is unfit to hunt and unable to evolve, this set of players becomes extinct. The character will not only become extinct from the playscape, but the player's motivation to play will become extinct, too. If she will not play, eventually she will not pay, either. To prevent this, balance the strategies. Do not edit existing instances of game assets. This will upset other players using other strategies. They will perceive the correction as an injustice, an act of favoritism. Instead of creating a perceived injustice, add new assets. If your game is commercial, improve player performance instead of worsening it. If some asset is too good, but players love it, let it be. Only when the asset would cause long-term customer losses should it be removed, because removing or degrading an asset decreases customers' good faith. There are few things that say, "I do not want your money, go away" as quickly as removing a beloved feature in the game. Players paid money in advance and continue to pay a subscription fee each month for a reason. They expect the game to improve each month. Their criterion is simple. The game should improve for their character personally and for the special interest group that their character belongs to. Correcting a major error costs some good faith, but leaving it in place costs more. In these uncomfortable cases, prove to players that you care through negotiation and diplomacy. Players' feelings are at stake. Imagine a worst-case scenario from the most extreme player's perspective: This morning your paycheck was suddenly slashed 50%.
Your brand of car drove half as fast, required repairs twice as often, and cost twice as much. Because the "gods" said so. Some customers take the game just as seriously. We have only touched the tip of the iceberg of data mining and game design. Both are elaborate and exciting fields for research, experimentation, and application. For years we, as game designers, have wanted systematic and scientific tools. I hope this tool will help improve your game's design. If you have questions, comments, or would like to discuss this topic in detail, please contact me at kennerly (AT) finegamedesign (DOT) com.
I could not have written this article without the support of each of Nexon's employees and players. They encouraged my experiments.
Han, Jiawei, and Micheline Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers: San Francisco, 2001. An introductory practical explanation for database programmers.
Hand, David, Heikki Mannila, and Padhraic Smyth. Principles of Data Mining. MIT Press: Cambridge, 2001. An interdisciplinary explanation of the mathematics and fundamentals of data mining.
Electronic Privacy Information Center. "Total Information Awareness (TIA)." <http://www.epic.org/privacy/profiling/tia/> 20 April 2003. An ongoing log of preemptive data mining and its danger to society.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206763.24/warc/CC-MAIN-20200922192512-20200922222512-00750.warc.gz
CC-MAIN-2020-40
32,831
139
https://intellipaat.com/community/12290/how-to-extract-the-data-from-ibmkeyword-in-uipath
code
I am experimenting with the IBM Watson NLU Text Analysis package in UiPath with a simple text. I am able to extract the key-value pair information for Categories, Concepts, and Sentiments using .ToString(). However, I am having trouble figuring out how to extract the information for Keywords and Entities, which are of types IBMKeyword and IBMEntity respectively. Calling .ToString() on them in a message box gives output that is not helpful, or that I do not know how to use. Below is a screenshot of my UiPath Studio:
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506429.78/warc/CC-MAIN-20230922234442-20230923024442-00876.warc.gz
CC-MAIN-2023-40
499
3
https://meta.stackoverflow.com/questions/266705/why-are-some-pages-randomly-less-secure-than-others
code
I was just browsing Meta when I noticed the following (I was on this post in a different tab when I took the right screenshot). Now this is where the "randomness" comes in. I only get the left screenshot for certain posts (roughly 1 in every ~300), and when I do, if I close the tab and reopen the post in a new one, I get the right shot (as usual). But this doesn't always work; sometimes I have to close the tab, wait ~5 minutes, and then reopen a new one. I've also noticed this behaviour on SO (only once from what I've seen); so far - if I recall correctly - I've seen this happen four times on Meta. I also disabled all add-ons/extensions and still got the same results. In addition, I've also found that once I come across an "insecure" page and then continue to view other posts (on the same domain in the same tab), it still says the page isn't secure (perhaps this is a bug with Chrome?). Here's a shot of the console for the in/secure pages. Even more puzzling, I've managed to view the same post in different tabs: one says it's secure, the other not. For the left shot's tab I was previously at Meta's home page (and again, it said the page was insecure); I then opened a post in that same tab and took the shot. For the right one, I just opened a new tab of the same post. What's causing this to randomly occur? Browser: Chrome 36.0.1985.125. OS: Windows 7 Home Premium (32-bit).
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644855.6/warc/CC-MAIN-20230529105815-20230529135815-00636.warc.gz
CC-MAIN-2023-23
1,383
10
https://www.thestudentroom.co.uk/showthread.php?t=4650438
code
GCSE English language paper 1 question 3
I'm struggling with English lang paper 1 q3. Could someone list structural devices I could use in my answer?
Non-linear narrative
Change of perspective
The tone of the extract
Make the point
Develop on the point
Make up a fact
What impact does the fact have on the point you're making (2 sentences)
Recommendation in response to the scenario you are presented with, i.e. "Surely it would be better in the future if ......"
I used this and got an A grade in unit 2.
Posted from TSR Mobile
Nonlinear narrative: The story skips around to different points in time; the order of events is rearranged or deconstructed in a way reflecting the main character's psychological state or the story's theme. This is used to place audiences in the minds of characters who have unusual ways of thinking or whose ability to process information is impaired. There are twists on the chronological order of traditional narratives (for instance, manipulation of time). The theme in nonlinear novels often deals with the ways people experience memory and time, and the role these elements play in human experience.
A retrospective narrative is when the story being told is not happening at the time the narrator is describing it, and it highlights changes in the narrator because of and since the events of the story transpired.
Cinematic writing: The action taking place is watched closely enough to highlight important details.
A frame narrative contains a second narrative in order to provide a context or setting for it. Sometimes this framing narrative will begin and end the narrative as a whole, providing bookends, while other times the framing narrative will simply be present at the beginning of the narrative. The framing narrative "sets the scene" for the second narrative, giving us a context in which we can read and interpret the text.
Prolepsis is the representation or assumption of a future act or development as if presently existing or accomplished. This is done by referring to a future event as if it is already completed, or by taking the narrative forward in time to show events expected to occur, or that have already occurred in the future, even though the main part of the narrative is further back in the past (often used to reveal some parts of a plot, which will be filled in later).
Analepsis is a form of flashback in which earlier parts of a narrative are related to others that have already been narrated.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107906872.85/warc/CC-MAIN-20201030003928-20201030033928-00158.warc.gz
CC-MAIN-2020-45
2,441
18
https://eurospreed.com/how-to-find-hostname-on-bluehost/
code
How To Find Hostname On Bluehost
Finding a good low-cost web hosting provider isn't simple. Every website has different requirements from a host. Plus, you have to compare all the features of a hosting company, all while looking for the best deal possible. This can be a lot to sort through, especially if this is your first time buying hosting or building a website. Most hosts will offer very cheap introductory prices, only to raise those prices two or three times higher once your initial contract is up. Some hosts will offer free bonuses when you sign up, such as a free domain name or a free SSL certificate. And some hosts will be able to offer better performance and higher levels of security. How To Find Hostname On Bluehost
Below we dive deep into the best cheap web hosting plans out there. You'll learn what core hosting features are essential in a host and how to assess your own hosting needs, so that you can pick from one of the best cheap hosting providers below. Disclosure: When you purchase a web hosting package through links on this page, we earn some commission. This helps us to keep this site running. There are no additional costs to you at all by using our links. The list below is of the best cheap web hosting plans that I have personally used and tested.
What We Consider To Be Cheap Web Hosting
When we describe a hosting package as being "cheap" or "budget," what we mean is hosting that falls into the price bracket between $0.80 and $4 per month. While researching cheap hosting providers for this guide, we looked at over 100 different hosts that fell into that price range. We then assessed the quality of their cheapest hosting package, value for money, and customer service. In this article, I'll be reviewing this world-class website hosting company and including as much relevant information as possible. I'll look at the features, the pricing options, and anything else I can think of that I believe could be of benefit, if you're deciding whether to sign up with Bluehost and get your websites up and running. So without further ado, let's check it out. Bluehost is one of the biggest web hosting companies in the world, getting both substantial marketing support from the company itself and from affiliate marketers who promote it. It really is a massive company, that has been around for a long time, has a large reputation, and is definitely one of the top choices when it comes to web hosting (certainly within the top 3, at least in my book). But what is it exactly, and should you get its services? Today, I will answer all there is you need to know, given that you are a blogger or an entrepreneur who is looking for a web host and does not know where to start, since it's a great service for that audience in general. Let's imagine you want to host your websites and make them visible. Okay? You already have your domain name (which is your site's destination, or URL) and now you want to "turn the lights on." How To Find Hostname On Bluehost You need some hosting... To achieve all of this, and to make your website visible, you need what is called a "server."
A server is a black box, or device, that stores all your website data (files such as photos, text, videos, links, plugins, and other information). Now, this server has to be on constantly and it has to be connected to the internet 100% of the time (I'll be mentioning something called "downtime" later on). In addition, it also requires (without getting too fancy and into detail) a file transfer protocol, commonly called FTP, so it can show web browsers your site in its intended form. All these things are either expensive or require a high level of technical skill (or both) to build and maintain. And you could go out there and learn all of this on your own and set it up... but instead of buying and maintaining your own server, why not just "rent hosting" instead? This is where Bluehost comes in. You rent their servers (called shared hosting) and you launch a website using those servers. Since Bluehost keeps all your data, the company also allows you to set up your content management system (CMS, for short), such as WordPress. WordPress is an extremely popular CMS... so it just makes sense to have that option available (almost every hosting company now has this option as well). In short, you no longer need to set up a server and then separately integrate the software where you build your content. It is all rolled into one package. Well... imagine if your server were in your home. If anything were to happen to it at all, all your files are gone. If something goes wrong with its internal processes, you need a professional to fix it. If something overheats, or breaks down, or gets damaged... that's no good! Bluehost takes all these hassles away and handles everything technical: pay your server "rent," and they will take care of everything. And once you buy the service, you can then start focusing on adding content to your site, or you can put your effort into your marketing campaigns.
What Services Do You Get From Bluehost?
Bluehost offers a myriad of different services, but the main one is hosting, of course. The hosting itself comes in different types, by the way. You can rent a shared server, have a dedicated server, or a virtual private server. For the purpose of this Bluehost review, we will focus on hosting services and other services that a blogger or an online business owner would need, rather than go too deep down the rabbit hole and discuss the other services that are targeted at more experienced people.
- WordPress, WordPress PRO, and e-commerce: these hosting services are the plans that allow you to host a website using WordPress and WooCommerce (the latter of which allows you to do e-commerce). After purchasing any of these packages, you can start building your site with WordPress as your CMS.
- Domain name marketplace: you can also buy your domain name from Bluehost instead of other domain registrars. Doing so will make it easier to point your domain to your host's name servers, since you're using the same marketplace.
- Email: once you have purchased your domain name, it makes sense to also get an email address tied to it.
As a blogger or online business owner, you should almost never use a free email service like Yahoo! or Gmail. An email like that makes you look amateurish. Fortunately, Bluehost gives you one for free with your domain name. Bluehost also offers dedicated servers. And you may be asking, "What is a dedicated server, anyway?" Well, the thing is, the standard web hosting plans of Bluehost can only handle so much traffic for your website, after which you'll need to upgrade your hosting. The reason is that the standard servers are shared. What this means is that one server can be serving two or more websites at the same time, one of which can be yours. What does this mean for you? It means that the single server's resources are shared, and it is doing multiple jobs at any given time. Once your website starts to hit 100,000 site visits per month, you are going to need a dedicated server, which you can also get from Bluehost for a minimum of $79.99 per month. This is not something you should worry about when you're starting out, but you should keep it in mind for sure.
Bluehost Pricing: How Much Does It Cost?
In this Bluehost review, I'll be focusing my attention mainly on the Bluehost WordPress hosting plans, since that's the most popular one, and very likely the one you're looking for and the one that will fit you best (unless you're a huge brand, company, or website). The three available plans are as follows:
- Basic plan: $2.95 per month / $7.99 regular price
- Plus plan: $5.45 per month / $10.99 regular price
- Choice Plus plan: $5.45 per month / $14.99 regular price
The first price you see is the price you pay upon signup, and the second price is what the cost is after the first year of being with the company. So basically, Bluehost is going to charge you on an annual basis. And you can also choose the number of years you want to host your site with them for. How To Find Hostname On Bluehost If you choose the Basic plan, you will pay $2.95 x 12 = $35.40 starting today, and by the time you enter your 13th month, you will then pay $7.99 per month, which is also charged annually. If that makes any sense. If you are serious about your website, you should 100% get the three-year option. This means that for the Basic plan, you will pay $2.95 x 36 months = $106.20. By the time you hit your fourth year, that is the only time you will pay $7.99 per month. If you think about it, this approach will save you $120 over three years. It's not much, but it's still something. If you want to host more than one website (which I highly recommend, and if you're serious, you'll probably be getting more at some point in time), you'll want to use the Choice Plus plan. It'll allow you to host unlimited websites.
What Does Each Plan Offer?
So, in the case of the WordPress hosting plans (which are similar to the shared hosting plans, but are more tailored towards WordPress, which is what we'll be focusing on), the features are as follows. For the Basic plan, you get:
- One website only
- Secured website via SSL certificate
- Maximum of 50GB of storage space
- Free domain name for a year
- $200 marketing credit
Keep in mind that domain names are purchased separately from the hosting.
You can get a free domain name with Bluehost below. For both the Bluehost Plus hosting and Choice Plus plans, you get the following:
- Unlimited number of websites
- Free SSL certificate. How To Find Hostname On Bluehost
- No storage or bandwidth limit
- Free domain name for one year
- $200 marketing credit
- 1 Office 365 mailbox that is free for 30 days
The Choice Plus plan has the added benefit of the CodeGuard Basic option, a backup system where your files are saved and replicated. If any crash happens and your site data disappears, you can restore it to its original form with this feature. Notice that even though both plans cost the same at first, the Choice Plus plan then defaults to $14.99 per month, regular price, after the set number of years you have chosen.
What Are The Advantages Of Using Bluehost?
So, why choose Bluehost over other web hosting services? There are hundreds of web hosts, many of which are resellers, but Bluehost is one of a select few that have stood the test of time, and it's probably the most well known out there (and for good reasons). Here are the three main advantages of choosing Bluehost as your web hosting provider:
- Server uptime: your website will not be visible if your host is down; Bluehost has more than 99% uptime. This is very important when it comes to Google SEO and rankings. The higher the better.
- Bluehost speed: how your server responds determines how fast your website shows in a browser; Bluehost is lightning fast, which means you will reduce your bounce rate. Albeit not the best when it comes to loading speed, it's still hugely important to have a fast server, to make the user experience better and improve your ranking.
- Unlimited storage: if you get the Plus plan, you need not worry about how many files you store, such as videos; your storage capacity is unlimited. This is really important, because you'll probably run into some storage issues later down the track, and you don't want this to be a problem... ever.
Finally, customer support is 24/7, which means no matter where you are in the world, you can contact the support team to fix your website problems. Pretty standard nowadays, but we're taking this for granted... it's also extremely important. How To Find Hostname On Bluehost Also, if you've received a free domain name with them, then there will be a $15.99 fee that will be deducted from the amount you originally paid (I imagine this is because it sort of takes the "domain out of the market," not sure about this, but there probably is a hard cost for registering it). Lastly, any requests after 30 days for a refund... are void (although in all honesty... they should probably be strict here). So as you can see, this isn't necessarily a "no questions asked" policy, like with some of the other hosting options out there, so make sure you're okay with the rules before proceeding with the hosting.
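As a quick sanity check on the three-year pricing math discussed above, here is a small Python sketch (the prices are the ones quoted in this review; the comparison itself is my own, not from Bluehost):

# Basic plan: $2.95/mo intro rate, $7.99/mo regular rate, billed annually.
intro, regular = 2.95, 7.99

three_year_upfront = intro * 36                  # pay 3 intro years at once
one_year_then_renew = intro * 12 + regular * 24  # 1 intro year + 2 renewal years

print(round(three_year_upfront, 2))    # 106.20
print(round(one_year_then_renew, 2))   # 227.16
print(round(one_year_then_renew - three_year_upfront, 2))  # 120.96, roughly the $120 saving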
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00250.warc.gz
CC-MAIN-2021-31
13,779
82
http://cogitatornz-cogitatornz.blogspot.com/2009/08/no-user-picture-on-start-menu-in.html
code
Others have had this problem and many an answer has been given in the forums. A Google search for "show user picture on start menu" gets the widest range of answers. Surprisingly, none of them mention one of the simplest answers of all, as I found after hours of searching and experimenting. The answer lay in ensuring you have activated "Use visual styles on windows and buttons" in System Properties. To check that you have it activated, right-click on My Computer and choose Properties. When the System Properties pane opens, left-click on the Advanced tab and then, under "Performance," on the "Settings" button. When the "Performance Options" pane opens, left-click on the "Visual Effects" tab. If the "Custom" settings have been chosen, scroll to the last item in the list and ensure you have checked "Use visual styles on windows and buttons" so that it is activated. Your chosen picture in the User Accounts section of the Control Panel will then be shown at the top of the Start menu pane. A different method of getting the same result, but losing the benefits of your other custom settings, is to go to Display Properties and, under the Appearance tab, choose Windows XP style in the "windows and buttons" box.
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202688.89/warc/CC-MAIN-20190322180106-20190322201655-00063.warc.gz
CC-MAIN-2019-13
1,176
2
https://forum.htc.com/topic/6222-how-to-prevent-computer-from-logging-out-during-usage-of-the-vive/
code
perha
Posted August 3, 2019
Hello, some computers and work environments use the HTC Vive in accessible workstations for users in the workspace, for example at a library or office. However, for security reasons, the computers in use generally have an automatic hard logout after a period of inactivity (let's say 10 minutes), where inactivity is determined by keyboard or mouse input. However, while using the Vive, the user receives no warning, and the inputs from the Vive and controllers do not count as activity, causing logouts midway through a session and deleting data. How do you prevent this? The logical solution would be to include movement of the headset or controllers as activity; however, this information cannot be sent to the desktop, from what I understand. I would like to note that solutions that suggest the logout feature be entirely removed are not acceptable, since workspaces keep automatic logouts for security reasons, and this should be considered part of the problem. The computer should log out if it determines there have been ten minutes of idleness: no user input. The workspace I work with has a workstation with the HTC Vive headset and two motion controllers and runs on Windows 10. Is there any way to do this currently? And if not, could this be implemented? Thanks!
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656675.90/warc/CC-MAIN-20230609100535-20230609130535-00730.warc.gz
CC-MAIN-2023-23
1,637
6
https://shopify-spy.com/onvoard-prompt/
code
Powerful editor to fully customize pop-ups.
Add Cart Trigger - Trigger a popup when the user adds an item to the cart.
Initiate Checkout Trigger - Trigger a popup when the user clicks checkout.
Filter by contact properties. For example, display discount codes for VIPs.
Filter by Shopify store properties. For example, prompt if cart amount < $100.
Why choose OnVoard Prompt? With Prompt, we are pushing the limits of what you can accomplish with popups by going beyond email capture and helping you to increase store revenue with cart upsell.
- Email Popup: Welcome new users with a discount code for their first purchase.
- Cart Upsell: Recommend products when the user adds a cart item.
- Cross-sell products when the user initiates checkout.
- Abandonment Protector: Use an exit popup when a user with cart items tries to exit.
For GDPR, data are hosted in the EU.
Abandonment Protector with Exit Popup
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473558.16/warc/CC-MAIN-20240221202132-20240221232132-00066.warc.gz
CC-MAIN-2024-10
867
12
https://algorithmicfairness.wordpress.com/category/fatml/
code
As part of the research we’re doing in algorithmic fairness we’re looking to hire a post-doctoral researcher who can help us bridge the gap between the more technical aspects of algorithmic fairness and the ways in which this discussion informs and is informed by the larger context in the social sciences. Specifically, - Candidates for this position should have a strong grasp of technical systems (including machine learning), as well as a rich understanding of socio-technical discussions. For example, candidates might have an undergraduate degree in computer science and a PhD in a social science field. Or they may have a more hybrid degree in an information school or CS program. They may be a data scientist or study data scientists. - Candidates should be able to translate between engineers and critics, feel comfortable at ACM/AAAI/IEEE conferences and want to publish in law reviews or social science journals as well as CS proceedings. - Candidates should be excited by the idea of working with researchers invested in fairness, accountability, and transparency in machine learning (e.g., fatml.org). - Preference given for researchers who have qualitative empirical skills. If you might be such a person, please do send in an application (Role #1). Data & Society is a wonderful place to be if you’re at all interested in this area. danah boyd has assembled a group of thinkers that represent the best kind of holistic thinking on a topic that intersects CS, sociology, political science and the law.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057787.63/warc/CC-MAIN-20210925232725-20210926022725-00363.warc.gz
CC-MAIN-2021-39
1,521
7
https://fleio.com/docs/2023.03/developer/obtaining-frontend-sources.html
code
Obtaining frontend sources Fleio frontend sources are available to customers that pay for the license 1 year in advance. Frontend sources are available for download as a docker image, where the version is encoded as <year>-<month>:<number>. E.g., for the Fleio 2023.01.1 release the image is named hub.fleio.com/frontend_sources-2023-01:1. To download Fleio frontend sources you will need to authenticate to hub.fleio.com first. You can authenticate using the following command, with your license id as username and license key as password:
docker login hub.fleio.com
After you are authenticated to hub.fleio.com you can download the sources image.
docker pull hub.fleio.com/frontend_sources-2023-01:1
In order to extract the sources from the image you will need to create a docker container. Below is a list of commands to extract the 2023.01.1 frontend sources from the image. After you execute these commands, a 2023.01.1.tar.gz archive containing the Fleio frontend sources should be available in your current directory.
container_id=$(docker create hub.fleio.com/frontend_sources-2023-01:1 true) docker cp $container_id:2023.01.1.tar.gz . docker rm $container_id
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510501.83/warc/CC-MAIN-20230929090526-20230929120526-00381.warc.gz
CC-MAIN-2023-40
1,071
17
https://www.insidestoragenetworking.com/2022/06/brocade-fabric-performance-impact.html
code
Brocade Fabric Performance Impact Notification Brocade Fabric Performance Impact Notification (FPIN) was released in Broadcom FOS v9.0. It is available on Brocade Gen6 and Gen7 switches. This feature enables the switch to detect issues on a fabric, such as congestion or physical link issues, and then notify the affected devices that have registered for these notifications. FPIN functions in a similar mechanism to RSCN. RSCN enables the fabric to send notifications to devices when a device they are zoned to is going offline. The devices that receive these notifications can then proactively take steps such as path failover rather than having to react to a path being down. FPIN provides a means to notify devices of link or other issues with a connection to a fabric or a path through it. For both RSCN and FPIN, a device must register with fabric services to receive these notifications. The new Brocade Gen7 hardware can send hardware or software signal notifications. Gen6 can only send software notifications. Both the hardware and software notifications require FOS v9.0 on the switches. Hardware signals can be sent from the switch to the adapter in the device. The adapter can then decide what to do about the notification. Software signals are sent higher up in the Fibre-Channel stack, and the adapter driver would then decide how to handle the notification. One advantage of notifications in hardware is reaction time - the adapter can process the notifications and react more quickly than the driver can. Another is that the hardware-based notification is a fibre-channel primitive. This means that even if buffer credits are depleted the signal can still be delivered to the device on the other end of the link. Primitives are not frames, so they do not need buffer credits to be sent. The software-layer signal is an ELS frame, so it can be affected by buffer credit depletion and other link congestion. Whether the signal sent is hardware or software, how the devices handle the notifications is up to the vendor of the adapter. Some may log the notification, some may take action. The action that an adapter takes is also vendor specific. FPIN can alert devices about these events: - End Device Congestion - Device Link Integrity (CRC) - Frame Drops If FPIN is enabled, these events are still monitored via MAPS. Enabling FPIN won't change your existing MAPS configuration for the above events. With FPIN, notifications are sent to the affected devices that register for them. How the devices handle the notification is vendor specific. They may just log the event, or they may take other steps such as starting link recovery or slowing traffic on a congested link and re-routing out an un-congested port. As a last resort, the device may shut down a troublesome link. Some vendors that support FPIN today are: - Linux Multi-Path in RHEL 8.2 - Emulex - supports Congestion and Link Integrity notifications on Linux - Marvell - will register for FPIN and log the notifications; these could be used as a source of log data for troubleshooting - AIX - will register for Link Integrity and Congestion notifications We expect that more HBA and storage controller vendors will add support for FPIN in the future. One use case for FPIN: if a switch detects congestion on an ISL or a path between devices, it could potentially notify the device sending data, so that device could try sending data down another path without waiting for timeouts and path failover to happen.
A common cause of congestion occurs when two devices are zoned together with a speed mismatch. In these cases, the faster device can throttle back and send data at a slower rate to the slower device. Some caveats here are that this would be vendor specific for storage systems or host adapters, and in the case of throttling data rates, this would only work on the host side, unless a storage system could selectively throttle depending on the destination address. Another use case involves link integrity issues. If a link is accumulating CRCs or Invalid Transmission Words (ITWs), the physical link has a faulty component. A fibre-channel cable can be bad in only one direction, so it is possible that the device at one end of a link is not aware of any issues. The Link Integrity FPIN will notify the host adapter if a path is compromised. The adapter can then determine whether it should try another path by having the multi-path driver fail over. This would happen at the hardware level, long before the problem bubbled up to the software layer. One final note: remember that an FPIN can be sent from any device that supports it. Potentially the storage, the host or the switch can share this information, and if they were all to have the capability to re-route data based on these notifications, the SAN is that much closer to an autonomous, self-healing SAN that routes data around blockages as best it can.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649439.65/warc/CC-MAIN-20230604025306-20230604055306-00069.warc.gz
CC-MAIN-2023-23
4,956
25
https://guides.gamepressure.com/humankind/guide.asp?ID=60519
code
On this page of our Humankind guide you'll learn more about outposts and claiming territory. Outposts are necessary to gain control of territory - the map in the game is divided into tiles/hexes, which are grouped together into territories (regions). These areas are the most basic administrative unit you'll control. A territory's borders remain unchanged the entire game! They'll never be expanded or reduced. They are created at the very beginning of the game and stay the same right until the end. The player can only influence the territories under their control. Any unit can build an outpost. When selecting a tile, you'll see how much food and industry the outpost will be collecting. Although it might not be a full-fledged city, it's worthwhile to make sure the outpost collects plenty of both. Building each outpost costs Influence - the cost will grow depending on the number of outposts already built and the outpost's distance from your cities. The example above shows that the outpost already has 5 Population thanks to the food collected from nearby tiles (14 Food in this case), and 12 Industry. This is a good location to build a new city in, but it'll cost 510 Influence because 2 other cities already exist. Population grows faster with high food production, while high Industry allows you to finish building the outpost sooner. Outposts can evolve into cities (by using Influence; the cost grows depending on how many cities you're controlling at the moment) or be added to already existing cities. Actions available in outposts (not connected to a city) Each outpost has several available actions which bring you benefits. Outposts don't have industrial or administrative facilities, so all available actions cost Influence. Sometimes it's a better option, as these actions have an immediate effect. Cities have to spend Industry and time to build such objects. Therefore it's a good idea to build (or rather buy) all available objects before connecting the territory to a city. Sometimes outposts are built exclusively for strategic purposes. As we can see in the picture above, this area won't support a new city very well. The cost of doing so in Influence is very high. However, around this outpost there are several luxury resources and a yet undiscovered strategic resource. You can build several objects here for a negligible cost in Influence. In the later stages of the game you'll have to annex areas like this one to seize resources necessary for new units and buildings. At the cost of Influence, the following actions can be taken in outposts: - Create Tribe - available only in the Neolithic era. For a few Influence points and a single Population point you can create a new Tribe unit. - Outpost relocation - useful especially if you want to take a more defensive position. Outposts can be relocated, but administrative centers can't be moved. The outpost turns into an administrative center after it's connected to any city for the first time. - Harbor - after researching a specific technology, you can place a Harbor district on any tile containing a lake or a sea, but only one such object is available in a region. Emblematic districts that can be built on water can also be bought with Influence (this doesn't apply to most other special districts, which are built on land). - Extractors and Artisans Quarter - these special objects can be placed on strategic and luxury resources. You don't need to connect territories to a city. 
Just place those buildings on the map, and your entire empire will benefit from them. - Forests - in the later stages of the game you can immediately grow forests on the territory's tiles by spending Influence. It's useful for lowering pollution or preparing the terrain for Makers Quarters.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057227.73/warc/CC-MAIN-20210921191451-20210921221451-00432.warc.gz
CC-MAIN-2021-39
3,764
14
http://stackoverflow.com/questions/12270605/groovy-sql-error-this-resultset-is-closed
code
I am using Groovy's Sql object to perform queries on a Postgres DB. The queries are being executed as follows: List<Map> results = sql.rows("select * from my_table") List<Map> result2 = sql.rows("select * from my_second_table") I have a Groovy method that performs two queries and then does some processing to loop through the data to make a different dataset; however, on some occasions I receive a Postgres exception "This ResultSet is closed" error. Having searched, I originally thought it might be to do with the issue here: SQLException: This ResultSet is closed (running multiple queries and trying to access the data from the resultsets after the fact) - however, we only seem to get the exception under quite high load, which suggests that it isn't as simple as the first dataset being closed on executing the second query, as if this were the case I would expect it to happen consistently. Can anyone shed any light on how Groovy's Sql object handles these situations, or suggest what might be going wrong?
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164032243/warc/CC-MAIN-20131204133352-00054-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,006
6
https://github.com/bioidiap/bob.db.wine
code
Wine Database Interface for Bob To install this package -- alone or together with other packages of Bob -- please read the Installation Instructions. For Bob to be able to work properly, some dependent packages are required to be installed. Please make sure that you have read the Dependencies for your operating system. For further documentation on this package, please read the Stable Version or the Latest Version of the documentation. For a list of tutorials on this or the other packages of Bob, or information on submitting issues, asking questions and starting discussions, please visit its website.
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257832939.48/warc/CC-MAIN-20160723071032-00249-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
606
3
https://www.voicemag.uk/tag/carolinecalloway
code
4 December 2019 Caroline Calloway: a constant source of criticism, Cambridge University graduate, and so-called “scammer”. I’ll take a look at why the anti-Calloway criticism could be undeserved, and suggest she might be more savvy aesthete than scheming charlatan. 15 January 2018 A rundown of my favourite inspirational artists of all time, from Renaissance to Instagram fame.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711200.6/warc/CC-MAIN-20221207153419-20221207183419-00773.warc.gz
CC-MAIN-2022-49
384
4
https://community.rachio.com/t/cant-find-moisture-levels-app-and-browser/4492
code
I can no longer find the moisture levels for my yard, either in the app or via the browser interface. My configuration is as follows: - Rachio version 1 (been running for 9 months) - 5 Zones - App version 2.6.0 - Fixed interval scheduling (every 3 days) with Smart Cycle and Weather Intelligence enabled. - Just changed my weather station
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672431.45/warc/CC-MAIN-20191016235542-20191017023042-00070.warc.gz
CC-MAIN-2019-43
351
7
https://opensource-heroes.com/r/twitter/rsc
code
Reasonable Scala compiler (also known as Rsc) is an experimental Scala compiler focused on compilation speed. This project is developed by the Language Tools team at Twitter. Rsc is not a fork, but a reimplementation of the Scala compiler. We believe that a performance-oriented rewrite will provide a unique perspective on compilation costs introduced by various Scala features and idioms - something that is currently very hard to quantify in existing compilers. With Rsc, our mission is to complement official compilers and assist with their evolution through our experiments. We are aiming to discover actionable insight into Scala compiler architecture and language design that will help compiler developers at Lightbend and EPFL to optimize their compilers for the benefit of the entire Scala community. - Dramatically improve Scala compilation performance - Study compilation time overhead of various Scala features - Identify a subset of Scala that can be compiled with reasonable speed - Facilitate knowledge transfer to other Scala compilers - Full backward compatibility (consider Lightbend Scala instead) - New language features (consider Dotty and Typelevel Scala instead) - Improvements to the type system (consider Dotty and Typelevel Scala instead) - Runtime performance (will be addressed independently) - We expanded the supported subset of Scala and are now using Twitter Util in our experiments. - We implemented support for loading dependencies based on the SemanticDB format provided by Scalameta. - Our prototype outliner can compute signatures of public and protected definitions and save them in the ScalaSignature format that enables interoperability with the Scala compiler. - In the future, we will proceed with development according to our roadmap. Our project is inspired by the work of Martin Odersky, Grzegorz Kossakowski and Denys Shabalin. Martin inspiringly showed that in this day and age it is still possible to write a Scala compiler from scratch. Greg unexpectedly demonstrated that compiling Scala can be blazingly fast. Denys shockingly revealed that Scala apps can have instant startup time.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00633.warc.gz
CC-MAIN-2024-10
2,133
16
https://www.consulting.amiq.com/2021/03/05/ofc-open-source-framework-for-co-emulation-using-pynq/
code
This article is a summary of the paper "Open-Source Framework for Co-Emulation using PYNQ", which was presented at the DVCon U.S. 2021 Conference. Table of Contents - What is co-emulation? - What is OFC? - Basic Concepts - User Integration - DVCon U.S. 2021 Experience
What is co-emulation? Accelerating verification through co-emulation consists of translating not only the DUT, but also part of the drivers and monitors of the testbench, from software onto an FPGA board. By doing so, the receiving and processing of stimulus by the DUT becomes much faster thanks to the parallelism of the hardware system. By translating the DUT from the simulated testbench into the FPGA board, all interactions with it must be redefined, as the testbench no longer has a direct connection to the DUT. Therefore, to accelerate verification through co-emulation it is necessary to create and place synthesizable drivers and monitors into the FPGA to handle the direct connection to the DUT. These modules are further called HDL components. On the other hand, the testbench still requires drivers and monitors to handle transactions between the verification environment and the hardware platform. These are further called HVL components.
What is OFC? Due to the lack of accessible solutions for co-emulation verification, we chose to develop a modular, open-source framework meant to provide an accessible way to achieve hardware integration within a testbench: the Open-source Framework for Co-emulation (OFC). Usually, the hardware platform used for co-emulation contains an FPGA and a processor acting as an FPGA controller. This architecture requires two connections: - One between the host machine, containing the testbench, and the FPGA controller - One between the FPGA controller and the FPGA logic The OFC provides one component for each of the required connections.
Figure 1. OFC Overview
For achieving a co-emulation environment the OFC SV-Python connector and the OFC Python-FPGA connector should be used together. However, you can also use them separately: - The OFC SV-Python module can connect your UVM-SV testbench to a Python environment - The OFC Python-FPGA module can connect your HDL design to a Python environment The OFC framework provides most of the components necessary to achieve a co-emulation environment.
Figure 2. OFC Architecture
The OFC framework was tested on the Pynq hardware platform, where the FPGA controller is called the Processing System (PS) side and the FPGA is called the Programmable Logic (PL) side.
Figure 3. Class Diagram of the OFC Framework
Layer 1: Host – Verification Environment
The SystemVerilog testbench for co-emulation should do what a simulation testbench does, without connecting directly to the DUT. The OFC framework provides an HVL driver (OFC Driver) which sends stimuli to the DUT through a proxy component that manages the connection with the hardware platform. This component is called the OFC Server Connector, and it plays the role of the client in the TCP connection with the PS side of the Pynq board. For more information about the socket communication used, check out the Non-Blocking Socket Communication in SystemVerilog Using DPI-C article. All components in the testbench should be independent of clock cycles and timing constraints, as the clock is generated in the FPGA, not simulated in the testbench. Only the drivers and monitors have to change, assuming that the rest of the testbench does not depend on time constraints. 
Layer 2: PYNQ – Processing System
On the PS side of the Pynq board the framework provides the OFC Python Server, which manages the requests from the testbench. The server can make use of the OFC FPGA Connector to send stimuli to the DUT, in case the goal is to achieve a co-emulation environment, or it can process the received requests locally if the goal is just to connect the testbench to the Python environment. The OFC FPGA Connector uses the Pynq API provided by Xilinx to program the PL side of the Pynq board and to transfer data to and from the DUT. In order to transfer data, DMA modules are used.
Layer 3: PYNQ – Programmable Logic
Figure 4. PL side overview
The hardware side is responsible for transferring items received from the software to the DUT and transferring the results back to the software for processing. To accomplish this connection, the DMA IP is used. This IP is provided by Xilinx and can be integrated through the Vivado Design Suite. Based on your preferences, you may choose a different FPGA board and/or a different software tool for programming the FPGA logic. The DMA communicates with the emulated design through the HDL components. The HDL components implement two interfaces: one for communicating with the DMA and one for communicating with the DUT. The DMA interfaces used to communicate with the programmable logic side are based on the widely adopted AXI4-stream protocol. Therefore the HDL components contain an AXI4-stream slave interface for receiving from the DMA (drivers) or an AXI4-stream master interface for transferring to the DMA (monitors). There are four steps to integrate the OFC framework within an existing testbench.
Step 1: Replacing the original drivers with the OFC Driver
You can replace the usual UVM drivers with the OFC driver using the UVM factory override. The only thing that is necessary for the OFC Driver to work as a "plug and play" component is making sure that the items have a pack function, as the driver uses this function to convert items into messages. Items may have their fields registered with UVM, or a custom pack function can be defined.
Step 2: Creating a specific OFC Monitor
Creating your own OFC Monitor is necessary, as all items are received within the OFC Server Connector and there is no way for the framework to know where to send those items. The OFC framework has no knowledge of your verification environment architecture. The OFC Monitor receives string items from the OFC Server Connector, which should be converted into UVM items and sent through the appropriate analysis ports.
Step 3: Computing the response within the Python Server
The messages from the testbench, sent through the OFC Server Connector, will be received by the OFC Python Server. The server will split the messages into a list of "string" items and will pass them to a function called "compute_response()" that you need to define. This function has been left to the user's responsibility in order to offer flexibility regarding the timing for sending stimuli to the DUT, or even the possibility of not sending stimuli to the DUT at all (in case the goal is not a co-emulation environment).
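To make Step 3 concrete before the requirement list that follows, here is a minimal Python sketch of what a compute_response() hook could look like for a co-emulation run. This is a guess at the shape of the user hook, not code from the OFC repository: the bitstream name, the DMA instance name, and the comma-separated-integer item encoding are all assumptions; only the Overlay/allocate/DMA calls are the standard Pynq API.

import numpy as np
from pynq import Overlay, allocate

# Placeholder names: the bitstream file and the DMA instance name depend
# entirely on your Vivado block design.
overlay = Overlay("ofc_design.bit")
dma = overlay.axi_dma_0

def compute_response(items):
    """Called by the OFC Python Server with stimulus items as strings.

    Decode each item, stream it to the DUT through the DMA, read the
    result back, and return one response string per item.
    """
    responses = []
    for item in items:
        # Assumed encoding: comma-separated 32-bit words per item, and a
        # DUT that returns as many words as it receives.
        words = np.array([int(w) for w in item.split(",")], dtype=np.uint32)
        in_buf = allocate(shape=words.shape, dtype=np.uint32)
        out_buf = allocate(shape=words.shape, dtype=np.uint32)
        in_buf[:] = words
        dma.sendchannel.transfer(in_buf)   # consumed by the HDL driver
        dma.recvchannel.transfer(out_buf)  # filled by the HDL monitor
        dma.sendchannel.wait()
        dma.recvchannel.wait()
        responses.append(",".join(str(w) for w in out_buf))
        in_buf.freebuffer()
        out_buf.freebuffer()
    return responses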
For a co-emulation environment, this function should: - Convert the "string" items into items that will be used by the HDL drivers - Send the converted items to the DUT through the OFC Python-FPGA component - Receive results from the DUT through the OFC Python-FPGA component - Return the processed items as a list of strings For other purposes, when co-emulation is not the goal, only the fourth point should be considered.
Step 4: Creating HDL Components
The fourth and final step is implementing the HDL Components within the Programmable Logic side. These components are drivers and monitors that can be timed, as opposed to the rest of the testbench. The HDL Components should be linked to the proper DMA module, so that they can be accessed from the OFC FPGA Connector. Debugging the programmable logic side of the hardware platform can be quite difficult without any waveforms to analyze. One way to debug the FPGA design is to use the ILA IP core available in the Vivado Design Suite. For more information on how to create and configure the ILA core you can check out this article from RealDigital. A co-emulation environment implies splitting drivers and monitors into timed and untimed components and connecting them. The connection proposed through the OFC framework is based on client-server communication and the usage of the DMA. The speed-up is dependent on several factors, among which: - The amount of logic that is translated into the hardware platform - This is the reason why co-emulation has a greater speed-up than co-simulation (which requires translating only the DUT into the FPGA logic). - Better performance will be achieved for complex designs; therefore you should consider using a co-emulation solution when verifying complex designs. - The number of DMA accesses - The number of DMA accesses should be limited to the minimum necessary if you want to achieve better performance. - To reduce the number of DMA accesses you can send multiple stimuli within the same DMA access, but keep in mind that FPGA resources are also affected. - Context switches - Having fewer context switches between send and receive operations can also lead to an increase in performance. - You can manage the context switches from within the OFC Python Server or from within the OFC Server Connector. The OFC is a modular framework with two components that can be used together or separately: The first component is the OFC SV-Python, which can be used for communication between a SystemVerilog testbench and a Python environment. The second component is the OFC Python-FPGA, which can be used for communication between Python and the Programmable Logic side of the Pynq board. The two components can also be used together to achieve a co-emulation environment. The framework requires little integration effort, can be easily updated, and is open-source and tool agnostic.
DVCon U.S. 2021 Experience
The OFC was presented on the 3rd of March during the Potpourri Session led by Josh Rensch from Semifore. The paper was awarded 3rd place for "Best Paper". During the Q&A session the following questions were answered: Does the TCP connection between the UVM-SV testbench and the Python Server support TLS or any other way to ensure network security? Would you recommend that this framework only be used behind a firewall? In our setup we used a private network. The library itself does not provide this kind of security, so you have to handle that at a different layer. 
The library provides only the connection itself and the mechanism for sending and receiving data. If we would like to use 2 OFC ports with different HDL implementations, and each testbench port operates at a different, non-multiple frequency, how do we solve synchronization between those ports? So far we only managed to set it up with a single connection using OFC. This could be a next step for us. The time we had to develop this framework only allowed us to set it up for one connection. With further development, the OFC could allow the user to connect multiple times. As a side note, maybe it could also be used if you instantiate your RTL multiple times and run environments in parallel. What about debugging and/or waveform generation support? How are DUT probes done? It's definitely harder to debug a co-emulation environment compared to a simulation one. The OFC provides debug messages which include sent and monitored items, but it doesn't offer support for visualizing the actual waveforms. In order to debug using waveforms you can use custom IPs that you attach to your design. Those IPs have an internal memory that can record the bus-level wiggling of the pins. In our development we used an ILA core within the Vivado tool to visualize the waveforms. This method is limited by the FPGA and the tools you are using. What is the maximum frequency (transactions per second) that could be achieved with the OFC solution in the case of sequentially repeating send-then-get-data-back? If we define frequency as the time from generating one sequence item in the testbench, going through the RTL in the co-emulation environment and then being sent back to the testbench, we haven't looked into that too much because it implies too much overhead. The more you buffer the items before sending them to the other environment (SV-Python-FPGA), the more speed-up you get. If you run transactions one by one then you will actually be slower than the simulation. So that's not a real use case. So if you hit a bug or issue, would a reproducible replay in a full simulation be an option? It depends on the type of testing that you have. The more random the test cases are in normal simulation, the less likely you are to reproduce the problem. That's because when you use a co-emulation environment you have to take into account some trade-offs regarding the precision of controlling the bus level. When you implement the HDL drivers which actually communicate with the DUT, you don't have the same control you have with UVM drivers.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818105.48/warc/CC-MAIN-20240422082202-20240422112202-00238.warc.gz
CC-MAIN-2024-18
12,733
79
http://panopticgame.com/
code
Panoptic is an asymmetrical local multiplayer VR game between an HTC Vive player and a regular PC player. The Vive player controls the powerful Overseer, while the PC player controls the Challenger, a citizen rebelling against the Overseer's oppressive reign. The Challenger must try to avoid the Overseer's scorching gaze, and to do so, they must often hide in plain sight among the other citizens. Participate in the alpha phase of development by downloading the demo! Feel free to email us to provide some feedback on the game, give us suggestions for new features, report bugs, or just to say hello!
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592778.82/warc/CC-MAIN-20180721203722-20180721223722-00361.warc.gz
CC-MAIN-2018-30
605
4
https://throwexceptions.com/media-player-android-mediaplayer-error-codes-throwexceptions.html
code
I am struggling with getting a live radio stream to work on Android. I am using the MediaPlayer class and just setting the URL and playing it. It works great for the most part, but after 5-30 minutes it inevitably dies. On 2.1 phones (more specifically a Hero) I get this log output: W/MediaPlayer( 7919): info/warning (1, 26) I/MediaPlayer( 7919): Info (1,26) I/MediaStreamService( 7919): mPlayer info code:1 extra:26 E/MediaPlayer( 7919): error (1, -11) E/MediaPlayer( 7919): Error (1,-11) MediaStreamService is my Service containing the MediaPlayer the output is coming from. On 2.2 phones I never get the OnInfoListener callback; the stream just dies. But I do see this in the logcat: E/HTTPStream( 1020): recv failed, errno = 11 (Try again) E/HTTPDataSource( 1020): retrying connection failed It seems to work flawlessly on my 1.6 phone despite the constant logcat spam of E/PlayerDriver( 82): Invalid percentage value <big growing number> My question is, what do the error codes (1, 26) mean? What is causing my MediaPlayer to crash? Is the 2.1 problem at all related to the 2.2 problem? Edit: I was looking in the source code for OnInfoListener and found public static final int MEDIA_INFO_UNKNOWN = 1; I'm not sure exactly what it means, and can't find where these extras are kept either. Any insight into what MEDIA_INFO_UNKNOWN means, or what this 26 stands for, would be very appreciated. My question is, what do the error codes (1, 26) mean? - 26 means PVMFInfoErrorHandlingStart, just an error indication. The error is -11, which means PVMFErrTimeout. You can check out the definition files here. Maybe RDS data? Do you set your buffer size manually? To start the playback, start() must be called. After start() returns successfully, the MediaPlayer object is in the Started state. isPlaying() can be called to test whether the MediaPlayer object is in the Started state. While in the Started state, the internal player engine calls a user-supplied OnBufferingUpdateListener.onBufferingUpdate() callback method if an OnBufferingUpdateListener has been registered beforehand via setOnBufferingUpdateListener(OnBufferingUpdateListener). This callback allows applications to keep track of the buffering status while streaming audio/video. Calling start() has no effect on a MediaPlayer object that is already in the Started state. Maybe it's part of the response.
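For quick reference while debugging, the codes mentioned in this thread can be collected into a small lookup table. A sketch in Python rather than Java, purely to tabulate the thread's findings; the names come from the answers above (OpenCORE/PacketVideo era, Android 2.x), not from an official mapping.

# (what, extra) pairs as delivered to OnInfoListener / OnErrorListener.
PV_CODES = {
    (1, 26): "MEDIA_INFO_UNKNOWN / PVMFInfoErrorHandlingStart "
             "(the player has started error handling)",
    (1, -11): "MEDIA_ERROR_UNKNOWN / PVMFErrTimeout "
              "(receive or connection timeout, e.g. recv errno 11)",
}

def describe(what: int, extra: int) -> str:
    """Translate a MediaPlayer (what, extra) pair into a readable hint."""
    return PV_CODES.get((what, extra), f"unmapped code ({what}, {extra})")

print(describe(1, 26))   # the info callback seen on 2.1
print(describe(1, -11))  # the error that kills the stream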
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209999.57/warc/CC-MAIN-20200923050545-20200923080545-00386.warc.gz
CC-MAIN-2020-40
2,392
24
https://askdev.io/questions/69/why-exists-no-64-bit-linux-firefox-construct
code
Why is there no 64-bit Linux Firefox build? It appears that I need to build my own 64-bit Firefox for Linux, as Mozilla will not support it until Firefox 4. (Edit: Yes, I can run the 32-bit version, but I'm trying to keep my system free from 32-bit cruft and libraries etc., and all the plug-ins worked fine in the unofficial 3.0.11 64-bit builds.) Update: No longer relevant, as Mozilla provides 64-bit builds, but they do not show them on the download pages of mozilla.org, just on the FTP site, as mentioned in one of the answers below. Builds of Firefox straight from Mozilla really only matter for Windows and the like, which has no central repository of software. For us Linux users, most of the work is done by our trusted package managers, as is the case for Firefox too. Arch Linux provides a 64-bit build, Ubuntu does, and so on. Just check your repo! I thought the browser on my laptop (running a 64-bit install of Ubuntu 9.04) was a 64-bit version; is that not the case? From the 'About Firefox' pop-up: Mozilla/5.0 (X11; U; Linux x86_64; en-GB; rv:1.9.0.11) Gecko/2009060309 Ubuntu/9.04 (jaunty) Firefox/3.0.11
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301264.36/warc/CC-MAIN-20220119064554-20220119094554-00190.warc.gz
CC-MAIN-2022-05
1,244
9
http://www.avsforum.com/forum/26-home-theater-computers/985451-ati-radeon-hd-3450-3650-available-now.html
code
I just got my 3450 from Newegg and popped it into one of my HTPCs. This particular HTPC had problems playing VC-1 content from HD-DVDs. The video stream is not re-encoded, and it is muxed into a WMV container. The audio stream is re-encoded to WMA 5.1. My previous video card in the system was a GeForce 6800 AGP (this is an ASRock 939 mobo with AGP and PCI-E). The CPU is an AMD X2 3800. The drivers that came with the CD are version 7.11, but I downloaded and used the 8.1 drivers from AMD's site instead. The packaging included a DVI-to-VGA adapter and a component video dongle. It did not include the special DVI-to-HDMI adapter that allows for HDMI audio. I did have a spare ATI adapter lying around, and the HDMI audio does work on this card. As far as the VC-1 content goes, playback has improved. However, I still noted high CPU usage (nearing 100%). I'm using the default Windows WMV Video Decoder DMO to decode, and I am on Windows XP MCE 2005. But, like I said, there was a noticeable difference, with only minor skipping. The previous video card made these videos unwatchable. I was still curious about the high CPU usage and the minor skipping. I couldn't find the checkboxes to enable WMV acceleration, so I wasn't sure if this was actually working. So, I just tried overclocking my processor a bit, from 2.0 GHz to 2.15 GHz, and now playback is smooth. CPU usage on this high-bitrate VC-1 content is now at around 70% for me. So, it still looks like you need some CPU power to play back this content. As far as high-bitrate h.264 is concerned, I didn't have a problem with this stuff with the old vid card. I also rip the h.264 elementary stream from HD-DVDs and just mux it into a new container (MKV for h.264 files). I do not re-encode them. Those have always played fine using the CoreAVC decoder for me, and they still play fine with the new vid card. I'll try 1080i MPEG-2 content later tonight. And, if I'm not too tired, I'll pop it into my Vista machine that currently has an HD 2400 and is sucking big time at 1080i content!
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320338.89/warc/CC-MAIN-20170624203022-20170624223022-00651.warc.gz
CC-MAIN-2017-26
2,038
6
https://www.freelancer.de/work/triston-software/
code
I need someone to make me a software application (.EXE). The user will enter his name and email and press "play"; the wheel will turn and he will see what he wins. Data will be saved in an Excel file (.xls). - In that software I need a dashboard for the admin to add prizes to the wheel - Admin can export all data to an xls file (Excel); the data will contain every user and the prize won *Deadline May 29th* We are a fintech company specializing in providing software solutions for financial brokers. We are looking to partner up with a programmer/computer scientist who is experienced in data retrieval. This project has been chosen as a test project to get an idea of the programmer's fluency with data retrieval. You need a twitter mockup; one exists. The software has to detect the audio programs. The software has a red recording button; each time it is pressed a file is auto-generated. You can re-click on listed files and continue to record. Start your bid with READ IT; we can give another project after that and add a bonus too. Hello, I have Android VPN software and I want to make some changes: change the notification service, change the location of the ad to comply with the ad policy (a few days ago I received an AdMob policy warning), and change the logo and package to make the software new. And I want to change the look a little bit. I am a corrosion engineer working in the energy sector. I am looking to develop software to serve as a database for inspection and monitoring data, conduct basic data analysis, and visualize the data trends and insights on various plant drawings. I need you to develop some software for me. I would like this software to be developed using Python, Android. I need a freelance software developer for my existing academic project; looking for a dev who is familiar with Android, Kotlin, Retrofit and SQLite3. Backend server: a Python web app (Flask), interfaced with a Raspberry Pi which triggers the click. Our company software uses MySQL as its database. Currently we manually enter the registration data from JotForms into our company software. I would like an application that takes the data from JotForms using its API/HTTP GET ([Login to view URL]) and allows us to compare the data side by side with the patients' information in our MySQL 5.5. Make software, to be used with open source billing software (to be discussed), using custom-made invoice templates with fillable fields in PDF, to be filled out on our server. The templates are made by us with LibreOffice. Server and PBX (Asterisk) are on different machines and networks, with different IP addresses! All is Linux - no MS! You must Looking for complete sales funnel software with options for payment gateway integrations & other features. - Landing page builder - Payment gateway - Bump offer - Upsell options - Email integrations - Affiliate module - Site building I want to create the following software: 1. it should run on Linux, preferably Debian stable + LEMP 2. monitor precious metal prices from these websites: a) [Login to view URL] b) [Login to view URL] 3. save them to a database 4. display those prices on a web server. 5. I need source code with comments. Please recommend a programming • Writing well designed, testable, and efficient code by using best software development practices • Creating websites using industry standard HTML/CSS practices • Assuring websites have a proper and correct view and responsiveness across all major browsers and devices I am looking for a simple AI program for property management. 
I want to track ...traffic that comes through a particular real estate location. Ideally the software should be able to avoid double counting of the same people and differentiate random movement (birds, trees) from actual foot traffic. Heat mapping is also a requirement; the software We have an idea for voice-message social media, similar to twitter. Serious software engineers only! Please ignore the cost which is written on the job; it may be discussed with the worker! Our company creates simplified technology for small businesses. Our software empowers them to have everything they need in one place to connect with their customers in the digital era. Our cloud-based suite of mobile and web tools transforms the way our customers run their business, on the go, and everywhere. Summary: We are rapidly expanding and we need a video creator to create and edit a demo video for a software feature. The demo will be screen recorded (see example attached); then we will need to add an intro and outro and some basic overlaid animation to explain what the software does. The software is a doctor appointment system that can be deployed on websites in the form of a chatbot ...I am looking to go into the estate agency market and am looking for a competent developer(s) to create software for me. This will be fully loaded CRM software with all components. The basic idea of the software is this: app/web based software in which I can interact with landlords and students all within the same application. Compatible with all Hello, We are looking for someone with: - an overall good level in software engineering - really good communication skills - a really good internet connection and audio quality to talk - elasticsearch knowledge (bonus) - curiosity and willingness to learn DM me for more info ...free text editors for coders It is about 10 free text editors for coders. Here is the list. Please arrange the below list in order of the features and functionality of each and write a short paragraph with some pros and cons for each. Visual Studio Code [Login to view URL] Microsoft Expression Web 4 Some folks would argue that we WORK DESCRIPTION: Writing various protocols and reports related to process and software validations will be required. Additionally, updating old revisions of these document types will be required. Other various technical and quality related tasks will be required. DELIVERY: The deliverables assigned will be broken out into hourly expectations for Hello. We need a simple web app that allows users to do some text entry and then creates a chart based on the user text entry. The mockup is attached. When submitting a proposal, please include a general idea of the entire project cost and what languages or software you would use. Thanks. I want a writer to write me a short report on a cryptographic algorithm with a few questions and answers. I also need to do some practical work in this report, with screenshots of each process. Need only small but reasonable answers, let's say between 150 and 200 words each; there are 6 questions in total, two of which include the practical question. Time is very limited, let's say only 3 to 4 hours, but it is an easy report... 
hello, Need someone who is fluent in using hubspot crm and can teach in spanish. Preferred someone from latin american countries who can speak both spanish and english fluently and can teach over zoom or another virtual software ...have a dial/knob from 2010 = [Login to view URL] The dial/knob from 2010 has software that no longer works on windows 10. The software for windows XP used to be able to send keystrokes depending on what program you were actively in. People make custom software that can talk to this dial/knob. [Login to view URL] https://github Hello! We are looking for a developer that can build us a very simple program for us to use internally for our warehouse. This software should be used for registering cargo being received into our warehouse and for us to take pictures of it to show it is in good condition. Please see a scope of the project and functionality required: 1) To be used ...and accounting for APLYFT's three entities (USA & MENA) - Perform invoicing and periodic entries of the company's transactions and assets on Quickbooks or Xero accounting software - Prepare and submit regular financial reports (monthly/quarterly/yearly) in accordance with Generally Accepted Accounting Principles (GAAP) - Assist in the completion of ...and, on the backend, some knowledge of Postgres/Redis/Kubernetes is desired to allow graceful working with development systems. Responsibilities: Work with the manager of software engineering and the UI Designer to implement new interfaces and maintain existing interfaces. Build and maintain reusable React components. Build and maintain systems up to the We have ASTTP VoIP billing software and FreeSWITCH running on Asterisk. We have many hack attempts every day when we open port 5060. If we keep port 5060 closed and only open other ports such as 2070, then the hack attempts drop to 1%. We want to keep port 5060 open, so we need someone to help us keep safe from the hackers on port 5060. Hi, I'm Sha. I need to develop a software application using an 8085 microprocessor system based on my title and simulate it using the MySIM85 simulation software. Needed criteria: all of the simulated IO devices in MySIM85 must be applied, namely the LED, switch/button, keypad, seven-segment display and interrupts. Need to use 2 PPIs, seven-segment display, switches/buttons * Monitor and predict sales and marketing trends * Measure how well marketing strategies and programs are working * Develop and evaluate ways to collect data including opinion polls, questionnaires, and surveys * Collect data on market conditions, competitors and consumers * Convert findings and complex data into tables, written reports and graphs clients can understand * Create reports and share ... ...Beautifully designed classical UI/UX 4. API must be in the Laravel framework, and the Android part using Android Studio. The project will be done in collaboration with our in-house software engineer. Only bid if you can design captivating user interfaces and the best UI/UX, and you must also know how to use GitHub. Send samples of work in the same line you I need you to develop some software for me. I would like this software to be developed for Windows and to be installed on a PC. Knowledge of developing bookkeeping/accounting software is a must. 
The software will be used to keep the daily records of currency sale orders, currency purchase orders and transactions for a money transfer/money We have a project to develop/customise autonomous or semi-autonomous robots to clean high-rise buildings. We will require developers with knowledge/experience in ROS, C++, Python. We require a programmer with Java knowledge and web and API development experience, for back-office development and custom development to unify sales made through different marketplaces. Knowledge of the APIs of Amazon, MercadoLibre, Linio, Walmart (new site), Claroshop, Elektra and Shopify is needed, with sales registered in an ERP. Preferably a Spanish speaker. Ideally living in Mexico/CDMX. We are looking for a developer who can deliver the Vicidial agent and admin responsive UI theme for us, which includes initial setup and code. The code has to be done on our IP/server where Vicidial is installed. Once the code is handed over to us, our team must be able to customize it. Dear, we are looking for professionals to join our cold-calling team. You are required to call 1000 call points. For every call in which you talk, connect and discuss, we pay you. You are required to do your best to convert sales deals. A bonus is paid extra once the deal is converted. Looking for professionals who are very patient; English communication skills are mandatory. Revert ... - Job brief We are looking for a Full Stack Developer to produce scalable software solutions. You'll be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. As a Full Stack Developer, you should be comfortable around both front-end and back-end coding languages, development frameworks ...CSGO online esports tournaments. We are looking for a development company that can really do the entire job, walk us through the process, and show us how to effectively administer this software. It is an online community where video game players who play online can play with each other for cash prizes; it serves as a place for these people to play together. There needs ...attachments you will find: - The full description of the functionality of the platform - A photo of the dashboard (ignore the design, just the idea) - A sample from similar software to give you an idea of the design (it can be simplified to a great degree) The total idea: (details can be found in the attachments) Team members with different permissions
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390442.29/warc/CC-MAIN-20200526015239-20200526045239-00042.warc.gz
CC-MAIN-2020-24
12,679
38
https://www.r-bloggers.com/2014/02/twitter-now-supports-database-persistence/
code
For a long time now I’ve wanted to add the ability for storing data from twitteR into a RDBMS. In the past I’ve done things by concatenating new results onto old results which simply becomes unwieldy. I know that many people have doctored up their own solutions for this but it seemed useful to have it baked in. Unfortunately I never had the time or energy to do this so the idea languished. But then dplyr happened – it provides some revolutionary tools for interacting with data stored in a database backend. I figured I’d kill two birds with one stone by finally implementing this project which in turn would give me a lot of data to play with. This is all checked in to master on github. This is still a work in progress, so please let me know if you have any comments, particularly as regards making it more seamless to use. First, some basics: - While theoretically any DBI based backend will work, currently only RMySQL and RSQLite are supported. - The only types of data able to be persisted are tweets (status) objects and user objects. Granted, this likely covers 95%+ of use cases. - Data can be retrieved as either a list of the appropriate object or as a data.frame representing the table. Only the entire table will be retrieved – my expectation is that it will be simpler for users to interact with data via things like dplyr. To continue, suppose we have a list of tweets we want to persist. Simply call store_tweets_db() with your list and they’ll be persisted into your database. By default they will be persisted to the table tweets but you can change this with the table_name argument. Finally, to retrieve your tweets from the database the function is load_tweets_db(). By default this will return a list of the appropriate object, although by specifying as.data.frame=TRUE the result will be a data.frame mirroring the actual table. Much like store_tweets_db() there is a table_name argument. Note that for user data there is a mirror set of functions, store_users_db() and load_users_db(), and the default table name is users.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141171077.4/warc/CC-MAIN-20201124025131-20201124055131-00162.warc.gz
CC-MAIN-2020-50
2,062
9
https://community.ibm.com/community/user/imwuc/people/alain-chabrier1
code
My expertise is in managing the development of software for making better decisions. I have been driving the redesign of OPL, the creation of ODM Enterprise (now DOC), the inception of Cognitive Optimization (now Modeling Assistant) and the integration of DO in Watson Studio and Watson Machine Learning. My deep expertise is in Operations Research and its business applications, but I have good working knowledge of most technologies from the Machine Learning and Business Analytics areas. Over 20 years I have applied these techniques to develop software to solve a wide range of complex problems in a wide range of industries. I have done real research (conferences, publications and patents), I have done real development and coding (a long list of languages and methodologies), I have managed multi-language, multi-culture, geographically spread development teams, and I have been responsible for product strategies. I am now a Senior Technical Staff Member (STSM) at IBM and my role is to lead the technical strategy for DO products.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00324.warc.gz
CC-MAIN-2022-33
1,032
6
https://devforum.zoom.us/t/publishing-a-windows-desktop-app-with-zoom-integration-api-sdk/24125
code
We want to integrate Zoom functionality into our existing WPF desktop app. Our app will consume the Zoom API (JWT; we don't think we need OAuth) just to get meeting details, and the SDK C# wrapper to do meeting functionality (start meeting, start self-presenting, etc.). - If we want to share our application with this new Zoom integration with a customer, does it need to be published in the Marketplace or can we manage our own distribution? - Right now we are using developer accounts (keys and secrets) for the integration. If our customers buy a Zoom enterprise account, can they pass their account-level access (email, key and secret for API and SDK) to our app (with the proper security in place, of course) so the Zoom integration works? SDK C# wrapper v4.6.21666.0428 Zoom API 2.0.0
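For the API half of the question, the legacy JWT-app flow is straightforward: sign {iss: <api key>, exp: <expiry>} with the API secret (HS256) and send the token as a Bearer header. A hedged sketch in Python rather than C#, just to show the token shape; the key/secret values are placeholders, and a customer-supplied enterprise key/secret could be plugged into the same two variables.

import time

import jwt       # PyJWT package
import requests

API_KEY = "your-api-key"        # placeholder credentials
API_SECRET = "your-api-secret"

def zoom_token(ttl_seconds: int = 60) -> str:
    # Zoom JWT apps sign {iss, exp} with the API secret using HS256.
    payload = {"iss": API_KEY, "exp": int(time.time()) + ttl_seconds}
    return jwt.encode(payload, API_SECRET, algorithm="HS256")

def get_meeting(meeting_id: str) -> dict:
    resp = requests.get(
        f"https://api.zoom.us/v2/meetings/{meeting_id}",
        headers={"Authorization": f"Bearer {zoom_token()}"},
    )
    resp.raise_for_status()
    return resp.json()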
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500304.90/warc/CC-MAIN-20230206051215-20230206081215-00036.warc.gz
CC-MAIN-2023-06
787
6
https://www.npmjs.org/browse/keyword/broadcast
code
Browse by Keyword: "broadcast"
broadcast - Broadcast channels for Node
broadcast-pi - Broadcasting client and server, allowing the Raspberry Pi (or any other device) to make itself known to registered clients. On OSX it'll send a notification to the notification center
cloudjs - A network distributed event system, similar to the Node JS standard event system. A process pool, where objects can be added and run at a periodic interval with predefined functions. An auto-balancing system that migrates objects in the process pool from one running instance to another, based on the load of each instance.
hubby - A high feature, distributed, low latency and secure message exchange bus based on redis and mongodb
js-cast - Voice streaming from a client browser using nodejs
js-cast-example - Voice streaming from a client browser using an icecast server
parse-push - Library to support sending push notifications using Parse
plexy - Create multiple duplex object streams that read and write through a single text stream.
primus-broadcast - Adds socket.io style broadcast functionality to primus.
primus-rooms-adapter - In-memory default adapter for primus-rooms, use as abstract class
redis-broadcast - Write redis commands to a set of redises efficiently
rundfunk - Rundfunk is a zero-conf distributed event emitter
simplebroadcast - Simple broadcasting and repeating of JSON messages using net sockets
streammachine - Flexible live streaming for broadcast audio.
twtcst - Twitter Broadcast
windpush - Broadcast messages to all active online users on a site
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00074-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
1,518
17
https://www.fi.freelancer.com/projects/php/fix-magento-hdfc-payment-gateway/
code
Need to fix an amount-tampering issue by validating/comparing the hash keys. Please find attached screenshots for more information. 12 freelancers have bid on this job. Hi, I have read the description and seen the attachment. I have more than 6 years of experience in web development. Please share the complete details. I am available to look at this task now. Regards, Aprajita
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825123.5/warc/CC-MAIN-20181214001053-20181214022553-00353.warc.gz
CC-MAIN-2018-51
453
3
http://www.sqaforums.com/forums/hp-alm-mercury-quality-center/147748-what-maximum-number-attachments-object-can-have-alm-11-a.html
code
What is the maximum number of attachments an object can have in ALM 11? I know a limit can be set for the size of attachments for emails, but what is the maximum total number of attachments that, let's say, a defect can have? Or a folder in the Test Lab? Can this limit be configured per project, or through the Site Administration configuration for all projects? Does anyone know the parameter used to set this limit, if it is available? Last edited by JustHuman; 01-09-2013 at 02:06 AM. Reason: spelling correction. I had the same question 2 years back; HP Presales replied that there were no limitations. Personally I don't know of any settings in the site admin for the projects that would limit you to x or y number of attachments... Sorry... It is an interesting question though, and it relates to another question I've heard before: what is a recommended project size on the DB server?
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189686.56/warc/CC-MAIN-20170322212949-00534-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
920
5
http://crypto.stackexchange.com/questions/tagged/mac+commitments
code
What type of hash functions provide non-malleability of hash digests? I want to use a hash function for commitments. I don't want an attacker to be able to construct a commitment related to a previously published (but still unopened) commitment. A simple deterministic commitment ... Feb 2 '12 at 15:09
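The standard remedy for the concern raised in the question is to randomize the commitment rather than hash the message deterministically. A minimal sketch of that textbook construction follows; it commits with a fresh nonce, which is a generic illustration rather than anything proposed in the thread itself.

```python
# Minimal sketch of a randomized hash commitment: commit = H(nonce || msg).
# A fresh 256-bit nonce gives hiding and makes it hard to derive a related
# commitment from a published, unopened one; binding rests on the collision
# resistance of SHA-256. A sketch, not a vetted construction.
import hashlib
import os

def commit(message: bytes):
    nonce = os.urandom(32)                 # fresh randomness per commitment
    digest = hashlib.sha256(nonce + message).hexdigest()
    return digest, nonce                   # publish digest, keep nonce secret

def verify(digest: str, nonce: bytes, message: bytes) -> bool:
    # Opening: reveal (nonce, message) and recompute the digest.
    return hashlib.sha256(nonce + message).hexdigest() == digest

c, n = commit(b"my bid: 42")
assert verify(c, n, b"my bid: 42")
```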
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507446943.4/warc/CC-MAIN-20141017005726-00139-ip-10-16-133-185.ec2.internal.warc.gz
CC-MAIN-2014-42
2,188
52
https://findergirls-no.monster/cat2/5060-dating-at-two-different.php
code
Note: You can create multiple pools on a server, but you can't add databases from different servers into the same pool. For vCore-based resource limits for elastic pools, see vCore-based resource limits, elastic pools. You can also make a set of changes to your elastic pool and submit all changes at the same time. I live in Modiin near my son, with a daughter in Manchester. I enjoy bridge, museums, concerts. AlexanderSasha, 45 y. Unless the treaty otherwise provides, a reservation may be withdrawn at any time, and the consent of a State which has accepted the reservation is not required for its withdrawal. Unless the treaty otherwise provides, an objection to a reservation may be withdrawn at any time. Unless the treaty otherwise provides, or it is otherwise agreed: (a) the withdrawal of a reservation becomes operative in relation to another contracting State only when notice of it has been received by that State; (b) the withdrawal of an objection to a reservation becomes operative only when notice of it has been received by the State which formulated the reservation. The rest will be found out in a conversation. Guy Advice: High School to College Dating!
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401598891.71/warc/CC-MAIN-20200928073028-20200928103028-00783.warc.gz
CC-MAIN-2020-40
1,300
6
https://windowsreport.com/download-teams-add-in-outlook/
code
- Microsoft Teams is used by millions around the world and is well known as one of the best collaboration software around.
- In many instances, Teams is also integrated with other useful software like business tools and email clients. That's the case with another favourite, Outlook.
- Downloading the Teams add-in for Outlook can be done directly in Microsoft Outlook.
- For more useful articles that can help you improve your daily workflow, check out our Microsoft Teams hub or the Outlook hub.
Microsoft Teams is a great tool if you want to collaborate with your coworkers online. And what better time to do that than now, when COVID-19 is stranding us at our home desks? Now, if you have installed Microsoft Teams and either Office 2010, Office 2013, or Office 2016, the Microsoft Teams add-in for Outlook should already be installed in your Outlook. You will see the Microsoft Teams Meeting add-in on the Outlook Calendar ribbon. If you don't find it, follow the easy steps below. How can I download the Microsoft Teams add-in for Outlook?
- In Outlook, click on the Home button, then click on the Add-ins button.
- This will open another window featuring all add-ins that you can install. Click on All, then type Microsoft Teams in the Search field.
- To make sure you installed it, go to File, then Manage Add-ins or Manage Apps, and you will find it in the list.
Make sure you have Office 2013 or Office 2016 and Exchange 2013 or Exchange 2016, otherwise you won't be able to install any add-ins. Can't see the Microsoft Teams add-in for Outlook? If you can't install the Microsoft Teams add-in for Outlook, here are a few easy steps to fix that:
- Make sure that you have Administrator permissions on the computer on which you are trying to install the Microsoft Teams add-in for Outlook.
- Use Outlook as a normal user, not as an Administrator.
- There might be a problem with the Microsoft Teams desktop client, so the first step would be to close it and open it again.
- Sign out of the Microsoft Teams desktop client and sign in again.
- Are you sure that your Outlook is up to date? Make sure you install all the updates for the Outlook desktop client.
- Restart the Outlook desktop client.
- Please check the Outlook user account name for any spaces. Microsoft says that it's a known issue and that it will be fixed soon.
- Are you sure that you have Office 2013 or Office 2016 installed? If you're not sure, here's how you can check: open any Office application like Word or Excel, then click on File, then Account. You will see the product information on the right and can check whether you qualify to install add-ins. (A quick registry check is also sketched after this article.)
How do you add a meeting in Microsoft Teams? You might want to use the FindTime add-in for Outlook because it helps a lot with Teams. If you have a big meeting (more than 3 or 4 people), FindTime helps you reach an agreement on the perfect time for the meeting. After you agree on a time, FindTime automatically sends the meeting invite to all participants. After you select an Online meeting option in FindTime, the add-in will schedule a meeting on either Skype for Business or Microsoft Teams, whichever is set as the default online meeting tool. If you have any questions, please don't hesitate to drop them in the comments section below.
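For readers who prefer to verify the add-in state programmatically, here is a small Windows-only sketch. The registry path and the add-in name "TeamsAddin.FastConnect" are assumptions based on common troubleshooting guides, not something stated in this article, so double-check them on your machine.

```python
# Sketch (Windows only): check whether the Teams Meeting add-in is registered
# for Outlook and enabled for the current user. The key path and the name
# "TeamsAddin.FastConnect" are assumptions; LoadBehavior == 3 means
# "load at startup" for COM add-ins.
import winreg

KEY = r"Software\Microsoft\Office\Outlook\Addins\TeamsAddin.FastConnect"

try:
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as k:
        load_behavior, _ = winreg.QueryValueEx(k, "LoadBehavior")
        print("Add-in found, LoadBehavior =", load_behavior)
        if load_behavior != 3:
            print("Add-in is registered but not set to load at startup.")
except FileNotFoundError:
    print("Teams Meeting add-in is not registered for this user.")
```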
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178358976.37/warc/CC-MAIN-20210227144626-20210227174626-00311.warc.gz
CC-MAIN-2021-10
3,312
26
https://the-marine-detective.myshopify.com/collections/gift-certificates/products/gift-certificate-for-specific-product
code
Personalized Gift Message. If you have a specific canvas in mind and want to alert the giftee that it is coming: - Add that canvas to your shopping cart. - Add this item to your shopping cart. I will then get in touch about personalizing the certificate with the appropriate image and the message of your choosing. :) Any questions? Please contact me at this link.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737645.2/warc/CC-MAIN-20200808110257-20200808140257-00354.warc.gz
CC-MAIN-2020-34
366
6
https://onlinepitstop.com/2023/03/26/github-updates-security-protocol-for-operations-over-ssh/
code
The repository hosting service GitHub has announced it is replacing its existing RSA SSH host key with a new one as a precautionary measure after discovering the key was momentarily exposed in a public repository. “We immediately acted to contain the exposure and began investigating to understand the root cause and impact,” GitHub wrote in an article published on its site earlier today. “We have now completed the key replacement, and users will see the change propagate over the next thirty minutes.” The company explained the change was made to protect users’ Git operations over SSH, particularly from potential threat actors attempting to impersonate GitHub or eavesdrop on their actions. At the same time, they clarified the move did not stem from a compromise of GitHub systems or customer information. “Instead, the exposure was the result of what we believe to be an inadvertent publishing of private information,” wrote GitHub CSO, Mike Hanley. “We have no reason to believe that the exposed key was abused and took this action out of an abundance of caution.” SSH host keys are tokens used to authenticate the server and protect both the confidentiality and integrity of communication between the client and the server. “This key does not grant access to GitHub’s infrastructure or customer data,” said Hanley. “This change only impacts Git operations over SSH using RSA. Web traffic to GitHub.com and HTTPS Git operations are not affected.” Further, the company added that only GitHub.com’s RSA SSH key was replaced, while no change is required for ECDSA or Ed25519 users. The replacement of the GitHub RSA SSH host key comes a couple of months after the company confirmed threat actors stole three digital certificates used for its Desktop and Atom applications.
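One way a client can confirm it is talking to the real GitHub after a rotation like this is to compare the host-key fingerprint the server presents against the fingerprint GitHub publishes. A minimal Python sketch of that check follows; the pinned value is a placeholder, not GitHub's actual fingerprint, and this is an illustration rather than anything GitHub prescribes.

```python
# Sketch: fetch github.com's current SSH host key and compare its SHA256
# fingerprint against a pinned value taken from GitHub's published key list.
# PINNED below is a placeholder, not the real fingerprint.
import base64
import hashlib
import paramiko

PINNED = "SHA256:REPLACE_WITH_PUBLISHED_FINGERPRINT"  # placeholder

transport = paramiko.Transport(("github.com", 22))
transport.connect()                      # performs the SSH key exchange
key = transport.get_remote_server_key()
digest = hashlib.sha256(key.asbytes()).digest()
# OpenSSH prints SHA256 fingerprints base64-encoded without '=' padding.
fingerprint = "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
transport.close()

print("server reports", key.get_name(), fingerprint)
if fingerprint != PINNED:
    raise SystemExit("Host key does not match the pinned fingerprint!")
```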
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649177.24/warc/CC-MAIN-20230603064842-20230603094842-00203.warc.gz
CC-MAIN-2023-23
1,962
10
https://minnesota-staging.pure.elsevier.com/en/publications/robust-volume-minimization-based-matrix-factorization-via-alterna
code
This paper focuses on volume minimization (VolMin)-based structured matrix factorization (SMF), which factors a data matrix into a full-column-rank basis and a coefficient matrix whose columns reside in the unit simplex. The VolMin criterion achieves this goal by finding a minimum-volume enclosing convex hull of the data. Recent works showed that VolMin guarantees the identifiability of the factor matrices under mild and realistic conditions, which suit many applications in signal processing and machine learning. However, existing VolMin algorithms are sensitive to outliers or lack efficiency in dealing with volume-associated cost functions. In this work, we propose a new VolMin-based matrix factorization criterion and algorithm that take outliers into consideration. The proposed algorithm detects outliers and suppresses them automatically, and it does so in an algorithmically very simple way. Simulations are used to showcase the effectiveness of the proposed algorithm.
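Since the model constrains each coefficient column to the unit simplex, alternating algorithms in this family typically need a Euclidean projection onto the simplex as a building block. The sketch below shows the standard sort-based projection; it is a generic component under that assumption, not the specific algorithm proposed in the paper.

```python
# Sketch of the sort-based Euclidean projection onto the unit simplex
# {x : x >= 0, sum(x) = 1}, a common subroutine in simplex-constrained
# factorization. Generic building block, not the paper's method.
import numpy as np

def project_simplex(v: np.ndarray) -> np.ndarray:
    u = np.sort(v)[::-1]                       # sort descending
    css = np.cumsum(u)
    # largest index rho with u_rho + (1 - cumsum_rho) / rho > 0 (1-indexed)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

x = project_simplex(np.array([0.8, 0.6, -0.2]))
print(x, x.sum())                              # nonnegative, sums to 1
```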
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703549416.62/warc/CC-MAIN-20210124141945-20210124171945-00411.warc.gz
CC-MAIN-2021-04
987
1
https://www.opengl.org/discussion_boards/archive/index.php/t-158841.html
code
View Full Version : glutSpecialUpFunc 05-28-2000, 06:13 AM When porting a game from Linux to Mac, I ran into trouble because the game used glutSpecialUpFunc() to do stuff when a key is released. The implementation of OpenGL in the 1.0 SDK ships GLUT 3.2, not 3.7, and 3.7 has glutSpecialUpFunc() while 3.2 does not. Has anyone compiled GLUT 3.7 for Mac, or will I have to work around this some other way? 05-30-2000, 02:22 AM I think you'll probably have to find a way around it, but you should probably ask around on the OpenGL developer's mailing list (I think it's on the OpenGL page under http://www.apple.com/developer); they might know.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917119120.22/warc/CC-MAIN-20170423031159-00293-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
751
6
https://search400.techtarget.com/answer/Access-the-system-threshold-value-via-an-RPG-program
code
Is there a way I can access the iSeries system threshold value (i.e., 90%) via an RPG program? I've looked at your info concerning QSTGLOWLMT, but I need the actual threshold limit as defined via SST. Not as far as I'm aware. There are APIs to retrieve thresholds by ASP and the system value QSTGLOWLMT; however, the value entered in SST is not available through them. Changes to QSTGLOWLMT and/or the ASP thresholds (through DST) do not impact the value of the threshold limit in SST. In some cases, changes to the threshold in SST can change the system value QSTGLOWLMT (as in V4R4). I've never been able to find where the value is actually stored.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145533.1/warc/CC-MAIN-20200221142006-20200221172006-00116.warc.gz
CC-MAIN-2020-10
946
6
https://encelo.github.io/page2/
code
If you follow the project on GitHub you might have noticed a big development slowdown during the summer. I blame it on a combination of excessive heat and fatigue that led to a general lack of motivation and perseverance. I spent nearly two months on a big task this spring: custom memory allocators. They can be useful in different scenarios to alleviate the performance cost of allocating and deallocating memory. Today I upgraded my Arch Linux workstation with pacman, as I usually do every day, and a little surprise was waiting for me. After a long time in [testing], Mesa 20 moved to the [extra] repository, ready to be installed.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00522.warc.gz
CC-MAIN-2024-10
642
5
https://www.arxiv-vanity.com/papers/2008.04968/
code
Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene

Learning on 3D scene-based point clouds has received extensive attention thanks to its promising applications in many fields, and well-annotated, multi-source datasets can catalyze the development of such data-driven approaches. To facilitate research in this area, we present a richly annotated 3D point cloud dataset for multiple outdoor scene understanding tasks, together with an effective learning framework for its hierarchical segmentation task. The dataset was generated via photogrammetric processing of unmanned aerial vehicle (UAV) images of the National University of Singapore (NUS) campus and has been point-wise annotated with both hierarchical and instance-based labels. Based on it, we formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across hierarchies. To solve this problem, a two-stage method comprising multi-task (MT) learning and hierarchical ensemble (HE) with consistency consideration is proposed. Experimental results demonstrate the superiority of the proposed method and the potential advantages of our hierarchical annotations. In addition, we benchmark results of semantic and instance segmentation, accessible online at https://3d.dataset.site together with the dataset and all source code.

Due to the significant progress of 3D sensing technologies in recent years, point clouds from multiple sources have become affordable and easy to acquire. Reconstruction of outdoor scenes from point clouds has also received increasing interest; it is critical for areas such as urban planning and management (Carozza et al., 2014), vehicle navigation (Cappelle et al., 2012), virtual reality (Cirulis and Brigmanis, 2013), and simulation (Manyoky et al., 2014). As the fundamental step of reconstruction, scene understanding on point cloud data can be greatly facilitated by recent advances in machine learning, especially deep learning. Large and well-annotated datasets play a leading role in the successful application of these techniques. Although dozens of 3D scene-based point cloud datasets have been proposed (Silberman et al., 2012; Firman, 2016; Roynard et al., 2018; Armeni et al., 2017; Serna et al., 2014; Vallet et al., 2015; Hackel et al., 2017; Dai et al., 2017; Behley et al., 2019), the majority of them are not a perfect fit for outdoor scene reconstruction. Firstly, the datasets face various limitations stemming from their sources, which are either RGB-D images (Silberman et al., 2012; Firman, 2016; Dai et al., 2017; Armeni et al., 2017) or light detection and ranging (LiDAR) based mobile laser scanning (MLS) (Roynard et al., 2018; Serna et al., 2014; Vallet et al., 2015; Behley et al., 2019) and terrestrial laser scanning (TLS) (Hackel et al., 2017). RGB-D data can be easily obtained and processed via a mature pipeline (Dai et al., 2017), but its limited measurement range tends to prevent it from capturing outdoor environments. LiDAR scanners are good at capturing large-scale scenes but usually suffer from unavoidable severe occlusions and expensive equipment costs (Li et al., 2016). Secondly, the annotations of extant datasets are not targeted at outdoor scene reconstruction. Following the well-established data format CityGML (Kolbe et al., 2005), a standard urban model should contain fine structures of buildings and other artifacts.
However, such fineness is not present in current annotations, which mainly cover indoor objects or traffic elements (Behley et al., 2019). Thus, it is necessary to build new datasets aimed at supporting scene-understanding-based automatic reconstruction. In this work, we construct a photogrammetry point cloud dataset, Campus3D, from UAV imagery over the 1.58 km² National University of Singapore (NUS) campus. Thanks to recent progress in Structure from Motion (SfM), Multi-View Stereo (MVS) and UAV techniques (Frahm et al., 2010; Wu et al., 2011), photogrammetry point clouds are easily obtained from unmanned aerial vehicle (UAV) imagery. This type of data source can fulfill the requirements of scene reconstruction because UAV imagery is robust to occlusion and can effectively capture a holistic view of the scene. Inspired by the multiple levels of detail (LoD) in CityGML (Kolbe et al., 2005) for reconstruction, we annotate this dataset point-wise with hierarchical multi-labels for both semantic and instance segmentation. For a data point, an example annotation is construction → building → wall/roof. The fine-grained labels (e.g., wall/roof) match LoD2 for reconstruction (Verdie et al., 2015), where the building model is detailed down to roof and wall structures. For further study, we organize the labels as a tree with five hierarchical (granularity) levels, displayed in Figure 3. In the end, the whole dataset presents a holistic view of the scene, containing 0.94 billion points with 2,530 modality-based instances, 24 semantic classes and 6 pattern-based regions, as displayed in Figure 1. The proposed dataset with hierarchical annotations is expected to promote better outdoor scene understanding. Based on the constructed label tree, we formulate a hierarchical learning (HL) problem for semantic segmentation, and propose a new metric for consistency across granularity levels named Consistency Rate (CR). Besides accuracy, prediction consistency is an important issue for HL. For example, if a point is predicted as "roof" at the fine-grained level, the results at the corresponding coarser levels must be "building" and "construction" (see Figure 3); otherwise the hierarchical relationship is violated. Taking this into consideration, we introduce a two-stage method consisting of multi-task (MT) learning and hierarchical ensemble (HE). The MT stage, based on neural models, jointly learns semantic labeling at different granularity levels. The post-processing HE stage rigidly ensures the results fulfill hierarchical consistency by choosing the most likely root-to-leaf path of the label tree. The results on CR and on the segmentation tasks show how well the HL method exploits the hierarchical relationship, and suggest that hierarchical annotations can assist segmentation. Furthermore, we establish benchmarks on the dataset by applying deep models to two classic scene understanding tasks: (1) semantic segmentation and (2) instance segmentation. For computational efficiency and compatibility with point-based models, we investigate data preprocessing techniques and two sampling methods: (1) random block sampling (RBS) and (2) random centered K nearest neighbor (RC-KNN) sampling. RBS is chosen as the unified sampling method for the benchmarks in view of its better performance. We summarize the contributions of this paper as follows. First, a photogrammetry point cloud dataset with hierarchical and instance-based annotations is presented.
Moreover, an accessible workflow for the acquisition and annotation is provided. Second, an effective two-stage method for the formulated hierarchical semantic segmentation on point clouds is proposed; experimental results demonstrate the superiority of our HL methods over the non-HL baseline in terms of both hierarchical consistency and segmentation performance. Third, we propose new benchmarks for semantic segmentation and instance segmentation on 3D point clouds, and release the source code (https://github.com/shinke-li/Campus3D) of the training/evaluation framework as well as the dataset. These benchmarks are standardized with unified data preprocessing techniques and sampling methods.

Table 1. Comparison between Campus3D and widely-used 3D scene datasets (cells lost in extraction are marked "-"):
| Dataset | Data Source Type | Area/Length | Scene Type | Point # | Designed 3D Task |
| ScanNet (Dai et al., 2017) | RGB-D | - | Indoor | - | Instance semantic segmentation; CAD model retrieval |
| S3DIS (Armeni et al., 2016) | RGB-D | - | Indoor | - | - |
| Matterport3D (Chang et al., 2017) | RGB-D | - | Indoor | - | Instance semantic segmentation |
| SemanticKITTI (Behley et al., 2019) | Velodyne HDL-64E (MLS) | 39.2 km | Outdoor | 4,549M | Semantic segmentation; semantic scene completion |
| Semantic3D (Hackel et al., 2017) | Terrestrial Laser Scanner (TLS) | - | Outdoor | 4,000M | Semantic segmentation |
| Paris-Lille-3D (Roynard et al., 2018) | Velodyne HDL-32E (MLS) | 1940 m | Outdoor | 143.1M | Instance semantic segmentation |
| Campus3D (Ours) | UAV photogrammetry | 1.58 km² | Outdoor | 937.1M | Hierarchical semantic segmentation |

2. Related Work In this section, we first review the existing 3D scene-based point cloud datasets and compare our dataset with them in detail. Based on the application area, we divide the existing datasets into two categories: (1) indoor scene datasets and (2) outdoor scene datasets. A summary comparison between Campus3D and the widely-used datasets is provided by Table 1, and additional comparisons in terms of annotation are provided in the supplementary document. Second, we briefly review the existing deep neural models for point cloud segmentation. Indoor Dataset. Indoor scene understanding is an active research area, and many datasets have been reported in the literature (Armeni et al., 2017, 2016; Chang et al., 2017; Dai et al., 2017; Hua et al., 2016; Xiao et al., 2013; Silberman and Fergus, 2011; Silberman et al., 2012; Song et al., 2015). These datasets are usually generated from RGB-D images, which can be captured by cheap sensors (e.g., Microsoft Kinect). Early datasets NYUv2 (Silberman et al., 2012), SUN3D (Xiao et al., 2013) and SUN RGB-D (Song et al., 2015) were annotated by either polygons in 2D (Silberman et al., 2012; Xiao et al., 2013; Song et al., 2015) or bounding boxes in 3D (Song et al., 2015), which provide limited information for 3D scene reconstruction (e.g., semantic segmentation, surface reconstruction, meshes, etc.). Recently released indoor scene datasets (Armeni et al., 2016; Chang et al., 2017; Hua et al., 2016; Dai et al., 2017; Armeni et al., 2017) contain more information. For instance, ScanNet (Dai et al., 2017) supplies estimated camera parameters, surface segmentations, textured meshes and semantic segmentations; however, compared with the proposed photogrammetry dataset Campus3D, these datasets generated by RGB-D sensors have the limitations of short measurement range and sensitivity to the infrared spectrum of sunlight (Hackel et al., 2017). These natural limitations prevent RGB-D datasets from being applied to outdoor environment understanding. Outdoor Dataset.
Several outdoor scene 3D datasets (Hackel et al., 2017; Roynard et al., 2018; Serna et al., 2014; Vallet et al., 2015) have been released in recent years. These datasets are generated via either MLS (Roynard et al., 2018; Serna et al., 2014; Vallet et al., 2015; Behley et al., 2019) or TLS (Hackel et al., 2017). Points generated by LiDAR are the raw output of the laser scanner, and are of high quality and large scale. MLS point cloud datasets are usually annotated with rich traffic elements to push the frontier of the autonomous driving field. One notable MLS point cloud dataset is part of KITTI, which was constructed by Geiger et al. (Geiger et al., 2012, 2013) and generated from 6 hours of traffic scenarios. Based on it, a point cloud dataset, SemanticKITTI, was recently proposed for outdoor semantic scene understanding (Behley et al., 2019). However, unlike our dataset collected via UAV imagery, LiDAR devices suffer from occlusions and thus lack a holistic view of the scene. Deep Segmentation Model. Semantic segmentation and instance segmentation are the major scene understanding tasks relevant to reconstruction. Since the pioneering works PointNet and PointNet++ by Qi et al. (Qi et al., 2017a, b), point-based deep neural models have become widely studied in the point cloud segmentation field, as they can directly process point clouds. Categories of point-based deep learning models mainly include feature pooling models (Qi et al., 2017a, b; Zhao et al., 2019; Hu et al., 2019), convolution-based models (Li et al., 2018; Thomas et al., 2019; Wang et al., 2018a), graph-based models (Wang et al., 2019c; Landrieu and Simonovsky, 2018; Wang et al., 2019a) and attention-based models (Xie et al., 2018; Yang et al., 2019). Although most of these models were proposed for a single task, they can be used in multi-task learning; examples are PointNet++ in ASIS (Wang et al., 2019b) and PointNet in JSIS3D (Pham et al., 2019), which jointly learn instance embedding and semantic labeling in one structure. To jointly learn semantic labeling at multiple granularity levels, we propose a modification of point-based models suited to multi-task learning. PointNet++ is applied as the backbone owing to its general structure and high compatibility with multi-task learning (Wang et al., 2019b, 2018b). 3. Campus3D Dataset We note that Campus3D is accessible online; the data can be downloaded there, and online interactive visualization and a GitHub link to the source code are provided. 3.1. Data Acquisition The point cloud of the Campus3D dataset was constructed by the technique of Structure from Motion with Multi-View Stereovision (SfM-MVS) (Westoby et al., 2012) on UAV images. Here we briefly describe our workflow for obtaining the dataset. Firstly, we flew drones over all areas and took images with exact GPS coordinates. The imagery was captured with DJI Phantom 4 Pro drones equipped with 1-inch, 20 MP CMOS sensor cameras, and the drone flight planning mobile apps used were DJI GS Pro and Pix4D Capture. The points were then generated by photogrammetric processing and registration from the captured images and coordinates, using Pix4D as the SfM-MVS software. In the image collection process, we applied two types of flight routing strategies for UAV photography: (1) grid and (2) circular, both available in the drone flight planning apps. For relatively high buildings, we applied multiple circular flights at different height levels.
During UAV image capture, the drones were flown only when the weather guaranteed a clear view. More detailed settings can be found in the supplementary document. 3.2. Data Annotation To present more complicated geometric features, we annotated the point cloud with point-wise labels. In general, there are two approaches to 3D point-wise annotation: (1) label pre-segmented clusters in 3D, or (2) label projected 2D images and assign the labels to 3D points. Our strategy follows the second approach and performs a two-level 2D projection segmentation, which avoids the inherent error induced by pre-segmentation methods and the lack of detail in 2D projections at stationary angles. Initially, we divided the annotation tasks into hierarchical stages, from coarse-grained labels to fine-grained labels. In each stage, annotation was first done by 2D polygon partitions in three orthogonal view angles. To refine the details, the obtained 3D partitions were then pruned at user-defined rotation angles. All tasks were completed with the open-source software CloudCompare (CloudCompare) and its add-on functions. Multiple annotators were hired to perform the above labeling after taking training courses for several days. To ensure the accuracy and consistency of annotations, we divided the annotators into several groups that worked on labeling and verification, respectively, for each stage. We required that every point be labeled at least three times by different annotators and verified to an exact label. According to CityGML (Kolbe et al., 2005), objects in an urban scene are modeled at different granularity levels defined by the LoD, which can cope with applications at different scales. Motivated by this concept, the category labels used in Campus3D are constructed as a hierarchical structure with various granularity levels, displayed in Figure 3. The hierarchies of the structure work similarly to the LoDs. Each label is formed based on two criteria: (1) semantic attributes and (2) geometrical attributes, which may mutually assist each other in parsing the points into refined parts. For example, "roof" and "driving_road" both have planar structure and are hard to distinguish by geometric features, yet need to be separated due to their semantic difference and practical function. All labels are self-explanatory except the following, for which we provide explanations: (1) "unclassified" refers to unrecognized or over-sparse regions; instead of removing these points, this category is kept to preserve the completeness of the dataset; (2) "path&stair" is only for pedestrians, while "driving_road" is only for vehicles; (3) "artificial_landscape" refers to man-made landscape such as artificial pools, while "others" represents individual objects for which there are not enough instances to form a new category. All labels are defined rigidly for consistency of annotation. 3.3. Parsing and Statistics To label the raw point clouds, we propose a hierarchical parsing method that decomposes the data into individually labeled points, generated naturally by the hierarchical annotation of the previous section. The resulting Campus3D dataset can serve multiple tasks. We first divide the entire dataset into six identified regions: FASS, FOE, PGP, RA, UCC and YIH, according to their architecture styles and functions. A descriptive summary of the points in these six regions is given by Table 2.
Table 2 summarizes the six regions; only its header survived extraction: Region | Area | Mean height (m) | # of points | # of points per unit area.

Due to the hierarchical annotation strategy, the class labels of Campus3D are defined by a tree-like structure. Based on this structure, coarse-grained level data can be obtained simply by merging the data of its sub-classes, including all leaf nodes, which is flexible for multi-level tasks. For example, class "building" data can be obtained by merging "wall" and "roof" data. After labeling each point via the hierarchical class tree, we performed instance labeling for each countable class, which may benefit 3D model reconstruction and scene understanding. For instance, to boost the LoD of a building model, it is necessary to distinguish the various planar pieces of a roof. Figure 2 illustrates this parsing. We also note that more descriptive statistics of classes and instances are provided in the supplementary document.

3.4. Data Preprocessing To practically run machine learning algorithms on the data, we need to simplify the point cloud, with consideration of imbalanced density and processing efficiency. We therefore provide a reduced dataset derived from the original points, voxel-sampled from the original dataset with a sampling size of 0.15 m. The sampling thins the data points and also mitigates the imbalanced distribution of points among different regions and instances caused by variations in morphology. Moreover, the 0.15 m sample size still preserves the smallest objects on the whole campus. We term this dataset Campus3D-reduced. Note that all experimental studies, scene understanding tasks and benchmarks in this paper are run on Campus3D-reduced. Table 3 shows the training, validation and test splits; the splitting ensures that the training and test/validation sets contain all types of instances. The performance on the class "unclassified" is not included in the current study, following the convention in this arena (Li et al., 2018; Qi et al., 2017a, b).

Table 3. Dataset splits:
| Split | Training | Validation | Test |
| Region | FASS, YIH, RA, UCC | PGP | FOE |

4. Hierarchical Learning In order to learn from the hierarchical annotations of our dataset, we construct a five-level label tree, displayed in Figure 3, in which the labels of each hierarchy completely partition the entire dataset. Each point therefore carries five parallel semantic labels, and learning them can be considered a multi-label segmentation task. Compared with single-label learning, the key problem in hierarchical multi-label learning is how to leverage the relationship among hierarchies while keeping the hierarchical structure of the labels. We therefore propose a simple yet effective framework comprising a multi-task learning network and an ensemble process that maintains the hierarchical structure. Before the methodology, we first describe the problem and the performance metrics.

4.1. Problem and Metric Description Let $(\mathcal{C}, \preceq)$ represent the class hierarchy, where $\mathcal{C}$ is a set of classes and $\preceq$ is a partial order representing the superclass relationship: for any $c_i, c_j \in \mathcal{C}$, $c_i \preceq c_j$ if and only if $c_i$ is a superclass of $c_j$ or $c_i = c_j$. A data point with hierarchical annotation is denoted $(x, Y)$, where $Y \subseteq \mathcal{C}$ is a maximal chain of $(\mathcal{C}, \preceq)$. The problem with such labels is that the length of the label set is not coherent from point to point. To construct multi-labels of coherent length, we extend the definition of hierarchical learning by allowing duplication. We denote the set of all maximal elements of $\mathcal{C}$ by $\mathcal{C}_{\max}$ and the set of all minimal elements by $\mathcal{C}_{\min}$; note that both belong to $\mathcal{A}$, the set of all antichains of $(\mathcal{C}, \preceq)$.
We define a relationship comparing any two antichains, named the parent antichain. Definition 4.1 (Parent Antichain). For two distinct antichains $A_1, A_2 \in \mathcal{A}$, if every class in $A_2$ has a superclass (or equal class) in $A_1$, then $A_1$ is called a parent antichain of $A_2$. Then we can obtain a sequence of antichains between $\mathcal{C}_{\max}$ and $\mathcal{C}_{\min}$, namely $A_1, \ldots, A_L$ of length $L$, with $A_1 = \mathcal{C}_{\max}$, $A_L = \mathcal{C}_{\min}$, and each $A_l$ a parent antichain of $A_{l+1}$. Based on this sequence, a tree $\mathcal{T}$ can be constructed, displayed in Figure 3: the nodes in layer $l$ of the tree are associated with $A_l$, while the edges are defined by the partial-order relationship between classes. Now we define the hierarchical learning problem. Given a dataset $\mathcal{D} = \{(x_i, Y_i)\}_{i=1}^{N}$, where $N$ is the number of points and $Y_i = (y_i^{(1)}, \ldots, y_i^{(L)})$ with $y_i^{(l)} \in A_l$, the hierarchical learning problem is to learn a function $f: x \mapsto (\hat{y}^{(1)}, \ldots, \hat{y}^{(L)})$ from the hierarchically annotated dataset $\mathcal{D}$. Given an HL method, performance can be evaluated by conventional classification measurements such as accuracy, precision, recall, etc. However, these fail to take consistency into account, which is critical for HL: an HL algorithm may perform well on conventional measurements yet generate highly inconsistent results that violate the hierarchical relationship and are therefore meaningless. We thus propose a new measurement to quantitatively evaluate such consistency. Considering a solution (prediction) $\hat{Y} = (\hat{y}^{(1)}, \ldots, \hat{y}^{(L)})$, we first define a fully consistent (FC) solution in Definition 4.2. The set of all FC solutions, denoted $\mathcal{F}$, includes all paths from the root to the leaf nodes of tree $\mathcal{T}$. Based on it, we propose the consistency proportion (CP) to measure the consistency degree of a solution $\hat{Y}$; the CP value lies between 0 and 1, and a CP of one represents an FC solution. Then, for a set of solutions, the consistency rate (CR) is defined with a parameter $\delta$ giving the desired consistency level for each solution. Definition 4.2 (Fully Consistent). A solution $\hat{Y}$ is fully consistent (FC) if $\hat{y}^{(l)} \preceq \hat{y}^{(l+1)}$ for all $l = 1, \ldots, L-1$. Definition 4.3 (Consistency Proportion). The CP of $\hat{Y}$ is $\mathrm{CP}(\hat{Y}) = \frac{1}{L-1}\sum_{l=1}^{L-1} \mathbb{1}\left[\hat{y}^{(l)} \preceq \hat{y}^{(l+1)}\right]$, where $\mathbb{1}[\cdot]$ is 1 if its argument is true and 0 otherwise. Definition 4.4 (Consistency Rate). The CR at CP level $\delta$ for solutions $\{\hat{Y}_i\}_{i=1}^{N}$ is $\mathrm{CR}_\delta = \frac{1}{N}\sum_{i=1}^{N} \mathbb{1}\left[\mathrm{CP}(\hat{Y}_i) \ge \delta\right]$, where $\delta$ is a threshold parameter with $0 < \delta \le 1$. We propose a two-stage framework for HL (see Figure 4): (1) multi-task learning (MT) and (2) hierarchical ensemble (HE). Multi-task Learning (MT). The main structure of the MT network contains a shared encoder and multiple parallel decoders with classification heads. To practically perform MT, we utilize the feed-forward architecture of PointNet++ (Qi et al., 2017b). Specifically, a feature map of the point cloud is fed as input; the shared encoder encodes it into an embedding, which is then decoded in parallel by $L$ decoders, one per granularity level. Decoder $l$ computes the likelihood distribution over the classes of $A_l$ for each data point. The loss of the MT method is the sum of the losses of its branches. The prediction loss is the weighted average of the cross-entropy losses of the $L$ levels, $\mathcal{L}_{\mathrm{pred}} = \sum_{l=1}^{L} w_l\,\mathcal{L}^{(l)}_{\mathrm{CE}}$, where, for granularity level $l$, $\mathcal{L}^{(l)}_{\mathrm{CE}}$ and $w_l$ are the cross-entropy loss and its weight, respectively. The consistency loss serves as a regularization term that maintains the consistency structure of the predicted distributions, with per-level loss weights and the predicted likelihood distribution over each class set (antichain) $A_l$. By Definition 4.2, for an FC solution, $\hat{y}^{(l)}$ is a superclass of, or the same as, $\hat{y}^{(l+1)}$. This loss is the sum of losses over all parent-child pairs in tree $\mathcal{T}$, penalizing a child class whose prediction score exceeds its parent's score, so that consistency is preserved.
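To make the consistency metrics concrete, here is a minimal Python sketch of Definitions 4.3 and 4.4 as reconstructed above. The tiny `parent` map is a hypothetical two-branch stand-in for the Campus3D label tree, not the actual tree from Figure 3.

```python
# Sketch of the consistency metrics: a prediction (y1, ..., yL) from coarse
# to fine is checked pairwise against the label tree, CP is the fraction of
# consistent adjacent pairs, and CR at level delta is the fraction of points
# whose CP reaches delta.
parent = {"building": "construction", "wall": "building", "roof": "building",
          "natural": "ground", "grass": "natural"}   # hypothetical tree

def consistency_proportion(path):
    # path: labels from coarsest to finest granularity level.
    pairs = list(zip(path[:-1], path[1:]))
    ok = sum(parent.get(child) == coarse for coarse, child in pairs)
    return ok / len(pairs)

def consistency_rate(paths, delta=1.0):
    return sum(consistency_proportion(p) >= delta for p in paths) / len(paths)

preds = [("construction", "building", "roof"),   # fully consistent
         ("ground", "building", "wall")]         # violates levels 1 -> 2
print([consistency_proportion(p) for p in preds])  # [1.0, 0.5]
print(consistency_rate(preds, delta=1.0))          # 0.5
```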
To investigate the effectiveness of the consistency loss, an ablation variant of MT trained without the consistency-loss branch is also tested on hierarchical semantic segmentation. Hierarchical Ensemble (HE). The HE is a post-processing method for the initial predictions. It computes a weighted sum of likelihood scores over every root-to-leaf path of tree $\mathcal{T}$, and the path with the largest score is the final predicted solution: $\hat{Y} = \arg\max_{(y^{(1)},\ldots,y^{(L)}) \in \mathcal{F}} \sum_{l=1}^{L} w_l\, p^{(l)}(y^{(l)})$ (equation (7)). Note that solutions generated by HE are FC, so their CR (and CP) value is 1. For comparison, we also apply a multi-classifier (MC) method that does not leverage the mutual relationship across levels and instead trains an independent segmentation classifier for each granularity level; the classifiers are trained and evaluated separately, i.e., conventional segmentation is performed $L$ times on the dataset based on PointNet++. Furthermore, a variant of the proposed two-stage method, MC+HE, is also investigated, which uses HE to post-process the outputs of the MC. 4.3. Experimental Results Based on the class label tree given by Figure 3, we build five granularity levels ($L = 5$); they are given in the first and second columns of Table 5. PointNet++ (Qi et al., 2017b) is used as the backbone; the weight and threshold settings are detailed in the supplementary material. We use CR, intersection-over-union (IoU) and overall accuracy (OA) for performance analysis. Comparisons between different HL methods. After removing points with the ground-truth label "unclassified" (unlabeled), for each class the intersection and union of the predicted point set and the ground truth are generated; the IoU is computed as the ratio of the intersection cardinality to that of the union, and the OA as the proportion of correct predictions among all points. Test-set results are presented in Table 4 and Table 5. There are several observations: (1) the average IoU and OA of a granularity level decrease significantly from the coarsest level to the finest, indicating that the difficulty of the problem increases as the label instances become small and sparsely distributed; (2) the performance of MC+HE is better than that of MC alone in most cases; (3) overall, the HL methods (i.e., MT+HE, MT with and without the consistency loss, and MC+HE), which take hierarchical labels into account, perform better than MC, which ignores them. These observations demonstrate that hierarchical labels help and enhance performance. One possible reason for the better performance of the HL methods is that the inherent relations among the label layers provide additional geometric information for semantic segmentation. A visual illustration is given by Figure 5: MC semantic segmentation at a coarse level, without information from the other levels, wrongly recognizes "roof" as driving road (i.e., "road" or "not vehicle") or as natural ground ("natural"); these classes are geometrically similar but semantically different. We call this phenomenon geometric ambiguity: points with similar geometric features but significantly different semantic labels are wrongly classified into the same semantic class. As indicated by the MT results in Figure 5, hierarchical and multiple annotations can ameliorate this phenomenon.
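The HE step described above reduces to scoring root-to-leaf paths and taking the arg-max. A minimal Python sketch follows; the tree, weights and likelihood values are toy stand-ins, not the paper's learned outputs.

```python
# Sketch of the hierarchical ensemble: enumerate root-to-leaf paths of the
# label tree, score each by a weighted sum of the per-level predicted
# likelihoods, and return the arg-max path (fully consistent by construction).
paths = [("construction", "building", "roof"),
         ("construction", "building", "wall"),
         ("ground", "natural", "grass")]
weights = [1.0, 1.0, 1.0]  # hypothetical per-level weights

def hierarchical_ensemble(likelihoods, paths, weights):
    # likelihoods[l] maps each class at level l to its predicted probability.
    def score(path):
        return sum(w * likelihoods[l][c]
                   for l, (w, c) in enumerate(zip(weights, path)))
    return max(paths, key=score)

likelihoods = [{"construction": 0.6, "ground": 0.4},
               {"building": 0.5, "natural": 0.5},
               {"roof": 0.7, "wall": 0.2, "grass": 0.1}]
print(hierarchical_ensemble(likelihoods, paths, weights))
# -> ('construction', 'building', 'roof')
```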
For the coarse-level instance in Figure 5, points on a roof belonging to "construction" are easily recognized as "ground" by the MC, while the MT framework segments them correctly by leveraging information from the finer levels. Insights on Consistency Rate. As a metric evaluating the consistency of the hierarchical relationship, the CR results reveal that our framework leverages the mutual assistance among hierarchies. In Figure 6, the CR of MT is roughly 15% higher than that of MC, which ignores the hierarchical annotation; this suggests that MT learning can correct results at one level using features from other granularity levels. On the other hand, comparing the MC and MC+HE results in Tables 4 and 5, HE also boosts performance by forcibly maintaining the hierarchical relationship, while this boost is not significant from MT to MT+HE. The CR results quantitatively explain why HL methods can effectively address the geometric ambiguity discussed above. Effectiveness of the Consistency Loss. The performance difference between MT with and without the consistency loss in Figure 6 and Table 4 demonstrates the effectiveness of the proposed consistency loss: with it, MT significantly restrains hierarchical violations in the segmentation results, while the variant without it suffers a drop of around 10% in CR. Moreover, MT with the consistency loss also performs better in terms of OA (see Table 4). 5. Hierarchical Scene Understanding Tasks and Benchmarks In this section, we apply the HL framework for scene understanding and build benchmarks on two tasks: hierarchical semantic segmentation and instance segmentation. We first investigate the effectiveness of sampling methods for feature learning on large-scale point cloud datasets. 5.1. Sampling Methods For scene-based point cloud datasets, sampling is necessary for feature learning because of efficiency requirements and the fixed input size of models. Typical sampling strategies are uniform sampling and farthest point sampling (FPS) (Qi et al., 2017a); we do not apply FPS here due to its computational inefficiency. The simplest method is to randomly pick a fixed number of points uniformly. However, randomly sampling a small number of points from a large point cloud induces considerable randomness into the samples, which may derail the training process. Therefore, to obtain less biased and learnable samples, we experimented with two variations of uniform sampling, the details of which are presented in the supplementary document: (1) random block sampling (RBS) and (2) random centered K nearest neighbor (RC-KNN) sampling. Given a set of $N$ points, each with three coordinates (latitude, longitude and height), we define how to select $k$ ($k < N$) points as follows, with a code sketch after this paragraph. RBS randomly chooses a point $p$ with a uniform distribution and then uniformly samples within a $w \times d$ block centered at $p$, i.e., among all points whose two horizontal coordinates lie within $w/2$ and $d/2$ of $p$. RC-KNN randomly chooses a point $p$ with a uniform distribution, and then the $k$ nearest neighbors of $p$ in Euclidean distance are chosen as the sampled points. To investigate the effectiveness of these two sampling methods within the HL methods, we apply them with PointNet++ (Qi et al., 2017b) as the feature learning network on our dataset. Specifically, in each training iteration, we use either RBS or RC-KNN to select points from a randomly picked region of the training set as one sample in a batch. The sample size is set to 2048, and the block width $w$ and depth $d$ in RBS are both set to 12 m.
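The following Python sketch implements the two strategies as just described for an (N, 3) coordinate array. The 12 m block size and 2048-point sample size follow the text; the padding behavior when a block holds fewer points than needed is our assumption, not stated in the paper.

```python
# Sketch of the two sampling strategies for an (N, 3) array of coordinates.
import numpy as np

def random_block_sampling(points, k=2048, w=12.0, d=12.0):
    c = points[np.random.randint(len(points))]          # random block center
    inside = (np.abs(points[:, 0] - c[0]) <= w / 2) & \
             (np.abs(points[:, 1] - c[1]) <= d / 2)     # no height bound
    idx = np.flatnonzero(inside)
    # Sample k points in the block; pad with replacement if the block is small.
    pick = np.random.choice(idx, size=k, replace=len(idx) < k)
    return points[pick]

def rc_knn_sampling(points, k=2048):
    c = points[np.random.randint(len(points))]          # random center point
    dist = np.linalg.norm(points - c, axis=1)           # Euclidean distances
    return points[np.argsort(dist)[:k]]                 # k nearest neighbors

cloud = np.random.rand(100_000, 3) * 100.0              # toy 100 m x 100 m scene
print(random_block_sampling(cloud).shape, rc_knn_sampling(cloud).shape)
```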
We compute the mean IoU (mIoU) across all classes for each granularity level. The test results are given in Table 6; from them we can see that the RBS method dominates the RC-KNN method under our settings. Note that this RBS setting is also the one used in Section 4.

Table 6 (excerpt; mIoU in % per granularity level, levels 1 to 5):
| Method | Sampling | L1 | L2 | L3 | L4 | L5 |
| MT + HE | RBS | 82.2 | 68.5 | 41.1 | 29.8 | 20.1 |

5.2. Semantic Segmentation In this section, we benchmark performance on the semantic segmentation task. Three established models are applied: PointNet++ (Qi et al., 2017b), PointCNN (Li et al., 2018) and DGCNN (Wang et al., 2019c). The sampling method used here is RBS with a block size of 12 m. We take the hierarchical annotation into account and apply our proposed MT+HE method. The mIoU across all classes of each granularity level is used as the performance metric. Results for both the test and validation sets are given by Table 7. 5.3. Instance Segmentation In this section, we build the instance segmentation benchmark for the current dataset. The training, validation and test split still follows Table 3. We perform this task for granularity level four only, where the largest numbers of available classes and instances among all granularity levels are available for training, validation and test. The ASIS (Wang et al., 2019b) and SGPN (Wang et al., 2018b) methods are used for the baseline evaluation. For each class, the weighted coverage (WCov), as introduced by Wang et al. (Wang et al., 2019b), is computed as the performance measurement. Results for both the validation and test sets are shown in Table 8, which shows that ASIS (Wang et al., 2019b) performs better than SGPN (Wang et al., 2018b). Note that the classes "natural", "path&stair", "not vehicle" and "facility" are not countable, so no instance segmentation results are reported for them. A well-annotated point cloud dataset with two benchmarks, Campus3D, is proposed in this paper. It is annotated with multiple, hierarchical labels for better scene understanding and potential use in reconstruction. We define the HL problem and propose a new measure to evaluate consistency across granularity levels. A two-stage method, MT+HE, is presented for HL; experimental results demonstrate its effectiveness compared with MC, which does not take the multiple, hierarchical information into account. Moreover, we investigate two sampling methods for point cloud learning with HL methods and identify RBS as the useful one. Future users will benefit from these initial and basic explorations. Finally, we apply established models and benchmark performance for semantic and instance segmentation for future comparison. Other potential tasks, such as hierarchical instance segmentation and 3D model reconstruction, can be built on Campus3D. This research is supported by National University of Singapore (NUS) Institute of Operations Research and Analytics (IORA) grant R-726-000-002-646 and National Research Foundation of Singapore grant NRF-RSS2016-004. The authors gratefully acknowledge the data collection support of the Virtual NUS team.
- Joint 2d-3d-semantic data for indoor scene understanding. arXiv preprint arXiv:1702.01105.
- 3d semantic parsing of large-scale indoor spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1534–1543.
- SemanticKITTI: a dataset for semantic scene understanding of lidar sequences. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9297–9307.
- Virtual 3d city model for navigation in urban areas. Journal of Intelligent & Robotic Systems 66 (3), pp. 377–399.
- Markerless vision-based augmented reality for urban planning. Computer-Aided Civil and Infrastructure Engineering 29 (1), pp. 2–17.
- Matterport3d: learning from rgb-d data in indoor environments. arXiv preprint arXiv:1709.06158.
- 3D outdoor augmented reality for architecture and urban planning. Procedia Computer Science 25, pp. 71–79.
- CloudCompare: 3d point cloud and mesh processing software, open source project.
- Scannet: richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5828–5839.
- RGBD datasets: past, present and future. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 19–31.
- Building rome on a cloudless day. In European Conference on Computer Vision, pp. 368–381.
- Vision meets robotics: the kitti dataset. The International Journal of Robotics Research 32 (11), pp. 1231–1237.
- Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 3354–3361.
- SEMANTIC3D.net: a new large-scale point cloud classification benchmark. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences 4.
- RandLA-net: efficient semantic segmentation of large-scale point clouds. arXiv preprint arXiv:1911.11236.
- Scenenn: a scene meshes dataset with annotations. In 2016 Fourth International Conference on 3D Vision (3DV), pp. 92–101.
- CityGML: interoperable access to 3d city models. In Geo-information for Disaster Management, pp. 883–899.
- Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4558–4567.
- Reconstructing building mass models from uav images. Computers & Graphics 54, pp. 84–93.
- PointCNN: convolution on x-transformed points. In Advances in Neural Information Processing Systems, pp. 828–838.
- Developing a gis-based visual-acoustic 3d simulation for wind farm assessment. ISPRS International Journal of Geo-Information 3 (1), pp. 29–48.
- JSIS3D: joint semantic-instance segmentation of 3d point clouds with multi-task pointwise networks and multi-value conditional random fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8827–8836.
- Pointnet: deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660.
- Pointnet++: deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pp. 5099–5108.
- Paris-lille-3d: a large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification. The International Journal of Robotics Research 37 (6), pp. 545–557.
- Paris-rue-madame database: a 3d mobile laser scanner dataset for benchmarking urban detection, segmentation and classification methods. In 4th International Conference on Pattern Recognition, Applications and Methods (ICPRAM 2014).
- Indoor scene segmentation using a structured light sensor. In 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pp. 601–608.
- Indoor segmentation and support inference from rgbd images. In European Conference on Computer Vision, pp. 746–760.
- Sun rgb-d: a rgb-d scene understanding benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 567–576.
- Kpconv: flexible and deformable convolution for point clouds. In Proceedings of the IEEE International Conference on Computer Vision, pp. 6411–6420.
- TerraMobilita/iQmulus urban point cloud analysis benchmark. Computers & Graphics 49, pp. 126–133.
- LoD generation for urban scenes. ACM Transactions on Graphics 34, Article 30.
- Graph attention convolution for point cloud semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10296–10305.
- Deep parametric continuous convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2589–2597.
- Sgpn: similarity group proposal network for 3d point cloud instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2569–2578.
- Associatively segmenting instances and semantics in point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4096–4105.
- Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG) 38 (5), pp. 146.
- 'Structure-from-motion' photogrammetry: a low-cost, effective tool for geoscience applications. Geomorphology 179, pp. 300–314.
- Multicore bundle adjustment. In CVPR 2011, pp. 3057–3064.
- Sun3d: a database of big spaces reconstructed using sfm and object labels. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1625–1632.
- Attentional shapecontextnet for point cloud recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4606–4615.
- Modeling point clouds with self-attention and gumbel subset sampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3323–3332.
- PointWeb: enhancing local neighborhood features for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5565–5573.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00233.warc.gz
CC-MAIN-2021-10
41,688
131
https://escholarship.org/search/?q=author%3A%22Bisht%2C%20A%22
code
Retinoblastoma is a rare pediatric tumor of the retina, caused by the homozygous loss of the Retinoblastoma 1 (RB1) tumor suppressor gene. Previous microarray studies have identified changes in the expression profiles of coding genes; however, our understanding of how non-coding genes change in this tumor is absent. This is an important area of research, as in many adult malignancies, non-coding genes including long non-coding RNAs (lncRNAs) are used as biomarkers to predict outcome and/or relapse. To establish a complete and in-depth RNA profile of both coding and non-coding genes in Retinoblastoma tumors, we conducted RNA-seq on a cohort of tumors and normal retina controls. This analysis identified widespread transcriptional changes in the levels of both coding and non-coding genes. Unexpectedly, we also found rare RNA fusion products resulting from genomic alterations, specific to Retinoblastoma tumor samples. We then determined whether these gene expression changes, of both coding and non-coding genes, were also found in a completely independent Retinoblastoma cohort. Using our dataset, we then profiled the potential effects of deregulated lncRNAs on the expression of neighboring genes, the entire genome, and on mRNAs that contain a putative area of homology. This analysis showed that most deregulated lncRNAs do not act locally to change the transcriptional environment, but potentially function to modulate genes at distant sites. From this analysis, we selected a strongly down-regulated lncRNA in Retinoblastoma, DRAIC, and found that restoring DRAIC RNA levels significantly slowed the growth of the Y79 Retinoblastoma cell line. Collectively, our work has generated the first non-coding RNA profile of Retinoblastoma tumors and has found that these tumors show widespread transcriptional deregulation.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00092.warc.gz
CC-MAIN-2021-10
1,818
1
https://techwiser.com/how-to-navigate-to-a-folder-in-terminal-mac/amp/
code
In Windows 10, you can open cmd in any folder by either typing cmd in the location bar in File Explorer or simply holding down the Shift key and right-clicking on the Explorer window. In the context menu, you will see the option to Open command window here. However, there is no such option to quickly open Terminal on Mac. When you open a terminal on Mac, it always opens in the home directory, but there are times when you may need to open it in a particular folder on your system. It turns out you can open Terminal in any directory on macOS as well. There are three ways to go about it.

- Use the cd command
- Use the Mac's built-in shortcut
- Use a third-party app

Quickly Navigate to a Folder in Terminal on Mac

This is the most common method. Simply open the terminal and type the cd command followed by the path of the folder you want to navigate to, for example, cd ~/Documents (a fuller session is sketched at the end of this article). Alternatively, if you would rather not type the entire path name, you can drag a folder (or pathname) onto the Terminal application icon. It will automatically grab the path of the folder; then hit Enter.

While the previous method works, it's not the most efficient way to navigate to a folder in the terminal on Mac. Much like Windows, Mac also lets you open the terminal directly from a specific folder. However, this option is buried deep in the Mac's settings. Let's see how to enable it. To get started, go to System Preferences > Keyboard > Shortcuts > Services. Find "New Terminal at Folder" in the settings and check the box. Now, when you're in Finder, just right-click a folder and go to Services, and you'll see a new option: New Terminal at Folder. Clicking it will open Terminal in the current folder.

Alternatively, you can use one of the many third-party apps available to navigate to a folder in the terminal. The ones I recommend are cd to and OpenInTerminal. For this article, we will use the OpenInTerminal app. It's free and open source. To get started, download OpenInTerminal. As of writing, the latest version is OpenInTerminal-Lite 0.4.1. Once downloaded, unzip it and move the app to the Applications folder. Now, you need to add OpenInTerminal-Lite to your Finder toolbar. To do so, hold down the Cmd key and drag the app into the Finder toolbar. Once done, the app shows a small icon in the Finder window; clicking it will open Terminal in the current folder. And that's about it.

Now, to remove OpenInTerminal, you have to first remove it from the Finder toolbar before you delete it from the Applications folder. To do so, open Finder and go to View > Customize Toolbar. A new window will open; click and hold OpenInTerminal's icon and drag it out of the toolbar to remove it. Now you can go ahead and delete the original app from the Applications folder.

All in all, these were a few ways to navigate to a folder in the terminal on Mac. While it's not a life saver, it does save some time if you use the terminal a lot. In general, I would recommend using the 'New Terminal at Folder' option, as it's native to the Mac; or, if you prefer a toolbar button, use the OpenInTerminal app. Let me know your thoughts in the comment section below.
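To make the first method concrete, here is a short example session. The folder names are placeholders of my own, and the final open -a Terminal line is a common extra trick rather than something from this article: it asks macOS to open the Terminal app on a folder, which starts a new window there.

    # change into a folder by giving cd its path (absolute or ~-relative)
    cd ~/Documents/Projects
    pwd        # confirm where you are

    # cd on its own takes you back to the home directory
    cd

    # open a brand new Terminal window at a given folder
    open -a Terminal ~/Documents/Projects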
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662529658.48/warc/CC-MAIN-20220519172853-20220519202853-00496.warc.gz
CC-MAIN-2022-21
3,185
22
https://blog.adafruit.com/2012/12/20/accurate-focus-stand-for-adafruit-usb-microscope-by-equinoxefr-3dthursday/
code
Now here's a perfect use of a 3D printer. We love the inexpensive USB microscopes we offer, but there just aren't that many options for stands. (One that is working great but is more expensive than we'd like!) Check out this excellent "Accurate focus stand for Adafruit USB microscope" (Thing:38460) from designer equinoxefr:

A DIY printable stand for your Adafruit USB microscope. You need a little piece of plywood for your stand and some epoxy glue. (JB Weld is good for the job!) You must glue the top and bottom printed parts to the 8mm rod for good rigidity. File the end of the 8mm rod to make a square and glue it to the original microscope mount.

- M6 threaded rod
- 8mm rod
- 2 M6 nylstop (nylon-insert lock) nuts
- 1 M6 nut
- 2 screws
- A little piece of plywood and a piece of wood

Enjoy your high accuracy microscope!

Every Thursday is #3dthursday here at Adafruit! The DIY 3D printing community has passion and dedication for making solid objects from digital models. Recently, we have noticed electronics projects integrated with 3D printed enclosures, brackets, and sculptures, so each Thursday we celebrate and highlight these bold pioneers!

Have you considered building a 3D project around an Arduino or other microcontroller? How about printing a bracket to mount your Raspberry Pi to the back of your HD monitor? And don't forget the countless LED projects that are possible when you are modeling your projects in 3D!

The Adafruit Learning System has dozens of great tools to get you well on your way to creating incredible works of engineering, interactive art, and design with your 3D printer! If you've made a cool project that combines 3D printing and electronics, be sure to let us know, and we'll feature it here!
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590794.69/warc/CC-MAIN-20180719090301-20180719110301-00266.warc.gz
CC-MAIN-2018-30
1,734
13
https://www.ziabird.com/products/copy-of-everyday-shirt-white-poplin-one-size
code
Wrapper Top - Black Poplin (One Size)

The Wrapper is an oversized, relaxed cotton top with a v-neck, 3/4 sleeves, and a versatile tie front. Tie the front tight or tuck it into your favorite pair of jeans to create a classic blouse silhouette.

- V-neck
- Versatile front tie

Designed in Austin | Made in Los Angeles
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00863.warc.gz
CC-MAIN-2023-50
311
4
http://www.backtrack-linux.org/forums/showthread.php?t=1057&page=2&p=189106
code
@chkov - Thanks for the suggestions. I'm going to try and do the DNS spoof next, and maybe the proxychains idea depending on how much I learn about the subject (I understand the idea, but I don't know enough to teach it). I know how aircrack and such work and how to crack WEP using them, but I don't own a computer that has a good enough compatible wireless card (I keep telling myself I need the practice and that I should buy a WiFi USB adapter but I never get around to doing it... Really want to now though, probably going to order one soon).

@Sniper - How would you prefer to have it? Plain text (for viewing while in a command line), Doc (pretty, but you know Microsoft :P), or PDF? If you let me know your favorite I'll upload it somewhere for you guys. (I'll probably upload a plain text and a PDF just to be safe.)

@Code - Hopefully the next one is more detailed. This was my first attempt at a tutorial, and now that I have feedback I can make the next one better and more in depth on the subject matter instead of "click this, now this". I want it to be more me just talking about the subject matter and only briefly covering the tools.

Thanks for the kind words guys n' gals. Keep watching for the next tutorial, hopefully out within this coming week (depending on if I can write a good one about the subject).
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124478.77/warc/CC-MAIN-20170423031204-00354-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,315
5
http://www.my.jobs/chantilly-va/rt-logic-development-engineer/45A1474F055D43E19533C29C18109D56/job/?utm_campaign=.JOBS%20Sitemap%20Feed&vs=28&utm_medium=.JOBS%20Universe&utm_source=.JOBS%20Sitemap%20Feed-DE
code
Kratos Technology & Training Solutions
RT Logic Development Engineer in Chantilly, Virginia

RT Logic is a leading supplier of ground signal processing systems (RF, DSP, and digital domains) for satellite operations including factory test, launch, business operations, and assured mission capabilities. Our technology also supports non-space applications. We have an exciting opportunity for a Development Engineer! This position will:

- Develop, integrate, and test real time software and/or firmware applications to support various RT Logic products.
- Design and develop firmware/software in support of new product development and custom deliveries.
- Work directly with customers and RT Logic engineers to perform requirements analysis/design trade-offs and develop solutions in accordance with system/product architectures.
- Work with program management to scope and estimate new efforts as needed.
- Use C/C++, Python and other software tools to develop solutions for various Linux based platforms.
- Use VHDL to implement real-time communications processing for tactical and network communications products.

Experience and Skills:

- Bachelor's degree in Electrical Engineering, Computer Science, Computer Engineering or related field
- 0-3+ years of related experience
- C or C++ experience
- VHDL experience
- Linux experience desired
- Knowledge of network protocols (TCP, IP, UDP)
- Must be able to work closely in a small engineering team which will include other software engineers, FPGA firmware, RF hardware, system hardware, and test engineers

U.S. Citizenship and ability to obtain and maintain a U.S. Government Security Clearance is required. RT Logic offers challenging work, an excellent environment, & great benefits! Qualified applicants apply on-line at www.rtlogic.com EOE M/F/D/V Please, no phone calls, agencies or recruiters.

Job Tracking ID: RTL:16-026
Location: Chantilly, VA
Job Type: Full-Time/Regular
Date Updated: July 19, 2016
Job Level: Entry Level (less than 2 years)
Number of Openings: 1
Years of Experience: At least 1 Year
Level of Education: BA/BS
Starting Date: ASAP
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170925.44/warc/CC-MAIN-20170219104610-00548-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
2,106
28
https://menstrual-cups.livejournal.com/3033845.html
code
I am looking for some specific comparison photos. I currently have a L fleur, small Lunette Diane, and Jasmine (using a Diva for reference since they are more common). I am looking for... • a small Lunette with a (L) Cuplee • Pictures of a blue (new?) Cuplee. Wondering if the femininewear periwinkle sort of color is the current color, and if it's as solid as it appears. Basically any combination of the tags above, highlighting Cuplee and Large Meluna compared to the three other cups that I have. And if anyone by chance has comparison photos of the Lunacup, Lilacup, or Sckoon, (with or without any of these tags, I just would like an idea of size), that would also be lovely. Thanks! PS (edit)-- does anyone know if anything is new with Cuplee? On their .ru page, it shows lots of different colors (like purple!), and also makes mention of 2 sizes. These show on cuplee.org for purchase as well. Femininewear only has a few colors and the one size. And I'm not sure of any other distributors to cross-check sizes/colors.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00191.warc.gz
CC-MAIN-2021-43
1,030
8
https://www.mpug.com/tag/self-assign/
code
Tag Archives: self-assign

An Introduction

Project's Web App allows one to define three types of resources: Work, Material, and Cost. Work resources are equipment and people needed to perform tasks on a project. Work resources may be further classified as Generic, Named, and Team. Generic resources are placeholders for work resource roles such as designer, developer, architect, etc….
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657151197.83/warc/CC-MAIN-20200714181325-20200714211325-00196.warc.gz
CC-MAIN-2020-29
389
2
https://cepr.org/publications/dp6361
code
DP6361 Productivity Effects of International Outsourcing: Evidence from Plant Level Data We investigate the impact of international outsourcing on productivity using plant level data for Irish manufacturing. Specifically, we distinguish the effect of outsourcing of materials from services inputs. Moreover, we examine whether the impact on productivity is different for plants being more embedded in international markets through exporting or being part of a multinational. Our results show robust evidence for positive effects from outsourcing of services inputs for exporters, either domestic- or foreign-owned. By contrast, we find no statistically significant evidence of an impact of international outsourcing of services on productivity for firms not operating on the export market.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818740.13/warc/CC-MAIN-20240423192952-20240423222952-00215.warc.gz
CC-MAIN-2024-18
789
2
https://accu.org/journals/overload/18/99/harris_1702/
code
Numerical computing has many pitfalls. Richard Harris starts looking for a silver bullet.

The dragon of numerical error is not often roused from his slumber, but if incautiously approached he will occasionally inflict catastrophic damage upon the unwary programmer's calculations. So much so that some programmers, having chanced upon him in the forests of IEEE 754 floating point arithmetic, advise their fellows against travelling in that fair land. In this series of articles we shall explore the world of numerical computing, contrasting floating point arithmetic with some of the techniques that have been proposed as safer replacements for it. We shall learn that the dragon's territory is far reaching indeed and that in general we must tread carefully if we fear his devastating attention.

On the classification of numbers

As programmers we are probably aware that integers and floating point numbers have different properties, even if we haven't spent a great deal of time thinking about their precise nature. However, I rather suspect that we are somewhat less aware of how they fit into the general mathematical classification of number types. Therefore, before we start looking at the various techniques for representing numbers with computers I should like to explore what exactly it is we mean by Number. The concept of Number has been refined throughout history as generations of mathematicians have time and again stumbled across inconsistencies in their understanding. A hierarchy of number types as we currently understand them is provided in figure 1. Traversing this tree from left to right, we more or less recover the sequence of development in our concept of Number from prehistory to the modern day.

The story of Number begins with the integers, or more accurately the natural numbers; those whole numbers greater than zero. Animal studies have shown that primates, rats and even some birds have a rudimentary ability to count; presumably using neural circuitry similar to that we use to distinguish at a glance between three and four objects, but not between nineteen and twenty. It is not unreasonable, therefore, to suppose that awareness of the natural numbers predates man. The negative numbers are, comparatively, an example of striking modernity, having been discovered in India just a few millennia ago.

The integers, as important as they are, are not particularly useful for measurement; the distance between the ziggurat and the brothel is never quite a whole number of cubits, for example. For this task we instead employed the fractions, or rationals; those numbers equal to the ratio of two integers. Note that the rationals are a superset of the integers; every integer is trivially the ratio of itself and 1.

For many years it was thought that the rationals comprised the totality of Number. Legend has it that a member of the school of Pythagoras discovered that the square root of 2 could not be expressed as a fraction and that his compatriots were so put out by this fact that they drowned him (we shall revisit this in a later article). The algebraic irrationals are those numbers which are roots of polynomial equations with rational, or equivalently integer, coefficients. By roots we mean those real values, if any, at which the polynomial equates to zero. The square root of 2 is a root of the polynomial x^2 − 2, for example. Technically, the algebraic numbers are a superset of the rationals since the latter are solutions to linear equations with integer coefficients.
The final breed of numbers, the transcendentals, is the most elusive. These are the numbers which are not solutions of polynomial equations with rational coefficients and include such notable numbers as π and e. They are so difficult to identify that it is still not known whether the sum of π and e is itself transcendental. Despite this, it is known that the transcendentals form the vast majority of numbers; if you were to throw a dart at a line representing the numbers between 0 and 1, you would almost certainly hit a transcendental. To understand why, we need to discuss the mathematics of infinite sets.

In the late 19th century Georg Cantor perfected the theory of infinite sets. The transfinite cardinals are not, as their name suggests, characters in a Catholic science fiction blockbuster, but are in fact those infinite numbers that denote the size of infinite sets. Cantor identified the smallest of the transfinite cardinals, the size of the integers, as ℵ₀. This is known as the countable infinity since we can imagine an algorithm that, given infinite time, would step through them sequentially, counting them off one at a time. He then asked the question of whether the rationals were larger than the integers; whether they were uncountable. His proof that they were not is one of the most elegant in all of mathematics.

When we say a set is countable, we strictly mean that it can be put into a one to one correspondence with the non-negative integers. For example, the integers are countable since we can map from the non-negative integers to them with the rules

- if n is even, n maps to ½n
- if n is odd, n maps to -½(n+1)

Enumerating this sequence yields 0, -1, 1, -2, 2, -3, 3, ... which clearly counts through the integers, one at a time.

Cantor laid out the rationals such that the numerator (the number on top of the fraction) was indicated by the column and the denominator (the number on the bottom of the fraction) was indicated by the row, as shown in figure 2. What Cantor realised was that, whilst each row and column stretched on forever and so couldn't be counted one after the other, the diagonals between the first column of a given row and the first row of the corresponding column were all finite and hence countable. For example, we could iterate over the first row, counting diagonally backwards through the table until we hit the first column, yielding the sequence 1/1, 2/1, 1/2, 3/1, 2/2, 1/3, 4/1, 3/2, 2/3, 1/4, ... If we skip any number we have seen before, we have the sequence 1, 2, 1/2, 3, 1/3, 4, 3/2, 2/3, 1/4, ... So, rather surprisingly, despite there being an infinite number of fractions between any two different integers, the sizes of the set of fractions and the set of integers are in fact equal.

Cantor proceeded to demonstrate that the set of polynomial equations with integer coefficients is also countable and, since each has a finite number of roots, so are the algebraic numbers. He did this by defining a function, we shall call it c, that takes a polynomial with integer coefficients and returns a positive integer. It operates by adding together the absolute values of the coefficients and the largest power to which the variable is raised, the order of the polynomial, minus one. Note that we can insist that the term with the highest order is positive, since multiplying a polynomial by minus one doesn't affect its roots. Cantor realised that every possible value of this function is shared by a finite number of such polynomials.
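Both of the enumerations above are concrete enough to run. The following sketch is mine rather than the article's: it prints the first few terms of the even/odd mapping onto the integers and of the diagonal walk through the positive rationals, skipping values already seen by reducing each fraction to lowest terms.

    #include <iostream>
    #include <numeric>   // std::gcd
    #include <set>
    #include <utility>

    int main() {
        // Even n maps to n/2, odd n to -(n+1)/2: 0, -1, 1, -2, 2, -3, 3, ...
        for (int n = 0; n < 7; ++n)
            std::cout << (n % 2 == 0 ? n / 2 : -(n + 1) / 2) << ' ';
        std::cout << '\n';

        // Walk the diagonals of Cantor's table: on each diagonal the sum of
        // numerator p and denominator q is constant. Reducing to lowest terms
        // and remembering what we have printed skips duplicates such as 2/2.
        std::set<std::pair<int, int>> seen;
        for (int diagonal = 2; diagonal <= 6; ++diagonal)
            for (int p = diagonal - 1; p >= 1; --p) {
                int q = diagonal - p, g = std::gcd(p, q);
                if (seen.insert({p / g, q / g}).second)
                    std::cout << p / g << '/' << q / g << ' ';
            }
        std::cout << '\n';   // 1/1 2/1 1/2 3/1 1/3 4/1 3/2 2/3 1/4
    }

Returning to Cantor's polynomial-counting function c: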
For example, there are just 4 such polynomials for which this function yields 2, namely x + 1, x − 1, 2x and x^2. So we can count off these polynomials by counting through the positive integers and, for each of them in turn, enumerating the members of the finite set of them for which Cantor's function returns that value.

We are left with the question of whether or not the transcendental numbers are of the same size. If the transcendental numbers are countable then the real numbers, being the union of both they and the algebraic numbers, must be countable too since we could simply alternate between the sequences of each of them. Cantor noted that if the reals were countable we could construct a list of them as they are generated by the mapping from the integers. Figure 3 illustrates what this list might look like for the numbers between 0 and 1.

0.x00 x01 x02 x03 x04 x05 ...
0.x10 x11 x12 x13 x14 x15 ...
0.x20 x21 x22 x23 x24 x25 ...
0.x30 x31 x32 x33 x34 x35 ...
0.x40 x41 x42 x43 x44 x45 ...
0.x50 x51 x52 x53 x54 x55 ...

Now starting after the decimal point in the first row and moving diagonally down and to the right we can construct a new number whose nth digit is xnn + 2, taken modulo 10. This number is clearly between 0 and 1, but must differ from every number in the list in at least one digit. Note that we add 2 to each digit rather than 1 to avoid the irritating corner case of recurring nines, such as 0.099999... being equal to 0.1. We have thus found a number between 0 and 1 that was not in our list and hence the list is incomplete. It is not, therefore, possible to construct such a list and hence the reals, and consequently the transcendentals, are uncountably infinite. Being more sizable than the other numbers, their cardinal number is denoted by ℵ₁.

So now we know the mathematical classification of numbers, we are ready to start looking at how we might implement numeric types with computers.

The IEEE standard [IEEE] defines floating point numbers to have a format similar to the scientific notation many of us will recognise from our calculators and spreadsheets. In the familiar decimal base 10 this means a number between 1 and 10 multiplied by 10 raised to the power of another number. For example, the number of days in a year is approximately 365. Dividing this by 100 gives us a number between 1 and 10, namely 3.65. Since 100 is 10 raised to the power of 2, the number of days in the year can be written as 3.65 × 10^2, or commonly 3.65E2. The number that we multiply by the power of 10 is known as the mantissa and the power of 10 by which we multiply it is known as the exponent.

Since base 10 is rather inconvenient from a computing perspective, IEEE floating point numbers are defined in the binary base 2. Specifically, numbers are defined as ±b × 2^a with, in the single precision format, the sign taking one bit, the exponent a taking 8 bits and the mantissa b taking 24. Much as in decimal the mantissa must lie between 1 and 10, so in binary it must lie between 1 and 2. The leading digit must therefore be 1, and we can represent b with 23 bits rather than the full 24.

There is, in fact, a special case when we assume the leading digit is 0 rather than 1. This occurs when the exponent takes on its most negative value, yielding the very smallest floating point numbers. Since the leading digits of these numbers, known as subnormal or denormalised numbers, may be 0, there may consequently be fewer bits left to represent the mantissa, resulting in fewer significant digits of accuracy, or equivalently in lower precision.
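The single precision layout, including the subnormal special case, can be inspected directly. This sketch is mine, not the article's: it copies a float's bits into a 32 bit integer and peels off the sign bit, the 8 exponent bits and the 23 stored mantissa bits.

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Split an IEEE 754 single precision float into its three bit fields.
    void dissect(float f) {
        std::uint32_t bits;
        std::memcpy(&bits, &f, sizeof bits);   // a defined way to view the bits

        std::cout << f
                  << ": sign="      << (bits >> 31)
                  << " exponent="   << ((bits >> 23) & 0xffu)   // biased by 127
                  << " mantissa=0x" << std::hex << (bits & 0x7fffffu)
                  << std::dec << '\n';
    }

    int main() {
        dissect(3.65f);    // normal: implicit leading 1, biased exponent 128
        dissect(-0.0f);    // sign bit only
        dissect(1e-40f);   // subnormal: exponent field 0, no implicit 1
    }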
In contrast, recall that normal numbers have an implied leading digit of 1 and consequently have the full 24 bits with which to represent the mantissa.

In addition to the normal and subnormal numbers, IEEE 754 defines bit patterns to represent the positive and negative infinities and a set of error values for invalid calculations known as the NaNs, for Not a Number. Many of us are probably aware of the quiet and signalling NaNs identified by std::numeric_limits, but perhaps not of the fact that there are actually 2^24 − 2 of them in the single precision format, allowing for error codes to be embedded in invalid results. Figure 4 enumerates the full set of IEEE 754 single precision floating point numbers for the bit pattern ±a1 a2 a3 ... a8 b1 b2 b3 ... b23, in which a1 to a8 are the exponent bits and b1 to b23 the stored mantissa bits. Note that since the mantissa is finite, floating point numbers are actually a finite subset of the rational numbers and it is vitally important not to confuse them with real numbers.

Double precision floating point numbers have precisely the same layout as single precision floating point numbers, differing only in that they have an 11 bit exponent and a 53 bit mantissa. Recall that one of the bits in the mantissa is implied, so that these and the sign bit fill 64 bits. Henceforth, we shall assume that the double precision format is being used.

Now that we have covered the mundane implementation details of floating point numbers it is time to start looking at the rather more important topic of their precise behaviour.

Not a number

The NaNs infect any calculation they come into contact with since the result of any operation upon a NaN yields a NaN. Furthermore, any comparison involving a NaN is always false, even an equality comparison between two NaNs. If you keep this in mind when designing loops and branches, you can ensure that your algorithms will behave predictably in the face of invalid arithmetic operations.

Floating point numbers overflow in a satisfyingly predictable way, namely to plus or minus infinity. Dividing any finite number by an infinity will yield zero and dividing any non-zero number by zero will yield an infinity of the same sign as that number. Adding or subtracting any finite number to or from an infinity will result in that infinity. These properties mean that many numerical algorithms can implicitly cope with numerical overflow since arithmetic operations and comparisons are internally consistent and, accompanied by some vigorous hand waving, mathematically sound. Note that dividing 0 by 0, dividing an infinity by an infinity, multiplying an infinity by 0 and subtracting an infinity from itself all yield NaNs.

One of the most common surprises facing the programmer using floating point arithmetic stems from the fact that there are a fixed number of bits with which to represent the mantissa. We can illustrate the problem by considering decimal notation. Say we restrict ourselves to 4 figures after the decimal point. Assuming that we have chosen the closest number in this representation, x, to a given number, we can only say that its true value lies somewhere within x ± 5 × 10^−5. For example, given π to 4 decimal places, 3.1416, we can only state with certainty that it lies between 3.14155 and 3.14165. Similarly, for an IEEE double precision floating point number x with an exponent of a, we can only be sure that the true value is between x ± 2^(a−53). Conveniently, since normalised floating point numbers have an implicit leading digit of 1, these bounds can be written as x(1 ± 2^−53) or, conventionally, as x(1 ± ½ε).
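The special values and the x(1 ± ½ε) bound are both easy to poke at. The demo below is mine rather than the article's; the printed spellings "inf" and "nan" are what typical implementations produce.

    #include <iostream>
    #include <limits>

    int main() {
        const double nan  = std::numeric_limits<double>::quiet_NaN();
        const double inf  = std::numeric_limits<double>::infinity();
        double       zero = 0.0;

        // Every comparison involving a NaN is false, even with itself,
        // which makes x != x a terse (if cryptic) test for NaN.
        std::cout << std::boolalpha
                  << (nan == nan) << ' ' << (nan < 1.0) << '\n';   // false false

        // Infinities behave consistently, but the invalid operations
        // listed above all yield NaNs.
        std::cout << 1.0 / zero << ' ' << 1.0 / inf << ' '
                  << inf - inf  << '\n';                           // inf 0 nan

        // Accumulated rounding makes bitwise equality treacherous:
        // neither 0.1 nor 0.2 is exactly representable in binary.
        std::cout << (0.1 + 0.2 == 0.3) << '\n';                   // false
    }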
Of course, the ½ε bound also means that operations on denormalised numbers will introduce proportionally even greater errors, but we shall ignore this fact in our analyses and effectively treat them as if they behave in the same fashion as zero. If an algorithm really must treat denormalised numbers with the same respect as normalised numbers, it will require much more careful analysis.

The mathematical operations of addition, subtraction, multiplication, division, remainder and square root are required by the IEEE standard to be accurate to within rounding error. Specifically, they must return the correctly rounded representation of the result of performing the actual calculation with real numbers. This means that, if using round to nearest, they will introduce a proportional error no larger than (1 ± ½ε).

Note that because of these accumulated rounding errors, equality comparisons between floating point numbers often behave counter-intuitively; values of unlike expressions that should mathematically be equal may have accumulated slightly different rounding errors. In general, we should prefer to test whether two floating point numbers are similar to each other rather than the same.

It is important to note that the rounding guarantees of the IEEE arithmetic operations do not take into account any rounding error in their arguments. We can capture the sensitivity of the result of a function f to rounding errors in its argument x with the condition number, given by

|x f′(x) / f(x)|

where f′ is the derivative of f and the vertical bars mean the absolute value of the expression between them. This value is approximately equal to the absolute value of the ratio between the relative error of f(x) and the relative error of x, as shown in derivation 1. Note that it assumes that f can be calculated exactly and so the condition number does not take into account rounding during the calculation or of the result.

Derivation 1: Given a real value x and the nearest normal floating point x* we have x* = x(1 + δ) for some |δ| no greater than ½ε. The relative error in f is given by |f(x*) − f(x)| / |f(x)|, which is approximately |f′(x)| × |x* − x| / |f(x)|. Dividing by the relative error in x, |x* − x| / |x|, we have |x f′(x) / f(x)|.

As an example, consider the exponential function e^x whose derivative is equal to e^x for all x. Its condition number is therefore |x|, meaning that its relative error at x before rounding is approximately equal to |½εx|. When the condition number is large, a calculation is said to be poorly conditioned and we cannot trust that it is accurate to many digits of precision.

Noting that the number of digits of precision is approximately equal to the logarithm of the reciprocal of the absolute relative error, we can use the condition number to estimate the number of decimal digits of precision of a calculation. Specifically, we use

digits ≈ −log10(½ε × |x f′(x) / f(x)|)

where log10 is the base 10 logarithm, as demonstrated in derivation 2. This is equivalent to subtracting the log of the condition number from the number of digits of precision of the floating point type.

Derivation 2: Assuming that the floating point epsilon has n decimal leading zeros, for a given real number and its closest normal floating point number we have ε = b × 10^−(n+1), where b is between 1 and 10. Defining the absolute relative error in the result of a function f as ½ε × |x f′(x) / f(x)|, its negated base 10 logarithm is n + 1, less the logarithms of ½b and of the condition number, so the number of decimal digits of precision is approximately n + 1 minus the base 10 logarithm of the condition number.

Given enough operations or poorly conditioned functions, rounding error can significantly affect the result of a calculation, but it is by no means the worst of our troubles. Far more worrying is cancellation error, which can yield catastrophic loss of precision.
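Before moving on to cancellation, the digit estimate is worth trying. This sketch is mine: it applies digits ≈ −log10(½ε × |x f′(x)/f(x)|) to the exponential, whose condition number is |x| as derived above.

    #include <cmath>
    #include <iostream>
    #include <limits>

    int main() {
        const double half_eps = std::numeric_limits<double>::epsilon() / 2.0;

        // For f(x) = exp(x) the condition number is |x|, so before any
        // rounding in the calculation itself we expect roughly
        // -log10(half_eps * |x|) good decimal digits in the result.
        for (double x : {1.0, 100.0, 10000.0}) {
            double digits = -std::log10(half_eps * std::fabs(x));
            std::cout << "exp(" << x << "): about " << digits
                      << " decimal digits of precision\n";
        }
    }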
When we subtract two almost equal numbers we set the most significant digits to zero, leaving ourselves with just the insignificant, and most erroneous, digits. For example, suppose that we have two values close to π, 3.1415 and 3.1416. These values are both accurate to 5 significant figures, but their difference is equal to 0.0001, or 1.0E-4, and has just 1 significant figure of accuracy. Whilst rounding error might sneak up upon us in the end, cancellation error is liable to beat us about the head with a length of two by four.

The poster child of cancellation error is the approximation of numerical differentiation with the forward finite difference. The derivative of a function f at a point x is defined as the limit, if one exists, of

(f(x + δ) − f(x)) / δ

as δ tends to zero. The forward finite difference replaces the limit with a very small, but non-zero, δ and is a reasonably obvious way to approximate the derivative. It is equally obvious that we should choose δ to be as small as possible, right?

To demonstrate why not, consider the function e^x whose derivative at 1 is trivially equal to e. Figure 5 plots a graph of minus the base 2 logarithm of the absolute error in the approximate derivative at 1, roughly equal to the number of correct bits, against minus the base 2 logarithm of δ, equal to the number of leading zeros in its binary representation (a runnable version of this experiment appears a little further on). Clearly, decreasing δ works up to a point, as indicated by an initial linear relationship between the number of leading zeros and the number of accurate bits. However, this relationship seems to break down beyond δ equal to 2^−25 and the best accuracy occurs when δ is equal to 2^−26.

From the Taylor series expansion of f we have

f(x + δ) = f(x) + δ f′(x) + O(δ^2)

From this we can obtain the result of the approximate derivative

(f(x + δ) − f(x)) / δ = f′(x) + O(δ)

Assuming that we can exactly represent both x and x + δ and that f is accurate to machine precision, the floating point result of this formula will be

(f(x + δ)(1 ± ½ε) − f(x)(1 ± ½ε)) / δ

which is equal to

f′(x) + O(δ) + O(ε/δ)

Hence, if δ is too large the O(δ) term will introduce significant errors into the approximation, whereas if it is too small the O(ε/δ) will do so instead. With some vigorous hand waving, we can ignore the constant factors in these terms, and conclude that since a choice of δ = ε^½ results in them both having the same order of magnitude it is, in some sense, optimal. This is suspiciously close to half of the number of bits in the mantissa of a double precision floating point number. In fact it can be proven, under some simplifying assumptions, that the optimal choice for δ is the square root of ε, which has roughly that many leading zeros.

The reason for this behaviour is that, as δ gets very small, the results of the two calls to f get very close together and their difference introduces a large cancellation error, as shown formally in derivation 3. Note that since cancellation error results from the dramatic sensitivity of the subtraction of nearly equal numbers to the rounding errors in those numbers, it can be captured by the condition number. For example, the expression x − 1 has a condition number of |x/(x − 1)|. As x tends to 1, the condition number tends to infinity, reflecting the growing effect of cancellation error on the result.

Order of execution

The final surprising aspect of floating point numbers is that the exact order in which operations are performed can have a material effect on the result. For example, suppose that we wish to calculate the cube of the square root of a number, or equivalently the square root of the cube.
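The curve in figure 5 can be reproduced directly; this is my sketch rather than the article's code. It halves δ repeatedly and prints the absolute error of the forward difference for e^x at 1, which bottoms out near δ = √ε, around 2^−26 in double precision.

    #include <cmath>
    #include <iostream>

    int main() {
        const double x     = 1.0;
        const double exact = std::exp(x);   // d/dx exp(x) at 1 is e

        // The error falls linearly in delta until cancellation between
        // the two nearly equal calls to exp takes over and it rises again.
        for (int n = 10; n <= 40; n += 2) {
            double delta  = std::ldexp(1.0, -n);   // exactly 2^-n
            double approx = (std::exp(x + delta) - std::exp(x)) / delta;
            std::cout << "delta = 2^-" << n << "  error = "
                      << std::fabs(approx - exact) << '\n';
        }
    }

Returning to the cube of the square root: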
Starting out with a value x accurate to within rounding, we have

x(1 ± ½ε)

Next, we take the square root, introducing another rounding error

x^½ (1 ± ½ε)^½ (1 ± ½ε) = x^½ (1 ± ½ε)^(3/2)

Finally we multiply it by itself twice to recover the cube, introducing two more rounding errors and leaving us with

x^(3/2) (1 ± ½ε)^(13/2)

Now let's try it in the reverse order. This time we perform the two multiplications first, yielding first x^2 (1 ± ½ε)^3 and then

x^3 (1 ± ½ε)^5

Secondly we take the square root, introducing one additional rounding error

x^(3/2) (1 ± ½ε)^(5/2) (1 ± ½ε) = x^(3/2) (1 ± ½ε)^(7/2)

Surprisingly, this second version of the calculation has accumulated just a little more than half the error that the first had. Whilst this is a relatively simple example, the fundamental lesson is sound; in order to control errors when using floating point numbers we must plan our calculations with care.

You're going to have to think!

I recently read a comment on a prominent IT internet forum proposing that scientists should not be trusted to implement their own computer models since, presumably unlike the comment's author, they are not trained in computer science and are consequently likely to make mistakes. The given example of such a mistake was the calculation of the average of 20 or so values in the low 20s, which was ironic, since a computer scientist should be able to demonstrate that the result of performing this calculation with double precision floating point is correct to about 15 decimal digits of precision!

Such unfair criticism of floating point is not particularly unusual, is often unduly concerned with rounding error and hardly ever mentions the vastly more important topic of cancellation error. One can only assume that many computer science graduates have forgotten their numerical computing lectures and have generalised the very specific and predictable failure modes of floating point arithmetic to the rule of thumb that any use of floating point renders a program broken by design.

These criticisms are generally accompanied by suggestions of alternative arithmetic types that fix the perceived problems with floating point. We shall investigate these in coming articles in this series and we shall learn that if we wish to use computers for arithmetic calculation we shall have to accept the fact that we are going to have to think.
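Finally, the order of execution effect above is observable, if tiny. This closing sketch of mine computes x^(3/2) both ways in single precision against a double precision reference; which ordering wins can vary from value to value, but the differences are real and the bound derived above favours multiplying first.

    #include <cmath>
    #include <iostream>

    int main() {
        // Compare the two orderings of x^(3/2) in single precision.
        for (float x : {2.0f, 7.0f, 123.456f}) {
            float  s              = std::sqrt(x);
            float  root_then_cube = s * s * s;            // square root first
            float  cube_then_root = std::sqrt(x * x * x); // multiply first
            double reference      = std::pow(double(x), 1.5);

            std::cout << "x = " << x
                      << "  sqrt-first error = "
                      << std::fabs(root_then_cube - reference)
                      << "  cube-first error = "
                      << std::fabs(cube_then_root - reference) << '\n';
        }
    }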
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818732.46/warc/CC-MAIN-20240423162023-20240423192023-00813.warc.gz
CC-MAIN-2024-18
23,243
129