[WARNING] **** FAILED TO BIND TO PORT!
Posted 29 January 2012 - 11:38 AM
[INFO] Starting minecraft server version 1.1
[WARNING] **** NOT ENOUGH RAM!
[WARNING] To start the server with more ram, launch it as "java -Xmx1024M -Xms1024M -jar minecraft_server.jar"
[INFO] Loading properties
[INFO] Starting Minecraft server on X.XXX.XXX.XX:25565
[WARNING] **** FAILED TO BIND TO PORT!
[WARNING] The exception was: java.net.BindException: Cannot assign requested address: JVM_Bind
[WARNING] Perhaps a server is already running on that port?
Please help! I've searched the interwebs for 4 days but found nothing. Thanks for reading this.
Posted 29 January 2012 - 11:41 AM
Posted 30 January 2012 - 07:24 PM
Posted 30 January 2012 - 07:34 PM
Have you considered asking for help in the Server Administration section of the forum?
Posted 30 January 2012 - 07:37 PM
Leave the server-ip in the properties blank.
When you have done all that, start the server and when it is done loading try a port test: http://www.canyouseeme.org/
Just type your new port (25555) and test it, if it reports a Success that means others can now connect again to your external ip.
Remember to tell them the new port! They should add :25555 at the end of the IP.
Posted 21 April 2012 - 04:44 PM
YOU SAVE MY FREAKIN' LIFE.... THANK YOU SOOOOOO MUCH... I LOVE YOU FOR THAT... NO HOMO
Posted 30 April 2012 - 01:26 AM
Posted 30 April 2012 - 04:21 AM
Posted 12 June 2012 - 05:58 PM
OK, that failed D:
I never had this problem until I tried to switch from Hamachi to a normal server.
Any help please!?
Posted 12 June 2012 - 07:30 PM
If you forcefully terminate the server without allowing it to "clean up" and write what is in memory to disk you can run a serious risk of corrupting your map.
Posted 04 January 2013 - 12:39 AM
Posted 02 March 2013 - 04:37 PM
Posted 02 March 2013 - 06:17 PM
Posted 18 December 2013 - 12:35 PM
Posted 17 February 2014 - 10:16 PM
16:28:32 [INFO] Starting minecraft server version <version>
16:28:32 [INFO] Loading properties
16:28:32 [INFO] Default game type: SURVIVAL
16:28:32 [INFO] Generating keypair
16:28:33 [INFO] Starting Minecraft server on (Don't want to show IP)
16:28:33 [WARNING] **** FAILED TO BIND TO PORT!
16:28:33 [WARNING] The exception was: java.net.BindException: Cannot assign requ
ested address: JVM_Bind
16:28:33 [WARNING] Perhaps a server is already running on that port?
It usually means that the server is still running on that port. This usually happens when you exit the console improperly. The easiest way to fix it:
- Go into Task Manager.
- Look for the Java program that is running.
- Click on the Java program and choose 'End Task'.
- On Linux, just run 'pkill -9 java'.
- I have no idea.....who uses an Apple computer for running Minecraft servers o_O
This should then allow you to run the server without the error. Alternatively, you can just restart your computer, which will free up the port and stop whatever is running on it.
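Before hunting for stray processes, a quick way to confirm whether the port is actually taken is to try binding it yourself. A minimal sketch using only the Python standard library (not part of the original thread):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Try to bind the port; if the bind fails, something is already using it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(25565))  # True means nothing is currently bound to the default Minecraft port
```

If this reports `False` after the server has supposedly shut down, a leftover Java process is the usual suspect.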
Posted 17 February 2014 - 10:18 PM
Posted 17 February 2014 - 10:21 PM
Drivers don't use ports, in case you didn't know. Computer drivers can still function without internet, so saying they use a port does not make sense.
Posted 27 February 2014 - 06:12 PM
Posted 27 February 2014 - 06:57 PM
1. Go to 'Search programs'.
2. Type in CMD and hit Enter. A command console should appear.
3. Type ipconfig without any spaces.
4. Find your IPv4 address and NOTE IT DOWN. On a typical home network it should look like this, with either one or two digits at the end - 192.168.X.XX
5. Go to your server.properties file and add this to the server-ip line. Your server may now work.
6. Now when you add your server in-game, make sure you add it using your EXTERNAL IP address and not the one you found using the command prompt. You can find this by typing 'What is my IP' into Google.
Again, this fixed the issue for me, so it's a matter of chance whether it will work for you.
Edited by XxHDxX, 27 February 2014 - 07:00 PM.
|
OPCFW_CODE
|
Official Wine coding style rules/guidelines ?
matteo.mystral at gmail.com
Fri Feb 13 12:31:12 CST 2015
2015-02-13 15:15 GMT+01:00 Indrek Altpere <efbiaiinzinz at hotmail.com>:
first of all, thanks for the email. I'm not sure there is any
"official" Wine style guideline, or that I'd know what that would be. Here's my take:
> Would it be possible to get more of the guidelines written down on the
> http://wiki.winehq.org/Developers-Hints or
> http://wiki.winehq.org/SubmittingPatches pages, to get them set in stone,
> so-to-speak? And also somehow show which components have different
> guidelines and what are they?
> Right now, all the information seems to get reiterated to each new
> developer, causing a lots of overhead to the reviewers.
Sure, updating the wiki seems like a great idea. Note that everyone
can update it, so feel free to just go ahead and write down anything
you think would be useful.
> After being subscribed to this mailing list for a few months, I have noticed
> a pattern with new developers trying to submit patches:
> 1) first patches tend to receive lots of style related feedback
> 2) due to the big list of changes requested, the developers tend to miss a
> few with their resend and have to re-resend the patches again
> or due to the initial patches being too "out-of-style" the
> reviewers missed a few places in the first round and point them out in later
> round, causing the re-resend
> 3) sometimes the cycle goes on for quite a while
> And to confuse things more:
> 1) sometimes different reviewers have different/conflicting opinions about
> style rules
> 2) some rules are component specific, mostly set by the maintainer
Those two points really go together. You're right, different reviewers
(and consequently the components they are responsible for) tend to use
different styles. I agree it might be confusing, but it looks like
using a single style for all of Wine / reformatting all the
nonconforming code is simply not going to happen, so we have to live
with that. FWIW, it's not uncommon for other open source projects to
use different code styles in different components, for one reason or another.
> 3) http://wiki.winehq.org/SubmittingPatches has only general rules
> 3.1) "Follow the style of surrounding code as much as possible, except when
> it is completely impractical to do so" - there have been many cases where
> patches get rejected because although it was done in the style of
> surrounding code, the component (or wine overall) has adopted new style
> rules, which are not evident in the surrounding code at all.
This is very true. I guess the actual "rule" is more like "follow the
style of recent patches touching surrounding code / in the same DLL".
As an example, there are chunks of wined3d code which are very old and
use different styles. If you are going to send a wined3d patch
touching one of those chunks you generally want to use the current
style anyway. You generally shouldn't reformat code just to make it
conform to the newer style, but you can / should reformat lines you're
touching anyway. It might look funny on the edge between old and new
code, in that case you are usually free to do some adjustments.
You certainly want to use the new style for new functions, regardless
of what the surrounding code looks like.
> 3.2) as a minor issue, the reference to Developer-Hints in the guidelines is
> not linked to http://wiki.winehq.org/Developers-Hints
> Lots of work has already been done with the Developer-Hints page, but many
> issues still come up in the mailinglist that have not been mentioned there.
> UINT vs unsigned int, int* foo vs int *foo, -1 vs ~0U, function parameter
> naming (Hungarian notation ? Ok for API function definitions based on MSDN
> doc or not?).
> Officially LONG use is suggested because long differs between 64bit linux
> and 64bit windows. Should the developers also BOOL FLOAT etc uppercase
> "safe" defines everywhere or not? true/false vs TRUE/FALSE?
AFAIK Hungarian notation is always best avoided. Of course when you
need to preserve API compatibility (e.g. public structure fields)
there is no way around it. I don't think anyone wants to see int* foo, either.
LONG vs long is not a suggestion but a requirement if it's something
affecting the API, because of the 64-bit implementation differences
(i.e. LLP64 vs LP64). As for the other data types, I guess we're not
entirely consistent in general. We usually prefer to avoid the all-caps
data types, but just try to follow the preexisting conventions in that
DLL and you should be fine.
There are no lowercase true and false in C89, so you need the uppercase
constants. No discussion here ;)
> It also seems that (at least in D3D related code) LP* usages are not
> allowed, LPWSTR => const WCHAR * should be used for parameters.
> But for example in kernel32/volume.c LPWSTR is used in 10 places as a
> function parameter, while const WCHAR * is used in only one place.
> Is the LP* rule an overall rule, or is it D3D specific? If overall, then why is it
> so widely used in all the kernel files? If D3D specific, how does the
> developer know it beforehand?
LPFOO types are particularly frowned upon since they obfuscate the
pointer types and tend not to do the expected thing with const.
I haven't checked but I guess that they are used so much in kernel32
just because they are in older code.
> Suggestions, comments etc ?
> Indrek Altpere
Thanks again for bringing this up. Hopefully if I wrote something
wrong / stupid someone is going to correct me. Also this kind of
discussion is often a recipe for flamewars, so I put my fire-resistant
armor on just in case :P
|
OPCFW_CODE
|
The project is the encapsulation of 7-Zip functionality in a DLL that can be called by any Windows application (and specifically, a Delphi application). The requirements for the DLL are as follows:
1) Use STDCALL/WINAPI calling conventions
2) Provide callback parameters for events
3) The DLL must encapsulate the ENTIRE 7-Zip functionality. Base starting projects can be the 7-Zip GUI project, or the 7-Zip Command Line project. In particular:
- Listing of files inside archives. If an archive is password protected, a callback must ask for the password.
- Random decompression of files inside archives. Files across multiple different folders inside an archive should be decompressed in a single pass. Callbacks must provide extraction progress, the name of the file being extracted, the opportunity to cancel the operation, handlers for overwrite actions (skip/always/never), and the ability to obtain passwords for encrypted files.
- Random compression of files into archives. Any collection of files should be compressed in a single pass; for instance, some files can be compressed recursively, others non-recursively, in a single operation. The full range of compression parameters in 7-Zip must be available for the compression operation, as well as callbacks for progress reports, etc.
4) An example Delphi project that shows/tests the DLL.
5) The DLL should be written in Visual C++ 6.0/7.0, using the downloadable 7-Zip sources as a starting point.
6) The DLL will work with 7-Zip installed on the target system (it requires 7-Zip).
7) The DLL should support all archive types that are supported by 7-Zip. This is already automatically available with the console or gui 7-Zip projects as a starting point.
For more information about 7-Zip, visit <[url removed, login to view]>.
In essence, this project is simply converting existing 7-Zip code, which already works exactly as required above either from the command line or from a windowed interface, into a DLL. All the existing behavior is already provided - you just need to package it into a DLL, and instead of outputting progress to a console/window, feed it to callback functions that are provided to the DLL.
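To make the callback requirement concrete, here is a hedged sketch of how a client could declare and exercise such a progress callback with Python's ctypes. The callback signature and the DLL/export names are invented for illustration; the spec's STDCALL requirement would map to `ctypes.WINFUNCTYPE` on Windows, while `CFUNCTYPE` (cdecl) is used here so the sketch runs anywhere:

```python
import ctypes

# Hypothetical progress-callback signature: (percent_done, filename) -> 1 to
# continue, 0 to cancel. On Windows, ctypes.WINFUNCTYPE would match the
# stdcall convention required by the spec.
ProgressCallback = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_char_p)

def on_progress(percent, filename):
    print(f"{filename.decode()}: {percent}%")
    return 1  # keep extracting

cb = ProgressCallback(on_progress)

# A real client would hand `cb` to the DLL, e.g.:
#   dll = ctypes.WinDLL("sevenzip_wrapper.dll")  # hypothetical DLL name
#   dll.ExtractAll(archive_path, out_dir, cb)    # hypothetical export
# Here we invoke the wrapped callback directly to show the marshalling round-trip.
assert cb(50, b"readme.txt") == 1
```

The same pattern (one function-pointer type per event, passed into each exported entry point) covers the password, cancel, and overwrite callbacks as well.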
The owners of this project are NOT associated in any way with 7-Zip.
1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.
2) Installation package that will install the software (in ready-to-run condition) on the platform(s) specified in this bid request.
3) Exclusive and complete copyrights to all work purchased. (No GPL, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site).
All Win32 Platforms - from Win95Gold to Server2003.
|
OPCFW_CODE
|
The babble dialog system is just used to make sure the actors don’t repeat the same line over and over again. It’s a way to control what they babble :)
The babble system is activated either when the actor is used or a playdialog event is received by the actor.
alias <alias name> <real name> <parameters>
where parameters can be:
global - all instances act as one (not per actor)
stop - once used, don't use again
timeout <seconds> - once used wait specified time until using again
maxuse <times> - can only be used specified number of times
weight <weight> - random weighting
dialog <alias> <parameters>
where parameters can be:
randompick - randomly picks out dialog from alias list
playerhas <item name> - only plays if player has specified item
playerhasnot <item name> - only plays if player doesn't have specified item
has <item name> - only plays if actor has specified item
has_not <item name> - only plays if actor doesn't have specified item
depends <variable name> - only plays if the specified variable is set
dependsnot <variable name> - only plays if the specified variable is not set
random <percent between 0 and 1> - plays this percent of time
Both alias and dialog commands are put in the init/server section of the tiki of the actor that needs to say the dialog.
The way this works is when the system tells a dialog to play it starts at the top of the dialog list and tries each dialog line. The first one that it finds where all of the conditions are true is played.
Then it picks which exact dialog sound to play out of the alias list. It normally just does this in order, will play alias1, then alias2, etc. The randompick dialog parameter can change this as can the stop, timeout, maxuse, and weight alias commands.
alias secret_dialog1 "sound/whatever.wav" stop
alias edenmale_dialog1 "sound/dialog/edenfall/recruiter/newlife.wav"
alias edenmale_dialog2 "sound/dialog/edenfall/recruiter/bow.wav" maxuse 2
alias edenmale_dialog3 "sound/dialog/edenfall/recruiter/joinus.wav" timeout 5
dialog secret_dialog playerhas secret_item1
dialog edenmale_dialog randompick
In this example the following will happen: if the player has secret_item1 then the actor will say secret_dialog1 (only once) otherwise he will say one of the edenmale_dialogs. It will randomly pick between the 3 of them except it will play edenmale_dialog2 a max of 2 times and will only play edenmale_dialog3 every 5 seconds.
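The top-down, first-match rule described above can be sketched as follows (a minimal illustration, not the actual tiki script engine; condition names are made up):

```python
def pick_dialog(dialogs, state):
    """Walk the dialog list top to bottom and return the first entry
    whose conditions all hold (the first-match rule described above)."""
    for name, conditions in dialogs:
        if all(state.get(c, False) for c in conditions):
            return name
    return None

# Mirrors the example: the secret dialog requires the player to have
# secret_item1, while the edenmale dialog has no conditions and acts
# as the fallback.
dialogs = [
    ("secret_dialog", ["playerhas secret_item1"]),
    ("edenmale_dialog", []),
]
assert pick_dialog(dialogs, {"playerhas secret_item1": True}) == "secret_dialog"
assert pick_dialog(dialogs, {}) == "edenmale_dialog"
```

Alias selection within the chosen dialog (in-order playback, `randompick`, `weight`, etc.) would then be a second pass over that dialog's alias list.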
lipsync <options> <list, wave, or mp3 file>
options: -q for quiet (doesn’t print most messages)
-f for force (converts all sounds regardless of timestamps)
NOTE: this program only works in NT right now (will be fixed).
This will produce a lip file for the file specified or all of the files specified in the list file.
For an example of a list file look at copy.ls in the root fakk directory.
Make sure to place the lip files in the same directories as their corresponding sound files.
This file tells the game how much and when to move the actor’s mouth while he is talking.
|
OPCFW_CODE
|
See the Wikipedia definition:
In software, a package management system, also called package manager, is a collection of software tools to automate the process of installing, upgrading, configuring, and removing software packages for a computer's operating system in a consistent manner. It typically maintains a database of software dependencies and version information to prevent software mismatches and missing prerequisites.
Packages are distributions of software, applications and data. Packages also contain metadata, such as the software's name, description of its purpose, version number, vendor, checksum, and a list of dependencies necessary for the software to run properly. Upon installation, metadata is stored in a local package database.
Operating systems based on Linux and other Unix-like systems typically consist of hundreds or even thousands of distinct software packages; in the former case, a package management system is a convenience, in the latter case it becomes essential.
Ian Murdock has commented that package management is "the single biggest advancement Linux has brought to the industry", that it blurs the boundaries between operating system and applications, and that it makes it "easier to push new innovations [...] into the marketplace and [...] evolve the OS".
A package management system is often called an "install manager". This can lead to confusion between a package management system and an installer. The differences include:
| Package management system | Installer |
|---|---|
| Usually part of the operating system. | Each product comes bundled with its own installer. |
| Uses one installation database. | Performs its own installation, sometimes recording information about that installation in a registry. |
| Can verify and manage all packages on the system. | Works only with its bundled product. |
| One package management system vendor. | Multiple installer vendors. |
| One package format. | Multiple installation formats. |
A package, for package managers, denotes a specific set of files bundled with the appropriate metadata for use by a package manager. This can be confusing, as some programming languages often use the word "package" as a specific form of software library. Furthermore, that software library can be distributed in a package of files bundled for a package manager.
Package management systems are charged with the task of organizing all of the packages installed on a system. Typical functions include installing, upgrading and removing packages, verifying their integrity, and resolving dependencies.
Some additional challenges are met by only a few package management systems.
Computer systems which rely on dynamic library linking, instead of static library linking, share executable libraries of machine instructions across packages and applications. In these systems, complex relationships between different packages requiring different versions of libraries results in a challenge colloquially known as "dependency hell". On Microsoft Windows systems, this is also called "DLL hell" when working with dynamically linked libraries. Good package management systems become vital on these systems.
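At its core, avoiding this problem requires ordering packages so that every dependency is installed before its dependents. A minimal sketch of that resolution step (depth-first topological sort, no cycle detection; package names are invented):

```python
def install_order(deps):
    """Return an order in which packages can be installed so that every
    dependency precedes its dependents. `deps` maps a package name to
    the list of packages it depends on."""
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in deps.get(pkg, []):
            visit(dep)          # install dependencies first
        order.append(pkg)

    for pkg in deps:
        visit(pkg)
    return order

deps = {"app": ["libfoo", "libbar"], "libfoo": ["libc"],
        "libbar": ["libc"], "libc": []}
order = install_order(deps)
assert order.index("libc") < order.index("libfoo") < order.index("app")
```

Real package managers layer version constraints, conflicts, and cycle handling on top of this basic ordering, which is where "dependency hell" arises.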
System administrators may install and maintain software using tools other than package management software. For example, a local administrator may download unpackaged source code, compile it, and install it. This may cause the state of the local system to fall out of synchronization with the state of the package manager's database. The local administrator will be required to take additional measures, such as manually managing some dependencies or integrating the changes into the package manager.
There are tools available to ensure that locally compiled packages are integrated with the package management. For distributions based on .deb and .rpm files as well as Slackware Linux, there is CheckInstall, and for recipe-based systems such as Gentoo Linux and hybrid systems such as Arch Linux, it is possible to write a recipe first, which then ensures that the package fits into the local package database.
Particularly troublesome with software upgrades are upgrades of configuration files. Since package management systems, at least on Unix systems, originated as extensions of file archiving utilities, they can usually only either overwrite or retain configuration files, rather than applying rules to them. There are exceptions to this that usually apply to kernel configuration (which, if broken, will render the computer unusable after a restart). Problems can be caused if the format of configuration files changes. For instance, if the old configuration file does not explicitly disable new options that should be disabled. Some package management systems, such as Debian's dpkg, allow configuration during installation. In other situations, it is desirable to install packages with the default configuration and then overwrite this configuration, for instance, in headless installations to a large number of computers. (This kind of pre-configured installation is also supported by dpkg.)
To give users more control over the kinds of software that they are allowing to be installed on their system (and sometimes due to legal or convenience reasons on the distributors' side), software is often downloaded from a number of software repositories.
When a user interacts with the package management software to bring about an upgrade, it is customary to present the user with the list of things to be done (usually the list of packages to be upgraded, and possibly giving the old and new version numbers), and allow the user to either accept the upgrade in bulk, or select individual packages for upgrades. Many package management systems can be configured to never upgrade certain packages, or to upgrade them only when critical vulnerabilities or instabilities are found in the previous version, as defined by the packager of the software. This process is sometimes called version pinning.
Some of the more advanced package management features offer "cascading package removal", in which all packages that depend on the target package and all packages that only the target package depends on, are also removed.
Each package manager relies on the format and metadata of the packages it can manage. That is, package managers need groups of files to be bundled for the specific package manager along with appropriate metadata, such as dependencies. Often, a core set of utilities manages the basic installation from these packages and multiple package managers use these utilities to provide additional functionality.
For example, yum relies on rpm as a backend. Yum extends the functionality of the backend by adding features such as simple configuration for maintaining a network of systems. As another example, the Synaptic Package Manager provides a graphical user interface by using the Advanced Packaging Tool (apt) library, which, in turn, relies on dpkg for core functionality.
By the nature of free and open source software, packages under similar and compatible licenses are available for use on a number of operating systems. These packages can be combined and distributed using configurable and internally complex packaging systems to handle many permutations of software and manage version-specific dependencies and conflicts. Some packaging systems of free and open source software are also themselves released as free and open source software. One typical difference between package management in proprietary operating systems, such as Mac OS X and Windows, and those in free and open source software, such as Linux, is that free and open source software systems permit third-party packages to also be installed and upgraded through the same mechanism, whereas the package management systems of Mac OS X and Windows will only upgrade software provided by Apple and Microsoft, respectively (with the exception of some third party drivers in Windows). The ability to continuously upgrade third party software is typically added by adding the URL of the corresponding repository to the package management's configuration file.
Besides the systems-level application managers, there are some add-on package managers for operating systems with limited capabilities and for programming languages where developers need the latest libraries.
In contrast to systems-level application managers, application-level package managers focus on a small part of the software system. They typically reside within a directory tree that is not maintained by the systems-level package manager (like c:\cygwin or /usr/local/fink). However, this is not the case for the package managers that deal with programming libraries. This leads to a conflict as both package managers claim to "own" a file and might break upgrades.
|
OPCFW_CODE
|
Icinga is a fork of the Nagios monitoring system. There are lots of changes and upgrades compared to Nagios, especially in version 2. The most visible difference is the UI, which is built on Ext JS. Other significant differences are in the host and service definitions. Icinga 2 is designed to monitor large, complex environments.
We need to make sure to install a LAMP stack and the EPEL repository on the CentOS 7 server for the dependencies that support Icinga 2. Icinga 2 collects service information through monitoring plugins, so we need to install the Nagios plugins. We should also install the IDO module for MySQL, which is used by Icinga Web 2 and other web interfaces. Then set up the MySQL database (create the icinga DB and import the Icinga 2 IDO schema into it). Web interfaces and other Icinga addons send commands to Icinga 2 through the external command pipe, so we need to set that up as well.
Setup Icinga Web 2 Interface
- Navigate your browser to http://localhost/icingaweb2/setup, http://IP-Address/icingaweb2/setup or http://Domain-Name/icingaweb2/setup, which will launch the Icinga Web 2 setup wizard. Paste the generated authentication token, then hit the Next button to move forward.
- Module selection: by default the Monitoring module is selected, which is the core module for Icinga Web 2. Simply hit the Next button.
- Required packages: this is an important screen. Install any required package shown in red; packages shown in green are already present.
Set the default timezone in php.ini:
# nano -w /etc/php.ini
date.timezone = "Asia/Kolkata"
# systemctl restart httpd
- After installing the missing packages, restart the Apache web service and hit the Refresh button. Once all the packages show as installed, move on by hitting the Next button.
- Authentication: choose the authentication type. Here we are going to use the database as the authentication method.
- Entering database details for Icinga Web 2: we are going to create a new database for Icinga Web 2. Fill in the database name, database user name and database password, then hit the Validate Configuration button. Once validation succeeds, hit the Next button.
- Creating a database for Icinga Web 2: enter your MySQL root user name and password to create the new icingaweb2 database on your system. Then hit the Next button.
- Authentication backend: choose the icingaweb2 database for backend authentication, because we have already selected database authentication. Then hit the Next button.
- Administrator account creation: create an administrator account to manage Icinga 2. Then hit the Next button.
- Application configuration: choose the appropriate options to adjust the application and logging configuration to fit your needs.
- We have successfully configured Icinga Web 2. Hit the Next button.
- Welcome screen for monitoring module configuration: hit the Next button. This is the core module for Icinga Web 2. It offers various status and reporting views with powerful filter capabilities that allow you to keep track of the most important events in your monitoring environment.
- Monitoring backend: choose the backend name and backend type which will be used to retrieve the monitoring information.
- Enter monitoring IDO resource: enter the IDO MySQL database information created earlier. If you have forgotten it, it is stored in the /etc/icinga2/features-available/ido-mysql.conf file. Once validation succeeds, hit the Next button.
- Command transport: choose the command file that will be used to send notifications to the monitoring instance.
- Monitoring security: to secure your environment against attack, certain commands and custom variables need to be protected here.
- Monitoring module installed successfully: we have successfully installed and configured the monitoring module.
- Icinga Web 2 successfully installed: congratulations, we have successfully installed and configured the Icinga Web 2 interface.
- Icinga Web 2 login screen: log in with the administrator username and password.
If you require help, contact SupportPRO Server Admin
|
OPCFW_CODE
|
Student of: 5A Scratcher Joined 3 years ago United States
I am a Christian/Catholic I believe in God ✝
Bye. I do Python now and will make other projects that don't make sense soon. Scratch has really changed for me. I hope that anyone who reads this is having..
What I'm working on
a great time on scratch and that they will make lots of friends on this platform. I did.
- markus98k (python expert, chess player and a friend)
Featured ProjectPanzoid Intro || DO NOT STEAL! ||
What I've been doing
Shared Projects (57)View all
Favorite ProjectsView all
- What to do when someone says something mean. by markus98k
- Scratch Clicker HACKED! by markus98k
- 1st Remix! by markus98k
- Intro for @Purple_turtle1 by markus98k
- Random Follower Intro! by markus98k
- What apple did to design I-phone 13 || #all #animations #art #games #tutorial by markus98k
- Audition for Chaos Squad @markus98k by markus98k
- Intro Entry for @SLear #all #animations by markus98k
- My Time On The 'Propose Projects To Be Featured' Studio by markus98k
- Intro for @itsjustme1222 by markus98k
- Intro Request for @sasukevsnaruto19 by markus98k
- Ask Groot! by markus98k
- First day of school a platformer by Soccer-queen-123
- Intro for @Legendary_Codes by markus98k
- Voice Visualizer Engine by markus98k
- Intro Entry for @Hacker_mode-5000 by markus98k
- Panzoid Intro || DO NOT STEAL! || by markus98k
- Intro Entry for @cs4060463 by markus98k
- Intro Entry for @GFOPPington by markus98k
- Intro Entry for @Htave7 by markus98k
Studios I Curate
- <>Demon Slayer RP<>
- アウトロcontest 開催!ぜひ参加してね! 【拡散希望】
- ADD ALL PROJECTS
- ✉ 01 ⁞ -ducki's pond
- 50+ Intro Contest (CLOSED)
- [closed]intro contest/イントロコンテスト
- 50 Followers Special!!
- PPTBF Moderators
- Cats Studio
- Proplayerisaaban Fan club!
- Add Whatever!
- Final Story 5A
- Week 3 5A
- Week 2 Coding- 5A
- Week 1 Coding
- 7 Block Challenge
|
OPCFW_CODE
|
Shall I use POJO or JSONObject for REST calls
Recently I stumbled upon a situation where my new team heavily uses JsonObject for REST data interchange. Their view is that using POJOs binds us tightly to the REST service, while JsonObject gives freedom. It also saves us from unnecessary data serialisation while greatly reducing the number of classes.
I have some counter-points:
POJOs give more meaning to the data, and we hold the data with the correct data types.
If we need only 2 of the 10 fields in the JSON, we can deserialise into a 2-field class (e.g. with @JsonIgnoreProperties(ignoreUnknown = true)).
I don't know the exact cost of deserialisation, but I have a feeling there shouldn't be much of a difference.
Can someone help me understand which perspective is the way to go?
Please provide some pros and cons of using POJO and JSONObject.
Thanks
I see the following advantages in having POJOs:
Readability - You will not really know the structure of a complex JSON; writing a simple get will require one to know the structure of the JSON. Please refer to the "POJOs over JSON Objects" section of my article here -> https://medium.com/tech-tablet/programming-went-wrong-oops-38d83058885
Offers Type Checks - We could easily assign a Cat to a Dog and not even know about it till runtime.
Feels more object-oriented with Composition & encapsulation - It's easy to understand the designer's perspective with a POJO. A Car which IS-A Vehicle that HAS-A Wheel.
You could choose what you wanna deserialize and keep only that in memory - When deserializing the object that we have just received over the network, with a JSON Object, there is no way to choose what has to be deserialized and stored into memory. If you have an object of 1 MB size where only 200 Bytes is your payload, we will end up holding the entire 1 MB object in memory if we don't use POJOs.
Allows collections, and stream operations on them, in a legible way - There is no native support for stream operations on a JsonNode. We would need to use a StreamSupport object, which could be avoided.
Allows cross-framework referencing. With a few annotations, you can choose to map specific fields to a database client - When you use an ORM framework for your database, it's easy to annotate and map entities to the database schema.
Naturally supports design patterns
Minimizing non-native dependencies - Why use a JsonNode or an equivalent that does not come with native Java, especially if it has the above disadvantages?
If you are concerned about the rituals that come with a POJO, like having getters, setters etc., take a look at "Lombok". This library will help you create your POJOs in a concise way and still reap the above benefits.
On the other hand, if you are dealing with a hard-to-change API that responds with a dynamically changing response, JsonNode is a quick-win candidate.
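To make the readability and type-check points concrete, here is a minimal stdlib-only sketch (no Jackson or Lombok on the classpath, and the Price class is purely illustrative). The POJO access is verified by the compiler, while the Map-style access, which is essentially what a generic JsonObject boils down to, is only checked at runtime:

```java
import java.util.HashMap;
import java.util.Map;

// A plain POJO: field names and types are checked at compile time.
class Price {
    private final String currency;
    private final long cents;
    Price(String currency, long cents) { this.currency = currency; this.cents = cents; }
    String getCurrency() { return currency; }
    long getCents() { return cents; }
}

public class PojoVsMap {
    public static void main(String[] args) {
        Price pojo = new Price("EUR", 1999);
        long typed = pojo.getCents();              // a typo here would not compile

        // Map style: roughly what a generic JsonObject gives you.
        Map<String, Object> json = new HashMap<>();
        json.put("currency", "EUR");
        json.put("cents", 1999L);
        long untyped = (Long) json.get("cents");   // cast and key only checked at runtime

        System.out.println(typed == untyped);      // prints "true"
    }
}
```

A misspelled key or a wrong cast in the Map version compiles cleanly and fails only when the code runs.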
Could you please explain all the above points in a little more detail?
@BoratSagdiyev I have added explanations for the above points inline.
It really depends on the situation and the nature of your application. If the data you are receiving has no fixed structure/field names/data types, or it is ever changing, then of course using JsonObject makes more sense.
On the other hand, if it is of fixed structure and involves operations which access the fields, it's better to go with POJOs. They have better readability IMO.
Depends on circumstances:
Use Pojo :
If the data fields are fixed (their names and types), use POJOs, as the code will be clear and easily readable.
Refactoring will be easier if the type is used in many places.
If you want to persist this data in a db, then modifying the data will also be easy. Suppose there is a value in your POJO which is used while performing a calculation, but which you don't want to save in the db: you can simply mark it with the @Transient annotation in Java, but if you had used JsonObject this would have required some code changes.
Suppose you don't want to pass null values in the response, you can easily do it using pojos by using @JsonInclude(Include.NON_EMPTY) in java.
Use JsonObject:
If the data fields are not fixed (their types or structure vary), you won't be able to map them to a POJO.
I want to add:
with POJOs you can use the builder pattern.
future refactors will be easier. Imagine you have a class Person, and you use this class in many places in your code. If your Person class needs to change one of its fields, you can easily find where to change it.
using POJOs makes your code more cohesive with your domain. If you are writing code for a car company, you will have classes like Car, Wheels, Engine, etc. Your code will become more readable and testable.
This argument really boils down to whether you want to use POJOs or Maps for the data you are dealing with, as that is the fundamental difference between the 2 styles of handling data.
JsonObject (from Java EE 7) is just a Map (the actual type signature extends Map<String, JsonValue>).
POJOs (e.g. deserialised using Jackson) are real objects.
So if you don't normally use domain/model classes for your data, then I guess a Map will do. If you do (like most developers would), then it's a good idea to extend this practice to your API contracts.
To address specific points:
JsonObject saves from unnecessary data serialisation at the same time reducing number of classes heavily.
It doesn't: both of these JSON APIs require serialisation/deserialisation, which is the process of turning wire data into an object. I would bet that Jackson is faster than JsonObject, but as Satyendra Kumar pointed out, benchmarking is going to prove it one way or the other.
while using pojo we are binding tightly with the rest service, while jsonObject gives freedom
If you are using the data from the API you are bound to the data model, whether you get the data from a property foo by response.getFoo() or response.get("foo").
For a really comprehensive discussion on the tradeoffs between classes and maps, check out this SoftwareEngineering post
It's worth pointing out that using Jackson/POJOs doesn't mean you are tightly coupled to your local model of a REST response. Jackson has excellent support for the Tolerant Reader pattern, which means that new fields in the API response will not break deserialisation, and a bunch of annotations like @JsonCreator, @JsonValue, @JsonIgnore for mapping the API responses back to your local domain model.
It also provides powerful tools for dealing with collections and generics.
Finally if the service you are interacting with is big and complex, if they provide a swagger/raml spec then you can generate the models based on that spec which should greatly reduce the time taken to curate the POJOs.
Their view is while using pojo we are binding tightly with the rest service, while jsonObject gives freedom.
Ask them: for how long is this freedom required? Is the API model always going to change? It WILL settle down at some point. Also, how is it binding tightly? If there is some part of the model that you are not sure of, you can create a generic object for that part, which will give you as much flexibility as you want.
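One way to keep that flexibility while still using POJOs is a hybrid model: type the stable fields and keep only the volatile part generic. A stdlib-only sketch with illustrative names (OrderResponse is not from any real API):

```java
import java.util.HashMap;
import java.util.Map;

// Stable fields are typed; the part of the payload that keeps changing stays schemaless.
class OrderResponse {
    final String orderId;                 // stable, typed
    final Map<String, Object> metadata;   // volatile part, kept generic

    OrderResponse(String orderId, Map<String, Object> metadata) {
        this.orderId = orderId;
        this.metadata = metadata;
    }
}

public class HybridModel {
    public static void main(String[] args) {
        Map<String, Object> meta = new HashMap<>();
        meta.put("experimental_flag", true);   // a field we do not want to model yet

        OrderResponse r = new OrderResponse("A-42", meta);
        System.out.println(r.orderId);                            // prints "A-42"
        System.out.println(r.metadata.get("experimental_flag"));  // prints "true"
    }
}
```

When the unstable part of the API settles down, the generic map can be promoted to typed fields without touching the rest of the code.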
Also it saves from unnecessary data serialisation at the same time reducing number of classes heavily.
I don't think it is unnecessary. Using JsonObject directly is simply cumbersome.
About reducing the number of classes, you should anyway have some model class for data that is being exchanged. It gives more clarity. If you are using Swagger, it will add to the specification of your REST endpoints. Swagger automatically creates forms to enter data based on the data model. API becomes easier to maintain (no extra documentation required). Testing becomes easier.
I don't know exactly on the cost of deserialisation, but somehow I have a feeling it shouldn't be much difference.
The only way to be sure is benchmarking. You can use this as a starting point to get an insight and compare the current implementation https://github.com/fabienrenaud/java-json-benchmark.
|
STACK_EXCHANGE
|
This is my report for the CMU LTI colloquium.
Video link: https://www.youtube.com/watch?v=cPWFJE6FKeY
Title: The Words are Alive: Associations with Sentiment, Emotion, Colour, and Music!
Speaker: Saif Mohammad (National Research Council Canada)
Date: September 19, 2014
This presentation deals mainly with the speaker’s work on NLP, specifically sentiment analysis, emotion, color, and music. Most of the work starts from the sentiment lexicon he built using Amazon Mechanical Turk. While his work includes pure text sentiment (emotion) analysis, he has also made effort to sonify and visualize the emotional information of words. This presentation was very interesting to me for several reasons. First, I worked on sentiment analysis in the past, so I have some sense of what have been hot research topics and approaches in this research field. I also wanted to know how the speaker approached and solved the problem. Furthermore, now that I no longer work on sentiment analysis, I wanted to hear state-of-the-art techniques and applications. Second, the speaker’s work not only focused on techniques, but also combined the techniques with art. Image and music make me happy and excited. This aspect of his work impressed me and caused me to think a lot about my future research and thesis.
While most of his work is based on sentiment (emotion) analysis and the building of sentiment lexicons, he gave only a brief introduction to sentiment-lexicon construction, and then presented its application to sonification and visualization. Only then did he introduce several techniques for better sentiment analysis. It was a good strategy because sonification and visualization were the most fun part. I'll follow that order in this report as well.
The speaker utilized crowdsourcing to collect sentiment words. For most work on sentiment analysis, sentiment lexicons are the fundamental resource. It can be said that the performance of sentiment analysis depends mostly on the quality of the sentiment lexicon used. There have been many efforts to build sentiment lexicons. However, because of the variety of domains, there are no universally used lexicons. Rather, classical, small lexicons such as LIWC and General Inquirer are widely used in research for English. These lexicons were compiled by psychologists, and the validity of sentiment lexicons is often questioned from a psychological perspective. Many other languages do not even have a small lexicon. In this case, crowdsourcing is a good way to collect sentiment words.
The speaker and his colleague Hanna Davis tried to generate music from novels automatically. They first divided a novel into sections and measured the degree of emotion of each section on the basis of their sentiment lexicon. If a section contains much emotion, the section was divided into fine-grained segments. Music is then generated for each segment according to the proportions of emotions. They made some building blocks for this; for example, happiness is associated with major keys and sadness with minor keys. Happiness and excitement make the tempo fast. Joy and calm generate a sequence of consonant notes, while excitement, anger, and unpleasantness prefer inharmonics. Music is important for novels as well as movies. For example, a famous Japanese novelist Murakami Haruki strategically associates each of his novels with a music work; the music is mentioned many times in the story and readers can enjoy the novel by reading it and listening to the music in both ways. By listening to the generated music, we can catch and enjoy the overall emotional flow of the novel.
While these rules are intuitive and make sense, there are still many challenges. Most of all, the generated music should be listenable and pleasant instead of being a random sequence of notes. I listened to several results of this work on the website, and from that perspective, this work is far from the best. The main problem is that the music has no beautiful melody line. I would suggest that this work take advantage of existing research on music synthesis. I do not know that research field well, but one straightforward approach could be to use segments of existing music and assemble them, instead of generating music from scratch. Since this work is based on the researchers' intuition about the relation between emotion and musical features, it would be worth trying to automatically extract musical features using machine learning techniques. Similarly, another limitation of this work is that some emotions (e.g., "trust") are not associated with intuitive musical features. According to the results, many novels have "trust" as their highest emotion, which makes it hard to interpret the emotions they convey.
The speaker and his colleagues also worked on word-color association. For example, “iceberg” is highly associated with white and “vegetation” with green. They again crowdsourced the information and built 24,200 word-color pairs with total 11 colors. Research questions could be whether the associations have high agreement, whether concrete concepts have higher agreement, and so on. Like the music case, visualization is also important especially for marketing purposes. Companies try to associate their brands with certain colors or images in order to make their identity clear as well as to remind people of their brands with certain visual cues. The color of a logo plays a big role in a way to perceive the company; e.g., blue with trustfulness, white with cleanness, etc. Therefore, word-color association is useful and intensely used information for marketers. This work is thus beneficial in the sense that it used a quite large-scale method (crowdsourcing) to obtain that information and investigated the validity of the information. However, the work does not use a data-driven approach, which may yield a result biased by human consciousness.
The speaker’s work on word-music and word-color association was impressive, especially because this is not what I can easily think of. This work does not contain heavy mathematics or fancy machine learning techniques. I am somewhat skewed toward math-intensive research, often missing fun and joyful ideas. This work does not have correct answers, and evaluation is not straightforward. I would say it’s more like Human-Computer Interaction than Language Technologies. Although the goal and usefulness may seem somewhat vague and not organized at the initial stage, follow-up work can build on this and produce useful results afterward. Along this line, I thought about whether this research could be a good thesis topic. People often emphasize the importance of formalizing a new problem and novelty for thesis work. This research is novel; the visualization and sonification of text have been done mainly by humans. In addition, traditional approaches depend on human intuition and small-scale surveys, rather than scientific and scalable methodologies. Computer-aided and data-driven approaches may be able to help experts get deeper insights, counterintuitive information, and statistically reliable information.
Regarding the techniques the speaker used to build sentiment lexicons, they have similarities and differences to my approaches. One similarity is that we both used Point-wise Mutual Information to obtain sentiment scores of general words occurring with seed sentiment words. Their work used Twitter hashtags for seed words, and in one study in 2010, I used emoticons as seed words. It is also similar that we both considered negation. However, while I replaced a negated word by prefixing it with “no_”, they seemed to use different methods but it is not very clear exactly what they did. They tackled the polarity degree problem too, which I did not. This problem is to assign each word a score of polarity intensity. As the speaker pointed out in the presentation, this problem is hard because we humans are bad at assigning absolute real value to words. On the other hand, humans are good at comparing two words, so they formulated the problem as comparing the intensities of two words. I also gave a thought to this problem before, but did not make meaningful progress.
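For readers who want the measure mentioned above spelled out, the standard PMI-based sentiment score is computed roughly as follows (a generic formulation; the exact variant used in either work may differ):

    PMI(w, s) = log2( p(w, s) / (p(w) * p(s)) )
    score(w)  = PMI(w, pos) - PMI(w, neg)

where s is a seed set (hashtags or emoticons), p(w, s) is the probability of word w co-occurring with a seed from s, and a positive score indicates positive polarity.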
|
OPCFW_CODE
|
|
OPCFW_CODE
|
Change Field name in Table with VBA
Thank you for taking the time to report an issue.
What's wrong... Please write below.
I can't seem to get it to work in either VBA or as an expression. In VBA I get a debugging error as to how I reference the table field value. In an expression (I am not sure how to refer to the back color of a field) I first tested with true returning "Good" and false returning "Bad".
Even though the table field value is blank, it returns "Bad". I even tried to include IsNull(""), but this still returns "Bad".
Even if I put a value in the table field and refer to that in the expression, it still returns "Bad" instead of "Good".
At the moment I don't have a date selection form before the report, but I have a record in the table that shows when switching the report to View mode.
The reason for these gymnastics is that I am selling some software that will integrate with ShipRite, and I need this to happen programmatically so I don't have to physically go to each client to do the install.
The lookup field upc should only accept values greater than 10 but less than 100. Do I create a validation rule for this field in this table? If so, what is it? Or is a form with VBA the way to go, and if so, what is the VBA?
I need to put the last hour of the day and the employee into the field EndTime.
Is it possible with VBA code?
Look at the picture in attached, please.
I have a table with a field name "distcode." I need to change it simply to all caps; DISTCODE. In Design Mode I type DISTCODE, save, and when I view the table the field name is all caps. But when I close the database and reopen the table the field name is back to all small letters. What is forcing this default change?
The Data Type is Text, and this table is not linked to another table
In this article
* What happens when I change the field size?
* Change the field size of a number field
* Change the field size of a text field
I can easily see a method if one can put vba in the front-end form that the user uses. I can use vba to insert a change code into that record's log field.
But I can not see any method that would exist only in the back-end file. Is there any technique out there?
The only way I can fix it at the moment is by using Compact and Repair Database, which restores the links between VBA and the forms etc.
What causes this to happen? Is it a known bug in Access 2010?
|
OPCFW_CODE
|
April 23, 2012
MockupScreens 4.40 is ready and is available on main MockupScreens web-site. Get both Mac and Windows version here. This release brings several new features, many improvements and many fixes. Complete list is below.
Existing users, note that you are eligible for 60% discount if you need to renew the license. Just check your inbox for reminders you got from me, or contact me directly.
And new users get a 20% discount simply by following me on Twitter and checking my tweets for the coupon code. I'll post it there several times today and tomorrow. The coupon code will stay valid until the end of the week.
New features and improvements
- Toolbar item to switch on/off comments both in designer and slideshow views.
- Filter input in the screen tree, which allows filtering by screen name.
- Panels with scrollbars can be scrolled both horizontally and vertically using the mouse wheel/trackpad. When both scrollbars are enabled, horizontal scrolling can be done pressing the shift key while using the mouse wheel/trackpad.
- Labels (‘Text’ widget) now have a transparent background by default.
- Form now has an option (accessed on the Advanced property panel) not to be shown, i.e. only widgets will be displayed, without the main window.
- Background color (on Advanced property panel) for Field, Dropdown, List and Text widgets.
- Vertical alignment (on Advanced property panel) for Fields and Text Widgets.
- Option to choose between horizontal and vertical layout in Radio widgets.
- You can set default form size in Options, under Editor tab.
- Usability improvements to widgets’ property fields appearance.
- Added command to reset zoom level to default (choose ‘Reset editor zoom’ on View main application menu).
- Mac: Installer and binaries are now smaller.
- Windows: Default menu item is now MockupScreens 4, regardless of the minor version number.
- Windows: Installer will try to uninstall previous version before upgrading MockupScreens.
- Improved startup time.
- Minor improvements to comments appearance.
- Moved Transform and Change Multiple Widgets menu items to Edit menu
- Added a Change Multiple Widgets contextual (right-click) menu entry.
- Improved new columns naming in tables: now new columns are always named with capital letter.
- Items in ‘Select Screen to Jump to’ dialog now can be selected by double clicking them.
- Updated Image widget placeholder
Bug fixes
- Focus borders now work properly during scrolling.
- The application window sometimes could not be restored after closing the application in minimized state.
- The application could crash when using project files from a network drive in rare circumstances.
- The application couldn’t start if it couldn’t load a specific font.
- Some links didn’t work in the slideshow view.
- After opening a read only project, the application stayed in read only mode for newly created projects.
- File overwrite dialog popping up twice.
- Selecting widgets while zoom is on now works properly (although a little slower than at the default zoom level, i.e. without zoom).
- Installer now properly registers MockupScreens as an installed program in Windows.
- Mac: Properties panel’s vertical scrollbar was not shown.
- Mac: Main application window was losing focus after startup.
- Windows: Uninstaller now removes icons in desktop and in the Start menu.
- Fixed menu bar borders and window name location in Windows 7 skin.
- Save dialog was shown once again when canceled.
- Several scrollbar issues fixed.
- Widgets panel width could not be increased after reducing it.
- When deleting a column in a table, the column names after it were reset.
- Removed an invalid item in ‘Select Screen to Jump to’ dialog.
- When copying and pasting a Text widget, some widget options could not be properly copied.
- Window title in Mac OS X skin was not properly horizontally aligned.
|
OPCFW_CODE
|
[FFmpeg-user] Multiple ffmpeg instances failing to subscribe to separate audio streams
bob at nexteravideo.com
Mon Dec 20 22:42:10 EET 2021
We have initially been working with the 2110 decode example on the GitHub site (https://github.com/cbcrc/FFmpeg) and have been successfully streaming media with it. We have evolved our design and would like to operate with multiple media streams. We have a configuration where we are using GStreamer to stream two media files, and we are trying to connect to the audio streams in two different instances of ffmpeg. However, we are noticing unexpected behavior. I was hoping someone could comment on this behavior, whether it is expected, or offer any insight/suggestions as to what might be incorrect in my configuration or how I could identify/correct the issue. We have configured the ffmpeg instances to use the showvolume filter, so that ffplay can display an audio gauge for each output. A flow diagram of our experiment is shown below (filenames in parentheses).
(gst_AUD1)stream 1 --> ffmpeg_instance1 --> ffplay_instance1 (test1.sdp)
(gst_AUD2)stream 2 --> ffmpeg_instance2 --> ffplay_instance2 (test2.sdp)
A script is used to start ffplay/ffmpeg instances (ff_scrpt). The gstreamer instances are started one at a time afterward. The results are different based on stream order started.
One ffmpeg (via its sdp file) is configured to connect to the higher IP addressed stream. The other ffmpeg is configured to connect to the lower IP addressed stream. I am trying to stream the audio media only, and connect & process the audio only (via the sdp file audio-only media specifications).
The behavior we are seeing is the following:
If the gstreamer with the higher IP address is started first, both ffmpeg instances connect to it. Once the gstreamer with the lower IP address is started, both ffmpeg instances switch over to it and play to its completion. (the ffmpeg-20211201-213419.log file)
If the gstreamer with the lower IP address is started first, both ffmpeg instances connect with it. Once the gstreamer with the higher IP address is started, both ffmpeg instances switch over to it, but the ffplay display locks up when the first gstreamer stream stops (if it runs shorter than the second gstreamer stream). (ffmpeg-20211201-214224.log file)
I was expecting the ffmpeg instances to connect to the multicast address that their respective sdp files specified, but it doesn't seem to be working that way.
I have attached the log files for the two runs, the ffmpeg script, and the gstreamer scripts, and the sdp files. I can send the media files if desired but they are large.
I am running in a Ubuntu 20.04.3 LTS environment.
We have additionally tested with later versions of ffmpeg (downloaded from the ffmpeg site, not the ST2110 version) and have noted the same behavior. We additionally added a filter to the sdp file (added a=source-filter:incl IN IP4 184.108.40.206 192.168.68.112 after the “c=IN…” line to help with ffmpeg stream selection, and o=- 0 0 IN IP4 192.168.68.112 in the session area) and this provided no changes in behavior. (originally, the sdp files did not have this “a=…” line in them; and 192.168.68.112 is the IP address of the local PC doing the testing).
Testing is being performed on 1 PC. Both streams have been verified to be present on the test system.
The logs are on a google drive that is shared to everyone using this link:
Please let me know if you are able to provide any insight/assistance.
Robert E Wood
/** Action planner for basic AI purposes
*
* This file implements the technique presented in "Three States and a Plan:
* The A.I. of F.E.A.R." by Jeff Orkin.
*/
#ifndef GOAP_HPP
#define GOAP_HPP
#include <cstdint>
#include <cstdlib>
#include <goap/goap_internals.hpp>
#include <cstring>
namespace goap {
template <typename State>
class Action {
public:
/** Checks if the given state allows this action to proceed. */
virtual bool can_run(const State& state) = 0;
/** Plans the effects of this action on the state */
virtual void plan_effects(State& state) = 0;
/** Tries to execute the task and returns true if it succeeded. */
virtual bool execute(State& state) = 0;
virtual ~Action() = default;
};
template <typename State>
class Goal {
public:
/** Checks if the goal is reached for the given state. */
virtual bool is_reached(const State& state) const
{
return distance_to(state) == 0;
}
/** Computes the distance from state to goal. */
virtual int distance_to(const State& state) const = 0;
virtual ~Goal() = default;
};
template <typename State, int N = 100>
class Planner {
VisitedState<State> nodes[N];
public:
/** Finds a plan from state to goal and returns its length.
*
* If path is given, then the found path is stored there.
*/
int plan(const State& state, Goal<State>& goal, Action<State>* actions[], unsigned action_count, Action<State>** path = nullptr, int path_len = 10)
{
visited_states_array_to_list(nodes, N);
auto free_nodes = &nodes[0];
auto open = list_pop_head(free_nodes);
VisitedState<State>* close = nullptr;
open->state = state;
open->cost = 0;
open->priority = 0;
open->parent = nullptr;
open->action = nullptr;
while (open) {
auto current = priority_list_pop(open);
list_push_head(close, current);
if (goal.is_reached(current->state)) {
auto len = 0;
for (auto p = current->parent; p; p = p->parent) {
len++;
}
if (len > path_len) {
return -1;
}
if (path) {
auto i = len - 1;
for (auto p = current; p->parent; p = p->parent, i--) {
path[i] = p->action;
}
}
return len;
}
for (auto i = 0u; i < action_count; i++) {
auto action = actions[i];
if (action->can_run(current->state)) {
// Cannot allocate a new node, abort
if (free_nodes == nullptr) {
// Garbage collect the node that is most unlikely to be
// visited (i.e. lowest priority)
VisitedState<State>*gc, *gc_prev = nullptr;
for (gc = open; gc && gc->next; gc = gc->next) {
gc_prev = gc;
}
if (!gc) {
return -2;
}
if (gc_prev) {
gc_prev->next = nullptr;
}
free_nodes = gc;
}
auto neighbor = list_pop_head(free_nodes);
neighbor->state = current->state;
action->plan_effects(neighbor->state);
neighbor->cost = current->cost + 1;
neighbor->priority = current->priority + 1 + goal.distance_to(neighbor->state);
neighbor->parent = current;
neighbor->action = action;
bool should_insert = true;
// Check if the node is already in the list of nodes
// scheduled to be visited
for (auto p = open; p; p = p->next) {
if (p->state == neighbor->state) {
should_insert = false;
update_queued_state(p, neighbor);
}
}
// Check if the state is in the list of already visited
// state
for (auto p = close; p; p = p->next) {
if (p->state == neighbor->state) {
should_insert = false;
update_queued_state(p, neighbor);
}
}
if (should_insert) {
list_push_head(open, neighbor);
} else {
list_push_head(free_nodes, neighbor);
}
}
}
}
// No path was found
return -1;
}
};
// Distance class, used to build distance metrics that read easily
class Distance {
int distance = 0; // must start at zero; the builder only accumulates
public:
Distance shouldBeTrue(bool var)
{
distance += var ? 0 : 1;
return *this;
}
Distance shouldBeFalse(bool var)
{
distance += var ? 1 : 0;
return *this;
}
Distance shouldBeEqual(int target, int var)
{
distance += abs(var - target);
return *this;
}
operator int()
{
return distance;
}
};
} // namespace goap
#endif
// User interface logic
const pizzaSizes = ["Small", "Medium", "Large"];
function pizzaTypes(name, image, description) {
this.name = name;
this.image = image;
this.description = description;
this.prices = {
"Large": 1200,
"Medium": 900,
"Small": 600,
}
// Per-instance state; topping in particular must not sit on the
// prototype, or every pizza would share one toppings array.
this.price = 0;
this.crust = null;
this.topping = [];
this.quantity = 0;
}
function crusts(name, price) {
this.name = name;
this.price = price;
}
function toppings(name, price) {
this.name = name;
this.price = price;
}
function Cart(){
const cart = this;
this.cartItems = [];
this.delivery = null;
this.addToCart = function(item){
cart.cartItems.push(item);
$("#cartItems").html(cart.cartItems.length);
}
}
function zone(zoneName, price){
this.zoneName = zoneName;
this.price = price;
}
let cart = new Cart();
let selectedPizza;
let cartItemHtml;
const pizzaListing = [
new pizzaTypes("Chicken Tikka",
"pizza8.jpeg",
"Tantalizing Tikka"),
new pizzaTypes("Pepperoni", "pizza.jpeg", "Hot Pepperoni"),
new pizzaTypes("BBQ Chicken", "pizza5.jpeg", "Yummy BBQ"),
new pizzaTypes("Hawaiian", "pizza3.jpeg", "Tropical Hawaiian"),
new pizzaTypes("Veg Tikka", "p1.jpeg", "Sumptuous Veg Tikka"),
new pizzaTypes("Boerewors Pizza", "p4.jpeg", "Finger-licking Boerewors"),
];
const crustList = [
new crusts("Crispy", 100),
new crusts("Stuffed", 120),
new crusts("Gluten free", 200)
];
const topingsList = [
new toppings("Standard", 100),
new toppings("Diced pineapples", 120),
new toppings("Bacon", 200),
new toppings("Extra cheese", 100),
new toppings("Extra dip", 150),
];
const zones = [
new zone("Zone A- Around CBD", 100),
new zone("Zone B- Thika road", 200),
new zone("Zone C- Waiyaki way",250),
new zone("Zone D- Ngong road",250),
new zone("Zone C- Kiambu road",250),
new zone("Zone C- Mombasa road",300),
new zone("Zone C- Kangundo road",400),
]
function populateDropdowns(selectElement, items, valueField, textField, extraField){
for (let i = 0; i < items.length; i++) {
let item = items[i];
let extras = extraField ? ' ('+item[extraField]+')' : '';
let value = valueField ? item[valueField] : item;
let text = textField ? item[textField] : item;
selectElement.append(`<option value="` + value + `">` + text + extras + `</option>`);
}
}
function updateUI(){
$('#cartItems').html(cart.cartItems.length);
if(selectedPizza){
let pizzaPrice = 0;
if(selectedPizza.price){
pizzaPrice += selectedPizza.price;
$('#addToCartBtn').removeAttr('disabled');
}
else{
$('#addToCartBtn').attr('disabled', true);
}
if(selectedPizza.crust) pizzaPrice += selectedPizza.crust.price;
if(selectedPizza.topping) pizzaPrice += selectedPizza.topping.reduce((a, b)=>a+b.price, 0);
$('#pizzaPrice').html(pizzaPrice);
}
let subTotalPrice = 0;
let totalPrice = 0;
$('#shoppingCart ul.list-group').html('');
for(let i=0; i<cart.cartItems.length; i++){
const item = cart.cartItems[i];
const crustPrice = item.crust ? item.crust.price : 0;
let toppingPrice = 0;
if(item.topping.length > 0){
toppingPrice = item.topping.reduce((a, b)=>a+b.price, 0);
}
subTotalPrice += item.price + crustPrice + toppingPrice;
$('#shoppingCart ul.list-group').append(cartItemHtml);
$('#shoppingCart ul.list-group li:last img').attr('src', './images/'+item.image);
$('#shoppingCart ul.list-group li:last span.name').html(item.name);
$('#shoppingCart ul.list-group li:last span.price').html(item.price);
if(item.crust)
$('#shoppingCart ul.list-group li:last div.details')
.append("Crust:"+item.crust.name)
if(item.topping) $('#shoppingCart ul.list-group li:last div.details')
.append(" Topping:"+item.topping.map(topping=>topping.name).join(','));
}
$('.checkoutBtn').each(function(){
if(cart.cartItems.length > 0)
$(this).removeAttr('disabled');
else $(this).attr('disabled', true);
});
$('.subTotal').html(subTotalPrice);
$('#totalPrice').html(subTotalPrice + (cart.delivery ? cart.delivery.price : 0));
}
$(document).ready(function () {
cartItemHtml = $('#shoppingCart .cartItem').prop('outerHTML');
$('#shoppingCart .cartItem').remove();
/* Populating pizza list */
const pizzaListDiv = $('#pizzalisting');
let pizzaItems = '';
for (let i = 0; i < pizzaListing.length; i++) {
let pizzaItem = pizzaListing[i];
pizzaItems += `<div class="col-md-4 p-3">
<div class="card" style="width: 18rem;">
<div class="pizzaImage">
<img src="./images/${pizzaItem.image}" class="card-img-top" alt="...">
</div>
<div class="card-body">
<h5 class="card-title">`+ pizzaItem.name + `</h5>
<p class="card-text">`+ pizzaItem.description + `</p>
<a href="#" data-index="`+ i + `"
class="btn btn-primary orderBtn"
data-bs-toggle="offcanvas"
data-bs-target="#pizzaCustomazation"
aria-controls="offcanvasBottom">Order</a>
</div>
</div>
</div>`;
pizzaItem = undefined;
}
pizzaListDiv.html(pizzaItems);
pizzaListDiv.find('a.orderBtn').each(function () {
$(this).on('click', function () {
let pizzaIndex = $(this).data('index');
selectedPizza = pizzaListing[pizzaIndex];
$('#pizzaCustomazation img').attr('src', './images/' + selectedPizza.image);
$('select#size').val('');
$('select#toppings').val('');
$('select#crust').val('');
$('#pizzaPrice').html('');
});
});
/* end of Populating pizza list */
/* Populate sizes */
populateDropdowns($('select#size'), pizzaSizes);
$('select#size').on('change', function(){
const size =$(this).val();
if(selectedPizza){
selectedPizza.price = selectedPizza.prices[size];
}
updateUI()
});
/* end of Populate sizes */
/* Populate Toppings */
for(let i=0; i<topingsList.length; i++){
let topping = topingsList[i];
$('#toppings').append(`<div class="form-check">
<input class="form-check-input" type="checkbox" value="`+topping.name+`" id="flexCheckDefault`+i+`">
<label class="form-check-label" for="flexCheckDefault`+i+`">
`+topping.name+`
</label>
</div>`);
}
$('#toppings .form-check-input').on('change', function(){
const isCheck = this.checked;
const selectedToppingValue = $(this).val();
let topping = topingsList.find(function(topping){
if(topping.name == selectedToppingValue) return true;
else return false;
});
const indexOfSelectedTopping = selectedPizza.topping.findIndex(function(toppingItem){
return toppingItem.name == topping.name;
});
if(indexOfSelectedTopping == -1 && isCheck) selectedPizza.topping.push(topping);
else if(indexOfSelectedTopping > -1 && !isCheck){
selectedPizza.topping.splice(indexOfSelectedTopping, 1);
}
updateUI()
});
/* end of Populate sizes */
/* Populate crust */
populateDropdowns($('select#crust'), crustList, 'name', 'name', 'price');
$('select#crust').on('change', function(){
const selectedCrustValue = $(this).val();
let crust = crustList.find(function(crust){
if(crust.name == selectedCrustValue) return true;
else return false;
});
selectedPizza.crust = crust;
updateUI();
});
/* end of crust population */
/* Populate delivery zones */
populateDropdowns($('select#deliveryZones'), zones, 'zoneName', 'zoneName', 'price');
$('select#deliveryZones').on("change", function(){
cart.delivery = zones.find(z=>z.zoneName == $(this).val());
// console.log(cart.delivery);
updateUI();
});
/* End of populate delivery zones */
/* add to cart action */
const addToCartBtn = $('#addToCartBtn');
addToCartBtn.click(function(){
cart.addToCart(selectedPizza);
alert(selectedPizza.name + ' has been added to cart. Click on the red pizza cart button above to proceed to choose a delivery or pick-up option.');
updateUI();
});
$('#shoppingCartBtn').on('click', function(){
$('#shoppingCart').toggle();
});
$('.checkoutBtn').click(function(){
alert('Thank you for placing an order with Allentante Pizzeria, Your order will be delivered to your location.');
cart = new Cart();
updateUI();
});
});
I'd had enough.
Being unhappy with the current wisdom and distrustful of our browsers, I wanted to have the font sizing options laid out so I could see where they did and didn't work. So I made 264 screenshots.
This collection is posted for anyone else who is unhappy and distrustful.
One sizing wisdom is that a document's main text should be left alone so it can display at whatever the browser default is. This sounds good, but since most browsers default to a text size that I have to back up to the kitchen to read, I decided the zen approach to design wasn't for me. Besides, if I was really zen I wouldn't write a stylesheet.
My own experience is that it's easier to read text that's smaller than default, and a little larger than the toolbar font. I figure it's reasonable to believe people will have their resolution set so they can read the toolbar.
So I want two things from a text sizing method: that it present my choice across the main browsers, but still be resizeable to respect people's needs and different hardware.
As usual our browsers do not co-operate.
Pixels are a very popular way of setting font size. With these a designer knows what the page is going to look like across browsers. The problem with pixels is that IE PC is incapable of resizing them.
Ems are a nice idea. Ems can be resized by all browsers. The problem with ems is IE PC will take sub 1 em sized text and display it as microscopic when the user has their browser set at Smaller. And a great many IE PC users surf at Smaller; it makes default text a nice readable size, yet doesn't adjust pixels. So these surfers get to have both the Geocities and the K10K type sites look good. ...and when they hit the site of a designer who's trying to be responsible by using ems to achieve smallish text size, the result is lines of unreadable fuzz. So ems don't work.
Percentage looks good. I thought there was a reason we weren't using percentage much, and had avoided it till lately.
Keywords are pretty good. There is an issue with keywords in IE PC 5.0 and 5.5, but it's nicely handled by Todd Fahrner in his ALA article, Size Matters. But Opera for PC presents keywords a size too large. That would also need to be fixed.
I've also discovered a useful glitch: setting the base font size to 100% when using sub-1-em sizes keeps IE PC from going microscopic. I have no idea why. It affects a few other browsers too, so in many of the examples I've added this ruleset to learn more about the quirk.
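For the record, the ruleset in question amounts to something like this (the 0.8em value and the selector are illustrative, not a recommendation):

```css
/* Base at 100% keeps IE PC from rendering sub-1-em text microscopically */
body { font-size: 100%; }
/* Smallish main text, still resizable by the user */
p { font-size: 0.8em; }
```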
Rijk van Geijtenbeek notes elsewhere: "Unfortunately, Opera has a bug where 100% actually works as the inherited value minus 1 pixel. This works out to illegible for deeply nested elements."
That's the background to this experiment. Here's the screenshots:
By Browser. This one has everything: clips from the main browsers of all of the above methods, plus all five size settings from the three IE PCs.
By Method. This is useful for looking for anomalies.
IE PC. Grouped by the three IE PCs' five size settings. Also good for looking for anomalies. There are a couple of subtle ones.
Individual Methods. Because I was going snow-blind trying to see things on the big chart, I thought it would be nice to have them on individual pages too. Source for the screenshots is linked through here as well.
Conclusions? For me, yes: take a look over here for the method I currently use.
Owen Briggs © 2002.
Last updated: 31 Dec 2002
def leg_items():
    # True once any material has reached the 250 needed for a legendary item
    return l_dict["shards"] >= 250 or l_dict["fragments"] >= 250 \
        or l_dict["motes"] >= 250

def found_l_i():
    if l_dict["shards"] >= 250:
        return "Shadowmourne"
    elif l_dict["fragments"] >= 250:
        return "Valanyr"
    elif l_dict["motes"] >= 250:
        return "Dragonwrath"

def key_with_max_val(l_dict):
    # Spend 250 of the most abundant material on the crafted item
    v = list(l_dict.values())
    k = list(l_dict.keys())
    max_key = k[v.index(max(v))]
    l_dict[max_key] -= 250

def del_l_items():
    if "shards" in doc_materials:
        del doc_materials["shards"]
    if "fragments" in doc_materials:
        del doc_materials["fragments"]
    if "motes" in doc_materials:
        del doc_materials["motes"]

l_dict = {
    "shards": 0,
    "fragments": 0,
    "motes": 0
}
doc_materials = {}
materials = input().lower().split()
while True:
    quantity_index = 0
    material_index = 1
    for _ in range(len(materials) // 2):
        quantity = int(materials[quantity_index])
        material = materials[material_index]
        if material not in doc_materials:
            doc_materials[material] = quantity
        else:
            doc_materials[material] += quantity
        if material == "shards":
            l_dict["shards"] += quantity
        elif material == "fragments":
            l_dict["fragments"] += quantity
        elif material == "motes":
            l_dict["motes"] += quantity
        if leg_items():
            print(f"{found_l_i()} obtained!")
            key_with_max_val(l_dict)
            for k, v in sorted(l_dict.items(), key=lambda item: (-item[1], item[0])):
                print(f"{k}: {v}")
            del_l_items()
            for k, v in sorted(doc_materials.items()):
                print(f"{k}: {v}")
            exit()
        else:
            quantity_index += 2
            material_index += 2
    materials = input().lower().split()
A look into the void
In this update we will talk about one of the most disturbing of all schools, the Void school.
The mages belonging to Void have made pacts with sidereal entities that reside in space’s cold darkness… entities whose mere presence could drive one crazy and who have sinister purposes.
Void’s mages have very often been on the verge of being driven out of the Lodge, as the other magisters believe this kind of magic is too dangerous and unknown, but every time Void’s cultists (as they like to call themselves) demonstrate their ability to control their enormous powers, and all accusations are silenced.
Void spells will allow you to move and control the game.
You will also work with more instabilities than all the other schools: you can surprise your opponents, damage and confuse them, and have them attacked by your Void evocations, namely the Dagons!
Most of Void’s spells are very dangerous and subtle Contingencies that give you the chance to play with rooms’ instability. “Cloth of Inexistence”, for example, will keep you from being targeted by an opponent’s spells until you use a new spell; or you can cast it on another mage so that, every time he casts a spell, you place your own instability in the room he/she is in. You can also use “Power of the Void”, which lets you use your opponents’ instability to solve quests.
This school can also provide you with many movement options, such as “Void Traveler”, which, until the end of the turn, lets you move to any room you want instead of making a normal move; or you can instead become immune to physical attacks at the price of being unable to attack.
Surely the characteristic that will drive you crazy about this school is its incredible Evocation: the Dagon. Dagon is not a great fighter, but whenever he attacks a model you must place 1 instability in the room, and when he is killed you must place as much instability as the damage he has suffered. “Walker of the Void”, “Echo of the Void” and “Sidereal Silence” are the three spells that will allow you to summon a Dagon, and all three also provide interesting effects triggered by sacrificing him, which will let you gain power points, change instability and pick new quests.
Synergies with other schools are interesting, but the most helpful pairings are with the schools that focus on evocations. A spell like “Breach”, which allows you to summon an evocation and unleash it against a mage in order to attack him/her, interacts well with any other Summon.
Necromancy, above all other schools, can use Void’s instability manipulation to create strong combos with those it already possesses.
The Void called for you ... will you answer its call?
As usual some pics...
Stay tuned and spread the word!
Variable scoping with an except block: Difference between Python 2 and 3
I came across a weird difference with local variable scopes between Python 2.7 and Python 3.7.
Consider this artificial script unboundlocalexception.py (I realize that I could just have used an else-block after except, but I extracted this example from a longer function):
def foo():
    arithmetic_error = None
    try:
        y = 1.0 / 0
    except ZeroDivisionError as arithmetic_error:
        print("I tried to divide by zero")
    if arithmetic_error is None:
        print("Correct division")

foo()
Under Python 2 it works as I expected it to:
$ python2 unboundlocalexception.py
I tried to divide by zero
But, surprisingly, under Python 3 an UnboundLocalError is raised!
$ python3 unboundlocalexception.py
I tried to divide by zero
Traceback (most recent call last):
  File "unboundlocalexception.py", line 11, in <module>
    foo()
  File "unboundlocalexception.py", line 8, in foo
    if arithmetic_error is None:
UnboundLocalError: local variable 'arithmetic_error' referenced before assignment
Is this difference documented anywhere?
I am surprised by this too. Trying to figure it out.
Does this answer your question? Python 3 exception deletes variable in enclosing scope for unknown reason
That's it, thanks!
This behaviour was introduced in Python 3, to prevent reference cycles, because the exception target - arithmetic_error in the question - keeps a reference to the traceback.
From the Language Reference
When an exception has been assigned using as target, it is cleared at
the end of the except clause. This is as if
except E as N:
    foo

was translated to

except E as N:
    try:
        foo
    finally:
        del N
This means the exception must be assigned to a different name to be
able to refer to it after the except clause. Exceptions are cleared
because with the traceback attached to them, they form a reference
cycle with the stack frame, keeping all locals in that frame alive
until the next garbage collection occurs.
This was originally documented in PEP3110.
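In practice, that means rebinding the exception to a different name inside the except block makes the original script behave the same on Python 2 and 3:

```python
def foo():
    arithmetic_error = None
    try:
        y = 1.0 / 0
    except ZeroDivisionError as exc:
        # Rebinding keeps the exception alive after the except clause;
        # only the name 'exc' is deleted when the clause ends.
        arithmetic_error = exc
        print("I tried to divide by zero")
    if arithmetic_error is None:
        print("Correct division")

foo()
```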
It really looks like you copied this answer: https://stackoverflow.com/a/29268974. Instead of copying you should just flag the question as a duplicate ;) And give credit to the person who really answered....
I simply looked at the docmentation, and quoted it - I wasn't aware of the other answer when I answered. But since you have found a good duplicate, I will close this question.
@RiccardoBucco Or at least I would have done, had not, not user chepner beaten me to it.
Unable to configure a share with an IAM Identity Centre role specified as the consumption role
Describe the bug
I have a control tower environment and my users access the various accounts using permission sets / roles defined in IAM Identity Centre. So the ARN of these roles take the form of
arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/eu-west-2/AWSReservedSSO_MyRoleName_2ra56c55a9f21hjk
Whilst I am able to request and approve a share using this role as the consumption role, the shared items end up with a status of "SHARE_FAILED".
Looking at the share-manager logs I see the following:
Failed to share table _mytable_ from source account <PHONE_NUMBER>00/eu-west-2 with target account <PHONE_NUMBER>11/eu-west-2 due to: An error occurred (InvalidInputException) when calling the GrantPermissions operation: Invalid path for user, arn: arn:aws:iam::111111111111:role/AWSReservedSSO_MyRoleName_2ra56c55a9f21hjk
How to Reproduce
Configure a data share and specify a role that has been vended through IAM Identity Centre as the consumption role
Add some tables to the share
Submit the share
Accept the share from the data producer side.
Expected behavior
The share should be successfully created
Your project
No response
Screenshots
No response
OS
Windows
Python version
N/A
AWS data.all version
1.4.1
Additional context
No response
Hi @nguforw, thanks for opening this issue. You've got an interesting case here! Have you tried using arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/eu-west-2/AWSReservedSSO_MyRoleName_2ra56c55a9f21hjk as the consumption role arn? or are you adding arn:aws:iam::111111111111:role/AWSReservedSSO_MyRoleName_2ra56c55a9f21hjk directly?
@dlpzx - The consumption role was indeed defined using arn:aws:iam::111111111111:role/aws-reserved/sso.amazonaws.com/eu-west-2/AWSReservedSSO_MyRoleName_2ra56c55a9f21hjk and data.all was happy to accept this. The problem occurs further downstream when it is trying to configure the LakeFormation permissions to grant access to the various tables. My guess is that something is splitting the ARN up to get the role name and then further down in the process, there is an attempt to reconstruct the ARN but this is not taking into consideration the "unusual" structure of IIC role ARNs.
#695
@dlpzx @noah-paige is this still an issue we need to fix in v2.2+ ? I see the version being used here is 1.4.1.
This issue was solved by https://github.com/data-dot-all/dataall/pull/749
This time we will be adding an extra dose of anonymity and privacy to our Python shell, by providing support for proxies. Socks4, Socks5 or HTTP; it’s your choice — got ’em all. The AES encryption (seen previously here) is undoubtedly the strongest factor in this equation, because without it, the proxy server — or anyone in between really — would be able to see all the traffic from point A to point B. The encryption however, shields us from pesky intruders and allows our connection to stay smooth and unblocked by ISP’s along the way.
This module allows for seamless implementation of the proxy (Socks4, Socks5 or HTTP) by developers, as you can read on the main page or in further detail in the README section. What this means is that we can continue to use the socket module normally: send and receive data, close connections and whatnot, without any modification to the original socket functions.
I could go into details but this module is so straight-forward and simple that the readme page says it all really… all you need is 4 lines of code and that’s it! I do want to point out that upon downloading, you should extract the “socks.py” file to the same folder where you are developing the shell; and whenever you’re ready to compile, you also need to copy the “socks.py” file to the PyInstaller directory too. Don’t worry though, you will still only get a single file (executable) after compiling. 😉
In my opinion, it’s also good practice to understand what is going on behind the curtains, so go ahead and fire up Wireshark to see this spectacle first hand — honestly though, sometimes you can learn so much just from looking at how the packets operate and getting familiar with their behavior. Knowing how they work will also save you a lot of time later when you encounter “unexpected issues” in your network.
The first thing here is that we are using HTTPS to connect from the proxy to the server. Why? Well, HTTPS is a binary protocol and the proxy understands nothing about it. In our case, that's really good! The main reason being that we already have AES encryption between the client and server, and also because 443 has been my port of choice since the beginning of the series. Now this is where we could go full HAM and wrap the already encrypted traffic with SSL, but I decided it was better not to… decrypting the traffic twice, through the proxy, would be overkill; but if you're paranoid, don't think twice!
After negotiating the proxy with the client, the first noticeable message we see in Wireshark is:
There you can see the
CONNECT command issued from the client through the proxy. This command will attempt to connect to the server, but only if the hostname is resolved… in which case we will get a
200 OK message, as seen below (click to zoom in):
Finally, we must manually send a request to begin transferring data. The request should be:
GET / HTTP/1.1 for this scenario, you can see the packet below (click to zoom in):
That’s basically it. If you go on further, it wouldn’t be too difficult to write your own module for handling the proxies; but with Dan Haim’s module already supporting several different types of proxies along with authentication and more, makes no sense reinventing the wheel! Big thanks to Dan!
A disable command has been added to the shell as well. The idea was not mine; initially it came from ToProType, with the intention of disabling the Windows task manager. I thought it was pretty cool and decided to add it… however, after realizing how much time I waste doing tests, I decided to incorporate other options to disable persistence, the service (used to escalate privileges) and the shell overall.
To use the disable command you must be system/authority and invoke it like so:
…and “adieu” task manager! Pretty annoying. Haha
Next, I came to realize that the keylogger doesn't work very well if it only writes the file AFTER being done. What if the client shuts off his computer? Uh oh… so I recoded it to write the file continuously, set the default file name to the current date and time, and set it to record for a few days. So now the log is more accurate and downloadable at all times.
Click here to download the client/server source code: [ DOWNLOAD ]
Hit high quality and enjoy the vid! 😉
Originally Posted by Richard Burger
You once again are ignoring the particular plain facts that don't suit your argument. Aereo is creating an antenna farm. They are collecting OTA TV broadcasts and delivering them to the customer, one antenna per customer.
If the situation were as simple as you describe, we'd have nothing to talk about, and Aereo would not have millions of dollars of investment sunk into it.
1) It remains to be seen if Aereo is really and truly providing one antenna per customer. They say they do, but it hasn't been shown to be true. Until someone demands they prove it by logging into their account, viewing content and having Aereo pull the antenna assigned to them, all we have is what they claim.
2) The number of antennas is irrelevant. Antennas don't distribute or program channels. The channels are chosen by a tuner. It's the tuners that count. One entire building can use one antenna as long as everyone has a tuner that can be controlled individually. What you can't do is feed one channel to multiple people, which would be considered a public performance - much like playing a DVD or Blu-ray for a group of people would be. Single antennas for multiple people to use have already been deemed legal.
The only time the antenna would really need to be individual is in cases where it needs to be rotated for various channels. That's more for the ability to view all available channels at any given time than a copyright issue. Aereo doesn't rotate their antennas (as far as we know) so that's not an issue.
What Aereo downplays is how it gets to the customer: pre-tuned remotely, converted to another video format, compressed for the web and fed as IP data. Essentially, what they're doing is taking content and creating a new product with it. It's not the original signal - it's a duplicate in a web-based form.
It would be like signing out a library book for someone who isn't near one, then sending PDF version of it to the customer to read. The book is out of circulation and held for the customer while they read it (so only that customer can access it in that time), but it's still a copy. No one will knock down your door for doing that to your books you own and want to read on your Kindle, but someone building a business model on it needs an agreement with the publisher and potentially pay royalties (or provide the E-Book copy to the publisher for them to sell in return).
I can put an antenna on my roof and my neighbors can hook into my distribution amp. There's nothing illegal about that. They can all watch a different channel if they want (since we have over a dozen here). However, if I split the output of my OTA tuner to all those people, that would be illegal.
Further, even if I lock it down so no one I don't authorize can get to it, running that tuner to the internet for anyone but myself (even in the same market) violates copyright by creating new content that didn't exist before.
Honestly, I'm not sure how anyone could think the whole case depends on the antennas and not the sending content over a medium it was never authorized for - oh yeah, and charging for it.
If you are new to this topic, please read my article about Why You Should Do Coding Dojos. That article will cover the whys and hows for conventional face-to-face Coding Dojos.
This article here will just deal with the differences.
During the pandemic, it is obviously necessary to shift a lot of activities to remote-only. In agile teams, this holds for Dailies and Retrospectives, but also for Coding Dojos.
We had a pretty stable team for years, but now we have 4 new developers, which created the need to start a regular Coding Dojo again. As the new developers were not at the same location anyway, it was clear that we needed to do this remotely.
Challenges & Solutions
Decreased Social Interaction
Coding Dojos are more about social interaction and team building than they are about practicing to code. Everyone who has attended an online workshop or two knows that a large percentage of attendees usually keeps mostly quiet, more so than in a face-to-face situation. Sometimes you cannot even see them because their camera is switched off.
This needs to be addressed in a remote situation.
First, by having the camera always on. I had the video screen on the right of my wide-screen display and the IDE on the left.
Second, by good moderation. It is important that one team member focuses only on moderation, without joining the coding. That moderator should actively try to get all team members involved by asking them directly, especially the introverted ones, to create a more balanced distribution of speaking time.
Use the hand-raise function of the collaboration tool (if available) to ask simple yes/no questions to the whole team and prevent awkward "everyone/no one talks" situations.
For a remote Coding Dojo, you obviously need more than a laptop, keyboard and mouse to be successful.
We used MS Teams for voice and video calls, chat, as well as screen sharing and remote control. Especially the latter is crucial for a successful Dojo.
The idea should still be to have a single computer that the whole team works on. We tried IntelliJ's "Code With Me" plugin, but it was lacking basic functionalities like "Create Files", typing was extremely laggy (3-5s latency) for most attendees and auto-completion did not work for some reason. This might get better in the future.
So we quickly moved to using the "Request Access" feature in Teams which supports multiple cursors and was generally snappy/low latency.
As an alternative, you could try a git-based hand-over combined with shifting the screen-sharing session. But that wastes a lot of time every 5 minutes and distracts the others with different resolutions, color schemes and maybe even different IDEs/editors. I would not recommend it.
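For completeness, a sketch of what such a git hand-over would amount to. Everything here is illustrative: the shared repository, the user names and the branch name "dojo-session" are made up, and the demo builds a throwaway repo so it can run anywhere.

```shell
# Sketch of a git-based dojo hand-over (all paths/names are illustrative).
set -e
work=$(mktemp -d)
git init -q --bare "$work/shared.git"            # stand-in for the team remote

git clone -q "$work/shared.git" "$work/alice"    # current typist
cd "$work/alice"
git config user.email alice@example.com
git config user.name alice
git checkout -q -b dojo-session
echo "kata wip" > notes.txt
git add -A && git commit -q -m "dojo: hand-over WIP"
git push -q origin dojo-session                  # park the work-in-progress

git clone -q -b dojo-session "$work/shared.git" "$work/bob"  # next typist
cat "$work/bob/notes.txt"
```

Every rotation needs a commit, a push and a pull, which is exactly the overhead the paragraph above warns about.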
If you have people with different keyboard layouts in the team who control your computer, you should install and switch the layouts accordingly to allow them to work as they normally do. This is usually also true for face-to-face Dojos, but since people might have different keyboards at home, it is worth thinking about beforehand.
You should test the setup first with one team member. For example, MS Teams eats a lot of CPU resources, which can create lag on its own when running the application or tests in parallel. You have to verify that everything works as expected in a screen-sharing plus video situation.
Some short advice based on the sections above:
- Camera always on
- Good & focused moderation
- Tested and well-prepared tool set-up
It worked pretty well for us and was a lot of fun. I would say that Coding Dojos are not less effective in a remote situation; they are just different.
You should try it!
|
OPCFW_CODE
|
[FIX] Please help fix: sky2 error: rx length. running error: sky2 eth0 error: rx length. Etch 4.04a version" with 686 kernel and THERE IS NO ERROR.
[ 160.100570] sky2 eth0: rx length error: status 0x4660100 length 598 [ 159.723965]. sky2 Kernel modules: sky2 06:00.0 SATA controller :.
Apr 8, 2017. RX packets:647 errors:0 dropped:28 overruns:0 frame:0. [email protected]:~ $ ifconfig eth0 Link encap:Ethernet HWaddr b8:27:eb:05:2e:7f. Bottomline old kernel on Rpi B+ still have zero drops on the same network where.
Linux 101: Use ifconfig in Linux to configure your network – the -v "verbose" option returns extra information when there are certain types of error conditions, to help with troubleshooting. Passing an interface name activates that interface if it is not already active; for instance, ifconfig eth0 up.
. sky2 eth0 rx length errors. I started seeing the sky2 errors shown below in the kernel log. sky2 0000:02:00.0: eth0: rx error,
Bug 228733-sky2 module (ver 1.6) kernel panic. Summary: sky2 module. : session closed for user ssadmin Mar 1 09:50:50 it7000cp kernel: sky2 eth0: rx error,
sky2 rx length errors From: Grozdan. sky2 eth0: rx length error: status 0x5ea0100 length 598. send the line "unsubscribe linux-kernel" in
I’ve got a dlink dsl 300t that I’m using in bridge mode, it’s on the ipcop red interface, ipcop is performing the ppoe connection. The problem is I want to connect to the dlink’s web management interface, and can’t see it, ping it,
ifconfig eth0 Link encap:Ethernet HWaddr 78:2B:CB:4E:B2:44 inet addr:172.30.0.28 Bcast:172.30.0.255 Mask:255.255.255.0 inet6 addr: fe80::890:52ff:fe31:1bdb/64 Scope:Link UP BROADCAST RUNNING MULTICAST.
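The RX error and drop counters shown in the ifconfig output above can also be read directly from sysfs, which is convenient for a quick scripted check across interfaces. A minimal sketch using the standard Linux per-device statistics files (interface names will differ per machine):

```shell
# Print receive error/drop counters for every network interface,
# reading the kernel's per-device statistics files directly.
for dev in /sys/class/net/*; do
    printf '%s rx_errors=%s rx_dropped=%s\n' \
        "$(basename "$dev")" \
        "$(cat "$dev/statistics/rx_errors")" \
        "$(cat "$dev/statistics/rx_dropped")"
done
```

A steadily climbing rx_errors count on one interface (as with the sky2 reports above) points at that driver/NIC rather than the rest of the network stack.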
|
OPCFW_CODE
|
Super sandwich shuffle
Super sandwich shuffle is a game made for the letscookjam hosted by Jupiter Hadley from April 24th 2015 till May 3rd 2015. The goal of the jam was to create a game that had to do with food in some way.
The idea for Super sandwich shuffle came from the shipwrightery puzzle in Puzzle Pirates, an MMO game. The goal is to complete orders by moving the ingredients on the board; when the ingredients form the same shape as the order, you move the order onto the field and complete it. The orientation of the ingredients does not matter, as long as the ingredients are the same.
Instead of levels, Super sandwich shuffle uses days. At the start of each day you will see the menu: these are the sandwiches a customer can order. Every day a new menu item gets added, a bigger sandwich that is harder to make. After 9 days all the menu items will be unlocked.
If you can’t fill an order in time the customer will go away angry and you’ll lose your combo streak. Combo streaks will increase the score you get for filling an order. If you request new ingredients or move an ingredient, you will lose your combo streak. So it is important to prepare multiple orders and finish them all at once.
If you can’t complete enough orders and cannot keep at least a 1-star rating on your restaurant, your game is over and you have to retry!
There are 6 different ingredients and 2 special pieces.
Bread: cannot move on its own, but can be swapped with other pieces.
Ham: can move left and right.
Cheese: can move up and down.
Lettuce: can move diagonally.
Eggs: can move like the knight (horse) piece in chess.
Tomatoes: can move in any direction, but not like eggs.
Powder: cannot be moved, but may be used as any other piece. So if you’re missing a piece of bread, you can use powder instead.
Olive: pins the ingredients down, so they can no longer be moved. The ingredients can still be used as if they had no olive on them; they just cannot move.
The art for the game is drawn by Kirsty White. I think she’s a good artist. For more art by Kirsty, you can check out her DeviantArt.
This is not the first game I made with Kirsty; a while back we made Party Popper, which was our first project together and her first game project. It was also her idea to use Super Sandwich Shuffle as the name for the game.
The voice in this game is by Jupiter Hadley, she hosted this game jam.
Jupiter has a YouTube channel on which she plays all the indie games, mainly from game jams. I suggest you check it out.
Super sandwich shuffle is written in Java with the LibGDX Framework.
When I started developing this game I took the core code from Demon Dungeon, because I remembered that Demon Dungeon had a pretty solid core, and I stripped it down to its bare minimum. Then I heavily modified the core code to better support multiple states and screens with different cameras and sprite batches.
I also added an upgraded content loader, a class I wrote for another project that works perfectly for loading content into the game. Sounds, music, textures - it will load them all into game memory and keep them there until they are no longer needed. When you want to use a texture for something, the content loader first checks whether the texture is already loaded and only loads it from disk if it is not.
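The load-once/keep/reuse behaviour described above can be sketched without any framework. The class below is a toy stand-in, not the actual game code: the names are invented and a string replaces the real LibGDX Texture/Sound objects.

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for the game's content loader: load an asset once,
// keep it in memory, and hand back the cached copy on later requests.
public class ContentLoader {
    private final Map<String, Object> cache = new HashMap<>();
    private int loads = 0; // counts actual (non-cached) loads

    public Object get(String path) {
        // Only "load from disk" when the asset is not already in memory.
        return cache.computeIfAbsent(path, p -> {
            loads++;
            return "asset:" + p; // stand-in for a Texture/Sound/Music object
        });
    }

    // Free an asset once it is no longer needed.
    public void unload(String path) {
        cache.remove(path);
    }

    public int loadCount() {
        return loads;
    }

    public static void main(String[] args) {
        ContentLoader loader = new ContentLoader();
        Object first = loader.get("textures/bread.png");
        Object second = loader.get("textures/bread.png");
        System.out.println(first == second);    // same cached instance
        System.out.println(loader.loadCount()); // only one real load happened
    }
}
```

The design point is simply that repeated requests for the same path are cheap map lookups, while memory is reclaimed explicitly via unload.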
|
OPCFW_CODE
|
Is the CHECKSUM function dependent on Unicode?
If the input is Unicode, the results are (notice the N):
select CHECKSUM(N'2Volvo Director 20') ---341465450
select CHECKSUM(N'3Volvo Director 30') ---341453853
select CHECKSUM(N'4Volvo Director 40') ---341455363
But if it's a regular (non-Unicode) string:
select CHECKSUM('2Volvo Director 20') ---1757834048
select CHECKSUM('3Volvo Director 30') ---1757834048
select CHECKSUM('4Volvo Director 40') ---1757834048
Can you please explain why the first case gives me different values and the second gives me the same value?
There is a linked article about it which says:
However the CHECKSUM() function evaluates the type as well as compares the two strings and if they are equal, then only the same value is returned.
BTW: I get -341465450,-341453853,-341455363 for your last 3. They are all different though the last 2 look very similar. So not sure if collation or version plays any role here.
Maybe you have a Unicode installation of the instance?
BTW: I don't know where your quote is from, but that part is incorrect. It is extremely well documented that multiple values can hash to the same value.
http://decipherinfosys.wordpress.com/2007/05/18/checksum-functions-in-sql-server-2005/
For BINARY_CHECKSUM I get the same results for all of your last 3. In fact for both of them I get exactly the same results as your linked article. Having read that article are you using a binary collation?
@Martin Smith, you're right - I just gave my friend this query and he also got different results... Where can I see the collation of my instance? Also, please convert your comment to an answer so I can accept it.
What does Select DATABASEPROPERTYEX(DB_name(),'Collation') return for you?
SQL_Latin1_General_CP1255_CI_AS
I get same results as you for SELECT CHECKSUM('2Volvo Director 20' COLLATE SQL_Latin1_General_CP1255_CI_AS) etc. As stated earlier it is perfectly possible for hash collisions to occur. I'm not sure if there is something particular about those strings that makes this predictable under that collation though.
SELECT CHECKSUM('2Volvo Director 20' COLLATE SQL_Latin1_General_CP1255_CI_AS)
SELECT CHECKSUM('3Volvo Director 30' COLLATE SQL_Latin1_General_CP1255_CI_AS) gives me same results !!!!!
Not sure why you find that surprising? As I said before I get the same results as you with that.
It's surprising because these are different strings which give the same results... while Latin1_General_CI_AI gives different results... and I don't understand why.
But we know this already from the OP don't we?
@Martin Smith can you please tell me what you mean by OP? And please post your comment as an answer so I can accept it...
OP means either "Original Poster" (i.e. you) or "Original Post" (i.e. the question). I meant the second one. I was just saying we know this from the question so it is not news worthy of 5 exclamation marks IMO (In my Opinion) :-)
Please post your comment as an answer so I can accept it.
Still, why does adding CP1255 give me the same results, while removing it shows different results, as it should? I still didn't get an answer for the CP1255 part :)
That's why I hadn't submitted an answer. I don't know why that collation has that behaviour. I'll update if I find out.
This seems to be collation dependent.
DECLARE @T TABLE
(
SQL_Latin1_General_CP1255_CI_AS varchar(100) COLLATE SQL_Latin1_General_CP1255_CI_AS,
Latin1_General_CI_AS varchar(100) COLLATE Latin1_General_CI_AS
)
INSERT INTO @T
SELECT '2Volvo Director 20','2Volvo Director 20' UNION ALL
SELECT '3Volvo Director 30','3Volvo Director 30' UNION ALL
SELECT '4Volvo Director 40','4Volvo Director 40' UNION ALL
SELECT '5Volvo Director 50','5Volvo Director 50' UNION ALL
SELECT '6Volvo Director 60','6Volvo Director 60'
SELECT
CHECKSUM(SQL_Latin1_General_CP1255_CI_AS) AS SQL_Latin1_General_CP1255_CI_AS,
CHECKSUM(Latin1_General_CI_AS) AS Latin1_General_CI_AS
FROM @T
Returns
SQL_Latin1_General_CP1255_CI_A Latin1_General_CI_AS
------------------------------ --------------------
-1757834048 -341465450
-1757834048 -341453853
-1757834048 -341455363
-1757834048 -341442609
-1757834048 -341448488
CHECKSUM is documented as being more collision prone than HashBytes. I'm not sure specifically why the CP collation has this behaviour for these inputs though.
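For comparison, a quick sketch of the HASHBYTES alternative mentioned above, reusing the literals from the question. This is illustrative T-SQL, not part of the original answer:

```sql
-- HASHBYTES hashes the actual bytes of the string, so the collation's
-- comparison rules cannot collapse distinct strings to one value.
SELECT HASHBYTES('SHA1', '2Volvo Director 20') AS h2,
       HASHBYTES('SHA1', '3Volvo Director 30') AS h3,
       HASHBYTES('SHA1', '4Volvo Director 40') AS h4;
-- All three results differ, regardless of the column/database collation.
```

The trade-off is cost: a cryptographic hash over the bytes is slower than CHECKSUM's lightweight checksum, which is the price of fewer collisions.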
@Mid787 - You didn't! I will try and investigate why the CP one gives those results.
|
STACK_EXCHANGE
|
Most people don’t care about cyber security, so hackers around the world are taking control of CCTV cameras, even in locations where privacy is important. Today we will show a list of open CCTV cameras using Cam-Hackers. With the Cam-Hackers tool, we can view open cameras in different countries from a single place, and even remotely operate a few of them.
- OS: Kali Linux 2020 64 bit
- Kernel version: 5.6.0
- Use this command to clone the file.
- git clone https://github.com/AngelSecurityTeam/Cam-Hackers
root@kali:/home/iicybersecurity# git clone https://github.com/AngelSecurityTeam/Cam-Hackers Cloning into 'Cam-Hackers'... remote: Enumerating objects: 27, done. remote: Counting objects: 100% (27/27), done. remote: Compressing objects: 100% (24/24), done. remote: Total 27 (delta 8), reused 0 (delta 0), pack-reused 0 Unpacking objects: 100% (27/27), 978.41 KiB | 958.00 KiB/s, done.
- Use this command to install the requirements for this tool. pip3 install requests
root@kali:/home/iicybersecurity# pip3 install requests Requirement already satisfied: requests in /usr/lib/python3/dist-packages (2.23.0)
- Use the cd command to enter into Cam-Hackers directory.
root@kali:/home/iicybersecurity# cd Cam-Hackers/ root@kali:/home/iicybersecurity/Cam-Hackers#
- Now, use this command to launch the tool.
- python3 cam-hackers.py
- Successfully launched the tool.
- Here, we have various countries with available free cameras.
- Now, choose the required option to view free cam URLs.
- We chose option 21; this will list all the free camera URLs with port numbers in that country.
- Now, open these URLs in the browser to stream the live camera.
- Here, we got the login page.
- Now, try a few default passwords like admin, password, 123456789, 12345, or a blank password.
- This IP address has a blank password; using it, we were able to log in successfully.
- Here, we are able to stream all these cameras live, and most of the IP addresses have a blank password.
As we saw, we were able to stream all the open cameras that have a blank password.
Cyber Security Specialist with 18+ years of industry experience. Worked on projects with AT&T, Citrix, Google, Conexant, IPolicy Networks (Tech Mahindra) and HFCL. Constantly keeping the world updated on happenings in the cyber security area.
|
OPCFW_CODE
|
So what's the solution? In the evenings I always feel like I have to
decide between losing part of my personal evening time, vs being seen
as a slacker
Unfortunately, there are no simple answers here. "Whenever your work is done" almost certainly means "Whenever you deem it appropriate", since most software developers don't measure their work on a daily basis.
I think this is all part of being a professional, salaried employee. You don't punch a time clock. You won't be told "come in at x o'clock, take exactly 1 hour lunch, and leave precisely at y o'clock". You have to figure it out on your own, based on your company's culture, your own career ambitions, your work needs, and your family needs.
I tell my team that I don't want them watching the clock.
Aside from "core hours" where we schedule our meetings, they are free to come in early or late, and free to leave early or late. I don't care how many hours per day they are sitting at their desks, I just care that the work gets done.
I don't want to babysit them, and I don't want to micromanage them. I treat them like experienced professionals, and I trust them to act like mature professionals and figure out on their own how many hours they need to be around to get their work done.
I've told them that if they can accomplish their work in less than 40 hours, they can feel free to leave as they see fit. But if they are behind, or we have critical deadlines/releases coming up, I expect them to work extra as needed.
In practice, everyone figures it out for themselves. They each adjust their schedule according to their commuting and family needs, according to how hard they want to work, according to the needs of the projects they are working on, and how much they want to get ahead.
Some work around 40 hours per week or a bit less. Others work more. Some have worked a lot more.
Some generally arrive very early, and cut out earlier than others to optimize their commute. Others generally arrive very late and cut out later than most for the same reason.
Sometimes people arrive early to get a jump on a particular task or to communicate with our overseas office. Sometimes people hang around extra because they are "in the flow" and don't want to put down their work until they have completed a particular set of work.
During our weekly one-on-one meetings, and at annual review time, I never talk about how many hours they put in, when they arrive, or when they leave - unless their performance isn't up to the expected level. I've very seldom had to do this, but on rare occasions, I have to tell people that they simply aren't working hard enough, and that the amount of hours they spend in the office clearly isn't enough to get their job done. Either they are miscalculating, they are in over their heads, or they don't care. If it's a miscalculation issue, we work together to figure it out. Otherwise (and if they don't correct the problems), they are eventually reassigned or dismissed.
I'd advise you to look around and get a good sense of the culture within your company. You will likely see some people who are steady workers, but not trying to get ahead, while others are harder-driving. You might see some who are "slackers". You will see some who always get their projects done on time or ahead of time, while others miss the mark periodically or often.
You will see some who come in early and/or leave late, and others who work to the clock.
Then, decide what you want to be, how you want your day and week to go, and act accordingly.
|
OPCFW_CODE
|
How to make 100.02f - 100 == 0.02f evaluate to true in C#?
It seems to be a rounding problem. I have an array of float[] and some operations over this array. I need to write unit tests for this functionality, but comparing expected values to the resulting values turned out not to be a simple task, taking these rounding issues into account. Is there any workaround to test math operations over my array? Thanks
Never test for equality with floating point numbers.
Hopefully, 0.9 is not equal to 1... I suggest you buy a calculator... What did you expect?
Take a look at : http://stackoverflow.com/questions/7617613/1-01f-0-1f-1-is-false-in-c
Math.Round(100.02f - 100, 2) == Math.Round(0.02f, 2)
Please don't re-ask the same question. You can always edit to correct or add more details.
It was closed for me to edit. That's why
When using floats or doubles in unit tests, your testing framework may allow you to account for an acceptable delta. Using NUnit, for example, you might write
double expected = 1d;
double delta = 0.0001d;
double actual = classUnderTest.Method();
Assert.AreEqual(expected, actual, delta);
Understand that floats and doubles are inherently imprecise for certain things, and that is by design. They represent numbers in base 2. If you need accurate base 10 representation, use the appropriate type for that: decimal.
Rounding issues are an inherent part of floating-point calculations. Use an array of decimals (decimal[]), perhaps?
100.02m - 100
Thanks. It might be a workaround, but decimal takes more memory and I would have to convert my array to decimals first
Don't worry about memory issues.
@YMC, if memory is an actual problem you have, then it's a valid concern. On the other hand, if pinpoint decimal accuracy is your problem, then use a better tool. If these are financial values, for example, then you're using the wrong one.
If you'd written 1.1 - 0.1 == 1.0, it would still return false. It's because you're dealing with floating point numbers in a binary number system. You can't represent 0.1 in binary exactly any more than you can represent 1/3 exactly in base 10.
This is true in C#, Java, C++, C, JavaScript, Python, and every other programming language that uses the IEEE floating point standard.
I'd like to point out that 1.1f - 0.1f == 1.0f may return true in C. It depends on the compiler.
You should use an "epsilon" value to check against (where epsilon is chosen by you)
if (yourvalue <= (0.02f + epsilon) && yourvalue >= (0.02f - epsilon))
// do what you want
I don't know if this is already implemented in C#; this is the "technical" approach.
Obviously the epsilon value should be small enough. I also suggest writing an extension method to feel more comfortable when using it.
Since you're writing unit tests, you can easily just compute what the exact output will be. Simply compute the output, then print it using the "roundtrip" format, and then paste that string into your unit test:
float output = Operation(array);
Console.WriteLine(output.ToString("r"));
Then you'll end up with something like Assert.AreEqual(100.02 - 100, 0.019999999999996)
Alternately, you can take the output of your unit test and convert it to a string, and then compare the string. Then you'll end up with something like:
Assert.AreEqual((100.02 - 100).ToString("f"), "0.02");
You shouldn't really compare floats for equality – you can't avoid this sort of rounding errors. Depending on your use case, either use decimals if you want rounding to be more intuitive in cases like yours, or compare the difference between the floats to a chosen "acceptable" error value.
I wouldn't characterize decimal as "completely precise".. it is also a finite-precision type and is going to have its own roundoff issues, albeit different and perhaps less surprising than those of float or double. For example (1m/3m)*3m!=1m
@Corey You're right, I've edited the answer to not be misleading.
|
STACK_EXCHANGE
|
Easily delete orders in PrestaShop with a click
You will often want to delete unwanted orders and test orders in the PrestaShop admin panel.
It's a free PrestaShop module that helps you safely delete the default order & test orders. New feature: the latest version of DeleteX provides an option to delete all orders in 1 click as its default functionality!
My case: We need to delete test orders
We need a clean and clear database for the production environment of your online store.
If you want to try your luck at managing orders on the order management page, you can access the Orders menu from the left area of the admin back office. It's easy to see the order status, export, or see the details of a selected order, but you are unable to delete it.
We decided to develop a module to delete any order. We call our PrestaShop cleaner DeleteX.
You do not need to pay $49 to use a module to delete your orders. You can download this PrestaShop module for only $9.
This PrestaShop delete orders free module will delete not only the PrestaShop orders table but also the cart product table, order invoice table, and all order details.
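To illustrate what "orders table plus related tables" means, here is a rough manual sketch in SQL. The table names assume PrestaShop's default ps_ prefix and are given for illustration only; the module itself handles this for you, and cart rows are keyed by id_cart rather than id_order:

```sql
-- Hypothetical manual cleanup for a single order (id_order = 123).
-- Run inside a transaction on a test shop only; a real module should
-- use PrestaShop's object model and cover every related table.
SET @id_order := 123;

DELETE FROM ps_order_detail  WHERE id_order = @id_order; -- line items
DELETE FROM ps_order_invoice WHERE id_order = @id_order; -- invoices
DELETE FROM ps_orders        WHERE id_order = @id_order; -- the order itself
```

Deleting only from ps_orders would leave orphaned rows behind, which is exactly why the module removes the related tables as well.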
This PrestaShop Addon also allows you to remove all orders from your store in seconds.
Why should you use this module for deleting orders?
This one is an effective way to remove duplicate orders or those that are no longer needed.
We make it easy to get rid of unwanted orders by allowing them to be removed in seconds with the Remove All Orders button. Then press the OK button on the confirmation message box, and everything will be removed in seconds.
How to delete only 1 test order in PrestaShop
We know sometimes you only want to delete 1 order. Type the order ID into the required field on the module's configuration page and click on the Remove button. DeleteX will delete all related information.
This app does not require an API, and works well with any PrestaShop Theme. Just download the installation package file and install it.
Its delete function is available in the default installation, is compatible with PrestaShop version 8 and PS 1.7, and is easy to use on both laptops and mobile devices, helping you remove order data from your shop with a single click on the delete button.
Can I get deleted things returned with this method?
"Nice. Now I know a way to clean up the crap with this database cleaner. Can I get a deleted order returned to me?"
This is a one-way street! There is no way to return a deleted order. This module is very useful for developers to clear the demo data & unnecessary orders, but don't try it on your live website.
Update 02/2023: DeleteX is compatible with the MultiStore feature & PrestaShop 8.
P.S.: Recently, we have received a lot of feedback from developers that they do not have time for referrals. So, we provide another way to get DeleteX: you can buy it for only $49 $9.
|
OPCFW_CODE
|
Plan zoo gay
her partner from The Office and also her close friend, Angela Kinsey. Many of the Zoo's animal ambassadors live there including a binturong, clouded leopards, crested porcupines, southern ground hornbills, and a tamandua. Monkey Trails utilizes a new method of displaying arboreal animals by climbing up an elevated walkway throughout the exhibit. Reptiles at the reptile house reptile walk include Chinese alligators, Galapagos tortoises, Komodo dragons, anacondas, cobras, monitor lizards, pythons, bushmasters, caiman lizards, Fiji banded iguanas, water cobras, short-nosed vine snakes, black-headed pythons, Boelen's pythons, coachwhips, side-striped palm-pit vipers, Ethiopian mountain adders, flower snakes, radiated tortoises. Retrieved June 24, 2013. "The San Diego Zoo's Absolutely Apes?
"San Diego attractions - San Diego zoo balboa park wild animal park visitor info San Diego California best attractions cheap tickets beaches zoo seaworld balboa park wild animal park la jolla legoland california whales watching old town coronado sandiego california la jolla seaport village cabrillo. The exhibit begins with a forested exhibit for okapi black duiker then winds past a recreation of two leaf-covered Mbuti huts with signage about the people's customs and traditions. Retrieved June 1, 2015. The Acacia Woodland exhibit will allow the Zoo to have more breeding spaces for the cats. Monkey Trails is home primarily to monkeys such as guenons, mangabeys, Angola colobuses, tufted capuchins, spider monkeys, and mandrills, but it also showcases many other species of animals, such as yellow-backed duikers, pygmy hippos, slender-snouted crocodiles, and many species of turtles, snakes, lizards, and fish. Zoological Society of San Diego. (California: Craftsman Press) Myers, Douglas (1999). The Zoo has a spotted and black Leopard, as well as two new cubs.
In 2016, Baba, the last pangolin on display in North America at the time, died at the zoo. It does contain explicit material of sexual activities, but only in the view of video footage shown on a small television screen. Elephant Odyssey's other animal exhibits include African lions, jaguars, Baird's tapirs, guanacos, capybaras, spur-winged geese, two-toed sloths, Kirk's dik-diks, secretary birds, dung beetles, water beetles, desert tarantulas, toads, newts, turtles, frogs, dromedary camels, pronghorns, Przewalski's wild horses, burros, llamas, rattlesnakes, western pond turtles, and the. Some of the kits are: Conservation Kit, Endangered Species Kit, Behavioral Enrichment Kit, and Animal Diet Kit. It employs numerous professional geneticists, cytologists, and veterinarians and maintains a cryopreservation facility for rare sperm and eggs called the frozen zoo. Animals are regularly exchanged between the two locations, as well as between San Diego Zoo and other zoos around the world, usually in accordance with Species Survival Plan recommendations. She stated in an interview that she can type 85 words per minute with 90 percent accuracy.
|
OPCFW_CODE
|
How many email accounts is reasonable for Outlook to open simultaneously?
I have someone who wants to add hundreds of other users' email accounts to Outlook. We've tried in the past and run into issues with random Outlook errors. Is there any guidance as to how many Exchange accounts Outlook can reasonably handle?
We have disabled cached exchange mode for the remote mailboxes. We have also upgraded his computer so that Windows and Outlook are both 64-bit, along with having 8GB of memory with nothing else installed.
Clearly 1-5 mailboxes work fine, but is there anything official, or any literature that I can show him that shows that it's not designed for that (or maybe that it is).
I'm just curious why he wants to add hundreds of email accounts to a single Outlook profile. It just sounds so silly. And just a thought: wouldn't it be easier to get all the Exchange accounts to forward their mail to one account and have that account be in Outlook?
When there are issues within the company, he wants the ability to go into anyone's account and quickly view the conversation. It will be a rare event, but he wants the instant, convenient access.
You're right, it is a bit silly, but that's how he wants it unfortunately unless I can find something official that says it's a terrible idea. Ideally, I would have a Microsoft page saying it's not supported, but I don't think they specify any limits explicitly.
Oh I see, so forwarding all the mail to a single account would not really be optimal for that usage. Wouldn't it be easier to just access a specific mailbox through Exchange OWA? Rather than look through hundreds of tabs in Outlook for the specific user (if you ever got it working that way), you can go straight into a user's mailbox through OWA.
Why does this person want access to these accounts? Wouldn't it be easier to just forward the conversation? Heck, make it company policy to BCC the person on all emails. Beyond the fact that it's a horrible invasion of privacy (I understand nothing is ever private in this situation), you obviously attempted it in the past and failed. I am not familiar with the inner workings of Exchange, but I am sure there is a way to set up a BCC that always happens. You would need a great deal more than 8GB of memory to load hundreds of email accounts (which amounts to millions of emails).
As Jay points out, Outlook really isn't the correct tool to do this; using the management tools for Exchange would likely be a better idea.
By default, you can add up to 10 Exchange accounts in Outlook 2010. To add more accounts you will have to use registry tweaks. I think that pretty much speaks for itself.
Here is the Microsoft link you asked for:
http://technet.microsoft.com/en-us/library/ee815819.aspx
This is using one Exchange account, with permissions to additional mailboxes. I was able to get it to work with dozens without any tweaks.
Then specify this in your question, because additional mailboxes are not email accounts.
Really? How are they not? You're talking about multiple Exchange connections. This is one connection that gets multiple mailboxes (which I believe are the same as email accounts in this case).
In terms of Outlook they are not.
I'm with @thims on this one, Mailbox != Account
|
STACK_EXCHANGE
|
@PostConstruct Interceptor with @Named @ViewScoped not invoked
I have read carefully the article about Interceptors in the Seam/Weld documentation and implemented a InterceptorBinding:
@InterceptorBinding
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD, ElementType.TYPE})
public @interface MyLog {}
and a Interceptor class:
@MyLog @Interceptor
public class ErpLogInterceptor implements Serializable
{
@AroundInvoke
public Object logMethodEntry(InvocationContext invocationContext) throws Exception
{..}
@PostConstruct
public Object logPostConstruct(InvocationContext invocationContext) throws Exception
{...}
}
Now I tried to activate the interceptor in the @Named @ViewScoped bean:
@javax.inject.Named
@javax.faces.bean.ViewScoped
public class MyBean implements Serializable
{
@PostConstruct @MyLog
public void init()
{...}
@MyLog public void toggleButton()
{..}
}
If I push a button on my JSF page, the method toggleButton is invoked correctly and the interceptor method logMethodEntry is called. But it seems the @PostConstruct method (the one I am interested in) is never intercepted by my class.
The question seems to be related to Java EE Interceptors and @ViewScoped bean but actually my interceptor is working in normal methods.
Are you using CODI as outlined in the answer of the linked question? The JSF-specific @ViewScoped otherwise simply doesn't work in a CDI-specific @Named bean.
Are you sure the interceptors work for lifecycle methods?
@AdrianMitev: At least these interceptors are described in the document I'm referencing.
What happens if you use a CDI scope instead of the JSF one (yes I know Seam 3 changes it, but there may be something with the interceptors it isn't doing)?
@LightGuard: Changed the scope to @javax.enterprise.context.SessionScoped, but nothing during @PostConstruct
Sounds like it's probably a bug, however, we're no longer doing work on Seam 3.
@LightGuard: I know you're heavily involved in Seam3, and this question is most probably off-topic, but what exactly do you mean with "we're no longer doing work on Seam 3"? Can you point to a URL?
The last release was 3.1.0.Final nearly a year ago. We're putting our efforts into Apache DeltaSpike.
You should set the return type of the @PostConstruct interceptor method to void, not Object. Lifecycle callback interceptor methods are expected to return void.
Change:
@PostConstruct
public Object logPostConstruct(InvocationContext invocationContext) throws Exception
{...}
to:
@PostConstruct
public void logPostConstruct(InvocationContext invocationContext) throws Exception
{...}
|
STACK_EXCHANGE
|
import './style.css';
import 'bootstrap/dist/css/bootstrap.css';
import {select, max, dispatch} from 'd3';
import {
countryCodePromise,
migrationDataCombined
} from './data';
import {
groupBySubregionByYear
} from './utils';
//View modules
import Composition from './viewModules/Composition';
import LineChart from './viewModules/LineChart';
import Cartogram from './viewModules/Cartogram';
//Global variables
let originCode = "840";
let currentYear = 2017;
//Create global dispatch object
const globalDispatch = dispatch("change:country","change:year");
globalDispatch.on('change:country', (code, displayName) => {
originCode = code;
//Update title
title.html(displayName);
//Update other view modules
console.log(migrationDataCombined)
migrationDataCombined.then(data => {
const filteredData = data.filter(d => d.origin_code === originCode);
renderLineCharts(groupBySubregionByYear(filteredData));
renderComposition(filteredData, currentYear);
renderCartogram(filteredData, currentYear);
});
});
globalDispatch.on('change:year', year => {
currentYear = +year;
//Update other view modules
migrationDataCombined.then(data => {
const filteredData = data.filter(d => d.origin_code === originCode);
renderComposition(filteredData, currentYear);
renderCartogram(filteredData, currentYear);
});
});
/*
* DATA IMPORT
*/
// Data import is completed in the background via Promises
// When data import is complete, call change:country event on globalDispatch to render view components
migrationDataCombined.then(() =>
globalDispatch.call(
'change:country',
null,
"840",
"United States"
));
countryCodePromise.then(countryCode => renderMenu(countryCode));
/*
* UPDATE VIEW MODULES
* Update line chart, composition, cartogram, and menu view modules
*/
//Build UI for countryTitle component
const title = select('.country-view')
.insert('h1', '.cartogram-container')
.html('World');
function renderLineCharts(data){
//Find max value in data
const maxValue = max( data.map(subregion => max(subregion.values, d => d.value)) ) //[]x18
const lineChart = LineChart()
.maxY(maxValue)
.onChangeYear(
year => globalDispatch.call('change:year',null, year) //function, "callback function" to be executed upon the event
);
const charts = select('.chart-container')
.selectAll('.chart')
.data(data, d => d.key);
const chartsEnter = charts.enter()
.append('div')
.attr('class','chart')
charts.exit().remove();
charts.merge(chartsEnter)
.each(function(d){
lineChart(
d.values,
this,
d.key
);
});
}
function renderComposition(data, year){
const composition = Composition();
if(year){
//if year value is not undefined
composition.year(year);
}
select('.composition-container')
.each(function(){
composition(this, data);
});
}
function renderCartogram(data,year){
const cartogram = Cartogram();
if(year){
cartogram.year(year);
}
select('.cartogram-container')
.each(function(){
cartogram(this, data);
});
}
function renderMenu(countryCode){
//Get list of countryCode values
const countryList = Array.from(countryCode.entries());
//Build UI for <select> menu
let menu = select('.nav')
.selectAll('select')
.data([1]);
menu = menu.enter()
.append('select')
.attr('class','form-control form-control-sm')
.merge(menu);
//Add <option> tag under <select>
menu.selectAll('option')
.data(countryList)
.enter()
.append('option')
.attr('value', d => d[1])
.html(d => d[0]);
//Define behavior for <select> menu
menu.on('change', function(){
const code = this.value; //3-digit code
const idx = this.selectedIndex;
const display = this.options[idx].innerHTML;
globalDispatch.call('change:country',null,code,display);
});
}
|
STACK_EDU
|
Machine learning is trending in the security industry. A quick Google search brings up nearly every security company and many technology companies jumping on the machine learning bandwagon as the newest silver bullet against threats. But machine learning isn't new. In fact, it has been around for decades.
Machine learning is a series of algorithms, the simplest being a decision tree that proceeds through yes-or-no paths, classifying things until it reaches a conclusion. This is supervised machine learning. It classifies what it has been trained to classify, and it is incredibly valuable for strengthening security. If you train a machine-learning algorithm to look for certain words used together, it does a great job of identifying and combating spam. Train it to look for specific indicators, and it can detect and stop various types of malware. We can even train it to monitor internet usage to detect malicious internet events. And we're getting better at training, taking advantage of bigger data sets and more processing power to train on more things faster, allowing us to apply machine learning to specialized problems. We now have thousands of algorithms coming together to identify a broad swath of known vulnerabilities and threats.
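To make the decision-tree idea concrete, here is a toy sketch in Python. The word pairs, the exclamation-mark rule, and the function name are illustrative assumptions for this article, not any product's actual model:

```python
# Toy supervised classifier: a hand-rolled decision path that walks
# yes/no questions about word co-occurrence until it reaches a label.
SPAM_PAIRS = [("free", "winner"), ("wire", "transfer"), ("click", "prize")]

def classify_message(text):
    """Return 'spam' or 'ham' by descending a fixed decision path."""
    words = set(text.lower().split())
    # Node 1: do any "trained" word pairs co-occur?
    for a, b in SPAM_PAIRS:
        if a in words and b in words:
            return "spam"
    # Node 2: excessive exclamation marks?
    if text.count("!") >= 3:
        return "spam"
    # Leaf: nothing matched what we trained on.
    return "ham"
```

A real system would learn the split points from labeled data rather than hard-coding them, but the classification step is the same cascade of cheap tests.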
There is also unsupervised machine learning. It helps to detect unknown threats by learning a common set of behaviors and looking for activity that falls outside those conventions. For example, it may know that a user always logs on between 8 a.m. and 5 p.m. from San Francisco from a specific laptop. All of a sudden the user appears to log in from Russia from a different computer and is downloading documents. Behaviors it has never seen before help it identify potential malicious activity.
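A minimal sketch of that baseline idea, with profile fields and flagging rules chosen purely for illustration:

```python
# Toy behavioral baseline: learn what a user's past logins look like,
# then flag events that fall outside everything seen before.
def build_profile(history):
    """Learn the (country, device) pairs and login-hour range seen so far."""
    return {
        "pairs": {(e["country"], e["device"]) for e in history},
        "hours": (min(e["hour"] for e in history),
                  max(e["hour"] for e in history)),
    }

def is_anomalous(profile, event):
    """Flag logins from an unseen country/device or outside usual hours."""
    lo, hi = profile["hours"]
    unseen = (event["country"], event["device"]) not in profile["pairs"]
    off_hours = not (lo <= event["hour"] <= hi)
    return unseen or off_hours
```

Production systems use statistical models rather than exact set membership, but the shape is the same: build a profile of normal, then score deviations from it.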
Together, these two categories show that machine learning holds great promise. Our experiences with Amazon, Google, or Facebook reinforce this perception, learning and delivering information in real time and solving problems at a massive scale. But unlike these modern apps, security is a beast unto itself. If Amazon recommends a product that isn't a good fit, or if a Google search comes up with something that isn't relevant, it's a minor misstep. In contrast, security is a high stakes game with adversaries continuously changing tactics, techniques, and procedures (TTPs) to infiltrate organizations and cause disruption or steal high-value data. Security practitioners must deal with a tremendous amount of noise, making response incredibly difficult. Block something in error and your entire infrastructure could shut down; the toxicity of a false positive can be massive.
Determining which events warrant attention is no easy task – it's like finding needles in a haystack. While many modern business applications can rely exclusively on machine learning in a hands-off approach, security has always required attention and tuning. Machine learning is incredibly valuable to strengthen security. In fact, how we will continue to bolster defenses relies a great deal on the advances in machine learning. But it isn't a panacea. What's needed is a combination of numerous and differing technologies along with humans working together to achieve the highest levels of security effectiveness.
We're in the early days of this collaborative approach, and it requires a different skill set – security experts, working with data science experts, working with backend professionals to query and understand the data correctly. We're seeing that with time, as they interact and learn from each other, teams get stronger, machines get stronger, and effectiveness improves. This approach augments the lack of human capital the security industry is facing and allows organizations to scale faster. Efficacy rates go up because you can discover and block more threats. And you gain context which is critical to finding the needles in the haystack, reducing the firehose of information to a more manageable subset of higher priority events that are mission critical.
With the right information at the right time you can make more informed decisions, faster. As we continue to improve and drive mean time to detection of threats closer to zero, we can evolve from a traditionally reactive to more predictive stance. By combining the patterns of behavior machine learning can detect with human expertise, we can begin to anticipate and better prepare for what may happen next, how, where, and when.
As I said earlier, there's a lot of buzz about machine learning, and it's going to continue. The problem is that it can be hard to judge the quality of machine learning, because the process often isn't easily understood. Without visibility into how it came to an outcome, you can't understand exactly “why” it decided to detect something, which can make investigation hard. And if you get an incorrect answer you can't determine what went wrong to correct the false positive in the future. There is also no practical way to fully test machine learning in a custom environment to determine where mistakes may occur.
But instead of blindly trusting a vendor's approach to machine learning, which can be risky, ask for a proof of concept. With evidence that the solution using the algorithms will perform in your environment and provide capabilities and context that will add value to your team, you can build confidence in its use.
The promise of machine learning is to help drive down the time to detect and respond to threats as part of an overall security operation, making the humans and systems you have more effective and scalable, with the context they need to focus on real threats today, and better anticipate threats tomorrow. That's a big promise that's sure to keep machine learning at the forefront of security innovation.
|
OPCFW_CODE
|
Our biweekly updates written by the entire Prysmatic Labs team on the Ethereum 2.0 roadmap
Eth2 Research Updates
Execution environments and EWASM for Eth 2.0
We’ve begun to estimate the work needed for Prysm to operate in a phase 2 execution-environment-only testnet. There are parallel efforts going on for phase 0, phase 1, and phase 2. The eWASM team has recently released a black-boxed prototyping environment for the phase 2 execution environment; it allows researchers to experiment with and benchmark execution environment features. The goal is to get a better idea of the speed of the WASM code and the overall throughput of the execution environment. Given that the current phase 0 and phase 1 are black-boxed for Scout, it is important to initiate an effort to integrate Scout with an eth2 client to test end-to-end performance. Expect more updates on this from us in the upcoming weeks!
Only 3 days until spec freeze!
We couldn’t be more excited about the upcoming spec freeze this week!
Merged Code, Pull Requests, and Issues
Ethereum 2.0 Common APIs Repository
One of the most important pieces of Ethereum 2.0 in phase 0 is access to data. We’ve started to collect service schema definitions that we find valuable to validator client operators, block explorers, and anyone interested in the data of Ethereum. We collected data API use cases in this product requirements document and put that forward for feedback from the community.
We turned that feedback into a self-describing schema utilizing protocol buffers with the hope that we can start Ethereum 2.0 with a consistent and easy-to-use API from day 0. Expect to see these service definitions implemented in the next few weeks!
Wrapping up v0.7.1 spec conformity tests
We have finished and passed the v0.7.1 spec tests! That means the Prysm client fully conforms to the eth2.0-spec-tests standard and can begin to talk to other clients that conform to the same standard. We see this as a big step towards multi-client interoperability. So what’s in a spec test? It consists of state transition functions for epoch processing, block processing, and slot processing. Passing these means every client’s processed state will always be the same. In addition to the state transition functions, the spec tests also cover BLS and SSZ to ensure every client can hash, encode, and decode beacon chain objects the same way.
Our Go-SSZ Implementation Passes Conformity Tests
We have brought our Simple Serialize (SSZ) implementation into conformity with the official v0.7.1 Ethereum 2.0 specification. It lives in its own repository here under the permissive MIT license. We have opted for building a correct and robust implementation of SSZ with an easy-to-use API matching the popular JSON marshaler/unmarshaler for Go.
Our next focus on the project is to work on a whole suite of optimizations and improvements for greater usability and speed in production. For now, we are content it passes all spec tests. Feel free to suggest improvements to the project through pull requests/issues!
BLS Updated To Pass Conformity Tests
We currently use the phore library as our underlying BLS library. However, the phore library didn’t implement the domain parameter in its Sign and Verify methods, which was blocking us from implementing the BLS spec tests. So we opened up a PR that allows us to provide the domain parameter when signing messages and verifying signatures. With those methods added, we were able to pass the BLS spec tests.
Optimizing and Stress Testing Prysm
The next item we’ve been itching to work on is the core optimization of Prysm itself. Our testnet release has shed a lot of light on where our bottlenecks are and has given us valuable insight into where any future refactoring efforts should begin. On our radar: radically improving the data layer of Prysm, how we cache information, and making syncing with the chain a lot more robust than its current flaky state. A particular goal clients should have is to attempt 1-second block time testnets, which will put every part of our system under immense pressure; exactly what we need before we go into a multi-client or mainnet scenario.
Eth 2.0 Implementers Call June 27, 2019
Due to technical issues, there was no livestream for the call, but Ben Edgington put together a nice tweetstorm summarizing the discussion here. When the EF researchers surveyed various teams about how confident they would be in being production ready by Jan 3, 2020, Prysmatic Labs is cautiously optimistic at a 6/10 score. We aim to be objective in our work, and put together the highest quality software we can. Jan 3 is not a deadline, but is a solid, tentative date that keeps everyone on their feet working hard towards it.
Interested in Contributing?
We are always looking for devs interested in helping us out. If you know Go or Solidity and want to contribute to the forefront of research on Ethereum, please drop us a line and we’d be more than happy to help onboard you :).
Official, Prysmatic Labs Ether Donation Address
Official, Prysmatic Labs ENS Name
|
OPCFW_CODE
|
A lot of people learn object-oriented programming with languages like C++ and Java. And while they learn, they’re told over and over again that encapsulation is one of the key principles of object-oriented paradigm and that they should take advantage of it.
In C++ and Java, things are pretty straightforward. There are 3 magical and easy-to-remember access modifiers that will do the job (public, protected, and private). But there's no such thing in Python. That might be a little confusing at first, but encapsulation is possible in Python too. We'll have a look at how to do it right.
Python doesn’t have any mechanisms that would effectively restrict you from accessing a variable or calling a member method. All of this is a matter of culture and convention.
All member variables and methods are public by default in Python. So when you want to make a member public, you just do nothing. Let's understand it using the example below:
>>> class employee:
...     def __init__(self, name, sal):
...         self.name = name
...         self.salary = sal
You can access the employee class's attributes and also modify their values, as shown below.
>>> e1 = employee("Kiran", 10000)
>>> e1.salary
10000
>>> e1.salary = 20000
>>> e1.salary
20000
A protected member is (in C++ and Java) accessible only from within the class and its subclasses. How do we accomplish this in Python? The answer is: by convention.
Python's convention to mark an instance variable as protected is to prefix it with a single underscore (_). By convention, it should then only be accessed from within the class or a subclass. See the example below:
>>> class employee:
...     def __init__(self, name, sal):
...         self._name = name      # protected attribute
...         self._salary = sal     # protected attribute
This changes virtually nothing; you'll still be able to access and modify the variable from outside the class. You can still perform the following operations:
>>> e1 = employee("Swati", 10000)
>>> e1._salary
10000
>>> e1._salary = 20000
>>> e1._salary
20000
If that happens, you politely explain to the person responsible that the variable is protected and that they should not access it or, even worse, change it from outside the class.
Hence, the responsible programmer would refrain from accessing and modifying instance variables prefixed with _ from outside its class.
By declaring your data member private you mean that nobody should be able to access it from outside the class, i.e. a strong "you can't touch this" policy. Python's convention for private members is a double-underscore prefix, and any attempt to access one from outside will result in an AttributeError.
>>> class employee:
...     def __init__(self, name, sal):
...         self.__name = name     # private attribute
...         self.__salary = sal    # private attribute
Let’s try to access using private attribute directly –
>>> e1 = employee("Bill", 10000)
>>> e1.__salary
AttributeError: 'employee' object has no attribute '__salary'
As we can see that we’re getting AttributeError.
Python supports a technique called name mangling, which it performs on private variables. Every member with a double-underscore prefix is renamed to _classname__variable, so it can be reached from outside as object._classname__variable. If really required, it can still be accessed from outside the class, but the practice should be avoided.
>>> e1 = employee("Bill", 10000)
>>> e1._employee__salary
10000
>>> e1._employee__salary = 20000
>>> e1._employee__salary
20000
Let’s put everything together for better understanding:
class Hello:
    def __init__(self, name):
        self.public_variable = 10
        self.__private_variable = 30

    def public_method(self):
        print(self.public_variable)
        print(self.__private_variable)
        print('public')
        self.__private_method()

    def __private_method(self):
        print('private')
Now let’s invoke these methods and variables-
>>> hello = Hello('name')
>>> print(hello.public_variable)
10
>>> hello.public_method()
10
30
public
private
>>> print(hello.__private_variable)   # not allowed outside the class
AttributeError: 'Hello' object has no attribute '__private_variable'
>>> hello.__private_method()          # not allowed outside the class
AttributeError: 'Hello' object has no attribute '__private_method'
So the bottom lines are:
- Default: All members in a Python class are public by default.
- Public member: As per Python's convention, any member whose name has no leading-underscore prefix is public and can be accessed from outside the class.
- Private variable: Private variables can be used only inside the class. You'll get an AttributeError as soon as you access one from outside the class. But you can always use a private member variable inside the class or any method of that class.
- Private method: Use a double-underscore prefix on any method to make it private. The same restrictions apply as for private variables.
- No IntelliSense: You won't see private methods in suggestions while accessing the class from outside. If you access one anyway, it'll throw an AttributeError saying the attribute doesn't exist.
- You can use your private method inside the class through the self keyword, just like any member variable.
|
OPCFW_CODE
|
Facebook Connect always returns app to splash screen on iPad only
As the title suggests, I have an app which uses Facebook connect to log users in.
On the iPhone this works fine; it switches to the Facebook app, logs in and then goes back to my app at the same place it left off.
However on an iPad (I only have an iPad 1 to hand but I'm presuming the problem is across all 3), when it returns to my app it seems to have restarted it entirely. It goes back to the splash screen, and then to the login screen. The user is stuck in an endless loop of "unsuccessfully" logging in, despite the fact that the Facebook app is logging in correctly.
Does anyone have any idea why this could be happening on the iPad but not an iPhone?
This app is actually inherited from a much older app which was iPhone only, and to simplify things it has essentially been left that way. The images etc are just scaled based on screen size, there is no differentiation in the code between iPhones and iPads other than their screen size. The facebook connect code has a "FBIsDeviceIPad" bool, but afaik that is just for setting the position and size of the popup login dialog when not using SSO.
Edit:
Further investigation suggests it could be an issue with OpenGL ES. The app crashes when sent to the background, as the OpenGL ES code carries on trying to animate, etc. The Facebook app momentarily puts my app into the background, ergo the app crashes and restarts.
I'll update this once I find out how to fix this, in the meantime if anyone has already dealt with this situation I would welcome any suggestions.
Turns out it's a memory issue as noted in the question edit. Still not resolved how to get it working, but this is at least the cause of the problem.
In your AppDelegate Class implement this code
- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url
{
return [facebook handleOpenURL:url];
}
I've already done this. As I've already mentioned, everything works correctly on the iPhone already.
For me, it was the permissions that made a difference. I had the same problem on the iPhone, and I reduced the permissions and the app stopped doing that.
What do you mean by reducing the permissions ? Asking less permissions ? Ex: @[@"public_profile", @"email", @"user_friends", @"user_birthday"]
@rousseauo It is an old comment, so I dont exactly remember what i did, but based on what I wrote, i think yeah. The Facebook sdk has changed a lot since, so you might want to test it out. Not sure if it will help you.
|
STACK_EXCHANGE
|
Gartner says that the Mobile Enterprise Application Platform (MEAP) market will top $1 Billion by the end of 2010 and that more than 95% of organizations will choose MEAP instead of point solutions through 2012.
The big takeaway here is that companies have been building tactical mobile application silos that support only one application, and now they want to save money by going with a reusable platform capable of supporting multiple applications. Oh, and along the way it needs to support multiple device and OS platforms while providing security, device management, and a single IDE to build apps and logic that integrates with back-end systems. Gartner's "Rule of Three" says a MEAP makes sense:
- When there are 3 or more mobile applications
- When there are 3 or more targeted operating systems or platforms
- When they involve the integration of 3 or more back-end systems
Leaders in this space have included Sybase iAnywhere, Antenna, Dexterra, Syclo and Spring Wireless. Microsoft goes from a large Mobile General Store with myriad solutions to a player in this space with a MEAP solution of our own: Visual Studio is used to build the mobile logic and UI. Merge Replication provides occasionally-connected data synchronization between SQL Server Compact on the mobile device and SQL Server in the data center. SQL Server Business Intelligence Development Studio is used to visually create connections to back-end systems like SAP or databases like Oracle. Data in transit is secured via SSL or VPN, data at rest is encrypted via device encryption, SQL Server Compact, BitLocker or programmatically through the Crypto API. Integration packages that communicate with back-end systems are encrypted and digitally signed.
We already have the best mobile email, calendaring, and contacts product in the business, where Exchange ActiveSync keeps Outlook and Outlook Mobile always up to date with Exchange Server. Server-to-device as well as peer-to-peer device notifications are facilitated through WCF Store and Forward on Exchange. Software and patch distribution, along with device settings and policy management, is accomplished via System Center Configuration Manager. ISA Server provides both VPN and Reverse Proxy access to roaming applications on the Internet on any platform.
When you put this stack in place and reuse it for multiple mobile applications instead of going with point solutions, ROI savings increase as the need for POCs, pilots, and training is reduced and the need for extra client access licenses is eliminated. That’s Gartner’s first requirement. We hit Gartner’s second requirement by uniformly supporting 3 mobile operating systems in the form of Windows, Windows CE, and Windows Mobile. Last but not least, our SQL Server Integration Services technology combined with dozens of connectors means we can connect your mobile devices with almost any back-end package or database.
Yes, Microsoft does have a Mobile Enterprise Application Platform that’s already proven to scale to tens of thousands of devices and it will definitely save you time and money.
Sharing my knowledge and helping others never stops, so connect with me on my blog at https://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany
Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”
|
OPCFW_CODE
|
Bitcoin A Decentralized, Public Ledger Digital Asset Transfer Mechanism The views and opinions expressed here are not necessarily those of the Federal Reserve Bank of St. Louis or of the Federal Reserve System. David Andolfatto
Money and Payments System 1. Monetary tokens: record-keeping devices. 2. Ledgers: accounts containing monetary objects. 3. Payment protocols: value transfer mechanisms. 4. Monetary policy: protocol to manage money supply.
Money and Payments System 4. Monetary policy: managing the supply of marbles.
Three roles of money 1. Medium of exchange: record-keeping, info-value transfer. 2. Store of value (short-run vs long-run). Low/stable ROR preferred to high/volatile ROR. Money is used in high-velocity transactions, so stable short-run ROR more important than high long-run ROR. Note: base money is a small fraction of total wealth. 3. Unit of account. Determined by social convention, distinct from MOE. E.g., USD as vehicle currency, hyperinflations.
Early money and payment systems Virtual data (memories) recorded on shared and distributed ledger (communal brains) updated via consensus algorithm (gossip). Gift-giving economies, social credit systems. Invention of coined tokens and precious metals (specie). Permits P2P exchange on invisible ledger (anonymous trade). 17th C. goldsmith banks issue paper money (receipts). 19th C. banknotes redeemable for specie, backed by assets.
How things work today Monetary objects consist of liabilities issued by central and chartered private banks: physical (cash) and digital objects. bank liabilities legally a digital version of 19 th C. banknotes. Cash payment: P2P debit/credit of personal wallets. permissionless accounts held in invisible ledger (personal wallets), few KYC restrictions, anonymous (paper leaves no trail). Digital payment: intermediated debit/credit of bank accounts. permissioned accounts, held in private bank ledgers, KYC restrictions, identifiable payment history (digital trail).
Complaint 2: Payments System Outdated payment systems (e.g., ACH). Lack of direct communication between banks. Correspondent banking system for international payments. Banks and MSBs charge too much. Too much personal information presently at risk.
Bitcoin is a money & payments system Digital money supply (bitcoin) managed by open-source computer algorithm (Bitcoin). [ 21M unit cap ]. BTC exist on a publicly visible, distributed ledger: block chain. Block chain = history of all transactions. BTC “account” = public + private key pairs (crypto. secure) Public-address transparent P.O. box + private key. Permissionless, pseudonymous. Payment requests processed via consensus protocol called proof of work (POW). No banks!
Bitcoin: deliverables Capped money supply (hopefully) means long-run price-level stability. Free ourselves of central banks (alas, not governments). Ability to transfer value (bitcoin) between any two points in the world (connected to the Internet) at trivial cost. Free ourselves of all banks (except from their lending arms).
What’s the innovation? Virtual currency? Open-source algorithm? No. Public/private key cryptography? Pseudonymous accounts? No. Transparent online database? No. Algorithmically determined money supply? Not really. Innovation: block chain and POW consensus protocol. Packaged together with aforementioned innovations. Bitcoin solved the double-spend problem of P2P digital asset transfer (and mon. supp.) without use of “trusted” intermediaries. As close as one can get to unintermediated digital cash.
POW: Dynamic Membership Multisig P2P payment requests broadcast to network are validated and added to the block chain by pseudonymous “miner nodes.” Miners must collectively “sign off” on requests. How? Simple voting procedure will not work: Sybil attacks (spam). POW makes it artificially expensive (hardware + electricity) to validate payments—guards against Sybil attacks. Miners compete against each other to solve math problem, winner gets to add validated requests to block chain. (Winning) miners compensated in newly-issued bitcoin (and fees).
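The POW competition sketched above can be illustrated with a toy hash puzzle. This is only a sketch: the payload format, the difficulty measured in leading zero hex digits, and the single SHA-256 pass are all illustrative simplifications (real Bitcoin double-hashes a binary block header against a numeric target):

```python
import hashlib

def mine(block_data, difficulty):
    """Search for a nonce whose hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # proof that work was expended
        nonce += 1

# Finding the nonce is expensive; checking it is one hash.
nonce = mine("alice->bob:1BTC", 3)
digest = hashlib.sha256(f"alice->bob:1BTC:{nonce}".encode()).hexdigest()
print(nonce, digest)
```

The asymmetry is the point: miners burn hardware and electricity searching for a valid nonce, while any node can verify the winning block with a single cheap hash, which is what makes Sybil spam uneconomical.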
So, what’s the big deal? A fixed supply of electronic digits that can be transferred over the Internet without traditional middlemen. P2P transactions, no need to relinquish personal info to 3rd parties. Can send $100M of BTC from here to Moscow in 10 min. almost zero user cost (social cost higher). Bypass clunky ACH, correspondent banking system. Huge deal for international remittances market. Block chain has proven to be highly secure (so far). Episodes of loss related to 3rd parties (Mt. Gox) not BTC.
Example: International Payments Want to send $100 from my Chase NY to your account at Bank of Cyprus, Nicosia ($50 fees at each end?). Align Commerce claims to “use the blockchain” to effect same transfer at much lower cost (price). How? 1. AC ACH debits my USD Chase account (slowest part). 2. AC buys BTC (sells USD) on domestic BTC exchange. 3. AC sells BTC (buys EUR) on foreign BTC exchange. 4. AC SEPA credits your Bank of Cyprus account.
Example: International Payments Want to send $100 from my Chase NY to your account at Bank of Cyprus, Nicosia ($50 fees at each end?). Align Commerce claims to “use the blockchain” to effect same transfer at much lower cost. 1. AC ACH debits my Chase account (slowest part). 2. AC buys BTC (sells USD) on domestic BTC exchange. 3. AC sells BTC (buys EUR) on foreign BTC exchange. 4. AC SEPA credits your Bank of Cyprus account. “Using the blockchain” = using BTC as a “vehicle currency.” “free”
Bitcoin as digital gold Like gold, supply of bitcoin not subject to government manipulation. Though government can still tax and legislate. Compliance is an issue (it always is). W/o hassle of storing, assaying, transporting physical gold. Highly volatile, but may nevertheless be preferable to mismanaged monetary systems?
Downsides Highly volatile exchange rate (e.g., against unit of account). Though, not such a big deal if BTC became the unit of account. But inelastic money supply makes it a lousy money. While user costs are (presently) low, total costs (miner compensation) range between 2-4% of transaction volume. Most of this cost is financed through seigniorage revenue. Since BTC supply will cease to grow, user fees will have to replace seigniorage. Not entirely clear where this cost will settle.
Statistics BTC in circulation 14M = $3B (compare USD, $1.3T). 75 transactions/min (compare Visa, 200,000/min). (Ironically) Most are intermediated via Coinbase, Bitpay, etc.
Bitcoin and banking Bitcoin solves a payment problem, not a credit problem. Credit supported by reputation, collateral. Banks turn illiquid assets (reputation, collateral) into money. Demandable liabilities, redeemable for base money @ par. Base money = historically specie, today fiat currency. Fractional reserve banking (liquidity transformation). What prevents banks from issuing demandable liabilities redeemable in BTC (or any other object)?
Bitcoin and banking U.S. “free” banking era 1836-63. Thousands of private state-chartered banks issuing their own banknotes redeemable on demand for specie. Private clearinghouses (e.g., Suffolk Bank 1818-58). National banking era 1863-1913. National banknotes redeemable for specie on demand. State banknotes replaced by checking accounts. No central bank, no lender of last resort.
Bitcoin vs. Fedcoin Existing banking system largely based on pre-Internet tech. Why not permit online utility accounts with Fed? U.S. Treasury: http://www.treasurydirect.gov/ Or full-service narrow banks (100% reserves). Dispense with FDIC. Dirt cheap payment processing costs (vs. POW). No exchange rate volatility relative to USD. New monetary policy instrument: interest-bearing money? But subject to U.S. monetary policy, not permissionless, etc.
Future of Money and Payments Fiat systems with politically insulated protocols have little to fear. USD already faces fierce competition. POW is costly relative to trusted central ledger—mining is a waste of resources (that may be worth paying). Fedcoin? Primary consequence of BTC will be to drive rents lower. Consumers will benefit, but can MSBs using block chain compete? Where is comparative advantage? Technology? Compliance? Credit side of banking (liquidity transformation) in regulated and shadow banking sectors not immediately affected (Bitcoin 2.0?)
Beyond Bitcoin Key innovation is the block chain—a distributed public ledger maintained by communal consensus mechanism. Think of any activity that currently makes use of intermediaries—these are all under threat. Banking and money services businesses. Brokerage, title transfer, escrow services. Dispute resolution mechanisms, voting, etc. “This is 1995 for the Internet.” Who knows what lies ahead...
Thank you David Andolfatto VP Research, FRB St. Louis May 01, 2015
|
OPCFW_CODE
|
Hear doesn't work with wit middleware
Hello,
I have this code:
$config = [
'microsoft_app_id' => $this->dbbot->microsoft_app_id,
'microsoft_app_key' => $this->dbbot->microsoft_app_key,
'microsoft_bot_handle' => $this->dbbot->name
];
$millow = BotFactory::create($config, new LaravelCache(), $request, new FileStorage(storage_path('bot')));
$millow->hears('test', function (myBot $bot) use ($id) {
$bot->reply('This is a test!');
});
$millow->middleware(Wit::create('*********'));
$millow->fallback(function (myBot $bot) use ($id) {
$bot->userStorage()->save(['id' => $id], 'bot');
$this->getContact($bot, $id);
$extras = $bot->getMessage()->getExtras();
$entities = $extras['entities'];
try {
$intent = $entities['intent'][0]['value'];
switch ($intent) {
case "greetings":
return $bot->randomReply(trans('bot.random.greetings'));
break;
case "info":
return $bot->reply(trans('bot.random.info', [
'name' => $this->dbbot->name,
'company' => $this->dbbot->company
]));
break;
default:
return $bot->reply('The intent is: ' . $intent);
break;
}
} catch (\Exception $e) {
$bot->reply('Sorry, I did not understand you.');
}
});
$millow->listen();
If I type 'test' on the bot, I get the message 'Sorry, I did not understand you.'
If I remove the line $millow->middleware(Wit::create('*********')); and type test, I get the correct reply 'This is a test!'.
I am using Laravel.
Is this a bug or am I doing something wrong?
Thank you.
Hi!
It depends on how you set up wit.ai
The middleware is currently listening for a wit.ai entity called intent.
See: https://github.com/mpociot/botman/blob/master/src/Mpociot/BotMan/Middleware/Wit.php#L73
So the text you put in hears is matched against the entity called intent that you need to set up in wit.ai.
I'm still working on the documentation (there's already a docs repo at https://github.com/mpociot/botman-docs)
Hi,
Sorry, I didn't explain my situation.
These commands will be in the form of: $bot->hears('setup', ...
Then I'll have anything else being parsed by wit
Hope this made it clear.
Ahh okay - so you don't want everything to run through your middleware, but only some of your commands.
Hmm..maybe it would be best to have something like the chainable ->driver(... method for this?
Like:
$middleware = WitAi::create('xxx');
$bot->hears('setup', function($bot){});
$bot->hears('test', function($bot){})->middleware($middleware);
$bot->hears('foo', function($bot){})->middleware($middleware);
Getting an error Call to undefined method Command::middleware()!
Yeah, it's not implemented yet. This was more meant as an idea of how the implementation for this might look.
Implemented in dev-master
https://github.com/mpociot/botman/commit/66d8da5b2192ad2ac145da2fc7ea6caadc93e2ff
|
GITHUB_ARCHIVE
|
Patch for issue #6
This patch focuses on solving the issue #6. What I have done is the following:
The existing code for concurrency in AlgorithmBase has been removed
It is assumed that the caller controls whether the computation is made concurrent or not
Cancellation is made by means of CancellationToken objects
All algorithms have been made cancellable
Most algorithms check against the cancellation token. I have tried to make the checks as infrequent as possible - mostly in O(n^2) and above loops. The checks made inside CancellationToken are VERY cheap, so there should be virtually no impact on the performance
Asynchronous layout is now executed in a Task object rather than a BackgroundWorker
Re-layouts still cancels any ongoing layout task, but will wait for the task's completion
UI synchronization is done through the dispatcher's task scheduler, meaning that any exception thrown on the UI thread will be marshaled back to the task, and can no longer crash the application
On the surface, the patch looks big, but it's really not that big. Almost all algorithm files have been touched, but little change has been made in each file. The biggest change is in GraphArea.cs which orchestrates the execution and cancellation of the asynchronous layout operation. I have on purpose not made any cancellation checks in the code paths that execute on the UI. Mainly because I don't want to risk that the control is left in a weird state.
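The pattern described in the patch — cooperative cancellation with cheap, infrequent checks inside the heavy loops — can be sketched in Python as an analogue of .NET's CancellationToken (the `threading.Event` stand-in, the function names, and the loop body are illustrative, not the GraphX code):

```python
import threading

class OperationCanceledError(Exception):
    """Raised when a computation observes a cancellation request."""

def layout_algorithm(n, cancel):
    """An O(n^2) loop that polls the cancellation flag only once per outer
    iteration, so the overhead of the checks stays negligible."""
    acc = 0
    for i in range(n):
        if cancel.is_set():  # infrequent, cheap cooperative check
            raise OperationCanceledError()
        for j in range(n):
            acc += i * j
    return acc

cancel = threading.Event()
result = layout_algorithm(100, cancel)  # no cancellation requested: completes

cancel.set()  # request cancellation, as a re-layout would
cancelled = False
try:
    layout_algorithm(100, cancel)
except OperationCanceledError:
    cancelled = True
```

The design choice mirrors the patch: the algorithm never kills itself mid-write; it only exits at well-defined checkpoints, which is why the map (or here, the accumulator) can never be left half-updated by a cancellation.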
Let me know what you think!
That's cool! :) I'll take a look asap! All I have to say for now: it would be nice if you could do the following if possible:
Mention breaking changes. Changes that can break existing users code, for ex. changes to IExternalLayout interface.
If you are familiar with METRO it would be super cool to have all these changes in both projects for WPF and METRO :) Code is almost the same for both of them. Same classes with slightly different implementation of specific methods. Most differences are in UI visualization routines and templates.
Sorry about the interface breakage! I didn't even reflect on that. Never create pull requests while drunk OR tired. :sweat_smile:
As for the metro stuff, I am sorry, but right now I can't even open the project. I think there's a couple of USB stick installs of Windows 8.1 at work. I'll go check if I can snag one today.
Is it possible to pull the pull request? I'm thinking that if I get a hold of a 8.1 stick, I'll fix the metro stuff as well (and test it!).
Well, I never tried to pull a pull or similar things on GitHub :) I think I'll merge it and then you can make a new merge request for METRO
I hope I didn't break the build!
Looking good so far :)
Oh, i think i've found an issue :) In showcase app in Layouts - Playground if you perform async calc and then change layout algorithm it will throw cancellation request. May be cancellation token isn't refreshed in that case?
Ah, not an issue i suppose :) There is auto recalc on layout algorithm change and it will run silently and being interrupted by manual refresh :) I need to tweak showcase app.
Does it stop the application? I can see the exceptions in the debug output, but that is expected, as the cancellation token throws an OperationCanceledException. When the layout is performed again, the task that did the earlier layout will be joined/awaited, and a new cancellation source is created (they cannot be reused - which is good because it makes them VERY efficient. Microsoft is employing some pretty nice lockless techniques inside it and the tokens).
Well, I'm off to the office to snag a Windows 8.1 stick!
All looks good again, false alarm! :) I have to redesign ShowcaseApp.WPF logic to display calculation task all the time and exclude any hidden activity.
|
GITHUB_ARCHIVE
|
Introduction: 2.0 Beta 1
The RAM component, easily the most complex component in Logisim's built-in libraries, stores up to 16,777,216 values (specified in the Address Bit Width attribute), each of which can include up to 32 bits (specified in the Data Bit Width attribute). The circuit can load and store values in RAM. Also, the user can modify individual values interactively via the Poke Tool, or modify the entire contents via the contextual menu. See Memory components in the User's Guide.
Current values are displayed in the component. Addresses displayed are listed in gray to the left of the display area. Inside, each value is listed using hexadecimal. The value at the currently selected address will be displayed in inverse text (white on black).
The RAM component supports three different interfaces, depending on the Data Interface, Asynchronous read, Data bus implementation and Enables attributes. The Data bus implementation property allows you to modify the architecture of the data bus: either separate Input and Data buses for write and read, or a single bidirectional Data bus.
- The default setting is a separate load and read bus with synchronous control.
A loading port controlled by the Store signal. A read port driven by the Load signal. Loading and reading are synchronized by the clock according to the trigger mode defined by the Trigger property. If both control lines are set to 1, then reading takes place after writing. This order can be reversed by modifying the Read behavior property.
- Asynchronous reading and activation of reading per byte.
If the Enables property is set to Use byte enables and the Asynchronous read property is set to Yes, only the state of the Load signal triggers the appearance of data on the output bus. Loading is carried out in the same way as before.
- Asynchronous Read and Use line enable.
If the Enables property is Use Line enables , and the Line size property is single, then the data will be immediately present on the output bus as soon as the address is changed. Loading is carried out in the same way as before.
- Synchronous Read and Use line enable. Several lines.
If the Enables property is Use line enables, and the Line size property is different from Single, then the data will be immediately present on the output bus as soon as the address is changed. For loading, additional signals LE1..LE7 allow selection of the active data lines. The clock trigger mode is defined by the Trigger property.
There are other more subtle settings, so please read the pin and property descriptions below for more details.
The Appearance attribute allows two different images for this component. Logisim Evolution presents inputs to the west and outputs to the east, while the classic appearance presents the pins from top to bottom and from west to east.
- Input: This pin is only present if the Use clear pin property is set. When this is 1, all values in memory are pinned to 0, no matter what the other inputs are.
- Input bus: Selects which of the values in memory is currently being accessed by the circuit.
Input: When activated, it authorizes data storage at the position defined by the value present on the Address bus. Depending on the Trigger property, it is active at 1 on the Rising Edge/Falling Edge values when synchronized by the clock signal; it is active at 1 on the High Level value and at 0 on the Low Level value, independently of the clock signal.
Below you can see a table showing the different cases of memory write triggering according to the state of the various component properties.
Writing trigger modes:
Trigger property | Clock signal | Store signal
Rising Edge | ↑ | 1
Falling Edge | ↓ | 1
High Level | -- | 1
Low Level | -- | 0
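The write-trigger rules above can be expressed as a small predicate. This is an illustrative Python sketch, not Logisim's implementation; clock edges are modeled as "↑"/"↓" strings:

```python
def write_enabled(trigger, clock_edge, store):
    """Return True when a memory write should occur, per the write-trigger rules."""
    if trigger == "Rising Edge":
        return clock_edge == "↑" and store == 1
    if trigger == "Falling Edge":
        return clock_edge == "↓" and store == 1
    if trigger == "High Level":
        return store == 1  # clock is ignored in level-triggered mode
    if trigger == "Low Level":
        return store == 0  # active-low: writes while Store is 0
    raise ValueError(f"unknown trigger mode: {trigger}")

print(write_enabled("Rising Edge", "↑", 1))  # True
```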
Input: This pin is only present if the Enables property is set to Use byte enables.
When active, it enables data transmission from the position defined by the value present on the Address bus.
Below you can see a table showing the different cases of reading from memory to the output port, depending on the state of the various component properties.
Reading trigger modes:
Enables | Trigger | Asynchronous read? | Clock signal | Load signal
Use byte enables | Rising edge | No | ↑ | 1
Use byte enables | Falling edge | No | ↓ | 1
Use byte enables | High level | -- | -- | 1
Use byte enables | Low level | -- | -- | 1
Use byte enables | No effect | Yes | No effect | 1
Use line enables | No effect | -- | No effect | --
Input : This pin is only present if the Trigger property is set to Rising edge/Falling edge, and in all other cases if the Enables property is set to Use line enables. When triggered, the memory will either save or present the data. The triggering mode is defined by the parameters of the Trigger property.
Look at the two tables above.
- LE0 to LE7
- Input: These pins are only present if the Enables property is set to Use line enables. Their number (2, 4 or 8) depends on the Line size property. Each pin activates one of the input lines.
- Input bus : This bus is only present if the Data bus implementation property is set to Separate data bus for read and write. It receives the data that will be written to memory at the position specified by the value of the address pins when the trigger is triggered. See tables above.
- Input0 to Input7
- Input bus: These pins are only present if the Enables property is set to Use line enables and the Line size property is different from Single. Their function is the same as that of the Input input, with this difference: Input0 points to memory location Address, Input1 points to Address + 1, Input2 points to Address + 2 and so on. Each input line has an associated enable signal LE0..LE7.
- Output bus: This bus is only present if the Data bus implementation property is set to Separate data bus for read and write. It transmits the data to be read at the position specified by the value of the address pins on triggering. See tables above.
- Data0 to Data7
Output bus: These buses are only present if the Enables property is set to Use line enables and the Line size property is other than Single. Their function is the same as the Data output, with this difference: Data0 transmits data from the memory position specified by the address pin value, Data1 transmits Address + 1, Data2 transmits Address + 2 and so on.
The Allow misaligned? property determines whether an error is generated when the address is not aligned to a multiple of the line number.
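The alignment rule just described can be sketched as a small check. This is an illustrative Python model, not Logisim's actual code:

```python
def check_line_access(address, line_size, allow_misaligned):
    """Return True if a multi-line access at `address` is valid.
    With alignment enforced, the address must be a multiple of the
    number of data lines (1, 2, 4 or 8)."""
    if line_size not in (1, 2, 4, 8):
        raise ValueError("line size must be 1, 2, 4 or 8")
    return allow_misaligned or address % line_size == 0

print(check_line_access(6, 2, False))  # True: 6 is a multiple of 2
print(check_line_access(5, 2, False))  # False: the outputs would show the error value (E)
```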
When the component is selected or being added, the digits 0 through 9 alter its Address Bit Width attribute, Alt-0 through Alt-9 alter its Data Bit Width attribute.
- Address Bit Width
- Number of address bits. The number of values stored in RAM is 2^(Address Bit Width).
- Data Bit Width
- The data width in bits of each individual value in memory.
Determines how data is presented to the component. Use byte enables: a single data bus is present.
Use line enables: one or more data lines make up the data bus. Each has its own selection signal. The Line size property lets you define the number of lines (1, 2, 4, 8).
- Ram type
Determines how the memory content is modified when the simulation is restarted:
Non-volatile: memory contents are not modified.
Volatile: RAM contents reset to zero or randomly (according to the Simulation tab of the project options).
- Use clear pin
- Determines whether the Clear pin is present or not. If this pin is set to 1, the memory content is asynchronously set to 0, and other commands have no effect.
- Line size
- This property is only present if the Enables property is set to Use line enables. Determines the number of data lines present at input and output: 1, 2, 4 or 8. Each line is driven on loading by its own signal (LE0..LE7). Line 0 points to the address, line 1 to address + 1 and so on.
- Allow misaligned?
- This property is only present if the Enables property is set to Use line enables. Determines whether data lines can interact with all memory addresses, or whether data lines are aligned with memory positions that are multiples of their number. For example, if you have two lines, the first line is linked to address + 0, the second to address + 1, and your addressing can only receive values that are multiples of 2; otherwise the outputs will be in error (E). See the figure under Data0..Data7.
Configures how the clock input is interpreted. Values :
Rising edge indicates that the counter should update its value at the instant when the clock rises from 0 to 1.
Falling edge value indicates that it should update at the instant the clock falls from 1 to 0.
High level indicates that the memory should update continuously when the load input is at 1.
Low level indicates that it should update continuously when the load input is 0.
- Asynchronous read:
Determines whether the clock signal is involved in the memory read process.
Yes means that only the load signal triggers reading.
No means that reading will be triggered by the load signal and an edge of the clock signal.
- Read behavior
Determines component behavior if read and write are enabled at the same time.
Read after write: the data in the memory cell pointed to by Address is first written, then read and transmitted to the output.
Write after read: the data of the memory cell pointed to by Address is transmitted to the output, then the value of the memory cell is modified by the incoming data.
- Data bus implementation
Determines the architecture of the data bus. Values:
One bidirectional data bus: A bus is present for data input and output. Controlled buffers must be used to manage data flow.
Separate data bus for read and write: two buses are present, one for input and one for output.
- The text within the label associated with the component.
- Label Font
- The font with which to render the label.
- Label visible
- If the label is visible or not.
- Logisim-HolyCross / Logisim-Evolution: presents flip-flops in the IEC style. Classic Logisim: presents flip-flops in the legacy Logisim style.
Poke Tool Behavior
See poking memory in the User's Guide.
Text Tool Behavior
Menu Tool Behavior
See pop-up menus and files in the User's Guide.
Back to Library Reference
|
OPCFW_CODE
|
from kivy.vector import Vector
from cobiv.modules.core.gestures.gesture import Gesture
class PinchGesture(Gesture):
initial_distance = None
center = None
initial_touches = None
def finalize(self, touch, strokes):
self.get_app().fire_event("on_stop_gesture_pinch")
def process(self, touches, strokes):
v = Vector(touches[1].x, touches[1].y) - Vector(touches[0].x, touches[0].y)
ratio = v.length() / self.initial_distance
self.get_app().fire_event("on_gesture_pinch", ratio, self.center)
def required_touch_count(self):
return 2
def validate(self, touches, strokes):
v_sum = Vector(0, 0)
sum_dot_base = 0
for t in touches:
v_old = Vector(self.initial_touches[t.uid][0], self.initial_touches[t.uid][1])
v_new = Vector(t.x, t.y)
v_norm = (v_new - v_old).normalize()
if v_norm.length() == 0:
return False
v_base = (v_old - self.center).normalize()
sum_dot_base += v_base.dot(v_norm)
v_sum += v_norm
return abs(sum_dot_base) >= 1.85 and v_sum.length() < 1
def initialize(self, touches):
v = [Vector(t.x, t.y) for t in touches]
v_diff = v[1] - v[0]
self.center = v[0] + v_diff / 2
self.initial_distance = float(v_diff.length())
self.initial_touches = {touches[0].uid: (touches[0].x, touches[0].y),
touches[1].uid: (touches[1].x, touches[1].y)}
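The criterion implemented in `validate()` above can be reproduced with plain tuples, independent of Kivy. An illustrative sketch of the same math (function and helper names are my own):

```python
import math

def _norm(v):
    """Normalize a 2D tuple; the zero vector normalizes to itself."""
    length = math.hypot(v[0], v[1])
    return (0.0, 0.0) if length == 0 else (v[0] / length, v[1] / length)

def is_pinch(old_pts, new_pts, center):
    """Mirror of PinchGesture.validate: each touch should move along its
    own center-to-touch axis (|sum of dot products| >= 1.85) while the two
    movement directions roughly cancel out (vector sum shorter than 1)."""
    sum_dot = 0.0
    sum_x = sum_y = 0.0
    for (ox, oy), (nx, ny) in zip(old_pts, new_pts):
        dx, dy = _norm((nx - ox, ny - oy))
        if dx == 0 and dy == 0:
            return False  # a touch that has not moved cannot pinch
        bx, by = _norm((ox - center[0], oy - center[1]))
        sum_dot += bx * dx + by * dy
        sum_x += dx
        sum_y += dy
    return abs(sum_dot) >= 1.85 and math.hypot(sum_x, sum_y) < 1

# Two touches moving straight apart from the shared center: a pinch-out.
print(is_pinch([(1, 0), (-1, 0)], [(2, 0), (-2, 0)], (0, 0)))  # True
```

The two thresholds encode the gesture's shape: the dot-product sum near ±2 means both fingers travel along the line through the center (outward or inward), and the small vector sum means they travel in opposite directions, which distinguishes a pinch from a two-finger pan.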
|
STACK_EDU
|
When he began his IT studies, Sami Kulmala had no idea that he would soon be an experienced coder, entrusted with the responsibility of designing the complex IT systems of an international credit management company. So how did that happen?
Sami began his studies in 2007. A course in the ICT industry led to an internship at ATR Soft, then to a summer job and finally to a permanent position by way of a thesis. Sami graduated as an IT engineer in 2012.
He only had to do his summer job until the end of July, but he kept on coming back after his school days to do more work. This continued until the end of his studies.
School offered software development theory, and Sami dabbled in coding while studying, but it was work that taught him how to actually build larger systems.
My first partner was a mentor from whom I learned the ropes. He took care of one part of a certain application and I took care of the other. Gradually, responsibility for the whole application was transferred to me – and now, in recent years, I’ve handed it over to others. This first application taught me a lot.
Embedded software is changing to .NET.
After his first year of studies, Sami turned towards embedded software and robotics- and circuit-board programming. He got familiar with C language. Sami, who studied Java at school, also used it for a long time at ATR. With another partner, he then began to make a document management system and Java was exchanged for .NET.
It was a big leap from Java to .NET, as I had never done anything with .NET. But you learn by doing, and with my team, we had a good division of labour. For the last year and a half, I’ve been doing almost nothing but .NET.
Working on one or several projects in a software company?
At ATR Soft, some of the staff worked for several customers, and some just for one. Both have their advantages: when moving from one project to another, you have to revisit what you did last time, while it’s easier for people working on one project to focus on one thing. On the other hand, different customers with different work offer variety to the routine. In any case, projects end and new ones come along, and jobs can also be swapped within the company.
A developer’s dream customer
At present, Sami is spending all his time on one customer: Lowell. This includes several different projects in several different countries with different environments, applications, system upgrades and automation.
I must say that this is the best project that I’ve ever had in my history at ATR! A lot of interesting work, relatively free hands to do what I want and a customer that accepts my development suggestions positively. Some customers always want to go with the old things that have been proven to work, but Lowell is willing to try new techniques without prejudice.
Coders learn a lot about the customer’s business.
You can’t get a complete picture of how a piece of software works just by knowing how it works technically. You also need to understand what the application is really aiming at and how the user operates it. In almost all projects, an application developer gets to learn about industries about which he or she knows very little at the start. Some developers may just be satisfied with creating a technical solution, but you can usually get more out of work when you know what you’re involved in.
Everybody giving their all
In ATR’s Lowell team, jobs are not compartmentalised so everybody does everything. Everyone has to learn all the tasks at some level, but it gives meaning and choice to the work: you might as well be solving a support problem as building great new functionality. Such an approach also enables you to take holidays and sick leave without jeopardising the progress of the customer’s work.
You can do the front, the back, identity control – everything is possible. No two days are the same.
Jobs are shared and circulated. When someone has been working with a particular technology for a longer period of time, someone else will come in in order to avoid the silo effect on skills. Completely new methods can be learnt on courses or independently, the team members also learn the technologies that they master from each other. Similarly, customer-specific problems and their solutions are communicated to all team members. Information is constantly being shared.
Feedback is important.
Like many others, Sami has noticed the importance of feedback. If someone is excelling at their job, you should remember to say so and not just take it for granted.
If you get positive feedback from the customer or a colleague for a job well done, it gives you the incentive to try even harder. Of course, you get praise from all customers when the project is finished, but again, I have to praise the Lowell people who take the time to give recognition even as the work is being done. Of course, feedback should also be given for work done internally within ATR, but here we could certainly do a little better.
ATR shuns the strict division of roles
In large companies, jobs and roles can sometimes be strictly divided, so that developers rarely have anything directly to do with customers at all. At ATR, we’d rather keep the roles diverse. Customers appreciate being able to discuss with just the people who are actually creating things.
In one previous project, the customer’s representatives had previously worked with a large software company. They were amazed when the kick-off meeting was full of us coders who make the applications. They had got used to men in suits being there, who did not necessarily know anything about IT matters, and just served as messengers. I must say that they were very happy at being able to work with us.
Nobody codes alone
When transitioning from the schoolroom to the world of work, people are sometimes surprised at how customer-friendly the IT professional’s work is. You don’t necessarily have to deal with external customers in all tasks, but you do have to deal with your colleagues. You don’t need to do anything completely alone.
When I was a student, I imagined that programming work meant tapping alone on a keyboard in some dark room. Nowadays at least, the reality is far from that.
And you’re not left to struggle with tasks on your own – working in a software company is all about teamwork and learning.
If you’re looking for your first job in IT, you might think that the company’s employees are a bunch of coders who know everything. Or that coding is rocket science. It’s not. It’s worth remembering that everyone starts out from exactly the same place. Be brave and apply for a job in the sector!
|
OPCFW_CODE
|
// Copyright 2019 WHTCORPS INC Project Authors. Licensed under Apache-2.0.
//! This crate provides a macro that can be used to append a match expression with multiple
//! arms, where the tokens in the first arm, as a template, can be substituted and the template
//! arm will be expanded into multiple arms.
//!
//! For example, the following code
//!
//! ```ignore
//! match_template! {
//! T = [Int, Real, Double],
//! match Foo {
//! EvalType::T => { panic!("{}", EvalType::T); },
//! EvalType::Other => unreachable!(),
//! }
//! }
//! ```
//!
//! generates
//!
//! ```ignore
//! match Foo {
//! EvalType::Int => { panic!("{}", EvalType::Int); },
//! EvalType::Real => { panic!("{}", EvalType::Real); },
//! EvalType::Double => { panic!("{}", EvalType::Double); },
//! EvalType::Other => unreachable!(),
//! }
//! ```
//!
//! In addition, substitution can vary on two sides of the arms.
//!
//! For example,
//!
//! ```ignore
//! match_template! {
//! T = [Foo, Bar => Baz],
//! match Foo {
//! EvalType::T => { panic!("{}", EvalType::T); },
//! }
//! }
//! ```
//!
//! generates
//!
//! ```ignore
//! match Foo {
//! EvalType::Foo => { panic!("{}", EvalType::Foo); },
//! EvalType::Bar => { panic!("{}", EvalType::Baz); },
//! }
//! ```
//!
//! Wildcard match arm is also supported (but there will be no substitution).
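//!
//! For example (mirroring the wildcard test case in this file), the `_` arm is
//! emitted unchanged:
//!
//! ```ignore
//! match_template! {
//!     TT = [Foo, Bar],
//!     match v {
//!         VectorValue::TT => EvalType::TT,
//!         _ => unreachable!(),
//!     }
//! }
//! ```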
#[macro_use]
extern crate quote;
extern crate proc_macro;
use proc_macro2::{TokenStream, TokenTree};
use syn::fold::Fold;
use syn::parse::{Parse, ParseStream, Result};
use syn::punctuated::Punctuated;
use syn::*;
#[proc_macro]
pub fn match_template(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
    let mt = parse_macro_input!(input as MatchTemplate);
    mt.expand().into()
}

struct MatchTemplate {
    template_ident: Ident,
    substitutes: Punctuated<Substitution, Token![,]>,
    match_exp: Box<Expr>,
    template_arm: Arm,
    remaining_arms: Vec<Arm>,
}

impl Parse for MatchTemplate {
    fn parse(input: ParseStream<'_>) -> Result<Self> {
        let template_ident = input.parse()?;
        input.parse::<Token![=]>()?;
        let substitutes_tokens;
        bracketed!(substitutes_tokens in input);
        let substitutes =
            Punctuated::<Substitution, Token![,]>::parse_terminated(&substitutes_tokens)?;
        input.parse::<Token![,]>()?;
        let m: ExprMatch = input.parse()?;
        let mut arms = m.arms;
        arms.iter_mut().for_each(|arm| arm.comma = None);
        assert!(!arms.is_empty(), "Expect at least 1 match arm");
        let template_arm = arms.remove(0);
        assert!(template_arm.guard.is_none(), "Expect no match arm guard");
        Ok(Self {
            template_ident,
            substitutes,
            match_exp: m.expr,
            template_arm,
            remaining_arms: arms,
        })
    }
}

impl MatchTemplate {
    fn expand(self) -> TokenStream {
        let Self {
            template_ident,
            substitutes,
            match_exp,
            template_arm,
            remaining_arms,
        } = self;
        let match_arms = substitutes.into_iter().map(|substitute| {
            let mut folder = match substitute {
                Substitution::Identical(ident) => MatchArmIdentFolder {
                    template_ident: template_ident.clone(),
                    left_ident: ident.clone(),
                    right_ident: ident,
                },
                Substitution::Map(left_ident, right_ident) => MatchArmIdentFolder {
                    template_ident: template_ident.clone(),
                    left_ident,
                    right_ident,
                },
            };
            folder.fold_arm(template_arm.clone())
        });
        quote! {
            match #match_exp {
                #(#match_arms,)*
                #(#remaining_arms,)*
            }
        }
    }
}
enum Substitution {
    Identical(Ident),
    Map(Ident, Ident),
}

impl Parse for Substitution {
    fn parse(input: ParseStream<'_>) -> Result<Self> {
        let first_ident = input.parse()?;
        let fat_arrow: Option<Token![=>]> = input.parse()?;
        if fat_arrow.is_some() {
            let second_ident = input.parse()?;
            Ok(Substitution::Map(first_ident, second_ident))
        } else {
            Ok(Substitution::Identical(first_ident))
        }
    }
}

struct MatchArmIdentFolder {
    template_ident: Ident,
    left_ident: Ident,
    right_ident: Ident,
}

impl Fold for MatchArmIdentFolder {
    fn fold_pat(&mut self, i: Pat) -> Pat {
        ReplaceIdentFolder {
            from_ident: self.template_ident.clone(),
            to_ident: self.left_ident.clone(),
        }
        .fold_pat(i)
    }

    fn fold_expr(&mut self, i: Expr) -> Expr {
        ReplaceIdentFolder {
            from_ident: self.template_ident.clone(),
            to_ident: self.right_ident.clone(),
        }
        .fold_expr(i)
    }
}

struct ReplaceIdentFolder {
    from_ident: Ident,
    to_ident: Ident,
}

impl Fold for ReplaceIdentFolder {
    fn fold_macro(&mut self, i: Macro) -> Macro {
        let mut m = syn::fold::fold_macro(self, i);
        m.tokens = m
            .tokens
            .into_iter()
            .map(|tt| {
                if let TokenTree::Ident(i) = &tt {
                    if i == &self.from_ident {
                        return TokenTree::Ident(self.to_ident.clone());
                    }
                }
                tt
            })
            .collect();
        m
    }

    fn fold_ident(&mut self, i: Ident) -> Ident {
        if i == self.from_ident {
            self.to_ident.clone()
        } else {
            i
        }
    }
}
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic() {
        let input = r#"
            T = [Int, Real, Double],
            match foo() {
                EvalType::T => { panic!("{}", EvalType::T); },
                EvalType::Other => unreachable!(),
            }
        "#;
        let expect_output = r#"
            match foo() {
                EvalType::Int => { panic!("{}", EvalType::Int); },
                EvalType::Real => { panic!("{}", EvalType::Real); },
                EvalType::Double => { panic!("{}", EvalType::Double); },
                EvalType::Other => unreachable!(),
            }
        "#;
        let expect_output_stream: TokenStream = expect_output.parse().unwrap();
        let mt: MatchTemplate = syn::parse_str(input).unwrap();
        let output = mt.expand();
        assert_eq!(output.to_string(), expect_output_stream.to_string());
    }

    #[test]
    fn test_wildcard() {
        let input = r#"
            TT = [Foo, Bar],
            match v {
                VectorValue::TT => EvalType::TT,
                _ => unreachable!(),
            }
        "#;
        let expect_output = r#"
            match v {
                VectorValue::Foo => EvalType::Foo,
                VectorValue::Bar => EvalType::Bar,
                _ => unreachable!(),
            }
        "#;
        let expect_output_stream: TokenStream = expect_output.parse().unwrap();
        let mt: MatchTemplate = syn::parse_str(input).unwrap();
        let output = mt.expand();
        assert_eq!(output.to_string(), expect_output_stream.to_string());
    }

    #[test]
    fn test_map() {
        let input = r#"
            TT = [Foo, Bar => Baz],
            match v {
                VectorValue::TT => EvalType::TT,
                EvalType::Other => unreachable!(),
            }
        "#;
        let expect_output = r#"
            match v {
                VectorValue::Foo => EvalType::Foo,
                VectorValue::Bar => EvalType::Baz,
                EvalType::Other => unreachable!(),
            }
        "#;
        let expect_output_stream: TokenStream = expect_output.parse().unwrap();
        let mt: MatchTemplate = syn::parse_str(input).unwrap();
        let output = mt.expand();
        assert_eq!(output.to_string(), expect_output_stream.to_string());
    }
}
|
STACK_EDU
|
I am a new user running Julia 1.8.3 on WSL2. I am installing the Plots package by running pkg> add Plots. The installation runs until partway through precompilation, then my computer crashes with a BSOD error code: MEMORY_MANAGEMENT. I have been able to reproduce this twice. The first time something must have corrupted during the crash, as I couldn’t precompile anything – error message along the lines of “EOFError: read end of file” (I no longer have the entire error message, apologies!).
When I tried to install Plots again after removing ~/.julia, my PC once again crashed with error MEMORY_MANAGEMENT. I tried running pkg> add Plots again, and it was successful.
Has anyone experienced this behaviour before? I'm not sure whether the memory problem is a WSL issue or a Julia one, and I wasn't able to find helpful information on Google. I've been using the same WSL setup for a while (mostly compiling C++ code), and have never had my PC crash because of it.
Hmm, what version of Windows 10 are you using?
Have you gotten any other plotting package to work in WSL2?
A modern non-administrator program, especially one "sandboxed" the way WSL2 programs are, should really not be able to cause a BSOD. Therefore here is my speculation:
Plots is a pretty big package that requires a ton of compilation when installing, which might have used most of your RAM, making any issues with RAM defects much more likely to surface. On the Julia end of things, some settings to be less aggressive with RAM usage would be nice (features like that are being added); you can also run with a single thread when installing packages to make sure that there are not too many things compiling in parallel, eating up all your RAM (I have had this problem with older versions of Julia and DifferentialEquations). On the memory side of things, you can run a memory test to check for defects. It seems Windows 10 has a built-in "Windows Memory Diagnostics" tool you can try.
All of this makes sense only if you were actually running low on RAM and your RAM had rare intermittent issues.
Can you get the Gadfly package to run?
Try setting ENV["JULIA_NUM_PRECOMPILE_TASKS"] = 1 before trying to precompile Plots. This will be slow since Julia will be precompiling packages one at a time, but it may reduce memory pressure.
No matter what Julia or Plots do your computer shouldn’t crash unless you have faulty hardware (likely) or there is some kernel bug that is hit (unlikely).
I would run a memory test to ensure that one of your RAM sticks hasn't gone bad.
|
OPCFW_CODE
|
OpendTect Pro is the extended version of OpendTect for professional users. For 2,200 USD per user per year (node-locked license), OpendTect Pro users benefit from extra functionality and premium support. Only OpendTect Pro users can extend their system with a range of cutting-edge commercial plugins on top of OpendTect Pro.
Please visit the OpendTect Pro store to buy a subscription today.
"OpendTect in short: Great Workbench - User Friendly - Superb Support - Improved Reservoir Imaging - Ultimately: Mitigated Drilling and Production Risk."
--Dr. Rainer Tonn, Equinor
- PetrelDirect data connectivity to Petrel* via a free Petrel* plugin
- Basemap + Mapping
- PDF-3D for sharing images
- Accurate ray-tracer for mutes, angle stacks, AVA attributes
- Shapefiles, RBF gridder, ....
- Thalweg tracker for seismic facies tracking
- Well Table with additional log plots & cross-plots
- Horizon Mathematics (3D & 2D)
- Horizons from well markers
- Create 3D bodies from polygons
* Petrel is a mark of Schlumberger.
The Thalweg Tracker
The Thalweg Tracker is a special kind of 3D connectivity filter that can follow the path of least resistance (=Thalweg).
- Seismic facies tracker
- Input generator for Machine Learning workflows
- Pointsets with tracking attributes
- Horizons with tracking attributes
- 3D bodies.
The Basemap is a utility that provides a map view of project data and enables the creation of maps. Features:
- Interactively adding and controlling visualization elements in 2D and 3D viewers
- Especially useful in 2D seismic interpretation projects
- More gridding and annotation options incl. support of shape files for better maps.
Additional tools for better management of well data and log analysis. Features:
- Well Data Table
- New log plots
- New cross-plots
- These tools were developed for QC-ing and editing of well logs for Machine Learning workflows
A utility to grab a 3D OpendTect scene and save this in a 3D PDF, or glTF format for sharing with others.
- Grab a scene with seismic, horizons, wells etcetera and save in 3D PDF or glTF
- 3D PDF files are supported by several free applications, including Adobe Acrobat Reader
- glTF is supported in modern browsers; The interactive images on this page are glTF files.
Mathematics on horizons and horizon data, e.g., for Time/Depth conversion of seismic interpretation grids.
- Basic math (+,-,*,/,^)
- Functions (exp, ln, sin,…)
- Logical (<,==, &&, …)
- Statistical (rand, min, avg, …)
- Constants (pi, e, undef)
- Other (inl, crl, X, Y)
|
OPCFW_CODE
|
JAXB issue with missing namespace definition
So I searched around quite a bit for a solution to this particular issue and I am hoping someone can point me in a good direction.
We are receiving data as XML, and we only have XSD to validate the data. So I used JAXB to generate the Java classes. When I went to unmarshal a sample XML, I found that some attribute values are missing. It turns out that the schema expects those attributes to be QName, but the data provider didn't define the prefix in the XML.
For instance, one XML attribute value is "repository:<uuid>", but the namespace prefix "repository" is never defined in the dataset. (Never mind the provider's best practices suggest defining it!)
So when I went to unmarshal a sample set, the QName attributes with the specified prefix ("repository" in my sample above) are NULL! So it looks like JAXB is "throwing out" those attribute QName values which have undefined namespace prefix. I am surprised that it doesn't preserve even the local name.
Ideally, I would like to maintain the value as is, but it looks like I can't map the QName to a String at binding time (Schema to Java).
I tried "manually" inserting a namespace definition to the XML and it works like a charm. What would be the least complicated method to do this?
Is there a way to "insert" namespace mapping/definition at runtime? Or define it "globally" at binding time?
I would strongly suggest you publish the relevant portions of your XSD and of the XML document. It is still very likely that the XML you receive does not conform to the XSD, which is going to be a problem for JAXB un-marshalling. If you have established that the XML actually conforms to the XSD (e.g. using an online validation tool), please say so (so that SO helpers can get it out of the way).
I wondered if I should have provided some more details, but it was a little difficult on my tablet. Next time I will provide actual bits of code. Thanks for the feedback.
The simplest would be to use strings instead of QName. You can use the javaType customization to achieve this.
If you want to add prefix/namespace mappings in the runtime, there are quite a few ways to do it:
Similar to above, you could provide your own QName converter which would consider your prefixes.
You can put a SAX or StAX filter in between and declare additional prefixes in the startDocument.
What you actually need is to add your prefix mappings into the UnmarshallingContext.environmentNamespaceContext. I've checked the source code but could not find a direct and easy way to do it.
I personally would implement a SAX/StAX filter to "preprocess" your XML on the event level.
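The StAX-filter idea can be sketched with javax.xml.stream.util.StreamReaderDelegate, which lets you override just the namespace lookup instead of implementing the whole XMLStreamReader interface. The prefix and URI below are assumptions for illustration; use whatever namespace your XSD actually expects for that prefix. Note that, depending on the JAXB implementation, you may also need to override getNamespaceContext() for QName value resolution to pick up the injected mapping.

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;
import javax.xml.stream.util.StreamReaderDelegate;

// Sketch: make the undeclared prefix "repository" resolve to a URI of our
// choosing, so QName-typed attribute values can be resolved during unmarshalling.
public class NamespaceInjectingReader extends StreamReaderDelegate {
    private static final String PREFIX = "repository";          // never declared in the data
    private static final String URI = "urn:example:repository"; // assumed URI

    public NamespaceInjectingReader(XMLStreamReader reader) {
        super(reader);
    }

    @Override
    public String getNamespaceURI(String prefix) {
        if (PREFIX.equals(prefix)) {
            return URI; // pretend the document declared xmlns:repository="..."
        }
        return super.getNamespaceURI(prefix);
    }

    public static void main(String[] args) throws Exception {
        XMLStreamReader base = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader("<root id=\"repository:1234\"/>"));
        XMLStreamReader filtered = new NamespaceInjectingReader(base);
        filtered.next(); // advance to the <root> start element
        // The prefix now resolves even though the document never declares it:
        System.out.println(filtered.getNamespaceURI("repository"));
    }
}
```

You would then pass the filtered reader to Unmarshaller.unmarshal(XMLStreamReader) instead of the raw one.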
I took your suggestions and it works. I basically implemented my own XMLStreamReader and evaluated the responses for specific strings. This limits the flexibility a little, but I am hoping the data provider will address this specific issue soon.
|
STACK_EXCHANGE
|
Tests in this file that use a custom lshw config such as:
val config = Map(
"lshw.flashProduct" -> "flashmax",
"lshw.flashSize" -> "1048576"
Do not work in isolation. test-only *LshwParserSpec* will fail while test-only *util* will pass on the (second?) run.
This was introduced IIRC after we introduced the new config system. The test system seems to end up using the default values instead of the specified ones, unfortunately. I have not had time to track it down.
Closing because it's old and cruft or fixed?
Closing because they have been working for a while now, both in isolation and as part of the suite. This would be just the HEAD on the repo (not a release).
[collins] $ testOnly collins.util.parsers.LshwParserSpec
[info] Compiling 42 Scala sources to /Users/bhaskar/workspace/collins/target/scala-2.10/test-classes...
[info] The Lshw Parser should
[info] Parse Dell (AMD) lshw output
[info] + with a 10-Gig card
[info] + quad nic missing capacity having size
[info] + basic
[info] + quad nic
[info] + quad nic missing capacity
[info] + with a virident card
[info] Parse different versioned formats
[info] + B.02.12 format
[info] + B.02.14 format
[info] + B.02.15 format
[info] + B.02.16 format
[info] Leverage config for flash disks
[info] + Different flash description and size
[info] Parse softlayer supermicro (Intel) lshw output
[info] + A Production SL Web LSHW Output
[info] + A New Production SL Web LSHW Output
[info] Parse Dell LSHW Output
[info] + R620 LSHW Output
[info] + with LVM disk
[info] parse amd-opteron-wonky
[info] + wonky amd-opteron output
[info] + wonky amd-opteron output w/ show empty sockets
[info] Total for specification LshwParserSpec
[info] Finished in 7 seconds, 182 ms
[info] 17 examples, 0 failure, 0 error
[info] Passed: Total 17, Failed 0, Errors 0, Passed 17
[success] Total time: 39 s, completed May 26, 2015 12:14:52 PM
All tests should be working both in the suite and in isolation; any tests not working in either mode are ones I was not aware of. Happy to address them if you find any.
|
OPCFW_CODE
|
The labs will usually consist of some introductory material, some
examples to work out in MATLAB, a problem or two to solve, and
maybe a challenge problem. Most labs will have something that
you need to turn in by the end of lab time. You can print out
the labs and results if you want, or read them online. You can
also email your results to me.
How to Read the Lab
In the labs, I'll try to use colors in a consistent way:
Red for links to files and important messages
Blue for things you type in MATLAB
italic for things you type in MATLAB that
are variable, eg. ... type your name: Your Name
Green for programs and other things you need
to copy into MATLAB or a file
We'll be using MATLAB (short for MATrix LABoratory) most of the time in
the lab. It is a fairly easy program to use, but is also very powerful.
You can use it interactively (like a fancy calculator), but you can also
use programs. Most of the time, if there is a program involved, I'll
provide the code. You are, however, always welcome to try to program
it on your own.
For this Lab, it would be good to try everything in blue or green
Starting Up and Getting Help in MATLAB
To Start: From Windows, go to the Start menu and select
MATLAB 5.3 under the Programs/Matlab menu.
You might want to resize the windows for you browser and MATLAB so you
can easily switch between the two.
To Exit, type quit or exit or choose Exit from
the file menu (if available)
MATLAB has several built-in help commands:
If you know the name of the command, you can use help;
for example, type help magic
If you don't know the name, but have an idea of what it should
do, you can use lookfor;
for example, type lookfor inverse
To search for commands by function, you can use the
help window: type helpwin
and then double-click on any topic to get a list of commands.
For general MATLAB information you can use the help desk.
This brings up a browser window and gives access to information
about commands, programming, etc. Type helpdesk
Bring up the help desk in MATLAB.
Click on Getting Started for some general information.
Work through a few pages of the Getting Started tutorial.
Put the browser on one side of the screen and the window
running MATLAB on the other and try out a few commands.
Try out the following areas:
Matrices and Magic Squares
Working with Matrices
The Command Window
Another way to explore MATLAB is to type demo
and click on Autoplay for a fast show, to get an idea of
what MATLAB looks like.
Saving Your Work
There are several ways to save the work you do in MATLAB. The
easiest is to keep a transcript of all that you do with the
diary command. Type:
help diary to learn about it.
Another way is to open a text file either through MATLAB (it would be
a M-file) or through your favorite word processor and then copy the
good stuff into the text file.
Matrices in MATLAB
Enter matrices starting with a [ and ending with a ].
Elements in a row are separated by a comma or a space.
Rows are separated by a return or by a semi-colon (;).
For example you can enter the matrix
( 1 2 3 )
A = ( 4 5 6 )
( 7 8 9 )
in any of the following ways
A =[1, 2, 3; 4, 5, 6; 7, 8, 9]
A =[1 2 3; 4 5 6; 7 8 9]
A = [
1 2 3
4 5 6
7 8 9
]
You can also build matrices by putting a value in each place. Like
A(1,1) = 1
A(1,2) = 2
A(1,3) = 3
A(2,1) = 4
A(2,2) = 5
A(2,3) = 6
A(3,1) = 7
A(3,2) = 8
A(3,3) = 9
NOTE: MATLAB will automatically make your matrix big enough to
hold the elements you specify. It will set unspecified entries to 0
If you tried this stuff you probably noticed that MATLAB displayed the
results after every command you gave it. This is great when we are
learning to use MATLAB, but it gets pretty annoying later.
If you end a command with a semi-colon, then the results will not
be displayed. Try typing: B(3,4) = 1; (don't forget the semi-colon)
What do you think the result is? Type B or disp(B) to
see the result. Are you surprised?
If the elements are defined by a formula, you can write a little program:
for i = 1:3
  for j = 1:3
    C(i,j) = i+j;
  end
end
This example shows another powerful part of MATLAB: specifying lists
of numbers. The colon (:) is a powerful tool in MATLAB. Type help colon to read more about it.
We could use it to enter our first matrix as:
A = [1:3;4:6;7:9];
You can also use the colon to specify parts of a matrix. Type
M = magic(5) (This is the 5x5 magic square)
Matrix Arithmetic and Functions
MATLAB uses +, -, * for matrix addition, subtraction and multiplication.
It will give an error message if the dimensions are wrong. Try the following:
A = rand(3) (random 3x3 matrix)
B = rand(3,4) (random 3x4 matrix)
C = rand(3)
D = rand(4)
MATLAB also has functions which work on matrices. Try
max(A) (finds the maximum of each column)
sum(B) (finds the sum of each column)
cos(C) (finds the cos of each element)
D.^2 (squares each element, compare to D*D)
To Turn In:
1. Create a 6x6 magic square; call it A.
2. Compute the sum of each column.
3. What MATLAB command would you use to
a. Raise each element of A to the 3rd power.
b. Display the 4th row of A.
c. Display the 2nd column of A.
d. Display the element in the 3rd row and 3rd column.
4. Use MATLAB to sum each row.
5. Use MATLAB to find the product of the elements in each column.
Either email the results or print them out and give them to me
before you leave today.
|
OPCFW_CODE
|
eCos Support for the MCFxxxx SCM On-chip Watchdog Device
ColdFire MCFxxxx processors typically come with two on-chip
watchdog devices. The main watchdog is not readily usable by eCos: it
comes up enabled and, once disabled, it can never be reenabled. Hence
in a typical development environment that watchdog device needs to be
disabled early on or it will interfere with debugging, and cannot be
used again. There is a second watchdog device embedded in the System
Control Module which is usable. This package
CYGPKG_DEVS_WATCHDOG_MCFxxxx_SCM provides an eCos
driver for that device, complementing the generic package
CYGPKG_IO_WATCHDOG. The driver functionality should
be accessed via the standard eCos watchdog functions.
The MCFxxxx SCM watchdog driver package should be loaded automatically
when selecting a platform containing a suitable MCFxxxx ColdFire processor.
It should never be necessary to load it explicitly into the
configuration. The package is inactive unless the generic watchdog support
CYGPKG_IO_WATCHDOG is loaded. Depending on the
choice of eCos template it may be necessary to load the latter.
There are a number of configuration options. The first is
CYGIMP_WATCHDOG_HARDWARE, which can be used to
disable the use of the hardware watchdog and switch to a software
emulation provided by the generic watchdog package instead. This may
prove useful during debugging.
By default the watchdog device is set to reset the system when the
timeout expires. It can be configured to raise an interrupt instead;
in that case the interrupt ISR will invoke any installed application action handlers.
The watchdog timeout is controlled by a configuration option that
corresponds to the
CWT field in the SCM's
CWCR register. It can take a value between 8 and
31, with a default of 28. That means 2^28 peripheral bus clock ticks
have to elapse before the watchdog triggers. Typically that
means a timeout of a small number of seconds. There is a
calculated CDL option
CYGNUM_DEVS_WATCHDOG_MCFxxxx_SCM_DELAY which gives the
current delay in nanoseconds.
The watchdog device has a bit which turns it read-only, preventing any
errant code from accidentally disabling it. By default the driver will
set this bit after starting the watchdog. If for some reason the
application needs to access the device directly then the option
CYGIMP_DEVS_WATCHDOG_MCFxxxx_SCM_WRITE_ONCE should be disabled.
By default the watchdog is set to continue ticking even if the
core is halted by an idle thread action or by power management code.
This can cause problems if the application code halts the core for an
extended period of time, so the behaviour can be changed by disabling the relevant configuration option.
If the watchdog device is configured to raise interrupts rather than
generate a reset, then a further configuration option controls
the interrupt priority. There are also configuration options allowing
developers to tweak the compiler flags used for building this package.
The watchdog device driver usually does not require any
platform-specific support. The processor HAL should provide the
device definitions needed by the code. The only porting effort
required is to list
CYGPKG_DEVS_WATCHDOG_MCFxxxx_SCM as one of the
hardware packages in the ecos.db target entry.
2023-08-15: eCosPro Non-Commercial Public License
|
OPCFW_CODE
|
How to Access Public Variables in a VB.Net Module Referenced in a C# Project?
I have a Visual Studio 2013 solution with a VB.Net project (GL2015, compiling to a DLL) and a C# project (PBIS, compiling to a Windows Form). The GL2015 VB.Net DLL is referenced in my PBIS C# project. Basically I am incorporating a VB.NET app into my C# app. The VB.Net (GL2015) project has Modules, one of which contains nothing but global variables. I took its input form and created one in my C# project. In porting over the code-behind for the form in my C# app, I noticed it references these global variables.
Here's my question. How can I reference these 'global' variables from my C# project that are in the VB.Net module?
VB.Net GL2015 Project
Global_Stuff.vb
Public Module Global_Stuff
'These items should be populated by PBIS data:
Public CustomerName, CustomerLocation, EndUser, EndUserLocation, Application, PurchaseOrderNumber, JobProposalNumber, ItemNumber As String
Public MaterialsList As List(Of String)
Public ApplicationsPath, MaterialPropertiesFile, ETODBConnectionString As String
Public ErrorMessage As String
Public ExistingProposal, JobMode, ExistingJob As Boolean
End Module
C# PBIS Project GL2015Form.cs
using GL2015;
...
Global_Stuff.ApplicationsPath = nodeToFind.InnerText;
...
MaterialPropertiesFile = nodeToFind.InnerText;
I applied the using GL2015; directive at the top but couldn't reference the Global_Stuff module members directly. This means I would have to insert 'Global_Stuff.' in front of every global variable referenced from the VB.NET project (see ApplicationsPath above).
Is there any way to reference these global variables in the VB.NET module without having to qualify them in my C# project? There are over 100 global variables that are referenced over 700 times in my C# project.
C# doesn't have a concept of a global variable, so there's not a lot you can do other than qualify it. I'm not sure if C#6's static using could apply here, I'd have to research if the vb is really creating a static class or not in the background.
In C#6, you would add a "using static Global_Stuff;" at the top and then you could access the (static) members of Global_Stuff directly. Prior to C#6, you are out of luck (not that it's that onerous to use a qualifier).
VB modules are compiled in exactly the same way as C# static classes. The ability to use a module member unqualified in VB is syntactic sugar specific to that language in order to keep it in line with how VB6 worked with modules. In C#, treat module members in the same way as you would static class members. That means qualifying the member with the type name, which you can still do in VB too.
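Putting the answers above together, a minimal sketch of the C# 6 fix (type and member names follow the question; the using static directive is the only new piece):

```csharp
// GL2015Form.cs -- requires C# 6 (Visual Studio 2015) or later
using GL2015;
using static GL2015.Global_Stuff;  // import the module's members unqualified

// ...inside the form's code-behind, the VB-style unqualified access now compiles:
// ApplicationsPath = nodeToFind.InnerText;
// MaterialPropertiesFile = nodeToFind.InnerText;
```

One caveat: the asker is on Visual Studio 2013, whose compiler only supports C# 5, so this route requires a compiler/IDE upgrade; prior to C# 6 the Global_Stuff. qualifier is unavoidable.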
I have used Modules in VB.Net to store database names. Why? I got tired of typing the db name on other forms. So when I moved to C#: what, no Modules?
Here is my step by step solution for WinForms with C#
Create Two Forms Main entry form here is frmStart
My second form is frmDataModule
Here is the code in frmDataModule
public partial class DataModule : Form
{
public DataModule()
{
InitializeComponent();
}
}
public class DBName
{
// gv_dbName is used on frmStart
public const string gv_dbName = "CheckBook.db";
}
OK here is the code on frmStart that uses the public const
public void MyMethod()
{
    string textToAdd = DBName.gv_dbName;
    using (StreamWriter sw = File.AppendText(path))
    {
        sw.Write(textToAdd + Environment.NewLine);
    }
}
YES this is an old question I hope the LATE answer is helpful
|
STACK_EXCHANGE
|
I need some help with designing a database. My aim is to persistently store a number of pandas DataFrames in a searchable way, and from what I’ve read SQLite is perfect for this task.
Each DataFrame contains about a million rows of particle movement data like this:
              z            y            x  frame  particle
0     49.724138    45.642857   813.035714      0         0
3789  14.345679  2820.537500  4245.162500      0         1
3788  10.692308  2819.210526  1646.842105      0         2
3787  34.100000  2817.700000  1375.300000      0         3
3786   8.244898  2819.729167  1047.375000      0         4
Using sqlalchemy I can already store each DataFrame as a table in a new DataBase:
from sqlalchemy import create_engine
import pandas as pd

engine = create_engine("sqlite:////mnt/storage/test.db")
exp1.to_sql("exp1", engine, if_exists="replace")
exp2.to_sql("exp2", engine, if_exists="replace")
exp3.to_sql("exp3", engine, if_exists="replace")
But this is too basic. How can I store each DataFrame/experiment with a couple of metadata fields like
Date in such a way that later on it’s possible to return all experiments conducted by a certain person, or on a specific date?
I will add more columns over time. Assuming each DataFrame/experiment has a column
velocity, how could I retrieve all experiments where the mean temperature value is below or above an arbitrary threshold?
You’ve created 3 separate tables (well 2, pending the apparent typo?). If you want to unify the data, you probably shouldn’t be forcibly overwriting target tables with
replace: Drop the table before inserting new values.
append: Insert new values to the existing table.
Assuming your similarly named files have the same schema, you can edit the last 3 lines as follows.
exp1.to_sql("exp", engine, if_exists="append")
exp2.to_sql("exp", engine, if_exists="append")
exp3.to_sql("exp", engine, if_exists="append")
This will insert all three datasets to a single table named
exp instead of 3 separate tables.
If each csv isn’t uniquely identified from the others within itself – for example if
exp1.csv looks like this…
Name,Date,Temperature
Peter,2020-01-01,50
Paul,2020-01-01,55
Mary,2020-01-01,53
Jane,2020-01-01,49
…then you can append the experiment identifier to each dataset as needed in the dataframe. For example by…
>>> exp1['ExpName'] = 'exp1'
>>> exp1
    Name        Date  Temperature ExpName
0  Peter  2020-01-01           50    exp1
1   Paul  2020-01-01           55    exp1
2   Mary  2020-01-01           53    exp1
3   Jane  2020-01-01           49    exp1
…which will allow you to group by experiment in any follow-on SQL you may run against your database.
…how could I retrieve all experiments where the mean temperature value is below or above an arbitrary threshold?
…well given an arbitrary additional two datasets of…
➜ /tmp cat exp2.csv
Name,Date,Temperature
Peter,2020-01-02,51
Paul,2020-01-02,56
Mary,2020-01-02,54
Jane,2020-01-02,50
➜ /tmp cat exp3.csv
Name,Date,Temperature
Peter,2020-01-02,52
Paul,2020-01-02,57
Mary,2020-01-02,55
Jane,2020-01-02,51
…that you likewise appended the
expN identifier to in the dataframe, then you would run the following SQL to retrieve experiments where the average temp was below 53
SELECT ExpName, AVG(Temperature) FROM exp GROUP BY ExpName HAVING AVG(Temperature) < 53;
Which I’ll leave to you to plug into SQLAlchemy as you like 🙂
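For the metadata fields (person, date, ...), a variation on the same idea is to keep them in a separate table rather than repeating them on every measurement row. A sketch using only the standard-library sqlite3 module; all table and column names here are illustrative, not from the original post:

```python
import sqlite3

# One `experiments` table holds the per-experiment metadata, and one
# `measurements` table holds the bulk rows, keyed by experiment id.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE experiments (
        exp_id INTEGER PRIMARY KEY,
        name   TEXT,
        person TEXT,
        date   TEXT
    );
    CREATE TABLE measurements (
        exp_id      INTEGER REFERENCES experiments(exp_id),
        temperature REAL
    );
""")
con.executemany("INSERT INTO experiments VALUES (?, ?, ?, ?)",
                [(1, "exp1", "Peter", "2020-01-01"),
                 (2, "exp2", "Mary", "2020-01-02")])
con.executemany("INSERT INTO measurements VALUES (?, ?)",
                [(1, 50.0), (1, 52.0), (2, 56.0), (2, 58.0)])

# "All experiments where the mean temperature is below a threshold":
rows = con.execute("""
    SELECT e.name, AVG(m.temperature) AS mean_temp
    FROM experiments e JOIN measurements m USING (exp_id)
    GROUP BY e.exp_id
    HAVING mean_temp < 53
""").fetchall()
print(rows)  # each row: (experiment name, mean temperature)
```

Querying by metadata then becomes a plain WHERE on the experiments table (e.g. WHERE person = 'Peter' or WHERE date = '2020-01-01'), and pandas can pull any such result back with pd.read_sql(query, engine).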
|
OPCFW_CODE
|
PHP: How to detect a website redirect that returns a '404' status code?
I am writing a small php script that can distinguish two between different kinds of responses from a third-party website.
For the human visitor, recognizing the difference is fairly easy: Response #1 is a bare-bones 404 error page, whereas response #2 redirects to the main page.
For my script, this turns out to be somewhat more difficult. Both types return a '404' status code, file_get_contents() returns empty for both, and the "redirect" doesn't really register as a redirect (like I said, there's a '404' status code, not a '30X'). get_headers() shows no distinction, either (no "Location:" or anything of that sort).
Any way I can get this done?
Look at get_headers($the_URL, 0);
Are you using cURL to retrieve this external webpage?
Post the URLs to have a look on what they respond.
As posted above, get_headers() shows no distinction, and I am using file_get_contents() to retrieve the page. Should I try cURL instead?
@axiac: Sending you a message in a moment.
How do I send messages, and is there a way to actually get notified if there are comments? I absolutely adore this site when I'm in read-only mode, but every time I try to post here, it kills me.
@Brokenstuff I think it doesn't provide a way to send private messages.
@Brokenstuff If you use the desktop version of the website then there are two small notification areas on the left of the black bar on the top of the page (just after the "Stack Exchange" logo). The first one shows (on red background) the number of unread comments you have (comments on your questions or answers or comments that mention your username). The other one shows on green background the number of reputation points you received. These icons exists in the mobile page too but they don't display anything, you have to tap on them. I don't know how it is on the mobile app, I don't use it.
I can see the icons fine, but since I'm an infrequent visitor, I'd really need the system to send me mail notifications. My preferences are set to send me any unread stuff from my inbox, but I'm afraid it doesn't happen for me. Makes me feel kinda bad because that way I might accidentally abandon my own tickets.
There are many ways to do a redirect:
the HTTP response codes for redirect (usually 301 and 302) accompanied by the Location: header that contains the URL
HTTP/1.1 302 Found
Location: http://www.example.org
the HTTP header Refresh:; it contains a number of seconds to wait and the new URL:
Refresh: 0; url=http://www.example.org
the HTML meta element that emulates the Refresh HTTP header:
<meta http-equiv="Refresh" content="0; url=http://www.example.org">
Javascript:
<script>document.location = 'http://www.example.org';</script>
Note that there are countless possibilities to redirect using Javascript. What is common to all of them is the usage of the location property of document or window. location is an object of type Location that can be assigned directly using a string or can be changed using its href property or its methods assign() and replace().
If the requests to your URLs do not return any content, and they both return status code 404 with no Location: header, then check for the presence of the Refresh: header in the response.
You'd better use cURL to make the requests instead of file_get_contents(). cURL provides better control over the headers sent and received.
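The thread is about PHP, but the decision logic is language-independent. Here is a sketch in Python of checking a response for each of the redirect mechanisms listed above; the function name and return labels are illustrative, not from any library:

```python
import re

def detect_redirect(status, headers, body):
    """Report which redirect mechanism, if any, a response uses."""
    headers = {k.lower(): v for k, v in headers.items()}
    # 1. Classic HTTP redirect: 30x status plus a Location header.
    if status in (301, 302, 303, 307, 308) and "location" in headers:
        return "http-location"
    # 2. Refresh HTTP header.
    if "refresh" in headers:
        return "http-refresh"
    # 3. <meta http-equiv="Refresh"> element in the body.
    if re.search(r'<meta[^>]+http-equiv=["\']?refresh', body, re.I):
        return "meta-refresh"
    # 4. JavaScript assignment to document.location / window.location.
    if re.search(r'(document|window)\.location', body, re.I):
        return "javascript"
    return None

# A 404 page that redirects via JavaScript, like the one in the question:
kind = detect_redirect(404, {}, "<script>window.location='http://www.example.org';</script>")
print(kind)  # javascript
```

The same four checks translate directly to PHP once you retrieve the raw headers and body with cURL.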
Thanks a ton - that did the trick. I used some code from here: http://stackoverflow.com/questions/14953867/how-to-get-page-content-using-curl - then I sent the results through htmlspecialchars() and now I have the entire output I can work with. Turns out the redirect was done via Javascript (window.location).
I would suggest you make a cURL request to the desired site and see what kind of response it is.
As described in the PHP manual,
curl_getinfo($handle, CURLINFO_HTTP_CODE);
will give you the HTTP status code directly. (Calling curl_getinfo($handle) with no option returns an associative array, in which you will find the http_code key holding the status code.)
If the redirect case gives you a non-30X status code, you can try to fetch the redirect URL with:
curl_getinfo($handle, CURLINFO_REDIRECT_URL);
|
STACK_EXCHANGE
|
Blue Leather Top Handle Tote
- $365.00Johnny Was Kaya Embroidered Overnight Tote Bag Details Johnny Was "Kaya" leather and embroidered denim overnight bag with golden hardware. Whipstitched top handles, 7" drop. Removable, adjustable shoulder strap, 18" drop. Zip top with hanging strap pull. Exterior, front zip pocket. Cotton lining, zip pocket, slip pockets. 13"H x 19"W x 7"D. Imported. Designer
- $3,590.00Christian Louboutin Paloma Medium Python Tote Bag Details Christian Louboutin "Paloma" tote bag python snakeskin with rubber and satin-finish calf leather trim. Rolled top handles anchored onto body, 4.3" drop. Removable, adjustable shoulder strap, 21.6" drop. Zip top. Exterior, signature red sole stud at front. Interior, signature red lining. Metal feet protect bottom of bag. 10.4"H x 12.6"W x 5.5"D. "Paloma" is made in Italy. Python cannot be sold or shipped to California. Designer
- $1,043.00Emilio Pucci Large PVC Logo Tote Bag Details Emilio Pucci canvas and clear PVC tote bag with leather trim and golden hardware. Flat top handles, 10" drop. Open top with center clip closure. Exterior, large printed logo. Unlined interior, leashed zip pouch. 14"H x 13"W x 6"D. Made in Italy. Designer
- $248.00Tory Sport Sport T Large Nylon Tote Bag Details Tory Sport nylon tote bag with faux leather (PVC) trim. Canvas and flat top handles, 9" drop. Removable, adjustable logo shoulder strap, 20" drop. Open top with magnetic closure. Exterior, front zip pocket; two side pockets. Interior, two zip pockets, four slip pockets. 15"H x 17"W x 5"D. Designer
- $369.00Jon Hart Coated Canvas Burleson Bag Details Tote bag made with fine, soft Italian leather or durable coated canvas. Rolled top handles. Removable shoulder strap. Recessed zip top closure. Can be personalized with your name or initials. 18"W x 11"D x 15"T. Made in the USA. You will be able to specify personalization details after adding item(s) to your shopping cart. Please order carefully. Orders for personalized items cannot be canceled, and personalized items cannot be returned.
- $276.00Jon Hart Soft-Sided Carry On Bag Details Coated canvas, soft sided carry-on bag. Zippered top opening and unique shape allow you to see everything inside the bag. Features rolled leather handles, reinforced leather corners and brass feet. Silvertone hardware. Can be personalized with your name or initials. 19"W x 9.5"D x 12.5"T. Made in the USA. You will be able to specify personalization details after adding item(s) to your shopping cart. Please order carefully. Orders for personalized items cannot be canceled, and personalized items cannot be returned.
|
OPCFW_CODE
|
Before I started work on creating a vector game engine to be used on displays like oscilloscopes, I had watched a really fantastic explanation of how to create graphics for laser displays by Seb Lee-Delisle:
In this video, Seb talked about the challenges of working with a laser that takes time to accelerate and decelerate as it moves around the screen. Draw sorting and gradually changing the laser’s position when it moves between shapes are two techniques he used when recreating Asteroids for a laser projector. When approaching my new oscilloscope project, I kept these ideas in my back pocket as solutions to problems I expected to have.
Overshooting and Oscillations
Sure enough, I did have some problems with controlling the electron beam of my oscilloscope vector display. When moving from one position to another which is very far away, the electron beam would overshoot and oscillate before finally settling into the place I wanted it to land:
For context, my current setup and game engine uses an audio DAC running at 192 kHz with a variable video game framerate. My current project targets 80 fps, but it could go much higher. 80 fps with 192,000 samples per second means that I can move the electron beam 2400 times per frame. If I need to move the electron beam more times per frame, that frame will take longer to draw and the framerate will drop. If it drops too low, you can start to see flickering, like an old CRT monitor running at a low refresh rate. The electron beam itself moves extremely quickly: it can jump to a new position in far less than 1/192,000th of a second! So I need to gradually step along the path I am drawing. This effectively “slows down” the electron beam’s movement when drawing a shape. This means that 2400 samples per frame is not very much to work with!
Getting back to the problem of the electron beam overshooting and oscillating, I started first with the solution of draw sorting. I figured that if I could make the electron beam move in a more optimized path that it could help reduce the distance that the beam needed to travel between shapes and thus reduce the overshooting and oscillation at the start of drawing a new shape.
I implemented a very quick prototype algorithm that simply looked for the next shape that was closest to the current electron beam position every time it finished drawing a shape. This had some immediate drawbacks that I didn’t expect! Although, it’s possible this method could help with the overshooting, I found that it caused a much bigger problem: the scene started flickering and jittering as I moved the camera around!
The reason for this flicker and jitter was immediately obvious to me — the refresh rate of each shape was varying between frames! Here’s an example of a worst case scenario that would cause this problem: Let’s say, after sorting all the shapes, the draw order of objects is:
[A, B, C, D, E, F, G]. But then the player moves the camera and the scene changes such that the draw order after sorting is something like
[B, C, D, E, F, G, A]. While object A was drawn first in one frame, it was drawn last in the next frame. Without sorting, object A would have been drawn once every 1/80th of a second, but now it needed to wait a full 1/40th of a second before being re-drawn to the screen. This behaviour causes flicker and can make objects look like they are jittering as a camera pans across the scene.
This experiment has made it very clear that I must always draw shapes in the same order with a vector display to ensure that their refresh rate is kept somewhat constant!
Blanking is a term used to describe the time when a display will “blank” (stop drawing an image) for a short period while it prepares to draw at a different position, usually on the opposite side of the screen. The idea of blanking is important and valuable to both raster and vector displays. With my current oscilloscope, I do not have the ability to turn off the electron beam, but I can dedicate some additional time to give the electron beam a chance to settle in its new position before starting to draw. Here’s what it looks like if I pause on the new draw position for 7 samples (7/192,000th of a second) before continuing to draw the new shape:
You can see that this definitely helps. Now each line is being fully drawn, though the initial oscillations as the beam settles on the new location can still be seen.
Exponential Deceleration During Blanking
There was one last trick that I kept in my back pocket: changing the acceleration of the electron beam between points, during blanking. I fiddled with some different tween functions, and settled on a 7-sample blanking with a quadratic ease out:
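For illustration, here is a rough sketch of how an eased blanking path might be computed. The function names and the hard-coded 7-sample count are illustrative, not taken from the engine's actual source:

```python
# Sketch: quadratic ease-out positions for the beam during blanking.
BLANK_SAMPLES = 7

def ease_out_quad(t):
    """Map t in [0, 1] to an eased value: fast at first, slow at the end."""
    return 1.0 - (1.0 - t) ** 2

def blanking_path(x0, y0, x1, y1, n=BLANK_SAMPLES):
    """DAC samples stepping the beam from (x0, y0) to (x1, y1), decelerating."""
    path = []
    for i in range(1, n + 1):
        t = ease_out_quad(i / n)
        path.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    return path

# Each step is shorter than the last, so the beam arrives gently.
print(blanking_path(0.0, 0.0, 1.0, 0.0))
```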
Ta-da! This is now looking pretty crisp. In the future, I plan to change the number of blanking samples based on the distance between the two shapes. If it’s a small distance, there’s no point in spending a full 7 samples during this blanking time.
Oscilloscope Z-Input & Blanking
I mentioned previously that I was unable to turn the electron beam off and on while moving between shapes. This isn’t entirely true: my oscilloscope does have a “z-input” that can be used for blanking. But unfortunately, I found that it is AC-coupled. This means that the z-input is only able to detect changes in voltage, rather than read the DC voltage directly like my x and y-inputs. My game engine has actually supported changing the brightness of samples through a “z-input” and a third audio channel since the beginning, but I will need to get a new oscilloscope with a DC-coupled z-input that I can use for full-featured blanking.
My vector game engine’s source code is available on Github under the MIT license. It’s pretty rough and I intend to use it primarily for my own experiments, but you’re welcome to peek around and check out the progress. I plan to continue to post here from time to time with updates, but for the latest news on the project, check out my Twitter feed.
|
OPCFW_CODE
|
Notice from Ricoh.
App for computer supporting the RICOH THETA V will be released today.
RICOH THETA Movie Converter
-Spatial audio of videos recorded on the RICOH THETA V can be converted to the YouTube spatial audio format.
1. Drag and drop the video file you want to convert onto the app icon.
2. The converted video file (*.mov) is saved in the same folder as the original video file.
Perform a fresh install from the download page.
RICOH THETA Movie Converter for Windows converts movie files with 360º spatial audio
RICOH THETA V outputs into YouTube movie formats.
Uploading converted movie files to YouTube, you can enjoy movies with 360º spatial audio
To install the converter, launch the Setup.exe supplied with the product and follow the
dialog boxes for installation.
Follow the procedure below to convert movie files with 360º spatial audio into YouTube movie formats.
- Connect RICOH THETA V to your computer using a USB cable and copy the movie file (for instance, R0010001.MP4) you want to convert to your computer.
- Start the main app for computer “RICOH THETA”, and drag and drop the movie file (for
instance, R0010001.MP4) to it.
- Uncheck the [top/bottom correction] check box and click the [Start] button.
- Drag and drop the converted movie file (for instance, R0010001_er.MP4) to RICOH
THETA Movie Converter on the desktop screen.
When a dialog box appears, specify a file name and start conversion. After the file is
converted, an MOV file (for instance, R0010001_er.mov) is output.
- After the MOV file is output, upload it to YouTube.
This procedure applies when the firmware ver.1.00.2 and the main app for computer
“RICOH THETA” ver.3.1.0 are used.
The procedure may change when the software is updated in the future.
For details about how to upload the files to YouTube, please visit and check YouTube.
When using RICOH THETA V for shooting, place it on a tripod, etc. and stand it vertically.
Please visit and check theta360.com.
You need to convert the file into equirectangular format. The file must have ER in it. Verify that the spatial audio works on the Ricoh desktop app
Make sure you drag the file with ER in the filename.
Make sure you upload the file ending in .mov
If you have a YouTube video with spatial audio that you created with the THETA V, please share it with the community. Simply reply to this post and put the link into the reply. We can all enjoy your interesting use of spatial audio.
|
OPCFW_CODE
|
Specific information regarding integrating the Authelia OpenID Connect Provider with an OpenID Connect relying party.
Generating Client Secrets
We strongly recommend the following guidelines for generating client secrets:
- Each client should have a unique secret.
- Each secret should be randomly generated.
- Each secret should have a length above 40 characters.
- The secrets should be stored in the configuration in a supported hash format. Note: This does not mean you configure the relying party / client application with the hashed version, just the secret value in the Authelia configuration.
- Secrets should only have alphanumeric characters as some implementations do not appropriately encode the secret when using it to access the token endpoint.
Authelia provides an easy way to perform such actions via the
Generating a Random Password Hash guide. Users can
perform a command such as
authelia crypto hash generate pbkdf2 --variant sha512 --random --random.length 72
to both generate a client secret with 72 characters, which is printed and is to be used with the relying party, and hash it
using PBKDF2, which can be stored in the Authelia configuration.
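Conceptually, that command does something like the following Python standard-library sketch. The iteration count and the raw salt/digest handling here are illustrative; Authelia's own crypt-style encoding of the hash differs:

```python
import hashlib
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # alphanumeric only, per the guidance above

def generate_client_secret(length=72):
    """Randomly generate an alphanumeric client secret."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def hash_secret(secret, iterations=310000):
    """Derive a salted PBKDF2-SHA512 digest of the secret for storage."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha512", secret.encode(), salt, iterations)
    return salt, digest

secret = generate_client_secret()     # give this value to the relying party
salt, digest = hash_secret(secret)    # store a hash of it in the Authelia configuration
print(len(secret), len(digest))       # 72 64
```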
Authelia technically supports storing the plaintext secret in the configuration. This will likely be completely unavailable in the future as it was a mistake to implement it like this in the first place. While some other OpenID Connect 1.0 providers operate in this way, more often than not they are operating in this way in error. The current technical support for this is only to prevent massive upheaval to users and give them time to migrate.
As per RFC 6819 Section 5.1.4.1.3 the secret should only be stored by the authorization server as hashes / digests unless there is a very specific specification or protocol that is implemented by the authorization server which requires access to the secret in the clear to operate properly, in which case the secret should be encrypted and not be stored in plaintext. The most likely long term outcome is that the client configurations will be stored in the database with the secret both salted and peppered.
Authelia currently does not implement any of the specifications or protocols which require secrets being accessible in the clear and currently has no plans to implement any of these. As such it’s strongly discouraged and heavily deprecated and we instead recommended that users remove this from their configuration entirely and use the Generating Client Secrets guide.
Plaintext is denoted by the
$plaintext$ prefix, where everything after the prefix is the secret. In addition, if
the secret does not start with the
$ character it is considered a plaintext secret for the time being, but this behaviour is
deprecated.
Frequently Asked Questions
Why isn’t my application able to retrieve the token even though I’ve consented?
The most common cause for this issue is when the affected application cannot make requests to the Token Endpoint.
This becomes obvious when the log level is set to
trace and there is a presence of requests to the Authorization
Endpoint without errors but an absence of requests made to the Token Endpoint.
These requests can be identified by looking at the
path field in the logs, or by messages prefixed with
Authorization Request indicating a request to the Authorization
Endpoint and
Access Request indicating a request
to the Token Endpoint.
All causes should be clearly logged by the client application, and all errors that do not match this scenario are clearly logged by Authelia. It’s not possible for us to log requests that never occur however.
|
OPCFW_CODE
|
RP sd card WLAN
I have a Raspberry Pi 3 with Jessie and I forgot to bring a keyboard and mouse. I am at a new location and I need to connect my Pi to the WiFi here. I do not have an Ethernet cable.
I can access the SD card from a Windows computer.
Any ideas?
First, you need to add a file called ssh (with no file extension) to the /boot/ directory on your SD card so that it will enable the SSH access. The content of the file doesn't matter.
Create a file name onetimewifi.txt and save it to SD card /boot/ directory, the content of onetimewifi.txt looks like this:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="wifi_ssid"
psk="wifi_password"
}
Boot your Raspberry Pi, and find out your Raspberry Pi's IP address from your router (Raspberry Pi MAC addresses always start with b8:27:eb:xx:xx:xx). If you don't have access to the router, try running the Linux command arp -a in your terminal.
Log in to your Raspberry Pi using the ssh command from PuTTY (or another terminal application): ssh pi@<IP_ADDRESS> (replace <IP_ADDRESS> with the Raspberry Pi IP address you found in step 3).
Once logged in, run sudo mv /boot/onetimewifi.txt /etc/wpa_supplicant/wpa_supplicant.conf.
This is similar to hcheung's response above, but a little simpler.
As of about a year ago, Raspbian will automatically move any wpa_supplicant.conf file it finds in the boot partition (which contains the files that the CPU reads on boot up) to the correct folder for everything to work.
So connect the SD card to your computer and create a file called wpa_supplicant.conf in the drive it shows up as.
Note that if you do this from Notepad, when you save it you'll want to find the drop-down menu that says Text files (*.txt) and select the option All files (*.*) instead, otherwise it won't work correctly.
Put this in that file:
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
ssid="<your wifi network name>"
psk="<your wifi network password>"
}
It should connect when you plug it in. If not, try rebooting (if you have no way to access a command line, you should be able to just pull the plug to shut it down, though avoid that if possible). If it still doesn't work, hcheung's solution is your best bet.
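The two files above can also be written by a small script run on the computer with the SD card mounted. This is a hedged sketch: the BOOT path, SSID, and password are placeholders you must replace with your own values.

```python
import os

BOOT = "boot_demo"   # replace with the mounted boot partition, e.g. "E:\\" on Windows
SSID = "wifi_ssid"
PSK = "wifi_password"

WPA_TEMPLATE = """ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={{
    ssid="{ssid}"
    psk="{psk}"
}}
"""

os.makedirs(BOOT, exist_ok=True)

# An empty file named "ssh" (no extension) enables the SSH server on boot.
open(os.path.join(BOOT, "ssh"), "w").close()

# Raspbian moves this file into /etc/wpa_supplicant/ on first boot.
# newline="\n" keeps Unix line endings even when run from Windows.
with open(os.path.join(BOOT, "wpa_supplicant.conf"), "w", newline="\n") as f:
    f.write(WPA_TEMPLATE.format(ssid=SSID, psk=PSK))
```

Writing the files with Unix line endings avoids the Notepad pitfall mentioned above.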
|
STACK_EXCHANGE
|
With the rise of digital technology and internet-based education, students all across the world are embracing this new learning method. Getting an online degree seems to be the new trend in a world growing as rapidly as ours, and it comes with certification as well. Getting an online degree in computer science brings many benefits, as you are able to interact with bright minds all over the world and greatly expand your network of like-minded people.
To get a degree in computer science, you need to have a good general understanding of computer science and its sub-fields. You can do research, study at a college or university, or pursue a degree online through any online degree learning platform.
There are many ways to get an online degree in computer science that will help you hone your skills in the fields of computing, data analysis, and storage systems. Online degrees do not require your physical attendance; rather, from the comfort of your home, you can attend lectures and take tests, provided you have a good internet connection and a working device. Here’s what you need to know about getting an online degree in computer science.
What is computer science?
Computer science is the study of computer systems. The field encompasses a wide range of topics such as software engineering, virtualization, and parallel computing. The goal of computer science is to understand the behavior of real-world systems and to create software that can interact with them.
A good example of this can be found in computer programming. There are many different types of programming that can be used to interact with computer systems. The main focus for computer scientists is on function and technical errors. The meaning of an error can vary depending on the type of computer system you are working on. A good way to look at it is that errors are the residue left over from the construction of the system.
What are the requirements for a computer science degree?
To get an online degree in computer science, you need to have a basic understanding of data structures, programming, and data analysis. These are necessary because these areas are applicable to every aspect of computer science. You also need to have a basic knowledge of functions and data types to be able to understand what is happening in the system and why certain actions are taken.
How to get an online degree in computer science?
You can get a degree in computer science through any university or college. Generally, you will be able to take either an Honorary or a Certificate in Computer Science. Once you have the appropriate honors or certificate, it’s very simple to start looking for a job in the field of computer science.
There are many online degree programs that provide the necessary information and resources to get you started. The good news is that job opportunities are very good in this field. With so many options, it’s hard to choose just one. In fact, the job opportunities are so good that it’s hard to decide which opportunities are good enough to apply to.
What Are The Different Types of Computer Science Degrees?
There are many different types of computer science degrees available. There are associate degrees, entrance degrees, bachelor’s degrees, and master’s degrees. To get a good idea of the different degrees, think about the following types of courses.
Associate Degree: These are the most common types of computer science degrees. They can be used to study basic mathematics, word skills, physics and more.
Bachelor’s Degree: Bachelor’s degrees are usually offered in disciplines such as accounting, business administration, computer programming, and psychology.
Graduate Certificate: This is a special program for advanced students. It can be used to show leadership and leadership training.
Veterans’ Educator’s Certificate: This is for LEED-certified veterans and their families. This is good for immediate use because the certification takes less than a week to earn.
Online Courses in Computer Science
There are many different ways to go about online courses. You can take survey courses, read online books, and complete online exercises. These can be very challenging and potentially overwhelming. If you take a look at the reviews on some of the most popular websites, you’ll see how many people have trouble with these courses.
Make sure you understand the challenges and get help if you need it. If you’re looking for a more in-depth course, there are many online programs that provide in-depth content. Online courses in computer sciences are well explained to help you understand the content and what is being taught, as well as give you a better understanding of the themes and practices of computer science.
Final Words: Find the right degree for you!
The best way to get an online degree in computer science is to take a variety of courses. There is no one-fits-all solution to this, as there are many different types of courses. If you decide you want to go with a specific program, you can look at different departments at the school to see what they teach.
If they don’t have a specific entry-level course in computer science, they usually have online courses that can help you progress in that field. If you decide on a specific program, make sure you are really interested in the topics and have a strong motivation to succeed. It’s very difficult to get a degree in computer science that is not interesting to take.
|
OPCFW_CODE
|
Twitter has changed its API rate limit policy again. There are good and bad parts of this change, but the change effectively hamstrings some analytically important endpoints, which is unfortunate.
What is an API?
An API — or Application Programming Interface — is a protocol that programs use to interact with each other. There are different kinds of APIs, but the Twitter REST API (like most APIs) boils down to a standardized format in which developers can pose questions to an online platform and interpret the corresponding answers.
What is a rate limit?
Answering the questions programs pose to an API takes some computational power, especially for platforms as big as Twitter is. (Once you have more than 100M active users in your system, it takes some time to figure out who follows whom.) For this reason, an API will only answer so many questions from any one program before it stops answering questions from that program for a while. In this way, an API can make sure no one program takes up all its resources. This policy by which questions are answered or ignored based on usage is called an API’s “rate limit.”
How did Twitter change its API Rate Limit Policy?
The old rate limit policy was very simple: a user could pose 350 questions to the API per hour, and any questions after that were ignored until the current hour had passed. The hour-long windows started at the top of the hour, so if you asked 350 questions between 12:00 and 12:30, you had to wait until 1:00 before you could ask any more questions.
As rate limits go, this rate limit policy was a good one. Sure, you could only ask 350 questions per hour, but you can get a lot of work done with 350 questions, and planning for different workloads wasn’t too hard since all questions counted against the API rate limit the same way.
This most recent change affected three key elements of Twitter’s API rate limit policy:
- Rate limit windows are now 15 minutes instead of 60 minutes. New rate limits start every 15 minutes instead of every 60 minutes. This change doesn’t affect much.
- Rate limits are counted per-question type instead of across all questions. Asking “Who follows @sigpwned?” does not affect whether Twitter will answer you when you ask “What lists is @sigpwned on?” The rates are counted separately. This change wouldn’t affect much either, except for change #3.
- Rate limits for some question types have been increased 2x; rate limits for other question types have been decreased 6x. Because rate limits are counted per-question type now, users can ask significantly more questions of the API on an hourly basis than they could in the past. While this sounds good at first blush, it’s not all upside since some endpoints got their allowed usage decreased. If you use a question with a decreased rate limit a lot, this definitely isn’t good news. You can find the full list of question types and rate limits here, but here are the ones analysts will care about:
How does the change affect analytics?
If you make regular use of API questions that just had their rate limit reduced, like W2O does, your life just got harder. For example, doing graph analysis on the accounts a group of users follow will now take about 6x longer, all things being equal.
The reality is that if clients need data, they’ll get data. With these rate limit changes, though, if a client deliverable requires data that takes longer to collect, the client deliverable may take longer to finish. Ultimately, managing these new API rate limits and keeping the trains running on time will require companies to get more clever about how they approach API usage.
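"Getting clever" mostly means per-endpoint bookkeeping. Here is a minimal sketch of a 15-minute windowed, per-question-type rate limiter; the class name and the limits used are illustrative, not Twitter's actual figures:

```python
import time
from collections import defaultdict

WINDOW = 15 * 60  # the new rate-limit window: 15 minutes, in seconds

class WindowedRateLimiter:
    def __init__(self, limits):
        self.limits = limits                           # endpoint -> calls allowed per window
        self.windows = defaultdict(lambda: (0.0, 0))   # endpoint -> (window start, call count)

    def try_acquire(self, endpoint, now=None):
        """Return True if a call to this endpoint is allowed right now."""
        now = time.time() if now is None else now
        start, count = self.windows[endpoint]
        if now - start >= WINDOW:          # a new window has started: reset the counter
            start, count = now, 0
        if count >= self.limits[endpoint]:
            return False                   # caller must wait for the next window
        self.windows[endpoint] = (start, count + 1)
        return True

rl = WindowedRateLimiter({"followers/ids": 15, "statuses/user_timeline": 180})
ok = [rl.try_acquire("followers/ids", now=1000.0) for _ in range(16)]
print(ok.count(True))  # 15 allowed; the 16th call in the window is rejected
```

Note that each endpoint's counter is independent, mirroring the per-question-type accounting described above.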
Cleverness always implies an investment, whether it’s in time, money or both. It’s curious that Twitter wouldn’t simply offer a paid model for its API that would capture some of the investment these companies will now have to make and add it to their bottom line. Instead, this investment will just get “thrown away” into more and more complex processes and software.
While the reasons behind this decision are interesting to speculate about, the question of “Why?” is a different discussion.
Because of the importance of the issue, we did a short podcast exploring the issue in more detail. You can find the link above.
|
OPCFW_CODE
|
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

class Dota2:
    def __init__(self):
        # Website schedule for Dota 2 matches
        self.matchPageURL = 'http://www.gosugamers.net/dota2/gosubet'
        # A User-Agent header is needed because the site rejects bare requests
        req = Request(self.matchPageURL, headers={'User-Agent': 'Magic Browser'})
        response = urlopen(req)
        self.matchPageSoup = BeautifulSoup(response, 'html.parser')
    def findMatches(self):
        if self.matchPageSoup:
            # Each match group on the page is wrapped in a div with class "box"
            liveMatchesGroup = self.matchPageSoup.body.find_all("div", class_="box")
            print(liveMatchesGroup)
        else:
            print("Can not find the given website.")
test = Dota2()
test.findMatches()
|
STACK_EDU
|
At Enclouse, we believe that technologies change as frequently as business needs. So, our philosophy has been very simple: keep learning and build for the future. We have experience in building enterprise software and architectures that can sustain changes in business and technologies. Founded in 2010 in the San Francisco Bay Area, Enclouse has kept its uniqueness in helping its clients build systems using the latest technologies best suited to their business needs.
We have architected enterprise solutions for several Healthcare, Web Security, Property Tax Management, eCommerce, Interior Designing systems. We have created several custom frameworks using technologies like MEAN Stack, ASP.Net MVC 5, LAMP, Grails, etc. We are passionate about design patterns, custom frameworks, web services, TDD.
We have experience in developing scalable, maintainable, testable, usable solutions implemented using various technologies like Angular JS, Node JS, Express, KOAJS, Passport-SAML JS, MongoDB, Nginx, C#, ASP.NET MVC 5, Bootstrap, WPF, WCF Web Services, WIF, PHP, Java, and the list goes on. Basically, languages and technologies have been our strengths. We have implemented systems on Heroku, AWS, and Windows Servers.
We are passionate about mobile development and constantly keep exploring various app development activities. We have developed apps for both iOS and Android, and have worked with mobile frameworks like Ionic and Cordova.
At HomeLane.com, we are helping with the architecture and development of a platform and marketplace where designers and consumers can interact online. We are using the MEAN Stack, LAMP Stack, and CRM tools deployed on AWS. We are responsible for building, from the ground up, a world-class team that has taken on the development of complex systems.
Architected and implemented a secure, scalable, maintainable, usable platform integrated with PACS systems to streamline their business process and maximize their doctor-to-case-study ratio, allowing this group of doctors to handle the case load from several dozen hospital facilities throughout the country.
Declaration Services Inc. is a property management company providing services to large companies in the Bay Area. We are responsible for their website, their web-based application development, and the maintenance of their entire IT infrastructure.
This e-commerce site is one of the largest online sporting goods suppliers in India. Enclouse is helping the company maintain the site along with all new technical enhancements. We are re-writing the site with a new architecture and implementing both web and mobile interfaces.
BizTek Innovations client implemented a Web based Hospital Materials Management Solution for managing the inventory of the hospitals. We are responsible for architecture, project management, and implementation of this project.
Sriram has more than 20 years of extensive experience in the software industry. He is a technology enthusiast and a seasoned architect of highly secure and scalable enterprise applications. He specializes in several technologies, including the Microsoft .NET Framework, the Java stack, the MEAN Stack, and service-oriented, database-driven, cross-browser, and Windows environments.
Srinivas brings more than 25 years of experience working in various positions at companies like Mentor Graphics, CMR Designs Automations, CDOT, and Altos India. He is responsible for marketing and operations.
Engineering Head, NightShift Radiology, based in the San Francisco Bay Area, USA. He is a veteran with over 25 years of experience in software development and is THE technology evangelist. He brings rich experience in the areas of healthcare and web security. Raul holds a degree in Computer Engineering.
CTO at HomeLane.com, an interior design company. He has 20 years of experience in software development and brings his knowledge of technology and international markets to the company. Srini holds a Master of Science from Iowa State University, USA.
325 Woodruff way
Milpitas CA 95035
5/1 Gulmohar Enclave
Above LG Service Center
Opp. Sankara Eye Hospital
001 (408) 946-8653 USA
+91 776 032 2488 India
|
OPCFW_CODE
|
How to display data (updating the static hard-coded values) in my app's view controller's labels once the data is received from the server
Hello Fellow Programmers.
I am new to Swift programming and have researched parsing JSON data.
I am building an app that uses authentication from the server.
Once authentication is successful, I receive data that has all the details. The snippet below shows a glimpse of the data received in the console:
[ };
    troubleTicket = {
        closedTicketsCount = 28395;
        openTroubleTicketsCount = 434;
        pendingWithCustomerTicketsCount = 602;
        resolvedTicketsCount = 3238;
        workInProgressTicketsCount = 380;
    };
    userDetails = {
        errorDescription = "<null>";
        password = F5A2DC7EE0757DDF0C6E74A796D6E1C4785AD438;
        status = Success; ]
I have already hard-coded these values in my first view.
My view has the following components:
a) number of tickets resolved
b) number of tickets unresolved
c) pending tickets
This image shows the labels that need to be updated.
Can anyone help me with how to update the labels from the data I received?
Do I have to create a global class that stores all these values and then update the labels?
Awaiting reply.
Thanks
You haven't provided enough information about your app's structure and how you set up your view controller. In general, though, I would say that you should have instance variables that hold the values that you display in your labels. In your init method, install starting values into those variables. In viewWillAppear, call a method to display the current values of those instance variables into your labels. Then, in your code that handles the JSON data call the method that displays the data to the labels again.
I would also recommend using https://github.com/SwiftyJSON/SwiftyJSON for swiftier json usage
I have tried as per your instructions. https://github.com/madmax994570/VodafoneApp_swift is where I have hosted the project. Can you please have a look?
If you have multiple places or controllers in the app which need to be updated, then you should either use a delegate (@protocol) or a notification (NSNotification) so that they listen for this change and update their views respectively.
You can have a global state variable in the app delegate and update its value on the auth response. After this you can use either the delegate or the notification for all the views to update their values.
Hi Amruta, https://github.com/madmax994570/VodafoneApp_swift is where I have hosted the project. Can you please have a look?
|
STACK_EXCHANGE
|
Prohibit reshuffling of data.frame before plotting
If I have the following plot, how can I prevent reshuffling of the data before plotting? I would like to maintain the same order of data in the plot as in the data frame I am trying to plot. Any help is much appreciated.
df <- data.frame(derma=c(1:14), prevalence=c(1:14))
df$derma = c("Spotted melanosis","Diffuse melanosis on palm","Spotted melanosis on trunk","Diffuse melanosis on trunk","Leuco melanosis","Whole body melanosis","Spotted keratosis on palm","Diffuse keratosis on palm","Spotted keratosis on sole","Diffuse keratosis on sole","Dorsal keratosis","Chronic bronchitis","Liver enlargement","Carcinoma")
df$prevalence = c(16.2,78.6,57.3,20.6,17,8.4,35.4,23.5,66,52.8,39,6,2.4,1)
g <- ggplot(data=df, aes(x=derma, y=prevalence)) + geom_bar(stat="identity") + coord_flip()
print(g)
Read about factor. See - http://stackoverflow.com/questions/5208679
As mentioned in the comments, you can do this by ordering the factor variable. Below is the sample code:
df <- data.frame(derma=c(1:14), prevalence=c(1:14))
df$derma = c("Spotted melanosis","Diffuse melanosis on palm","Spotted melanosis on trunk","Diffuse melanosis on trunk","Leuco melanosis","Whole body melanosis","Spotted keratosis on palm","Diffuse keratosis on palm","Spotted keratosis on sole","Diffuse keratosis on sole","Dorsal keratosis","Chronic bronchitis","Liver enlargement","Carcinoma")
df$prevalence = c(16.2,78.6,57.3,20.6,17,8.4,35.4,23.5,66,52.8,39,6,2.4,1)
df$derma <- factor(df$derma, levels = df$derma, ordered=TRUE)
g <- ggplot(data=df, aes(x=derma, y=prevalence)) + geom_bar(stat="identity") + coord_flip()
print(g)
You can play around with the labels and give it the order that you want in your plot. For example, you can reverse the elements in the plot by simply using the rev() function. Below is the code for same :
df <- data.frame(derma=c(1:14), prevalence=c(1:14))
df$derma = c("Spotted melanosis","Diffuse melanosis on palm","Spotted melanosis on trunk","Diffuse melanosis on trunk","Leuco melanosis","Whole body melanosis","Spotted keratosis on palm","Diffuse keratosis on palm","Spotted keratosis on sole","Diffuse keratosis on sole","Dorsal keratosis","Chronic bronchitis","Liver enlargement","Carcinoma")
df$prevalence = c(16.2,78.6,57.3,20.6,17,8.4,35.4,23.5,66,52.8,39,6,2.4,1)
df$derma <- factor(df$derma, levels = rev(df$derma), ordered=TRUE)
g <- ggplot(data=df, aes(x=derma, y=prevalence)) + geom_bar(stat="identity") + coord_flip()
print(g)
Hi @Kumar, many thanks for your response! But actually, ordering is exactly what I don't want. I just want the data to appear in the same order as in the data.frame. Also if I use the order function only the labels are changed but not the data in the bars.
So what happens at the moment is that the order of df$derma is correct, but somehow the order of the df$prevalence data is changed; I would like it to remain in the same order.
Sorry, I edited the code a bit. I used labels argument instead of levels. Now the data will not change order.
I don't know a solution without ordering the factor. When you make a factor variable ordered, you are not actually reordering the data, but just specifying how each level is ordered. Hence the output shows an ordered plot, in the order that the factor levels are.
That's exactly what I've been looking for! Thank you very much for your contribution :)
|
STACK_EXCHANGE
|
#!/bin/bash
export URL_MONGOS=https://repo.mongodb.org/apt/ubuntu/dists/xenial/mongodb-org/4.0/multiverse/binary-amd64/mongodb-org-mongos_4.0.11_amd64.deb
export URL_SERVER=https://repo.mongodb.org/apt/ubuntu/dists/xenial/mongodb-org/4.0/multiverse/binary-amd64/mongodb-org-server_4.0.11_amd64.deb
export URL_TOOLS=https://repo.mongodb.org/apt/ubuntu/dists/xenial/mongodb-org/4.0/multiverse/binary-amd64/mongodb-org-tools_4.0.11_amd64.deb
export URL_SHELL=https://repo.mongodb.org/apt/ubuntu/dists/xenial/mongodb-org/4.0/multiverse/binary-amd64/mongodb-org-shell_4.0.11_amd64.deb
export DOMAIN_NAME=tell-me-a-story.onthewifi.com
lunch_playbook () {
echo "_____________________________BEGINNING_${1}_MODIF______________________________"
echo "_______________________________________________________________________________"
if test "$1" = "web"
then
ansible-playbook playbook_${1}.yml -i SERVER
else
ansible-playbook playbook_${1}.yml -i SERVER --extra-vars "ansible_ssh_pass=$2 ansible_sudo_pass=$2"
fi
echo "_______________________________________________________________________________"
echo "______________________________ENDING_${1}_MODIF________________________________"
}
test_ssh () {
ssh -o "CheckHostIP=no" root@$DOMAIN_NAME hostname
}
if [ "$(test_ssh)" = "$DOMAIN_NAME" ];
then
cd deployment/
tmp=0
while test $tmp -eq 0 ; do
read -p "would you want to deploy the configuration for web ? [Yes/No]" answer
if test "$answer" = "YES" || test "$answer" = "yes" || test "$answer" = "Yes" || test "$answer" = "Y" || test "$answer" = "y" ;
then
lunch_playbook web
tmp=$(($tmp + 1))
elif test "$answer" = "NO" || test "$answer" = "no" || test "$answer" = "No" || test "$answer" = "N" || test "$answer" = "n" ;
then
echo "skipping playbook_web"
tmp=$(($tmp + 1))
else
echo "Please answer yes or no."
fi
done;
tmp=0
while test $tmp -eq 0 ; do
read -p "would you want to deploy the configuration for server ? [Yes/No]" answer
if test "$answer" = "YES" || test "$answer" = "yes" || test "$answer" = "Yes" || test "$answer" = "Y" || test "$answer" = "y" ;
then
if ! test -e roles/mongo/deb/mongodb-org-server_4.0.11_amd64.deb ;
then
echo "Some Packages are needed for ansible to install mongo"
pack=0
while test $pack -eq 0 ; do
read -p "These packages total over 1GB; do you agree to download them now? (Yes/No)" answer
if test "$answer" = "YES" || test "$answer" = "yes" || test "$answer" = "Yes" || test "$answer" = "Y" || test "$answer" = "y" ;
then
# download the mongodb .deb packages that the playbook expects under roles/mongo/deb/
wget "$URL_MONGOS" "$URL_SERVER" "$URL_TOOLS" "$URL_SHELL" -P roles/mongo/deb/
read -p "A password is needed to connect to server svg" passwrd
lunch_playbook server $passwrd
tmp=$(($tmp + 1))
pack=$(($pack + 1))
elif test "$answer" = "NO" || test "$answer" = "no" || test "$answer" = "No" || test "$answer" = "N" || test "$answer" = "n" ;
then
echo "come back when you're ready to download them"
tmp=$(($tmp + 1))
pack=$(($pack + 1))
else
echo "Please answer yes or no."
fi
done;
else
read -p "A password is needed to connect to server svg" passwrd
lunch_playbook server $passwrd
tmp=$(($tmp + 1))
fi
elif test "$answer" = "NO" || test "$answer" = "no" || test "$answer" = "No" || test "$answer" = "N" || test "$answer" = "n" ;
then
echo "skipping playbook_server"
tmp=$(($tmp + 1))
else
echo "Please answer yes or no."
fi
done;
cd ..
else
echo "SSH connection to the server is impossible. Please try to contact the admin."
fi
|
STACK_EDU
|
Why does Set.of() throw an IllegalArgumentException if the elements are duplicates?
In Java 9 new static factory methods were introduced on the Set interface, called of(), which accept multiple elements, or even an array of elements.
I wanted to turn a list into a set to remove any duplicate entries in the set, which can be done (prior to Java 9) using:
Set<String> set = new HashSet<>();
set.addAll(list);
But I thought it would be cool to use this new Java 9 static factory method doing:
Set.of(list.toArray())
where the list is a List of Strings, previously defined.
But alas, Java threw an IllegalArgumentException because the elements were duplicates, as is also stated in the Javadoc of the method. Why is this?
Edit: this question is not a duplicate of another question about a conceptually equivalent topic, the Map.of() method, but distinctly different. Not all static factory of() methods behave the same. In other words, when I am asking something about the Set.of() method I would not click on a question dealing with the Map.of() method.
I'm not sure why this is, but as an alternative may I suggest the following Java 8 statement that does the de-duplication for you: list.stream().collect(Collectors.toSet())
Or just use the ancient one-liner new HashSet<>(list);
You are looking for a different factory method: Set.copyOf(list)
The Set.of() factory methods produce immutable Sets for a given number of elements.
In the variants that support a fixed number of arguments (static <E> Set<E> of(), static <E> Set<E> of(E e1), static <E> Set<E> of(E e1,E e2), etc...) the requirement of not having duplicates is easier to understand - when you call the method Set.of(a,b,c), you are stating you wish to create an immutable Set of exactly 3 elements, so if the arguments contain duplicates, it makes sense to reject your input instead of producing a smaller Set.
While the Set<E> of(E... elements) variant is different (it allows creating a Set of an arbitrary number of elements), it follows the same logic as the other variants. If you pass n elements to that method, you are stating you wish to create an immutable Set of exactly n elements, so duplicates are not allowed.
You can still create a Set from a List (having potential duplicates) in a one-liner using:
Set<String> set = new HashSet<>(list);
which was already available before Java 9.
Set.of() is a short way of creating manually a small Set. In this case it would be a flagrant programming error if you gave it duplicate values, as you're supposed to write out the elements yourself. I.e. Set.of("foo", "bar", "baz", "foo"); is clearly an error on the programmer's part.
Your cool way is actually a really bad way. If you want to convert a List to a Set, you can do it with Set<Foo> foo = new HashSet<>(myList);, or any other way you wish (such as with streams and collecting toSet()). Advantages include not doing a useless toArray(), the choice of your own Set (you might want a LinkedHashSet to preserve order) etc. Disadvantages include having to type out a few more characters of code.
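A minimal sketch of that last point (the list contents here are invented for illustration): converting through a LinkedHashSet deduplicates while preserving first-insertion order, which Set.of cannot give you (it rejects duplicates) and HashSet does not promise.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DedupPreservingOrder {
    public static void main(String[] args) {
        List<String> list = List.of("b", "a", "b", "c");

        // LinkedHashSet drops duplicates and keeps first-insertion order.
        Set<String> ordered = new LinkedHashSet<>(list);

        System.out.println(ordered); // [b, a, c]
    }
}
```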
The original design idea behind the Set.of(), List.of() and Map.of() methods (and their numerous overloads) is explained here What is the point of overloaded Convenience Factory Methods for Collections in Java 9 and here, where it's mentioned that The focus is on small collections, which is something very common all around the internal API, so performance advantages can be had. Although currently the methods delegate to the varargs method offering no performance advantage, this can be easily changed (not sure what the hold-up is on that though).
@Kayaman while this is correct, for example List.of... might be extended to some other "bigger" collections... https://stackoverflow.com/questions/47461196/java-default-implementation-of-list-and-set#comment81912802_47461196
@Eugene sorry, I'm not sure what you mean?
@Kayaman my bad. what I meant is that your answer is focused on "small by hand-elements" collection as created by Set.of and List.of for example. The comment from Stuart that I linked to, says that this potentially be extended to bigger collections. I hope I make sense
@Eugene yeah, I mean I'm assuming that the design decision was made based on that, but that's just a Set.of() specialty. With List.of() and friends, there's no reason why you'd have to assume small, manually created collections. Although, I do believe that the original idea behind the of() methods was more about creating small collections (which is why all the overloads) efficiently.
@Kayaman nitpick: all of them don't yet, starting from 3 elements they still delegate to var-args
@Eugene yup I remember a few posts discussing the decisions behind it, the current implementations and possible future implementations.
The primary design goal of the List.of, Set.of, Map.of, and Map.ofEntries static factory methods is to enable programmers to create these collections by listing the elements explicitly in the source code. Naturally there is a bias toward a small number of elements or entries, because they're more common, but the relevant characteristic here is that the elements are listed out in the source code.
What should the behavior be if duplicate elements are provided to Set.of or duplicate keys provided to Map.of or Map.ofEntries? Assuming that the elements are listed explicitly in the source code, this is likely a programming error. Alternatives such as first-wins or last-wins seem likely to cover up errors silently, so we decided that making duplicates be an error was the best course of action. If the elements are explicitly listed, it would be nice if this were a compile-time error. However, duplicates aren't detected until runtime,* so throwing an exception at that time is the best we could do.
* In the future, if all the arguments are constant expressions or are constant-foldable, the Set or Map creation could also be evaluated at compile time and also constant-folded. This might enable duplicates to be detected at compile time.
What about the use case where you have a collection of elements and you want to deduplicate them? That's a different use case, and it's not well handled by Set.of and Map.ofEntries. You have to create an intermediate array first, which is quite cumbersome:
Set<String> set = Set.of(list.toArray());
This doesn't compile, because list.toArray() returns an Object[]. This will produce a Set<Object> which can't be assigned to Set<String>. You want toArray to give you a String[] instead:
Set<String> set = Set.of(list.toArray(new String[0]));
This typechecks, but it still throws an exception for duplicates! Another alternative was suggested:
Set<String> set = new HashSet<>(list);
This works, but you get back a HashSet, which is mutable, and which takes up a lot more space than the set returned from Set.of. You could deduplicate the elements through a HashSet, get an array from that, and then pass it to Set.of. That would work, but bleah.
Fortunately, this is fixed in Java 10. You can now write:
Set<String> set = Set.copyOf(list);
This creates an unmodifiable set from the elements of the source collection, and duplicates do not throw an exception. Instead, an arbitrary one of the duplicates is used. There are similar methods List.copyOf and Map.copyOf. As a bonus, these methods skip creating a copy if the source collection is already an unmodifiable collection of the right type.
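A compact sketch contrasting the two behaviors described above (Java 10+; the list contents are invented for illustration):

```java
import java.util.List;
import java.util.Set;

public class SetOfVersusCopyOf {
    public static void main(String[] args) {
        List<String> list = List.of("x", "y", "x");

        // Set.of rejects duplicate elements at runtime...
        try {
            Set.of(list.toArray(new String[0]));
        } catch (IllegalArgumentException e) {
            System.out.println("Set.of rejected the duplicates");
        }

        // ...while Set.copyOf deduplicates silently.
        Set<String> copy = Set.copyOf(list);
        System.out.println(copy.size()); // 2
    }
}
```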
Thank you for this updated insight! Choosing an arbitrary duplicate when de-duplicating is exactly what I would expect. Hesitating between first-wins or last-wins seems out of the question for me.
Any vararg/array overload on the horizon for Set.copyOf?
You are expecting this to be "last-wins", just like HashSet I guess, but this was a deliberate decision (as Stuart Marks - the creator of these - explains). He even has an example like this:
Map.ofEntries(
    Map.entry("!", "Eclamation"),
    .... // lots of other entries
    Map.entry("|", "VERTICAL_BAR")
);
The choice is that since this could be error-prone, they should prohibit it.
Also notice that Set.of() returns an immutable Set, so you could wrap your Set into:
Collections.unmodifiableSet(new HashSet<>(list))
Do you have a reference for The choice is that since this could be error-prone, they should prohibit it.?
Thanks for that link, I'm going to watch it :)
Correct answer:
Set.copyOf(list);
This creates a set containing the unique values from the source collection WITHOUT throwing an exception for duplicates.
Not a single other answer addresses the original question; there are only many words explaining why it is right for Set.of to throw an exception (which is arguable).
Also keep in mind that a set created that way is immutable, which is what you should use 99% of the time. That's why I always use Guava, but recently SonarQube told me that I should switch to Java 9 immutables and DIDN'T mention that the behaviour is actually different: ImmutableSet.of silently filters duplicates, while Set.of throws an exception on duplicates.
I've got a similar problem when I have 3 variables and want to build an immutable set containing only unique values. What Guava does out of the box:
MyType one = ...;
MyType two = ...;
MyType three = ...;
ImmutableSet.of(one, two, three);
with Java 9 you need more verbose code:
MyType one = ...;
MyType two = ...;
MyType three = ...;
Set.copyOf(List.of(one, two, three));
And Java 9 doesn't even have a method for that.
Well, the original question was why it throws an exception. You are explaining how to make a set of unique values, but that is not an answer to the question.
Set.of(E... elements)
The element type of the resulting set will be the component type of the array, and the size of the set will be equal to the length of the array.
Throws:
IllegalArgumentException - if there are any duplicate elements
This makes it clear that no duplicate handling is intended: the size of the Set will simply be the length of the array.
The method is just there to let you get a populated Set in one line:
Set.of("A","B","C");
But you have to watch out for duplicates yourself. (This simply iterates over the varargs and adds them to the new Set.)
|
STACK_EXCHANGE
|
Manage Android Fragmentation Using an Effective Test Automation Tool
Android application developers have to take care of app compatibility with multiple devices and operating system versions. Device manufacturers too have to test a new device in its entirety against the version of Android that they want to support. The current fragmentation of devices and operating system versions adds significantly to the cost and time-to-market for app developers and device manufacturers.
While Android provides over 15,000 pre-defined test cases to test a new app being written or the operating system being ported onto a new device, many of these test cases need modifications during development. Moreover, a developer usually needs to add a significant number of new test cases to this list. Integrated frameworks that can consolidate all the pre-defined and custom-written test cases into an automation suite can be of immense benefit to app developers and device manufacturers.
A robust test automation system can enable software developers, testers and project managers to manage the increasingly complex testing workflow. While evaluating a test automation system for a mobile platform, the following aspects need to be considered:
- Ability to add existing or newly defined custom test cases to the test suite in addition to the ones provided by the platform.
- Remote test execution. To be able to truly make an impact to development timelines, the framework should be equipped to work over a network (like corporate wifi) and test multiple devices in a test lab. Considering the proliferation of devices, this would enable development and testing teams to execute test cases across multiple devices from a single host.
- For large projects, the tool should fit well with existing team structures, allowing users to create user profiles with different access rights, such as administrator, tester, test lead, developer and others. Test runners should be able to assign test case clusters based on user roles and monitor them remotely, enabling managers and test leads of a large team to centrally assign and monitor testing activity.
- Ability to assert the final results against control data and entries made in a central database, allowing deviations to be recorded and reported.
- Ability to group test cases by app requirements, enabling monitoring of app progress and stability at a requirement level over a period of time.
- Ability to support reporting for quality controller, test engineer, test manager, project manager and management. The tool should be able to support standard report formats such as HTML, PDF, CSV, PNG and XLS.
- The framework should be modular, enabling clean separation between platform-dependent modules such as actual device debugging (e.g. Android Debug Bridge) and independent modules such as reporting and GUI.
- Lastly, the framework should have the ability to integrate with popular bug management tools adding value to the entire testing process.
Persistent Systems has developed a test automation framework for Android that enables developers, testers and project managers to manage their test suites and test plans. To learn more, visit PTAF-framework
|
OPCFW_CODE
|
Includeprocessor to transform a file to asciidoc
I would like to implement an include processor to transform a referenced xml file into an asciidoc table representation. Is that possible?
The xml file is the saved representation of a decision table. So my idea is to create an include processor that transforms the xml file content into asciidoc code for a table and then let asciidoc do the rest.
Yes, that's possible. You can see a similar example here: https://github.com/asciidoctor/asciidoctor-extensions-lab/blob/master/lib/textql-block.rb. You'd take advantage of the parse_content helper (there's a similar method mapped in AsciidoctorJ).
If you want to write an extension in Java you additionally might want to check out this example from the test suite:
https://github.com/asciidoctor/asciidoctorj/blob/asciidoctorj-1.6.x/asciidoctorj-core/src/test/groovy/org/asciidoctor/extension/GithubContributorsBlockMacro.groovy
It creates a table from information obtained from a rest endpoint.
It’s in groovy but it should be pretty simple to translate it to Java.
What is the difference between the two ways of creating a table shown in the examples from @mojavelinux and @robertpanzer? I would like to know whether it is better to create a table directly in HTML, with the create methods, or in asciidoc, especially to stay target-format neutral.
The Ruby example from @mojavelinux doesn't create HTML, but Asciidoc as an intermediate step and uses parseContent to create the Asciidoctor AST objects from it, which can in turn be rendered to anything.
The Java example directly creates the AST objects, which is surely a bit more work to do.
You can also use parseContent() in Java, if you want to go for Java: https://github.com/asciidoctor/asciidoctorj/blob/d187987ea98b49ed3d30d60f70a7ff8bfe420f70/asciidoctorj-api/src/main/java/org/asciidoctor/extension/Processor.java#L114
That means in your block you would construct Asciidoc source content as you would write it manually, and then let Asciidoctor do its magic.
As Robert points out, both ways are possible. There's no right or wrong way, just what is right for you. Sometimes it's easier to create the AST nodes, and sometimes it's easier to create the AsciiDoc and (sub)parse it. That's why we offer both strategies.
Thanks for your help. I thought it is better to generate asciidoc in order to be independent of the chosen target format (PDF, HTML, epub and so on). So I'll try the way described by @robertpanzer.
To be clear, we were both suggesting creating an output-agnostic representation. In fact, that's the best practice. It's just a matter of whether you create AsciiDoc then parse it or create the AST nodes directly (skipping the parsing step).
Ok, thanks for the clarification. Just for the next extension: it should work the same way if I generate plantuml asciidoc code, right?
Ok, I finished my first extension. Your hints and thoughts helped me speed up my decision to generate asciidoc markup directly and to do it in Java.
It is a great and simple API. Afterwards I can say it is very easy to write asciidoc extensions.
|
GITHUB_ARCHIVE
|
Ok, so I got an e-mail from Riksbankens Jubileumsfond and the message concerning both of my projects was a "you didn't qualify for the second stage in the application process" (in which about half of the applications succeed and actually get research grants).
Out of 899 applications, 107 - less than 12% - qualified for the second round. When everything has been said and done, around 6% of the applications will get research grants, or, around one in every seventeen applications will be granted. I have unfortunately not kept track of how much time we have put into these applications, but I guess it for sure has to have been more than 40 hours each, and perhaps upwards to 80 hours (or more?) per application. It's very hard to estimate; how do you treat early and less goal-oriented discussions, or, the fact that it might be possible to re-use smaller or larger parts of an application later?
The actual message I got from RJ can be seen below. Do note that it is a standard message into which they have cut-and-pasted the name of the project. There is unfortunately no other feedback, even though I as an applicant naturally have many questions; did they like the application (but there were other applications that were even better)? Are there some specific problem with the application? How can it be improved if I decide to hand in the same (or a similar) application elsewhere or to RJ next year?
As a university teacher I read lots of texts that are written by students. Some are written with great care and are a joy to read. Others aren't. It's not really a secret that some texts that students write and that pass my eyes by are hastily written, fail in their purpose and contain lots of errors of various kinds. I should despite this preferably always leave some kind of comment/feedback to the student (one or a few paragraphs - or at least a sentence) so he/she can understand why his/her effort got a specific grade and/or how to improve the grade, or the writing next time around. I read at least 100 shorter texts each year in our program-integrating course and read more (and longer) texts in my course on social media last term.
I don't know who specifically read my research grant application, but I doubt they earn more than twice, or at the most three times my salary. It is thus dissatisfying that the balance of power/time between an applicant (who spends, say, 40-80 hours (or, 20-200 hours) of work on an application) and the evaluator (10-20-30 minutes to read each application?) is so uneven. I should as a teacher leave at least a short comment for my students when they write something - no matter how hastily written and sloppy the text is. But no-one leaves a comment for me - no matter how carefully crafted my application is... Actually, if I hand in a half-baked paper to a scientific conference or a journal, I will get a written motivation and perhaps also some advice or suggestions if it is rejected, but I get nothing when I hand in an application for a research grant. Isn't that somehow strange and perhaps also slightly disturbing? I think so.
I'm not the first person to comment on this - it's a known problem. Researchers spend inordinate amounts of time writing applications (of which the majority is rejected). Wouldn't it be better if they spent this time (perhaps several or many weeks each year) actually doing the research instead of writing about the research they want to do (in the future, if they get the money)? What should the proportion be between these two activities? I'm not updated on what solutions have been proposed to this problem (if any). Still, the current system can definitely be improved. The (non-)message I received (below) can be interpreted in a variety of ways; "Better luck next year" or "Don't bother again". Which one is it and how am I to know?
By the way, I still think the applications (and the proposed research projects specified within them) were excellent. We will in both cases pursue these ideas (e.g. ask for money from other sources to conduct the research). Since they both to some extent take harsh economic times as their starting point, I think they both (unfortunately) have time on their side.
|
OPCFW_CODE
|
100% Chance of Social
I’ve always been fascinated by weather, and one million websites ago I thought about becoming a meteorologist. That’s probably because I grew up in south Louisiana where we have a “season” for destructive storms. I remember riding out Hurricane Andrew on a house boat with my family. The reason? Well, if the water rose, we would too. We survived the storm and the dream of replacing Jim Cantore waned.
Fast forward to this month: one of my favorite clients, Jimmy Eat World, is releasing their new record Damage, with an umbrella gracing the Morning Breath-designed album artwork. No doubt a nod to the aspirations of my youth. Anyway, they needed a way to premiere “Damage” in an accessibly fun way that played on their social strengths. What’s important here is not to satisfy their current fans. The music should do that. Instead we’re trying to trick non-fans into participating in an experience that just so happens to be playing Jimmy Eat World music.
I don’t usually take artwork so literally when coming up with ideas, but fuck it, let’s make it rain. Let’s build an experience that translates social activity into a storm and brings out everyone’s inner meteorologist.
Getting Rain Activity #
The social activity we’ll be polling is hashtags from Twitter. We can do that by creating a simple twitter.rb worker that uses the excellent TweetStream gem to listen to Twitter’s Streaming API; with Pusher, we can send these updates to the client.
We begin by configuring Pusher and TweetStream, supplying the appropriate API credentials:
# Configure Pusher
Pusher.app_id = ""
Pusher.key    = ""
Pusher.secret = ""

# Configure TweetStream
TweetStream.configure do |config|
  config.consumer_key       = ""
  config.consumer_secret    = ""
  config.oauth_token        = ""
  config.oauth_token_secret = ""
  config.auth_method        = :oauth
end
Then we create our TweetStream tracking event, which gets notified every time someone tweets the hashtag #damage. Each status is stored in a local database and pushed to the client for rain drop generation.
# Listen for incoming tweets
TweetStream::Client.new.track("#damage") do |status, client|
  tweet = Tweet.from_twitter(status)
  Pusher['damage'].trigger('new_tweet', tweet.to_json(methods: [:track]))
end
The track.rb models also include some basic associations that let us build a simple incrementer to count the number of tweets accumulated for each track. We can compare this count to the track’s threshold to figure out whether it has been unlocked.
class Track
  include DataMapper::Resource

  property :tweets_count, Integer, default: 0
  property :threshold,    Integer, default: 1000

  has n, :tweets
end

class Tweet
  include DataMapper::Resource

  belongs_to :track

  after :create, :increment_counter_cache

  def increment_counter_cache
    if self.track.reload.tweets_count >= self.track.reload.threshold
      self.track.update(state: "unlocked")
    else
      self.track.update(tweets_count: self.track.reload.tweets_count + 1)
    end
  end
end
In order to test TweetStream on our local machine, I simply create a Rakefile with a worker task:
require 'rubygems'
require 'bundler/setup'

namespace :jobs do
  desc "Heroku worker"
  task :work do
    exec('ruby ./twitter.rb run')
  end
end
$ rake jobs:work
Our host, Heroku, requires a Procfile declaring the worker process:
worker: env TERM_CHILD=1 QUEUES=* rake jobs:work
and we scale up a single worker process:
$ heroku ps:scale worker=1
Creating Rain Drops #
Let’s start by setting up our stage and the main layer our graphics will live on.
WIDTH  = $(window).width()
HEIGHT = $(window).height()

stage = new Kinetic.Stage
  container: 'rain'

layer = new Kinetic.Layer()
stage.add layer
Rather than learn the ins and outs of Bezier curves, I provided a simple PNG sprite image of the rain drop so I could get the exact shape I wanted.
dropImg = new Image()
dropImg.src = "/images/drop.png"
The drop itself consists of a Kinetic.JS group of two elements, drop and ripple, which are generated just above the top of the page. The drop is created from our preloaded drop image above, and the ripple is created from a new Kinetic.Circle which is scaled down, stroked, and hidden initially. In addition to the drop and ripple, you can customize the elements using the provided Twitter status data.
makeDrop = (status) ->
  # Create group to hold all elements
  group = new Kinetic.Group
    x: Math.random() * WIDTH + 1
    y: 0

  # Create the drop from image
  drop = new Kinetic.Image
    x      : -7
    y      : -25
    image  : dropImg
    width  : 14
    height : 25
  group.add drop

  # Create ripple base
  ripple = new Kinetic.Circle
    y                  : -10
    radius             : 0
    scaleX             : 5
    stroke             : 'white'
    strokeScaleEnabled : false
    strokeWidth        : 4
    visible            : false
  group.add ripple
The animation itself consists of two tweens:
- Fall - This animates the drop falling from the top of the page to the bottom. Once the animation completes, we hide the drop and show the ripple.
- Splash - Once the ripple is shown, we grow its radius while fading it out, with ease-out-in easing, to simulate a delicate water ripple.
Let’s take a look at that series of animations.
# Animate the fall and splash with a series of callbacks
fall = new Kinetic.Tween
  node     : group
  duration : 3
  y        : HEIGHT - 100
  onFinish : ->
    drop.destroy()
    ripple.show()
    splash = new Kinetic.Tween
      node     : ripple
      duration : 3
      radius   : 20
      opacity  : 0
      easing   : Kinetic.Easing.EaseOutIn
      onFinish : ->
        ripple.destroy()
    splash.play()

fall.play()
We then listen for those Pusher updates from the server and create a new rain drop each time a new status is tweeted.
pusher  = new Pusher ''
channel = pusher.subscribe 'damage'

channel.bind 'new_tweet', (status) ->
  makeRain(status)
  flash() if status.nickname is "jimmyeatworld"
But what if Jimmy Eat World tweets? Make some lightning using Howler.JS of course.
lightning = new Howl(urls: ["/lightning.mp3"])

flash = ->
  lightning.play()
  $("#flash").fadeIn 0, () ->
    $("#flash").fadeOut(1000)
Rain Gauge #
In addition to making sure it played audio and scaled responsively, I wanted to do something special with the mobile interface, so I decided to use the same real-time unlocking data to create a mobile udometer to complement the desktop experience. The idea being that you could watch the drops falling on your desktop screen while gauging the unlock levels on your mobile device.
It was a fun experiment in dual interactive design and something I will no doubt revisit in the future.
Post Storm #
Thanks for reading. I hope this sheds a bit of sunlight on what it takes to pull off a campaign like this. The power truly lies in all of the excellent libraries I used, so special thanks goes out to those authors. Follow me on Twitter for the latest and please let me know if you have any questions or comments.
Finally, don’t forget to pick up Jimmy Eat World’s eighth studio album, Damage, in stores everywhere tomorrow!
|
OPCFW_CODE
|
Before merging with HP, Compaq manufactured a full range of laptop computers. Many HP and Compaq PCs have Intel High Definition Audio, which uses UAA and Realtek ALC. She doesn't have a password hint or a password reset disc.
Not that they should have needed telling. Broadcom are in this position precisely because they were not “trying to work more closely with the kernel community”. Didn’t we go through this back in kernel 2.2 with the Intel Ethernet drivers? Seems to me that we had some level of duplication on the 100Mb driver that was eventually resolved.
Mobile Phone Bluetooth Modules Market Overview and Competitive Analysis – Murata, Qualcomm, Intel
Selection of network cards is often the single most important performance factor in your setup. Inexpensive NICs can saturate your CPU with interrupt handling, causing missed packets and your CPU to be the bottleneck. A quality NIC can substantially increase system throughput. When using pfSense software to protect your wireless network or segment multiple LAN segments, throughput between interfaces becomes more important than throughput to the WAN interface.
- I had spent hours trying to fiddle with ndiswrapper and windows drivers to try to get it work and it was this simple!
- Would like to inform the author that apt-get and synaptic are available for suse and work extremely well.
- So even though we have a bouquet of configuration options for the devicetree, presently it looks like we have to patch the kernel.
- Broadcom were told, a year ago, that it was stupid and wrong to introduce a new driver, and that they should instead focus on improving b43.
Finding the plain text password for a stored Wi-Fi network is easiest on stock Android 10 and higher. The trouble comes when you want to actually see the password for one of the networks you’ve connected to before. HomeKit doesn’t know how to do anything other than issue a command or receive a status. When you buy a smart device and register it on your iPhone, you have to input the commands that the device understands into the HomeKit app interface.
Latest Updated Drivers
Press the “F10” key; this should display a dialog asking if you would like to save your settings and exit the BIOS. Windows should now automatically detect your driver, and the ethernet driver should be working. If your TCP/IP settings are set to automatically detect, your network connection should pick up the appropriate network settings. Keep in mind that you might be experiencing this problem if your machine’s onboard internet controller is not compatible with Windows 10. If you burned through all the methods above without a result, your only hope is to try a dedicated NIC and see whether it’s capable of handling your network connection. Once the driver has been successfully installed, reboot your computer and see whether your internet connection is functioning properly at the next restart.
A familiar ARM chip
Global key manufacturers of TWS Bluetooth headset chips include Qualcomm, MediaTek, PixArt, Apple, and Bestechnic, etc. In terms of revenue, the global top four players held a share of over % in 2021. Bluetooth issues are not limited to the iPhone; they can also affect the iPad, Apple Keyboard, Mouse, or other Bluetooth accessories such as speakers, headphones, and printers. This short method fixes disconnecting problems: from your keyboard, press Shift + Option and click on the Bluetooth icon at the top of the Mac’s menu bar. The shortcut keys are the same for macOS Monterey [try this way, because the “Reset the Bluetooth module” option is no longer available in the latest macOS], Big Sur, macOS High Sierra, El Capitan, or earlier.
Neither the xl driver nor any other FreeBSD driver supports this modem. The Fibre Channel controller chipsets are supported across a broad variety of speeds and systems. The Apple Fibre Channel HBA is in fact the FC949ES card. The ahci driver also supports AHCI devices that act as PCI bridges for nvme using Intel Rapid Storage Technology (RST). To use the nvme device, either one must set the SATA mode in the BIOS to AHCI, or one must accept the performance hit with RST enabled due to interrupt sharing.
|
OPCFW_CODE
|
I have a map on Portal (10.8.1) with four editable layers. I need to add a few layers for reference as non-editable. Once I do that, the map can no longer be taken offline.
The "Add Offline Area" option on the map in Field Maps disappears. I tried adding the non-editable layers as MapServer and as FeatureServer without editing. In both cases the "Add Offline Area" option goes away.
Is this by design? Is there a way to have non-editable layers in a map and still be able to "Add Offline Area"?
@lxd, have you enabled sync on your non-editable layers? Sync needs to be enabled to take features offline, whether they are editable or not.
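For hosted feature layers, enabling sync can also be scripted. Here is a hedged sketch using the ArcGIS API for Python; the portal URL, credentials, and item ID are placeholders, and `sync_update_payload` is a helper name of my own, not something from this thread:

```python
def sync_update_payload(current_capabilities):
    """Build an update_definition payload that adds Sync to a layer's capabilities string."""
    caps = [c.strip() for c in current_capabilities.split(",") if c.strip()]
    if "Sync" not in caps:
        caps.append("Sync")
    return {"capabilities": ",".join(caps)}

def enable_sync(item_id):
    # Imports kept inside the function so the pure helper above works standalone.
    from arcgis.gis import GIS
    from arcgis.features import FeatureLayerCollection

    gis = GIS("https://myportal.example.com/portal", "user", "password")  # placeholders
    item = gis.content.get(item_id)
    flc = FeatureLayerCollection.fromitem(item)
    flc.manager.update_definition(sync_update_payload(flc.properties.capabilities))

# Payload produced for a query-only layer:
print(sync_update_payload("Query"))  # {'capabilities': 'Query,Sync'}
```

Note this only works for hosted layers you own; for referenced (SDE-backed) services, sync has to be enabled in the service settings when publishing.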
If your non-editable layers aren't hosted and in a registered datastore, then stick them in an SDE (Relational Geodatabase) and enable archiving.
Databases (including enterprise geodatabases) are supported for connected map use. An enterprise geodatabase is required for working offline. File geodatabases are not supported. The versions of ArcGIS Server and geodatabases that are required vary depending on the functionality required. See the following table for details:
|Connected and offline map use||Connected map use||Offline map use|
Editable data—Nonversioned, archived
¹Global IDs are required for offline use.
²These require versioned data for editing. When used in Field Maps, all feature services that participate in a geometric network are treated as simple feature services, and the restrictions of the geometric network are ignored.
No, I still don't know how to take the map offline with my non-editable layers. The layers are in SDE (enterprise geodatabases), and sync is enabled when publishing. The layers are non-versioned and have archiving enabled. When I publish the same layers with an editable database connection, the map has the "offline" option; with a viewer database connection, with sync enabled, the offline option disappears.
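One way to narrow this down is to check what the published service actually advertises over REST, since Field Maps hides "Add Offline Area" when any layer in the map lacks sync. A small diagnostic sketch; the service URL and the `offline_ready` helper are mine, not from this thread:

```python
import json
from urllib.request import urlopen

def offline_ready(service_json):
    """True when a FeatureServer JSON description advertises the Sync capability."""
    caps = service_json.get("capabilities", "")
    return service_json.get("syncEnabled", False) and "Sync" in caps.split(",")

def check(url):
    # e.g. url = "https://host/arcgis/rest/services/MyMap/FeatureServer" (placeholder)
    with urlopen(url + "?f=json") as resp:
        return offline_ready(json.load(resp))

# A service that only supports Query cannot be taken offline:
print(offline_ready({"capabilities": "Query", "syncEnabled": False}))       # False
print(offline_ready({"capabilities": "Query,Sync", "syncEnabled": True}))   # True
```

If the viewer-connection service reports `"syncEnabled": false` here, the problem is in how that service was published, not in the map.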
Facing the same issue when downloading feature service layers offline (non-editable, non-versioned, archived SDE data) in Field Maps. The error log shows Error: 400, which seems like a replica issue.
It works fine if the map contains only versioned, editable layers. Does anyone have a solution for my issue in ArcGIS 10.9.1?
What if we are trying to include data that we aren't the owners of (another department in the city manages it)? I am wondering what happens if you have data you don't want to edit or don't own, and therefore can't adjust the sync settings for in your Field Maps app. Can you not take things offline in this case? We need certain data for reference, but not for updating.
Thanks in advance,
We are investigating the same issue with 10.9.1. We have editable layers and non-editable reference layers (like parcels). Since I do not own the parcels dataset, I cannot change the schema. Ideally both would be in the map for use in ArcGIS Field Maps.
|
OPCFW_CODE
|
Wild Capture's CEO Will Driscoll talked about the new Digital Human Platform, shared how its toolkits can assist 3D artists, and revealed how the company plans to monetize it.
My name is Will Driscoll. I am the Chief Executive Officer of Wild Capture. I bring extensive experience as a digital human technologist delivering solutions for art-directable live-action characters for visual effects, virtual production, spatial interactive media, and games. I co-founded Wild Capture to build the bridge between volumetric video and digital humans for robust media and tech production needs. In this role, I am chiefly responsible for forecasting and insight into practical trends in bringing photorealistic people to spatial technologies, and for offering customers the most modern and efficient volumetric capture solutions available to improve productivity.
With a background as a content creator, I’ve always been interested in adopting new technologies to enhance the creative process. I was an early adopter of stereoscopy, facial motion capture, LiDAR, photogrammetry, virtual reality, and now volumetric video and have enjoyed making technical contributions to each innovation.
Wild Capture is advancing volumetric video technology that provides production and post-production studios a gateway to a live human character pipeline. We ease the complexities of the highly technical spatial media world to provide artists/creators with resources to use this technology across many industries.
Digital Human Platform
Wild Capture's Digital Human Platform offers a complete production pipeline that translates the human essence into volumetric video with the highest quality performance captures available to break the uncanny valley.
The Digital Human Platform is a bridge for artists, creators, or anyone wanting to use digital humans in spatial media. It is designed to save creators thousands of manual hours so that high-quality volumetric content can be used as optimum art at scale, and it gives artists a way of outputting to their preferred render engines for final render.
Assisting 3D Artists
The Digital Human Platform is designed for artists, creators, or anyone wanting to use digital humans in spatial media. It is a collection of tools that automate processes and build bridges between different render engines, applications, and streaming media platforms.
The platform relies on the SideFX PDG procedural architecture and the Houdini software engine to open a gateway to common 3D editors as we continue to develop plugins to feed our volumetric capture data natively. Houdini provides a non-destructive workflow and proceduralism which allows us to provide agnostic support across proprietary and custom capture systems.
Cohort Crowd and Digital Fashion
Two highlights of our quickly expanding Digital Human Platform are the Cohort crowd and Digital Fashion tools. The toolkits were developed over the course of several years. Cohort is the new time-saving tool for volumetric assets used to create crowd assets for XR, virtual production, and web activations.
Based on varied pre-recorded Wild Capture volumetric performances, Cohort has various libraries of smart assets to choose from to produce lifelike crowd behaviors. It is deployable at scale across virtual production and gaming engines such as Unreal and Unity, software applications, and for VR and web-based needs.
The Digital Fashion application allows creators to apply CG fabric in 3D virtual spaces and solves the necessary volumetric character interaction to create lifelike realism.
Wild Capture offers XR technologies and turnkey production services to create digital humans, volumetric crowds, and CG fashion for media production, metaverse, software, and web-based deliveries. Currently, Wild Capture is commercially servicing the Digital Human Platform using a "Product as Service" model.
As a partner in the XR Foundation's "Open Metaverse" initiative and co-developer of the Universal Volumetric (UVol) streaming media player, our guiding principle is to advance and democratize open-source technology in virtual world-building so that anyone interested can create digital humans.
We are working with other development, user, technical and educational communities to make sure our technologies are relevant and offer the spatial media community at large a high-quality product.
As we anticipate tremendous growth and interest in the volumetric video for the creation of digital humans, our team is gearing up for a training and education initiative to show users how Wild Capture products can be used to comport, customize, and deliver volumetric assets.
There are already technologies that exist for the mass adoption of digital humans as holograms, in the metaverse, and for virtual worlds of all kinds. Wild Capture middleware technology and services exist to bridge these gaps for the creators, artists, and the platforms they use.
Now that we have introduced the Digital Human Platform and our leadership team, our goal is to continue developing and offering market-ready tools that can be purchased directly from Wild Capture. Among the first will be a volumetric dailies app that will allow clients to upload volumetric data to receive shareable streaming URLs to their volumetric content.
|
OPCFW_CODE
|
package utils
import (
"math"
"net/http"
"net/http/httptest"
"github.com/grafana/loki/pkg/logproto"
"github.com/grafana/loki/pkg/util"
)
// RemoteWriteRequest wraps a received logs remote write request.
type RemoteWriteRequest struct {
TenantID string
Request logproto.PushRequest
}
// NewRemoteWriteServer creates and starts a new httptest.Server that can handle remote write requests.
// When a request is handled, the received entries are written to receivedChan and the given status code is returned.
func NewRemoteWriteServer(receivedChan chan RemoteWriteRequest, status int) *httptest.Server {
server := httptest.NewServer(createServerHandler(receivedChan, status))
return server
}
func createServerHandler(receivedReqsChan chan RemoteWriteRequest, receivedOKStatus int) http.HandlerFunc {
return func(rw http.ResponseWriter, req *http.Request) {
// Parse the request
var pushReq logproto.PushRequest
if err := util.ParseProtoReader(req.Context(), req.Body, int(req.ContentLength), math.MaxInt32, &pushReq, util.RawSnappy); err != nil {
rw.WriteHeader(http.StatusInternalServerError)
return
}
receivedReqsChan <- RemoteWriteRequest{
TenantID: req.Header.Get("X-Scope-OrgID"),
Request: pushReq,
}
rw.WriteHeader(receivedOKStatus)
}
}
|
STACK_EDU
|
refactor: cleanup deferred config
Moving towards unified configuration via config providers.
Overview
Sets defaults, provides config providers that override configuration.
Changes
Describe your changes and implementation choices. More details make PRs easier to review.
Remove all deferred config calls (and removing dependency)
Specifies defaults for dev and test so that development isn't broken.
Testing
Same testing.
Coverage remained the same at ?% when pulling 9ab1b1b8015406c0ccc67ff8453f09fd8dc63b9f on 579-config-cleanup into b753a33dd8fc84d89fb0c8a7985b9120925ccc2e on master.
It seems to me that coveralls ignore_files doesn't really work...
> It seems to me that coveralls ignore_files doesn't really work...
aaah, yes! I think the "new" coveralls approach uses paths relative to the project dir, not the app/something dir. I see apps/omg/test/support is reporting coverage
Let's fix it in a separate PR, I can tackle that as a follow up to the coveralls change
Lemme try testing these config providers-it would be better anyhow!
Also @pdobacz, please checkout the branch and let me know if any defaults break your workflow! Ktnx
OK, I've run a couple quick checks:
mix test :ok:
mix test --only integration locally :ok:
mix run ... perf test (no system envs) - just the Alarm install is missing as in the perf fix PR
mix xomg.watcher.start... --config ~/config_samrong.exs - the postgres urls have changed. I think the old ones were ok (never use omisego without suffices dev/test, to steer clear away from clashes). I'd propose:
"postgres://omisego_dev:omisego_dev@localhost/omisego_dev" the default (used in dev)
"postgres://omisego_dev:omisego_dev@localhost/omisego_test" the test one.
An alternative would be to not have a top level (config.exs) default, but define the dev/dev/dev entry only in dev.exs, and thereby force the prod env to define properly
Also I've noticed there can be quite a few of 2019-08-08 09:50:26.100 [error] module=telemetry function=execute/3 ⋅Handler "measure-db" has failed and has been detached. failures (from one to four per start of the apps, in random instants after the bootup). As they show in red and stand out they might be seen as a serious error. Maybe wind down the level to :warn?
I'm pushing a quick fix to the above issues, feel free to discard/squash/mold as you see fit
mix run perf test should probably go away now with the latest merge into master.
Telemetry logs come from telemetry, not me.
there's a bit of commit trash because coveralls died in the middle of everything :)
@InoMurko could you please extend the :pr: description with WHY deferred config was removed.
It doesn't play well with releases?
@pnowosie since distillery supports runtime config providers, there’s no need to keep them, but they also clash with config providers.
@InoMurko :+1: What's preventing this from being merged? Looks like it's being maintained for a while
currently, this is blocked by performance measurements against trunk...
@InoMurko can we go ahead and move this forward? If needs be, we can come back and fix the performance measurements. I think getting this refactor in is going to help make the application better to work with and implementing the performance measurement fixes we need.
We could, but it needs a synchronous merge with the PR on the devops side #114
|
GITHUB_ARCHIVE
|
M: How do you hire developers? - FpUser
I am still an active developer myself, but being a company owner I also hire temp developers (mostly consultants) here and there. Some of those temps worked with me for years though ;).

I normally put desired fields of expertise, but it is never "Language XXX developer", "ThisNewShinyYetAnother Web Framework developer", etc. Instead I ask what kind of projects they did and how, and how they would approach the problem I want them to solve for me. Sometimes a little test, like: go home, spend a day creating this little project in any language of your liking, and get paid for it.

So I am completely dumbfounded when looking at modern job listings: Language X: 5 years, Framework Y: 3 years, this: 2 years, that: 7 years. What, is there a factory that just bakes those? And how about the ability to think and be smart and creative and able to solve problems disregarding language/framework? Where is the checkbox for that?

I do not want a React/Vue/Svelte/Whatever developer to build this web front end for me. I want a person who understands how web development works in general, how to account for business and scalability requirements in general, how not to get locked in, etc. I've never seen these mentioned in requirements. So, as I've already said, I am puzzled.
R: caryd
The requirements are meant to discourage people who won't cut it. Everyone
else has to make it through the interviews.
R: davismwfl
You are thinking like a founder: you need smart people to solve problems, not
a React engineer or a Vue engineer, etc.
Most job postings are usually created by recruiters that get details from the
teams on what skills are needed. So the ads read like an ingredient list of
sorts.
And like you said, the job ad needs to state what the responsibilities are and
should detail the current stack so people know what they are applying for, but
that shouldn't be used to exclude quality candidates. I have hired people with
no experience in a specific language but by talking to them they were highly
competent engineers and I wasn't worried about them picking up a new language.
One of my major disagreements with the current hiring methodologies is that it
rewards people who study for a test of sorts but may not really have
experience solving real world problems. Or other "tests" that focus on whether
someone can write some useless piece of code just to satisfy the interviewer
that the person is competent. But the reality is it never accomplishes what
hiring engineers think, because they didn't really talk to the person to
figure out what they know, what their experience is and whether they
understand architecture or coding choices really. I've been interviewing
engineers (as well as everyday business team members) for somewhere around 25
years now. I went through the dotcom boom/bust where we did some weird
interviewing methods many that seem to have made a sad come back again. And
the key thing hiring always comes down to is understanding peoples decision
making process more than their ability to write code for this contrived
problem in the next 2 hours.
I do not use coding exercises at all with candidates, and I do not give them
take home tests or anything of the sort. I have found it commonly eliminates
the very people I need to attract. I want the guy who is smart and thinks
through problems and who has experience and values his time. I want the guy
who when he sticks his resume out he has 20 people wanting to interview him
and he doesn't have the time to do everyones 4-6 hour "test" plus another 4-6
hours of interviews, because if he did that for everyone he'd drop the ball at
his current position. I want to talk to these people, I want to walk through
problems with them (sometimes walk through code I provide so they can comment
on it and I can ask them questions), I want to talk about what they have
solved and how, and I need to see that they can communicate with me & the team
as well as fit in. Of course I ask them specific language questions if I am
hiring for a specific need, but I don't care about seeing them write some
piece of code in the next 2 hours, I just want to know how smart they are and
where they have weaknesses (since we all do). And I need to hire people that
complement my teams weaknesses not multiply them.
*edit to add this -- Also the reason I can hire really good candidates before others most of the time is my rule is from initial contact to decision is 5 days or less. That means while other companies are screwing around we already made a decision and gave them an offer or passed on them. Most startup hiring processes take weeks to get through and suck up a huge amount of time from company engineers, founders etc taking away from dev/product time.
R: FpUser
Agree with everything except the tests. Usually I do not need those, but it
did happen a couple of times when I wanted to be absolutely sure that the
person could work in a particular domain. Basically take it home and come back
in a week or whenever you feel like it. The test was the actual problem but
greatly simplified, and should not take more than a day to solve, so the
person gets paid for a day.
R: davismwfl
I don't disagree with you when a candidate has already been through a thorough
interview process but the company has some concerns around a special and
unique area of concern. It is very fair IMO to ask someone to do a little test
around that special area or go through a more detailed interview there, I've
done this for specific candidates myself too. But that area needs to be
something so considerably unique to your business that the person can't have
conceivably done it 20 times before. What is ridiculous is asking them to
solve a puzzle unrelated to the job, or write loops or look up data from a
database when all those things are something they have done likely 100's of
times and you can easily ask questions around. In this specific case, I don't
think a small test is out of line at all, as long as it is paid and short and
the person knows they are on the short list ready for an offer, or an offer has
been made contingent on the test.
What I object to is all these companies that do a 10-minute call (or not even that)
and then ask a candidate to spend many unpaid hours doing a test before they
even bother checking whether the candidate is a fit for the position. And frankly,
even paying for it doesn't make it fair for most experienced engineers. For
example, most companies I have seen that do offer to pay typically offer
around $50 for 4 hours, or $20-40/hr, and they often want an NDA that contains
non-compete clauses. I have seen 5 startups doing this in the last few
months; one was a company I had been mentoring off and on, and the others were
mainly companies friends had applied at and were dumbfounded by. But if you take
my scenario where the test is legit, where you are testing something that is
specialized knowledge and isn't useless to the company, then you need to pay a
fair consulting wage for the time, not a discount rate. Hopefully that makes
sense; it does in my head, but I am not sure I articulated it well.
The overall goal is to move through interviews fast, find smart candidates who
will solve problems, and not put so much emphasis on meaningless details. I
have seen a few companies doing a solid technical interview that includes
quite a few algorithm-based questions. I would normally disagree with
algorithm questions when they are asked purely as CS trivia. But when they
are asked because the candidate will literally need to implement them,
or understand them at a deep level, because the product and position
being interviewed for handle millions of distributed transactions a
second or something similar, then they are legit and very necessary questions.
They are more or less useless (with a few exceptions), though, when you are a
startup that doesn't even have a working product or customers; in that case,
the company just needs people with experience to get the product out the door,
and should stop worrying about crazy algorithm questions or useless tests.
|
HACKER_NEWS
|
Building from source CFG_TARGET isn't set
Hello,
I've tried to build Cargo from source. After I ran ./configure, this file was generated:
CFG_PREFIX := /usr/local
CFG_LOCAL_RUST_ROOT :=
CFG_RUSTC := /usr/local/bin/rustc
CFG_BUILD :=
CFG_HOST :=
CFG_TARGET :=
CFG_LOCALSTATEDIR := /var/lib
CFG_SYSCONFDIR := /etc
CFG_DATADIR := /usr/local/share
CFG_INFODIR := /usr/local/share/info
CFG_MANDIR := /usr/local/share/man
CFG_LIBDIR := /usr/local/lib
CFG_LOCAL_CARGO :=
CFG_CURLORWGET := /usr/bin/curl
CFG_PYTHON := /usr/bin/python
CFG_CC := /usr/bin/cc
CFG_RUSTDOC := /usr/local/bin/rustdoc
CFG_SRC_DIR := /home/wvxvw/projects/cargo/
CFG_BUILD_DIR := /home/wvxvw/projects/cargo/
CFG_CONFIGURE_ARGS :=
CFG_PREFIX := /usr/local
CFG_BUILD :=
CFG_HOST :=
CFG_TARGET :=
CFG_LIBDIR := /usr/local/lib
CFG_MANDIR := /usr/local/share/man
CFG_RUSTC := /usr/local/bin/rustc
CFG_RUSTDOC := /usr/local/bin/rustdoc
As you can see, CFG_TARGET isn't set, but it needs to be set to something (what?) in order for the all recipe to do anything. What should I set it to? Why isn't it set to anything?
what is the output of /usr/local/bin/rustc -vV?
Ah, sorry, I forgot to look at that, and I'm at a different computer now. I built Rust on this machine a few weeks later, so it's not the same, but it should be very close:
rustc 1.0.0-dev (ecf8c64e1 2015-03-21) (built 2015-03-21)
binary: rustc
commit-hash: ecf8c64e1b1b60f228f0c472c0b0dab4a5b5aa61
commit-date: 2015-03-21
build-date: 2015-03-21
host: x86_64-unknown-linux-gnu
release: 1.0.0-dev
I'll only be able to tell the one I've actually used in the post above no earlier than Sunday.
Does configure work on the machine you're on right now? I think that the error handling for detecting the target is pretty bad and this error probably means there was an error running the compiler itself that the configure script forgot to report.
Aaaand, on this machine it does work. The configure script generated CFG_TARGET := x86_64-unknown-linux-gnu.
Minor update. I've rebuilt Rust and now Cargo compiles too:
rustc 1.0.0-dev (30e7e6e8b 2015-04-08) (built 2015-04-09)
binary: rustc
commit-hash: 30e7e6e8b0389d407f8b46ab605a9e3475a851d5
commit-date: 2015-04-08
build-date: 2015-04-09
host: x86_64-unknown-linux-gnu
release: 1.0.0-dev
with this version. This is probably what I'll do with the other computer as soon as I get there, but if you are interested, I'll also post the version of Rust I have there.
Hm, so can you get the original issue to still reproduce?
It appears that my installation of rustc on the machine where I had the initial issue was broken in a way which has nothing to do with Cargo, so I guess there's not much else to do with this issue (I actually couldn't get rustc to even print the version info.)
Hm ok, thanks for confirming!
It appears that my installation of rustc on the machine where I had the initial issue was broken in a way which has nothing to do with Cargo, so I guess there's not much else to do with this issue
So can this be closed then? Or is it open for handling errors while detecting the target?
Yup, sorry, it was too long ago, but I remember I was able to make it work back then, and the issue was something else entirely, so this report may be thrown away :)
Ah yeah I think this is basically about handling errors more robustly in the configure script, right now it's not exactly great :(
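Given the thread's diagnosis, a more robust configure step would run the compiler, parse the `host:` line of `rustc -vV`, and fail loudly instead of writing an empty CFG_TARGET. A minimal sketch in Python (the function names and error messages are illustrative, not Cargo's actual configure code):

```python
import re
import subprocess

def parse_host_triple(vv_output):
    """Extract the target triple from `rustc -vV`-style output."""
    m = re.search(r"^host: (\S+)$", vv_output, re.MULTILINE)
    if not m:
        raise RuntimeError("no 'host:' line in rustc -vV output")
    return m.group(1)

def detect_target(rustc="rustc"):
    """Run the compiler and report errors instead of leaving the value blank."""
    try:
        out = subprocess.run([rustc, "-vV"], capture_output=True,
                             text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError) as exc:
        raise RuntimeError(f"failed to run {rustc} -vV: {exc}")
    return parse_host_triple(out)
```

On the working machine above, this would yield x86_64-unknown-linux-gnu; on the broken one, it would surface the underlying rustc failure instead of silently emitting `CFG_TARGET :=`.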
|
GITHUB_ARCHIVE
|
Best practice sending username & password to server from app
What's the best practice for sending username and password to a webserver from a Sencha developed app?
Is it as easy as the Ext.Ajax function? (http://docs.sencha.com/touch/2-0/#!/api/Ext.Ajax)
Ext.Ajax.request({
    url: 'validation-username-password.php',
    params: {
        username: 'Bob',
        password: 'pass123'
    },
    success: function(response){
        var text = response.responseText;
        // process server response here
    }
});
Is there anything else I should keep in mind? Like safety? Can people sniff the username and password between the app (phone) and the webserver if I use the above method?
Make sure your credentials are never sent unless you have a secure connection (SSL/HTTPS) and you'll be OK.
Agree and +1ed. The actual Javascript code doesn't matter as much as the protocol being used (HTTPS is secure, HTTP will invariably mean passwords can be sniffed).
Use a secure connection and the POST method for sending credentials to the server.
If you use OAuth or any other standard authorization mechanism, that's even better.
Don't send the password as plain text, even if you use SSL; instead send a hash of the password, and also store only the hash of the password on the server, if possible with a salt. This way, if a hacker gets into the server database where all the password hashes are stored, he will not get a list of passwords but a list of salted hashes, which will be useless in most cases. Sometimes users use the same password on many sites; imagine if a hacker got into your server and obtained a list of plaintext passwords, he would then have a list he could use to break into many other accounts at other sites. So using hashes instead of plaintext passwords is the better practice.
TL;DR Send a hash of the password through SSL instead of the plaintext password, and don't store a plaintext password on the server; instead store the hash of the password along with a salt.
Sending a hash instead of the real password from the client does not improve security in any way. How to hash is known to an attacker, because he knows the client code. He can also disable hashing there and simply send the hash without generating it from a password first. Nothing is gained. The important part is the server-side hashing. Combined with SSL, this is secure enough to prevent third-party sniffing and password recovery in case of a server information leak.
It does help in some cases. And you just repeated what I said.
My main point is that I object to your proposal of sending the password hashed. That is simply wrong.
My proposal is sending the user and the password in one hash, along with some salt. But like you said, I could be wrong. If you could please explain in more detail why I'm wrong I would really appreciate it since I'm in the process of doing exactly this. Thanks for your help in advance.
Another thing: it's possible and fairly easy to do ARP spoofing and then use something like SSLStrip to read all the traffic going to and from any computer on a network. If you send the plain password (even with SSL) it could be read with ease. Instead, if you send the hash of the user and password, it would still be read, but the attacker wouldn't have your password. He would still be able to access your account on that site, but he would not have your password. Some people use the same password for everything; that's the problem. What do you think?
You assume SSL can be easily circumvented. This is not true, unless the client does not validate the server certificate. Another point: how does the server know that the password is the correct one for the account? If you send a hash, not only does an attacker not get the password, but neither does the server. How do you check? And please don't say the server compares the hash sent with the hash stored, because that is essentially plain-text password storage with no security.
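To make the server-side hashing the thread converges on concrete, here is a sketch of salted password storage using PBKDF2 from Python's standard library (the iteration count and salt size are illustrative choices, not a vetted policy):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a salted hash suitable for storage; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, stored_digest, iterations=200_000):
    """Recompute the hash and compare in constant time."""
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, stored_digest)
```

The client still sends the plain password over TLS; what the server stores is only the salt and digest, so a database leak exposes neither the passwords themselves nor anything reusable at other sites.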
|
STACK_EXCHANGE
|
Uninstalling GIT on windows
I have had various msysgit installs on my Windows Vista laptop over the past year, using the "msysGit-fullinstall", "msysGit-netinstall" and "preview" installers. I also installed / used different versions along the way, and had a GIT binary installed as part of my Cygwin package. I screwed up along the way (actually, I could not edit .gitconfig anymore), and decided to go nuclear and remove GIT to allow me to have a fresh install (which I can love a bit more :) ).
I tried the below steps, but still the build fails with the error "Old version git-* commands still remain in bindir" - when attempting to use the net installer.
- I removed GIT through Add / Remove programs in control
- Removed all GIT files from usr/local/bin - and every other "git" file I could find
- I even removed my Cygwin environment
- My current %HOME% directory is empty
If I choose to install via the "preview" or "full" installers, it works, and I can use the GIT env / commands - except I again cannot edit the .gitconfig file, and get the error message:
"error: could not lock config file .git/config: No such file or directory"
In summary, I have a botched GIT Windows env, so how can I clean it such that I can reinstall GIT?
Thanks.
There are two steps that you have to do to manually "uninstall" git on Windows:
You have to remove all paths to your bin folders from your PATH environment variable
Remove folder where your git/Cygwin are installed.
After that you can install Cygwin-less msysgit from here: http://code.google.com/p/msysgit/downloads/list
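To double-check the first step, a short script can list which PATH entries still expose a git launcher (a hypothetical helper, not part of any installer):

```python
import os

def git_path_entries(path_var, pathsep=os.pathsep):
    """Return PATH entries that still contain a git launcher, so stale
    msysgit/Cygwin directories can be removed before reinstalling."""
    hits = []
    for entry in filter(None, path_var.split(pathsep)):
        for name in ("git", "git.exe", "git.cmd"):
            if os.path.isfile(os.path.join(entry, name)):
                hits.append(entry)
                break
    return hits
```

Running it over `os.environ["PATH"]` before reinstalling should return an empty list.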
Yea, I had already done those steps, but subsequent "net", "full" and "preview" installs all cause issues (as described above). Thanks.
If you remove them from PATH they will not interfere with new msysgit.
Thanks Sergey. I cannot see anything in my PATH environmental variable that would be "git-related".
Try opening the command line and typing git. Can it be found?
No, it cannot (just to confirm, I'm running that command through DOS)
Then just go ahead and install msysgit.
Installed again.. still get: "./check_bindir: line 9: ` echo "if test "$bindir" != "$gitexecdir" -a -x "$gitcmd"; then echo; echo "if test "$bindir" != "$gitexecdir" -a -x "$gitcmd"; then echo You hav make: *** [install] Error 2"
i.e., installing "msysGit-fullinstall-1.8.3-preview20130601.exe"
The install? No. I'm just double-clicking the downloaded file "msysGit-fullinstall-1.8.3-preview20130601.exe"
What does make --version say?
Not sure what you're referring to... I'm going to reinstall Cygwin, with GIT, and see if I can get that to work (including editing .gitconfig). I'll accept the answer now, and will check back later to see if you have any other suggestions. Thanks Sergey
Here is my approach to this (without removing CygWin):
first, verify Git is installed by typing 'git status' from Cygwin64 Terminal.
Now... Run cygwin-1-7-33\setup-x86_64.exe (or any other cygwin version's setup.exe):
> Install from local directory.
> Took default installation Options:
* Root Directory: C:\cygwin64
* Install for: All Users
* Default Text File Type: DOS
> Local Package Directory:
...\cygwin-1-7-33
> Select Packages:
Leave as is ("+All <-> Install") for most, except for removing the 4 git packages:
> Scroll down to Devel:
> Check the Bin column of all packages that start with 'git-'
(by clicking the 'New' column) - change from 'Keep' to '**Uninstall**':
- git: Fast Version Control System - core files
- git-completion: Fast Version Control System - git bash completion
- git-gui: Fast Version Control System - git-gui viewer
- gitk: Fast Version Control System - gitk viewer
> Create Icons:
No to icon on desktop and start menu. (it's already there)
Now, verify Git is NOT installed by typing 'git status' from Cygwin64 Terminal:
~/ws> -bash: git: command not found
So I had to re-install Cygwin, with the git package selected. Simply could not get the msysgit to work. Initially, all git commands worked, except I STILL could not edit the .gitconfig file. This was solved by:
mkdir ~/.git
git config --global user.email <EMAIL_ADDRESS>
|
STACK_EXCHANGE
|
It's worth pointing out that grey in the post refers to
a fact that not even the worst ideologues know how to spin as "supporting" their side.
I'm not sure what good grey rolls do in the context of this post (especially given the proviso that "there is a 'Blue' and a 'Green' position on almost every contemporary issue of political or cultural importance").
But grey rolls are, of course, important: Grey facts and grey issues are uncorrupted by the Great War, and hence are that much more accessible/tractable. The more grey facts there are, the better rationalists we can be.
With respect to your comment, the presence of Grey, Yellow, Orange and Purple Teams would actually help things substantially -- if I report facts from the six teams equally, it's harder to label me as a partisan. (And it's harder for any team to enforce partisanship.) Even if Blue-supporting facts truly are taboo (Green is unlikely to have more than one archnemesis), that's much less limiting when only a sixth of facts are Blue. It's a nice advantage of multipolar politics.
There's something strange about the analysis posted.
How is it that 100% of the general population with high (>96%) confidence got the correct answer, but only 66% of a subset of that population? Looking at the provided data, it looks like 3 out of 4 people (none with high Karma scores) who gave the highest confidence were right.
(Predictably, the remaining person with high confidence answered 500 million, which is almost the exact population of the European Union (or, in the popular parlance "Europe"). I almost made the same mistake, before realizing that a) "Europe" might be intended to include Russia, or part of Russia, plus other non-EU states and b) I don't know the population of those countries, and can't cover both bases. So in response, I kept the number and decreased my confidence value. Regrettably, 500 million can signify both tremendous confidence and very little confidence, which makes it hard to do an analysis of this effect.)
True, though they forgot to change the "You may make my anonymous survey data public (recommended)" to "You may make my ultimately highly unanonymous survey data public (not as highly recommended)".
(With slightly more fidelity to Mr. Pascal's formulation:)
You have nothing to lose.
You have much to get. God can give you a lot.
There might be no God. But a chance to get something is better than no chance at all.
So go for it.
So let’s modify the problem somewhat. Instead of each person being given the “decider” or “non-decider” hat, we give the "deciders" rocks. You (an outside observer) make the decision.
Version 1: You get to open a door and see whether the person behind the door has a rock or not.
Winning strategy: After you open a door (say, door A) make a decision. If A has a rock then say “yes”. Expected payoff: 0.9 × 1000 + 0.1 × 100 = 910 > 700. If A has no rock, say “no”. Expected payoff: 700 > 0.9 × 100 + 0.1 × 1000 = 190.
Version 2: The host (we’ll call him Monty) randomly picks a door with a rock behind it.
Winning strategy: Monty has provided no additional information by picking a door: We knew that there was a door with a rock behind it. Even if we predicted door A in advance and Monty verified that A had a rock behind it, it is no more likely that heads was chosen: The probability of Monty picking door A given heads is 0.9 × 1/9 = 0.1, whereas the probability given tails is 0.1 × 1.
Hence, say “no”. Expected payoff: 700 > 0.5 × 100 + 0.5 × 1000 = 550.
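The expected-payoff arithmetic in both versions is easy to check mechanically (payoffs and probabilities are taken straight from the strategies above; nothing new is introduced):

```python
# Version 1: you opened door A and saw whether it has a rock.
say_yes_with_rock = 0.9 * 1000 + 0.1 * 100     # rock behind A -> say "yes"
say_no_payoff = 700                             # "no" pays a flat 700
say_yes_without_rock = 0.9 * 100 + 0.1 * 1000  # no rock behind A

# Version 2: Monty's pick of a rock door carries no information,
# so the coin stays 50/50 and "yes" is an even-odds bet.
say_yes_uninformed = 0.5 * 100 + 0.5 * 1000
```

So "yes" beats "no" only in the case where you have seen a rock behind A yourself.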
Now let us modify version 2: During your sleep, you are wheeled into a room containing a rock. That room has a label inside, identifying which door it is behind. Clearly, this is no different than version 2 and the original strategy still stands.
From there it’s a small (logical) jump to your consciousness being put into one of the rock-holding bodies behind the door, which is equivalent to our original case. (Modulo the bit about multiple people making decisions; if we want, we can clone your consciousness if necessary and put it into all rock-possessing bodies. In either case, the fact that you wind up next to a rock provides no additional information.)
This question is actually unnecessarily complex. To make this easier, we could introduce the following game:
We flip a coin where the probability of heads is one in a million. If heads, we give everyone on Earth a rock, if tails we give one person a rock. If the rock holder(s) guesses how the coin landed, Earth wins, otherwise Earth loses. A priori, we very much want everyone to guess tails. A person holding the rock would be very much inclined to say heads, but he’d be wrong. He fails to realize that he is in an equivalence class with everyone else on the planet, and the fact that the person holding the rock is himself carries no information content for the game. (Now, if we could break the equivalence class before the game was played by giving full authority to a specific individual A, and having him say “heads” iff he gets a rock, then we would decrease our chance of losing from 10^-6 to (1-10^-6) * 10^-9.)
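The closing claim, that delegating the call to one designated guesser cuts the loss probability from 10^-6 to (1-10^-6) × 10^-9, checks out numerically (population rounded to a billion, as in the comment above):

```python
P_HEADS = 1e-6   # coin lands heads one time in a million
POP = 1e9        # population, rounded to a billion as in the comment

# Strategy 1: everyone always guesses tails. Earth loses only on heads.
p_lose_all_tails = P_HEADS

# Strategy 2: designated person A says "heads" iff A holds a rock.
# On heads everyone holds a rock, so A is right. On tails, Earth loses
# only when the single rock happens to land on A.
p_lose_designated = (1 - P_HEADS) * (1 / POP)
```

The delegated strategy loses roughly a thousand times less often, which is the point of breaking the equivalence class before the game is played.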
A few thoughts:
I haven't strongly considered my prior on being able to save 3^^^3 people (more on this to follow). But regardless of what that prior is, if approached by somebody claiming to be a Matrix Lord who claims he can save 3^^^3 people, I'm not only faced with the problem of whether I ought to pay him the $5 - I'm also faced with the question of whether I ought to walk over to the next beggar on the street, and pay him $0.01 to save 3^^^3 people. Is this person 500 times more likely to be able to save 3^^^3 people? From the outset, not really. And giving money to random people has no prior probability of being more likely to save lives than anything else.
Now suppose that the said "Matrix Lord" opens the sky, splits the Red Sea, demonstrates his duplicator box on some fish and, sure, creates a humanoid Patronus. Now do I have more reason to believe that he is a Time Lord? Perhaps. Do I have reason to think that he will save 3^^^3 lives if I give him $5? I don't see convincing reason to believe so, but I don't see either view as problematic.
Obviously, once you're not taking Hanson's approach, there's no problem with believing you've made a major discovery that can save an arbitrarily large number of lives.
But here's where I noticed a bit of a problem in your analogy: In the dark matter case you say ""if these equations are actually true, then our descendants will be able to exploit dark energy to do computations, and according to my back-of-the-envelope calculations here, we'd be able to create around a googolplex people that way."
Well, obviously the odds here of creating exactly a googolplex people are no greater than one in a googolplex. Why? Because those back-of-the-envelope calculations are going to get us (at best, say) an interval from 0.5 × 10^(10^100) to 2 × 10^(10^100) - an interval containing more than a googolplex distinct integers. Hence, the odds of any specific one will be very low, but the sum might be very high. (This is simply worth contrasting with the single integer saved in the above case, where presumably your probability of saving 3^^^3 + 1 people is no higher than it was before.)
Here's the main problem I have with your solution:
"But if I actually see strong evidence for something I previously thought was super-improbable, I don't just do a Bayesian update, I should also question whether I was right to assign such a tiny probability in the first place - whether it was really as complex, or unnatural, as I thought. In real life, you are not ever supposed to have a prior improbability of 10^-100 for some fact distinguished enough to be written down, and yet encounter strong evidence, say 10^10 to 1, that the thing has actually happened."
Sure you do. As you pointed out, dice rolls. The sequence of rolls in a game of Risk will do this for you, and you have strong reason to believe that you played a game of Risk and the dice landed as they did.
We do probability estimates because we lack information. Your example of a mathematical theorem is a good one: The Theorem X is true or false from the get-go. But whenever you give me new information, even if that information is framed in the form of a question, it makes sense for me to do a Bayesian update. That's why a lot of so-called knowledge paradoxes are silly: If you ask me if I know who the president is, I can answer with 99%+ probability that it's Obama, if you ask me whether Obama is still breathing, I have to do an update based on my consideration of what prompted the question. I'm not committing a fallacy by saying 95%, I'm doing a Bayesian update, as I should.
You'll often find yourself updating your probabilities based on the knowledge that you were completely incorrect about something (even something mathematical) to begin with. That doesn't mean you were wrong to assign the initial probabilities: You were assigning them based on your knowledge at the time. That's how you assign probabilities.
In your case, you're not even updating on an "unknown unknown" - that is, something you failed to consider even as a possibility - though that's the reason you put all probabilities at less than 100%, because your knowledge is limited. You're updating on something you considered before. And I see absolutely no reason to label this a special non-Bayesian type of update that somehow dodges the problem. I could be missing something, but I don't see a coherent argument there.
As an aside, the repeated references to how people misunderstood previous posts are distracting to say the least. Couldn't you just include a single link to Aaronson's Large Numbers paper (or anything on up-arrow notation, I mention Aaronson's paper because it's fun)? After all, if you can't understand tetration (and up), you're not going to understand the article to begin with.
|
OPCFW_CODE
|
Monday, May 01, 2006 11:22 AM
I'm trying to do something pretty simple. I have a large, complex project. I want to save it under a new name, so that I can make changes to it (changes that I wouldn't want to risk screwing up my original project).
I cannot for the life of me figure out how to do this in the C# IDE. There seems to be no option.
Saving it as a template isn't what I'm looking for. I want to duplicate the project so that the old version is safe, and there seems to be no way to do this.
Tell me I'm missing something.
Monday, May 01, 2006 11:48 AM
Save as only saves one file. For the project it actually only saves the project file.
What you want is source control, like Visual SourceSafe. There are also free alternatives like CVS, Subversion, and SourceGear Vault (for one user).
What you can do now without other software is to use windows explorer to make a copy of it, maybe to a compressed folder (zip file). If you want to go back you will restore the copy/zip of the project.
Monday, May 01, 2006 12:36 PM
I realize that I could manually tuck the files away someplace safe, but even that won't change the fact that the new project has the same name as the old one, etc.
In fact, I wanted to rename a project recently, and found that I couldn't. (Its original name was test1, and the project had outgrown the name.)
I'm kind of dumbfounded. I love most aspects of the IDE. There are things that drive me nuts, but by and large, I really love this IDE. It's mostly very helpful and mostly helps me do what I want to do, which is more than you can say about most IDEs. The developers were very thorough.
That's why I can't understand why something as rudimentary as a "Save As" option isn't there, or a "Rename" option. Yes, I can figure a way around it, but there's no elegant way around it. There must be a reason that such a basic feature wasn't included, and I'd like to know what it is.
Monday, May 01, 2006 12:41 PM
If you right-click on the project name in solution explorer you have an option "Rename". Is that what you are looking for?
Monday, May 01, 2006 3:33 PM
As far as I know, there is no mechanism within the IDE to rename *all* elements of a project. The problem, as has already been mentioned, is that the Save command works on individual artifacts. What you are doing is a branch, which is a well-known source-control function. Which is why the suggested choices were all source-control based. And it’s unlikely to ever make it into the IDE, given that both VSS and Team Foundation Server provide that functionality.
Bruce Johnson [C# MVP]
Monday, May 01, 2006 4:43 PM
"Save As" is anything but a basic operation when you're dealing with projects, and it's not at all obvious how it would work. Should "Save As" copy just the project file, or all the source files as well? What about the binaries that have been built for the project? Should the new copy of the project go in the same solution as the original, or in a new solution, or no solution at all? What if the project references other projects in the same solution? Do we drop those references, or copy the referenced projects as well?
There are a lot of choices to make here, and a good chance that however "Save Project As" was implemented it would not work the way users might expect it to. Ultimately, though, this feature has not been implemented because there is frankly very little customer demand for it. Code management systems are the best accepted and understood means for handling changes, large and small, in source code.
Software Dev, Visual C# IDE
Thursday, August 11, 2011 3:11 AM
I have written a sample called CS/VBVSXSaveProject that could save a project to other location in Visual Studio.
If there is any feedback, feel free to tell me ( onecode at microsoft . com ).
Ruiz Yi [MSFT]
MSDN Community Support | Feedback to us
Get or Request Code Sample from Microsoft
Please remember to mark the replies as answers if they help and unmark them if they provide no help.
Thursday, July 12, 2012 9:33 PM
Here is what I do - not splendid, but works for me. Go to:
C:\Documents and Settings\yourname\My Documents\Visual Studio 2010\Projects
Right Click and copy the folder, say ConsoleApplication1
Right Click and paste the folder. You now have 'Copy of ConsoleApplication1'.
I renamed it with the date. When you go to Open Project, there it is and it opens like any.
However, the folders under 'Copy of' are all the same names as before, and the Project Open window doesn't show you the parent folder names, so be careful about which version of your project you're opening.
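The copy-and-rename above can also be scripted, so the copy gets a dated name immediately instead of "Copy of ...". A sketch (the folder layout and the bin/obj exclusions are assumptions; this is a workaround, not an IDE feature):

```python
import datetime
import pathlib
import shutil

def branch_project(projects_dir, name):
    """Copy <projects_dir>/<name> to a dated sibling folder, skipping
    build output (bin/obj), and return the new folder's path."""
    src = pathlib.Path(projects_dir) / name
    stamp = datetime.date.today().isoformat()
    dst = src.with_name(f"{name}-{stamp}")
    shutil.copytree(src, dst, ignore=shutil.ignore_patterns("bin", "obj"))
    return dst
```

The dated name also addresses the caveat above about not being able to tell which copy you are opening.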
|
OPCFW_CODE
|
package fuseklient
import (
"io"
"testing"
. "github.com/smartystreets/goconvey/convey"
)
func TestFileNewFile(tt *testing.T) {
Convey("NewFile", tt, func() {
Convey("It should initialize new File", func() {
f, err := newFile()
So(err, ShouldBeNil)
Convey("It should initialize content", func() {
So(f.content, ShouldNotBeNil)
})
})
})
}
func TestFileReadAt(tt *testing.T) {
Convey("ReadAt", tt, func() {
f, err := newFile()
So(err, ShouldBeNil)
Convey("It should return content at specified offset: 0", func() {
So(readAt(f, 0, dftCnt), ShouldBeNil)
})
Convey("It should return content at specified offset: 1", func() {
So(readAt(f, 1, dftCnt[1:]), ShouldBeNil)
})
Convey("It should not fetch content from remote if content is same size as Attrs#Size", func() {
So(readAt(f, 0, dftCnt), ShouldBeNil)
f.Transport = nil // this causes panic if next method call hits remote
So(readAt(f, 0, dftCnt), ShouldBeNil)
})
Convey("It should return error if offset is equal to length of content", func() {
_, err = f.ReadAt(nil, int64(f.Attrs.Size))
So(err, ShouldEqual, io.EOF)
})
Convey("It should return error if offset is greater than length of content", func() {
_, err := f.ReadAt(nil, int64(f.Attrs.Size)+1)
So(err, ShouldEqual, io.EOF)
})
})
}
func TestFileWriteAt(tt *testing.T) {
Convey("WriteAt", tt, func() {
f, err := newFile()
So(err, ShouldBeNil)
Convey("It should write specified content at beginning of file", func() {
f.WriteAt([]byte("Holla"), 0)
So(readAt(f, 0, []byte("Holla World!")), ShouldBeNil)
})
Convey("It should write specified content at end of file", func() {
f.WriteAt([]byte("!"), int64(f.Attrs.Size)) // 0 indexed
So(readAt(f, 0, []byte("Hello World!!")), ShouldBeNil)
})
Convey("It should update file size on write", func() {
f.WriteAt([]byte("!"), int64(f.Attrs.Size))
So(f.Attrs.Size, ShouldEqual, len(dftCnt)+1)
})
})
}
func TestFileTruncateTo(tt *testing.T) {
Convey("TruncateTo", tt, func() {
f, err := newFile()
So(err, ShouldBeNil)
Convey("It should add extra padding to content if specified size is greater than size of file", func() {
oldSize := f.Attrs.Size
So(f.TruncateTo(uint64(oldSize+1)), ShouldBeNil)
So(f.Attrs.Size, ShouldEqual, oldSize+1)
})
Convey("It should not change content if specified size is same as size of file", func() {
size := f.Attrs.Size
So(f.TruncateTo(uint64(size)), ShouldBeNil)
So(f.Attrs.Size, ShouldEqual, size)
})
Convey("It should remove content at end if specified size is smaller than size of file", func() {
Convey("It should truncate file to size: 0", func() {
So(f.TruncateTo(0), ShouldBeNil)
Convey("It should save truncated content", func() {
So(f.Attrs.Size, ShouldEqual, 0)
})
})
Convey("It should truncate file to size: 1", func() {
So(f.TruncateTo(1), ShouldBeNil)
Convey("It should save truncated content", func() {
So(f.Attrs.Size, ShouldEqual, 1)
})
})
})
})
}
func TestFileExpire(tt *testing.T) {
Convey("Expire", tt, func() {
Convey("It should increase inode id of itself", func() {
f, err := newFile()
So(err, ShouldBeNil)
id := f.ID
So(f.Expire(), ShouldBeNil)
So(f.ID, ShouldNotEqual, id)
})
})
}
func newFile() (*File, error) {
rt, err := newRemoteTransport()
if err != nil {
return nil, err
}
if err := rt.WriteFile("1", dftCnt); err != nil {
return nil, err
}
d := newDir()
d.Path = ""
i := NewEntry(d, "1")
i.Attrs.Size = uint64(len(dftCnt))
f := NewFile(i)
return f, nil
}
|
STACK_EDU
|
Managed File Transfer User Documentation
This document is for users of Managed File Transfer, possibly in conjunction with another ITS service.
What? I thought I was using the ITS XYZ service! Some ITS services, such as Banner Managed Services, use Managed File Transfer to augment capability. This service often serves to shuttle files between customers and backend systems. For many of you, it’s just a way to collaborate by sharing files.
- Client Application - computer software
- Client - The end-user node connecting to the Managed File Transfer server
- Server - the remote server which stores, serves, and receives files for clients
- Service - the general term to refer to Managed File Transfer
- Parent Service - the ITS service which is using Managed File Transfer for file exchange capabilities (e.g. Banner Managed Services)
- Project - the project for which the folders which you are accessing are intended
- Project Administrator - the technical contact for the project folders you are working in
If you need help, please contact the ITS Help Desk.
- Service Request Form
- Telephone: 706-583-2001 or 1-888-875-3697 (toll free in Georgia)
The service can be accessed using any client application which supports one or more of the following protocols:
- HTTPS at files.usg.edu
- MoveIt Sync
Accessing the Server
Server - files.usg.edu
Username/Password - typically, these are your USO domain account credentials
The project administrator may allow you to use SSH keys or client certificates so that you can script and automate routine tasks. If so, attempt to login with a key or certificate. The key will be placed in a holding tank. Ask the project administrator to approve the key. Or, email the public key or fingerprint to the project administrator.
Forgot Your Password?
For customers using USO domain account credentials, simply login to mypassword.bor.usg.edu to reset or unlock your password. Passwords locked for excessive retries are automatically unlocked after a few minutes.
If your project administrator has created a local user for you, the project administrator can reset the password for you.
You will only see folders you can access. But, each folder has different permissions. You may be able to upload to some folders, but not others. The project administrator should be able to help you with access questions.
Managed File Transfer has an API, with an SDK for Java and C#. We have also made heavy use of a deprecated Perl library, and we have figured out how to use the API directly, which is not well documented. The vast majority of users will be able to automate tasks with scripts using lftp, sftp, and similar tools. If you think the API may be a better approach, please let us know.
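As a small illustration of the scripted route mentioned above, here is a sketch that generates an sftp batch file for unattended uploads. The remote folder and file names are hypothetical examples; only the files.usg.edu hostname comes from this document.

```python
# Sketch: build the contents of a batch file for `sftp -b`.
# The remote folder and local file names below are hypothetical.
def sftp_batch(remote_dir, local_files):
    """Return the text of an sftp batch file: cd, put each file, quit."""
    lines = ["cd " + remote_dir]
    lines += ["put " + path for path in local_files]
    lines.append("bye")
    return "\n".join(lines)

batch = sftp_batch("/project/inbound", ["report.csv", "audit.log"])
print(batch)
# With key-based auth approved by the project administrator, run it as:
#   sftp -b batch.txt user@files.usg.edu
```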
Note About Copying Files
If you copy a file or folder created by someone else, creator and creation dates will be retained.
|
OPCFW_CODE
|
A paper by Vitalik Buterin in 2014 allowed the Nakamoto technology to be used more widely. Whitepapers are also enthusiastically distributed over internet forums and social media, giving a project a scale of outreach otherwise unavailable without one. Specificity and factually correct information are necessary. The previous two sections conclude the bulk of the technical material of the whitepaper. Disclaimer: I know every blog article out there says this, but this isn't investment advice. For example, the definitions of the specific components involved in the platform's processes would be listed here. This will give a better perspective on the ideal whitepaper, as investors form the bulk of the intended audience. In cryptocurrency, a whitepaper is a comprehensive report, often highly technical in nature, that a project usually releases in the lead-up to their ICO, though it may be released at any point. It also might just mean lurking and being interested from afar. If the project is fortified against these in some way, then it is relevant to communicate them.
In the cryptocurrency space, it is the act of putting money where one's mouth is. A little extra spent on professional translations can pay off, given the explosion of the market in the past year. The abstract should explain not only what the components are, but also why they are so important for this particular project.
A whitepaper is essentially a prospectus that clearly outlines the technical aspects of the product, the problems it intends to solve, how it is going to address them.In the cryptocurrency space, a whitepaper is a document presented by a start-up with the intention of informing and encouraging investors.How to avoid cryptocurrency white papers when getting involved in the blockchain industry?
This is another significant reason to publish a whitepaper when launching a new cryptocurrency project. There must be evidence to back up any claims. Whitepapers are persuasions that a team must convincingly construct to mark off the criteria of logical existence, commercial viability, and technical robustness, and so to influence the decisions made by the reader. As ICOs become more and more popular, the number of freelancers offering to write a white paper for up-and-coming startups has been increasing exponentially, and apparently it can cost as little as 100. A notable exception is work published openly on platforms such as GitHub, forums, and so on. List the details of any financial backing that has been received and go into the numerical figures of the token, including its current standard of operations and global market share. The best teams stand out not because they try to, but because they embody these qualities naturally and it shows in their work.
|
OPCFW_CODE
|
Understanding Error Handling
Updated: January 13, 2014
Applies To: Microsoft HPC Pack 2008 R2, Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2
This topic describes the error handling settings for the HPC Job Scheduler Service. For information about how to change the configuration options, see Configure the HPC Job Scheduler Service.
This topic includes the following sections:
The HPC Node Manager Service on each node sends regular health reports to the HPC Job Scheduler Service. This health report is called a heartbeat. This heartbeat signal verifies node availability. If a node misses too many heartbeats, the HPC Job Scheduler Service flags the node as unreachable.
The following cluster property settings apply to the health probes:
Heartbeat Interval: the frequency, in seconds, of the health probes. The default is 30 seconds.
Missed Heartbeats (Inactivity Count): the number of heartbeats a node can miss before it is considered unreachable. The default is 3.
Starting with HPC Pack 2012 with Service Pack 1 (SP1), separate settings are provided to configure the inactivity count for on-premises (local) nodes and Windows Azure nodes. Because of possible network latency when reaching Windows Azure nodes, the default inactivity count for Windows Azure nodes is 10.
A node can miss a heartbeat for many reasons, including:
Problems with network connectivity
The HPC Node Manager Service is not running on the compute node
Authentication failure between the head node and the compute node
If you increase the frequency of the health probes (set a shorter Heartbeat Interval), you can detect failures more quickly, but you also increase network traffic. Increased network traffic can decrease cluster performance.
When a node is flagged as unreachable, jobs that are running on that node might fail. If you know that your network has frequent intermittent failures, you might want to increase the Inactivity Count to avoid unnecessary job failures. See also Retry jobs and tasks in this topic.
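Putting the two settings together, the worst-case time before a node is flagged unreachable is simply the product of the heartbeat interval and the inactivity count. A back-of-the-envelope sketch, not scheduler code:

```python
def time_to_unreachable(heartbeat_interval_s, inactivity_count):
    """Seconds of silence before the scheduler flags a node unreachable."""
    return heartbeat_interval_s * inactivity_count

print(time_to_unreachable(30, 3))   # on-premises defaults: 90 seconds
print(time_to_unreachable(30, 10))  # Windows Azure default count: 300 seconds
```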
The HPC Job Scheduler Service automatically retries jobs and tasks that fail due to a cluster problem, such as a node becoming unreachable, or that are stopped by preemption policy. After a specified number of unsuccessful attempts, the HPC Job Scheduler Service marks the job or task as Failed.
The following cluster property settings determine the number of times to retry jobs and tasks:
Job retry: the number of times to automatically retry a job. The default is 3.
Task retry: the number of times to automatically retry a task. The default is 3.
Tasks are not automatically retried if the task property Rerunnable is set to false.
Jobs are not automatically retried if the job property Fail on task failure is set to true.
For more information, see Understanding Job and Task Properties.
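The retry rules above can be summarized in a small sketch. This is a simplification of the scheduler's behavior for illustration, not its actual code:

```python
def should_retry_task(attempts, task_retry_limit=3, rerunnable=True):
    """A task is retried only while Rerunnable is true and retries remain."""
    return rerunnable and attempts < task_retry_limit

def should_retry_job(attempts, job_retry_limit=3, fail_on_task_failure=False):
    """A job is retried only while 'Fail on task failure' is false and retries remain."""
    return not fail_on_task_failure and attempts < job_retry_limit

print(should_retry_task(2))                    # retries remain
print(should_retry_task(1, rerunnable=False))  # Rerunnable false: never retried
print(should_retry_job(3))                     # limit reached: marked Failed
```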
When a running task is stopped during execution, you can allow time for the application to save state information, write a log message, create or delete files, or for services to finish computation of their current service call. You can configure the amount of time, in seconds, to allow applications to exit gracefully by setting the Task Cancel Grace Period cluster property. The default Task Cancel Grace Period is 15 seconds.
In Windows HPC Server 2008 R2, the HPC Node Manager Service stops a running task by sending a CTRL_BREAK signal to the application. To use the grace period, the application must process the CTRL_BREAK event. If the application does not process the event, the task exits immediately. For a service to use the grace period, it must process the ServiceContext.OnExiting event.
A cluster administrator or a job owner can force cancel a running task. When a task is force canceled, the task and its sub-tasks skip the grace period and are stopped immediately. For more information, see Force Cancel a Job or Task.
You can adjust the grace period time according to how the applications that run on your cluster handle the CTRL_BREAK signal. For example, if applications try to copy large amounts of data after the signal, you can increase the time out accordingly.
Job owners can add Node Release tasks to run a command or script on each node as it is released from the job. Node Release tasks can be used to return allocated nodes to their pre-job state or to collect data and log files.
The Node Release Task Timeout determines the maximum run time (in seconds) for Node Release tasks. The default value is 10 seconds.
If a job has a maximum run time and a Node Release task, the scheduler cancels the other tasks in the job before the run time of the job expires (job run time minus Node Release task run time). This allows the Node Release task to run within the allocated time for the job.
Node Release tasks run even if a job is canceled. A cluster administrator or the job owner can force cancel a job to skip the Node Release task. For more information, see Force Cancel a Job or Task.
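The run-time arithmetic described above (job run time minus Node Release task run time) can be sketched as:

```python
def task_cancel_deadline(job_runtime_s, node_release_timeout_s=10):
    """Seconds into the job at which the other tasks are canceled, so the
    Node Release task still fits inside the job's allocated run time."""
    return job_runtime_s - node_release_timeout_s

print(task_cancel_deadline(3600))     # 1-hour job, default 10 s release timeout
print(task_cancel_deadline(600, 30))  # 10-minute job, 30 s release timeout
```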
The Excluded nodes limit specifies the maximum number of nodes that can be listed in the Excluded Nodes job property. The Excluded Nodes job property can specify a list of nodes that the job scheduler should stop using or refrain from using for a particular job.
If a job owner or a cluster administrator notices that tasks in a job consistently fail on a particular node, they can add that node to the Excluded Nodes job property. When the Excluded nodes limit is reached, attempts to add more nodes to the list fail. For more information, see Set and Clear Excluded Nodes for Jobs.
For SOA jobs, the broker node automatically updates and maintains the list of excluded nodes according to the EndPointNotFoundRetryPeriod setting (in the service configuration file). This setting specifies how long the service host should retry loading the service and how long the broker should wait for a connection. If this time elapses, the broker adds the node (service host) to the Excluded Nodes list. When the Excluded nodes limit is exceeded, the broker node cancels the SOA job.
If you change the Excluded nodes limit for the cluster, the new limit will only apply to excluded node lists that are modified after the new limit has been set. That is, the number of nodes listed in the Excluded Nodes job property is only validated against the cluster-wide limit at the time that the job is created or that the property is modified.
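A sketch of the validation rule described above: the list is only checked against the cluster-wide limit at the moment it is modified. The function and node names are hypothetical:

```python
def set_excluded_nodes(current, to_add, limit):
    """Validate a modified Excluded Nodes list against the cluster-wide limit."""
    proposed = set(current) | set(to_add)
    if len(proposed) > limit:
        raise ValueError("Excluded nodes limit exceeded")
    return proposed

print(sorted(set_excluded_nodes({"node1"}, {"node2"}, limit=2)))
```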
|
OPCFW_CODE
|
import parse5 from "parse5";
import * as htmlparser2 from "parse5-htmlparser2-tree-adapter";
const getSelectorsInElement = (element: htmlparser2.Element): string[] => {
const selectors: string[] = [];
const tagName = element.name;
// add class names
if (element.attribs.class) {
selectors.push(...element.attribs.class.split(" "));
}
// add ids
if (element.attribs.id) {
selectors.push(...element.attribs.id.split(" "));
}
// eslint-disable-next-line @typescript-eslint/no-use-before-define
return [...getSelectorsInNodes(element), ...selectors, tagName];
};
const getSelectorsInNodes = (
node: htmlparser2.Document | htmlparser2.Element
): string[] => {
const selectors: string[] = [];
for (const childNode of node.children) {
const element = childNode as htmlparser2.Element;
switch (element.type) {
case "tag":
selectors.push(...getSelectorsInElement(element));
break;
case "root":
selectors.push(...getSelectorsInNodes(element));
break;
default:
break;
}
}
return selectors;
};
const purgecssFromHtml = (content: string): string[] => {
const tree = parse5.parse(content, {
treeAdapter: htmlparser2,
}) as htmlparser2.Document;
return getSelectorsInNodes(tree);
};
export default purgecssFromHtml;
|
STACK_EDU
|
Though they are effective at a variety of computer vision tasks, deep neural networks (DNNs) have been shown to be vulnerable to attacks based on adversarial examples, or images perceptually similar to the real images but intentionally constructed to fool learning models. This has limited the application of DNNs in security-critical systems.
My colleagues and I propose a training recipe named “deep defense” to address this vulnerability. Deep defense integrates an adversarial perturbation-based regularizer into the classification objective, such that the obtained models learn to resist potential attacks, directly and precisely. Experimental results with MNIST*, CIFAR-10*, and ImageNet* demonstrate that our deep defense method significantly improves the resistance of different DNNs to advanced adversarial attacks with no observed accuracy degradation. Results indicate that our method outperforms training with adversarial/Parseval regularizations by large margins on these datasets and different DNN architectures. With this and future work, we hope to improve the resistance of DNNs to adversarial attacks and therefore increase the suitability of DNNs to security-critical applications.
Earlier studies have synthesized adversarial examples by applying worst-case perturbations to real images. It has been shown that perturbations for fooling a DNN model can be 1000x smaller in magnitude when compared with real images, making these perturbations imperceptible to the naked eye. Studies have found that even leading DNN solutions can be fooled into misclassifying these adversarial examples with high confidence.
This vulnerability to adversarial examples can lead to significant issues in real-world applications, such as face ID systems. Unlike a certain instability against random noise, which is theoretically and practically guaranteed to be less critical, the vulnerability of DNNs to adversarial perturbations is more severe.
Several earlier studies have investigated this vulnerability. Goodfellow et al. argue that the main reason DNNs are vulnerable is their linear nature, rather than nonlinearity and overfitting. Based on this explanation, they design an efficient l∞-induced perturbation and further propose to combine it with adversarial training for regularization. Recently, Cisse et al. investigated the Lipschitz constant of DNN-based classifiers and proposed Parseval training. However, similar to some previous and contemporary methods, approximations to the theoretically optimal constraint are required in practice, making the method less effective at resisting very strong attacks.
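The l∞-induced perturbation of Goodfellow et al. mentioned above is the familiar "fast gradient sign" step: epsilon times the sign of the loss gradient. A plain-Python sketch; the gradient values and epsilon here are made-up numbers for illustration, and a real implementation would take the gradient from the network's loss:

```python
def fgsm_perturbation(grad, eps):
    """eps * sign(gradient): the l-infinity-bounded worst-case step."""
    sign = lambda g: (g > 0) - (g < 0)
    return [eps * sign(g) for g in grad]

grad = [0.3, -1.2, 0.0, 5.0]          # made-up gradient values
print(fgsm_perturbation(grad, 0.01))  # every entry is +eps, -eps, or 0
```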
We introduce “Deep Defense,” a regularization method to train DNNs with improved robustness against adversarial examples. Unlike previous methods which make approximations and optimize possibly untight bounds, we precisely integrate a perturbation-based regularizer into the classification objective. The DNN models can therefore directly learn from, and develop further resistance to, adversarial attacks in a principled way.
Specifically, we penalize the norm of adversarial perturbations by encouraging relatively large values for the correctly classified samples and possibly small values for the misclassified ones. As a regularizer, it is jointly optimized with the original learning objective and the whole problem is efficiently solved by being considered as training a recursive-flavored network.
As noted, we find this approach to significantly increase the robustness of DNNs to advanced adversarial attacks with no observed accuracy degradation.
We look forward to extending this research with future works pertaining to resisting black-box attacks and attacks in the physical world and continuing to work to extend the utility of DNNs to more applications.
For more information, please review our study Deep Defense: Training DNNs with Improved Adversarial Robustness, which was presented at the 2018 NeurIPS conference. For more AI research from Intel, follow @IntelAIDev and @IntelAI on Twitter and tune in to https://ai.intel.com.
|
OPCFW_CODE
|
Maximum number of virtual CPU for Windows 7/10 virtual machine
For a virtual machine running Windows 7/10, is there a maximum number of virtual CPU a hypervisor can assign to it?
As hypervisor, consider VMware ESXi 5.5 or 6.0.
Andrea
What hypervisor are we talking about? Windows does licensing via processor sockets which can pose an interesting problem.
I edited the question, anyway I want to consider VMware ESXi 5.5 or 6.0.
Worth checking out this old question too in terms of the way you layout those sockets/cores - https://serverfault.com/questions/455999/is-there-a-limitations-to-the-number-of-cores-on-windows-7-64-operating-system/456015
Chopper3, that old question concerned physical machines and physical processors. I don't know if I can consider the same things for virtual machines with virtual CPUs.
The maximum specifications of the ESXi hosts are well documented. For example, from the document for ESXi 6.0:
Virtual CPUs per virtual machine (Virtual SMP): 128
However, due to the way VMware assigns CPU resources to VMs, I usually recommend using no more than 8 vCPU cores per VM, as long as CPUs are overprovisioned.
(On our environment we run ~80 VMs with a wide variety of configurations on 32 physical CPU cores)
To clarify:
The ESXi host allocates CPU cycles to a VM when enough physical CPU cores are free to cover all virtual cores in the VM. That means, if you assign 16 cores to a VM, the VM will sit and wait until 16 physical cores are available, then the cycles will run simultaneous on the physical cores.
If your host has 64 physical cores, and you have 4 VMs with 16 cores each, this will obviously not matter. But if you overprovision CPU cores, running, for example, 20 VMs with 16 virtual cores each, because, hey, which VM ever uses all cores at once, you will notice a performance degradation.
(This behavior is, as far as I know, specific to VMware ESXi and does not apply to other hypervisors)
These are just examples; the best number depends on your hardware and the number of VMs you intend to run on it. You will most probably do some testing until you find a good compromise.
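The strict co-scheduling behavior described in this answer can be illustrated with a toy check. This is a simplification for illustration, not how the ESXi scheduler is actually implemented:

```python
def can_schedule(vm_vcpus, free_physical_cores):
    """Under strict co-scheduling, a VM runs only when enough physical
    cores are free to cover all of its virtual cores at once."""
    return free_physical_cores >= vm_vcpus

print(can_schedule(16, 8))  # a 16-vCPU VM waits while only 8 cores are free
print(can_schedule(4, 8))   # a smaller VM can run immediately
```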
Gerard, thank you for the clarification about ESXi. You wrote about best practices for allocating virtual CPUs to virtual machines. However, my question focused on the maximum number of virtual CPUs a Windows 7/10 virtual machine can support in ESXi.
The guest OS is irrelevant, this applies to all VMs running on the hypervisor.
Are there any limitations in terms of number of sockets a Windows 7/10 VM can manage?
The question linked by Chopper3 answers that, at least for windows 7. Microsoft also provides documentation about hardware limitations. You can at least try to do a little research.
I think here there is the right answer, about Windows 10.
I believe Hyper-V uses virtual CPUs to represent threads that can run ad-hoc on any core. One thing I would keep in mind is that the moment you provision your VM with enough virtual processors that it exceeds the number of cores on any one processor, you get into NUMA mode, which complicates processing at a hardware level by requiring the processors to synchronize some information (registers, caches, etc.) and can sometimes lead to instability.
|
STACK_EXCHANGE
|
I use the GetSpectrum to show 32 spectrum bars.
Can anyone help me with which frequencies I should include ?
Only taking the first 32 values is not the best solution I think.
And can anyone tell me how these frequencies are related to the 512 values returned by GetSpectrum?
I looked in the documentation, and if the output rate is 44100 then the value interval would be 22050 / 512 = 43 Hz
Am I way off here ?
But the most important issue is input on the values I should include in my spectrum bars.
- Anonymous asked 16 years ago
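One common answer to the question above: group the 512 linear FFT bins (each about 43 Hz wide at a 44100 Hz output rate) into 32 logarithmically spaced bars, since hearing is roughly logarithmic in frequency. A sketch of the bin grouping; the rounding scheme is just one reasonable choice:

```python
def bar_bin_ranges(num_bins=512, num_bars=32):
    """Split FFT bins into logarithmically sized [start, end) ranges, one per bar."""
    edges = [int(round(num_bins ** (i / num_bars))) for i in range(num_bars + 1)]
    # force strictly increasing edges so every bar gets at least one bin
    for i in range(1, len(edges)):
        edges[i] = max(edges[i], edges[i - 1] + 1)
    edges[-1] = num_bins
    return list(zip(edges[:-1], edges[1:]))

ranges = bar_bin_ranges()
print(len(ranges))            # 32 bars
print(ranges[0], ranges[-1])  # low bars are narrow, high bars are wide
```

Each bar's value would then be the peak (or average) of the GetSpectrum values inside its bin range.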
Lol, 1 inch of dust. hehe. Yeah, I do somewhat of the same thing with my VU, because if I just output the raw values it goes way too fast and sloppy. But, I think the spectrum I have is smooth enough, so I kept it like that. Also, I think blitting it would decrease speed so my bars are a solid color.
You were right, and I was wrong 😆
I have a big spectrum window function in my program. When using the small one, it uses 7-8%, with the big one it uses 27-33%, and in full screen 75-80%
I use a PIII-450MHz
640 MB RAM
So it seems like it’s the FillRect routine that’s slow.
Any suggestions how to speed this up ?
To the people who requested the spectrum or scratching effect code, please be patient, had a heavy weekend with my mate Jack Daniels and his sister Sambooka. Computer remained OFF whilst in recovery. I don't bother with the Internet thing at home. Will post the stuff tomorrow.
…RE…. Anybody done any serious audio development with fmod, like building a sample sequencer etc? I have started development of a 16-track audio sequencer. Everybody just seems to be trying to build the perfect MP3 player? (pst. it's been done!)
Well, fillrect is a lot faster than internal VB functions I think. But if you want to try it out you could always mess around with the Line function of VB. And also, If you are using the Timer control for the spectrum analyzer, you might try using an API timer, which are a lot more reliable than VB timers.
Yeah, lol. I believe the perfect MP3 player is Sonique, which is not powered by FMOD. It uses AudioEnlightenment, which is a kickass mpeg decoder written by Tony Million I think… Anyways, I’m not really trying to make the perfect MP3 player, I’m just working on it because I’d rather use a program that I wrote and know what’s in it than some 3rd party program with spyware and crap like that.
This may not be the best way to create a spectrum, I've been using fmod in my own player for a while. My spectrum runs in a constant loop. Ie
~Draw some stuff
I also use BitBlt to construct my spectrum, and I calculate a nice gravity dropoff and an option to smooth the appearance of the display, looks very nice and fast as hell. Email me @ email@example.com if you would like the whole source code,
..just that after posting my email address I had…well now…33 requests for my code, and yet only 2 people bothering to contribute to the forum. I wonder what these guys are doing with fmod?????, would be interesting to know…..
:), Clubing tonight, Yeeeeeeee
Blitting is actually slower than using FillRect and other API draw routines. If you go with a do loop it's better to use the Sleep(x) API than a constant loop, otherwise you will eat up lots of cpu constantly. A good engine might do something like…
x = GetTimer
' …draw the spectrum here…
xx = GetTimer
xxx = xx - x
Sleep xxx
This will cause the app to sleep for the same amount of time it takes to perform the drawing. Thus on slower comps it wouldn't eat mass cpu time, and you could further expand the routines to use even less cpu if it is an extremely fast machine, etc…
As for speeding up your drawing routines, you might want to look at the DirectX SDK, or other such Graphics SDK’s.
Just a thought.
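The GetTimer/Sleep idea sketched in the post above translates to something like this. A sketch, with draw() standing in for whatever the spectrum rendering actually does:

```python
import time

def frame_loop(draw, frames):
    """After each frame, sleep for as long as the draw itself took,
    so slower machines automatically yield more CPU to other apps."""
    for _ in range(frames):
        start = time.perf_counter()
        draw()
        elapsed = time.perf_counter() - start
        time.sleep(elapsed)

frames_drawn = []
frame_loop(lambda: frames_drawn.append(1), 3)
print(len(frames_drawn))  # 3 frames drawn
```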
Thanks for the speed advice, I'll give that a shot soon. I use the blt function rather than FillRect because I can get nice gradients and then even the eq display is skinnable (if you're into that thing). Re: the gravity dropoff, if you pump the normal spectrum into say 30 bars jumping up and down, it looks good, however I wanted a smoother display, so I made the bars gradually descend after peaking until a higher value is received; the effect is the whole display will jump around in time to the music, and yes it might not be as accurate, but it looks very nice. Also I've taken to manipulating the values - that is, adjusting each bar according to the values of the bars next to it. This way the eq will be nice and smooth and rounded. I think I'll have to post a piccy and the code. You'd be surprised how fast this runs in vb. oh - my pc specs are p3-400, 128megs, 1 inch dust.
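The gravity dropoff described in the post above amounts to: a bar snaps up to any higher incoming value, otherwise falls by a fixed decay per frame. A sketch; the decay constant is an arbitrary choice:

```python
def apply_gravity(bars, new_values, decay=2.0):
    """Each bar snaps up to a higher new value, else falls by `decay` (floored at 0)."""
    return [max(new, max(bar - decay, 0.0))
            for bar, new in zip(bars, new_values)]

bars = [10.0, 4.0]
print(apply_gravity(bars, [3.0, 9.0]))  # falling bar eases down, rising bar snaps up
```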
Please login first to submit.
|
OPCFW_CODE
|
Myst was released back in 1993 and was one of the first games to make use of the CD-ROM, featuring pre-rendered backgrounds and video sequences. It turned out to be one of the best-selling PC games of its time. Myst: Masterpiece Edition was released seven years later in 2000, upgrading the graphics from 256 colors to 24-bit and adding a hint system, but leaving the game otherwise unchanged. Another version of the game was released the same year called realMyst, featuring a fully realtime environment instead of pre-rendered graphics. This review is about the Myst: Masterpiece Edition.
The game starts out without a big introduction. The player is dropped on the island called Myst, without an explanation, task or goal. A bit of exploration will lead to a short speech from Atrus, a person with the ability to create portals into other worlds in the form of books. The player then learns that something went wrong and that the books have been hidden around the island in safe places. Some further hints on the location of said books can be gathered in the central library of Myst, and the books themselves are hidden in structures around the island, including a spaceship, a regular ship, a large set of gears and a clock tower. The island itself is rather small, featuring not much more than these structures. The player's task is now to unlock each book and travel to the world it is connected to. In these worlds, which themselves are of a similar size to Myst itself, the player has to collect pages which are needed to complete books in the library and thus unlock the final puzzle.
Graphically the game is presented from a first-person perspective, featuring mostly static backgrounds with very little or no animation. While the graphics certainly show their age, they still manage to create a decent atmosphere thanks to the nice wind and water ambient sounds in the background. Navigation in the world can however at times be a bit confusing, as the game doesn't provide a proper map or compass and lacks transition animations. Often it isn't very clear if a turn to the left will turn you around 180 degrees or just 90, and thus it is not unusual to step past something or miss a turn.
The interface of the game is extremely minimalistic: all the interaction and navigation in the world happens via the mouse, the keyboard stays unused. The mouse cursor doesn't change when an item is usable, and an inventory is not present in this game. The few times where you have to pick up an item and use it, it is handled by changing your mouse cursor to that item. Without the ability to drop or store those items, it however becomes quite confusing as to what happens when you have grabbed multiple items at once. One noteworthy tweak to the regular pointing and clicking of the adventure genre is that in Myst you are not limited to only clicking things; if you want to pull a lever you have to click and drag it in the appropriate direction. It is a nice little touch that increases the immersion a good bit.
The save system in Myst is a bit weird, as it doesn't save your exact location, but always drops you back at the start of the given world. Accomplishments in the world are preserved, but one has to walk back to the point from where one left. This isn't much of an issue, as one can quickly travel around in Myst and be back to where one left in often just a few seconds, but it is an irritating thing nonetheless.
The puzzle design in Myst is often not that great, as the puzzles aren't built around logic or item combination, but instead focus on observation. A typical puzzle in Myst is one where you pull a lever or push a button and then have to figure out the effect that it had on the world. Cause and effect are often a few rooms apart, so it can get a little tedious at times to figure out what happened. Puzzles furthermore often come in the form of very basic key/door patterns: you see a code or pattern in one place and have to enter it in another to unlock a door or mechanism. Entering those codes can be a bit tedious and also feels uninspired, as none of the mechanisms in the world really has a purpose other than acting as a convoluted input mechanism for said code. Another annoying issue is that the library you visit in the very beginning of the game contains important hints for puzzles you will see only much later in the game, and without an inventory to put those books in, you basically have to transcribe those hints yourself, as you can't reach the library easily when you need to.
The thing that saves the puzzle design in the Myst: Masterpiece Edition however is its great hint system. A click at the bottom of the screen gives you access to a map of the island and location-specific hints in three levels, going from vague suggestions to basically a detailed solution. The hints are written in the style of a person accompanying you on your journey, so they integrate very smoothly into the whole experience without feeling out of place. This removes a lot of the tedious trial and error from the game and makes the game easily finishable without falling back on third-party walkthroughs. The hint system also provides some of the needed progress tracking that the game otherwise fails to provide.
Overall Myst is still a decent game, especially taking its age into account. The Masterpiece Edition, while technically not much of an improvement over the original, is a much more enjoyable experience than the original due to its hint system. The often lacking feedback on actions and the basically non-existent tracking of the tasks you have already accomplished however pulls the game down quite noticeably. So does the almost non-existent story. While the few bits of dialog you get are nicely delivered, they just aren't enough to give your journey much of a meaning, and the ending is kind of a letdown. You get another piece of dialog and then just get sent back to the island of Myst for some free exploration, which however is pointless at that point, as you have already seen everything of that island. A proper ending cinematic or even a simple credit roll is missing, giving the ending a very inconclusive feel.
Technical notes: The game frequently crashes when showing the transition animations between the different worlds when playing it in Windows 98. This was solvable by uninstalling QuickTime and then reinstalling the version from the Myst CD.
The Masterpiece Edition also contains a glaring bug: in a note for the final puzzle the word "on" got changed to "off", making the note confusing and unusable; the text provided by the hint system, however, gives the correct answer.
|
OPCFW_CODE
|
The free html to pdf converter offers most of the features the professional SDK offers; the only notable limitation is that it can only generate PDF documents up to 5 pages long. Ansible Tower API Guide Release Ansible Tower 2. · To keep it simple, I will create an Employee with minimum possible fields.
off with %PDF, but then contains arbitrary 'code' that will/may end up being executed by the pdf handler. · Grab the Perl PDF::API2 module from CPAN; grab a Perl titlecase script; write a script that opens the PDF, titlecases the title, and saves the PDF. Sounds fast and easy, right?
PDF::API2 version 1.80 has retroversions for some of the submodules.
· Adding watermarks to a PDF with Perl’s PDF::API2. Posted on Aug by Andy. For ages I’ve been trying to work out how to programmatically add a watermark to an existing PDF using the Perl PDF::API2 module. · If the PDF is multipage, only the first page will be extracted and used. · Set the default values of PDF form fields. perl -MCPAN -e shell install Mxpress::PDF.
3) You can handle massively large files, using a lot of CPU or just a tiny bit. It seems that Gtk2 was returning UTF-8 encoded strings, but for the date fields, current versions of PDF::API2 produce garbage unless ASCII is passed. · PDF::API3::Compat::API2::Basic::TTF::Font - Memory representation of a font. It is the default site when initially linking to the Portico Developer Guide.
6 (AFPL). Changed to Text::PDF with a total rewrite as Text::PDF::API (procedural); unmaintainable code triggered a rewrite. From apt show wkhtmltopdf:. 3 that interfere with my Perl 5. I could convert from HTML to PDF with Perl with PDF::WebKit, which in turn uses wkhtmltopdf. Per PDF::API2, add 64-bit field widths to cross-reference streams. cpanm Mxpress::PDF CPAN shell. (I have taken this sample to cover all types of files.) Internal Use Only Transactions 40 5.
So, I assumed what the poster wanted to know was: is there a way of pre-parsing an entire PDF file to determine that it is wholly to spec and truly a valid and clean PDF file. Any idea what the problem is here? All content in the PDF template will be included in every page of the final report. We cannot send the file from its original state.
Script must: - Have the same image on each label via GD - Generate a unique image for each label via GD - Generate Unique. · Introduction: Web API has been around for some years now. It's there if you look with a Browse, but isn't in the resulting email. But the field :strasse filled with Voßkamp is resulting in Voßkamp even though ß is also a special character (but a German Umlaut). · Download full source code. I use it every day at work and, along with a few other modules, it has made Perl an invaluable tool for me. In my case I wanted to load data from the database, perform some processing and return a subset of the data as a file. · To install Mxpress::PDF, copy and paste the appropriate command into your terminal.
Web API is very similar to. 80 has retroversions for some of the submodules:. Address Verification Service. It is the package of choice if creating new PDF documents from scratch. Special Processing Rules 41 5. " in the Subject and Keyword fields, but not the Title and Author fields. PDF::API2 is 'The Next Generation' of Text::PDF::API, a Perl module-chain that facilitates the creation and modification of PDF files. I tried replacing them with octal encoding (\351), which came out the same (Title and Author okay, Subject and Keywords messed up).
When you look at the metadata in the resulting PDF, the accented characters turn into "? Short History: First code implemented based on PDFlib-0. Api2Pdf runs wkhtmltopdf on AWS Lambda. It is a very efficient and lightweight technology to build RESTful web services in. Be sure to avoid overlapping PDF template content and report content. · For example, when we send the file type as PDF, the service will return a PDF file; if we send Doc, the service will return a Word document. Here is the regression test (you provide your own font).
PDF::API2::Simple, by Red Tree Systems, is a wrapper over the PDF::API2 module for users who find the PDF::API2 module too difficult to use. UTF8 flag in metadata date fields causes garbage: resolved: Normal: 1 year ago: 2. Alternate Service NameValuePair Response Fields 38-39 4.
2) It scales infinitely. The order in which you create the required minimum entities is critical. Since its initial release, I have found it easy to use to produce simple documents, covering every aspect of my PDF creation, from image contact sheets to relatively complex tabulated data. To install this plugin, search for its package name on the Plugin Store and click “Install”.
I’ve long been a user of PDF::API2, a module available for Perl. · PDF::API2, by Alfred Reibenschuh, is actively maintained. The name should be the full hierarchical name of the field as output by the getFormFieldList() function. Being able to generate PDFs for reports and documents allows for a bevy of automation shortcuts where traditional Data Merging, or.
This meant I needed to send something t. This didn't show up before because gscan2pdf was storing the dates internally as an integer of the seconds since. A PDF of this content is available here: Portico Developer Guide only pdf. Alternate Service Response Field Usage Detail 40 4. Run it once and then again on the output of the first run. The title "Portico Developer Guide" appears above the topic title on each of its pages. Looking for a script that will generate a PDF for a pre-defined Avery label 5390. Field: folderName (required) - The name of the MT Drive folder to add the user(s) access to.
Before accessing a report via the get_reports and run_report endpoints, you need to make your report accessible to the API. The characters are the same ASCII 233. Well, there were a few hitches: Perl on my work system is jacked, thanks to a bunch of Oracle files for Perl 5. In this topic we are going to show you how to convert DOCX to PDF using PHP. A workaround is to use the Report Text tool to create a field with the line break. PDF generation on a serverless architecture like AWS Lambda has several advantages: 1) You only pay for the processing power to generate the PDFs when you need it.
· When a name has German Umlauts like Andräs Müller, the fields in the PDF only have Andr s M ller. I'm calling it a bug in PDF::API2, but I have a workaround. NET MVC with its controllers and routing rules.
"Tim Roberts" wrote in message. With the free html to pdf converter from SelectPdf, you can convert any URL, HTML file or HTML string to PDF, with the possibility to add custom headers and footers. I've pored over the PDFMark reference, the PDF specification, and any examples I've bee.
PDF parsers are used in various fields, ranging from document management and document indexing to business process automation, with the goal of automatically extracting data from PDF files. Whether or not it is possible to successfully parse PDF files depends highly on the nature of the documents, and not all document types can be parsed. It features support for the 14 base PDF Core Fonts, TrueType fonts, and Adobe-Type1, with Unicode mappings, embedding o. We are going to use the following file as our input file:. There should be no differences between the outputs of the two runs. At first you think it's going to be easy to download a file from Web API, but as I discovered, it was not.
An array of font names (from the corefonts supported by PDF::API2) to set up. For whatever reason, when the field with the carriage return/newline is put into the body of an email using the Email Tool, the newline is ignored. PDF::API2::Basic::TTF::Font - Memory representation of a font. Alternate Service Request Field Usage Detail 37-38 4.
The exact same PDF form is used in another application based on Perl (with. Alternate Service NameValuePair Request Fields 35-37 4. · Download PDF::API2 for free. use PDF::API2; option -x forces overwriting existing metadata; option -f provides the field name which contains the linked PDF file's path.
Searches performed in this site will provide results for only this site. 55) If you want to produce PDFs, and have to use Perl, use this module. Convert DOCX to PDF Using PHP. The folder must be owned by the currently logged in user. I've been trying for days to get a CheckBox or Radio Button to render using PDF::API2 and haven't been able to.
|
OPCFW_CODE
|
Topic messaging with condition 'or' operator not working
For the front end I have used your sample JavaScript application.
I have built a Spring Java service using the API logic of Firebase Cloud Messaging.
Here is URL for my sample spring service https://github.com/petya0111/firebase-spring-service
Reproduce: set the request
Run project
POST http://localhost:8080/notification/messages
Headers:
firebase-server-key : **[your generated server key]**
Body:
{
"condition": " 'topic1' in topics || 'topic2' in topics ",
"title": "Hello,Via Multiple Topics",
"body": "Hello,Via Multiple Topics"
}
To send messages to a condition you must first create the topics.
Reproduce: create topic
POST https://iid.googleapis.com/iid/v1/{token}/rel/topics/{topic}
headers
Authorization : key=[firebase-server-key]
200 OK
Condition OR '||'
"condition": " 'topic1' in topics || 'topic2' in topics "
is not working. I have subscribed one token to topic 'topic1' and another token to topic 'topic2'.
Please, look at my problem.
Is there a workaround? In my case I want to prevent duplicated notifications. If a user subscribed to A and B, you can send them separately, but this will result in 2 notifications. I want to use condition = 'A' in topics || 'B' in topics. Unfortunately this does not work. The only solution I know right now is to add a unique ID to every notification and look for duplicates client-side and ignore them.
Not working for me either. Messages only delivered when sent without condition. Using kreait/firebase-php library.
This still appears to be an issue.
Any progress on this issue?
Any progress?
Sorry for the long delay. This is something that is usually project-specific and better handled with a support ticket. Please file a ticket by visiting https://firebase.google.com/support . Once you're there, scroll down to "Pick a category", choose "Push Notification Issues", and follow the prompts from there. Support has a bunch of debugging tools and should be able to get this fixed. If it turns out to be a more widespread issue, they'll report it directly to FCM engineers.
@jhuleatt ok, I have reported the bug in https://firebase.google.com/support
Thank you.
Best regards,
Petya Marinova
They answered me that it is a known issue... And we should wait to release the fix for OR operator
Best regards,
Petya
Some one has posted work around in Stack Overflow: https://stackoverflow.com/questions/48627928/fcm-bug-send-a-notification-to-multiple-topics-without-using-the-or-operator
Copying it here:
Update: So I've contacted FCM support and they helped me with a workaround using the AND (&&) and NOT (!) operators:
For example, you're trying to send a message to Topic A OR Topic B OR Topic C.
This condition can be converted to the suggested workaround by sending 3 messages which looks something like:
Topic A && !Topic B && !Topic C
Topic B && !Topic C
Topic C
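The AND/NOT rewrite above is mechanical, so it can be sketched as a small helper. This is illustrative Python, not part of any FCM SDK; the function name is made up, and the exact negation syntax accepted by FCM conditions should be double-checked against the messaging docs.

```python
def expand_or_condition(topics):
    """Convert "A OR B OR C" into equivalent AND/NOT conditions,
    one per message, so each device matches exactly one of them
    (the FCM support workaround described above)."""
    conditions = []
    for i, topic in enumerate(topics):
        parts = [f"'{topic}' in topics"]
        # Negate every topic that comes later in the list, so a device
        # subscribed to several of them only matches the first condition.
        parts += [f"!('{t}' in topics)" for t in topics[i + 1:]]
        conditions.append(" && ".join(parts))
    return conditions
```

Sending one message per returned condition then reaches every subscriber of A, B or C exactly once.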
The only partial solution I have found so far is to write the condition this way:
"'TopicNone' in topics && ('TopicA' in topics || 'TopicB' in topics)"
With TopicNone being a topic that all devices are subscribed to. Apparently if the OR (||) operators come after the AND (&&) operator, it works.
Is this ticket still active? I am facing the issue.
This repo is for the JS Quickstart and not for general troubleshooting. As a general rule, the JS SDK repo would be more appropriate for SDK related bugs, and support is the correct place to report issues with backend services.
Mentioned in this thread: it's a known bug with a workaround available.
|
GITHUB_ARCHIVE
|
With the support of the Uzbek embassy in Germany, a cooperation agreement in the frame of the Erasmus+ projects SPACECOM "Strengthening education in space systems and communications engineering" and NICOPA “New and Innovative Courses for Precision Agriculture” was signed between the Technical University of Berlin, Yeoju Technical Institute in Tashkent and EXOLAUNCH GmbH.
- Contract_Spacecom-TU-Berlin_EXOLAUNCH [1.47 MB] [2021-04-06 09:35:34]
The online Master Classes in the framework of the NICOPA project will start in January 2021. The Master Classes will be provided by Technische Universität Berlin (Berlin, Germany), Agricultural University Plovdiv (Plovdiv, Bulgaria) and Czech University of Life Sciences (Prague, Czech Republic).
Draft Agenda of Online Master Classes for January 2021:
- NICOPA Agenda Online Master Classes - Jan 2021 [212.95 KB]
The new master program has opened in TUIT based on the NICoPA project of the Erasmus+ program. Based on the Erasmus+ project "New and innovative courses for precision agriculture - NICoPA" (2018-2021), a new master program "Geoinformation systems and technologies" has opened in Tashkent University of Information Technologies named after Muhammad al-Khwarizmi from the 2020/2021 academic year. Prospective masters will study on a modern curriculum developed in collaboration with highly experienced European universities (Technical University of Berlin, Agricultural University Plovdiv, Czech University of Life Sciences Prague). Specialization modules such as Geoinformation systems, Remote sensing technologies and applications, SENTINEL-1-2-3 imagery processing, Computer vision, Web technologies for geo-portals, geo-services and geo-analytical systems, Precision agriculture basics, Artificial intelligence in geoinformation systems and other subjects are included in the curriculum. It is planned to open the new PAL (Precision Agriculture Lab) laboratory for master students to study the modules and conduct their research effectively using modern hardware and software tools.
We successfully conducted the first Training of our NICOPA project, which took place from 19th to 30th August at the Technical University of Berlin.
- NICOPA Training I - Agenda Draft [2.65 MB]
Webinar "Women in agricultural science and technology"
We invite you to join the webinar "Women in agricultural science and technology" organized by the CUPAGIS-Consortium. Please see for more details the attached flyer and agenda.
|
OPCFW_CODE
|
About MS-DOS 16-bit Turbo C++: how to extend the 64KB limits in 16-bit programming?
I mean, without swapping to the hard drive.
- JonathanLv 78 years agoFavorite Answer
16 bit coding under DOS basically breaks down into these categories:
1. .COM programs, where the code itself (without hard drive swapping) plus all constants and initialized data must fit inside of 64k. This limit exists because the .COM file format is a pure image, and while it is true that DOS allocates ALL of the available RAM for the program, it can only load up one 64k segment during program-load time. It's possible for the .COM program to access more memory than 64k, but it will all be uninitialized memory. In C/C++, these programs are called "tiny model" programs. The C/C++ compiler imposes MORE limitations, by requiring all code and constants and data, initialized or uninitialized, plus all heap and stack space to fit within 64k. This is NOT a DOS limitation, it's a compiler limitation. Technically, if you write in assembler, you can easily exceed this with .COM programs.
2. .EXE programs, which were only supported in DOS 2.0 and later. These files include patching information and other structures which allow DOS to properly allocate ONLY the needed ram for the program and to load it into memory when it is larger than 64k in code size (or data size.) These programs can have much more code than 64k, much more data as well, and so on. These programs are called "small, medium, compact, large, and huge." The C/C++ compiler will support these. The features are:
a. small model = one 64k data segment + one 64k code segment
b. medium model = one 64k data segment + more than 64k code
c. compact model = more than 64k data + one 64k code segment
d. large model = more than 64k data + more than 64k code
e. huge model = large model + individual arrays/structs bigger than 64k
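The 64k-per-segment figures in these models all fall out of the 8086's real-mode segment:offset addressing, where the physical address is segment * 16 + offset. A quick sketch of the arithmetic (plain Python just to illustrate the address math, not DOS code):

```python
def linear_address(segment, offset):
    """Real-mode 8086 physical address: segment shifted left 4 bits
    (i.e. multiplied by 16) plus the 16-bit offset."""
    return (segment << 4) + offset

# A single segment register can only cover offsets 0x0000..0xFFFF,
# i.e. 64 KiB -- hence the one-64k-segment limits in the models above.
# "Huge" model pointers renormalize the segment so individual arrays
# can span more than one such 64 KiB window.
```

For example, the classic VGA framebuffer at segment 0xA000, offset 0, sits at physical address 0xA0000.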
In addition, DOS later came to support a number of other standards to allow even greater access to memory. These included, VCPI, EMS, XMS, and DPMI, to name just a few. DOS also included a special mechanism which allowed calls into Windows, and that provided an entirely new method for providing additional support, including the ability to create and manage Windowed programs from DOS or to access uniquely Windows function calls.
There was also a semi-special class of programs called TSRs (terminate and stay resident) that usually provided additional services.
The way these things are done is complex. If you need specific details, I can provide very exacting details since this is an area I worked on, extensively. But I'd need to know more from you, first.
- ?Lv 45 years ago
Free, not sure. Minimal cost: go to a book store and find a C-compiler text book that includes the compiler; I prefer C++ by Microsoft -- an older version is better because its documentation is not so bloated and confusing. The 1st lesson will walk you through creating a DOS program. 16 bits is no big deal; that's the norm.
|
OPCFW_CODE
|
How To Generate Profiling Data
Andre Fischer and Carsten Driesner, July 11th, 2001. Last change on August 2nd, 2001.
The task that is to be solved is the generation of profiling data for a manually selected set of points in programs of the OpenOffice/Sun One Webtop family. Moreover this should be accomplished as automatically as possible, so that timings of different program versions can be compared to each other automatically.
The main reason for not using an existing tool like Truetime or Quantify is the huge amount of data generated by these programs. In general the developers of the different modules of the office know about the time critical parts of their code. Therefore only profiling informations about these code parts are of interest. Everything else only obscures the view to the relevant data.
With this in mind we use a different approach. Instead of blindly instrumenting---i.e. adding code that generates profiling data---all files, we concentrate on the areas of interest. The developers have to add, once and manually, commands into their code that emit time stamps together with context information. From then on everything else goes automatically. If those files are compiled with a certain preprocessor symbol defined and the office is run with a certain environment variable set, then profiling information is written to a log file (or, to be more precise, to one log file per process). These log files are then processed by a Perl script and are transformed into a Calc document.
The details of this process are explained below. Note that this is a work in progress. Especially the transformation of the log files into a Calc document will have to be adapted to the needs of those who use those documents.
Instrumenting the source files
The header file <rtl/logfile.hxx> (guarded by the include guard
_RTL_LOGFILE_HXX_) contains a set of
macros that can be used to emit time stamps at certain points in the
code. The reason for using macros instead of a more decent C++
construct is to provide a way of removing the code for writing
profiling data completely from the compiled executable and its
libraries. The macros wrap calls to the class ::rtl::Logfile
(declared in <rtl/logfile.hxx>) and the function
rtl_logfile_trace (declared in <rtl/logfile.h>). They generate
actual code only when at compile time the preprocessor symbol TIMELOG
is defined. This is either accomplished by building a special office
version (which will be the normal Pro-version) or by setting an
ENVCDEFS environment variable to include the define, i.e. -DTIMELOG.
Depending on the shell you use,
set ENVCDEFS=-DTIMELOG
will do that (4NT).
The macros can be used in two ways:
RTL_LOGFILE_CONTEXT_AUTHOR(instance,project,author,name) creates a context instance that can be referenced by
instance. The three remaining parameters contain the name of the project in which the macro is used, the Sun id of the author being responsible for the code in which the macro is used (for example af119097 for one of the authors) and the
name of the function or other scope that this context refers to. The context instance emits one time stamp when it is created and one when it is destroyed. Therefore, if placed as the first statement of a function definition, a time stamp is written to the log file when the function is entered and one when it is exited, even when there is more than one (implicit) return statement.
The name passed to the context instantiation is written with every time stamp that originates from such a context. This is also true for the following macros.
RTL_LOGFILE_CONTEXT_TRACE3(instance,message,arg1,arg2,arg3) writes time stamps with arbitrary messages to the log file. The actual message is given by the string
message. Together with zero to three arguments it is passed to a printf style output function. That means that for every argument there has to be a suitable % format string in
message. In order to allow the log file to be parsed by the existing Perl scripts, the message may not contain newlines. Each message is prefixed with the context name.
These macros exist also in a context free version.
RTL_LOGFILE_TRACE_AUTHOR3(project,author,message,arg1,arg2,arg3) does the same thing as its context twin except that the messages are not prefixed with a context name. As you can see, the project and author have to be given to each trace macro instead of just to the one creating a context. They are defined in <rtl/logfile.h>, which is included from <rtl/logfile.hxx>. Therefore, for only using the context free macros it is sufficient to include <rtl/logfile.h>.
For every macro introduced above there is an analogon without the
_AUTHOR suffix which does not accept the
author argument. Because both project and author are used
later in the post processing stage, we discourage you from using these
other macro versions.
Depending on which of these two ways is used to generate a time stamp, they are named function/scope time stamps respectively message time stamps.
Creating profiling information
If you have instrumented your code as described above and have
compiled and installed it, you can create profiling information by
starting the office with the bootstrap variable RTL_LOGFILE
set to a file name prefix. This prefix is completed by appending an
underscore, the process id and a .log suffix. Note that
backslashes have to be escaped by another backslash. The variable
RTL_LOGFILE can be set in one of the following ways:
Setting the environment variable
RTL_LOGFILE. This can for instance be done with
set RTL_LOGFILE=<file name prefix>
from a 4NT shell.
Passing the argument
-env:RTL_LOGFILE=<file name prefix> on the command line to the office.
Put an entry into the file <executable>.ini or <executable>rc (the first with, the second without a dot), where <executable> is the name of the executable. For more information on this method please refer to http://udk.openoffice.org/common/man/concept/uno_default_bootstrapping.html
Transforming the log files into Calc documents
Once you have created a log file you may want to convert it into a more readable form. There is a set of Perl scripts that create a Calc document from a log file that contains for every thread a set of pages with different views of the profiling data.
The first page shows a pretty printed version of the list of time stamps. They are indented according to the calling hierarchy. For each time stamp you can see the time at which it has been written (or, to be more precise, the time at which its writing has been requested), if applicable the time the function/scope took to compute, and the function/scope name and message.
The second page shows a list of all functions/scopes for which timing informations exist. Every list entry shows the function/scope name, total, minimal, maximal, and average time and the number of calls.
The list of time stamps can be filtered in order to reduce a large
amount of data to a manageable size and to exclude profiling
information from projects you are not interested in. There are two
filters. One is for explicit inclusion of time stamps, the other for
explicit exclusion. Each is initialized from a file containing
regular expressions (Perl style) given on the command line. The file
given after the -i switch defines those time stamps that are
to be included in the reports and the file after the exclusion
switch defines those time stamps that are to be excluded. The files
may contain empty lines or comment lines whose first character is a
'#'. An empty inclusion filter has no effect. If both filters
are specified, then a time stamp is written to the report if it
matches at least one of the regular expressions of the inclusion
filter and none of the exclusion filter.
The idea is to use the inclusion filter if you have just a small number of functions or messages you are interested in and to use the exclusion filter if there is only a small number you would not like in your reports.
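The actual post-processing is done by the Perl scripts mentioned above; purely as an illustration of the filtering rule just described (keep a time stamp if it matches at least one inclusion pattern and none of the exclusion patterns, with an empty inclusion filter having no effect), here is a sketch in Python:

```python
import re

def keep_time_stamp(line, include_patterns, exclude_patterns):
    """Decide whether a log line survives the two filters.

    - An empty inclusion list has no effect (everything is included).
    - Otherwise the line must match at least one inclusion regex.
    - In either case it must match none of the exclusion regexes.
    """
    if include_patterns and not any(re.search(p, line) for p in include_patterns):
        return False
    return not any(re.search(p, line) for p in exclude_patterns)
```

For instance, with the inclusion file containing only `BuildChart`, lines from other functions are dropped from the report.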
An inclusion file might look like this.
# Regular expressions matching all function names that shall appear
# in the reports.
# Show calls to BuildChart in the chart project.
BuildChart
The form and number of reports will of course have to be changed and extended. We are looking forward to your feedback.
Further documentation can be found in the C++ header files
and the Perl scripts in project
You can use for instance
pod2html to extract and
transform the Perl scripts' documentation into HTML documents.
Documentation of the time stamp format can be found here.
|
OPCFW_CODE
|
CCIA Background Checking Window
If you are in Canada and either age verifying or submitting information to CCIA, Fusion will often communicate with CCIA in the background from the server. For example, if you requested that 25 animals were age verified, Fusion would place them in the queue and then ask CCIA about each in turn. This window shows information about this queue and the background process. Most of the time you don't need to worry about this, but if Fusion encounters errors you can use this window to help track them down.
There are two queues that Fusion uses. One is for age verifying and the other is for move in and move out events. The age verifying queue is shown in this window on the left and the other on the right. If all is going well with a queue, Fusion will show its information on a green background. If there is an error, the background will be red.
When Fusion is processing a queue, if it encounters an error (for example, the internet is down) it will wait a few minutes and try several times. If it continues to get an error it will stop processing the queue and send a system message so you can open this window and take a look at it.
- In Queue. Shows how many tags are currently in the queue.
- Handled Today. Shows how many tags have successfully been handled so far today.
- Average Time Today. Shows how many seconds it has taken, on average, to process each tag so far today. This can give you an idea of how slow or fast the process currently is.
- Current Status. When things are working well, the message will be Okay. If the queue has been stopped because of an error, it will say Stopped-Error.
- Error Message. If an error has occurred, the error message will be shown here. The error message is usually the raw error message that CCIA sends back so it can be a bit difficult to understand, but they will know what it means if it came from them.
- Copy Raw Error To Clipboard. If an error is showing, you can use this button to copy the error into the system clipboard. It could then be pasted into an email, for example, to send the exact error message to support staff.
- Restart Processing. Once you feel the error has been resolved, click this button to start the queue. You may want to watch for several seconds to make sure it isn't encountering any more errors before closing the window.
The most likely things to go wrong are:
- Your internet is not working. Just restart the processing once your internet is working again.
- CCIA's internet or servers are not working or are down for maintenance. Just wait until CCIA's side is working again and then restart the processing.
- Your account credentials are incorrect. Go to the CCIA section of the Preferences window and make sure the username and password for each location is correct. Remember that both are case-sensitive. You can double-check your account credentials by trying to log into their website.
- Your CCIA account security options are not set up correctly. You need to have CCIA turn on certain security options for each location for Fusion to communicate with them. There are instructions for this in the CCIA section of Preferences Window.
You can open this window by going to .
|
OPCFW_CODE
|
If you’ve been following the news, or twitter, you’ll have noticed that the current pope, Pope Benedict XVI (pronounces Kss-vee) has decided to retire at the end of the month, to spend more time with his twitter account. Anyway, the Grauniad had an interactive thingy up, which, they suggested, “illustrates the idea that a long-serving pope is often followed by one whose papacy is much shorter”. Well, possibly. But you really need to do a bit more than stare at a fancy graphic. You actually need to do some analysis of the data.
Of course, first one has to grab the data. Then we can plot it:
and we can see that there is a lot of variation, and possibly some long-term trends (e.g. recently popes seem to have survived longer). But is a long-lasting pope followed by one who keels over quickly? Well, in statistical terms that would mean that lengths of papal reigns would be negatively auto-correlated. And we have a nice tool to look at that sort of thing: it’s called the autocorrelation function, or ACF. Our hypothesis is, in technical language, that the lag one ACF is negative.
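The lag-1 autocorrelation is just the correlation of the series with itself shifted by one step; a quick sketch of the standard estimate (in Python here, though the analysis in this post was presumably done in R's acf):

```python
import numpy as np

def lag1_acf(x):
    """Standard lag-1 ACF estimate: the lagged autocovariance of the
    mean-centred series divided by its variance (both unnormalized
    sums, as in the usual acf definition)."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d * d)
```

A negative value would support the "long reign followed by short reign" hypothesis; a value near zero or positive would not.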
We can calculate that, and we find that it is 0.15. So, it is (a) small, and (b) in the wrong direction. No p-values needed. But this might be an effect of the longer-term trends in the data. We can remove this by fitting a suitable smooth curve (a spline, for those who want to know), and look at that:
The pink line is the fitted line, with the (approximate) 95% confidence interval.
We can see the long-term trends: from about 600AD to 1100AD was not a good period for popes. And, but for a blip around the 15th Century life seems to be improving. But what about the lag 1 acf? Well, that is 0.02, so basically zero, and also still in the wrong direction.
All in all, I think this disproves the notion that long reigns are followed by short ones. Except, it might be that this has changed over time. If we only look at the residuals for popes who started their papacy after 1800 (i.e. the last 14 popes), we get an estimate of -0.09, which is at least in the right direction. Except that the approximate standard deviation is 0.29, so much larger. If we only look at the 9 popes who started poping after 1900, we get an acf of -0.62, with a standard error of 0.30. So we might just about have crept up to statistical significance (the z-statistic is -2.1, so less than -1.96 for 5% significance), but (a) the sample size is small, (b) the significance is marginal, and the large-sample approximation used may be way off, and (c) I’ve had to poke around a bit to get to something which might be marginally significant, so there is a certain amount of data dredging: looking for rubies in the rubbish and not stopping until I find one.
All in all, a pope will not spring eternal. Sorry.
|
OPCFW_CODE
|
Remember that meeting where you had a great idea? You thought of a way to improve a topic in your job. Or even better, a solution to an ongoing problem, something you knew needed to be addressed.
So, during the meeting, you explain your great idea. It’s so damn clear inside your head! Yet, you don’t get the buy-in. Or even worse: some say “fine”, but nothing happens in the following days.
Have you ever lived a similar experience?
Yeah, who doesn’t, right?
Thing is, and I discovered it the hard way, a problem existing is not enough reason to trigger action towards solving it.
“What?” you might say. “If I have a problem and someone offers me a solution, you bet I would implement it!”.
Well, that’s not true. Sorry about that.
You identify a big problem. You find a quite well-crafted solution. You try to sell it to people who, allegedly, suffer from the issue. And bam!… Nothing happens.
Mental models behind convincing people
A mental model is a mental representation of how something works: our mind, ourselves, the World, everything. We use models to help us represent the actual reality surrounding us. We need to simplify it and come up with a bite-sized version of that reality.
So, every model is inherently wrong. But, as the aphorism says, all models are wrong but some are useful.
They are so powerful everyone should be aware of them — after all, we use them. All the time. The key difference is knowing you do so and using it to your advantage.
The power behind mental models is that they apply across several fields of knowledge. They come from economics, physics, biology, philosophy, whatever. But they can be easily mapped onto other disciplines.
So, let’s explore what they say about convincing people.
During my time as a member of an online marketing team, I learned about the model known as "B = MAT", the Behavior Model created by Dr. BJ Fogg. It states the following:
In order for a behavior to happen, three conditions must be fulfilled.
A motivation to do the behavior, the ability to do it, and a trigger that prompts the action. When a behavior does not occur, at least one of those three elements is missing.
So we need three ingredients.
We need some trigger to put everything in motion. This trigger could be anything. In software, think of a bug that takes a team several hours to fix, a project lost over a misunderstanding, or the announcement of someone leaving: things that might make you want to change something.
Ability refers to the individual actually being able to do something about it. The easier the task, the more likely it will get done. You need a lot of motivation (or a hell of a trigger) to start something that requires a lot of effort, while an easier task will likely happen even with a weaker trigger or less motivation.
Finally, you need some motivation to start the whole process.
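The three ingredients above can be sketched as a toy function. To be clear, the numeric threshold and the multiplication of motivation by ability are my own illustrative assumptions, not Fogg's actual formulation (his model is a curve, not a formula), but they capture the core idea that all three elements must be present at once:

```python
def behavior_occurs(motivation, ability, trigger_present, threshold=1.0):
    """Toy version of B = MAT: a behavior fires only when a trigger arrives
    while the combination of motivation and ability clears an action threshold."""
    return trigger_present and (motivation * ability) >= threshold

# High motivation but a hard task: the trigger alone is not enough.
print(behavior_occurs(motivation=0.9, ability=0.5, trigger_present=True))   # False
# Make the task easier and the same trigger now works.
print(behavior_occurs(motivation=0.9, ability=1.5, trigger_present=True))   # True
# Without a trigger, nothing happens regardless of motivation and ability.
print(behavior_occurs(motivation=2.0, ability=2.0, trigger_present=False))  # False
```

The second call is the practical takeaway: if you cannot raise someone's motivation, lowering the effort required often gets the same behavior to happen.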
And talking about motivation…
Daniel Pink, in his book Drive, identifies three basic intrinsic motivators:
- The willingness to master our craft.
- The autonomy of doing what we think is best, the way we think is best.
- The purpose of the task we want to do.
(Remember that Pink talks about intrinsic motivators. An extrinsic motivator would be getting paid or, well, not getting hit with a whip).
So, if you want to change someone else's behavior, and not merely force them to do something, you should keep intrinsic motivators in mind. And sell them as incentives.
Talking about incentives…
Selling a Design System
While reading the Smashing Magazine book #6 (a must-read, if I may), the first chapter caught my eye. Laura Elizabeth writes there about Design Systems, and how to "sell" them both to your colleagues and to the people with resources (a.k.a. money and time) to finance the project.
She points to three main topics that need to be addressed to get a buy-in from the stakeholders:
- Resources. What will we need to put things in motion and support them all along the way?
- Incentives. What are the potential benefits of a Design System?
- Consequences. What will happen if we don’t adopt a Design System?
I don’t want to spoil the whole text Laura wrote (read the book 😜), but what she described as consequences reminded me of opportunity cost. See what I did there? Another mental model has entered the room.
Imagine that someone offers you two envelopes with money, one containing twice as much as the other. Don’t worry, this is not the Two Envelopes Problem.
You might pick whichever envelope you want, and I’m quite sure your mind will start doubting. “Hey, folk. You sure about this? Don’t you want to pick the other one? We might not be winning the bigger pot!”. That’s our mind being played by opportunity cost.
Bear in mind the cost of not adopting an idea/tool/whatever, not only the direct benefits of doing so.
You see, I love when ideas add up and sustain each other like a latticework. We talked about online marketing, psychology, and design systems. We combined learnings from several sources and ended up seeing that they all boil down to basic principles.
All this exploration reminded me of the First Principles mental model. Philosophy over here, now.
This is the power of mental models: simple but powerful frames that you use as building blocks to create more complex ideas. The same thing we do nowadays to create User Interfaces using VueJS or React. What? Connecting the dots again?
Have you ever noticed any other pattern that you might use in several knowledge fields?