Hi there! In this instructable I will show how I made a really simple Bluetooth Low Energy presence detector. Using my smart wristband and a relay, I was able to control the lights of my room: every time I go in, the light turns on, and if I leave the room or cut the Bluetooth connection, the lights turn off.

Step 1: Parts
I'm using an ESP32 Feather, but any other ESP32 board will work.
- 1 5V relay
- 1 TIP31C transistor
- 1 BLE server device (any beacon device)
The TIP31C is meant to drive the relay, because the 3V3 digital outputs of the ESP32 do not provide enough voltage and current. The relay controls the 120V lights, and the wristband is what lets the board detect the presence of the person.

Step 2: Circuit
This is really simple: pin 33 of the ESP32 goes to the base of the transistor. With this we can switch the 5V VCC supply, controlling a bigger voltage with the 3V3 output, and then with the relay we can control the 120V of the light.

Step 3: Code

#include "BLEDevice.h"

int Lampara = 33;            // pin driving the transistor/relay
int Contador = 0;

static BLEAddress *pServerAddress;
BLEScan* pBLEScan;
BLEClient* pClient;
bool deviceFound = false;
bool Encendida = false;
bool BotonOff = false;

String knownAddresses[] = { "your:device:mac:address" };
unsigned long entry;

class MyAdvertisedDeviceCallbacks: public BLEAdvertisedDeviceCallbacks {
  void onResult(BLEAdvertisedDevice Device) {
    //Serial.print("BLE Advertised Device found: ");
    //Serial.println(Device.toString().c_str());
    pServerAddress = new BLEAddress(Device.getAddress());
    bool known = false;
    for (int i = 0; i < (sizeof(knownAddresses) / sizeof(knownAddresses[0])); i++) {
      if (strcmp(pServerAddress->toString().c_str(), knownAddresses[i].c_str()) == 0) known = true;
    }
    if (known) {
      Serial.print("Device found: ");
      Serial.println(Device.getRSSI());
      // The signal must also be strong enough (i.e. the device is close)
      deviceFound = (Device.getRSSI() > -85);
      Device.getScan()->stop();
      delay(100);
    }
  }
};

void setup() {
  Serial.begin(115200);
  pinMode(Lampara, OUTPUT);
  digitalWrite(Lampara, LOW);
  BLEDevice::init("");
  pClient = BLEDevice::createClient();
  pBLEScan = BLEDevice::getScan();
  pBLEScan->setAdvertisedDeviceCallbacks(new MyAdvertisedDeviceCallbacks());
  pBLEScan->setActiveScan(true);
  Serial.println("Done");
}

void Bluetooth() {
  Serial.println();
  Serial.println("BLE scan restarted...");
  deviceFound = false;
  BLEScanResults scanResults = pBLEScan->start(5);
  if (deviceFound) {
    Serial.println("Encender Lampara");
    Encendida = true;
    digitalWrite(Lampara, HIGH);
    Contador = 0;
    delay(10000);          // keep the light on for 10 s before rescanning
  } else {
    digitalWrite(Lampara, LOW);
    delay(1000);
  }
}

void loop() {
  Bluetooth();
}

Step 4: PCB for Light Control
I made this circuit on a prototype PCB to make things cleaner.

Step 5: Done
And you are done! You can use this code to open doors instead, or to control different things. I hope you like my instructable, and if you have any questions leave me a comment or send me an inbox; I'll be happy to answer.

11 Discussions

4 months ago
I have the same idea for a project: an adjustable delay before the relay switches off when the Bluetooth device goes missing.
Reply

4 months ago
Nice! Now the interesting part is what you connect to the relay, or what you activate when you are near the sensor.
Reply

4 months ago
Uses for a Bluetooth relay: if I leave home I want the TV to not turn on; children should not watch everything on TV without me. If I leave the workplace, my monitor automatically turns off with the help of a relay. The room in which I often go in and out, but where strangers should not go if I am not around? The door is automatically locked if I move away from this place.

7 months ago
That is what I have searched for for a long time. I am not a professional programmer, and projects like this help me to create my own projects. What will I make from this project? For a long time I have had an idea for a good car immobiliser. Now is the time to realize it. With a little added code I will make it work with 3-4 MACs (fitness bands or iBeacons), and I will add deep sleep to the code too.
When you come to the car and open it (mine is keyless), a CAN-BUS wakeup signal from the car will wake the ESP32 from deep sleep, and it will check whether a desired MAC address is present inside a range defined by RSSI. If it is present, it will allow starting the car. The whole procedure from unlock to allowing the start takes between 0.8 and 1.5 seconds. For more security, and so it can work with Android phones (because right now it won't), it will check a UUID instead of a MAC address. Why doesn't it work with Android units? Because from version 5 or 6, for more security, BLE in Android starts every time with a different randomly generated MAC address. UUIDs are used for identifying different services and are unique. Sorry for my bad English :( If anyone can help with an application for Android with a widget button that will send a specific UUID via BLE, you are welcome. Thanks to Lindermann95. Regards

Reply 6 months ago
Hi there, I'm sorry, I haven't been around lately, but this car application is one of my plans too. That's a good idea; the random MAC generation is like the one Apple uses on Wi-Fi connections. Maybe that could be the answer. Using Wi-Fi... but I haven't worked that much with Wi-Fi, so BLE would be the suitable option at the moment. I'll work on this project and I'll keep in touch with you. Have you already done it?

Reply 6 months ago
Hi, I made it and it works fine at this time. For authorization I use a UUID, because the MAC is different every time you start advertising a service; this is in the BLE standard. Now I use an application that creates a GATT server with a UUID chosen by me. I will write more in a few days, because this is a hobby for me and right now I have a lot of professional work. Regards

7 months ago
I see that you use the ESP32 BLE Arduino library by Neil Kolban. Is that right? Do you use the Arduino IDE for this project or another one? Regards

Reply 7 months ago
Yes! I'm using the Neil Kolban library in the Arduino IDE. I think the code didn't paste the way I wanted... but let me change it and add some comments. Let me know if you have more questions.

Reply 7 months ago
Thank you for the fast answer. OK, that's fine. From the code I assumed it was written for Arduino, but I was not 100% sure. If it is on PlatformIO then it will have to include Arduino.h and all will be clear. Regards

Question 7 months ago
Hi, you added one piece of source code, but which platform is this code for? OK, I see you include the library BLEDevice.h, but from where? I am interested in this project and it would be good to shed more light on it :)) Regards

Answer 7 months ago
Hi there! (I'll answer in the other comment)
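The core decision in the sketch above, stripped of all the BLE plumbing, is just one check: a known MAC address seen with an RSSI above the -85 dBm threshold. Here it is as a Python sketch for illustration (the MAC address is a placeholder, and the (mac, rssi) pair format is an assumption of this sketch, not part of the original code):

```python
# Placeholder known-device list and the RSSI threshold used in the sketch.
KNOWN_ADDRESSES = {"aa:bb:cc:dd:ee:ff"}
RSSI_THRESHOLD = -85  # dBm; closer devices have higher (less negative) RSSI

def lamp_should_be_on(scan_results):
    """scan_results: iterable of (mac, rssi) pairs from one BLE scan pass."""
    return any(mac in KNOWN_ADDRESSES and rssi > RSSI_THRESHOLD
               for mac, rssi in scan_results)
```

For example, `lamp_should_be_on([("aa:bb:cc:dd:ee:ff", -70)])` is True, while the same device at -90 dBm, or an unknown device at any strength, leaves the lamp off.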
https://www.instructables.com/id/ESP32-BLE-Presence-Detector/
Actually, for most file formats I've encountered it's pretty easy for the conversion program to infer the byte order used by a file from the first few bytes of data (e.g. a magic number) -- presuming the conversion program knows what the data is supposed to look like... In general users needn't know what byte order a file is in -- the hard part is knowing what the format is in the first place. Maybe "magic" (a replacement for file(1) written in Perl just posted on the net) can help them with that -- although for me it just said ``Undefined subroutine "main'BufMatch" called at magic.pl line 429.''

>Actually, you have a point. I don't expect them to know what byte order
>is, but it is reasonable for them to know that what they are trying to
>do is convert IBM-PC files to Sparc/Unix files of some type. So I reject
>the idea of an option like 'dd's "conv=swab", but something like
>"-from pc" is reasonable. But I still want to keep the "low-level" code
>portable.

Given some table mapping architectures to byte orders you could add a "-to sparc" option. But of course the "dd" example is naive: no file format consists of arrays of words that can be byte-swapped like that; in reality you will have to write code that understands the data and knows when to swap words or longs or nothing.

>Also, I WILL admit that *I* don't know the byte order of various machines
>off the top of my head. But I *do* know that there is a "network byte
>order" and conversions to/from that and native byte order.

Reading a word in network byte order is easy in Python:

def rwordn(fp):
    s = fp.read(2)
    if len(s) < 2: raise EOFError
    hibyte, lobyte = ord(s[0]), ord(s[1])
    return hibyte<<8 | lobyte

Writing is of course just as easy. There's no need to know the byte order used by Python here. This assumes network byte order is big-endian (hibyte first), which is what the Internet specifies -- but who knows what other networks use?
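A little-endian counterpart follows the same pattern; sketched here in Python 3 syntax (where indexing a bytes object yields ints, so ord() is no longer needed), together with the equivalent using today's struct module:

```python
import struct

def rwordl(fp):
    """Read a 16-bit word stored little-endian (low byte first)."""
    s = fp.read(2)
    if len(s) < 2:
        raise EOFError
    lobyte, hibyte = s[0], s[1]   # bytes indexing yields ints in Python 3
    return hibyte << 8 | lobyte

def rwordn_struct(fp):
    """The big-endian (network-order) read, expressed with struct.
    ">H" means big-endian unsigned 16-bit; "<H" would be little-endian."""
    return struct.unpack(">H", fp.read(2))[0]
```

On the two-byte input 01 02, rwordl() returns 0x0201 while rwordn_struct() returns 0x0102, making the difference in byte order explicit.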
You can write similar routines rwordb() and rwordl() to read words in big and little endian order -- rwordb() will of course be identical to rwordn().

>Where pathname need to be written into a program, then it is not
>unreasonable to expect the programmer to explicitly use a
>"path.to_native" conversion function, to indicate that he doesn't
>actually MIND if the pathname get's munged up ( truncated | character
>translated | etc. ) as long as there is a determinate mapping.
>( for practical purposes one-to-one )

I don't understand this. Pathnames hardcoded into programs are almost always things like /etc/termcap, /usr/tmp or the equivalent of $HOME/.mh_profile. How would you expect a path.to_native for Mac or MS-DOS to translate these? You can *never* expect to be able to move a program containing hardcoded pathnames to such a system without having to edit them. Filenames (i.e. no slash on UNIX) are a different matter, but even there the choice of names usually has to be revised by a human being when moving to a different O.S., e.g. names beginning with a . are not usable on MS-DOS.

>I think we are in agreement on:
>(1) There will be ( for example ) a 'path' module for unix|dos|mac|etc.
> that will attempt to hide or at least isolate machine differences.
>(2) That portable code should only need to import 'path', and not
> need to figure out which 'specific' module ( unixpath|macpath|dospath )
> it needs to load. [ I don't care what mechanics we choose to do this,
> as long as we can hide the machinery! ]
>(3) But searching for module dependencies and renaming files is not
> the preferred solution to the above. [ The machinery here is
> painfully visible, even if only visible to ONE person ( the site
> maintainer/installer of Python. ]

Yes. Yes! YES!!!
I will try to put a lot of this into the next release, or at least make a decent attempt (after all I routinely move Python code between a Mac and UNIX so I have ample opportunity to test it in two totally different environments).

>[ BTW: I have managed to get my packet drivers working on my 486 PC,
> So I can telnet the python sources onto it. I have the Gnu C compiler
> PC port installed, So I hope to start porting Python *REAL SOON* ]

That's good news!

--Guido van Rossum, CWI, Amsterdam <guido@cwi.nl>
"This is an ex-parrot"
http://www.python.org/search/hypermail/python-1992/0317.html
#include <CdbOrigin.hh>

List of all members. More details to come... Definition at line 23 of file CdbOrigin.hh.

[protected] The normal constructor. Initializes the context with the specified parameters. CdbDatabasePtr Definition at line 86 of file CdbOrigin.cc.

[protected, virtual] The destructor. More details... Definition at line 94 of file CdbOrigin.cc.

[pure virtual] Obtain the creation time of the origin. This method is supposed to be implemented by the corresponding subclass.

Obtain the origin description.

The identifier of an object. Definition at line 110 of file CdbOrigin.cc.

[static] The static locator for an origin object (by id). This locator is similar to the one defined above; it just uses an identifier instead of a name. Definition at line 52 of file CdbOrigin.cc. References CdbStatus::Error, CdbDatabase::instance(), and CdbStatus::Success.

[static] The static locator for an origin object (by name). This locator uses the name of the origin to resolve the right instance of the origin object in the scope of a database. The database itself as well as the top-level API object are two optional parameters of this method. If either (or both) of them is not specified then the corresponding default values will be used. CdbOriginPtr Definition at line 17 of file CdbOrigin.cc.

Check if data corresponding to this origin are available in the local database. A positive answer means that the data from the remote database have already been brought into the local database. This method is supposed to be implemented by subclasses.

Check if this origin is the local one. The "local" origin is the one corresponding to the part of a distributed database setup currently being used by a client application. This database can also be called the "local" one. Depending on the origin's type, "local" databases (= origins) may have persistent resources modifiable by client applications. The actual set of those resources depends on the origin's type.

Check if this origin is the "master" one. The "master" origin is meant to describe the central database of a distributed database setup. Certain database operations can only be performed in this central "master" database. This method is supposed to be implemented by subclasses.

Check if this origin is a "slave" one. The "slave" origin is meant to describe a database that is part of a distributed database setup. The database of the "slave" origin is allowed to contribute data into the distributed database so that these data can be seen and used by clients of other databases in the setup. The "slave" origin is known to its central "master" database. The "master" may delegate certain operations to its "slave"s.

Check if this origin is a "test" one. The "test" origin is meant to describe a database NOT being part of a distributed database setup. The database of the "test" origin is NOT allowed to contribute data into the distributed database in the way it's done by "slave" databases. However, "test" type databases are allowed to produce their own local data to be consumed locally. It is also allowed to import data from the corresponding distributed database.

The name of an object. Definition at line 104 of file CdbOrigin.cc.

Return a smart pointer to the parent database object. Definition at line 98 of file CdbOrigin.cc.

Set up an iterator of properties. Specific implementations of this origin class may supply technology- and implementation-specific properties.

[friend] Definition at line 25 of file CdbOrigin.hh.
http://www.slac.stanford.edu/BFROOT/www/Public/Computing/Databases/experts/cdb/doxygen/recent/html/classCdbOrigin.html
commits [1] representing numerous major changes.

Release history:
- Eigen 3.3 alpha-1 was released on September 4, 2015. It includes about 1660 commits since 3.2. It includes all bug-fixes and improvements of the 3.2 branch up to the 3.2.5 version, as detailed in the respective change-logs: 3.2.1, 3.2.2, 3.2.3, 3.2.4, 3.2.5.
- Eigen 3.3 beta-1 was released on December 16, 2015. It includes about 350 commits since 3.3 alpha-1, as detailed in the respective change-log.

Expression evaluators
In Eigen 3.3, the evaluation mechanism of expressions has been completely rewritten. Even though this is really a major change, it mostly concerns internal details and most users should not notice it. In a nutshell, until Eigen 3.3, the evaluation strategy of expressions and subexpressions was decided at the construction time of expressions in a bottom-up approach. The novel strategy consists in completely deferring all these choices until the whole expression has to be evaluated. Decisions are now made in a top-down fashion, allowing for more optimization opportunities, cleaner internal code, and easier extensibility. Regarding novel expression-level optimizations, a typical example is the following:

MatrixXd A, B, C, D;
A.noalias() = B + C * D;

Prior to Eigen 3.3, the "C*D" subexpression would have been evaluated into a temporary by the expression representing the addition operator. In other words, this expression would have been compiled to the following code:

tmp = C * D;
A = B + tmp;

In Eigen 3.3, we can now have a view of the complete expression and generate the following temporary-free code:

A = B;
A.noalias() += C * D;

Index typedef
In Eigen 3.3, the "Index" typedef is now global and defined by default to std::ptrdiff_t:

namespace Eigen { typedef std::ptrdiff_t Index; }

This "Index" type is used throughout Eigen as the preferred type for both sizes and indices. It can be controlled globally through the EIGEN_DEFAULT_INDEX_TYPE macro.
The usage of Eigen::DenseIndex and AnyExpression::Index is now deprecated. They are always equivalent to Eigen::Index. For expressions storing an array of indices or sizes, the storage type can be controlled per object through a template parameter. This type is consistently named "StorageIndex", and its default value is "int". See for instance the PermutationMatrix and SparseMatrix classes. Warning: these changes might affect code that used the SparseMatrix::Index type. In Eigen 3.2, this type was not documented and it was improperly defined as the storage index type (e.g., int), whereas it is now deprecated and always defined as Eigen::Index. Code making use of SparseMatrix::Index will thus likely have to be changed to use SparseMatrix::StorageIndex instead.

Vectorization
- Eigen 3.3 adds support for the AVX (x86_64), FMA (x86_64) and VSX (PowerPC) SIMD instruction sets.
- To enable AVX or FMA, you need to compile your code with these instruction sets enabled on the compiler side, for instance using the -mavx and -mfma options with gcc, clang or icc. AVX brings up to a 2x speedup for single and double precision floating point matrices by processing 8 and 4 scalar values at once, respectively. Complexes are also supported. To achieve best performance, AVX requires 32-byte aligned buffers. By default, Eigen's dense objects are thus automatically aligned on 32 bytes when AVX is enabled. Alignment behaviors can be controlled as detailed in this page.
- FMA stands for Fused-Multiply-Add. Currently, only Intel's FMA instruction set, as introduced in the Haswell micro-architecture, is supported, and it is explicitly exploited in matrix products, for which a 1.7x speedup can be expected.
- When AVX is enabled, Eigen automatically falls back to half register sizes for fixed-size types which are not a multiple of the full AVX register size (256 bits). For instance, this concerns the Vector4f and Vector2d types.
- New in beta1: Enable fixed-size alignment and vectorization on ARM.
- New in beta1: Add vectorization of round, ceil, floor for SSE4.1/AVX.
- Many ARM NEON improvements, including support for ARMv8 (64-bit code), VFPv4 (fused-multiply-accumulate instruction), and correct tuning of the target number of vector registers.
- Add vectorization of exp and log for AltiVec/VSX.
- Add vectorization of Quaternion::conjugate for SSE/AVX.

Dense products
- The dense matrix-matrix product kernel has been significantly redesigned to make the best use of recent CPU architectures (i.e., wide SIMD registers, FMA).
- A "rotating" kernel variant has been added for better performance on ARM CPUs (especially Qualcomm Krait).
- The heuristic to determine the different cache-level blocking sizes has been significantly improved.
- Reasonable defaults for cache sizes have been added for ARM, where we generally cannot query them at runtime.
- The overhead for small products of dynamic sizes has been significantly reduced by falling back at runtime to a coefficient-based product implementation (a.k.a. lazyProduct).
- The criterion to switch to lazyProduct is: (m+n+k) < 20.
- Enable Mx0 * 0xN matrix products.

Dense decompositions
- Eigen 3.3 includes a new divide-and-conquer SVD algorithm through the BDCSVD class. This new algorithm can be more than one order of magnitude faster than JacobiSVD for large matrices.
- Various numerical robustness improvements in JacobiSVD, LDLT, LLT, 2x2 and 3x3 direct eigenvalues, ColPivHouseholderQR, FullPivHouseholderQR, and RealSchur.
- FullPivLU: the pivoting strategy can now be customized for any scalar type.
- New in beta1: Add the LU::transpose().solve() and LU::adjoint().solve() API (doc).

Sparse matrices
- MappedSparseMatrix is now deprecated and replaced by the more versatile Map<SparseMatrix> class. New in beta1: Ref<SparseVector> is supported too.
- Add support for Ref<SparseMatrix>.
- Add OpenMP parallelization of sparse * dense products. Currently, this is limited to row-major sparse matrices.
- New in beta1: Extend the setFromTriplets API to allow passing a functor object controlling how to collapse duplicated entries.
- New in beta1: Optimise assignment into a sparse.block() such that, for instance, row-by-row filling of a row-major sparse matrix is very efficient.
- New in beta1: Add support for dense.cwiseProduct(sparse), thus enabling (dense*sparse).diagonal() expressions.
- New in beta1: Add support for the direct evaluation of the product of two sparse matrices within a dense matrix.

Sparse solvers
- Add a LeastSquaresConjugateGradient solver for solving sparse problems of the form argmin_x |A x - b|^2 through the normal equation but without forming A^T A.
- Improve robustness of SimplicialLDLT to semidefinite problems by correctly handling structural zeros in AMD reordering. Very useful for solving SPD problems with equality constraints.
- Add OpenMP support in ConjugateGradient, BiCGSTAB, and LeastSquaresConjugateGradient. See the respective class's doc for details on the best use of this feature.
- ConjugateGradient and BiCGSTAB now properly use a zero vector as the default guess.
- Allow Lower|Upper as a template argument of ConjugateGradient and MINRES: in this case the full matrix will be considered. This also simplifies the writing of matrix-free wrappers to ConjugateGradient.
- Improved numerical robustness in BiCGSTAB, SparseLU, SparseQR and SPQR.
- Add a determinant() method to SparseLU.
- Improved handling of inputs in both iterative and direct solvers.
- New in beta1: Improve support for matrix-free iterative solvers.
- New in beta1: Add access to UmfPack return codes and parameters.

Experimental CUDA support
Starting from Eigen 3.3, it is now possible to use Eigen's objects and algorithms within CUDA kernels. However, only a subset of features are supported, to make sure that no dynamic allocation is triggered within a CUDA kernel.
Unsupported Tensor module
Eigen 3.3 includes a preview of a Tensor module for multi-dimensional arrays and tensors in unsupported/Eigen/CXX11/Tensor. It provides numerous features including slicing, coefficient-wise operations, reductions, contractions, convolution, multi-threading, CUDA, etc. This module is mainly developed and used by Google.

Miscellaneous
- Various numerical robustness improvements in stableNorm(), Hyperplane::Through(a,b,c), and Quaternion::angularDistance.
- Add numerous coefficient-wise methods to Array: isFinite, isNaN, isInf, arg, log10, atan, tanh, sinh, cosh, round, floor, ceil, logical not, and a generalized pow(x,e). New in beta1: sign, rsqrt, lgamma, erf, and erfc.
- New in beta1: Add support for row/col-wise lpNorm().
- Add a determinant() method to PermutationMatrix.
- EIGEN_STACK_ALLOCATION_LIMIT: raise its default value to 128KB, make use of it to assert on maximal fixed-size objects, and allow it to be 0 to mean "no limit".
- Conversions from Quaternion to AngleAxis now implicitly normalize the input quaternion.
- Improved support for C++11 while preserving full compatibility with C++98.
- Eigen 2 support code has been removed. If you plan to migrate from Eigen 2 to 3, it is recommended to first do the transition using the facilities offered in Eigen 3.2, drop all deprecated Eigen 2 features, and then move to Eigen 3.3.
- The BLAS interface library does not require a Fortran compiler anymore.
- The LAPACK interface library has been extended with the following two SVD routines: ?gesdd, ?gesvd.
http://eigen.tuxfamily.org/index.php?title=3.3&oldid=2025
Latest revision as of 01:05, 19 March 2013
- Send some patches for python3 compatibility upstream. Done, see here
- Wait for RHBZ bug 889784 (python issue 16754) to be fixed (a Python 3 distutils bug: it uses the wrong extension when searching for shared objects). Done
- Review finished. Done
- Notify maintainers of packages depending on PIL. Done
- Almost all of the packages have been ported as well (a few are waiting on upstream releases)
- Push package to repositories. Done
- Block python-imaging: Done, see the rel-eng ticket.
- The python-pillow package will Obsolete and Provide python-imaging, so there's no need for dependent packages to change their Requires line at this time.

How To Test
- Install python-pillow from the rawhide repositories (as of writing, compilation of the python3 variant is disabled due to RHBZ bug 889784). -k instead of simply import <Module>

This change does not break backwards compatibility with the legacy PIL.
https://www.fedoraproject.org/w/index.php?title=Features/Pillow&diff=cur&oldid=326614
I want to do a matchTemplate:

from mss import mss
import cv2
import numpy

with mss() as sct:
    screenshot_numpy = numpy.array(sct.shot())
template = cv2.imread('./templates/player.png')
result = cv2.matchTemplate(screenshot_numpy, template, cv2.TM_CCOEFF_NORMED)

Traceback (most recent call last):
  File "main.py", line 14, in <module>
    result = cv2.matchTemplate(screenshot_numpy,template,cv2.TM_CCOEFF_NORMED)
TypeError: image data type = 18 is not supported

The problem is that sct.shot() saves the screenshot to disk and returns the filename, so numpy.array() is being called on a string, which produces an array dtype that OpenCV cannot use. From the mss examples page:

img = numpy.array(sct.grab(monitor))

So here we can see the .grab() method to get the raw pixel data from the screen. sct.grab() returns a ScreenShot object exposing the raw pixels, and numpy.array(...) converts it into a numpy ndarray. Check the numpy ndarray dtype after you convert; e.g. if your code is ndarray_img = numpy.array(sct.grab(monitor)), then check ndarray_img.dtype. If it's np.uint8 then you're done. If it's np.uint16, then you'll have to divide by 256 and convert to np.uint8 with ndarray_img = (ndarray_img // 256).astype(np.uint8). Further down you'll see another example which flips the R and B channels of the image:

cv2.imshow(title, cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

except this is actually backwards. It really doesn't matter, because either way it's just swapping the first and third channel, so BGR2RGB and RGB2BGR do exactly the same thing, but PIL (and other libraries) give you RGB order while you need BGR order to display with OpenCV, so technically it should be:

cv2.imshow(title, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
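The dtype check, the uint16-to-uint8 conversion, and the channel swap described above can all be demonstrated with plain NumPy; this sketch uses a small synthetic array standing in for the screenshot:

```python
import numpy as np

# Synthetic 2x2 "screenshot" with 16 bits per channel.
img16 = np.array([[[65535,     0, 32768], [    0, 65535,     0]],
                  [[  256,   512,  1024], [    0,     0, 65535]]],
                 dtype=np.uint16)

# Map the 0..65535 range down to 0..255 by dropping the low byte.
img8 = (img16 // 256).astype(np.uint8)

# Swapping the first and third channel converts RGB <-> BGR;
# the operation is its own inverse, which is why BGR2RGB and
# RGB2BGR behave identically.
bgr = img8[..., ::-1]
```

After the conversion, img8.dtype is uint8 and full-scale 65535 becomes 255, so the array is now in a form cv2.matchTemplate will accept.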
https://codedump.io/share/Oua8sdIU7Gv3/1/opencv-not-accept-numpy-array
Question: I'm looking for a way to call a single Capistrano task to perform different things for different roles. Is Capistrano able to do this, or do I have to write a specific task for each role?

Solution:1 The standard way to do this in Capistrano:

task :whatever, :roles => [:x, :y, :z] do
  x_tasks
  y_tasks
  z_tasks
end

task :x_tasks, :roles => :x do
  #...
end

task :y_tasks, :roles => :y do
  #...
end

task :z_tasks, :roles => :z do
  #...
end

So yes, you do need to write separate tasks, but you can call them from a parent task and they will filter appropriately.

Solution:2 Actually no:

% cat capfile
server 'localhost', :role2
task :task1, :roles=>:role1 do
  puts 'task1'
end
task :task2 do
  task1
end

% cap task2
  * executing `task2'
  * executing `task1'
task1

The :roles param is passed on to run commands etc. but does not seem to affect whether the task itself is actually fired. Sorry, I didn't find a way to comment on a comment, so I've written it here.

Solution:3 You can also do:

task :foo do
  run "command", :roles => :some_role
  upload "source", "destination", :roles => :another_role
end

Solution:4 Use namespacing:

namespace :backup do
  task :default do
    web
    db
  end
  task :web, :roles => :web do
    puts "Backing Up Web Server"
  end
  task :db, :roles => :db do
    puts "Backing Up DB Server"
  end
end

These tasks show up in cap -T as:

backup:default
backup:web
backup:db

Solution:5 There is a way, kind of. Check: and you'll see that you can override the default roles using the ROLES environment variable. I have a task defined as:

desc "A simple test to show we can ssh into all servers"
task :echo_hello, :roles => :test do
  run "echo 'hello, world!'"
end

The :test role is assigned to one server. On the command line, I can run:

[james@fluffyninja bin]$ cap echo_hello ROLES=lots_of_servers

And the task will now run on the lots_of_servers role. I have not verified that this works inside a ruby script by updating the ENV hash, but this is a good start.
Solution:6 Only for the record, this could be a solution using Capistrano 3:

desc "Do something specific for 3 different servers with 3 different roles"
task :do_something do
  on roles(:api_role), in: :sequence do
    # do something in api server
  end
  on roles(:app_role), in: :sequence do
    # do something in application server
  end
  on roles(:another_role), in: :sequence do
    # do something in another server
  end
end

The server definition to perform the "do_something" task on an application server would be something like:

server 'application.your.domain', user: 'deploy', roles: %w{app_role}

Then you can call the task (there are several ways to do it) and it will execute specific instructions according to the "app_role".
http://www.toontricks.com/2018/05/tutorial-creating-capistrano-task-that.html
User:Skinfan13/Upsilon Sigma Sigma
From Uncyclopedia, the content-free encyclopedia
The sacred coat of arms of ΥΣΣ

Purpose
Upsilon Sigma Sigma serves as an organization for the betterment of Uncyclopedia. Brothers of the fraternity are tasked with combing the annals of Uncyclopedia for requested articles and writing these articles, thus fulfilling the desires of the public at large. The most important aspect of this noble undertaking is the monthly ΥΣΣ collaboration article. This collaboration will be to complete one of the top 50 most requested articles. This is the primary reason for the existence of ΥΣΣ. Many people are intimidated by the top 50 list of requested articles; this way we can cooperate to write one of them each month. Each month, a three day

Welcome to the famous Upsilon Sigma Sigma basement bar! Have a refreshing beer or three, courtesy of your crazy frat bros!

Another area of service is the individual creation of requested articles by Brothers. There is no quota on the amount of articles a Brother must write during a given time span, but they are asked that, if they pledge the fraternity and pass the ordeal, they contribute at least one article every three months. Brotherhood is eternal, so not meeting this request will never result in exclusion from the fraternity. The one list that Upsilon Sigma Sigma uses for this task is Uncyclopedia:Requested Articles. When creating a new article from this list, add the redlink into your section of "dibs" and work on the article in your userspace, leaving the redlink intact; however, remove the redlink from the requested pages section. If you feel you cannot create the article yourself after a period of time, add the redlink back to the requested articles list. Members are highly encouraged to leave the top 50 list alone so that the monthly collaboration will have good material to choose from. The third pillar of service is that of the Pee Review.
It is highly encouraged that ΥΣΣ Brothers join the PEEING committee and contribute to the Pee Review by providing insightful and in-depth reviews for other users. This is a lesser focus of ΥΣΣ, so if a Brother cannot give high quality reviews, they are encouraged to not review at all.

Membership
Membership is not guaranteed. ΥΣΣ will not discriminate based on sex, religion, or skin color, but you must pledge to join, at which point you will be evaluated. If you are chosen, you will be added to the Brotherhood. You may pledge only once a month, but you may pledge as many months as you like; there is no cut-off. It is highly unlikely that you will be rejected if you have established yourself in the Uncyclopedia community. Along these lines, the membership guidelines are:
- You must be a registered user for at least 2 months.
- You must have written at least one article (in any namespace).
- You must like chocolate cake, the official cake of ΥΣΣ.
- You must be a crazy party animal.

Ranks
There is a set hierarchy within the fraternity, based on seniority and on merit. All members are considered Brothers in the order regardless of rank.

Points
Points will be awarded on the following scale, and may be awarded for various tasks at the discretion of the frat's three officers.
- You will earn 1 point for every in-depth pee review you submit ("in-depth" will be judged by the PEEING committee. Self-affirmed in-depth reviews will not be awarded points).
- You will earn 2 points for being a principal author of the monthly collaboration. (The three officers of the Brotherhood will determine the principal authors.)
- You will earn points in the same manner as below if the same monthly collaboration earns additional honors.
- You will earn 2 points for individually completing a requested article.
- You will earn 1 additional point if an article you wrote for ΥΣΣ becomes a quasi-feature.
- You will earn an additional 2 points if an article you wrote in service to ΥΣΣ becomes a feature.
- You will earn an additional 2 points if an article you wrote in service to ΥΣΣ becomes a top 3 feature of the month.
- You will earn an additional 5 points if an article you wrote for ΥΣΣ becomes a top 10 feature of the year.
- In this way, the maximum number of points one article can earn (assuming it is quasi-featured) is 12. If an article is quasi-featured or featured a second time, no additional points are awarded.

Ranking

You will be ranked according to a scale based on points:
- Brother/Sister 0 pts
- Junior Steward (JStw)/ 6 pts
- Senior Steward (SStw)/ 12 pts
- Steward of the Brotherhood (StwB)/ 18 pts
- Junior Deacon (JDn)/ 25 pts
- Senior Deacon (SDn)/ 34 pts
- Deacon of the Brotherhood (DnB)/ 45 pts
- Junior Warden (JWdn)/ 60 pts
- Senior Warden (SWdn)/ 75 pts
- Master (Mstr)/ 100 pts
- Honored Master (HMstr)/ 150 pts

In addition to the above ranks, the Brotherhood will from time to time award various other honors and ranks depending on circumstances or other forms of merit. The founder of the fraternity is entitled to the moniker of Founder of the Brotherhood (or just Founder). A secretary will be appointed and given the rank and title of Marshall (Mrsh); this position will enforce the rules and standards of the Brotherhood. A Treasurer will also be appointed and given the rank and title of Secretariat (Sct); this position will make sure point totals are balanced and correct and that all other tasks associated with statistics involving the Brotherhood are kept in order. Ranks and titles can be affixed to a Brother's signature/username. The suggested method of accomplishing this is to link the abbreviation of your rank to this page in your postscript, or simply link ΥΣΣ in your postscript. No ranks or titles may currently be placed in a user's prefix.
Current Members

Once you have been recognized as a Brother of the fraternity, you can add yourself to the list below. Update your point total on your own whenever you complete a new article or do an in-depth review. Collaboration points will be added for you at the end of the month.

Pledging Our Fraternity

Pledges will be honored by a ceremonial beating.

If you wish to become a Brother of Upsilon Sigma Sigma, you should familiarize yourself with our basic tenets and decide if you are called to the same level of service as ourselves. If you think you have the commitment it takes to be a Brother, add your name below to the list of the month's pledges and we will evaluate whether or not you meet the requirements. Most established users will be accepted.

This Month's Pledges: June, 2011

Please put your user name here
http://uncyclopedia.wikia.com/wiki/USS
Stack is one of the most widely understood data structures in computer science. It is a general-purpose data structure and is part of most modern computer architectures as well. In the context of a thread running in an executing process, "the stack" is the memory specifically given to that thread for storing local variables, function parameters, return addresses, and other register values that need to be saved for later retrieval.

Stack corruptions tend to be among the trickier memory corruptions to find, so I decided to write a series of blogs on stack corruptions. Some of them, like buffer overruns, can even lead to security risks. But the focus of this article shall be calling convention mismatches.

So what is a calling convention? A calling convention, as the name suggests, is a convention between the caller of a function and the function itself. It is a set of predefined rules that both of them agree upon to maintain the integrity of the stack. These rules essentially describe two things:

1) How parameters are passed to the called function.
2) How the stack is cleaned up once the function call is finished.

Even the name of the function emitted by the compiler (a.k.a. "name mangling") is governed by the calling convention. We have only one calling convention for x64 systems: fastcall. But on x86, we have multiple calling conventions allowed, of which the most commonly used are as follows:

1) Stdcall
2) Cdecl
3) Fastcall
4) Thiscall

In cdecl, the caller cleans the stack; in the rest of them, the callee does. One might think, "why is cdecl even required, as it is an overhead in terms of size?" But in C/C++, functions can take a variable number of arguments, and only the caller knows how many it pushed, so cdecl is a necessity. In fastcall, the ECX and EDX registers are used to pass the first two arguments, with the remaining ones pushed on the stack from right to left. And in thiscall, the ECX register contains the this pointer.

Now let us write a test program to discuss calling convention mismatch errors.
Say you have defined a function in a DLL with the standard calling convention:

    void _stdcall testFunc(void)
    {
        int a = 1, b = 0;
        printf("%d" "%d", a, b);
    }

Now we call this function from a client EXE that refers to this DLL as follows:

    #include "stdafx.h"

    _declspec(dllimport) void testFunc(void);

    int _tmain(int argc, _TCHAR* argv[])
    {
        int a = 1, b = 0;
        testFunc();
        return 0;
    }

We get a linker error:

    error LNK2001: unresolved external symbol "__declspec(dllimport) void __cdecl testFunc(void)" (__imp_?testFunc@@YAXXZ)

The reason we get this linker error is that the name mangling performed by the compiler (needed for various reasons, like function overloading in C++) takes into account the calling convention used. We can do a dumpbin on the lib file and compare names as follows:

    > dumpbin /all "Stack Sample.lib" > StackSample.txt

    Archive member name at 6DE: /0 Stack Sample.dll
    DLL name    : Stack Sample.dll
    Symbol name : ?testFunc@@YGXXZ (void __stdcall testFunc(void))
    Name        : ?testFunc@@YGXXZ

We were lucky this time; we got a linker error. This shows that name mangling depends on the calling convention used. If the function call is resolved at compile time (or, to be more precise, if type information for the function to be called is available), the compiler and linker work together to make sure that you are calling the correct function, as was the case before. There are some cases in which this is not possible, so neither of them would be able to detect a mismatch. Two such common scenarios are:

1) Calling a managed function from native code by passing a delegate as a function pointer to native code from within a managed function. While doing so, make sure that you declare the function pointer as stdcall, as all managed functions are always stdcall.

2) Using GetProcAddress to call a function. We need to make sure that the type of the function pointer matches the type of the called function.
The call instruction pushes the IP register's value on the stack, and the ret instruction pops the value from the stack and moves it into the IP register. There can be two types of mismatch in theory:

1) If the caller assumes that the called function is cdecl and cleans the stack, but the called function is actually stdcall, the stack gets cleaned up twice, which wipes out the stored value of IP. Thus IP will get populated with some wrong address. If you are lucky, this will result in an access violation when that address is accessed. Otherwise, execution will jump to that address, which can be any random value, and your application goes into undefined behavior.

2) If, conversely, no stack cleanup occurs, then IP will also be populated with a wrong value (which actually will be one of your pushed parameters) and again this will result in an access violation or unexpected behavior.

If your application is crashing and you suspect stack corruption, the best way to go ahead is to run your application under a debugger, set a breakpoint at the function in which you are crashing, and single-step from there. On single-stepping, if you find that you are getting an access violation or a STATUS_ILLEGAL_INSTRUCTION exception (these are not the only two that you can get) and the breaking instruction is just after a return from a function call, the first thing you must check is whether there is a calling convention mismatch.

Pls fix whatever is hard-coded for size info that chops off the right-hand side of the blog when I use print preview. Thank you.

Thanks for your comments. Which browser are you using? I do not see this problem on mine.

According to 1) blogs.msdn.com/…/debug-fundamentals-exercise-3-calling-conventions.aspx 2) msdn.microsoft.com/…/zxk0tw93(v=VS.71).aspx the stack clean-up as originally defined in this article was not correct. As per the above two blogs, in cdecl the caller is responsible for cleaning up the stack, whereas for the others the called function is responsible for cleaning it up.
Can you please clarify?

Nitin, you are correct. This blog entry originally had it backwards. Cdecl has the caller cleaning up the stack, and the other conventions have the called function popping the stack. If you think about it, cdecl provides for variable argument passing, so sometimes only the caller knows how much stack space to pop; hence in cdecl, the caller cleans up the stack.
https://blogs.msdn.microsoft.com/dsvc/2008/09/03/stack-corruption-calling-convention-mismatch/
Set up Amazon Device Messaging

To use Amazon Device Messaging (ADM) with your app, you first need to include the ADM JAR in your development environment. You can do this in different ways:
- using Android Studio.
- using Eclipse, an open-source integrated development environment (IDE).
- using the command line.

If you haven't already, see Overview for an overview of the ADM architecture. Obtain credentials describes the process of getting your initial credentials for ADM.

To use ADM in your project, install the following on your development computer:
- The Android SDK (API 15 or higher)
- Any Android SDK system requirements, including the JDK

Before configuring your project, download the Amazon Mobile App SDK. By downloading our Amazon Mobile App SDK, you agree to our Program Materials License Agreement. Extract the SDK contents to a location of your choice.

- Adding ADM to your IDE project
- Adding ADM from the command line
- Configuring Proguard

Adding ADM to your IDE project

To use ADM with your IDE-based project, you add the ADM library to your project as an external JAR. Although you can use ADM with any development environment, the following sections describe adding ADM to Android Studio or to Eclipse.

Adding ADM to Android Studio

Make sure you have downloaded and installed the current version of Android Studio.
- In Android Studio, create a new Android project or open an existing project.
- Change the folder structure from Android to Project.
- Find the libs folder under the app folder.
- Copy the amazon-device-messaging-1.0.1.jar file from where you extracted the ADM zip file, and paste the JAR into the libs folder.
- Right-click the JAR file and, at the end of the context menu, click Add as Library. This automatically adds the compile files('libs/amazon-device-messaging-1.0.1.jar') command to the build.gradle file.
Finally, because you need the library only at compile time and not at runtime (the Amazon device will have the necessary classes and methods), change the line from compile files to provided files. Change from this:

    dependencies {
        compile files('libs/amazon-device-messaging-1.0.1.jar')
    }

to this:

    dependencies {
        provided files('libs/amazon-device-messaging-1.0.1.jar')
    }

Warning: Skipping this step will cause runtime errors.

Adding ADM to Eclipse

Make sure you have installed the Android Development Tools (ADT) Plugin for Eclipse.
- In Eclipse, create a new Android project or open an existing one.
- Open the project's properties; for example, right-click the root folder for the project, and then click Properties. From the list of properties, select Java Build Path. Select the Libraries tab.
- Click Add External JARs… and navigate to where you extracted the ADM zip file.
- Open the DeviceMessaging/lib folder, select the amazon-device-messaging-1.0.1.jar file, and then click Open. You should now see amazon-device-messaging.jar listed in the Properties window. Click OK.

Note: In the Properties window, there is also an "Order and Export" tab for the Java Build Path for your project. Do NOT mark the ADM JAR file as exported in this tab. Marking the JAR file as an exported entry causes your APK to use the stub implementations of the ADM API from the JAR file, rather than the actual implementations of the ADM classes that are on the device itself.

In the Package Explorer for your project, under Referenced Libraries, amazon-device-messaging.jar should now appear.

Adding ADM from the command line

Before performing this procedure, update your AndroidManifest.xml file, as described in Integrate your app. Also ensure that you have Apache ANT installed, with your ANT_HOME, JAVA_HOME, and PATH environment variables properly defined.
- Change directories into the Android SDK's tools/ path.
Run a command with the following syntax, where <path> is the location where the project will be created, and <target Android platform> is the Android platform for which the project is intended. For a list of available platforms, run android list targets.

    android create project --path <path> --target <target Android platform> --activity ADMActivity --package com.example.amazon.adm

- At the root of your new project, create a new directory called ext_libs.
- Navigate to the Android/DeviceMessaging/lib directory in the Amazon Mobile App SDK, and copy the JAR file to your new ext_libs directory.

At the root of your new project, create a custom_rules.xml file that contains the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <project name="custom_rules">
        <path id="java.compiler.classpath.path">
            <fileset dir="ext_libs" includes="*.jar"/>
        </path>
        <property name="java.compiler.classpath" refid="java.compiler.classpath.path" />
    </project>

To build your project, run the following command from the root directory for your project:

    ant debug

Make sure that you take similar steps to configure the projects that test your app.

Configuring Proguard

If you use Proguard, edit the proguard.cfg file and add the following configuration:

    #This should point to the directory where ADM's JAR is stored
    -libraryjars libs
    -dontwarn com.amazon.device.messaging.**
    -keep class com.amazon.device.messaging.** {*;}
    -keep public class * extends com.amazon.device.messaging.ADMMessageReceiver
    -keep public class * extends com.amazon.device.messaging.ADMMessageHandlerBase
https://developer.amazon.com/docs/adm/set-up.html
Only released in EOL distros: (branch: groovy-devel)

Package Summary
ROS Arduino Python.
- Maintainer: Patrick Goebel <patrick AT pirobot DOT org>
- Author: Patrick Goebel
- License: BSD
- Source: git (branch: hydro-devel)

Overview
This package consists of a Python driver and ROS node for Arduino-compatible controllers. The Arduino must be connected to a PC or SBC using either a USB port or an RF serial link (e.g. XBee).

Features
- Direct support for the following sensors:
  - Ping sonar
  - Sharp infrared (GP2D12)
  - Onboard Pololu motor controller current
  - Phidgets voltage sensor
  - Phidgets current sensor (DC, 20 A)
- Can also read data from generic analog and digital sensors
- Can control digital outputs (e.g. turn a switch or LED on and off)
- Support for PWM servos
- Configurable base controller if using the required hardware

Arduino Node
arduino_node.py is a ROS node for Arduino-compatible microcontrollers. Sensors can be polled at independent rates (see the sample config file at the end of this page). For example, main voltage could be polled at 1 Hz while a sonar sensor could be polled at 20 Hz.

Subscribed Topics
/cmd_vel (geometry_msgs/Twist)
- Movement commands for the base controller.

Published Topics
/odom (nav_msgs/Odometry)
- Odometry messages from the base controller.
- An array of sensor names and values.
- Each sensor value is published on its own topic under the "sensor" namespace with the appropriate type. See the sample config file below for examples.

Services
~servo_write (ros_arduino_msgs/ServoWrite)
- Set the target position of the servo with index 'id' to 'value' (in radians). The 'id'-to-pin mapping is made in the Arduino firmware.
- Get the last set position (in radians) from the servo with index 'id'.
- Sets a digital pin to INPUT (0) or OUTPUT (1)
- Sets a digital pin either LOW (0) or HIGH (1)

Parameters
~port (str, default: /dev/ttyUSB0 -- some controllers use /dev/ttyACM0)
- Serial port.
- Baud rate for the serial connection.
- Timeout for the serial port in seconds.
- Rate to run the main control loop. Should be at least as fast as the fastest sensor rate.
- Rate to publish the overall sensor array. Note that individual sensors are published on their own topics and at their own rates.
- Whether or not to use the base controller.
- Rate to publish odometry data.
- Link to use as the base link when publishing odometry.
- Wheel diameter in meters.
- Wheel track in meters. (Distance between the centers of the drive wheels.)
- Encoder ticks per wheel revolution.
- External gear reduction.
- Reverse the sense of wheel rotation.
- Proportional PID parameter.
- Derivative PID parameter.
- Integral PID parameter.
- Output PID parameter.
- Max acceleration when changing wheel speeds.
- Dictionary of sensors attached to the Arduino. (See the sample YAML config file below.)

Provided tf Transforms
odom → base_link
- Transform needed for navigation.

Configuration
The Arduino node is configured using a YAML file specifying the required parameters. A sample parameter file called arduino_params.yaml is included in the config directory and is shown below. Make a copy of this file (e.g. my_arduino_params.yaml) before editing. Note that many of the parameters are commented out and must be set and un-commented before you can use the node with your Arduino.
Current valid sensor type names (case-sensitive):
- Ping
- GP2D12
- Analog (generic)
- Digital (generic)
- PololuMotorCurrent
- PhidgetsVoltage
- PhidgetsCurrent (20 amps, DC)

port: /dev/ttyUSB0
baud: 57600
timeout: 0.1
rate: 50
sensorstate_rate: 10
use_base_controller: False
base_controller_rate: 10

# === Robot drivetrain parameters
#wheel_diameter: 0.146
#wheel_track: 0.2969
#encoder_resolution: 8384 # from Pololu for 131:1 motors
#gear_reduction: 1.0
#motors_reversed: True

# === PID parameters
#Kp: 20
#Kd: 12
#Ki: 0
#Ko: 50
#accel_limit: 1.0

# === Sensor definitions. Examples only - edit for your robot.
# Sensor type can be one of the following (case sensitive!):
# * Ping
# * GP2D12
# * Analog
# * Digital
# * PololuMotorCurrent
# * PhidgetsVoltage
# * PhidgetsCurrent (20 Amp, DC)
sensors: {
  #motor_current_left: {pin: 0, type: PololuMotorCurrent, rate: 5},
  #motor_current_right: {pin: 1, type: PololuMotorCurrent, rate: 5},
  #ir_front_center: {pin: 2, type: GP2D12, rate: 10},
  #sonar_front_center: {pin: 5, type: Ping, rate: 10},
  arduino_led: {pin: 13, type: Digital, rate: 5, direction: output}
}

Example Launch File
<launch>
  <node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
    <rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
  </node>
</launch>

Usage Notes
- Be sure to put your robot on blocks before first trying the base controller.
- If you mess up your parameters in a given session, just do a: rosparam delete /arduino
- The driver requires Python 2.6.5 or higher and PySerial 2.3 or higher. It has been tested on Ubuntu Linux 10.04 (Lucid) and 11.10 (Oneiric).
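To make the drivetrain section concrete, here is a minimal sketch of a my_arduino_params.yaml with the base controller enabled. Every numeric value below is an illustrative placeholder; measure and substitute your own robot's dimensions and tune your own PID gains.

```yaml
# my_arduino_params.yaml -- illustrative values only
port: /dev/ttyUSB0
baud: 57600
timeout: 0.1
rate: 50
sensorstate_rate: 10

use_base_controller: True
base_controller_rate: 10

# Drivetrain (placeholders -- measure your own robot)
wheel_diameter: 0.146
wheel_track: 0.2969
encoder_resolution: 8384
gear_reduction: 1.0
motors_reversed: True

# PID gains (placeholders -- tune with the robot on blocks first)
Kp: 20
Kd: 12
Ki: 0
Ko: 50
accel_limit: 1.0

sensors: {
  sonar_front_center: {pin: 5, type: Ping, rate: 10},
  arduino_led: {pin: 13, type: Digital, rate: 5, direction: output}
}
```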
http://wiki.ros.org/ros_arduino_python?distro=fuerte
Smooth and cool page transitions are something we all love to see while browsing on Dribbble. I have always been fascinated by them and asked myself how I could do it for my sites. Once, I was able to achieve it in a site built with Next.js by using a library called next-page-transitions. It allowed me to create the transitions I wanted with CSS. However, I hit a problem: it was very limiting and inflexible, since it was done through CSS classes. I couldn't create a custom experience on every page without having a lot of classes and having to deal with re-renders. Thankfully, Framer Motion's AnimatePresence API makes it possible to create sleek and custom page transitions in any React framework easily, without having to worry about these problems.

Animate Presence

In my previous post, I introduced the <AnimatePresence/> component. It triggers the exit prop animations of all its children when they're removed from React's render tree. Basically, it detects when a component unmounts and animates this process. Recently, Framer Motion introduced a prop called exitBeforeEnter. If it is set to true, it will only render one component at a time: it will wait for the existing component to finish its animation before the new component is rendered. This is perfect for handling page transitions, since we can guarantee that only one component or page is rendered at a time.

A Small Example

Let's test what we learned about <AnimatePresence/>. First, we'll test it without the exitBeforeEnter prop by doing a simple transition to see how it behaves. This website will be a mimic of an e-commerce store. It will have two pages: Store and Contact Us. They will have a very simple layout, like this:

Our first step is to wrap our pages inside an <AnimatePresence/>. Where we wrap it will depend on where our router is rendering the pages. Keep in mind that each of the children needs to have a unique key prop so it can track their presence in the tree.
In Next.js we would head to the _app.js file and wrap the <Component> with <AnimatePresence/>.

// pages/_app.js
import { AnimatePresence } from "framer-motion";
import "../styles/index.css";

function MyApp({ Component, pageProps, router }) {
  return (
    <AnimatePresence>
      <Component key={router.route} {...pageProps} />
    </AnimatePresence>
  );
}

export default MyApp;

For Create React App, we would use it wherever our router is rendering the pages.

import React from "react";
import { Switch, Route, useLocation } from "react-router-dom";
import { AnimatePresence } from "framer-motion";

const App = () => {
  const location = useLocation();
  return (
    <AnimatePresence>
      <Switch location={location} key={location.pathname}>
        <Route exact path="/" component={IndexPage} />
        <Route path="/contact" component={ContactPage} />
      </Switch>
    </AnimatePresence>
  );
};

💡 Check out the website's code for each framework in this GitHub repository.

Now that we have all our pages wrapped in an <AnimatePresence/>, if we try to change routes, you'll notice that the current component never unmounts. This happens because Framer Motion is looking for an exit animation for each page, and it is not found because we haven't defined any motion component yet. Let's add some simple fade-out animation to each page, like this:

import { motion } from "framer-motion"

<motion.div exit={{ opacity: 0 }}>
  ... content
</motion.div>

And now the components can unmount! If you pay close attention, before our contact form disappears, the index page appears at the bottom, creating distraction and ruining the fluidity of our animation. This would be really bad if we were to have a mount animation on the Index page. This is where the exitBeforeEnter prop comes in handy. It guarantees that our component will have unmounted before allowing the new component to load in. If we add the prop in the <AnimatePresence/>, you will notice it is no longer a problem, and our transition is smooth and working as desired.
<AnimatePresence exitBeforeEnter/>

This is all that is needed to create transitions with Framer Motion. The sky is the limit when it comes to what we can do now!

A Beautiful Transition From Dribbble

Have you ever wanted to create amazing transitions like those seen on Dribbble? I always have. Thankfully, Framer Motion allows us to re-create these with ease. Take a look at this design by Franchesco Zagami:

Let's try to re-create this awesome transition. When translating transition prototypes, it is best to have the original file so the easings and details of the animation are known. However, since we are working from a Dribbble design, we'll re-create it by estimating its values.

Initial Transition

One of the elements that we first see is a black background that moves toward the end of the screen. This is really easy to re-create because of Framer's abstractions. First, we'll create a component that will house all our initial transition logic so it can be easier to maintain and develop.

const InitialTransition = () => {};

Second, add the black square, which will have the size of the screen.

const blackBox = {
  initial: {
    height: "100vh",
  },
};

const InitialTransition = () => {
  return (
    <div className="absolute inset-0 flex items-center justify-center">
      <motion.div
        className="relative z-50 w-full bg-black"
        initial="initial"
        animate="animate"
        variants={blackBox}
      />
    </div>
  );
};

Instead of using motion props, we'll use variants, since further down we'll have to handle more elements.

💡 If you want to learn how to use Framer Motion variants, you can check out my beginner's tutorial!

So far, we will have a black square in the middle of our screen. We'll use the bottom and height properties to create a downward movement. The bottom property will make it collapse towards the bottom.
const blackBox = {
  initial: {
    height: "100vh",
    bottom: 0,
  },
  animate: {
    height: 0,
  },
};

const InitialTransition = () => {
  return (
    <div className="absolute inset-0 flex items-center justify-center">
      <motion.div
        className="relative z-50 w-full bg-black"
        initial="initial"
        animate="animate"
        variants={blackBox}
      />
    </div>
  );
};

This is what we have now: if you compare this to our reference, you'll notice the animation happens very quickly and is not fluid enough. We can fix this with the transition property. We'll modify the duration to make our animation slower, and the ease to make it smoother.

const blackBox = {
  initial: {
    height: "100vh",
    bottom: 0,
  },
  animate: {
    height: 0,
    transition: {
      duration: 1.5,
      ease: [0.87, 0, 0.13, 1],
    },
  },
};

It will look much more similar.

Now we have to re-create the text, though we'll do something different. Since our text is not located in the middle of our navbar, we'll just fade it out. The text is a little harder than the black square because, if we take a close look, it has an animated layer similar to a mask. A way we could achieve this effect is through SVG elements, specifically <text/> and <pattern/>. It will look like this:

<motion.svg className="absolute z-50 flex">
  <pattern
    id="pattern"
    patternUnits="userSpaceOnUse"
    width={750}
    height={800}
  >
    <rect className="w-full h-full fill-current" />
    <motion.rect className="w-full h-full fill-current" />
  </pattern>
  <text
    className="text-4xl font-bold"
    textAnchor="middle"
    x="50%"
    y="50%"
    style={{ fill: "url(#pattern)" }}
  >
    tailstore
  </text>
</motion.svg>

This works by setting a custom text fill with <pattern/>. It will have two <rect/> elements: one for the color of the text and the other, a motion element, for the animation. Basically, the latter will hide and leave a white color behind. Let's proceed to animate this. First, let's introduce a new transition property called when. It defines when an element should carry out its animation.
We want our black box to disappear when all children are done rendering, hence afterChildren:

const blackBox = {
  initial: {
    height: "100vh",
    bottom: 0,
  },
  animate: {
    height: 0,
    transition: {
      when: "afterChildren",
      duration: 1.5,
      ease: [0.87, 0, 0.13, 1],
    },
  },
};

Now, when our text finishes rendering, our black box will do its animation. Second, we'll animate the <svg/>. Here is its variant:

const textContainer = {
  initial: {
    opacity: 1,
  },
  animate: {
    opacity: 0,
    transition: {
      duration: 0.25,
      when: "afterChildren",
    },
  },
};

<motion.svg variants={textContainer}>
  ...
</motion.svg>

Finally, the <rect/>:

const text = {
  initial: {
    y: 40,
  },
  animate: {
    y: 80,
    transition: {
      duration: 1.5,
      ease: [0.87, 0, 0.13, 1],
    },
  },
};

<motion.rect variants={text} />

💡 You may be asking yourself where I get most of these animation values. All of them except the ease were fine-tuned through estimation. For easing, I used this cheat sheet, specifically the easeInOutExpo values.

With all these hooked up, you should see this:

Awesome! It's looking very close to our design. You may have noticed that we can still scroll even though our screen is supposed to be busy showing our transition. Luckily, this is really easy to fix. We just need to apply overflow: hidden to our body while it is animating and remove it when it's done. Thankfully, motion components have event listeners for this exact situation: onAnimationStart and onAnimationComplete. The former is triggered when the animation defined in animate starts, and the latter when it ends. On our InitialTransition, add the following:

<motion.div
  className="absolute z-50 flex items-center justify-center w-full bg-black"
  initial="initial"
  animate="animate"
  variants={blackBox}
  onAnimationStart={() => document.body.classList.add("overflow-hidden")}
  onAnimationComplete={() =>
    document.body.classList.remove("overflow-hidden")
  }
/>

Animating the Content

All that is left is creating a sleek animation for our content.
We won't copy the same animation as the design since it wouldn't match our site very well. What we'll do is a staggered fade-in-down effect on the children. Let's create our variants:

const content = {
  animate: {
    transition: { staggerChildren: 0.1, delayChildren: 2.8 },
  },
};

const title = {
  initial: { y: -20, opacity: 0 },
  animate: {
    y: 0,
    opacity: 1,
    transition: {
      duration: 0.7,
      ease: [0.6, -0.05, 0.01, 0.99],
    },
  },
};

const products = {
  initial: { y: -20, opacity: 0 },
  animate: {
    y: 0,
    opacity: 1,
    transition: {
      duration: 0.7,
      ease: [0.6, -0.05, 0.01, 0.99],
    },
  },
};

export default function IndexPage() {
  return (
    <motion.section exit={{ opacity: 0 }}>
      <InitialTransition />
      <motion.div initial="initial" animate="animate" variants={content}>
        <motion.h1 variants={title}>Welcome to tailstore!</motion.h1>
        <motion.section variants={products}>
          {/* product grid */}
        </motion.section>
      </motion.div>
    </motion.section>
  );
}

You'll be familiar with most of the properties except delayChildren. It applies a delay to all the children of a propagated animation. In other words, it will display the children after a certain amount of time. Aside from this, we are just making the elements fade down, adding a duration of 0.7 seconds, and smoothing the movement with an easing. Here is the result:

Let's do the same for our contact page:

const content = {
  animate: {
    transition: { staggerChildren: 0.1 },
  },
};

const title = {
  initial: { y: -20, opacity: 0 },
  animate: {
    y: 0,
    opacity: 1,
    transition: {
      duration: 0.7,
      ease: [0.6, -0.05, 0.01, 0.99],
    },
  },
};

const inputs = {
  initial: { y: -20, opacity: 0 },
  animate: {
    y: 0,
    opacity: 1,
    transition: {
      duration: 0.7,
      ease: [0.6, -0.05, 0.01, 0.99],
    },
  },
};

<motion.section exit={{ opacity: 0 }}>
  <motion.div initial="initial" animate="animate" variants={content}>
    <motion.div variants={title}>
      {/* page heading */}
    </motion.div>
    <motion.div variants={inputs}>
      {/* form inputs */}
    </motion.div>
  </motion.div>
</motion.section>

UX Improvements

Transitioning between Contact and Store will take a long while since it will play the initial transition again. Doing this every time will annoy the user.
We can fix this problem by only playing the animation if it is the first page the user loads. To achieve this, we'll listen for a route change globally and determine if it is the first render. If it is, we'll show the initial transition; otherwise, skip it and remove the delay on the children. In Next.js we would detect a route change through the routeChangeStart event in _app.js.

💡 Solutions will vary between frameworks. For the sake of keeping this blog post as simple as possible, I will elaborate on the Next.js implementation. However, the repository will have solutions in their respective frameworks.

On _app.js:

function MyApp({ Component, pageProps, router }) {
  const [isFirstMount, setIsFirstMount] = React.useState(true);

  React.useEffect(() => {
    const handleRouteChange = () => {
      isFirstMount && setIsFirstMount(false);
    };

    router.events.on("routeChangeStart", handleRouteChange);

    // If the component is unmounted, unsubscribe
    // from the event with the `off` method:
    return () => {
      router.events.off("routeChangeStart", handleRouteChange);
    };
  }, []);

  return (
    <Layout>
      <AnimatePresence exitBeforeEnter>
        <Component
          isFirstMount={isFirstMount}
          key={router.route}
          {...pageProps}
        />
      </AnimatePresence>
    </Layout>
  );
}

We are keeping the state on the first mount, which is updated only when the user does the first route change. And we pass this variable as a prop to the currently rendered page. On our index.js:

const content = (isFirstMount) => ({
  animate: {
    transition: { staggerChildren: 0.1, delayChildren: isFirstMount ? 2.8 : 0 },
  },
});

// ...

export default function IndexPage({ isFirstMount }) {
  return (
    <motion.section exit={{ opacity: 0 }}>
      {isFirstMount && <InitialTransition />}
      <motion.div
        initial="initial"
        animate="animate"
        variants={content(isFirstMount)}
      >
        <motion.h1 variants={title}>
          {/* page heading */}
        </motion.h1>
        <motion.section variants={products}>
          {/* product grid */}
        </motion.section>
      </motion.div>
    </motion.section>
  );
}

That's it! Our page has amazing transitions, and the user will not feel annoyed by replaying the same animation over and over.
Conclusion

Sleek page transitions are very important for achieving awesome web experiences. Using CSS can be hard to maintain, since one will deal with many classes and a lack of independence. Thankfully, Framer Motion solves this problem with AnimatePresence. Coupled with exitBeforeEnter, it allows developers to create amazing page transitions. It is so flexible and powerful that with a few lines of code we could mimic a complex animation found on Dribbble.

I hope this post inspires you to create awesome page transitions so you can impress your future employer or clients.

It was indeed a great article. I just have one question: how should I implement this? I want a black screen to animate from the bottom right to the top left at an angle of 45 degrees, just like this website has jfelix.info/blog/using-react-sprin... Ik its your website itself😅 Any help is highly appreciated. Thanks in advance

I love framer motion for this, one thing that I found tricky was pushing page transitions beyond the norm. For example on lemondeberyl.com I'm using this technique where I absolutely transition the incoming pages and reset the scroll position when the transition is complete. How would you approach that reliably? For example browsers like Safari where performance can result in bugs when transitioning between pages very quickly...

Thank you so much for this.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/joserfelix/page-transitions-in-react-1c8g
This C++ program demonstrates multilevel inheritance without method overriding. The method val() is redefined, but not overridden, in the multilevel inherited classes: because the val() methods are not declared virtual, the vtable does not dispatch to the most-derived version of val(). When val() is called through a pointer to the Base class, the version defined in Base is used; no run-time type identification takes place, and the compiler resolves the call to Base::val(). Note also that Derived and MostDerived each shadow Base::i with a member of their own and never forward the constructor argument to Base, so the Base subobject's i keeps its default value of 0, which is why the last two output lines print 0.

Here is the source code of the C++ program that demonstrates multilevel inheritance without method overriding. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.

/*
 * C++ Program to Demonstrate Multilevel Inheritance
 * without Method Overriding
 */
#include <iostream>

class Base
{
    int i;
public:
    Base(int i = 0) : i(i) { }
    int val() const { return i; }
    virtual ~Base() { }
};

class Derived : public Base
{
    int i;
public:
    Derived(int i = 0) : i(i) { }
    int val() const { return i; }
    virtual ~Derived() { }
};

class MostDerived : public Derived
{
    int i;
public:
    MostDerived(int i = 0) : i(i) { }
    int val() const { return i; }
    virtual ~MostDerived() { }
};

int main()
{
    Base* B = new Base(1);
    Base* D = new Derived(2);
    Base* MD = new MostDerived(3);
    std::cout << "Base.Value() = " << B->val() << std::endl;
    std::cout << "Derived.Value() = " << D->val() << std::endl;
    std::cout << "MostDerived.Value() = " << MD->val() << std::endl;
}

$ a.out
Base.Value() = 1
Derived.Value() = 0
MostDerived.Value() = 0

Sanfoundry Global Education & Learning Series – 1000 C++ Programs. If you wish to look at all C++ Programming examples, go to C++ Programs.
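For contrast, here is a minimal sketch (not part of the Sanfoundry program above) of what changes when val() is declared virtual: a call through a Base pointer then dispatches to the most-derived override, so the Derived and MostDerived values are no longer lost.

```cpp
class Base
{
    int i;
public:
    Base(int i = 0) : i(i) { }
    virtual int val() const { return i; }  // virtual: calls dispatch dynamically
    virtual ~Base() { }
};

class Derived : public Base
{
    int i;  // still shadows Base::i, as in the original program
public:
    Derived(int i = 0) : i(i) { }
    int val() const override { return i; }  // overrides Base::val
};

class MostDerived : public Derived
{
    int i;
public:
    MostDerived(int i = 0) : i(i) { }
    int val() const override { return i; }  // overrides Derived::val
};
```

With these classes, the same main() as above would print 1, 2 and 3, because each val() call through a Base pointer now resolves at run time to the object's actual type.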
https://www.sanfoundry.com/cpp-program-demonstrate-multilevel-inheritance-without-method-overriding/
Created attachment 685754 [details] demo We seem to miss fast path for Math.exp. From the demo: home:~/ $ ~/projects/v8/out/native/d8 ./demo.js mathExp: 72 mathExp: 52 mathExp: 51 mathExp: 51 home:~/ $ ~/projects/mozilla/builds/obj-opt-x86_64-apple-darwin12.2.0/dist/Nightly.app/Contents/MacOS/js ~/demo.js mathExp: 1079 mathExp: 1081 mathExp: 1054 mathExp: 1067 We do not currently have a fast path for Math.exp(). We clearly should. Potentially a good first bug. I am actually not sure if v8 inlines anything unless you compile with "fast-math". Sean, do you think you could put some steps here to help a new contributor fix this bug? I don't expect this is as simple as using the inline keyword on math_exp_body. I looked through the source code and saw that and math_exp_body are likely to be the places where the work would be done. This isn't about inlining the C function; it's about not making a C function call at all and instead doing Math.exp in JIT-generated assembly... Sure. The goal is to generate inline assembly in IonMonkey that makes a very quick call to math_exp_body(). Most of the work has already been done for sin()/cos()/tan(); this bug will just require making math_exp_body() non-static (probably renamed as math_exp_impl()) then plugging the function into IonMonkey's inlining system, reusing the same functions that sin()/cos()/tan() use. The main inlining code is in js/src/ion/MCallOptimize.cpp -- search for inlineMathFunction(). This requires defining a new type in the MMathFunction::Function enum, then using it in inlineNativeCall() as with js_math_sin. The MMathFunction is then lowered to an LMathFunctionD in js/src/ion/Lowering.h's visitMathFunction(). This part should work already; no modifications are required. Finally, the LMathFunctionD must be translated to assembly, done in js/src/ion/CodeGenerator.cpp's visitMathFunctionD(). This requires adding a new "MMathFunction::Exp" case to the switch. 
(In reply to Boris Zbarsky (:bz) from comment #4) > This isn't about inlining the C function; it's about not making a C function > call at all and instead doing Math.exp in JIT-generated assembly... In the future, maybe. For now, exp() is part of the family of cache-reliant math functions, and it's much easier to just reuse the existing machinery. If further oomph is necessary, we can look into actually inlining it later, but the rest of the functions seem to be quick enough provided that we just avoid a full callVM(). (In reply to Sean Stangl from comment #6) > provided that we just avoid a full callVM(). Ahem, that should read "a full visitCallNative()". The call is slowed down by generation of an exit frame, because we don't know that the target native function cannot GC. This bug implies some modifications in js/src/ion/MCallOptimize.cpp and likely js/src/ion/CodeGenerator.cpp or any of the arch-specific variant if the choice of the implementation goes for a native function call (callWithABI) to some architecture specific instructions for double exponentiation. Once done the attached benchmark should be a magnitude faster and performance improvement should be visible on (In reply to Nicolas B. Pierron [:pierron] [:nbp] from comment #8) > This bug implies some modifications ... Comment 5 states exactly what to do to reuse the MMathFunction infrastructure. No need for arch-specific paths. Created attachment 720295 [details] [diff] [review] patch as described in comment 5 improves performance of 'demo' attachment by a factor of 120 on my machine Nice work!! I haven't followed the conversation on #jsapi, but I assume somebody from the IM team should to do the review. Sean Stangl in this case. FYI 1: To know who should review the code you can always run "hg annotate" on the sourcefile or just go to the hg.mozilla.org site. In this case: You will see sstangl on all "return inlineMathFunction(callInfo, MMathFunction::Cos);" lines. 
He created that code and has the best idea if it is correct. So I think you should flip the review to Sean. FYI 2: To flip review, you need to click on "Details" on the patch. Under flags, you will see the input box to change the reviewer. I'm mentioning this, because it wasn't immediately obvious for me the first time I did that ;). Comment on attachment 720295 [details] [diff] [review] patch as described in comment 5 Review of attachment 720295 [details] [diff] [review]: ----------------------------------------------------------------- I'm not Sean, and I don't claim to know much about how to write this patch, but from glancing at the code it looks like not much has changed. The call to mathCache->lookup is now deferred until the previous infinity checks have passed, but besides that I don't see where the large perf increase can come from. Is the uploaded patch missing some files? (I also could just be reading it wrong, please correct me if so.) ::: js/src/jsmath.cpp @@ +284,3 @@ > return 0.0; > } > + endif This is missing the hash character, #endif, and both the #ifdef and #endif should be at the beginning of the line (no preceding whitespace). Apart from the #endif issue and the new whitespace at the top of two files the patch is perfect. We actually have a very good abstraction with MMathFunction, which makes this change so small. What happens is that in the CodeGenerator we emit a call to math_exp_impl, with the MathCache as argument. So we skip all of the VM call overhead and can probably even load from the cache directly in the called function. Comment on attachment 720295 [details] [diff] [review] patch as described in comment 5 Review of attachment 720295 [details] [diff] [review]: ----------------------------------------------------------------- Good patch! Just some minor nits to fix. When they're fixed, upload a new version and flag me for review, and I'll r+ and push it. 
Note that the crazy 120x speedup is because the "demo" micro-benchmark always calls Math.exp(100), and 100 is cached, so exp() is rarely executed. It will still be a major speedup for code that hits uncached entries, though. ::: js/src/ion/MCallOptimize.cpp @@ +1,1 @@ > + nit: whitespace added to top of file. ::: js/src/ion/MIR.h @@ +1,1 @@ > + nit: whitespace added to top of file. ::: js/src/jsmath.cpp @@ +278,2 @@ > { > + #ifdef _WIN32 nit: preprocessor directives should not be prefixed with whitespace. @@ +284,3 @@ > return 0.0; > } > + endif This should be "#endif", without whitespace. Philipp -- looking through jsmath.cpp, it looks like we're also missing MMathFunction support for acos(), asin(), and atan() (the last missing one is sqrt(), but IonMonkey handles that using inline assembly that bypasses the cache). Rather than filing separate bugs, since you already know how to connect those functions to IonMonkey, would you mind updating the patch to also include those? It will require separating the functions into the main body and an _impl() version that takes a double. Created attachment 720399 [details] [diff] [review] fixed review nits and added inlining for atan/acos/asin Comment on attachment 720399 [details] [diff] [review] fixed review nits and added inlining for atan/acos/asin Review of attachment 720399 [details] [diff] [review]: ----------------------------------------------------------------- Perfect. Thanks for adding support for the trig functions! If you're interested, we're still missing inlining support for many native functions. Inlining is suitable for natives that cannot trigger garbage collection (by creating a new object), which rules out most of the String and Array methods -- but there are, from memory, a number of Date functions and global functions like parseInt() that should have fast-paths. Math.random() should also have a fast-path that just calls out to C++ to avoid the exit frame/callVM overhead in visitCallNative(). 
Plenty of work left :)
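The demo.js attachment itself is not reproduced in this bug, but based on comment 16 (the micro-benchmark "always calls Math.exp(100)"), a reconstruction of roughly this shape exercises the same path. The function name, iteration count, and timing style are assumptions, not the actual attachment:

```javascript
// Hypothetical reconstruction of the attached demo.js micro-benchmark.
// Per comment 16 it repeatedly calls Math.exp(100), so after the first call
// the math cache is hit and call overhead, not exp() itself, dominates.
function mathExp(iterations) {
  var sum = 0;
  for (var i = 0; i < iterations; i++) {
    sum += Math.exp(100);
  }
  return sum; // keep a data dependence so the loop is not dead code
}

for (var run = 0; run < 4; run++) {
  var start = Date.now();
  mathExp(1000000);
  console.log("mathExp: " + (Date.now() - start));
}
```

A benchmark like this shows the roughly 20x gap between the engines quoted in comment 0, and the ~120x improvement once the inlined call to math_exp_impl() lands.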
https://bugzilla.mozilla.org/show_bug.cgi?id=815737
Voxel for subdividing a local area of space.

#include <voxel.h>

Inherits rtl_voxel. List of all members.

A local voxel is used for subdividing space that contains a group of objects. This class is used by rtl_groupobj. You probably have no need for this class directly; if you think you do, you should probably inherit from rtl_groupobj instead.

Creates a local subdivision of space for the given object. The object is needed to provide the bounding space of the local subdivision.

Subdivides the voxel at a given depth. Use add(rtl_groupmem*, int, int, int) if you use this function. This function will cause an abort if you use add(rtl_groupmem*).
http://pages.cpsc.ucalgary.ca/~jungle/software/jspdoc/rtl/class_rtl_lclvoxel.html
I have a script started but am receiving an error message. I typically have the right idea, but incorrect syntax or formatting. Here are the exact instructions given: Extend the Integer class by adding a method called to_oct which returns a string representing the integer in octal. We'll discuss the algorithm in class. Prompt the user for a number and output the octal string returned by to_oct. Add another method to your Integer extension named "to_base". This method should take a parameter indicating the base that the number should be converted to. For example, to convert the number 5 to binary, I would call 5.to_base(2). This would return "101". Assume that the input parameter for to_base is an integer less than 10. to_base should return a string representing the decimal in the requested number base. #!/usr/bin/ruby class Integer def to_base(b) string="" while n > 0 string=(n%b)+string n = n/b end end def to_oct n.to_base(8) end end puts "Enter a number: " n=gets.chomp puts n.to_base(2) tryagain.rb:16:in `<main>': undefined method `to_base' for "5":String (NoMethodError) As suggested, do something like this: class Integer def to_base b to_s b #same as self.to_s(b) end def to_oct to_base 8 #same as self.to_base(8) end end 5.to_base 2 #=> "101" 65.to_oct #=> "101"
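Two things cause the NoMethodError above: gets.chomp returns a String, so the script calls to_base on a String rather than an Integer (use gets.chomp.to_i), and inside the methods the script refers to an undefined variable n instead of the receiver, self. The accepted answer delegates to the built-in to_s(base), but the assignment asks for the manual repeated-division algorithm. Here is a sketch of that approach; the method names come from the prompt, everything else is illustrative:

```ruby
# Extend Integer with a manual base-conversion algorithm:
# repeatedly divide by the base, collecting remainders as digits.
class Integer
  def to_base(base)
    return "0" if zero?
    n = self
    digits = ""
    while n > 0
      digits = (n % base).to_s + digits # prepend the next (least significant) digit
      n /= base                         # integer division drops that digit
    end
    digits
  end

  def to_oct
    to_base(8)
  end
end
```

With this in place, the original script works once the input is converted: `n = gets.chomp.to_i` followed by `puts n.to_base(2)`.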
https://codedump.io/share/u8eUsWrDNNlP/1/convert-integer-to-octal-using-class-integer-and-multiple-methods
Ralf Wildenhues <address@hidden> wrote on 18/12/2009 08:45:41: > From: Ralf Wildenhues <address@hidden> > To: Joakim Tjernlund <address@hidden> > Cc: Peter Johansson <address@hidden>, address@hidden > Date: 18/12/2009 08:45 > Subject: Re: ifdef expessions in Makefile.am > > * Joakim Tjernlund wrote on Thu, Dec 17, 2009 at 11:29:19PM CET: > > if @TEST@ then > > SUBDIRS+= dir1/dir2/@TEST@ > > SUBDIRS+= dir3/dir2/@TEST@ > > SUBDIRS+= dir4/@TEST@ > > .... > > endif > > The syntax is > if TEST > SUBDIRS += ... > endif > > and there should not be any indentation of `if' and `endif', and at most > spaces for indentation of variable assignments, no TABs. > > You already learned that running aclocal and autoconf is required; > the first time you use AM_CONDITIONAL, the code of this macro needs to > be added to your aclocal.m4 file. > > If you need to maintain a patch against a Libtool macro, then what most > users do is write a bootstrap or autogen.sh script which consists of > something like > > libtoolize [--copy] [--install] > patch -p0 - <<EOF > here comes your patch text for m4/libtool.m4 > EOF > aclocal -I m4 > autoconf > automake -Wall Yes, thanks for the script layout. What would be even cooler is if one could add a custom macro in acinclude.m4 that would override the libtool.m4 macro creating the -fPIC option and replace it with -fpic instead. Jocke
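Pulling Ralf's pieces together, a minimal sketch of how the conditional is declared and consumed. The conditional name TEST and the directory names are placeholders from the thread, not a real project; a real tree also needs each listed subdirectory's Makefile registered in AC_CONFIG_FILES:

```
# configure.ac
AC_INIT([example], [1.0])
AM_INIT_AUTOMAKE([foreign])
AC_ARG_ENABLE([test],
  [AS_HELP_STRING([--enable-test], [build the optional subdirectories])])
# Defines the TEST conditional that Makefile.am tests with "if TEST".
AM_CONDITIONAL([TEST], [test "x$enable_test" = xyes])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

# Makefile.am -- "if"/"endif" unindented, spaces (not TABs) before "+="
SUBDIRS =
if TEST
SUBDIRS += dir1/dir2 dir3/dir2 dir4
endif
```

After editing configure.ac, rerun aclocal, autoconf, and automake (as in the bootstrap script above) so the AM_CONDITIONAL code lands in aclocal.m4.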
http://lists.gnu.org/archive/html/automake/2009-12/msg00045.html
The bug tracker for setuptools 0.7 or higher is on BitBucket. Created on 2012-07-09.15:13:45 by tseaver, last changed 2012-07-10.12:32:34 by tseaver.

That code does work for the case I'm currently working on (breaking out the 'persistent' package into its own project, and making 'ZODB3' depend on it). I would have preferred a pkg_resources API which gave me direct access to the "root" of an installed distribution: include paths are normally going to be relative to there, rather than to a given Python package. E.g., the BTrees code in ZODB3 uses:

    #include <persistent/cPersistence.h>

I think the same will apply to packages like SciPy -> NumPy.

Yeah, like that, except I'm not sure I'd make '..' the default; I'd probably want to require an explicit path. Did that code work for you?

PJE wrote: >.

Maybe something like::

    import os
    from pkg_resources import require
    from pkg_resources import resource_filename

    class ModuleHeaderDir(object):

        def __init__(self, require_spec, where='..'):
            # By default, assume top-level pkg has the same name as the
            # upstream distribution.
            # Also assume that headers are located in the package dir, and
            # are meant to be included as follows:
            #   #include "upstream/header_name.h"
            self._require_spec = require_spec
            self._where = where

        def __str__(self):
            require(self._require_spec)
            return os.path.abspath(
                resource_filename(self._require_spec, self._where))

And in the downstream package's setup.py::

    from setuptools import ModuleHeaderDir
    from setuptools import Extension
    from setuptools import setup

    setup(name='downstream',
          ext_modules=[Extension('_someExtension',
                                 ['src/downstream/_someExtension.c'],
                                 include_dirs=[ModuleHeaderDir('upstream')])],
          # ...
          )

On Mon, Jul 9, 2012 at 5:31 PM, Tres Seaver wrote:
> I tried making a generator function for the 'include_dirs' argument to
> the Extension in the downstream setup.py, but distutils barfs if it is
> not a bare list of strings.
Your proposed solution isn't workable, either: the 'pkg_resources' code can't find the upstream distribution at module scope (before calling setup), which means that 'setup_requires' can't install it before it is needed. I tried making a generator function for the 'include_dirs' argument to the Extension in the downstream setup.py, but distutils barfs if it is not a bare list of strings. The only workaround I have found to date is to include the headers in the downstream package (e.g., via 'svn:externals' hackery). Currently, easy_install only modifies files in the directories specified via the command line or configuration files; it is not a backwards-compatible change for it to install things someplace else. More specifically: you can't properly *uninstall* an egg if its header files are installed elsewhere. See issue41 for more discussion on the general problem of adding post-install operations like this to easy_install. The critical issue is that there's no uninstall operation implemented, as compared to say pip or packaging. If you need to access headers from another module, the only way to do this right now is to store the headers in a package directory, and then have the depending setup.py use pkg_resources.resource_filenmae() to fetch the header filename(s) for inclusion. It's not a great approach, but it's the only thing that will work consistently w/the current architecture. Notice, by the way that easy_install is an embedded command in many systems, which would break if it suddenly had system-wide side-effects. Not the least of such embeddings is the implementation of the setup_requires and tests_require keywords, which invoke easy_install to fetch the needed eggs, and put them in a temporary location. It simply won't work to have those things install system header files. This is why easy_install doesn't do post-install scripts, and CAN'T do them without some explicit request for it to run said post-install scripts. 
First part: get the 'headers' value mapped onto an 'EGG-INFO/headers.txt' file. I'm not sure why you believe it is not fixable. For instance, I think we could approach this in two steps: - First, update 'egg_info' command such that it creates an 'EGG-INFO/headers.txt' file whose contents are the values passed as 'headers' (one per line). While existing distributions would be missing this file, newly-created ones would have it. - Then, when installing a distribution, check for a 'headers.txt' info file. If present, use its contents to drive the distutils 'install_headers' machinery. This isn't fixable within the current easy_install architecture, unfortunately, but is fixable in "packaging", at least in principle. Ref:.
http://bugs.python.org/setuptools/issue142
How can I execute a shell script from C in Linux? You can use system: system("/usr/local/bin/foo.sh"); This will block while executing it using sh -c, then return the status code. If you need more fine-grade control, you can also go the fork pipe exec route. This will allow your application to retrieve the data outputted from the shell script. If you're ok with POSIX, you can also use popen()/pclose() ( ) #include <stdio.h> #include <stdlib.h> int main(void) { /* ls -al | grep '^d' */ FILE *pp; pp = popen("ls -al", "r"); if (pp != NULL) { while (1) { char *line; char buf[1000]; line = fgets(buf, sizeof buf, pp); if (line == NULL) break; if (line[0] == 'd') printf("%s", line); /* line includes '\n' */ } pclose(pp); } return 0; } It depends on what you want to do with the script (or any other program you want to run). If you just want to run the script system is the easiest thing to do, but it does some other stuff too, including running a shell and having it run the command (/bin/sh under most *nix). If you want to either feed the shell script via its standard input or consume its standard output you can use popen (and pclose) to set up a pipe. This also uses the shell (/bin/sh under most *nix) to run the command. Both of these are library functions that do a lot under the hood, but if they don't meet your needs (or you just want to experiment and learn) you can also use system calls directly. This also allows you do avoid having the shell (/bin/sh) run your command for you. The system calls of interest are fork, execve, and waitpid. You may want to use one of the library wrappers around execve (type man 3 exec for a list of them). You may also want to use one of the other wait functions ( man 2 wait has them all). Additionally you may be interested in the system calls clone and vfork which are related to fork. fork duplicates the current program, where the only main difference is that the new process gets 0 returned from the call to fork. 
The parent process gets the new process's process id (or an error) returned. execve replaces the current program with a new program (keeping the same process id). waitpid is used by a parent process to wait on a particular child process to finish. Having the fork and execve steps separate allows programs to do some setup for the new process before it is created (without disturbing themselves). This includes changing standard input, output, and stderr to different files than the parent process used, changing the user or group of the process, closing files that the child won't need, changing the session, or changing the environment variables.

You may also be interested in the pipe and dup2 system calls. pipe creates a pipe (with both an input and an output file descriptor). dup2 duplicates a file descriptor as a specific file descriptor (dup is similar, but duplicates a file descriptor to the lowest available file descriptor).

I prefer fork + execlp for "more fine-grade" control, as doron mentioned. Example code shown below. Store your command in a char array parameters and reserve space for the result.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define RESULT_SIZE 1000

int main(void)
{
    int fd[2];
    pid_t childpid;
    char parameters[] = "ls -al";      /* the command to run */
    char result[RESULT_SIZE];

    pipe(fd);
    if ((childpid = fork()) == -1) {
        fprintf(stderr, "FORK failed");
        return 1;
    } else if (childpid == 0) {
        close(1);                      /* redirect stdout ...          */
        dup2(fd[1], 1);                /* ... into the pipe            */
        close(fd[0]);
        execlp("/bin/sh", "/bin/sh", "-c", parameters, NULL);
        _exit(1);                      /* only reached if execlp fails */
    }
    close(fd[1]);                      /* parent: close the write end so read() sees EOF */
    wait(NULL);
    ssize_t n = read(fd[0], result, RESULT_SIZE - 1);
    if (n > 0)
        result[n] = '\0';
    printf("%s\n", result);
    return 0;
}

A simple way is:

#include <stdio.h>
#include <stdlib.h>

#define SHELLSCRIPT "\
#!/bin/bash \n\
echo \"hello\" \n\
echo \"how are you\" \n\
echo \"today\" \n\
"
/* You can also write it with a char array instead of a macro. */
/* You can split it into many strings, concatenate them, and pass the result to system(concatenated_string); */

int main()
{
    puts("Will execute sh with the following script :");
    puts(SHELLSCRIPT);
    puts("Starting now:");
    system(SHELLSCRIPT); /* it will run the script inside the C code */
    return 0;
}

Say thanks to Yoda @
http://ebanshi.cc/questions/106072/how-to-execute-a-shell-script-from-c-in-linux
This class caches features of a given QgsVectorLayer. More... #include <qgsvectorlayercache.h> This class caches features of a given QgsVectorLayer. The cached features can be indexed by QgsAbstractCacheIndex. Proper indexing for a given use-case may speed up performance substantially. Definition at line 38 of file qgsvectorlayercache.h. Definition at line 22 of file qgsvectorlayercache.cpp. Definition at line 42 of file qgsvectorlayercache.cpp. Adds a QgsAbstractCacheIndex to this cache. Cache indices know about features present in this cache and decide, if enough information is present in the cache to respond to a QgsFeatureRequest. The layer cache will take ownership of the index. Definition at line 117 of file qgsvectorlayercache.cpp. Is emitted when an attribute is changed. Is re-emitted after the layer itself emits this signal. You should connect to this signal, to be sure, to not get a cached value if querying the cache. Returns the maximum number of features this cache will hold. In case full caching is enabled, this number can change, as new features get added. Definition at line 53 of file qgsvectorlayercache.cpp. Checks if the information required to complete the request is cached. i.e. If all attributes required and the geometry is held in the cache. Please note, that this does not check, if the requested features are cached. Definition at line 319 of file qgsvectorlayercache.cpp. Is emitted, when a new feature has been added to the layer and this cache. You should connect to this signal instead of the layers', if you want to be sure that this cache has updated information for the new feature Gets the feature at the given feature id. Considers the changed, added, deleted and permanent features Definition at line 134 of file qgsvectorlayercache.cpp. Gets called, whenever a feature has been removed. Broadcasts this information to indices, so they can invalidate their cache if required. Definition at line 185 of file qgsvectorlayercache.cpp. 
When filling the cache, this signal gets emitted once the cache is fully initialized. Query this VectorLayerCache for features. If the VectorLayerCache (and moreover any of its indices) is able to satisfy the request, the returned QgsFeatureIterator will iterate over cached features. If it's not possible to fully satisfy the request from the cache, part or all of the features will be requested from the data provider. Definition at line 261 of file qgsvectorlayercache.cpp. Check if a certain feature id is cached. Definition at line 314 of file qgsvectorlayercache.cpp. Returns the layer to which this cache belongs. Definition at line 168 of file qgsvectorlayercache.cpp. When filling the cache, this signal gets emitted periodically to notify about the progress and to be able to cancel an operation. Removes the feature identified by fid from the cache if present. Definition at line 163 of file qgsvectorlayercache.cpp. Gets called, whenever the full list of feature ids for a certain request is known. Broadcasts this information to indices, so they can update their tables. Definition at line 173 of file qgsvectorlayercache.cpp. If this is enabled, the subset of cached attributes will automatically be extended to also include newly added attributes. Definition at line 122 of file qgsvectorlayercache.cpp. Enable or disable the caching of geometries. Definition at line 58 of file qgsvectorlayercache.cpp. Sets the maximum number of features to keep in the cache. Some features will be removed from the cache if the number is smaller than the previous size of the cache. Definition at line 48 of file qgsvectorlayercache.cpp. Set the subset of attributes to be cached. Definition at line 71 of file qgsvectorlayercache.cpp.. Definition at line 76 of file qgsvectorlayercache.cpp. Definition at line 288 of file qgsvectorlayercache.h. Definition at line 286 of file qgsvectorlayercache.h. Definition at line 287 of file qgsvectorlayercache.h.
http://www.qgis.org/api/classQgsVectorLayerCache.html
al_draw_glyph man page al_draw_glyph — Allegro 5 API Synopsis #include <allegro5/allegro_font.h> void al_draw_glyph(const ALLEGRO_FONT *f, ALLEGRO_COLOR color, float x, float y, int codepoint) Description Draws the glyph that corresponds with codepoint in the given color using the given font. If font does not have such a glyph, nothing will be drawn. To draw a string as left to right horizontal text you will need to use al_get_glyph_advance(3) to determine the position of each glyph. For drawing strings in other directions, such as top to down, use al_get_glyph_dimensions(3) to determine the size and position of each glyph. If you have to draw many glyphs at the same time, use al_hold_bitmap_drawing(3) with true as the parameter, before drawing the glyphs, and then call al_hold_bitmap_drawing(3) again with false as a parameter when done drawing the glyphs to further enhance performance. Since 5.1.12 See Also al_get_glyph_width(3), al_get_glyph_dimensions(3), al_get_glyph_advance(3). Referenced By al_get_glyph_advance(3), al_get_glyph_dimensions(3), al_get_glyph_width(3), al_set_fallback_font(3).
https://www.mankier.com/3/al_draw_glyph
A password store for the AccountManagerPlugin

Description

This plugin is a password store for the AccountManagerPlugin. It provides authentication and groups from Lightweight Directory Access Protocol (LDAP) enabled services, including OpenLdap, ActiveDirectory and OpenDirectory. Users are authenticated by performing an ldap_bind against a directory using their credentials. The plugin will also pull the email address and displayName from the directory and populate the session_attribute table.

Key features:
- Can use a service account to do lookups, or anonymous binding.
- Can use SSL if openssl is configured correctly.
- Configurable: many options to deal with the differences between directories and schema.
- Uses both memory and db based caching to improve performance.
- Supports large directories:
  - Searches groups more efficiently using Member.
  - Traverses up the tree to find subgroups.
- Can expand directory groups into the Trac namespace.
- Supports paged LDAP searches to circumvent server size limits.

Installation

Prerequisites
- You must install AccountManagerPlugin to use this plugin.
- Python-LDAP is also required.
- For SSL, you will have to install and configure OpenSSL to work with valid certificates. You can test using ldapsearch -Z.

Installation steps

General instructions on installing Trac plugins can be found on the TracPlugins page. Starting from v0.3, a database upgrade will be required as part of the installation.
- Install the plugin and its prerequisites.
- Update the database: trac-admin /var/trac/instance upgrade
- Restart the tracd service or your webserver.

See ConfigurationExamples.

Common Issues
- When using SSL, the server won't authenticate. Make sure you can use ldapsearch -Z with the same parameters from the same host, and resolve the issues there.
A handy way to do that is to use:

    joe@admin > ldapsearch -d8 -Z -x -b dc=base,dc=net -D binding@base.net -W -H ldaps://ldap.base.net -s one 'objectclass=person'

The -d8 should show you TLS errors.
- If you see Trac throwing an exception similar to "OPERATIONS_ERROR: In order to perform this operation a successful bind must be completed on the connection" when you know the bind user/pass is correct, then try connecting to Active Directory on port 3268. This may happen when Active Directory is running across multiple machines.

Recent Changes

Author/Contributors

Author: pacopablo
Maintainer: bebbo
Contributors: sandinak, rjollos
https://trac-hacks.org/wiki/DirectoryAuthPlugin?version=20
Chris Oliver's Weblog

F3 vs Processing

One of the comments to my earlier posts asked whether F3 is a competitor to Processing. No, not really. F3 is intended to provide a general-purpose GUI development platform, not just an "electronic scratchpad" for 2D graphics like Processing. That said, F3 can do many of the same things as Processing; for example, below is the F3 equivalent to this simple Processing example.

Note the use of the F3 bind operator in this example. F3 performs dependency-based evaluation of the right-hand operand of the bind operator. Whenever any of the values it depends on changes, the result is incrementally reevaluated and, if the overall value of the expression changed, the left-hand side is automatically updated. In this example, when the gx, gy, rightColor, and leftColor attributes are modified in the update() operation, the dimensions and colors of the rectangles are also implicitly updated due to the bindings.

```
import f3.ui.*;
import f3.ui.canvas.*;

public class Mouse1d extends CompositeNode {
    public attribute width: Number;
    public attribute height: Number;
    attribute gx: Number;
    attribute gy: Number;
    attribute leftColor: Number;
    attribute rightColor: Number;
    operation update(x:Number);
}

attribute Mouse1d.gx = 15;
attribute Mouse1d.gy = 35;
attribute Mouse1d.leftColor = 0.0;
attribute Mouse1d.rightColor = 0.0;

operation Mouse1d.update(x:Number) {
    leftColor = -0.002 * x/2 + 0.06;
    rightColor = 0.002 * x/2 + 0.06;
    gx = x/2;
    gy = 100-x/2;
    if (gx < 10) { gx = 10; } else if (gx > 90) { gx = 90; }
    if (gy > 90) { gy = 90; } else if (gy < 10) { gy = 10; }
}

function Mouse1d.composeNode() =
    Clip {
        shape: Rect {width: bind width, height: bind height}
        content:
        [Rect {
            height: bind height
            width: bind width
            fill: black
            selectable: true
            onMouseMoved: operation(e:CanvasMouseEvent) {
                update(e.localX);
            }
        },
        Rect {
            x: bind width/4-gx, y: bind width/2-gx
            height: bind gx*2
            width: bind gx*2
            fill: bind new Color(0.0, leftColor + 0.4, leftColor + 0.6, 1.0)
        },
        Rect {
            x: bind width/1.33-gy, y: bind width/2-gy
            height: bind gy*2
            width: bind gy*2
            fill: bind new Color(0.0, rightColor + 0.2, rightColor + 0.4, 1.0)
        }]
    };

Frame {
    visible: true
    content: Canvas {
        content: Mouse1d {
            width: 200
            height: 200
        }
    }
}
```

Posted at 11:15AM Nov 11, 2006 by Christopher Oliver in F3 | Comments[2]
http://blogs.sun.com/chrisoliver/entry/f3_vs_processing
When you log in to a docassemble server, the options available to you in the menu in the upper-right corner will depend on what privileges have been enabled for your account. There are four special privileges built in to docassemble:

- admin: for people who need complete control over the server.
- developer: for people who need to use the server to develop, test, and debug interviews, but do not need to be able to access user data.
- advocate: for people who need to be able to use the Monitor to provide remote assistance to users, or to use interviews or APIs that provide access to user data.
- trainer: for people who need to be able to train machine learning models.

By default, when a new user account is created, the user is given only the privilege of user. This is the "lowest" level of privilege; users with any other privilege can do everything a user with user privileges can do. A user with privileges of admin can control the privileges of other users using the User List screen.

Menu items for users with special privileges

Monitor

The Monitor is a feature of docassemble's Live Help system that allows the user to chat with or share a screen with an active user. Users with privileges of admin or advocate can access the Monitor.

Train

The "Train" menu item allows the user to train machine learning models. This is part of docassemble's machine learning system. Users with privileges of admin or trainer can access it.

Package Management

The "Package Management" screen allows the user to install, update, or uninstall Python packages that exist on the server. It has three parts:

- Upgrade docassemble
- Install or update a package
- Update or uninstall an existing package

Python packages play an important role in docassemble. The core functionality of docassemble resides in two Python packages: docassemble.webapp and docassemble.base. (There is also a "namespace" package called docassemble, which is necessary but contains no substantive code.)
Python packages are the mechanism by which docassemble interviews are published and shared. Also, if any of your interviews needs to perform a non-standard function, you can install a third-party Python package that provides that functionality. For example, suppose you wanted your interview to integrate with your organization's Slack server. You could install the Python package called slackclient, and then use an imports block to incorporate the functionality of slackclient into your interview.

The "Package Management" screen is where users with admin or developer privileges can manage Python packages. The screen effectively serves as a front end to Python's pip utility.

Upgrade docassemble

Under the "Upgrade docassemble" part, you can see what version of docassemble you are running. If a new version of docassemble is available, it will also show you that version. You can press the "Upgrade" button to upgrade to the latest version of docassemble from PyPI. This updates only docassemble's Python packages; it does not upgrade any of the system files that docassemble uses.

A few times a year, you may see a notification on the screen saying: "A new docassemble system version is available. If you are using Docker, install a new Docker image." This means that non-Python files in the docassemble system have been upgraded. If you are using Docker with persistent storage, you should then upgrade your container.

Install or update a package

Under the "Install or update a package" part, you can install a new Python package on the system. There are three ways you can provide a package to docassemble:

- GitHub
- ZIP file
- PyPI

PyPI is the standard place on the internet where Python packages are published. However, some Python packages are hosted directly on GitHub, and some are distributed as ZIP files. When you provide a URL to a GitHub repository, you can choose which branch of that repository you wish to install.
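The imports block mentioned earlier is a short YAML block inside the interview file. As a hedged sketch (whether slackclient is the right module name for your particular integration is an assumption carried over from the example above):

```yaml
imports:
  - slackclient
```

After this block, the interview's Python code can refer to the slackclient module just as if it had been imported with a Python import statement.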
The Playground allows you to package one or more docassemble interviews as a Python package and publish it on GitHub, publish it on PyPI, or save it as a ZIP file. Thus, you can use the Playground on a development server to create and publish a package, and then use "Package Management" on a production server to install the package.

If you do not want your package to be available for anyone in the world to read, you can store it in a private repository on GitHub, and then use GitHub to obtain a special URL for the repository that embeds an "OAuth" authentication code. These URLs can be used in the "Github URL" field of "Package Management."

Update or uninstall an existing package

In this part of the "Package Management" screen, you can see a list of all of the Python packages that are already installed on your system. You can click "Uninstall" to remove a package, and you can click "Update" to update a package to the latest version. Be careful, however: some of the listed packages are dependencies of other packages, and those other packages may depend on a specific older version of the package. Thus, you could break your system if you click "Update" on the wrong package. If the only packages you "Update" or "Uninstall" are docassemble extension packages, or dependency packages that you installed originally, then there is little risk that you will cause a problem with your system.

Logs

The "Logs" screen provides a web interface to some of the log files on the underlying system. In a multi-server arrangement, log messages are consolidated from all servers, and "Logs" provides a way to see all of them together. The log messages shown in the box on the screen are just the tail end of the actual log file; to see the complete log file, you will need to download it. Note that the log files are managed by logrotate.
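Because logrotate periodically renames the files, the messages you want may be split between a log file and its most recent rotation. A small, hypothetical Python helper (not part of docassemble itself; the function name and behavior are my own) that stitches the two together:

```python
from pathlib import Path

def recent_log_lines(log_dir, base="docassemble.log", n=50):
    """Return the last n log lines across the live file and its most
    recent rotation (hypothetical helper, not part of docassemble)."""
    lines = []
    # the rotated file usually holds the older entries, so read it first
    for name in (base + ".1", base):
        path = Path(log_dir) / name
        if path.exists():
            lines.extend(path.read_text().splitlines())
    return lines[-n:]
```

This assumes only one level of rotation matters; for a deep dive you would still download the full files.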
Generally, the current log file for docassemble.log is simply docassemble.log, the next most recent is docassemble.log.1, and then docassemble.log.2, etc. As a result, the log messages you want to view may not be in docassemble.log, but could be in docassemble.log.1. Also, given the way log rotation interacts with open file handles, note that the very most recent log messages may be in docassemble.log.1 rather than docassemble.log.

Here is a summary of what the different log files represent:

- access.log contains entries for every request made to the web server.
- docassemble.log contains log messages and most errors generated by the web application.
- error.log contains error messages generated by the web server. Most error messages in docassemble are trapped before they can be raised by the web server, so docassemble.log should generally be the first place you look for an error. If you get a "500 Internal Server Error" in the browser, however, the error message is likely in error.log.
- worker.log contains log messages and error messages generated by background tasks.

In unusual situations, you may need to review other log files. For more information about how to find other log files, see the troubleshooting section.

Playground

The Playground is an area where users who have privileges of admin or developer can develop and test interviews using the web browser. Interviews can also be developed "off-line" by assembling a package containing YAML files and other files in the appropriate locations. Documentation for the Playground can be found in the Playground section.

Utilities

The "Utilities" screen provides several miscellaneous services that do not fit in anywhere else. They are available to users with admin or developer privileges.
Get list of fields from PDF/DOCX template

If you are assembling a document using the pdf attachment file or the docx attachment file options, you can use this utility to generate a first draft of a question that fills in all of the fields referenced in the PDF or Word file.

If you have a Word file that you are referencing with docx attachment file, you probably do not need to use this utility, because your template can be populated directly using variables in your interview; populating the template with a list of fields is optional. This utility is primarily useful when you are using the pdf attachment file option. That option requires you to provide a dictionary of fields (or field code, or field variables) in which the keys are the names of fields in the underlying PDF file, and this utility provides a handy way of obtaining the list of fields.

If your PDF field names use non-ASCII characters, such as characters with accent marks, the various software packages that docassemble uses to fill PDF forms may not be able to fill those fields properly. If you find that some fields are being filled and others are not, check to see if the fields that are not being filled have accented characters. In the PDF template, try replacing the accented characters with non-accented characters, and see if you get a better result.

Translate system phrases into another language

The second utility on the "Utilities" screen is helpful if your site does not use English, or you are developing multi-lingual interviews. There are many words and phrases that appear to the user on various screens and in various circumstances. By default, docassemble only provides these words and phrases in English. However, docassemble allows you to provide a YAML dictionary that maps each English word or phrase to a word or phrase in another language. For more information about how this feature works, see the words directive.
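The YAML dictionary consumed by the words directive has this general shape — a sketch only; the phrases and French translations below are illustrative, not taken from a real docassemble words file:

```yaml
fr:
  Continue: Continuer
  Help: Aide
  Sign in: Se connecter
```

The top-level key is the language code, and each entry maps an English phrase to its translation in that language.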
This utility will produce a draft YAML file that you can then edit, store within a package, and reference from the words directive of your Configuration. To use this utility, provide a language in the form of a lowercase ISO-639-1 code (e.g., fr for French) and press "Translate." You will be provided with a text box containing a YAML data structure that you can copy and paste into a text file. If you have configured a Google API key inside your Configuration, this utility will use it to suggest machine translations of the phrases. If you have already configured a words directive, this utility will pass through all of the words defined in existing words files, and will only try to translate phrases that do not already exist in existing words files.

Download an interview phrase translation file

The third utility on the "Utilities" screen is helpful if you are developing multi-lingual interviews. It allows you to download an Excel spreadsheet for side-by-side translation of the phrases used in a given interview. Once translated, the spreadsheet can be included in a package and mentioned in a translations block. Then docassemble will use the translations in the file when it needs a translation of a given phrase.

To download a translation file from the "Utilities" screen, you need to provide the name of an interview (e.g., docassemble.demo:data/questions/questions.yml) and the target language in ISO-639-1 or ISO-639-3 format (e.g., fr for French). The resulting spreadsheet will contain a row for each unique phrase used in the interview file, including interview files incorporated by reference. The columns of the spreadsheet are:

- interview: the name of the YAML file containing the question that contained the phrase.
- question_id: the id of the question containing the phrase, or the generic name of the question (which could change if the interview changes).
- index_num: a number that indexes the phrase within a given question.
- hash: an MD5 hash of the phrase (which can be used to test whether the text in the "orig_text" column was edited, which it should not be).
- orig_lang: the original language of the phrase, as indicated by the language specifier of the question.
- tr_lang: the language into which the phrase should be translated.
- orig_text: the text of the phrase.
- tr_text: the translated phrase (which is blank if the phrase has not yet been translated).

You can then give the spreadsheet to a translator, who will fill in the "tr_text" column with a translation of the text in the "orig_text" column. The spreadsheet with the completed translations can then be uploaded to the sources folder of a package and included in the interview using a translations block. If the target language of the spreadsheet is French (fr), then the French translations of phrases will be used if the current language in the interview (as determined by the set_language() function) is French.

If the interview contains a translations block, the file or files referenced in the translations block will be scanned, and the translations specified in those files will be included as default translations. If the files referenced in the translations block contain phrases that are not present in the interview, perhaps because they used to be present but are no longer, these extra phrases will be listed at the end of the spreadsheet, and their "index_num" values will be numbered starting with 1000.

Word add-in manifest XML file

The fourth utility is "Download Office add-in manifest file." You will need this if you want to enable a Playground-like task pane inside Microsoft Word. In Microsoft Office, third-party add-ins are enabled through XML "manifest" files. Through this utility, you can download a manifest file that is customized for your server. You need to download this XML file and then install it in your Microsoft Office setup. For more instructions on how to do this, see the Word add-in section.
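Returning to the translation spreadsheet's "hash" column: an MD5 hash of a phrase can be recomputed in a few lines of Python to detect accidental edits to the "orig_text" cells. This is a sketch; the exact normalization docassemble applies to a phrase before hashing is an assumption here:

```python
import hashlib

def phrase_hash(phrase):
    """MD5 of a phrase, usable to detect whether an 'orig_text' cell
    was edited after the spreadsheet was generated (illustrative)."""
    return hashlib.md5(phrase.encode("utf-8")).hexdigest()

def was_edited(orig_text, recorded_hash):
    # True if the current text no longer matches the recorded hash
    return phrase_hash(orig_text) != recorded_hash
```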
User List

The "User List" is available to users with admin privileges. It lists all of the user accounts on the system and allows you to edit a user's information. When you click the "Edit" button for a user account, you can edit that user's profile. Under the "Menu," you can "Add a user," "Invite a user," or "Edit privileges."

Edit a user profile

When you are editing a user profile, you can disable the user's account by unchecking the "Active" checkbox. You can also edit a user's privileges. Note that the selector is a multi-select; you can assign more than one privilege to a user. Privileges are additive: a user who has developer and advocate privileges can do everything a developer can do and everything an advocate can do.

The profile fields that you can edit include:

- E-mail: this must be unique; no two users can have the same e-mail address.
- Country code: an uppercase ISO 3166 country code
- State
- County
- Municipality
- Organization
- Language: a lowercase ISO-639-1 or ISO-639-3 language code representing the user's language.

The words "State," "County," and "Municipality" are actually translated phrases defined in docassemble.base:data/sources/us-words.yml, which is included as a words file in the default Configuration:

```yaml
en:
  First subdivision: State
  Second subdivision: County
  Third subdivision: Municipality
```

You can use the user_info() and set_user_info() functions to retrieve and set the user profile attributes, where the state, county, and municipality are known by these attributes:

- subdivision_first
- subdivision_second
- subdivision_third

Under "Other settings," you have the option to "Delete account but keep shared sessions." This will delete the user's account and all of their data. However, if any of the user's sessions are multi-user interviews that had been joined by another user, those sessions will not be deleted.
You also have the option under "Other settings" to "Delete account and shared sessions." This will delete the user's account and all of their data, including any multi-user interviews the user had joined. These account deletion options can be turned off using the admin can delete account directive in the Configuration.

Add a user

On the "Add a user" page, you can create a new user account by entering an e-mail address and password. Optionally, you can set the user's first and last names. You can also select the user's privileges with a multi-select selector. Note that when a user is added using this tool, the user is not notified that an account has been created.

Invite a user

On the "Invite a user" page, you can send an e-mail invitation to a prospective new user. The privileges of the prospective user can be set in advance. The e-mail will contain a link with an embedded code. Visiting the link will start the process of registration; until that process is completed, no user account will actually exist.

Edit privileges

The "Edit privileges" page allows you to manage the privileges that exist on the system. The built-in privileges (user, admin, developer, advocate, cron, and trainer) cannot be removed. There is one other built-in privilege, customer, which has no special meaning within docassemble itself, so you are able to delete it.

You can create custom privileges using "Edit privileges," assign them to users, and then use the user_has_privilege() and user_privileges() functions in your interviews to do different things depending on what privileges the user has. User privilege assignment can be controlled by:

- Manually editing a user's profile from the User List page.
- Calling the set_user_info() function with the privileges keyword parameter (calling this function requires admin privileges).
- Calling the /api/user/<user_id>/privileges API.
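The additive privilege model described above can be sketched in plain Python. This is purely illustrative — inside an interview you would call docassemble's own user_has_privilege() rather than a helper like this:

```python
def has_any_privilege(user_privileges, allowed):
    """Privileges are additive: grant access if the user holds at least
    one of the privileges the feature accepts (illustrative helper)."""
    return bool(set(user_privileges) & set(allowed))
```

For example, a feature open to admin or advocate would be reachable by a user holding developer and advocate, because the advocate privilege alone is enough.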
For example, you may wish to use the privileges system to keep track of which user accounts are paying customers, or to keep track of different tiers of paying customers.

Configuration

The "Configuration" page allows a user with admin privileges to edit the server Configuration. It provides a YAML text editor in the browser. When the Configuration is saved, the services on the server that depend on the Configuration are restarted.

The configuration file for a given server is found at /usr/share/docassemble/config/config.yml. If you are using cloud-based data storage, the config.yml file is also stored in the cloud whenever the Configuration is saved. If you are using persistent volumes, the configuration is copied to the persistent volume when the server shuts down. Then, when the server starts, the config.yml file is restored from storage.

Menu items for all logged-in users

The following menu items are available to all logged-in users, regardless of privileges.

Available Interviews

The "Available Interviews" page, which is at the URL /list, shows a list of interviews that users can run. The list can be configured using the dispatch directive in the Configuration. If there is no dispatch directive, this page will redirect to the system's default interview, which can be configured using the default interview directive in the Configuration. By default, the "Available Interviews" menu item is not shown; it can be enabled by setting show dispatch link to True in the Configuration.

If an interview is listed under dispatch, but the metadata of the interview contains the specifier unlisted set to True, then the interview will not appear on the /list page, although it will still be usable with a /start shortcut. The URL parameter tag can be used to filter the list of available interviews. For example, if you set tag=estates, then the only interviews that will be listed are those that have estates as one of the tags in the interview metadata.
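A Configuration fragment tying these directives together might look like the following sketch (the package names, shortcut keys, and the estates tag are all hypothetical):

```yaml
default interview: docassemble.hypothetical:data/questions/intro.yml
show dispatch link: True
dispatch:
  estate-plan: docassemble.hypothetical:data/questions/estates.yml
  small-claims: docassemble.hypothetical:data/questions/claims.yml
```

With this, /start/estate-plan launches the first listed interview directly, and /list?tag=estates would show only the interviews whose metadata tags include estates.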
You can use the required privileges specifier in the metadata of each interview listed under dispatch to control whether the interview should appear in the list, depending on the privileges of the user. For information about how to customize the "Available Interviews" page, see the Configuration directives that begin with start page, or configure the start page template, or replace the page with an interview using dispatch interview.

My Interviews

The "My Interviews" page, which is at the URL /interviews, shows a list of the user's existing interview sessions. For more information about how sessions work in docassemble, see the subsections on how you run a docassemble interview and on leaving an interview and coming back. The "My Interviews" menu item can be hidden from the menu using the show interviews link directive in the Configuration. For information about how to customize the "My Interviews" page, see the Configuration directives that begin with interview page, or configure the interview page template, or replace the page with an interview using session list interview.

Profile

The "Profile" page allows the user to edit their user account profile fields. If the user registered using the phone login method, the user can also edit his or her e-mail address; users who registered with other methods cannot edit their e-mail address.

In addition, users with privileges of admin or developer can edit the following fields:

- Country code: an uppercase ISO 3166 country code
- State
- County
- Municipality
- Organization
- Language: a lowercase ISO-639-1 or ISO-639-3 language code representing the user's language.
- PyPI Username
- PyPI Password

From the "Other settings" menu, the user can:

- Change their password
- Configure Google Drive synchronization
- Configure OneDrive synchronization
- Configure multi-factor authentication
- Configure GitHub integration
- Manage API keys
- Manage their account

Whether these commands are available depends on the Configuration. The "Manage account" setting allows the user to delete their account; to disable this, edit the user can delete account directive in the Configuration.

Sign Out

The "Sign Out" menu item logs the user out and expires the user's session cookies. If the user was in the process of using an interview, the interview session will not be deleted.

Troubleshooting

For tips on troubleshooting your docassemble system, see the troubleshooting subsection in the Docker section.
https://docassemble.com.br/docs/admin.html
Tonight we find… well, that it is a bit of a mess; it isn't what was expected. The reason for this, you may have guessed, is that Yahoo uses a script to track the outbound clicks. (If they are smart, they are using this to rank their results, leading to the most oft-clicked results showing up at the top.)

But John has a problem. He needs to collect the proper links from a *lot* of such pages, and doesn't want to fuss with reading through the page source for each of them. So, instead, he saves the page and runs it through a program that will extract the links for him. I put together just such a program, and below I will explain how it works. If you want to run the program, you will need to download a free copy of the Python interpreter here and install it on your computer. Python programs, like HTML, can be written in a regular plaintext editor like Notepad. The programs are normally saved as "something.py" instead of "something.txt". Below, I'll have the program bits in blockquotes, with explanations interspersed.

```python
# A script for John that strips out the search URLs from
# a saved Yahoo search response page
# Alex Halavais, alex@halavais.net
# 15 June 2005

# We need the "regular expression" module
import re
```

Any part of a program that starts with # is just ignored by the interpreter; it is meant for human-legible comments. In fact, code with enough comments, and sensible naming conventions, doesn't need to be documented: it is self-documenting. Anyway, the only real "code" part of this is the request to "import re." When you have a programming task, you often need to import several libraries. Libraries are a bit like toolboxes: they contain the tools you might need on any particular programming job. In this case, I am going to be doing some pattern-matching of text, and so I want to use regular expressions, which is a kind of pattern-matching language.
You can use regular expressions in many computer languages, and they are a bit tricky when you get started, but they make more sense after a while. Just as you can use wildcards to search for things in certain systems (fish* gets you fishing, fisher, fishsticks, etc.), regular expressions allow you to finely tune different kinds of wildcards. OK, so we have our toolbox; on to the next bit…

```python
# We are going to set up a regular expression
# pattern that catches each of the links.
# This relies on noticing that the link anchors
# are assigned to the class "yschttl"
# (Yahoo search title?). The url itself is a
# long internal link with a lot of stuff in it
# we can ignore until we get to the "A//"
# part. That's the beginning of the URL we want.
# We collect everything after that until we hit a quotation mark.
URLS = re.compile('a class="yschttl" .*href=".*A//([^"]+)')
```

This is, again, mostly comments. That last line is scary enough that most people decide immediately after looking at it that they will never learn to program and should go and meditate in the woods. Don't worry, it looks like that to everyone. Basically, it says I'm looking for a link statement in the HTML code that is associated with the class "yschttl" (Yahoo SearCH TiTLe? Yeti SCHool TurTLe?). Really what I am after is the piece between the inner parentheses, which reads [^"]+ and basically means "give me everything at this point until you hit a quotation mark."

Still don't get this part? No problem. You can always put off learning about regex (regular expressions) until you are 40 — no one will think less of you for it, and there are often other ways of getting at the same stuff.

```python
# First, we need to ask what the input and output file names are
fileIn = raw_input('What is the name of the saved html file? ')
fileOut = raw_input('What would you like to call the output file? ')
```

In this part, we gather the names of two files and store them in variables called "fileIn" and "fileOut." I could have called them foo and bar, or anything else I liked, but since they will be storing the file names, it makes sense to label them properly. When I use the = sign here, really I am saying "put the thing on the right into the variable on the left." In fact, in some languages, a <- is used instead, and that makes some sense. Anyway, in each of these cases we are putting in a "raw_input." What is that? It is whatever the user types in. Further, we are telling the computer that before finding out what the user types in, it should print out a quick question. When the interpreter gets to this part of the program, it will print out the question and wait for the user to type in an answer. Then the answer will be stored in fileIn. It then does the same thing for fileOut.

```python
# Now, read in all the text in the file indicated and store it in "inText"
inText = open(fileIn).read()
```

Once again, I am putting stuff into a variable, this time the variable I've named "inText." What am I putting into inText? Well, I've saved some space by telling the interpreter to do a couple of things at once. I want it to open a file called… well, whatever name is stored in fileIn. Then, using that open file (I guess that's one way to interpret the "." there, as "using this, do that"), I want it to read in everything. So all the HTML in that file is shoved into inText. inText can hold a virtually unlimited amount of stuff, so don't worry how much text is in the file.

```python
# Make a list of all the
# things in the text that match the pattern above
# and store it in "theList"
theList = URLS.findall(inText)
```

Here we get to use one of the tools in our "re" (regular expression) library. The tool "findall" takes a pattern (which we defined up above as the URLS pattern) and compares it to some text (in this case, the text held by "inText").
Any matches it finds to that pattern, it puts in a list. That list of matches is stored, naturally, in "theList."

```python
# Open up an output file to write into
f = open(fileOut,'w')
```

OK, last time we opened a file, we did something with it right away, using the "." operator. This time, we want to hold onto the open file for a while, so we are going to put it in "f" (you know, for "file"). The file we are opening will be named whatever is stored in "fileOut", and it will be for "w"riting to, rather than reading. (We could have used an 'r' up above, but when you don't specify, Python assumes you want to read a file.) We will be using "f" to manipulate this file for a while, before we finally close it back up.

```python
# for each of the items in the list
for eachItem in theList:
    # Write out the http:// part that we stripped out above
    f.write('http://')
```

Now we are getting a bit fancy. Computers are good at doing things over and over and over. We want it to go through each of the URLs we found earlier and do something with it. Luckily, that command looks a lot like English. We want it to consider each item separately, and do some things with it. All the stuff we want to do with it will be indented a bit. The first thing we want it to do is write the text 'http://' out to the file. That "." is showing up again. It says, "take the object 'f' and do the following with it," and then tells it to write some text to the object "f" (the file).

```python
    # write each item
    f.write(eachItem)
    # hit return (write a "newline" character)
    f.write('\n')
```

Now we want it to write another thing to the file, this time whichever item it is we are considering at present. It will go through these lines for each item on the list, writing each item once to the file. After that, we want it to write an "enter" or "return" key, also called a "newline," represented by the code '\n'. It needs a special code, because how else can you tell it to write an "enter"?!
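Before trusting the pattern on real saved pages, you can sanity-check it in isolation. The snippet below fabricates a Yahoo-style anchor (an actual saved result page may differ in detail):

```python
import re

# The tutorial's pattern, checked against a fabricated anchor.
URLS = re.compile(r'a class="yschttl" .*href=".*A//([^"]+)')

# A made-up redirect-style link: the real URL hides after the "%3A//"
# (URL-encoded "://"), which is where the pattern's "A//" matches.
sample = '<a class="yschttl" href="http://rds.yahoo.com/S=123/**http%3A//example.com/page">'
hits = URLS.findall(sample)
links = ["http://" + h for h in hits]
```

If the pattern is right for your saved pages, hits holds the bare host-and-path and links holds the reassembled URLs.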
So for each of the items on the list, it will do three things: write "http://" to the file, write one of the URLs it found to the file, and write a newline to the file.

```python
# Close the file.
f.close()
```

Note that we are no longer indenting; this thing we expect it to do only once: take the file "f" and (.) close it.

That's the whole program. When you run it and enter the name of a saved HTML file from Yahoo, it strips out the URLs and writes them to a file you specify. There are two ways to run this program. You have installed Python, right? Can't do anything without that. Once it is installed, on Windows any file with the .py extension will appear with a smiley green snake icon. You can just double-click this and the program should run. Alternatively, from the command line (remember that?) you can type "python yahoome.py" and the program will be run (interpreted) by Python.

With a couple more lines of code, this can be extended to check all of the files in a particular directory for the pattern. With still more lines of code, the program can go and get the pages directly from Yahoo and "scrape" out the URLs. And chances are, with a weekend or two of work, you could be writing programs like that.

Update: If you want to try it, you can right-click and save this zip, which contains the program above and a version that does the whole directory.

4 Comments

Thank you for taking the time to write all this out! I am getting a handle on understanding the process and what it is doing. I'm not able to save the program though, either from the link on this page or directly from the wp-content page.

John: Sorry about that; I forgot that the server chokes on .py files. I've stuffed the two into a zip, and that should make it easier to download.

great learing with you: blogged it over on sousveillance. check out Ayah's work: she is also working on a poem on airport sousveillance from an arabic's woman's perspective.
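As a footnote to the tutorial above: the "whole directory" extension it mentions can be sketched in a few lines of modern Python 3 (the function and output file names here are invented, and raw_input has since become input):

```python
import re
from pathlib import Path

# Same pattern as in the tutorial above.
URLS = re.compile(r'a class="yschttl" .*href=".*A//([^"]+)')

def extract_links_from_dir(folder, out_name="links.txt"):
    """Run the pattern over every .htm/.html file in a folder and
    write the cleaned URLs, one per line (names are invented)."""
    links = []
    for page in sorted(Path(folder).glob("*.htm*")):
        text = page.read_text(errors="ignore")
        links.extend("http://" + hit for hit in URLS.findall(text))
    Path(folder, out_name).write_text("\n".join(links) + "\n")
    return links
```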
http://alex.halavais.net/not-so-scary/
Create Random Torn Photos with ActionScript 3.0

10 June, 2009

One of the most powerful sets of classes in ActionScript 3 revolves around BitmapData and pixel manipulation. In this tutorial we are going to look at dynamically creating a photo with torn edges by compositing several images together.

Requirements

I created this tutorial using Flash Builder 4 Beta, but you can easily use Flex Builder 3 or Flash CS3/CS4.

Prerequisites

You will need to download the PSD file as well as the folder of images included in the source files package. You should also have a basic understanding of the Bitmap, BitmapData, and Loader classes. I will explain in each step what is going on, so if you have never used the Bitmap or BitmapData class you should be fine. Here is an example of what we will end up with; refresh this tutorial to see different masks:

Before we get started, let's take a moment to discuss exactly what we are going to do:

- We will need to load in our photo.
- Then we need to load in our supporting images: an alpha mask, image texture, and edges mask.
- We will copy over the BitmapData of our photo using the alpha mask to cut out an image with transparent edges.
- Add 3 layers of the texture image with a blend mode set to multiply.
- Cut out edges from our texture using the edges mask image.
- Rinse and repeat.

Now that we have a plan, let's get started!

Step 1: Creating A Preloader

We are going to set up a simple preloader since each image we generate is comprised of several "layers". Create a new project called TornImageDemo and open up the Doc Class.
Let's use the following code in the Doc Class:

package {
    import flash.display.Bitmap;
    import flash.display.Loader;
    import flash.display.Sprite;
    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.events.Event;
    import flash.events.IOErrorEvent;
    import flash.net.URLRequest;
    import flash.system.LoaderContext;
    import flash.utils.Dictionary;

    [SWF( backgroundColor="#B5AA97", framerate="31" )]

    /**
     * @author Jesse Freeman aka @theFlashBum |
     */
    public class TornImageDemo extends Sprite {

        private static const PHOTO : String = "photo";
        private static const MASK : String = "mask";
        private static const TEXTURE : String = "texture";
        private static const EDGES_MASK : String = "edges_mask";
        private static const BASE_URL : String = "images";

        private var loader : Loader = new Loader( );
        private var currentlyLoading : Object;
        private var preloadList : Array = new Array(
            {name:MASK, src:"photo_mask.png"},
            {name:PHOTO, src:"photo.jpg"},
            {name:TEXTURE, src:"photo_texture.jpg"},
            {name:EDGES_MASK, src:"photo_edges_mask.png"} );
        private var layers : Dictionary = new Dictionary( true );
        private var context : LoaderContext;
        private var skinPath : String;

        public function TornImageDemo() {
            stage.scaleMode = StageScaleMode.NO_SCALE;
            stage.align = StageAlign.TOP_LEFT;
            context = new LoaderContext( );
            context.checkPolicyFile = true;
            skinPath = "/skin1/";
            preload( );
        }

        /**
         * Handles preloading our images. Checks to see how many are left then
         * calls loadNext or compositeImage.
         */
        protected function preload() : void {
            if (preloadList.length == 0) {
                init( );
            } else {
                loadNext( );
            }
        }

        /**
         * Loads the next item in the preloadList
         */
        private function loadNext() : void {
            currentlyLoading = preloadList.shift( );
            loader.contentLoaderInfo.addEventListener( Event.COMPLETE, onLoad );
            loader.contentLoaderInfo.addEventListener( IOErrorEvent.IO_ERROR, onError );
            loader.load( new URLRequest( BASE_URL + skinPath + currentlyLoading.src ), context );
        }

        private function onError(event : IOErrorEvent) : void {
            trace( "IOErrorEvent", event );
            preload( );
        }

        /**
         * Handles onLoad, saves the BitmapData then calls preload
         */
        private function onLoad(event : Event) : void {
            loader.contentLoaderInfo.removeEventListener( Event.COMPLETE, onLoad );
            layers[currentlyLoading.name] = Bitmap( event.target.content ).bitmapData;
            currentlyLoading = null;
            preload( );
        }

        private function init() : void {
            // Need to do something here
        }
    }
}

As you can see, we are creating a simple "loop" through our preload, loadNext and onLoad methods. We have also set up an array that contains each of the images we will preload. After setting up the stage we call the preload method and check the length of our preload array. If there is an item in the array we call loadNext and begin loading it. Once it is loaded we save it to a dictionary, then call preload again. Once preloading is done we call init and are ready to go.

Let's talk about each of the images we are going to load.

Step 2: Creating the Image Mask Template

Let's open the PSD you downloaded in the Prerequisites part of this tutorial. As you can see, I have set this up to show you what our end result will look like in Flash. If you check the layer comps you will see how each layer of the photo will need to be exported. Let's talk about what each layer does:

- Preview represents what the image will look like in Flash when we apply the masks and textures. You do not need to export this.
- Photo Mask represents the alpha mask we will be using to "cut" transparent edges out of the photo. Alpha masks work just like regular masks, except you use them when calling copyPixels on BitmapData.
- Texture represents the actual photo texture we will be using to add some detail to our photo. The texture is also where we will get our edges from when using the edge mask.
- Edges Mask represents another alpha mask that we'll use to cut out the edges around the photo. Edges represent any folded-over pieces of paper or small tears around the edge of the image.

We will need to save out these layers and put them in our project. The Mask and Edges Mask should be PNG-24s and the Texture can be a JPEG. The images zip (which you should have downloaded in the Prerequisites section) has an images folder that contains all of our outputted images for this template, along with 2 others, so we can randomize the mask we apply to the image.

Here are some screenshots I took when creating this template, showing you my file settings:

This is the photo mask as a PNG-24.

This is the photo texture saved out as a JPEG. Notice how it is not transparent? This is because we will use the above alpha mask to cut out the parts of the image we don't need.

Our final image is the edges mask. This is similar to our photo mask and should be a PNG-24.

Once you unzip the images file, move it into your bin-debug folder or wherever you compile your final SWF. As you can see, I have broken each folder up into skinX, and each set contains a photo_mask.png, a photo_edges_mask.png, a photo.jpg and a photo_texture.jpg.

Step 3: Applying the layers

At this point we have hardcoded our class to load in mask skin1. Do a quick build and check your browser connections to make sure the photo, mask, edges mask and texture are correctly loading. Sometimes you have to play around with the local security settings of the Flash Player to enable local file access to the images.
Likewise, it is important to have full security access to these images when running this from a server, since we will be manipulating their BitmapData. Making sure you have a cross-domain file is key when deploying these types of projects.

Now that everything is loading, let's add the following method:

/**
 * Composite image
 */
private function compositeImage() : void {
    // Create BitmapData for the final image
    var bmd : BitmapData = new BitmapData( layers[PHOTO].width, layers[PHOTO].height, true, 0xffffff );
    // Get width and height for cutting out the image
    var rect : Rectangle = new Rectangle( 0, 0, layers[PHOTO].width, layers[PHOTO].height );
    var pt : Point = new Point( 0, 0 );
    // This is our container while we apply the texture
    var imageComposite : BitmapData = new BitmapData( layers[PHOTO].width, layers[PHOTO].height, true, 0xffffff );
    // Copy pixel data from the photo over to the container using layers[MASK] to cut out the shape
    imageComposite.copyPixels( layers[PHOTO], rect, pt, layers[MASK], null, true );
    // Multiply the texture over the photo three times (reconstructed: these
    // lines were lost from the original listing; the prose below describes them)
    for (var i : int = 0; i < 3; i++) {
        imageComposite.draw( layers[TEXTURE], null, null, BlendMode.MULTIPLY );
    }
    if (layers[EDGES_MASK]) {
        // Copy the edges on top of the entire composited image
        imageComposite.copyPixels( layers[TEXTURE], rect, pt, layers[EDGES_MASK], null, true );
    }
    // Copy the composite to clean BitmapData, using the alpha mask one last time (reconstructed)
    bmd.copyPixels( imageComposite, rect, pt, layers[MASK], null, true );
    var finalImage : Bitmap = new Bitmap( bmd );
    addChild( finalImage );
    finalImage.x = 50;
    finalImage.y = 50;
}

You will also need to import the following classes:

import flash.display.BitmapData;
import flash.display.BlendMode;
import flash.geom.Point;
import flash.geom.Rectangle;

Finally, add the following method call to our init method:

compositeImage();

Now if you do a compile you should see the following image:

Let's talk about what is going on under the hood of compositeImage. As you can see, we are pulling each image out of the dictionary and either copying out or drawing over the BitmapData of our main photo. Let's go over a few of the main actions happening here:

First we need to set up a temporary bitmap to store our composite image in.
// This is our container while we apply the texture
var imageComposite : BitmapData = new BitmapData( layers[PHOTO].width, layers[PHOTO].height, true, 0xffffff );

Next we will copy over the photo's BitmapData using photo_mask.png as an alpha mask. As I mentioned earlier, this acts just like a mask and ensures that the BitmapData we copy over has transparent edges.

// Copy pixel data from the photo over to the container using layers[MASK] to cut out the shape
imageComposite.copyPixels( layers[PHOTO], rect, pt, layers[MASK], null, true );

Once we have a foundation for our photo we can apply the texture. Here we multiply the texture 3 times onto our image to create a little depth and fill in the wrinkles of the photo.

// Multiply the texture over the photo three times (reconstructed from the surrounding prose)
for (var i : int = 0; i < 3; i++) {
    imageComposite.draw( layers[TEXTURE], null, null, BlendMode.MULTIPLY );
}

Finally we can create our edges by applying an alpha mask to the texture image and using copyPixels to place them back on top of our image:

if (layers[EDGES_MASK]) {
    // Copy the edges on top of the entire composited image
    imageComposite.copyPixels( layers[TEXTURE], rect, pt, layers[EDGES_MASK], null, true );
}

Now, with the compositing done, we can copy the imageComposite BitmapData over to a clean BitmapData instance, using the alpha mask one last time to clean up any transparency we want to preserve.

// Copy the composite to clean BitmapData, using the alpha mask one last time (reconstructed)
bmd.copyPixels( imageComposite, rect, pt, layers[MASK], null, true );

All done; see how easy this is? Of course, I am sure this can be optimized even further, but for the purposes of this quick example the effect works perfectly.

Step 4: Randomize

Since we know we have 6 sets of skins (in our images folder), it is really easy to randomize how our photo gets rendered. Simply replace the following line in our TornImageDemo constructor:

skinPath = "/skin1/";

with the following:

skinPath = "/skin"+Math.round(Math.random()*5+1)+"/";

Now when you recompile and hit refresh, a random number is added to our skin folder.

Conclusion

As you have seen, using alpha masks when copying pixel data is an incredibly easy technique to use.
To give you an idea of how much file size this technique saves us, let's take a quick look at how big a PNG-24 from Photoshop would be. The above image from Photoshop came out to 478k. This image was dynamically created from the following images:

- photo_edges_mask.png – 16k
- photo_mask.png – 12k
- photo_texture.jpg – 20k
- photo.jpg – 56k
- total – 104k

As you can see, even though we have loaded four times as many images as the Photoshop PNG, we have actually saved 374k. Also, we can now apply this effect to any number of images on the fly, versus creating a transparent PNG by hand each time.

I hope you enjoyed this simple tutorial and I would love to see how you use it in your next project. Feel free to leave a comment with a link to what you have done.

samBrown 10 June, 2009 at 4:41 pm
Interesting use of BitmapData, thanks for the tut. Guess you've had tons of practice/experience working with this class via FlashCamo.

Robert 10 June, 2009 at 10:22 pm
Hi Jesse! You are 'THE' genius and professor of Flash. I like your tuts and your site as well. Cheerio and good wishes from Hungary (EU)!

Jesse Freeman 11 June, 2009 at 4:07 pm
Glad you like my work, thanks for the comment!

mauricio 11 June, 2009 at 5:15 pm
Where is the demo working?

Jesse Freeman 11 June, 2009 at 6:53 pm
If you look under the Prerequisites paragraph you will see the demo SWF. It looks like a static image, but if you refresh this tutorial it should randomly change.

GordonCasper 15 May, 2012 at 3:36 pm
To this I would just add a tutorial on creating and handling symbols in Flash, because that is also an important skill…
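The alpha-mask compositing at the heart of this tutorial is not Flash-specific. As a rough illustration in NumPy (my own sketch with invented function names, not part of the tutorial's source), copyPixels with an alpha mask and a multiply blend both boil down to a little per-pixel arithmetic on 0..1 channel values:

```python
import numpy as np

def copy_pixels_with_alpha(dest, src, alpha_mask):
    # Rough analogue of BitmapData.copyPixels with an alpha BitmapData:
    # where the mask is 1 the source shows through, where it is 0 the
    # destination is kept, and fractional values blend the two.
    a = alpha_mask[..., None]            # broadcast the mask over RGB channels
    return src * a + dest * (1.0 - a)

def multiply_blend(base, texture):
    # Rough analogue of drawing with BlendMode.MULTIPLY on 0..1 channels:
    # multiplying darkens, which is why layering the texture adds "wrinkles".
    return base * texture
```

Applying multiply_blend three times mirrors the tutorial's loop of three BlendMode.MULTIPLY draws.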
http://www.thetechlabs.com/tech-tutorials/flash/create-random-torn-photos-with-actionscript-30/
$VAR1 = [ ', ];

How's this for a guess?

A 24-month lease
An initial price of 12,493.32
A first payment of 1,231.36 + 23 monthly payments of 843.24
=====================================
Total cost (- tax relief) 20,587.44
Daily interest rate 0.008720
Annual Equivalent rate 17.967469
Annual effective rate (inc. one-off payments) 18.930167

Other numbers include one-off admin charges and daily/weekly effective costs?

Well, look at it this way: this means there are 3 to the power 19 different ways to combine these numbers... That's a lot of possible combinations to try, but it can be done, I suppose... You'd probably want to write a recursive algorithm to try all these permutations, and see if the rounded total matches the number you specified...

Update: possible solution code posted below.

Do you only have the one set of known data? As with any form of decoding, the more examples you have, the easier (and more reliable) it becomes. With (say) half a dozen sets of data + the target value, it might be possible to exclude some of the numbers, thereby reducing the search space.

The reason is that, for example, the next number in the sequence 1,2,3,4,5, ? can be anything you like (even -7 x phi is as good an answer as any). Worse still, there are an uncountably infinite number of different formulas that can "justify" the sixth number being -7 x phi. You also need an infinite number of examples to overcome that problem; even if you knew all 200000 answers, they wouldn't help you with the 200001st for the same reason. Your only recourse is to interview whoever has the business knowledge behind the formula.
-M Free your mind

f(1) = 1
f(2) = 2
f(3) = 3
f(4) = 4
f(5) = 5

But when you limit answers to functions over the natural numbers of a special kind (let that be polynomials with integer coefficients, with the largest power *no* more than three), then one and only one function fulfills the condition, and this is f(x) = x, and the correct answer is 6. Like everywhere else in mathematics, very many things depend on exact phrasing.

Addition: but I agree that the OP has phrased the task very badly.

Update: 2+1 small typos.

Update: oic, you want to limit the types of functions - but without the business knowledge how can you do that? We know it's leasing, but without knowing the business, how do you know it doesn't have a risk component linked to a normal distribution based on the customer's age? That wouldn't be polynomial.

More update: yes, fletch is right, s/cos/sin/ - my brain did the equivalent of a double negative when I was imagining this.

With apologies for the thread drift: "Worse still, there are an uncountably infinite number of different formulas that can 'justify' the sixth number being -7 x phi." Surely a "number of different formulas" is an integer, and therefore countable? What you say is certainly true in the abstract, but real systems, whether they be physical or financial, tend to follow relatively simple (albeit noisy) models. But I don't think the OP's system is even a real system; I think it's likely to be a very simple model of a system, probably involving exponential growth. But all of this is guesswork. As others have said, the OP would probably get a great deal more help if only he would supply more data.

I'm not a financial analyst, but it seems to me that if you google payoff, amortization, net present value, etc., you'll get some standard formulae you can use as a starting point.
Then you can use the hack-and-slash method, playing with proportionality constants, etc., until you can match up the numbers, or you can also google "curve fitting" and give it a try. That could be a lot of work, however.

Have you considered writing a program to simulate the operator and making your Perl script do the key-entry to get the data you need out of the existing machine?

--roboticus

Many, many years ago, in the era of Windows 3.1 and Token Ring networks, I had a temp job at the local power company. I was given the task of transferring data from their mainframe application into a database running on the Windows network. As with the original question here, there were several thousand records to transfer and the mainframe application would only display one at a time. I have no idea how many hours (days!) they expected me to spend doing data entry on this task, nor how they intended to handle the errors that would surely have resulted, but it didn't prove that easy for them to keep me busy... I quickly taught the Windows macro recorder a sequence of keystrokes to copy and paste the data, then read a magazine for about an hour (getting many dirty looks from passers-by) and reported the task complete. And on that day, I learned both the wisdom of simulating a human operator to carry out repetitive tasks and the joy of suffering the reactions of others to one who truly "works smarter, not harder".

The following should work; I haven't had the guts to run it on your real data yet, but my mini-testset seems to result in an answer...
#!/usr/bin/perl

=cut
my @t = ( ', );
my $target = 843.24;
=cut

my @t = (100,2,32,4,65);
my $target = 7;

my @terms = _test($target,@t);
for my $n (0..$#terms) {
    next unless $terms[$n];
    print (($terms[$n]<0)?'subtract':'add');
    print " value number ",$n+1," (",$t[$n],")";
    print "\n";
}
print "to obtain total of $target\n";

sub _test {
    my ($target,$number,@rest) = @_;
    for my $term (0,$number,-$number) {
        if (@rest==0) {
            next unless $target == sprintf('%.2f',$term);
            return ($term/$number);
        }
        my @terms = _test(sprintf('%.2f',$target-$term),@rest);
        next unless @terms;
        return (($term/$number),@terms);
    }
    return;
}

It's a recursive algorithm that will eventually try every possibility until it finds a working set of terms... It will probably take a long time with the real dataset, so you'll want to eliminate as many terms as possible beforehand.

First you would need to generate some sample data, feeding some randomly distributed values into your "blackbox" and getting the output. Then use the obtained set of inputs/outputs to train the neural network and, once the error goes below your acceptable limit, you could use it to replace the original blackbox.

a*18.930167 + b*17.967469 + c*0.008720 + ... + s*972.290000 = 843.24

After you have 19 or so of those equations, you should be able to solve for a, b, c, d, ... Based on the info you provided, if you get numbers other than {1,0,-1} then you know the equation isn't the same for all entries.

First of all, 'clunky old EBCDIC' systems still run most of the financial world. Companies like Bank of America, Amex, and Visa seem to think they're pretty OK. You're very likely fighting the system, and not using it the way it's designed. OS/390 applications are very batch-centric, sort of similar to the way Unixy things are pipe-centric. I can practically guarantee you that there is a batch interface to what you want to do. Gen one job, or one step per lease, and let 'em rip. Google for 'IBM JCL' to get started.
Finally, make sure your management (and you) understand that you're just making up the lease calc if you go that route. This is money you're dealing with, and money mistakes tend to piss people off in a way that nothing else does. Be *very* careful to have your work audited by a bean-counter who knows what they're doing. Good luck.

Instead of trying to determine the formula that was used for a (hopefully) complete set of numbers with unknown meaning, why don't you turn the problem around? Ask the client to create a new dummy lease with known input data. Tweak one (and only one) of the parameters and create another new lease. Repeat...

Given enough dummy data (known inputs and outputs) you should very quickly be able to determine which numbers are meaningless, which are significant, how each number affects the final value, and which numbers in the dumped data you need. Add a little algebra to get the formula, and you're done. This approach could be quite a bit simpler than trying to brute-force it. :-)
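Several of the suggestions in this thread (the 3-to-the-power-19 search space, the recursive add/subtract/ignore search, and solving a linear system across many leases) can be sketched in Python. The numbers below are toy stand-ins, not real lease records:

```python
from itertools import product
import numpy as np

numbers = [100.0, 2.0, 32.0, 4.0, 65.0]  # toy stand-ins for the 19 dumped values
target = 7.0

# Each value can be added, subtracted, or ignored: 3 choices per value,
# hence 3**19 = 1162261467 combinations for the full 19-number set.
print(3 ** 19)

def find_combo(numbers, target):
    # Iterative equivalent of the recursive Perl search above: try every
    # assignment of -1 / 0 / +1 to each value and compare rounded totals.
    for signs in product((-1, 0, 1), repeat=len(numbers)):
        if round(sum(s * n for s, n in zip(signs, numbers)), 2) == round(target, 2):
            return signs
    return None

print(find_combo(numbers, target))

# The linear-algebra suggestion: one equation per known lease, then
# least-squares for the coefficients, which come out near {1, 0, -1}
# when every lease uses the same formula.  Here the "mystery" total is
# value0 - value2 by construction, so the answer is [1, 0, -1, 0, 0].
rng = np.random.default_rng(0)
A = rng.uniform(1, 1000, size=(10, 5))   # 10 leases, 5 numbers each
b = A[:, 0] - A[:, 2]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(coeffs, 6))
```

With real data you would swap in the 19 dumped values per lease; the brute force only becomes practical after the exclusion tricks discussed above shrink the candidate set.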
http://www.perlmonks.org/?node_id=554988
GAN Neural Networks

Generative Adversarial Neural Networks

GANs are a regular feature of the news now because of their capabilities. DeepFakes and face-swapping systems that can genuinely fool people stem from the use of GANs, but how do they actually work? This tutorial is a follow-up to my other neural network tutorials, because you do need some understanding of neural networks.

In simple terms, a GAN is two neural networks, one called the discriminator and one called the generator. The job of the generator is to generate fake samples and the job of the discriminator is to classify whether a sample is fake or real, and the two neural networks compete against each other, thereby improving both. By compete, I mean that if the discriminator wrongly classifies an image it tunes its weights, and if the generator gets found out for generating fake images it tunes its weights, and so on for the chosen number of epochs. And that's all there is to it in terms of fundamentals, but note that the generator doesn't generate from nothing: the input it does take is random noise (the z in the code below).

I can't really attach a demo for this tutorial specifically (and other Python-TensorFlow tutorials) because of a glitch that prevents us from using the TensorFlow package after downloading it; however, I did upload all the code as an IPython notebook on GitHub at this link (this is actually code that I made as part of a course from many months back).
It simply generates MNIST digits instead of classifying them (the overused use of the MNIST dataset) and the main lines of code are these:

# Generator in the GAN
def generator(z, reuse=None):
    with tf.variable_scope('gen', reuse=reuse):
        hidden1 = tf.layers.dense(inputs=z, units=128)
        alpha = 0.01
        hidden1 = tf.maximum(alpha*hidden1, hidden1)
        hidden2 = tf.layers.dense(inputs=hidden1, units=128)
        hidden2 = tf.maximum(alpha*hidden2, hidden2)
        output = tf.layers.dense(hidden2, units=784, activation=tf.nn.tanh)
        return output

# Discriminator in the GAN
def discriminator(X, reuse=None):
    with tf.variable_scope('dis', reuse=reuse):
        hidden1 = tf.layers.dense(inputs=X, units=128)
        alpha = 0.01
        hidden1 = tf.maximum(alpha*hidden1, hidden1)
        hidden2 = tf.layers.dense(inputs=hidden1, units=128)
        hidden2 = tf.maximum(alpha*hidden2, hidden2)
        logits = tf.layers.dense(hidden2, units=1)
        output = tf.sigmoid(logits)
        return output, logits

One thing to note is that this was written in TensorFlow 1.4, and I think it's better to explain it this way rather than with TensorFlow 2.0. If you do want to convert my code, you can do so with the TensorFlow 1.4 -> 2.0 converter.

The number of units output by the generator is 784 because one sample from the MNIST dataset is 28 by 28 pixels (which is 784), so the generated data must have the same shape; the numpy.reshape function is then applied to make it a 28 by 28 square. This happens at the end of my code when displaying the nth sample. There are two hidden layers (in the generator), both made up of 128 neurons, and a leaky ReLU (the tf.maximum(alpha*h, h) expression, with alpha = 0.01) is applied to each hidden layer's output. The discriminator is not too different from a general classifier network; it uses the sigmoid activation function on its output, as well as two hidden layers similar to the generator's, with 128 neurons each.

Thanks for reading and I hope you learned something about how GANs work.
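That tf.maximum(alpha*h, h) trick is worth seeing in isolation. In plain NumPy (an illustration, not part of the original notebook), the same leaky ReLU looks like this:

```python
import numpy as np

def leaky_relu(h, alpha=0.01):
    # Identical in effect to tf.maximum(alpha * h, h): positive values pass
    # through unchanged, while negative values are scaled by alpha instead
    # of being zeroed out as a plain ReLU would do.
    return np.maximum(alpha * h, h)

out = leaky_relu(np.array([-2.0, 0.0, 3.0]))
print(out)  # [-0.02  0.    3.  ]
```

Keeping a small gradient on the negative side helps GAN training, since dead units in the discriminator give the generator nothing to learn from.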
probably shouldn't have mentioned deep fake (you look that up and you might pull up some very interesting sites)

@AdCharity Hmm. I did want to show the impact of GANs though, and deepfakes are pretty recent.

@adityakhanna ik, but what I mean is students (like myself) with school-restricted Chromebooks may or may not encounter sites that are blocked, leading them to wonder what they are (um, say celeb ** fill in the blank). That's just my thoughts. But true, deepfake technology is pretty recent, and it is kind of becoming an issue as it develops/improves. Also nice stuff :) I gave an upvote
https://replit.com/talk/learn/GAN-Neural-Networks/22488
I've recently been looking at a notion of interfaces for BitC. The language already has type classes in the style of Haskell, with all of their pros and cons. The deeper my dive into interfaces, the more confused I became about the difference between interfaces and type classes. This either means that I'm hopelessly confused or that I've stumbled into something useful. I'm sure the LtU audience can explain my confusion. :-)

BitC has object-style methods in a syntactic sense, but no virtual methods or inheritance. In such a language, object methods are sugar. We also have a notion of interfaces, currently called "capsules". A capsule provides existential encapsulation of an underlying object combined with a known set of methods. In contrast to Java-style interfaces, a capsule type has an explicitly invocable constructor. This means that a capsule can have non-default instantiations for a given underlying type T. Capsule constructor arguments corresponding to method initializers are constrained to be compile-time evaluable. This has the consequence that every capsule "method table" is a compile-time constant, in the same way that every type class instance method table is a compile-time constant.

Capsules may have either static or conventional methods. A static method does not receive or require a reference to the encapsulated object. The lifetime of a capsule instance is guaranteed not to exceed the lifetime of the object (if any) that it encapsulates.

Q: How is a capsule type consisting exclusively of static methods different from a type class, if at all?

Interfaces provide runtime polymorphism, and type classes compile-time polymorphism. Type classes provide a type-safe equivalent to C++'s templates.
So when compiling a type class, like a template, the manipulation happens to the intermediate code, effectively generating specialised instances of the classes at compile time as required (this is the same as elaboration in Ada, I think). So, implemented well, there will be zero run-time cost, and the generated code would look the same as if templates or macros were used to generate it.

Interfaces with method definitions have to vary at runtime (as you do not know what will be inside the existential capsule), so you have to call the method indirectly via the method table, and this causes overhead. This overhead can be non-linear on modern systems due to non-local memory (NUMA) and cache architecture (non-local references can cause collisions, or cache thrashing). There are also extra optimisation opportunities when inlining the code statically with type classes/templates. So one answer is performance, and predictability (sometimes you don't want existential capsules appearing all over your memory; you may be restricted to stack memory, for example).

The other answer is multiple dispatch. Object systems generally only dispatch on the first argument, which is by convention put before the function name, as in obj.fn(args...), but the difference from fn(obj, args...) is just syntax. Most template/type-class systems resolve overloading on the types of all arguments. An example where this makes a big difference is unification, where you need to unify two types A and B. The action taken depends on the types of both A and B, which in an object model requires nested uses of the visitor pattern (i.e. N visitor classes for B nested inside the visitor class for A). This gets big and ugly fast. With type classes, a single class for unification, with instances for the combinations of A and B, is much neater in my opinion. Even if object systems do support multiple dispatch in method selection, there is the syntactic problem of why A is treated differently from B.
Unification is applied equally to A and B, so 'unify(A, B)' looks better than 'A.unify(B)'. This also enables/encourages separation of algorithms from data, which I think is a good thing.

Type classes plus existential types give you something with the same functionality as objects with interfaces and interface inheritance (type classes provide interfaces and interface inheritance; the existential type provides the runtime polymorphism), but no implementation inheritance. Closures can be used instead of existential types for an alternative form of runtime polymorphism that requires explicit upcasting and downcasting. For implementation inheritance you need to add extensible records (although there is no equivalent of C++'s protected inheritance). The OOHaskell paper gives a good overview of all this.
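The unification example is easy to mimic in a language with first-class types: dispatch on the pair of argument types at once. This is a toy sketch with invented names, not any particular language's dispatch mechanism:

```python
# Toy type terms for a unifier: type variables and type constructors.
class TVar:
    def __init__(self, name): self.name = name

class TCon:
    def __init__(self, name): self.name = name

def unify(a, b, subst):
    """Dispatch on the types of BOTH arguments at once; an obj.unify(other)
    method would select behaviour on the first argument only, forcing the
    nested-visitor encoding described above."""
    handlers = {
        (TVar, TVar): lambda: {**subst, a.name: b},
        (TVar, TCon): lambda: {**subst, a.name: b},
        (TCon, TVar): lambda: {**subst, b.name: a},
        (TCon, TCon): lambda: subst if a.name == b.name else None,
    }
    return handlers[(type(a), type(b))]()
```

The handler table plays the role of the instance set of a multi-parameter type class: one entry per combination of argument types, looked up symmetrically, with neither A nor B privileged.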
Separation of operations from data is a good thing, but it's purely a matter of surface syntax. Separation of operations from types is not a good thing. It is very often true that operations over scalar types - and even the supporting data structures and algorithms that implement those operations - want special treatment. This is the main reason that Haskell's overlapping instance resolution challenge is interesting, rather than being merely a side note. Concur about unification, and I think that's a very interesting one. Parametric polymorphism over functions admits many implementations. Parametric polymorphism over methods is rather trickier to implement. Especially so if whole-program compilation is rejected as a design option. Thanks a lot for your comments, Keean. Capsule constructor arguments corresponding to method initializers are constrained to be compile-time evaluable. This has the consequence that every capsule "method table" is a compile-time constant in the same way that every type class instance method table is a compile-time constant. I take this to mean that when a program invokes a static method of some object, that object isn't actually needed at run time; only its type information is used, and that's only used at compile time. I haven't delved into typeclasses much, and my understanding is probably a mishmash of several GHC extensions that might not be what "type classes" mean to everyone. However, I'm guessing your capsules are pretty close to type classes, with implicitness being the main (maybe only?) feature you lack. Still, I think there are good reasons to care about implicitness, and thus good reasons to say what you have isn't a satisfactory example of "type classes." 
Implicit arguments, including type class instances, are pretty liberating not only because they reduce clutter, but because they let programmers write utilities that are usable only in a narrow range of statically checked call sites, and then generalize them to a wider variety of call sites later on.

foo :: [Foo] -> Bar

...can become...

foo :: (Monad m) => m Foo -> Bar

...or perhaps more generally...

foo :: (FooSignature t) => t

(As long as I'm using Haskell examples, the FooSignature approach might require GHC's FlexibleInstances or UndecidableInstances in order to support an instance for a deep pattern like ([Foo] -> Bar). I'm really not sure where the limits are here; I just remember trying deep patterns like this and getting errors. I'm probably embarrassing myself in several ways right now.)

If you add implicit argument support for capsules, you'll probably be much closer to something you can call type classes. In fact, if you just define a single type class that carries capsules, I wonder if that would bring you almost all the way to an implementation:

...instead of writing this...

foo :: (Monad m) => m Foo -> Bar

...write this...

foo :: (HasCapsule MonadCapsule m) => m Foo -> Bar

(Again, I don't know enough to see if there are limitations to this approach, especially for the purposes of advanced type class variations like multi-parameter type classes, functional dependencies, type families, and so on.)
Capsules are not type classes. A capsule type is an interface type. The main difference between the current BitC capsules and Java interfaces is that a capsule has an explicit constructor. In Java, when "class x implements interface y", the compiler binds the interface methods to compatible class methods having the same name. In BitC, there is (or will be) a convenience syntax that does this, but in the usual case there is an explicit construction step in which the programmer-desired binding is stated. So (a) there is no inherent dependency between the method names of the capsule and the method names of the thing it encapsulates, and (b) by instantiating a capsule with different parameters, two [or more] capsules can be constructed that wrap the same object but with different behavior. Finally, capsules do not support downcast to the underlying object type. Capsules share with interfaces a mechanism for existential encapsulation, but with heavier emphasis on the encapsulation than interfaces seem to provide. The question of implicit capsule instantiation is an interesting one. A capsule instance is an object, and we don't need implicit objects. If you get as far as having an object, you're already holding what you need. The issue - which doesn't arise in any language I know of that has interfaces - is how to support an expression like: ObjectType ob; ... ob as CapsuleType If ObjectType explicitly supplies a preferred conversion for CapsuleType, then we're done. But we also want a way to retrofit implementations of CapsuleType for third-party object types. It's easy enough to do that syntactically, but it brings with it the problem of instance selection in much the way that Haskell must select type class instances. The reason that instance selection (in both cases) cannot be done in a purely lexical way is tied to parametric polymorphism. I think of this as the "three party problem". One party implements type T. A second implements interface/capsule type C.
A third party defines the default instantiation rule for building C from T. No single lexical context exists - or is even possible - in which all three facts are known. Please follow up if this didn't answer your questions, and thanks for asking questions that are helping to improve my clarity on this (and hopefully yours as well. ☺) You were asking whether this capsule approach would count as support for type classes. Most of my lack of clarity is about what the common definition of "type class" is, but thanks anyway for reinforcing what you said about the capsule approach. :) Now you're talking about (ob as CapsuleType) coercions, and I think that might count as a way to pass (obType -> CapsuleType) values as implicit arguments, if you can implement it soundly. For instance, it would mean you can define a function with a declared signature (a -> a -> Bool) that can internally call upon (() as (Eq a)) to do its dirty work. (Note that ob might as well be () for all type class instances, since we're representing them as capsules with only static methods.) If you made the use of as apparent in the declared signature, I'm pretty sure this could be sound and could even be modularly typecheckable--and it would look just like an implicit argument.
(!=) :: (() -> Eq a) => a -> a -> Bool
(!=) x y = not (capsule.(==) x y)
  where capsule = () as (Eq a)
"It's easy enough to do that syntactically, but it brings with it the problem of instance selection in much the way that Haskell must select type class instances." If you want to avoid the problems of Haskell type classes, can you really call what you get "type classes"? :) This is something I can't begin to say, because I don't know the culture of this terminology outside of Haskell. I'm barely familiar with its connotations even in Haskell.
Personally, I think a good way to approach this is to export instances under specific names, or perhaps with other metadata, and then let each module declare its own rules for instance lookup in terms of that information (and perhaps even in terms of how the code behaves). That way, the lexical context is indeed possible inside a single module, but the module system does not automatically share these lookup preferences across multiple modules. Adding to what I just said: A comment of yours in another thread shows you're already thinking in these terms! Here's your entire comment, just to make this thread easier to follow: In this case it sounds like you want the second Ord to be an explicit parameter. But I'm sure you would have thought of that, so what's eating at you is probably the general task of moving instance selection preferences across module boundaries, from one module's call site into another module's function definition. Personally, I would see it as still being a parameter, just with fancier, more module-like syntaxes for passing it in and using (i.e. importing) it. I've actually been sketching out ideas for something like this, as part of a larger module system design, but I don't (yet) intend to pass around types this way, only first-class values. It would be hefty enough that I'd only want to use it for embarrassingly open-ended things like certain I/O operations, where there are so many possible configuration options that they have their own namespacing and conflict resolution needs. If a module does provide preferences to another module this way, they'll have to be reified to some degree (at least until the full program is available for either execution or full-program compilation). Instead of implicitly using an instance that's a compile-time constant, the code would implicitly, dynamically look up an instance based on the nearest lexically surrounding import syntax.
http://lambda-the-ultimate.org/node/4867
21 November 2011 10:03 [Source: ICIS news] SINGAPORE (ICIS)-- “The unit is expected to restart [in a month’s time], so we have cancelled our contracts for December,” the source said. “The downstream 240,000 tonne/year expandable polystyrene (EPS) unit at the same location will be run normally,” the source added. “However, the demand from end-users is weak so the operating rate at the EPS unit is just around 50% capacity,” the source said. Jiangsu Leasty Chemical is the third biggest SM producer in east China. The largest producer of the product is Shanghai SECCO Petrochemical with a capacity of 650,000 tonnes/year, followed by Zhenhai Lyondell Chemicals, with a capacity of 620,000 tonnes/year.
http://www.icis.com/Articles/2011/11/21/9509951/Chinas-Jiangsu-Leasty-Chemical-shuts-Jiangyin-SM-unit-on-20.html
WMI related performance issues may arise due to extensive usage of WMI components. You can increase the values of the following properties to their maximums. A restart may be required after applying the following settings. - Run cmd.exe as admin - Type wbemtest.exe and run - Click Connect. - In the namespace text box type "root" (without quotes). - Click Connect. - Click Enum Instances… - In the Class Info dialog box enter Superclass Name as "__ProviderHostQuotaConfiguration" (without quotes) and press OK. Note: the Superclass name includes a double underscore at the front. - In the Query Result window, double-click "__ProviderHostQuotaConfiguration=@" - Set the following values for these properties. Don't forget to Save Property after setting each value of the property. MemoryPerHost 1073741824 (1GB) HandlesPerHost 8192 - Save Object Thank you! It helped in fixing my issue.
https://blogs.technet.microsoft.com/bulentozkir/2014/01/14/increase-wmi-quota-properties-to-maximum-values/
Opened 4 years ago Closed 3 years ago #12469 closed (fixed) get_urls docs in ModelAdmin can be extended Description (last modified by russellm) It might help to explain three things regarding custom views in ModelAdmin - The path in the example r'^my_view/$' will be accessed at /admin/myapp/mymodel/my_view/ - self.my_view in the example means that you probably want to define the view inside the MyModelAdmin class so it has access to the ModelAdmin. - my_view will be called with the request and the ModelAdmin instance as arguments: def my_view(request, model_admin): A simple class with get_urls and a view will be best. Thanks Change History (2) comment:1 Changed 4 years ago by russellm comment:2 Changed 3 years ago by timo - Resolution set to fixed - Status changed from new to closed (In [15113]) Fixed #12469 - Add a few clarifications to the ModelAdmin.get_urls() docs. Thanks benc for the suggestions.
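For reference, the three points above can be sketched as one minimal ModelAdmin (my own sketch of the pattern the ticket asks to document; myapp and MyModel are hypothetical names, and it needs a full Django project to actually serve requests):

```python
from django.conf.urls import url  # on modern Django use: from django.urls import path
from django.contrib import admin
from django.http import HttpResponse

from myapp.models import MyModel  # hypothetical app and model


class MyModelAdmin(admin.ModelAdmin):
    def get_urls(self):
        urls = super(MyModelAdmin, self).get_urls()
        my_urls = [
            # Prepended patterns are mounted under this ModelAdmin's own prefix,
            # so r'^my_view/$' is served at /admin/myapp/mymodel/my_view/
            url(r'^my_view/$', self.admin_site.admin_view(self.my_view)),
        ]
        return my_urls + urls

    def my_view(self, request):
        # Defined on the class, so it has access to the ModelAdmin via self
        return HttpResponse("Hello from MyModelAdmin")


admin.site.register(MyModel, MyModelAdmin)
```

Wrapping the view in self.admin_site.admin_view() keeps the admin's login and permission checks in place.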
https://code.djangoproject.com/ticket/12469
Inorder Non-threaded Binary Tree Traversal without Recursion or Stack We have discussed Thread based Morris Traversal. Can we do inorder traversal without threads if we have parent pointers available to us? Input: Root of Below Tree [Every node of tree has parent pointer also] 10 / \ 5 100 / \ 80 120 Output: 5 10 80 100 120 The code should not use extra space (no recursion and no stack). In inorder traversal, we follow “left root right”. We can move to children using left and right pointers. Once a node is visited, we need to move to the parent also. For example, in the above tree, we need to move to 10 after printing 5. For this purpose, we use the parent pointer. Below is the algorithm. 1. Initialize current node as root 2. Initialize a flag: leftdone = false; 3. Do following while root is not NULL a) If leftdone is false, set current node as leftmost child of node. b) Mark leftdone as true and print current node. c) If right child of current node exists, set current as right child and set leftdone as false. d) Else If parent exists, If current node is left child of its parent, set current node as parent. If current node is right child, keep moving to ancestors using parent pointer while current node is right child of its parent. e) Else break (We have reached back to root) Illustration: Let us consider below tree for illustration. 10 / \ 5 100 / \ 80 120 Initialize: Current node = 10, leftdone = false Since leftdone is false, we move to 5 (3.a), print it and set leftdone = true. Now we move to parent of 5 (3.d). Node 10 is printed because leftdone is true. We move to right of 10 and set leftdone as false (3.c) Now current node is 100. Since leftdone is false, we move to 80 (3.a) and set leftdone as true. We print current node 80 and move back to parent 100 (3.d). Since leftdone is true, we print current node 100. Right of 100 exists, so we move to 120 (3.c). We print current node 120.
Since 120 is right child of its parent we keep moving to parent while parent is right child of its parent. We reach root. So we break the loop and stop Below is the implementation of above algorithm. Note that the implementation uses Binary Search Tree instead of Binary Tree. We can use the same function inorder() for Binary Tree also. The reason for using Binary Search Tree in below code is, it is easy to construct a Binary Search Tree with parent pointers and easy to test the outcome (In BST inorder traversal is always sorted). Python3 # Python3 program to print inorder traversal of a # Binary Search Tree (BST) without recursion and stack # A utility function to create a new BST node class newNode: def __init__(self, item): self.key = item self.parent = self.left = self.right = None # A utility function to insert a new # node with given key in BST def insert(node, key): # If the tree is empty, return a new node if node == None: return newNode(key) # Otherwise, recur down the tree if key < node.key: node.left = insert(node.left, key) node.left.parent = node elif key > node.key: node.right = insert(node.right, key) node.right.parent = node # return the (unchanged) node pointer return node # Function to print inorder traversal # using parent pointer def inorder(root): leftdone = False # Start traversal from root while root: # If left child is not traversed, # find the leftmost child if leftdone == False: while root.left: root = root.left # Print root's data print(root.key, end = " ") # Mark left as done leftdone = True # If right child exists if root.right: leftdone = False root = root.right # If right child doesn't exist, move to parent elif root.parent: # If this node is right child of its # parent, visit parent's parent first while root.parent and root == root.parent.right: root = root.parent if root.parent == None: break root = root.parent else: break # Driver Code if __name__ == '__main__': root = None root = insert(root, 24) root = insert(root, 27)
root = insert(root, 29) root = insert(root, 34) root = insert(root, 14) root = insert(root, 4) root = insert(root, 10) root = insert(root, 22) root = insert(root, 13) root = insert(root, 3) root = insert(root, 2) root = insert(root, 6) print("Inorder traversal is ") inorder(root) # This code is contributed by PranchalK Output: Inorder traversal is 2 3 4 6 10 13 14 22 24 27 29 34 This article is contributed by Rishi Chhibber. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above Improved By : andrew1234, PranchalKatiyar
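As a sanity check, the parent-pointer traversal can be compared against a plain recursive inorder on the same keys; this self-contained sketch (my own, not part of the original article) returns the visited keys as a list instead of printing them:

```python
class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None

def insert(node, key, parent=None):
    # Standard BST insert that also wires up parent pointers
    if node is None:
        return Node(key, parent)
    if key < node.key:
        node.left = insert(node.left, key, node)
    elif key > node.key:
        node.right = insert(node.right, key, node)
    return node

def inorder_parent(root):
    """Inorder using parent pointers only -- no recursion, no stack."""
    out, leftdone = [], False
    while root:
        if not leftdone:
            while root.left:
                root = root.left
        out.append(root.key)
        leftdone = True
        if root.right:
            root, leftdone = root.right, False
        elif root.parent:
            # Climb while we are a right child, then step to the parent
            while root.parent and root is root.parent.right:
                root = root.parent
            if root.parent is None:
                break
            root = root.parent
        else:
            break
    return out

def inorder_rec(root):
    # Reference implementation for comparison
    return inorder_rec(root.left) + [root.key] + inorder_rec(root.right) if root else []

keys = [24, 27, 29, 34, 14, 4, 10, 22, 13, 3, 2, 6]
root = None
for k in keys:
    root = insert(root, k)
assert inorder_parent(root) == inorder_rec(root) == sorted(keys)
print(inorder_parent(root))  # [2, 3, 4, 6, 10, 13, 14, 22, 24, 27, 29, 34]
```

Because the tree is a BST, both traversals must produce the keys in sorted order, which makes the check trivial.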
https://www.geeksforgeeks.org/inorder-non-threaded-binary-tree-traversal-without-recursion-or-stack/
Last year, Andrew Godwin, a Django contributor, formulated a roadmap to bring async functionality into Django. After a lot of discussion and amendments, the Django Technical Board approved his DEP 0009: Async-capable Django yesterday. Godwin wrote in a Google group, “After a long and involved vote, I can announce that the Technical Board has voted in favour of DEP 0009 (Async Django), and so the DEP has been moved to the “accepted” state.” The reason why Godwin thinks that this is the right time to bring async-native support in Django is that starting from version 2.1, it supports Python 3.5 and up. These Python versions have async def and similar native support for coroutines. Also, the web is now slowly shifting to use cases that prefer high concurrency workloads and large parallelizable queries. The motivation behind Async in Django The Django Enhancement Proposal (DEP) 0009 aims to address one of the core flaws in Python: inefficient threading. Python is not considered to be a perfect asynchronous language. Its ‘asyncio’ library for writing concurrent code suffers from some core design flaws. There are alternative async frameworks for Python, but they are incompatible. Django Channels brought some async support to Django, but it primarily focuses on WebSocket handling. Explaining the motivation, the DEP says, “At the same time, it’s important we have a plan that delivers our users immediate benefits, rather than attempting to write a whole new Django-size framework that is natively asynchronous from the start.” Additionally, most developers are unacquainted with developing Python applications that have async support. There is also a lack of proper documentation, tutorials, and tooling to help them. Godwin believes that Django can become a “good catalyst” to help in creating guidance documentation.
Goals this DEP outlines to achieve The DEP proposes to bring support for asynchronous Python into Django while maintaining synchronous Python support as well in a backward-compatible way. Here are its end goals, that Godwin listed in his roadmap: - Making the blocking parts in Django such as sessions, auth, the ORM, and handlers asynchronous natively with a synchronous wrapper exposed on top where needed to ensure backward compatibility. - Keeping familiar models/views/templates/middleware layout intact with very few changes. - Ensuring that these updates do not compromise speed or cause significant performance regressions at any stage of this plan. - Enabling developers to write fully-async websites if they want to, but not enforcing this as the default way of writing websites. - Welcoming new talent into the Django team to help out on large-scale features. Timeline to achieve these goals Godwin in his “A Django Async Roadmap” shared the following timeline: Godwin posted a summary of the discussion he had with the Django Technical Board in the Google Group. Some of the queries they raised were how the team plans to distinguish async versions of functions/methods from sync ones, how this implementation will ensure that there is no performance hit if the user opts out of async mode, and more. In addition to these technical queries, the board also raised a non-technical concern, “The Django project has lost many contributors over the years, is essentially in a maintenance mode, and we likely do not have the people to staff a project like this.” Godwin sees a massive opportunity lurking in this fundamental challenge – namely to revive the Django project. He adds, “I agree with the observation that things have substantially slowed down, but I personally believe that a project like async is exactly what Django needs to get going again.
There’s now a large amount of fertile ground to change and update things that aren’t just fixing five-year-old bugs.” Read the DEP 0009: Async-capable Django to know more in detail.
https://hub.packtpub.com/django-3-0-is-going-async/
In V1 of .net, value types could not be null. Since this was contradictory to relational databases' use of null, the DataSet needed to have a concept of nullable value types – hence DBNull was invented. DBNull is a simple type in the System namespace. It does not have a constructor, but a static field (DBNull.Value) that returns a singleton instance of the class. Even though it is designed to support relational database scenarios, it does not support three-valued logic. For example equality is supported in a CLR manner: Console.WriteLine(DBNull.Value == DBNull.Value); // returns true Setting null for a value type means explicitly setting the value to be DBNull.Value: DataColumn nullableColumn = new DataColumn("myInt", typeof(int)); table.Columns.Add(nullableColumn); DataRow row = table.Rows.Add(); row["myInt"] = DBNull.Value; One can explicitly check if a value is DBNull by comparing to DBNull.Value or calling DataRow.IsNull: Console.WriteLine(row["myInt"] == DBNull.Value); // True - but bad news Console.WriteLine(row.IsNull("myInt")); // True - the recommended way Even though both work, calling DataRow.IsNull() is the recommended method of checking for DBNull valued column values. In fact, even though currently supported, no code should be doing the former. You will see the reason why in a moment. Setting null for reference types is similar, but a little different in that either null or DBNull.Value can be provided: DataColumn nullableReferenceColumn = new DataColumn("myType", typeof(myCustomerType)); table.Columns.Add(nullableReferenceColumn); row = table.Rows.Add(); row["myType"] = null; row["myType"] = DBNull.Value; Note – however, internally the DataSet translates the null value to be DBNull.Value.
This can be seen by retrieving the value: Console.WriteLine(row["myType"] == DBNull.Value); // True - but also bad news Console.WriteLine(row["myType"] == null); // False - even though reference was set to null Console.WriteLine(row.IsNull("myType")); // True - the recommended way From this example, one can see that DataRow.IsNull() will return true whether the value was set to DBNull.Value or null. In a way, IsNull abstracts the DataRow consumer from what specific “null” the underlying value is. This becomes more obvious with the introduction of nullable types in V2. BIG DISCLAIMER – THE FOLLOWING BEHAVIOR IS FROM VISUAL STUDIO 2005 BETA 1 AND IS BROKEN. HENCE IT WILL EITHER BE NOT SUPPORTED OR FIXED IN RTM OF VS 2005. Now in V2, we have nullable value types through the use of Nullable<T>. As of Beta 1 of Whidbey, the DataSet allows nullable types to be used as DataSet columns – but with some interesting problems. If a column is a nullable value type, one can explicitly set it to null like reference types in V1: DataColumn nullableIntColumn = new DataColumn("myNullableInt", typeof(int?)); table.Columns.Add(nullableIntColumn); DataRow row = table.Rows.Add(); row["myNullableInt"] = (int?)null; //Note – this is different from setting the value to DBNull: row["myNullableInt"] = DBNull.Value In addition, one must set the null value of int? and not reference null. For those of you not completely familiar with nullable type support in C#, the following won't even compile – which may surprise some: int? c = (object)null; // compile error Cannot implicitly convert type 'object' to 'int?'. // An explicit conversion exists (are you missing a cast?) And casting the null value of int? to a reference type boxes the value: int? c = null; object h = (object)c; Console.WriteLine(h == null); // false! Console.WriteLine(((int?)h) == null); // unboxed ... now true. Remember, when accessing a column value via DataRow[], an object is returned.
Hence, boxing happens automatically for nullable types, and that is the reason behind the following somewhat bizarre behavior when retrieving the nullable value type: // warning - the following code is broken VS 2005 Beta1 and will change before RTM Console.WriteLine((int?)row["myNullableInt"] == null); // true – unboxing value Console.WriteLine(row["myNullableInt"] == null); // false Console.WriteLine(row.IsNull("myNullableInt")); // false! Broken! One of the interesting design questions is whether DBNull.Value should be returned from DataRow[] for nullable value types. The current behavior is that it is not: Console.WriteLine(row["myNullableInt"] == DBNull.Value); // false, boxed value is null, but reference is not. At this point, it may become obvious that unless one is ready to put the following all over their code in cases where the code needs to work independent of column type (or disallow nullable value typed columns), DataRow.IsNull() is by far the better technique: if (row["myNullableInt"] == null || row["myNullableInt"] == DBNull.Value || !((System.INullableValue)row["myNullableInt"]).HasValue) Console.WriteLine("null"); // value is null One other interesting problem with the current support in Beta1 is that the AllowDBNull constraint is not enforced: nullableIntColumn.AllowDBNull = false; //only enforces DBNull, not (int?) null nullableIntColumn.DefaultValue = (int?)5; DataRow row2 = table.Rows.Add(); row2["myNullableInt"] = (int?)null; // obviously As noted before, the DataSet behavior WRT nullable types will either be changed or not supported for RTM of VS 2005. However, it is very probable that it will be supported some time in the future.
http://blogs.msdn.com/b/aconrad/archive/2005/02/28/381859.aspx
The past few days I’ve been playing around with Silex, a micro PHP Framework. At a certain point I got stuck in the process when using a custom controller: the darn class just wouldn’t load and the (otherwise excellent) documentation on the Silex site has no mention of how to load it. Most of the information one finds on the internet instructs you to do this (line #3): // source for /app/bootstrap.php require_once __DIR__ . '/../vendor/autoload.php'; $app = new Silex\Application(); $app['autoloader']->registerNamespace('Bramus', __DIR__. '/src'); That information, however, is deprecated and won’t work. A working solution I eventually found was this one: // source for /app/bootstrap.php $loader = require_once __DIR__ . '/../vendor/autoload.php'; $app = new Silex\Application(); $loader->add('Bramus', __DIR__. '/../src'); But that code just stinks, I must say, it just doesn’t feel right. Turns out the nicest solution is the simplest one: just register your custom namespace in composer.json. For example: { "require": { "silex/silex": "1.0.*@dev" }, "autoload": { "psr-0": { "Bramus": "src/" } } } After changing it, run a composer update and you’re good to go. Hope this helped you, I struggled with it myself for quite some time. How would I use the recently loaded class as a controller function? The silex docs state that one could use ‘Blah::get’ but I always get Class “Blah” does not exist.
https://www.bram.us/2013/02/06/silex-appautoloader-registernamespace-deprecated/
Pjotr's rotating BLOG Table of Contents - 1. First code katas with Rust - 2. GEMMA additive and dominance effects - 3. Sambamba build - 4. GEMMA randomizer - 5. GEMMA, Sambamba, Freebayes and pangenome tools - 6. GEMMA compute GRM (2) - 7. Managing filters with GEMMA1 - 8. HEGP and randomness - 9. GEMMA keeping track of transformations - 10. Fix outstanding CI build - 11. GEMMA testing frame work - 12. GEMMA validate data - 13. GEMMA running some data - 14. GEMMA compute GRM - 15. GEMMA filtering data - 16. GEMMA convert data - 17. GEMMA GRM/K compute - 18. GEMMA with python-click and python-pandas-plink - 19. Building GEMMA - 20. Starting on GEMMA2 - 21. Porting GeneNetwork1 to GNU Guix - 22. Chasing that elusive sambamba bug (FIXED!) - 23. It has been almost a year! And a new job. - 24. Speeding up K - 25. MySQL to MariaDB - 26. MySQL backups (stage2) - 27. MySQL backups (stage1) - 28. Migrating GN1 from EC2 - 29. Fixing Gunicorn in use - 30. Updating ldc with latest LLVM - 31. Fixing sambamba - 32. Trapping NaNs - 33. A gemma-dev-env package - 34. Reviewing a CONDA package - 35. Updates - 36. Older BLOGS - 37. Even older BLOG This document describes Pjotr's journey in (1) introducing a speedy LMM resolver for GWAS for GeneNetwork.org, (2) Tools for pangenomes, and (3) solving the pipeline reproducibility challenge with GNU Guix. Ah, and then there are the APIs and bug fixing… 1 First code katas with Rust I introduced code katas to the pangenome team. First I set an egg timer to 1 hour and installed Rust and clang with Guix and checked out Christian's rs-wfa bindings because I want to test C bindings against Rust. Running cargo build pulled in a crazy number of dependencies. In a Guix container I had to set CC and LIBCLANG_PATH. After a successful build all tests failed with cargo test.
It says ld: rs-wfa/target/debug/build/libwfa-e30b43a0c990e3e6/out/WFA/build/libwfa.a(mm_allocator.o): relocation R_X86_64_32 against `.rodata.str1.8' can not be used when making a PIE object; recompile with -fPIE On my machine adding the PIE flag to the WFA C code worked: diff --git a/Makefile b/Makefile index 5cd3812..71c58c8 100644 --- a/Makefile +++ b/Makefile @@ -10,7 +10,7 @@ CC=gcc CPP=g++ LD_FLAGS=-lm -CC_FLAGS=-Wall -g +CC_FLAGS=-Wall -g -fPIE ifeq ($(UNAME), Linux) LD_FLAGS+=-lrt endif The -fPIE flag generates position independent code for executables. Because this is meant to be a library I switched -fPIE for -fPIC and that worked too. test result: ok. 6 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out A thing missing from the repo is a software license. WFA is published under an MIT license. Erik also favours the MIT license, so it makes sense to add that. After adding the license I cloned the repo to the pangenome org. 2 GEMMA additive and dominance effects The additive effect is a = (uA - uB)/2 and the dominance d = uAB - (uA + uB)/2. GEMMA estimates the PVE by typed genotypes or “chip heritability”. Rqtl2 has an estherit function to estimate heritability from pheno, kinship and covar. 3 Sambamba build This week a new release went out and a few fixes. 4 GEMMA randomizer The random seed is -1 by default. If below 0 it will seed the randomizer from the hardware clock in param.cpp. The randomizer is set in three places: bslmm.cpp 953: gsl_rng_set(gsl_r, randseed); 1659: gsl_rng_set(gsl_r, randseed); param.cpp 2032: gsl_rng_set(gsl_r, randseed); and there are three (global) definitions of 'long int randseed' in the source. Bit of a mess really. Let's keep the random seed at startup only. The gsl_r structure will be shared by all. After fixing the randomizer we have a new 0.89.3 release of GEMMA! 5 GEMMA, Sambamba, Freebayes and pangenome tools I just managed to build Freebayes using a Guix environment.
The tree of git submodules is quite amazing. The first job is to build freebayes for ARM. This is part of our effort to use the NVIDIA Jetson ARM board. On ARMf the package included in GNU Guix fails with missing file chdir vcflib/fastahack and bwa fails with ksw.c:29:10: fatal error: emmintrin.h: No such file or directory #include <emmintrin.h> ^~~~~~~~~~~~~ The first thing we need to resolve is disabling the SSE2 extensions. This suggests that -DNOSSE2 can be used for BWA. We are not the first to deal with this issue. This page replaces the file with sse2neon.h which replaces SSE calls with NEON. My Raspberry PI has neon support, so that should work: pi@raspberrypi:~ $ LD_SHOW_AUXV=1 uname | grep HWCAP AT_HWCAP: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm AT_HWCAP2: crc32 Efraim will have a go at the BWA port. After that I'll pick up freebayes which includes a 2016 BWA as part of vcflib. People should not do this, though I am guilty with sambamba too. 5.1 Compile in a Guix container After checking out the repo with git recursive create a Guix container with all the build tools with guix environment -C -l guix.scm Next rebuild the CMake environment with rm CMakeCache.txt ./vcflib-prefix/src/vcflib-build/CMakeCache.txt make clean cmake . make -j 4 make test The last command is not working yet. 6 GEMMA compute GRM (2). 7 Managing filters with GEMMA1 8 HEGP and randomness source code to find out how it generates random numbers. It does not use the Linux randomizer but its own implementation. Ouch.For the HEGP encryption we use rnorm. I had to dig through R's Honestly, we should not use that! These are pragmatic implementations for sampling. Not for encryption. Interestingly, R sets srand() on startup using a time stamp. This is, however, only used to generate temporary filenames using the glibc rand() function. Correct me if I am wrong, but it is the only time R uses the OS randomizer. 
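To illustrate the alternative (my own sketch, not HEGP or gemma code): Python's random.SystemRandom draws from the OS entropy pool, so normal deviates can be produced without touching a deterministic, seeded generator like R's rnorm:

```python
import random

# SystemRandom pulls entropy from the OS (os.urandom, i.e. /dev/urandom)
# instead of a seeded Mersenne Twister state, which is what you want
# when the sampling feeds into encryption.
sysrand = random.SystemRandom()

def rnorm_os(n, mean=0.0, sd=1.0):
    """Return n normal deviates backed by OS entropy (cf. R's rnorm)."""
    return [sysrand.gauss(mean, sd) for _ in range(n)]

sample = rnorm_os(1000)
```

Note that SystemRandom deliberately has no seed() or setstate(), so such draws are not reproducible -- which is exactly the point for encryption.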
Meanwhile the implementation of rustiefel is pretty straightforward: All we need to do is replace rnorm with an array of floats. Both Rust and Python provide a normal distribution from /dev/random. I'd have to dig deeper to get details, but to me it looks like the better idea. 9 GEMMA keeping track of transformations One addition is a "transformations" list in the control file: "transformations": [ { "type": "filter", "pheno-NA": true, "maf": 0.01, "miss": 0.05, "command": "./bin/gemma2 --overwrite -o test/data/regression/21487_filter filter -c test/data/regression/21487_convert.json" } which allows us to check whether the filter has been applied before. Running the filter twice will show. The grm command will check whether a filter has run. If not it will add. The current output after two steps looks like: …/code/genetics/gemmalib [env]$ cat test/data/regression/21487_filter.json { "command": "filter", "crosstype": null, // Cross-type is outbreeding "sep": "\t", // Default separator "na.strings": [ "NA", "nan", "-" ], "comment.char": "#", // keeping track of these for debugging: "individuals": 17, "markers": 7320, "phenotypes": 1, // all data files are tidy and gzipped (you can open directly in vim) "geno": "test/data/regression/21487_filter_geno.txt.gz", "pheno": "test/data/regression/21487_filter_pheno.txt.gz", "gmap": "21487_convert_gmap.txt.gz", "alleles": [ "A", "B", "H" ], "genotypes": { "A": 0, "H": 1, "B": 2 }, "geno_sep": false, // We don't have separators between genotypes "geno_transposed": true, "transformations": [ // keeps growing with every step { "type": "export", "original": "rqtl2", "format": "bimbam", "command": "./bin/gemma2 --overwrite -o test/data/regression/21487_convert convert --bimbam -g example/21487_BXD_geno.txt.gz -a example/BXD_snps.txt -p example/21487_BXDPublish_pheno.txt" }, { "type": "filter", "pheno-NA": true, "maf": 0.01, "miss": 0.05, "command": "./bin/gemma2 --overwrite -o test/data/regression/21487_filter filter -c test/data/regression/21487_convert.json" } ], "na_strings": [
// na_strings is more consistent than na.strings above, also need it for creating a method
        "NA", "nan", "-" ],
    "name": "test/data/regression/21487_convert.json"   // name of the underlying file
}

Note that the control file increasingly starts to look like a Monad because it passes state along with gemma2/lib steps.

10 Fix outstanding CI build

The GEMMA1 build on Travis was failing for some time. The problem is that GSL BLAS headers and OpenBLAS headers do not collaborate well. To fix it I stopped testing with libgsl v1 (which is really old anyway) and we need to use gslcblas.h over OpenBLAS cblas.h.

11 GEMMA testing framework

The first test is to convert BIMBAM to GEMMA2 format with

gemma2 --overwrite convert --bimbam -g example/21487_BXD_geno.txt.gz -a example/BXD_snps.txt -p example/21487_BXDPublish_pheno.txt

which outputs 3 files

INFO:root:Writing GEMMA2/Rqtl2 marker/SNP result_gmap.txt.gz
INFO:root:Writing GEMMA2/Rqtl2 pheno result_pheno_bimbam.txt.gz
INFO:root:Writing GEMMA2/Rqtl2 geno result_geno_bimbam.txt.gz

pytest-regressions writes its own files so I needed to use the low-level interface for file comparisons. Also I'll need to unpack the .gz files for showing a diff. Progressing here.

12 GEMMA validate data

We validate the data inline when there is (almost) zero cost. A full validation, however, is a separate switch. Let's see what it brings out!

gemma2 validate -c control

One thing the maf filter does not do is check for the absolute number of informative genotypes (maf is a percentage!). Also gemma1 does not check whether the minor allele is actually the smaller one. We hit the problem immediately with

WARNING:root:Only one type of genotype Counter({'B': 9, 'A': 7, 'H': 1}) found in ['A', 'A', 'B', 'B', 'B', 'A', 'A', 'B', 'A', 'A', 'B', 'B', 'B', 'B', 'A', 'H', 'B'] --- other similar counter warnings are ignored (rs3722740 file ./test_geno.txt.gz line 175)

It is clear that the minor allele is actually the major allele.
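A check along these lines is easy to sketch in Python (a hypothetical helper, not gemma2's actual validator; collections.Counter mirrors the Counter in the warning above):

```python
from collections import Counter

def check_minor_allele(genotypes, minor="A", major="B"):
    """Warn when the declared minor allele is more frequent than the
    declared major allele (heterozygous calls are ignored here)."""
    counts = Counter(genotypes)
    if counts[minor] > counts[major]:
        return (f"minor allele {minor!r} ({counts[minor]}) is more frequent "
                f"than major allele {major!r} ({counts[major]})")
    return None

# the marker from the warning above: 9 x B, 7 x A, 1 x H
print(check_minor_allele(list("AABBBAABAABBBBAHB"), minor="B", major="A"))
```

A real validator would also count informative genotypes in absolute numbers, which the percentage-based maf filter cannot do.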
The implications are not large for gemma1, but the minor allele frequency (MAF) filter may not work properly. This is why validation is so important! Unlike gemma1, gemma2 will figure out the minor allele dynamically. One reason is that R/qtl2 does the same and we are sharing the data format. It also allows a consistent use of genotype markers (e.g. B and D for the BXD). I added a validation step to make sure major allele genotypes vary across the dataset (one constant allele is suspect and may imply some other problem). BIMBAM files, meanwhile, include the SNP variants. GeneNetwork kinda ignores them. I need to update the importer for that. Added a note. First, however, I want to speed up the GRM LOCO.

13 GEMMA running some data

For a dosage comparison and LOCO permutation run I extracted data from the 21481 set on GeneNetwork. First I needed to match the genotypes with phenotypes using the filter below.

First create the Rqtl2 type dataset:

gemma2 -o 21481 convert --bimbam -g tmp/21481/BXD_geno.txt.gz -p tmp/21481/21481_BXDPublish_pheno.txt

Filtering

gemma2 -o output/21481-pheno -v filters -c 21481.json

Now the genotype file looks like

marker 7 10 11 12 38 39 42 54 60 65 67 68 70 71 73 77 81 91 92
rs31443144 AAABBABABAABBABBAAAA
rs6269442 AAABBABABAABBABBAAAA
rs32285189 AAABBABABAABBABBAAAA

Create the BIMBAM for gemma1

gemma2 -o 21481-pheno export --bimbam -c output/21481-pheno.json

and run the standard gemma commands. Note we can create the original dosage file by modifying the control file. Running 1,000 permutations on 20 individuals and 7321 markers took

real    213m0.889s
user    2939m21.780s
sys     5423m37.356s

We have to improve on that!

14 GEMMA compute GRM

The first step is to compute the kinship matrix or GRM.
We'll have different algorithms so we need a switch for method or --impl. After

gemma2 --overwrite -o output/21487 convert --bimbam -g example/21487_BXD_geno.txt.gz -p example/21487_BXDPublish_pheno.txt
gemma2 --overwrite -o output/21487-filter filter -c output/21487.json

we can use the gemma1 implementation with

gemma2 grm --impl gemma1 -c output/21487-filter.json

or our new version

gemma2 grm -c output/21487-filter.json

Today I added the necessary filtering steps for the GRM listed in the next section. The maf filter is currently hard coded to make sure results match gemma1.

I finally got a matching K in Python compared to gemma1. Turns out that scaling is not the default option. Ha!

14.1 Calculating kinship

Gemma injects the mean genotype for a SNP into missing data fields (the mean over a SNP row). Next it subtracts the mean for every value in a row and, if centering, it scales

gsl_vector_scale(geno, 1.0 / sqrt(geno_var));

and finally, over the full matrix, divides by the number of SNPs

gsl_matrix_scale(matrix_kin, 1.0 / (double)ns_test);

where ns_test is the number of SNPs included. SNPs get dropped in a MAF filter which I rewrote in a fast version in D. The GEMMA filter happens at reading of the Geno file.
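The per-row steps just described (impute missing with the row mean, subtract the mean, optionally scale, then divide the summed matrix by the number of SNPs) can be sketched with numpy. This is a simplified model of the C++ code, not a drop-in replacement:

```python
import numpy as np

def kinship(geno, scale=False):
    """geno: markers x individuals, with np.nan marking missing calls."""
    G = np.asarray(geno, dtype=float)
    n_ind = G.shape[1]
    K = np.zeros((n_ind, n_ind))
    for row in G:
        mean = np.nanmean(row)                    # row mean over present calls
        row = np.where(np.isnan(row), mean, row)  # impute missing with the mean
        row = row - mean                          # subtract the row mean
        if scale:                                 # optional gsl_vector_scale step
            var = row.var()
            if var > 0:
                row = row / np.sqrt(var)
        K += np.outer(row, row)
    return K / G.shape[0]                         # gsl_matrix_scale step

K = kinship([[0, 1, 2, np.nan],
             [2, 2, 0, 0]])
```

Here every marker row contributes; the real code only counts rows that pass the MAF and missingness filters in ns_test.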
So, essentially

[X] Always apply the MAF filter when reading genotypes
[X] Apply missingness filter

And when computing kinship

[X] Always impute missing data (injecting the row mean)
[X] Always subtract the row mean
[X] Center the data by row (which is NOT the default option -gk 1, gemma1 CenterMatrix)
[X] Always scale the matrix dividing by # of SNPs (gemma1 ScaleMatrix)

See also R's scaling function. Prasun's D code may be a bit more readable. And here is our older pylmm implementation which only scales K by dividing by the number of SNPs.

[X] Check D implementation

Python's numpy treats NaN as follows:

>>> np.mean([1.0,2.0,3.0,0.0])
1.5
>>> np.mean([1.0,2.0,3.0,0.0,0.0])
1.2
>>> np.mean([1.0,2.0,3.0,0.0,0.0,np.NAN])
nan

which means we have to filter first:

>>> x = np.array([1.0,2.0,3.0,0.0,0.0,np.NAN])
>>> np.mean(x[~np.isnan(x)])
1.2

That is better. Note the tilde. Python is syntactically a strange beast. Also I need to be careful about numpy types. They are easily converted to lists, for example.

r2 is simply the square of the sample correlation coefficient (i.e., r) between the observed outcomes and the observed predictor values. The manual says correlation with any covariate: by default, SNPs with r^2 correlation with any of the covariates above 0.9999 will not be included in the analysis. When I get to covariates I should include that.

14.2 Implementation

Starting with the MAF filter. It is used both for GRM and GWA. Gemma1 says

-miss [num]  specify missingness threshold (default 0.05)
-maf  [num]  specify minor allele frequency threshold (default 0.01)
-notsnp      minor allele frequency cutoff is not used

Note that with -notsnp the value of maf_level is set to -1. With Gemma2 I want to make all filtering explicit. But what to do if someone forgets to filter? Or filters twice - which would lead to different results.

gemma2 filter -c data --maf 0.01

creates a new dataset and control file.
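The two thresholds can be sketched as a per-marker filter (a hypothetical helper, following gemma1 in counting a heterozygous call as half a minor allele):

```python
def keep_marker(genos, maf_level=0.01, miss_level=0.05):
    """genos: per-individual calls as 0/1/2 minor-allele dosage, None = missing."""
    present = [g for g in genos if g is not None]
    n_miss = len(genos) - len(present)
    if len(genos) and n_miss / len(genos) > miss_level:
        return False                                # too much missing data
    if not present:
        return False
    maf = sum(present) / (2.0 * len(present))       # H (1) counts as 50%
    maf = min(maf, 1.0 - maf)                       # fold to the minor side
    return maf >= maf_level

print(keep_marker([0, 0, 2, 1, 0, 2]))              # common marker -> True
print(keep_marker([0] * 99 + [None]))               # monomorphic -> False
```

Folding the frequency to the minor side also sidesteps the "minor allele is actually the major allele" problem flagged by the validator.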
We can add the filtering state to the new control data structure with

"maf": 0.05

which prevents a second run. If it is missing we should apply it by default in

gemma2 grm -c data

which will be the same as the single run

gemma2 filter -c data --maf 0.01 '=>' grm

(don't try this yet). I wrote a MAF filter in Python which puts genotypes in classes and counts the minor alleles. Note that GEMMA1 does not do a pure minor allele count because it counts a heterozygous call as 50%. At this point I am not making a clear distinction.

Another filter is on missing data; miss_level defaults to 0.05:

if ((double)n_miss / (double)ni_test > miss_level) pass...

Say we have 6 out of 100 missing: it fails. This is rather strict.

15 GEMMA filtering data

With gemma1 people requested more transparent filtering. This is why I am making it a two-step process. First we filter on phenotypes:

15.1 Filter on phenotypes

The first filter is on phenotypes. When a phenotype is missing it should be removed from the kinship matrix and GWA. The R/qtl2 format is simply:

id pheno1 pheno2
1  1.2    3.0
2  NA     3.1
(etc)

So, if we select pheno1 we need to drop id=2 because it is an NA. The filter also needs to update the genotypes. Here we filter using the 6th phenotype column:

gemma2 -o output/ms-filtered -v filters -c result.json -p 6

With the new phenotype filter I was able to create a new GRM based on a reduced genotype list (imported from BIMBAM) for a paper we are putting out.
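The phenotype filter amounts to dropping individuals whose selected phenotype is NA and removing the matching columns from the transposed genotype rows. A sketch with hypothetical names, not gemma2's actual implementation:

```python
NA_STRINGS = {"NA", "nan", "-"}

def filter_pheno(ids, phenos, geno_rows):
    """Drop individuals with a missing phenotype value and remove
    the matching columns from every (transposed) genotype row."""
    keep = [i for i, p in enumerate(phenos) if p not in NA_STRINGS]
    ids2 = [ids[i] for i in keep]
    phenos2 = [phenos[i] for i in keep]
    geno2 = [[row[i] for i in keep] for row in geno_rows]
    return ids2, phenos2, geno2

ids, ph, geno = filter_pheno(
    ["1", "2", "3"], ["1.2", "NA", "3.1"],
    [["A", "B", "H"], ["B", "B", "A"]])
print(ids)    # ['1', '3']
```

The same index list drives both tables, which keeps phenotypes and genotype columns aligned by construction.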
16 GEMMA convert data

16.1 Convert PLINK to GEMMA2/Rqtl2 and BIMBAM

The plink .fam format has

- Family ID ('FID')
- Within-family ID ('IID'; cannot be '0')
- Within-family ID of father ('0' if father isn't in dataset)
- Within-family ID of mother ('0' if mother isn't in dataset)
- Sex code ('1' = male, '2' = female, '0' = unknown)
- Phenotype value ('1' = control, '2' = case, '-9'/'0'/non-numeric = missing data if case/control)

The BIMBAM format just has columns of phenotypes and no headers(!) For convenience, I am going to output BIMBAM so it can be fed directly into gemma1. Note Plink also outputs BIMBAM from plink files and VCF, which is used by GEMMA users today.

To convert from plink to R/qtl2 we already have

gemma2 convert --plink example/mouse_hs1940

To convert from R/qtl2 to BIMBAM we can do

gemma2 convert --to-bimbam -c mouse_hs1940.json

which writes mouse_hs1940_pheno_bimbam.txt. Introducing a yield generator. I notice how Python is increasingly starting to look like Ruby. Yield was introduced in Python3 around 2012 - almost 20 years later than Ruby(!)

After some thought I decided to split convert into 'read' and 'write' commands. So now it becomes

gemma2 read --plink example/mouse_hs1940

To convert from R/qtl2 to BIMBAM we can do

gemma2 write --bimbam -c mouse_hs1940.json

I think that looks logical. On third thought I am making it

gemma2 convert --plink example/mouse_hs1940

and, to convert from R/qtl2 to BIMBAM,

gemma2 export --bimbam -c mouse_hs1940.json

It is important to come up with the right terms so it feels logical or predictable to users. Convert could be 'import', but Python does not allow that because it is a language keyword. And, in a way, I like 'convert' better. And 'export' is the irregular case that no one should really use. I could name it 'internal'. But hey.
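The yield generator mentioned above can look like this: streaming a gzipped, tab-separated, transposed genotype file one marker at a time instead of loading it whole (a sketch; the real gemma2 reader may differ):

```python
import gzip

def geno_rows(path, sep="\t", comment="#"):
    """Yield (marker, genotypes) tuples from an R/qtl2-style transposed
    genotype file, skipping blank and comment lines; the first data
    line is taken to be the header of individual ids."""
    with gzip.open(path, "rt") as f:
        header = None
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith(comment):
                continue
            if header is None:
                header = line.split(sep)
                continue
            fields = line.split(sep)
            yield fields[0], fields[1:]
```

Because it yields lazily, a consumer such as the GRM computation can process arbitrarily large genotype files in constant memory.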
Writing the BIMBAM genotype file, it looks like

rs3683945, A, G, 1, 1, 0, 1, 0, 1, 1, ...

without a header. GEMMA does not actually use the allele values (A and G) and skips on spaces. So we can simplify it to

rs3683945 - - 1 1 0 1 0 1 1 ...

Using these new inputs (converted plink -> Rqtl2 -> BIMBAM) the computed cXX matrix is the same as the original PLINK we had.

gemma2 gemma1 -g mouse_hs1940_geno_bimbam.txt.gz -p mouse_hs1940_pheno_bimbam.txt -gk -o mouse_hs1940

and same for the GWA

gemma2 gemma1 -g mouse_hs1940_geno_bimbam.txt.gz -p mouse_hs1940_pheno_bimbam.txt -n 1 -a ./example/mouse_hs1940.anno.txt -k ./output/mouse_hs1940.cXX.txt -lmm -o mouse_hs1940_CD8_lmm-new

==> output/mouse_hs1940_CD8_lmm-new.assoc.txt <==
chr rs ps n_miss allele1 allele0 af beta se logl_H1 l_remle p_wald
1 rs3683945 3197400 0 - - 0.443 -7.882975e-02 6.186788e-02 -1.581876e+03 4.332964e+00 2.028160e-01
1 rs3707673 3407393 0 - - 0.443 -6.566974e-02 6.211343e-02 -1.582125e+03 4.330318e+00 2.905765e-01

==> output/result.assoc.txt <==
chr rs ps n_miss allele1 allele0 af beta se logl_H1 l_remle p_wald
1 rs3683945 3197400 0 A G 0.443 -7.882975e-02 6.186788e-02 -1.581876e+03 4.332964e+00 2.028160e-01
1 rs3707673 3407393 0 G A 0.443 -6.566974e-02 6.211343e-02 -1.582125e+03 4.330318e+00 2.905765e-01

The BIMBAM version differs because the BIMBAM file in ./example differs slightly from the PLINK version (thanks Xiang, keeps me on my toes!). Minor differences exist because some values, such as 0.863, have been changed for the genotypes. For a faithful and lossless computation of that BIMBAM file we'll need to support those too. But that will come when we start importing BIMBAM files. I'll make a note of that.

16.2 Convert BIMBAM to GEMMA2/Rqtl2

From above we can also parse BIMBAM rather than export.
It will need both geno and pheno files to create the GEMMA2/Rqtl2 format:

gemma2 convert --bimbam -g example/mouse_hs1940.geno.txt.gz -p example/mouse_hs1940.pheno.txt

Problem: BIMBAM files can contain any value while the Rqtl2 genotype file appears to be limited to alleles. Karl has a frequency format, but that uses some fancy binary 'standard'. I'll just use the genotypes now because GeneNetwork uses those too. Translating back and forth, the BIMBAM

rs31443144, X, Y, 1, 1, 0, 0, 0, ... 0, 1, 0.5, 0.5

becomes Rqtl2

rs31443144

and then back to

rs31443144, - - 1 1 0 0 0 ... 0 1 2 2

for processing with GEMMA v1. Funny. Actually it should be comma delimited to align with PLINK output. After some hacking

gemma2 convert --bimbam -g example/BXD_geno.txt.gz -p example/BXD_pheno.txt
gemma2 export -c result.json
gemma -gk -g result_geno_bimbam.txt.gz -p result_pheno.tsv

FAILED: Parsing input file 'result_geno_bimbam.txt.gz' failed in function ReadFile_geno in src/gemma_io.cpp at line 743

is still not happy. When looking at the code it fails to get (enough) genotypes. Also interesting is this hard-coded bit in GEMMA1:

geno = atof(ch_ptr);
if (geno >= 0 && geno <= 0.5) { n_0++; }
if (geno > 0.5 && geno < 1.5) { n_1++; }
if (geno >= 1.5 && geno <= 2.0) { n_2++; }

GEMMA1 assumes minor allele homozygous: 2.0; major: 0.0 for BIMBAM, and 1.0 is H. The docs say

BIMBAM format is particularly useful for imputed genotypes, as PLINK codes genotypes using 0/1/2, while BIMBAM can accommodate any real values between 0 and 2 (and any real values if paired with -notsnp option which sets cPar.maf_level = -1). The first column is SNP id, the second and third columns are allele types with minor allele first, and the remaining columns are the posterior/imputed mean genotypes of different individuals numbered between 0 and 2.
An example mean genotype file with two SNPs and three individuals is as follows:

rs1, A, T, 0.02, 0.80, 1.50
rs2, G, C, 0.98, 0.04, 1.00

GEMMA codes alleles exactly as provided in the mean genotype file, and ignores the allele types in the second and third columns. Therefore, the minor allele is the effect allele only if one codes minor allele as 1 and major allele as 0.

BIMBAM mode is described in its own manual. The posterior mean value is the minor allele dosage. This means the encoding should be

rs31443144,-,-,2,2,0,0,0,...,0,2,1,1

but GeneNetwork uses

rs31443144, X, Y, 1, 1, 0, 0, 0, ... 0, 1, 0.5, 0.5

which leads to a different GRM. Turns out the genotypes GeneNetwork is using are wrong. It will probably not be a huge difference because the dosage is just scaled. I'll test it on a live dataset - a job that needs to be done anyway.

GEMMA also has an annotation file for SNPs

rs31443144 3010274 1
rs6269442 3492195 1
rs32285189 3511204 1
rs258367496 3659804 1

etc. Rqtl2 has a similar gmap file in tidy format:

marker,chr,pos
rs13475697,1,1.6449
rs3681603,1,1.6449001
rs13475703,1,1.73685561994571
rs13475710,1,2.57549035621086
rs6367205,1,2.85294211007162
rs13475716,1,2.85294221007162

etc., entered as "gmap" in the control file. I suppose it is OK to use our chromosome positions there. I'll need to update the converter for BIMBAM and PLINK. The PLINK version for GEMMA looks like

rs3668922 111771071 13 65.0648
rs13480515 17261714 10 4.72355
rs13483034 53249416 17 30.175
rs4184231 48293994 16 33.7747
rs3687346 12815936 14 2.45302

so both positions are included, but no header. All different in other words! Rqtl2 also has a "pmap" file which is the physical mapping distance.
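Converting a GEMMA1 annotation file into the tidy R/qtl2 gmap layout is mostly a column shuffle; a sketch (whitespace-separated anno lines in, CSV gmap lines out; the real converter will also have to handle the four-column PLINK variant):

```python
def anno_to_gmap(anno_lines):
    """GEMMA anno lines 'marker pos chr' -> R/qtl2 gmap lines 'marker,chr,pos'."""
    yield "marker,chr,pos"
    for line in anno_lines:
        fields = line.split()
        if len(fields) < 3:
            continue                      # skip malformed lines
        marker, pos, chrom = fields[0], fields[1], fields[2]
        yield f"{marker},{chrom},{pos}"   # note chr and pos swap places

out = list(anno_to_gmap(["rs31443144 3010274 1", "rs6269442 3492195 1"]))
```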
17 GEMMA GRM/K compute

time gemma -g ./example/mouse_hs1940.geno.txt.gz -p ./example/mouse_hs1940.pheno.txt -gk -o mouse_hs1940

real    0m7.545s
user    0m14.468s
sys     0m1.037s

Now the GRM output file (mouse_hs1940.cXX.txt) is pretty large at 54Mb and we keep a lot of those cached in GeneNetwork. It contains a 1940x1940 matrix of textual numbers

0.3350589588 -0.02272259412 0.0103535287 0.00838433365 0.04439930169 -0.01604468771 0.08336199305
-0.02272259412 0.3035959579 -0.02537616406 0.003454557308 ...

The gzip version is half that size. We can probably do better storing 4-byte floats and only storing half the matrix (it is symmetrical after all) followed by compression. Anyway, the first thing to do is read R/qtl2 style files into gemma2 because that is our preferred format. To pass in information we now use the control file defined below

gemma2 grm --control mouse_hs1940.json

First results show the numpy dot product outperforming gemma1 and blas today for this size dataset (I am not trying CUDA yet) - something we already saw with pylmm five years ago(!) Next stage is filtering and centering the GRM.

18 GEMMA with python-click and python-pandas-plink

This week I got the argument parsing going for gemma2 and the logic for running subcommands. It was a bit of a struggle: I had the feeling I would be better off writing argument parsing from scratch. But today I added a parameter for finding the gemma1 binary (a command line switch and an environment variable, with help, in a one-liner). That was pleasingly quick and powerful. Python click is powerful and covers most use cases.

In the next step I wanted to convert PLINK files to GEMMA2/Rqtl2 formats. A useful tool to have for debugging anyhow. I decided to try pandas-plink and added that to GEMMA2 (via a Guix package).
Reading the mouse data:

>>> G = read_plink1_bin("./example/mouse_hs1940.bed", "./example/mouse_hs1940.bim", "./example/mouse_hs1940.fam", ver>
>>> print(G)
<xarray.DataArray 'genotype' (sample: 1940, variant: 12226)>
dask.array<transpose, shape=(1940, 12226), dtype=float64, chunksize=(1024, 1024), chunktype=numpy.ndarray>
Coordinates:
  * sample   (sample) object '0.224991591484104' '-0.97454252753557' ... nan nan
  * variant  (variant) object '1_rs3683945' '1_rs3707673' ... '19_rs6193060'
    fid      (sample) <U21 '0.224991591484104' '-0.97454252753557' ... 'nan'
    iid      (sample) <U21 '0.224991591484104' '-0.97454252753557' ... 'nan'
    father   (sample) <U32 'nan' 'nan' 'nan' 'nan' ... 'nan' 'nan' 'nan' 'nan'
    mother   (sample) <U3 '1' '0' '1' 'nan' 'nan' ... 'nan' '0' '1' 'nan' 'nan'
    gender   (sample) <U32 'nan' 'nan' 'nan' 'nan' ... 'nan' 'nan' 'nan' 'nan'
    trait    (sample) float64 -0.2854 -2.334 0.04682 nan ... -0.09613 1.595 0.72
    chrom    (variant) <U2 '1' '1' '1' '1' '1' '1' ... '19' '19' '19' '19' '19'
    snp      (variant) <U18 'rs3683945' 'rs3707673' ... 'rs6193060'
    cm       (variant) float64 0.0 0.1 0.1175 0.1358 ... 53.96 54.02 54.06 54.07
    pos      (variant) int64 3197400 3407393 3492195 3580634 ... -9 -9 61221468
    a0       (variant) <U1 'A' 'G' 'A' 'A' 'G' 'A' ... 'G' 'G' 'C' 'G' 'G' 'A'
    a1       (variant) <U1 'G' 'A' 'G' 'G' 'G' 'C' ... 'A' 'A' 'G' 'A' 'A' 'G'

Very nice! With support for an embedded GRM we can also export. Note that this tool is not lazy, so for very large sets we may need to write some streaming code.

The first step is to create an R/qtl2 style genotype file to compute the GRM. The BIMBAM version for a marker looks like

rs3683945, A, G, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 2, 1, 1, 0, 1, 1, 1, 1, 2, 1, 0, 2, etc
rs3707673, G, A, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 2, 1, 1, 0, 1, 1, 1, 1, 2, 1, 0, 2,
rs6269442, A, G, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 2, 1, 0, 0, 1, 1, 1, 1, 2, 0, 0, 2,
(...)

reflecting row a0 and a1 vertically in the BED file.
===> BED (binary format)
[[1. 1. 2. 1. 2. 1. 1. 1. 1. 2. 2. 1. 2. 1. 1. 1. 1. 2. 0. 1. 1. 2. 1. 1. 1. 1. 0. 1. 2. 0. 1. 0. 1. 2. 0. 2. 1. 1. 1. 1.
[1. 1. 2. 1. 2. 1. 1. 1. 1. 2. 2. 1. 2. 1. 1. 1. 1. 2. 0. 1. 1. 2. 1. 1. 1. 1. 0. 1. 2. 0. 1. 0. 1. 2. 0. 2. 1. 1. 1. 1.
[1. 2. 2. 1. 2. 1. 1. 2. 2. 2. 2. 1. 2. 1. 1. 1. 2. 2. 0. 1. 2. 2. 1. 1. 1. 1. 0. 2. 2. 0. 2. 0. 1. 2. 0. 2. 1. 1. 1. 1.
(...) ]]

So you can see 2 and 0 values switched meaning H. The equivalent R/qtl2 can be the human-readable

"genotypes": {"A":0, "B":1, "H": 2}

For the genotype file we'll go marker by individual (transposed) because that is what we can stream in GWA. The tentative 'control' file (mirroring recla.json) to start with:

{
    "description": "mouse_hs1940",
    "crosstype": "hs",
    "individuals": 1940,
    "markers": 12226,
    "phenotypes": 7,
    "geno": "mouse_hs1940_geno.tsv",
    "alleles": [ "A", "B", "H" ],
    "genotypes": { "A": 1, "H": 2, "B": 3 },
    "geno_transposed": true
}

where cross-type "HS" should probably act similar to "DO". We'll use some (hopefully) sane defaults, such as a tab for the separator and '-' and NA for missing data. Comments have '#' on the first position. We add the number of individuals, phenotypes and markers/SNPs for easier processing and validation when parsing. Now the GEMMA2 genotype file should become

marker, 1, 2, 3, 4, ...
rs3683945 B B A B A B B B B A A B A B B B B A H B B A B B B B H B A H etc
rs3707673 B B A B A B B B B A A B A B B B B A H B B A B B B B H B A H
rs6269442 B A A B A B B A A A A B A B B B A A H B A A B B B B H A A H
(...)

Adding

"geno_compact": true

makes it the more compact

marker,1,2,3,4,...
rs3683945 BBABABBBBAABABBBBAHBBABBBBHBAH etc
rs3707673 BBABABBBBAABABBBBAHBBABBBBHBAH
rs6269442 BAABABBAAAABABBBAAHBAABBBBHAAH
(...)

I hope R/qtl2 will support this too.
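Encoding and decoding the compact rows is symmetric string handling; a sketch using the genotypes mapping from the control file above (hypothetical helper names):

```python
GENOTYPES = {"A": 1, "H": 2, "B": 3}        # as in the control file above
NUM2GENO = {v: k for k, v in GENOTYPES.items()}

def compact(genos):
    """['B', 'B', 'A', ...] -> 'BBA...' (the geno_sep: false layout)."""
    return "".join(genos)

def expand(s):
    """'BBA...' -> numeric codes for computation."""
    return [GENOTYPES[c] for c in s]

row = compact(["B", "B", "A", "H"])
print(row, expand(row))   # BBAH [3, 3, 1, 2]
```

Because every genotype is a single character, the compact row needs no separators at all, which is exactly why it halves the file size before compression.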
Next step is compression. The uncompressed version:

-rw-r--r-- 1 wrk users 46M Aug 31 08:25 mouse_hs1940_geno.tsv

with gzip compresses to

-rw-r--r-- 1 wrk users 3.3M Aug 31 07:58 mouse_hs1940_geno.tsv.gz

while the compact version is half the size and compresses better too

-rw-r--r-- 1 wrk users 2.1M Aug 31 08:30 mouse_hs1940_geno.tsv.gz

The bz2 version is a little smaller but a lot slower. lz4 has a larger file

-rw-r--r-- 1 wrk users 5.4M Aug 31 08:37 mouse_hs1940_geno.tsv.lz4

but it is extremely fast. For now we'll just go for smaller files (these genotype files can be huge). gzip support is standard in Python3. Compressing the files from inside Python appeared slow with the default maximum compression. Reducing the level a bit, it is comparable to the original writer and the file is 20x smaller. It is also 3x smaller than the original PLINK version (which is supposedly optimally compressed). The control file probably says enough by showing the compressed file name extension. Now it looks like

{
    "description": "mouse_hs1940",
    "crosstype": "hs",
    "sep": "\t",
    "individuals": 1940,
    "markers": 12226,
    "phenotypes": 7,
    "geno": "mouse_hs1940_geno.tsv.gz",
    "alleles": [ "A", "B", "H" ],
    "genotypes": { "A": 1, "H": 2, "B": 3 },
    "geno_sep": false,
    "geno_transposed": true
}

Next step is generating the GRM!

19 Building GEMMA

For deployment we'll use GNU Guix and Docker containers as described below. Gemma2 is going to be oblivious about how deployment hangs together because I think it is going to be an ecosystem on its own.

I am looking into command line parsing for gemma2. The requirement is reasonably sophisticated parameter checking that can grow over time. First of all I'll introduce 'commands' such as

gemma grm
gemma gwa

which allows for splitting option sets. More readable too. Also I want to be able to inline 'pipe' commands:

gemma grm => gwa

which will reuse data stored in RAM to speed things up.
You can imagine something like

gemma filter => grm --loco => gwa

as a single run. Now bash won't allow for this pipe operator so we may support a syntax like

gemma filter '=>' grm --loco '=>' gwa

or

gemma filter % grm --loco % gwa

Note that these 'pipes' will not be random workflows. It is just a CLI convenience notation that visually looks composable. Also the current switches should be supported and gemma2 will drop to gemma version 1 if it does not understand a switch. For example

gemma -g /example/mouse_hs1940.geno.txt.gz -p mouse_hs1940.pheno.txt -a mouse_hs1940.anno.txt -gk -no-check

should still work, simply because current workflows expect that. The python click package looks like it can do this. What is tricky is that we want to check all parameters before the software runs. For now, the grouping works OK - you can chain commands, e.g.

@click.group(invoke_without_command=True)
@click.pass_context
def gemma2(context):
    if not context.invoked_subcommand:
        click.echo("** Call gemma1")

@click.command()
def grm():
    click.echo('** Kinship/Genetic Relationship Matrix (GRM) command')

@click.command()
# @click.argument('verbose', default=False)
def gwa():
    click.echo('** Genome-wide Association (GWA)')

gemma2.add_command(grm)
gemma2.add_command(gwa)
gemma2()

which allows

gemma grm ...   -> calls into grm and gwa
gemma gwa ...
gemma           (no options)

Not perfect, because parameters are checked just before the actual chained command runs. But good enough for now. See also the click group tutorial.

Performance metrics before doing a new release show openblas has gotten a bit faster for GEMMA. I also tried a build of recent openblas-git-0.3.10 with the usual tweaks. It is really nice to see how much effort is going into OpenBLAS development. The gains we get for free!
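The quoted '=>' pipe syntax could be handled before click ever sees the arguments, by splitting argv on the pipe token. This is purely a sketch of the idea, not something gemma2 implements:

```python
def split_pipeline(argv, token="=>"):
    """Split a flat argv list into per-stage argument lists, e.g.
    ['filter', '-c', 'd', '=>', 'grm', '--loco', '=>', 'gwa']
    -> [['filter', '-c', 'd'], ['grm', '--loco'], ['gwa']]."""
    stages, current = [], []
    for arg in argv:
        if arg == token:
            stages.append(current)   # close the current stage
            current = []
        else:
            current.append(arg)
    stages.append(current)           # last stage has no trailing token
    return stages

stages = split_pipeline(["filter", "-c", "data", "=>", "grm", "--loco", "=>", "gwa"])
```

Each stage list could then be dispatched to the matching click subcommand, with shared state (the in-RAM dataset) threaded between stages.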
As people have trouble building GEMMA on MacOS (and I don't have a Mac) I released a test Docker container that can be run as

lario:/home/wrk/iwrk/opensource/code/genetics/gemma# time docker run -v `pwd`/example:/example 2e82532c7440 gemma -g /example/mouse_hs1940.geno.txt.gz -p /example/mouse_hs1940.pheno.txt -a /example/mouse_hs1940.anno.txt -gk -no-check

real    0m8.161s
user    0m0.033s
sys     0m0.023s

Note that the local directory is mounted with the -v switch.

20 Starting on GEMMA2

A lot happened in the last month. Not least creating ... which is getting quite a bit of attention. Today is the official kick-off day for new GEMMA development! The coming months I am taking a coding sabbatical. Below you can read that I was initially opting for D, but the last months we have increasingly invested in Rust and it looks like new GEMMA will be written in Python + Rust.

First, why Rust? Mostly because our work is showing it forces coders to work harder at getting things correct. Not having a GC can look like a step backward, but when binding to other languages it actually is handy. With Rust you know where you are - no GC kicking in and doing (black) magic. Anyway, I am going to give it a shot. I can revert later and D is not completely ruled out.

Now, why Python instead of Racket? Racket is vastly superior to Python in my opinion. Unfortunately, Python has the momentum and if I write the front-end in Racket it means no one will contribute. So, this is pragmatism to the Nth degree. We want GEMMA2/gemmalib to be useful to a wider community for experimentation. It will be a toolbox rather than an end-product. For example, I want to be able to swap in/out modules that present different algorithms. In the end it will be a toolbox for mapping phenotypes to genotypes. I don't think we'll regret a choice for Python+Rust. Both languages are complementary and have amazing community support, both in terms of size and activity.
Having Python as a front-end implies that it should be fairly trivial to bind the Rust back-end to other languages, such as Racket, R and Ruby. It will happen if we document it well enough.

One feature of new GEMMA2 will be that it actually can run GEMMA1. I am going to create a new development repository that can call into GEMMA1 transparently if functionality does not exist. This means the same command line parameters should work. GEMMA2 will fall back to GEMMA1 with a warning if it does not understand parameters. GEMMA2-specific parameters are a different set.

I promised GeneNetwork that I would implement precompute for all GN mappings for GEMMA. I think that is a great idea. It is also an opportunity to clean up GEMMA. Essentially a rewrite where GEMMA becomes more of a library which can be called from any other language. That will also make it easier to optimise GEMMA for certain architectures. It is interesting to note that despite the neglect GEMMA is getting and its poor runtime performance it is still a surprisingly popular tool. The implementation, apparently, still rocks!

So, let's start with GEMMA version 2 (GEMMA2). What is GEMMA2? GEMMA2 is a fast implementation of GEMMA1 in D. Why not Rust, you may ask? Well, Rust is a consideration and we can still port, but D is close to idiomatic C++ which means existing GEMMA code is relatively easy to convert. I had been doing some of that already with Prasun, with faster-lmm-d. That project taught us a lot though we never really got it to replace GEMMA. Next to D I will be using Racket to create the library bindings. Racket is a Lisp and with a good FFI it should be easy to port that to (say) Python or Ruby. So, in short: front-end Racket, back-end D. Though there may be variations.

21 Porting GeneNetwork1 to GNU Guix

GeneNetwork1 is a legacy Python web application which was running on a 10 year old CentOS server. It depends on an older version of Python, mod-python and other obsolete modules.
We decided to package it in GNU Guix because Guix gives full control over the dependency graph. Also GNU Guix has features like the time machine and containers, which allow us to make a snapshot of a deployment graph in time and serve different versions of releases.

The first step to package GN1 with the older packages was executed by Efraim. He also created a container to run Apache, mod-python and GN1. The only problem is that mod-python in the container did not appear to be working.

22 Chasing that elusive sambamba bug (FIXED!)

Locating the bug was fairly easy - triggered by decompressBgzfBlock - and I managed to figure out it had to do with the copying of a thread object that messes up the stack. Fixing it, however, is a different story! It took a while to get a minimal program to reproduce the problem. It looks like

BamReader bam;
auto fn = buildPath(dirName(__FILE__), "data", "ex1_header.bam");
foreach (i; 0 .. 60) {
  bam = new BamReader(fn);
  assert(bam.header.format_version == "1.3");
}

which segfaults most of the time, but not always, with a stack trace

#0  0x00007ffff78b98e2 in invariant._d_invariant(Object) () from /usr/lib/x86_64-linux-gnu/libdruntime-ldc-debug-shared.so.87
#1  0x00007ffff7d4711e in std.parallelism.TaskPool.queueLock() () from /usr/lib/x86_64-linux-gnu/libphobos2-ldc-debug-shared.so.87
#2  0x00007ffff7d479b9 in _D3std11parallelism8TaskPool10deleteItemMFPSQBqQBp12AbstractTaskZb () from /usr/lib/x86_64-linux-gnu/libphobos2-ldc-debug-shared.so.87
#3  0x00007ffff7d4791d in _D3std11parallelism8TaskPool16tryDeleteExecuteMFPSQBwQBv12AbstractTaskZv ()
#4  0x00005555557125c5 in _D3std11parallelism__T4TaskS_D3bio4core4bgzf5block19decompressBgzfBlockFSQBrQBqQBoQBm9BgzfBlockCQCoQCn5utils7memoize__T5CacheTQCcTSQDxQDwQDuQDs21DecompressedBgzfBlockZQBwZQBpTQDzTQDgZQGf10yieldForceMFNcNdNeZQCz (this=0x0)
#6  0x00007ffff78a56fc in object.TypeInfo_Struct.destroy(void*) const ()
#7  0x00007ffff78bc80a in rt_finalizeFromGC () from
/usr/lib/x86_64-linux-gnu/libdruntime-ldc-debug-shared.so.87
#8  0x00007ffff7899f6b in _D2gc4impl12conservativeQw3Gcx5sweepMFNbZm ()
#9  0x00007ffff78946d2 in _D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm ()
#10 0x00007ffff78969fc in _D2gc4impl12conservativeQw3Gcx8bigAllocMFNbmKmkxC8TypeInfoZPv ()
#11 0x00007ffff78917c3 in _D2gc4impl12conservativeQw3Gcx5allocMFNbmKmkxC8TypeInfoZPv ()
(...)
task_pool=0x7ffff736f000) at reader.d:130
#29 0x000055555575e445 in _D3bio3std3hts3bam6reader9BamReader6__ctorMFAyaZCQBvQBuQBtQBsQBrQBn (warning: (Internal error: pc 0x55555575e444 in read in psymtab, but not in symtab.) at reader.d:135
#30 0x00005555557bff0b in D main () at read_bam_file.d:47

where reader.d:130 adds a BamReader to the task pool. It is clear the GC kicks in and we end up with this mess. Line #4 contains

std.parallelism.TaskS
bio.core.bgzf.block.decompressBgzfBlock
utils.memoize.Cache-DecompressedBgzfBlock-yieldForce

yieldForce executes a task in the current thread which is coming from a cache:

alias Cache!(BgzfBlock, DecompressedBgzfBlock) BgzfBlockCache;

and Cache is part of BioD. One tricky aspect of Sambamba is that the design is intricate. In our crashing example we only use the simple BamReader which is defined in std.hts.bam.reader.d. We are using a default taskpool. In reader.d not much happens - it is almost all simple plumbing. std.hts.bam.read.d, meanwhile, represents the BAM format. The Bgzf block processing happens in bio.core.bgzf.inputstream. The BamReader uses BgzfInputStream which has the functions fillNextBlock and setupReadBuffer. The constructor sets up a RoundBuf!BlockAux(n_tasks). When I set n_tasks to a small number it no longer crashes!? The buffer

_task_buf = uninitializedArray!(DecompressionTask[])(n_tasks);

is a critical piece. Even increasing the dimension to n_tasks+2 is enough to remove most segfaults, but not all.
Remember it is defined as

    alias Task!(decompressBgzfBlock, BgzfBlock, BgzfBlockCache) DecompressionTask;
    DecompressionTask[] _task_buf;

with dimension n_tasks. Meanwhile BlockAux is

    static struct BlockAux {
      BgzfBlock block;
      ushort skip_start;
      ushort skip_end;
      DecompressionTask* task;
      alias task this;
    }

Injecting code

    struct DecompressedBgzfBlock {
      ~this() {
        stderr.writeln("destroy DecompressedBgzfBlock ", start_offset, ":", end_offset, " ", decompressed_data.sizeof);
      };
      ulong start_offset;
      ulong end_offset;
      ubyte[] decompressed_data;
    }

It is interesting to see that, even when not segfaulting, the block offsets look corrupted:

    destroy DecompressedBgzfBlock 0:0 16
    destroy DecompressedBgzfBlock 0:0 16
    destroy DecompressedBgzfBlock 0:0 16
    destroy DecompressedBgzfBlock 89554:139746509800748 16
    destroy DecompressedBgzfBlock 140728898420736:139748327903664 16
    destroy DecompressedBgzfBlock 107263:124653 16
    destroy DecompressedBgzfBlock 89554:107263 16
    destroy DecompressedBgzfBlock 71846:89554 16
    destroy DecompressedBgzfBlock 54493:71846 16
    destroy DecompressedBgzfBlock 36489:54493 16
    destroy DecompressedBgzfBlock 18299:36489 16
    destroy DecompressedBgzfBlock 104:18299 16
    destroy DecompressedBgzfBlock 0:104 16

and I am particularly suspicious about this piece of code in inputstream.d, where a task gets allocated and the resulting buffer gets copied into the round buffer. This is a hack, no doubt about it:

    DecompressionTask tmp = void;
    tmp = scopedTask!decompressBgzfBlock(b.block, _cache);
    auto t = _task_buf.ptr + _offset / _max_block_size;
    import core.stdc.string : memcpy;
    memcpy(t, &tmp, DecompressionTask.sizeof);
    b.task = t;
    _tasks.put(b);
    _pool.put(b.task);

and it is probably the reason why decompressBgzfBlock gets corrupted, sending the GC into a tailspin when it kicks in. Artem obviously designed it this way to avoid allocating memory for the task, but I think he went a little too far here! One thing I tried earlier, and have to try again, is getting rid of that copying.
First of all

    alias Task!(decompressBgzfBlock, BgzfBlock, BgzfBlockCache) DecompressionTask;

defines DecompressionTask as calling decompressBgzfBlock with parameters, which returns a DecompressedBgzfBlock. Remember it bails out with this block. There is something else that is notable: it is actually the cached version that bails out. Removing the cache code makes it run more reliably - but not completely. Also, we are getting memory errors now:

    destroy DecompressedBgzfBlock 4294967306:0 16
    core.exception.InvalidMemoryOperationError@core/exception.d(702): Invalid memory operation

but that leaves no stack trace. Now we get

    std.parallelism.Task_bio.core.bgzf.block.decompressBgzfBlock-DecompressedBgzfBlock-yieldForce

so the caching itself is not the problem. In the next phase we are going to address that dodgy memory copying by introducing a task managed by the GC instead of using the ScopedTask. This is all happening in BgzfInputStream in inputstream.d, used by reader.d, which inherits from Stream. BamReader uses that functionality to iterate through the reads using D's popFront() design. Streams allow reading a variable based on its type, e.g. a BAM read. The BamReader fetches the necessary data from BgzfBlockSupplier with getNextBgzfBlock, which is used as the _bam variable. BamReader.reader itself returns an iterator. It is interesting to note how the OOP design obfuscates what is going on. It is also clear that I have to fix BgzfInputStream in inputstream.d, because it handles the tasks in the round buffer. Part of sambamba's complexity is due to OOP and to having the thread pool running at the lowest level (unpacking bgzf). If I remove the thread pool there, it means that threading will have to happen at a higher level. I.e., sambamba gets all its performance from multithreaded low-level unpacking of data blocks. It is unusual, but it does have the (potential) advantage of leaving higher level code simple.
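What "threading at a higher level" over independent compressed blocks looks like can be sketched as follows. This is a hypothetical Python analogue, not sambamba code; zlib stands in for bgzf (bgzf is gzip with extra framing), and in-order consumption of futures plays the role of yieldForce on the round buffer of tasks.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor


def compress_block(data):
    # stand-in for writing one bgzf block
    return zlib.compress(data)


def decompress_block(block):
    return zlib.decompress(block)


def read_blocks(blocks, n_tasks=4):
    """Decompress independent blocks in parallel, yield them in order."""
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        futures = [pool.submit(decompress_block, b) for b in blocks]
        for f in futures:  # in-order consumption, like yieldForce
            yield f.result()


blocks = [compress_block(("block %d " % i).encode() * 100) for i in range(8)]
out = list(read_blocks(blocks))
```

Here the task objects are heap-allocated and owned by the pool, so there is no equivalent of the memcpy-into-a-round-buffer trick that is under suspicion above.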
I note with sambamba sort, however, that Artem injected threads there too, which begs the question what happens when you add different tasks to the same pool that have different timing characteristics. It would be interesting to see the effect of using two task pools.

Looking at block.d again, BgzfBlock is defined as a struct containing a _buffer defined as

    public ubyte[] _buffer = void;

and it is used in (indeed) block.d and inputstream.d only. The use of a struct means that BgzfBlock gets allocated on the stack. Meanwhile _buffer gets pointed into the uncompressed buffer, which in turn is a slice of uncompressed_buf that is also on the stack in decompressBgzfBlock (surprise, surprise) and gets assigned right before returning

    block._buffer[0 .. block.input_size] = uncompressed[];

Now, the cached version (which I disabled) actually does a copy to the heap

    BgzfBlock compressed_bgzf_block = block;
    compressed_bgzf_block._buffer = block._buffer.dup;
    DecompressedBgzfBlock decompressed_bgzf_block;
    with (decompressed_bgzf_block) {
      start_offset = block.start_offset;
      end_offset = block.end_offset;
      decompressed_data = uncompressed[].dup;
    }
    cache.put(compressed_bgzf_block, decompressed_bgzf_block);

Artem added a comment:

    /// A buffer is used to reduce number of allocations.
    ///
    /// Its size is max(cdata_size, input_size)
    /// Initially, it contains compressed data, but is rewritten
    /// during decompressBgzfBlock -- indeed, who cares about
    /// compressed data after it has been uncompressed?

Well, maybe the GC does! Or maybe the result does not fit the same buffer. Hmmm. If you go by the values

    destroy DecompressedBgzfBlock 140728898420736:139748327903664 16

you can see the BgzfBlock is compromised by a stack overwrite. Changing the struct to a class fixes the offsets

    destroy DecompressedBgzfBlock 104:18299 16
    destroy DecompressedBgzfBlock 18299:36489 16
    destroy DecompressedBgzfBlock 36489:54493 16

so things start to look normal. But it still segfaults on DecompressedBgzfBlock on GC in yieldForce.
In

    std.parallelism.Task_bio.core.bgzf.block.decompressBgzfBlock-DecompressedBgzfBlock-yieldForce

the data structure underpinning decompressBgzfBlock is corrupt. The problem has to be with struct BgzfBlock, struct BgzfBlockAux or the Roundbuf; BgzfBlockAux goes on a Roundbuf. I was thinking last night that there may be a struct size problem. The number of tasks in the round buffer should track the number of threads. Increasing the size of the round buffer makes a crash take longer.

Well, I hit the jackpot! After disabling

    _task_buf = uninitializedArray!(DecompressionTask[])(n_tasks);

there are no more segfaults. I should have looked at that more closely. I can only surmise that because the contained objects contain pointers (they do) the GC gets confused when it occasionally finds something that looks like a valid pointer. Using uninitializedArray on an object that includes a pointer reference is dangerous.

Success!! That must have taken me at least three days of work to find this bug, one of the most elusive bugs I have encountered. Annoyingly, I was so close earlier when I expanded the size of n_tasks! The segfault got triggered by a new implementation of D's garbage collector. Pointer space is tricky, and it shows how careful we have to be with non-initialized data.

Now, what is the effect of disabling the cache and making more use of garbage-collected structures (one for each Bgzf block)? User time went down and CPU usage too, but wall clock time nudged up. Memory use also went up by 25%. The garbage collector kicked in twice as often. This shows that Artem's aggressive avoidance of the garbage collector does have impact, and I'll have to revert some of my changes now. Releasing the current version should be OK, though the performance difference does add to overall time and energy use. Even 10% of emissions is worth saving with tools run at this scale.
So, I started reverting changes, and after reverting two items:

- Sambamba
  - [-] speed test
    + [X] revert on class DecompressedBgzfBlock to struct
    + [X] revert on auto buf2 = (block._buffer[0 .. block.input_size]).dup;
    + [ ] revert on DecompressionTask[] _task_buf;
    + [ ] revert on tmp = scopedTask!decompressBgzfBlock(b.block)
    + [ ] revert on Cache

Sambamba is very similar to the last release. The new release is very slightly slower but uses less RAM. I decided not to revert on using the roundbuf, scopedTask and Cache, because each of these introduces complexity with no obvious gain. v0.7.1 will be released after I update the release notes and run final tests.

23 It has been almost a year!

And a new job. I am writing a REST service which needs to maintain some state. The built-in Racket server has continuations - which is rather nice! Racket also has support for Redis, SQLite (nice example) and a simple key-value interface (ini-style) which I can use for sessions.

Ironically, two weeks after writing the above I was hit by a car on my bicycle in Memphis. 14 fractures and a ripped AC joint. It took surgery and four months to feel halfway normal again…

24 Speeding up K

GEMMA has been released with many bug fixes and speedups. Now it is time to focus on further speedups with K (and GWA after). There are several routes to try. One possibility is to write our own matrix multiplication routine in D. My hunch is that because computing K is a special case we can get significant speedups compared to a standard matrix multiplication for larger matrices. Directions we can go are:

- stop using BLAS and create a multi-core dot-product based multiplication that makes use of
  - multi-threaded decompression
  - starting compute while reading data
  - using aligned rows only for dot products (fewer CPU cache misses)
  - computing half the result (K is symmetric!)
  - heavy use of threading
  - AVX2 optimizations
  - floats instead of doubles (we can do this with hardware checking)
- chromosome-based computations (see below for LOCO)
- use tensor routines to reduce RAM IO

In other words, quite a few possible ways of improving things. It may be that the current BLAS routine is impossible to beat for our data, but there is only one way to find out: by trying.

The first step, you would think, is simple: take the genotype data, pass it into calckinship(g) and get K back. Unfortunately GEMMA intertwines reading the genotype file, the stepwise computation of K, and the scaling of the matrix. All in one, and the code is duplicated for Plink and BIMBAM formats. Great. Now I don't feel like writing any more code in C++ and, fortunately, Prasun has done some of the hard work in faster-lmm-d already. So the first step is to parse the BIMBAM format using an iterator. We'll use this to load the genotype data in RAM (we are not going to assume memory restrictions for now). That genotype decompression and reading part is done now, and I added it to BioD as decompress.d. Next I added a tokenizer which is a safe replacement for the strtok GEMMA uses. Using BLAS I added a full K computation after reading the genotype file, which is on par (speed-wise) with the current GEMMA implementation. The GEMMA implementation should be more optimal because it splits the matrix computation into smaller blocks and starts while streaming the genotype data. Prasun did an implementation of that in gemmakinship.d which is probably faster than my current version. Even so, I'll skip that method for now, until I am convinced that none of the above dot-product optimizations pan out. Note that only a fraction of the time is used to do the actual matrix multiplication.

    2018-10-23T09:53:38.299:api.d:flmmd_compute_bimbam_K:37 GZipbyLine
    2018-10-23T09:53:43.911:kinship.d:kinship_full:23 Full kinship matrix used

Reading the file took 6 seconds.
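The K computation itself is just a scaled G·Gᵀ. A minimal numpy sketch (illustrative only - GEMMA's own code is C++ and streams the genotype file in blocks; the kinship() helper and the toy matrix are made up for this example) also shows why computing half the matrix is enough:

```python
import numpy as np


def kinship(G):
    """K = W Wᵀ / m for an n-individuals x m-markers genotype matrix G.

    Markers (columns) are centered first. K is symmetric, so a
    syrk-style BLAS routine could compute just one triangle.
    """
    W = G - G.mean(axis=0)  # center each marker
    m = G.shape[1]
    return W @ W.T / m


G = np.array([[0., 1., 2.],
              [1., 1., 0.],
              [2., 0., 1.]])
K = kinship(G)
```

This corresponds roughly to a centered relatedness matrix; the block-streaming variant accumulates the same sum of outer products while reading the genotype file, which is why the two approaches give identical results.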
    2018-10-23T09:53:43.911:dmatrix.d:slow_matrix_transpose:57 slow_matrix_transpose
    2018-10-23T09:53:44.422:blas.d:matrix_mult:48 matrix_mult
    2018-10-23T09:53:45.035:kinship.d:kinship_full:33 DONE rows is 1940 cols 1940

Transpose + GxGT took 1 second. The total for 6e05143d717d30eb0b0157f8fd9829411f4cf2a0

    real 0m9.613s
    user 0m13.504s
    sys 0m1.476s

I also wrote a threaded decompressor, but it is slightly slower. Gunzip is so fast that spawning threads adds more overhead than it saves. In this example reading the file dominates the time, but with LOCO we get ~20x the K computation (one for each chromosome), so it makes sense to focus on K. For model species, for the foreseeable future, we'll look at thousands of individuals, so it is possible to hold all K matrices in RAM. Which also means we can fill them using chromosome-based dot-products. Another possible optimization.

Next steps are to generate output for K and reproduce GEMMA output (I also need to filter on missing phenotype data). The idea is to compute K and LMM in one step, followed by developing the LOCO version as a first-class citizen. Based on the above metrics we should be able to reduce a LOCO K run on a dataset of this size from 7 minutes to 30 seconds(!) That is even without any major optimizations.

25 MySQL to MariaDB

The version we are running today:

    mysql --version
    mysql Ver 14.12 Distrib 5.0.95, for redhat-linux-gnu (x86_64) using readline 5.1

26 MySQL backups (stage2)

Backup to AWS.

    env AWS_ACCESS_KEY_ID=* AWS_SECRET_ACCESS_KEY=* RESTIC_PASSWORD=genenetwork /usr/local/bin/restic --verbose init /mnt/big/mysql_copy_from_lily -r s3:s3.amazonaws.com/backupgn

    env AWS_ACCESS_KEY_ID=* AWS_SECRET_ACCESS_KEY=* RESTIC_PASSWORD=genenetwork /usr/local/bin/restic backup /mnt/big/mysql_copy_from_lily -r s3:s3.amazonaws.com/backupgn

I also added backups for genotype files and ES.

27 MySQL backups (stage1)

Before doing any serious work on MySQL I decided to create some backups.
Lily is going to be the master for now, so the logical backup is on P1, which has a large drive. Much of the space is taken up by a running MySQL server (which is updated!) and a data file by Lei named 20151028dbsnp containing a liftover of dbSNP142. First I simply compressed the input files, so as not to throw them away. Because ssh is so old on lily I can't log in nicely from Penguin, but the other way around works. This is temporary; as the mysql user:

    ssh -i /var/lib/mysql/.ssh/id_rsa pjotr@penguin

The script becomes (as a weekly CRON job for now, run as the mysql user)

    rsync -va /var/lib/mysql --rsync-path=/usr/bin/rsync -e "ssh -i /var/lib/mysql/.ssh/id_rsa" pjotr@penguin:/mnt/big/mysql_copy_from_lily/

This means we have a weekly backup for now. I'll improve it with more disk space and MariaDB to get incrementals. Actually, it looks like only two tables really change

    -rw-rw---- 1 pjotr mysql 29G Aug 16 18:59 ProbeSetData.MYD
    -rw-rw---- 1 pjotr mysql 36G Aug 16 19:00 ProbeSetData.MYI

which are not that large. To make incrementals we are opting for restic. It looks modern and has interesting features.

    env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic init -r /mnt/big/backup_restic_mysql/

Now backups can be generated incrementally with

    env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic --verbose backup /mnt/big/mysql_copy_from_lily -r /mnt/big/backup_restic_mysql

To list snapshots (directory format)

    env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic -r backup_restic_mysql/ snapshots

Check

    env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic -r backup_restic_mysql/ check

Prune can merge hashes, saving space

    env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic -r backup_restic_mysql/ prune

So, now we can do a daily backup from Lily and have incrementals stored on Penguin too.
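The env/password boilerplate in these invocations could be generated from a small helper. This is a hypothetical sketch (the helper and its signature are made up; the binary path, repo paths and password are the ones used above), handy for keeping cron entries short:

```python
import subprocess

RESTIC = "/usr/local/bin/restic"


def restic_cmd(action, repo, path=None, password="genenetwork", verbose=False):
    """Build a restic argv list plus environment, mirroring the backup jobs."""
    cmd = [RESTIC]
    if verbose:
        cmd.append("--verbose")
    cmd.append(action)
    if path:
        cmd.append(path)
    cmd += ["-r", repo]
    return cmd, {"RESTIC_PASSWORD": password}


cmd, env = restic_cmd("backup", "/mnt/big/backup_restic_mysql",
                      path="/mnt/big/mysql_copy_from_lily", verbose=True)
# subprocess.run(cmd, env=env, check=True)  # uncomment on the backup host
```

The subprocess call is left commented out since it only makes sense on a machine that actually has restic and the repository mounted.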
My cron reads

    40 5 * * * env RESTIC_PASSWORD=genenetwork /usr/local/bin/restic --verbose backup /mnt/big/mysql_copy_from_lily -r /mnt/big/backup_restic_mysql|mail -s "MySQL restic backup" pjotr2017@thebird.nl

Restic can push to AWS S3 buckets. That is the next step (planned).

28 Migrating GN1 from EC2

GN1 is costing us $600+ per month on EC2. With our new shiny server we should move it back into Memphis. The main problem is that the base image is

    Linux version 2.6.18-398.el5 (mockbuild@builder17.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-55)) #1 SMP Tue Sep 16 20:50:52 EDT 2014

I mean, seriously, ten years old? Compared to GN2, GN1 should be simpler to deploy. The main problem is that it requires Python 2.4 - currently I am not sure why that is. Some system maintenance scripts live here. There is a Docker image by Artem here. And there are older notes referred to in the Artem doc. All of that is about GN2, really. So far, I have not found anything on GN1. In /usr/lib/python2.4/site-packages I find the following modules:

    Alacarte elementtree gmenu.so htmlgen iniparse invest json libsvn nose numarray piddle pirut pp.py pptransport.py ppworker.py pyx pyXLWriter reaper.so sos svg urlgrabber

Nothing too serious. Worse is that GN1 uses mod_python, which went out of vogue before 2013. The source code is still updated - you see, once things are out there… This means we'll need to deploy Apache in the VM with mod_python. The installation on Lily pulls in 50+ Apache modules. Argh. If we want to support this… The good news is that most modules come standard with Apache - and I think we can disable a lot of them.

29 Fixing Gunicorn in use

    DEBUG:db.webqtlDatabaseFunction:.retrieve_species: retrieve_species result:: mouse
    DEBUG:base.data_set:.create_dataset: dataset_type: ProbeSet
    [2018-07-16 16:53:51 +0000] [4185] [ERROR] Connection in use: ('0.0.0.0', 5003)
    [2018-07-16 16:53:51 +0000] [4185] [ERROR] Retrying in 1 second.
30 Updating ldc with latest LLVM

Updating ldc to the latest LLVM leaves only 5 tests failing! That is rather good.

    99% tests passed, 5 tests failed out of 1629
    Total Test time (real) = 4135.71 sec
    The following tests FAILED:
      387 - std.socket (Failed)
      785 - std.socket-debug (Failed)
      1183 - std.socket-shared (Failed)
      1581 - std.socket-debug-shared (Failed)
      1629 - lit-tests (Failed)

build/Testing/Temporary/LastTest.log:

    (core.exception.AssertError@std/socket.d(456): Assertion failure
    build/Testing/Temporary/LastTest.log:core.exception.RangeError@std/socket.d(778): Range violation

and

    Failing Tests (1):
      LDC :: plugins/addFuncEntryCall/testPlugin.d

Fixing these, ldc 1.10.0 compiles on LLVM 3.8!

31 Fixing sambamba

We were getting this new error

    /gnu/store/4snsi4vg06bdfi6qhdjfbhss16kvzxj7-ldc-1.10.0/include/d/std/numeric.d(1845): Error: read-modify-write operations are not allowed for shared variables. Use core.atomic.atomicOp!"+="(s, e) instead.

which was triggered by the normalize function

    bool normalize(R)(R range, ElementType!(R) sum = 1)

The normalization function fractions range values to the value of sum, e.g.,

    a = [ 1.0, 3.0 ];
    assert(normalize(a));
    assert(a == [ 0.25, 0.75 ]);

So, the question is why it needs to be a shared range. I modified it to cast to shared after normalization. The next one was

    BioD/bio/maf/reader.d(53): Error: cannot implicitly convert expression `this._f.byLine(cast(Flag)true, '\x0a')` of type `ByLineImpl!(char, char)` to `ByLine!(char, char)`

also reported on

After fixing the compile-time problems, the tests failed for view, view -f unpack, subsample and sort(?!). In fact, all md5sums we test. For view it turns out the order of the output differs. view -f sam returns identity, also when converting back from BAM. It had to be something in the header. The header (apparently) contains the version of sambamba!

    ldc2 -wi -I.
    -IBioD -IundeaD/src -g -O3 -release -enable-inlining -boundscheck=off -of=bin/sambamba bin/sambamba.o utils/ldc_version_info_.o htslib/libhts.a /usr/lib/x86_64-linux-gnu/liblz4.a -L-L/home/travis/dlang/ldc-1.10.0/lib -L-L/usr/lib/x86_64-linux-gnu -L-lrt -L-lpthread -L-lm -L-llz4

    bin/sambamba.o: In function `_D5utils3lz426createDecompressionContextFZPv':
    /home/travis/build/biod/sambamba/utils/lz4.d:199: undefined reference to `LZ4F_createDecompressionContext'
    /home/travis/build/biod/sambamba/utils/lz4.d:200: undefined reference to `LZ4F_isError'
    (...)

    objdump -T /usr/lib/x86_64-linux-gnu/liblz4.so|grep LZ4F
    000000000000cd30 g DF .text 00000000000000f6 Base LZ4F_flush
    000000000000ce30 g DF .text 0000000000000098 Base LZ4F_compressEnd
    000000000000c520 g DF .text 00000000000002f5 Base LZ4F_compressBegin
    000000000000c4a0 g DF .text 000000000000003f Base LZ4F_createCompressionContext
    000000000000dd60 g DF .text 00000000000000ee Base LZ4F_getFrameInfo
    000000000000ced0 g DF .text 00000000000002eb Base LZ4F_compressFrame
    000000000000c4e0 g DF .text 0000000000000033 Base LZ4F_freeCompressionContext
    000000000000c470 g DF .text 0000000000000029 Base LZ4F_getErrorName
    000000000000c460 g DF .text 000000000000000a Base LZ4F_isError
    000000000000c8a0 g DF .text 00000000000000e3 Base LZ4F_compressFrameBound
    000000000000d1c0 g DF .text 0000000000000038 Base LZ4F_createDecompressionContext
    000000000000d200 g DF .text 0000000000000037 Base LZ4F_freeDecompressionContext
    000000000000c990 g DF .text 0000000000000396 Base LZ4F_compressUpdate
    000000000000d240 g DF .text 0000000000000b1d Base LZ4F_decompress
    000000000000c820 g DF .text 000000000000007d Base LZ4F_compressBound

32 Trapping NaNs

When a floating point computation results in a NaN/inf/underflow/overflow, GEMMA should stop. The GSL has a generic function gsl_ieee_env_setup which works by setting an environment variable. Not exactly useful, because I want GEMMA to run with just a --check switch.
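The effect of such a trap can be mimicked in Python with numpy's floating-point error state. This is a rough analogue of feenableexcept for illustration, not GEMMA code; checked_divide is a made-up helper:

```python
import numpy as np


def checked_divide(a, b):
    """Divide two arrays, raising FloatingPointError on x/0, 0/0 or overflow.

    np.errstate is the per-block equivalent of a --check style switch:
    instead of silently producing inf/NaN, the operation raises.
    """
    with np.errstate(divide="raise", invalid="raise", over="raise"):
        return np.asarray(a, dtype=float) / np.asarray(b, dtype=float)


checked_divide([1.0, 2.0], [2.0, 4.0])  # fine
try:
    checked_divide([1.0], [0.0])        # trips the trap
except FloatingPointError as e:
    print("trapped:", e)
```

Just like feenableexcept, the trap turns a silently propagating NaN into an error at the exact operation that produced it, which is what makes the stack trace above so useful.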
The GNU compilers have feenableexcept as a function, which did not work immediately. Turned out I needed to include fenv.h before the GSL headers:

    #include <fenv.h>
    #include "gsl/gsl_matrix.h"

Enabling FP checks returns

    Thread 1 "gemma" received signal SIGFPE, Arithmetic exception.
    0x00007ffff731983d in ieeeck_ () from /home/wrk/opt/gemma-dev-env/lib/libopenblas.so.0
    (gdb) bt
    #0 0x00007ffff731983d in ieeeck_ () from /home/wrk/opt/gemma-dev-env/lib/libopenblas.so.0
    #1 0x00007ffff6f418dc in dsyevr_ () from /home/wrk/opt/gemma-dev-env/lib/libopenblas.so.0
    #2 0x0000000000489181 in lapack_eigen_symmv (A=A@entry=0x731a00, eval=eval@entry=0x731850, evec=evec@entry=0x7317a0, flag_largematrix=<optimized out>) at src/lapack.cpp:195
    #3 0x00000000004897f8 in EigenDecomp (flag_largematrix=<optimized out>, eval=0x731850, U=U@entry=0x7317a0, G=G@entry=0x731a00) at src/lapack.cpp:232
    #4 EigenDecomp_Zeroed (G=G@entry=0x731a00, U=U@entry=0x7317a0, eval=eval@entry=0x731850, flag_largematrix=<optimized out>) at src/lapack.cpp:248
    #5 0x0000000000457b3a in GEMMA::BatchRun (this=this@entry=0x7fffffffcc30, cPar=...) at src/gemma.cpp:2598

Even for the standard dataset! Turns out it is division by zero and FP underflow in lapack_eigen_symmv.

33 A gemma-dev-env package

I went through the difficulties of updating GNU Guix and writing a package that creates a GEMMA development environment. This was based on the need for a really controlled dependency graph. It went wrong the last time we released GEMMA, witness "gemma got slower", leading to the discovery that I had linked in a less performant lapack. GNU Guix was not behaving because I had not updated it in a while, and I discovered at least one bug. Anyway, I have a working build system now, and we will work on code in the coming weeks to fix a number of GEMMA issues and bring out a new release.

34 Reviewing a CONDA package

The main achievement last week was getting GEMMA installed in a controlled fashion and proving that performance is still up to scratch.
For JOSS I am reviewing a CONDA package for RNA-seq analysis. The author went to great lengths to make it easy to install with Bioconda, so I thought to have a go. GNU Guix has a conda bootstrap, so time to try that!

    guix package -A conda
    conda 4.3.16 out gnu/packages/package-management.scm:704:2

and it wants to install

    /gnu/store/pj6d293c7r9xrc1nciabjxmh05z24fh0-Pillow-4.3.0.tar.xz
    /gnu/store/m5prqxzlgaargahq5j74rnvz72yhb77l-python-olefile-0.44
    /gnu/store/s9hzpsqf9zh9kb41b389rhmm8fh9ifix-python-clyent-1.2.1
    /gnu/store/29wr2r35z2gnxbmvdmdbjncmj0d3l842-python-pytz-2017.3
    /gnu/store/jv1p8504kgwp22j41ybd0j9nrz33pmc2-python-anaconda-client-1.6.3.tar.gz
    /gnu/store/l6c40iwss9g23jkla75k5f1cadqbs4q5-python-dateutil-2.6.1
    /gnu/store/y4h31l8xj4bd0705n0q7a8csz6m1s6s5-python-pycosat-0.6.1
    /gnu/store/8rafww49qk2nxgr4la9i2v1yildhrvnm-python-cookies-2.2.1
    /gnu/store/s5d94pbsv779nzi30n050qdq9w12pi52-python-responses-0.5.1
    /gnu/store/kv8nvhmb6h3mkwyj7iw6zrnbqyb0hpld-python-conda-4.3.16.tar.gz
    /gnu/store/cns9xhimr1i0fi8llx53s8kl33gsk3c4-python-ruamel.yaml-0.15.35

The CONDA package in Guix is a bit older - it turns out CONDA has a ridiculous release rate.
Let's try the older CONDA first

    guix package -i conda

And

    conda config --add channels defaults
    Warning: 'defaults' already in 'channels' list, moving to the top
    conda config --add channels conda-forge
    Warning: 'conda-forge' already in 'channels' list, moving to the top
    conda config --add channels bioconda
    Warning: 'bioconda' already in 'channels' list, moving to the top

so that was all OK. Next

    conda install -c serine rnasik

    Package plan for installation in environment /gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16:

    The following NEW packages will be INSTALLED:

    asn1crypto: 0.24.0-py_1 conda-forge
    backports: 1.0-py36_1 conda-forge
    backports.functools_lru_cache: 1.5-py_1 conda-forge
    bedtools: 2.25.0-3 bioconda
    bigdatascript: v2.0rc10-0 serine
    bwa: 0.7.17-pl5.22.0_2 bioconda
    bzip2: 1.0.6-1 conda-forge
    ca-certificates: 2018.4.16-0 conda-forge
    certifi: 2018.4.16-py36_0 conda-forge
    cffi: 1.11.5-py36_0 conda-forge
    chardet: 3.0.4-py36_2 conda-forge
    click: 6.7-py_1 conda-forge
    colormath: 3.0.0-py_2 conda-forge
    conda: 4.5.8-py36_1 conda-forge
    conda-env: 2.6.0-0 conda-forge
    cryptography: 2.2.1-py36_0 conda-forge
    curl: 7.60.0-0 conda-forge
    cycler: 0.10.0-py_1 conda-forge
    dbus: 1.11.0-0 conda-forge
    decorator: 4.3.0-py_0 conda-forge
    expat: 2.2.5-0 conda-forge
    fastqc: 0.11.5-pl5.22.0_3 bioconda
    fontconfig: 2.12.1-4 conda-forge
    freetype: 2.7-1 conda-forge
    future: 0.16.0-py_1 conda-forge
    gettext: 0.19.8.1-0 conda-forge
    glib: 2.53.5-1 conda-forge
    gst-plugins-base: 1.8.0-0 conda-forge
    gstreamer: 1.8.0-1 conda-forge
    hisat2: 2.1.0-py36pl5.22.0_0 bioconda
    icu: 58.2-0 conda-forge
    idna: 2.7-py36_2 conda-forge
    je-suite: 2.0.RC-0 bioconda
    jinja2: 2.10-py_1 conda-forge
    jpeg: 9b-2 conda-forge
    krb5: 1.14.6-0 conda-forge
    libffi: 3.2.1-3 conda-forge
    libgcc: 5.2.0-0
    libiconv: 1.14-4 conda-forge
    libpng: 1.6.34-0 conda-forge
    libssh2: 1.8.0-2 conda-forge
    libxcb: 1.13-0 conda-forge
    libxml2: 2.9.5-1 conda-forge
    lzstring: 1.0.3-py36_0 conda-forge
    markdown: 2.6.11-py_0 conda-forge
    markupsafe: 1.0-py36_0 conda-forge
    matplotlib: 2.1.0-py36_0 conda-forge
    mkl: 2017.0.3-0
    multiqc: 1.5-py36_0 bioconda
    ncurses: 5.9-10 conda-forge
    networkx: 2.0-py36_1 conda-forge
    numpy: 1.13.1-py36_0
    openjdk: 8.0.121-1
    openssl: 1.0.2o-0 conda-forge
    pcre: 8.41-1 conda-forge
    perl: 5.22.0.1-0 conda-forge
    picard: 2.18.2-py36_0 bioconda
    pip: 9.0.3-py36_0 conda-forge
    pycosat: 0.6.3-py36_0 conda-forge
    pycparser: 2.18-py_1 conda-forge
    pyopenssl: 18.0.0-py36_0 conda-forge
    pyparsing: 2.2.0-py_1 conda-forge
    pyqt: 5.6.0-py36_5 conda-forge
    pysocks: 1.6.8-py36_1 conda-forge
    python: 3.6.3-1 conda-forge
    python-dateutil: 2.7.3-py_0 conda-forge
    pytz: 2018.5-py_0 conda-forge
    pyyaml: 3.12-py36_1 conda-forge
    qt: 5.6.2-3 conda-forge
    readline: 6.2-0 conda-forge
    requests: 2.19.1-py36_1 conda-forge
    rnasik: 1.5.2-0 serine
    ruamel_yaml: 0.15.35-py36_0 conda-forge
    samtools: 1.5-1 bioconda
    setuptools: 40.0.0-py36_0 conda-forge
    simplejson: 3.8.1-py36_0 bioconda
    sip: 4.18-py36_1 conda-forge
    six: 1.11.0-py36_1 conda-forge
    skewer: 0.2.2-1 bioconda
    spectra: 0.0.11-py_0 conda-forge
    sqlite: 3.13.0-1 conda-forge
    star: 2.5.2b-0 bioconda
    subread: 1.5.3-0 bioconda
    tk: 8.5.19-2 conda-forge
    tornado: 5.1-py36_0 conda-forge
    urllib3: 1.23-py36_0 conda-forge
    wheel: 0.31.1-py36_0 conda-forge
    xorg-libxau: 1.0.8-3 conda-forge
    xorg-libxdmcp: 1.1.2-3 conda-forge
    xz: 5.2.3-0 conda-forge
    yaml: 0.1.7-0 conda-forge
    zlib: 1.2.11-0 conda-forge

That is a rather long list of packages, including OpenJDK. Conda says:

    CondaIOError: IO error: Missing write permissions in: /gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16

    You don't appear to have the necessary permissions to install packages
    into the install area '/gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16'.
    However you can clone this environment into your home directory and
    then make changes to it.
This may be done using the command:

    $ conda create -n my_root --clone=/gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16

OK, I suppose that is an idea, though it kind of defeats the idea of a reproducible base repo. But this worked:

    conda create -n joss-review-583
    conda install -n joss-review-583 -c serine rnasik

The good news is that conda installs into one directory. But 2.7 GB downloaded…

    conda info --envs
    # conda environments:
    #
    conda                    /home/wrk/.conda/envs/conda
    joss-review-583          /home/wrk/.conda/envs/joss-review-583
    root                  *  /gnu/store/h260m1r0bgnyypl7r469lin9gpyrh12m-conda-4.3.16

Activate the environment

    source activate joss-review-583

    RNAsik --help
    00:00:00.000 Bds 2.0rc10 (build 2018-07-05 09:58), by Pablo Cingolani

    Usage: RNAsik -fqDir </path/to/your/fastqs> [options]

    main options
      -fqDir <string>        : path to fastqs directory, can be nested
      -align <string>        : pick your aligner [star|hisat|bwa]
      -refFiles <string>     : directory with reference files
      -paired <bool>         : paired end data [false], will also set pairIds = "_R1,_R2"
      -all <bool>            : short hand for counts, mdups, exonicRate, qc, cov and multiqc
    more options
      -gtfFile <string>      : path to refFile.gtf
      -fastaRef <string>     : path to refFile.fa
      -genomeIdx <string>    : genome index
      -counts <bool>         : do read counts [featureCounts]
      -mdups <bool>          : process bam files, sort and mark dups [picard]
      -qc <bool>             : do bunch of QCs, fastqc, picard QCs and samtools
      -exonicRate <bool>     : do Int(ra|er)genic rates [qualiMap]
      -multiqc <bool>        : do MultiQC report [multiqc]
      -trim <bool>           : do FASTQ trimming [skewer]
      -cov <bool>            : make coverage plots, bigWig files
      -umi <bool>            : deduplicates using UMIs
    extra configs
      -samplesSheet <string> : tab delimited file, each line: old_prefix \t new_prefix
      -outDir <string>       : output directory [sikRun]
      -extn <string>         : specify FASTQ files extension [.fastq.gz]
      -pairIds <string>      : specify read pairs, [none]
      -extraOpts <string>    : add extra options through a file, each line: toolName = options
      -configFile <string>
      : specify custom configuration file

I may be critical about CONDA, but this works ;)

Then I tried on a different machine and there was a problem: activate bumped me out of the shell. Hmmm. The conda settings of activate are:

    CONDA_DEFAULT_ENV=joss-review-583
    CONDA_PATH_BACKUP=/home/wrk/opt/gemma-dev-env/bin:/usr/bin:/bin
    CONDA_PREFIX=/home/wrk/.conda/envs/joss-review-583
    CONDA_PS1_BACKUP='\[\033[0;35m\]\h:\w\[\033[0m\]$ '
    JAVA_HOME=/home/wrk/.conda/envs/joss-review-583
    JAVA_HOME_CONDA_BACKUP=
    PATH=/home/wrk/.conda/envs/joss-review-583/bin:/home/wrk/opt/gemma-dev-env/bin:/usr/bin:/bin
    _CONDA_D=/home/wrk/.conda/envs/joss-review-583/etc/conda/activate.d
    _CONDA_DIR=/home/wrk/opt/gemma-dev-env/bin

I guess I can replicate that

    penguin2:~$ export JAVA_HOME=$HOME/.conda/envs/joss-review-583
    penguin2:~$ export CONDA_PREFIX=$HOME/.conda/envs/joss-review-583
    penguin2:~$ export PATH=$HOME/.conda/envs/joss-review-583/bin:$PATH

    conda install -n joss-review-583 -c bioconda qualimap

    wget bioinformatics.erc.monash.edu/home/kirill/sikTestData/rawData/IndustrialAntifoamAgentsYeastRNAseqData.tar

35 Updates

It has been a while since I updated the BLOG (see below for older BLOGs). Time to start afresh, because we have interesting developments going on and the time ahead looks particularly exciting for GEMMA and GeneNetwork, with adventures in D, CUDA, Arvados, Jupyter labs, IPFS, blockchain - and the list goes on! I also promised to write a BLOG on our development/deployment setup. Might as well start here. My environments are very complex but controlled, thanks to GNU Guix.
https://thebird.nl/blog/work/rotate.html
Issue Type: Bug
Created: 2009-11-13T02:39:37.000+0000
Last Updated: 2011-08-21T13:53:39.000+0000
Status: Resolved
Fix version(s):
Reporter: Edwin Vlieg (edwinv)
Assignee: Pádraic Brady (padraic)
Tags: Zend_Application
Related issues: ZF-8225
Attachments:

It looks like getPluginResource is trying to load a class that is already defined. Therefore it throws a fatal error:

    Fatal error: Cannot redeclare class Zend_Layout in /Users/edwin/Sites/sqills/lottery_test/library/Zend/layout.php on line 31

The fatal error is thrown at line 354 of Zend_Application_Bootstrap_BootstrapAbstract.php in the call to class_exists. The error is only thrown once the resource you are trying to load (with $this->bootstrap('memcache')) doesn't have any entries in the application.ini file. I've defined a resource in my own namespace: Lottery_Application_Resource_Memcache. In bootstrap.php I'm using the information in the resource, so I call $this->bootstrap('memcache') to make sure the resource is loaded. This goes well if the application.ini contains entries for the memcache resource. Once I comment the memcache entries out (because I want to disable the memcache feature), the fatal error occurs.

Posted by Pádraic Brady (padraic) on 2011-08-21T13:48:42.000+0000

Can the reporter check if this is still the case on current trunk? A patch for ZF-8225 has been committed, but there is no reproduction code for this specific issue included.

Posted by Pádraic Brady (padraic) on 2011-08-21T13:53:39.000+0000

Patched in r24393. The reporter should verify, as there is no reproduction code with which to test their specific problem. The associated patch should shut out any class which is not a Zend_Application_Resource_Resource subclass, which should prevent any unrelated classes getting through.
https://framework.zend.com/issues/browse/ZF-8299
07 January 2010 03:38 [Source: ICIS news]

SINGAPORE (ICIS news)--Taiwan's Chinese Petroleum Corp has bought a 30,000-tonne cargo of heavy naphtha at a premium of $20-$25/tonne (€14-17/tonne) to Japan spot quotes for February delivery, traders said on Thursday. Prices were much higher, compared with what CPC paid for its term naphtha supply. The refiner secured 675,000 tonnes of full-range naphtha for delivery between April and December at a discount of $4.00-$5.00/tonne to Japan spot quotes. Asian spot naphtha prices are on a rally amid razor-thin supply and robust petrochemical demand. By midday Thursday trade, second-half February naphtha prices scaled up to $768.50-$771.50/tonne because of strong crude, versus $758-$759/tonne on Wednesday.
http://www.icis.com/Articles/2010/01/07/9323082/taiwan-cpc-buys-february-naphtha-at-20-25tonne-premium.html
What’s New in Python 2.0

The sys.setdefaultencoding(encoding) function can be called from a customised version of site.py to change the default encoding. A new module, unicodedata, provides an interface to Unicode character properties. For example, unicodedata.category(u'A') returns the 2-character string ‘Lu’, the ‘L’ denoting it’s a letter, and ‘u’ meaning that it’s uppercase. unicodedata.bidirectional(u'\u0660') returns ‘AN’, meaning that U+0660 is an Arabic number.

The codecs module contains functions to look up existing encodings and register new ones. Unless you want to implement a new encoding, you’ll most often use the read(), readline(), and readlines() methods. These methods will all translate from the given encoding and return Unicode strings. stream_writer, similarly, is a class that supports encoding output to a stream; stream_writer(file_obj) returns an object that supports the write() and writelines() methods.

Garbage collection of cycles can be disabled at build time by supplying the --without-cycle-gc switch when running the configure script. The apply() built-in function, apply(f, args, kw), calls the function f() with the argument tuple args and the keyword arguments in the dictionary kw.

The print statement can now have its output directed to a file-like object by following the print with >> file, similar to the redirection operator in Unix shells. Previously you’d either have to use the write() method of the file-like object, which lacks the convenience and simplicity of print. A new format style, ‘%r’, will insert the repr() of its argument. This was also added from symmetry considerations, this time for symmetry with the existing ‘%s’, which inserts the str() of its argument.

Dictionaries have a new method, setdefault(key, default), which behaves similarly to the existing get() method, but inserts default into the dictionary when the key is missing. The list methods append() and insert() now take exactly one argument. In earlier versions of Python, if L is a list, L.append( 1,2 ) appends the tuple (1,2) to the list. In Python 2.0 this causes a TypeError exception to be raised, with the message: ‘append requires exactly 1 argument; 2 given’. The \x escape now takes exactly two hex digits; previously it would consume all the hex digits after the ‘x’ and take the lowest 8 bits of the result, so \x123456 was equivalent to \x56. The AttributeError and NameError exceptions now have a friendlier error message, and the tell() method of file objects returns a long integer instead of a regular integer.
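Several of the features covered above, the unicodedata lookups and the new dictionary setdefault() method, can be exercised directly. The sketch below uses modern Python 3 syntax (no u'' literals), since the 2.0-era spellings no longer run:

```python
import unicodedata

# Unicode character properties, as described above.
print(unicodedata.category("A"))            # 'Lu': an uppercase letter
print(unicodedata.bidirectional("\u0660"))  # 'AN': an Arabic number

# dict.setdefault(key, default) inserts the default when the key is
# missing, then returns the stored value.
counts = {}
for word in ["spam", "eggs", "spam"]:
    counts.setdefault(word, 0)
    counts[word] += 1
print(counts)  # {'spam': 2, 'eggs': 1}
```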
Some code would subtract two file offsets and attempt to use the result to multiply a sequence or slice a string, but this raised a TypeError. In 2.0, long integers can be used to multiply or slice a sequence, and it’ll behave as you’d intuitively expect it to; 3L * 'abc' produces ‘abcabcabc’, and (0,1,2,3)[2L:4L] produces (2,3). Long integers can also be used in various contexts where previously only integers were accepted. str() of a long integer no longer includes the trailing ‘L’ character, though repr() still includes it. The ‘L’ annoyed many people who wanted to print long integers that looked just like regular integers, since they had to go out of their way to chop off the character. This is no longer a problem in 2.0, but code which does str(longval)[:-1] and assumes the ‘L’ is there, will now lose the final digit.

The new object-allocation interface is declared in Include/objimpl.h. For the lengthy discussions during which the interface was hammered out, see the Web archives of the ‘patches’ and ‘python-dev’ lists: PyModule_AddObject(), PyModule_AddIntConstant(), and PyModule_AddStringConstant() are convenience functions for adding values to a module. PyOS_getsig() gets a signal handler and PyOS_setsig() sets a new one.

Software can be packaged with a setup.py script. For the simple case, when the software contains only .py files, a minimal setup.py can be just a few lines long:

from distutils.core import setup
setup (name = "foo", version = "1.0",
       py_modules = ["module1", "module2"])

Running ‘python setup.py sdist’ builds a source distribution; commands for producing binary packages such as RPMs, Windows installers, and Mac .pkg files are in various stages of development. All this is documented in a new manual, Distributing Python Modules, that joins the basic set of Python documentation.

XML Modules

Python 1.5.2 included a simple XML parser in the form of the xmllib module, contributed by Sjoerd Mullender. Since 1.5.2’s release, two different interfaces for processing XML have become common: SAX2 (version 2 of the Simple API for XML) provides an event-driven interface with some similarities to xmllib, and the DOM (Document Object Model) provides a tree-based interface, transforming an XML document into a tree of nodes that can be traversed and modified.
Python 2.0 includes a SAX2 interface and a stripped-down DOM interface as part of the xml package. With SAX, the startElement() and endElement() methods of a handler are called for every starting and end tag encountered by the parser; with the DOM, the Document instance is the root of the tree, and has a single child which is the top-level Element instance. A simple SAX handler can, for example, echo tags such as <tag1>...</tag1> to a file.

The DOM implementation included with Python lives in the xml.dom.minidom module. It’s a lightweight implementation of the Level 1 DOM with support for XML namespaces. The parse() and parseString() convenience functions are provided for generating a DOM tree:

from xml.dom import minidom
doc = minidom.parse('hamlet.xml')

doc is a Document instance. Document, like all the other DOM classes such as Element and Text, is a subclass of the Node base class. All the nodes in a DOM tree therefore support certain common methods, such as toxml() which returns a string containing the XML representation of the node and its children. Each class also has special methods of its own; for example, Element and Document instances have a method to find all child elements with a given tag name. Continuing from the previous 2-line example:

perslist = doc.getElementsByTagName( 'PERSONA' )
print perslist[0].toxml()
print perslist[1].toxml()

For the Hamlet XML file, the above few lines output:

<PERSONA>CLAUDIUS, king of Denmark. </PERSONA>
<PERSONA>HAMLET, son to the late, and nephew to the present king.</PERSONA>

The root element of the document is available as doc.documentElement, and its children can be easily modified by deleting, adding, or removing nodes:

root = doc.documentElement
# Remove the first child
root.removeChild( root.childNodes[0] )
# Move the new first child to the end
root.appendChild( root.childNodes[0] )
# Insert the new first child (originally,
# the third child) before the 20th child.
root.insertBefore( root.childNodes[0], root.childNodes[20] )

Again, I will refer you to the Python documentation for a complete listing of the different Node classes and their various methods.

Relationship to PyXML

The XML Special Interest Group has been working on XML-related Python code for a while. Its code distribution, called PyXML, is available from the SIG’s Web pages. The PyXML distribution also used the package name xml. If you’ve written programs that used PyXML, you’re probably wondering about its compatibility with the 2.0 xml package. The answer is that Python 2.0’s xml package isn’t compatible with PyXML, but can be made compatible by installing a recent version of PyXML. Many applications can get by with the XML support that is included with Python 2.0, but more complicated applications will require that the full PyXML package be installed. When installed, PyXML versions 0.6.0 or greater will replace the xml package shipped with Python, and will be a strict superset of the standard package, adding a bunch of additional features. Some of the additional features in PyXML include:

- 4DOM, a full DOM implementation from FourThought, Inc.
- The xmlproc validating parser, written by Lars Marius Garshol.
- The sgmlop parser accelerator module, written by Fredrik Lundh.

Module changes

New modules

A number of new modules were added. We’ll simply list them with brief descriptions; consult the 2.0 documentation for the details of a particular module.

- atexit: For registering functions to be called before the Python interpreter exits. Code that currently sets sys.exitfunc directly should be changed to use the atexit module instead, importing atexit and calling atexit.register() with the function to be called on exit. (Contributed by Skip Montanaro.)
- codecs, encodings, unicodedata: Added as part of the new Unicode support.
- filecmp: Supersedes the old cmp, cmpcache and dircmp modules, which have now become deprecated.
(Contributed by Gordon MacMillan and Moshe Zadka.)
- gettext: This module provides internationalization (I18N) and localization (L10N) support for Python programs by providing an interface to the GNU gettext message catalog library. (Integrated by Barry Warsaw, from separate contributions by Martin von Löwis, Peter Funk, and James Henstridge.)
- linuxaudiodev: Support for the /dev/audio device on Linux, a twin to the existing sunaudiodev module. (Contributed by Peter Bosch, with fixes by Jeremy Hylton.)
- mmap: An interface to memory-mapped files on both Windows and Unix. A file’s contents can be mapped directly into memory, at which point it behaves like a mutable string, so its contents can be read and modified. They can even be passed to functions that expect ordinary strings, such as the re module. (Contributed by Sam Rushing, with some extensions by A.M. Kuchling.)
- pyexpat: An interface to the Expat XML parser. (Contributed by Paul Prescod.)
- robotparser: Parse a robots.txt file, which is used for writing Web spiders that politely avoid certain areas of a Web site. The parser accepts the contents of a robots.txt file, builds a set of rules from it, and can then answer questions about the fetchability of a given URL. (Contributed by Skip Montanaro.)
- tabnanny: A module/script to check Python source code for ambiguous indentation. (Contributed by Tim Peters.)
- UserString: A base class useful for deriving objects that behave like strings.
- webbrowser: A module that provides a platform independent way to launch a web browser on a specific URL. For each platform, various browsers are tried in a specific order. The user can alter which browser is launched by setting the BROWSER environment variable. (Originally inspired by Eric S. Raymond’s patch to urllib which added similar functionality, but the final module comes from code originally implemented by Fred Drake as Tools/idle/BrowserControl.py, and adapted for the standard library by Fred.)
- _winreg: An interface to the Windows registry. _winreg is an adaptation of functions that have been part of PythonWin since 1995, but has now been added to the core distribution, and enhanced to support Unicode. _winreg was written by Bill Tutt and Mark Hammond.
- zipfile: A module for reading and writing ZIP-format archives. These are archives produced by PKZIP on DOS/Windows or zip on Unix, not to be confused with gzip-format files (which are supported by the gzip module). (Contributed by James C. Ahlstrom.)
- imputil: A module that provides a simpler way for writing customised import hooks, in comparison to the existing ihooks module. (Implemented by Greg Stein, with much discussion on python-dev along the way.)

IDLE Improvements

IDLE is the official Python cross-platform IDE, written using Tkinter. Python 2.0 includes IDLE 0.6, which adds a number of new features and improvements. A partial list:

- UI improvements and optimizations, especially in the area of syntax highlighting and auto-indentation.
- The class browser now shows more information, such as the top level functions in a module.
- Tab width is now a user settable option. When opening an existing Python file, IDLE automatically detects the indentation conventions, and adapts.
- There is now support for calling browsers on various platforms, used to open the Python documentation in a browser.
- IDLE now has a command line, which is largely similar to the vanilla Python interpreter.
- Call tips were added in many places.
- IDLE can now be installed as a package.
- In the editor window, there is now a line/column bar at the bottom.
- Three new keystroke commands: Check module (Alt-F5), Import module (F5) and Run script (Ctrl-F5).

Deleted and Deprecated Modules

Acknowledgements

The author thanks, among others, Tobias Polzin, Guido van Rossum, Neil Schemenauer, and Russ Schmidt.
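The minidom calls shown above still work unchanged in modern Python. A self-contained sketch, parsing from an inline string rather than the hamlet.xml file (the two PERSONA lines are taken from the example output above):

```python
from xml.dom import minidom

# A tiny stand-in for the Hamlet document used in the examples above.
xml_source = ("<PLAY>"
              "<PERSONA>CLAUDIUS, king of Denmark.</PERSONA>"
              "<PERSONA>HAMLET, son to the late, and nephew to the present king.</PERSONA>"
              "</PLAY>")

doc = minidom.parseString(xml_source)           # a Document instance
perslist = doc.getElementsByTagName("PERSONA")  # all PERSONA elements
print(perslist[0].toxml())

# The tree can be modified through the Node API; removeChild() returns
# the removed node, so this moves the first child to the end:
root = doc.documentElement
root.appendChild(root.removeChild(root.childNodes[0]))
print(root.childNodes[0].toxml())
```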
https://documentation.help/Python-3.4.4/2.0.html
There’s a new embedded hacking tool on the scene that gives you an interactive Python interface for a speedy chip on a board with oodles of GPIO, the ability to masquerade as different USB devices, and a legacy of tricks up its sleeve. This is the GreatFET, the successor to the much loved GoodFET. I first heard this board was close to launch almost a year ago and asked for an early look. When shipping began at the end of April, they sent me one. Let’s dig in for a hands-on review of the GreatFET from Great Scott Gadgets.

Lots of Fast I/O with Direct Access From Your Computer

GreatFET is a jack-knife for embedded projects — it’s meant to stand in for multiple tools and components during development so that you can work on the problem in front of you right now and figure everything else out later on. For instance, consider testing out a new chip. You could hook it up to your favorite microcontroller, write and flash some test code, and see if the chip you’re testing works the first time. But chances are you have the I2C address wrong, or you hooked up TX/RX or MISO/MOSI backwards, or myriad other common gotchas. At its simplest, GreatFET is designed to give you an interactive Python shell that controls oodles of IO pins on the board. You run commands and get feedback as fast as you can type them. But this is the tip of the iceberg with the GreatFET. On the more advanced side of things, this board can be used to emulate USB devices, as a data acquisition device for building custom instrumentation, for MSP430 flashing and debugging, or as automated testing gear in manufacturing environments. Officially named the GreatFET One, this board is a giant leap forward in terms of available pins, horsepower, and software interactivity, at an MSRP of $89.99.

GreatFET Hardware Overview

The chip at the center of the GreatFET is an NXP LPC4330 in a massive LQFP-144 package.
On either side of the board you can see the bulk of those 144 pins have been broken out, with 40-pin female headers. There’s also a “bonus row” SIL-20 pin header on one side. The vision here is for the ability to add hardware to the board using shields which Great Scott Gadgets are calling “Neighbors” as a hat-tip to [Travis Goodspeed] who designed the original GoodFET. The LPC4330 is the same chip you’ll find on the HackRF One; a 32-bit ARM Cortex-M4 with a clock speed of up to 204 MHz. HackRF has made a name for itself as a go-to in the software-defined radio realm; you could think of this as a similar device without the analog radio circuitry attached. The board has a Hi-Speed USB port for interfacing with your computer, and a Full Speed USB port so it can be used as a FaceDancer to emulate USB devices, but under Python control. I must say kudos to Great Scott Gadgets on the labeling of this board. One annoyance of prototyping with dev boards is having the thing sitting on the bench with a bunch of jumper wires already hooked up, then having to move it to read the pin labels on the bottom of the board. GreatFET pins are labeled on both sides, with top side labels on the outside edge. Frequently used signals like SDA, SCL, MISO, or MOSI are individually labeled. Both USB ports are labeled with hints to tell you which one you’re looking for, and all test pads on the bottom have descriptive labels. A sticker the shape and size of the board that helps locate special-use pins was being handed out at KiCon and other live events. You can easily jumper the signals you need, but for more robust applications you might want to purchase or build a Neighbor. My review kit included a Daffodil Neighbor which brings a solderless breadboard to the party. You could easily build your own Neighbor using protoboard and two double-row pin headers as the spacing is a predictable 0.1″. The bonus row of pins is optional when building Neighbors.
Each GreatFET One wisely includes a “wiggler”, a simple tool made of PCB used to pry the two boards apart without bending pins, and you need it to overcome the 100 pins that make connections between the two boards.

Software Delivers Big Versatility

Yes, the hardware on this board is a beast, but the software holds the most promise for versatility. I don’t know if this is a software geeks’ hardware tool or the other way around, but it certainly blurs the line in a good way. The current state of the software is not quite mature, and both the documentation and the APIs are being worked on. I tripped on a few gotchas out of the gate, but once they were worked out, the parts I tested were simple and the parts of the API used were straightforward. First the gotchas. I tried installing these libraries for Python 2.7 and that was a no-go; you must use Python 3, which is fine, since Python 2.X reaches end of life six months from now. Next, I had problems connecting to the board reliably, and foolishly assumed I had udev problems when in fact I had a dodgy USB extension cable. Finally, I didn’t do a firmware upgrade, which is the first thing you should do on hardware this new. Luckily, all of these things are actually dead simple if you follow the getting started guide.

$ sudo pip3 install --upgrade greatfet
$ gf info
Found a GreatFET One!
  Board ID: 0
  Firmware version: git-2019.5.1.dev0
  Part ID: a0000a30654364
  Serial number: 000057cc67e630187657
$ gf fw --auto
Trying to find a GreatFET device...
GreatFET One found. (Serial number: 000057cc67e630187657)
Writing data to SPI flash...
Written 84812 bytes of 84812.
Write complete!
Resetting GreatFET...
Reset complete!

From here you can control the board through a command line interface, by typing commands in a live Python shell, or by writing and running your own Python scripts. There’s a bit of a learning curve here.
Online documentation and tutorials are both a bit scarce right now, with the bulk of available information in the greatfet Python package itself. For instance, typing help(gf.apis.dac) is what helped me figure out which pin to probe for the DAC output. That said, many existing features are being ported over to GreatFET, so if you’re used to using GoodFET with MSP430 or using a Facedancer for USB dev I think you’ll feel right at home. Let’s take some of this for a test drive.

Test Run: I2C, DAC, and Facedancer

One of the places I often use interactive tools (primarily a Bus Pirate) is in trying out new screens. This is because they usually have a long string of initialization commands necessary to turn them on, and I want to make sure I have that right before I start writing and compiling code for a project.

I2C OLED Display

This turned out to be extremely easy using GreatFET. In just a few minutes I had a screen hooked up and showing a cross-hatched pattern. The board is a Python object in the greatfet library we installed above, and documentation from that library comes up when using tab completion. I used the scan feature to find the address. I2C commands can be sent by adding 0x80 or 0x40 to the beginning of the array of bytes to signify a command write or a data write. Here’s the code I used. I really love the ability to whip up a quick script like this as it’s dead simple to commit to GitHub for me (or others) to use in the future.
from greatfet import GreatFET

ssd1306init = [0xA8, 0x1f, 0xd3, 0x00, 0x40, 0xa0, 0xc0, 0xda,
               0x02, 0x81, 0x7f, 0xa4, 0xa6, 0x20, 0x00, 0x21,
               0x00, 0xff, 0x22, 0x00, 0x03, 0xd5, 0x80, 0x8d,
               0x14, 0xaf]

gf = GreatFET()
addrscan = gf.i2c.scan()
addr = 0
for i in range(len(addrscan)):
    if addrscan[i][0] != False:
        # i is both the index of the array
        # and the address tested in the scan
        addr = i

if addr != 0:
    # Initialize the OLED
    gf.i2c.write(addr, [0x80] + ssd1306init)
    # Make screen buffer and fill with hash pattern
    screenbuff = list()
    screenbuff += 128*[0xCC, 0xCC, 0x33, 0x33]
    # Write screen buffer to OLED
    gf.i2c.write(addr, [0x40] + screenbuff)

0-3.3V DAC with 10-bit Resolution

Next up I tested the DAC. Not much to report here. The Python help(gf.apis.dac) function tells us how to set the voltage and what pin is used for the output: “Sets the DAC value on ADC0_0 (0-1023)”. Looking on the huge list of pin functions on the wiki I see that J2 pin 5 maps to ADC0_0. You can see my DMM verifies that setting the value to 512 (50% of the resolution) produces 50% of the 3.3 V rail:

>>> gf.apis.dac.set(512)

Facedancer USB Emulation

The GreatFET product page links to a Facedancer repo which can be used to emulate USB devices so I decided to give this a try as well. I’ve been aware of this tool since Travis Goodspeed started showing off prototypes back in 2012 but this is my first time trying it out and it’s really neat. You use two USB cables, and both of them connect to the board with microUSB and to a computer (it doesn’t need to be the same one) with USB Type-A. The Facedancer software is written in Python — I also had to install pyserial — and includes a few different samples. The most straightforward I found when looking around is the ability to mount an ISO file using the board. This makes GreatFET look like a thumb drive with files on it. I had an old version of Mint on hand that I used in the test.
$ git clone git@github.com:usb-tools/Facedancer.git
$ sudo pip3 install pyserial
$ cd Facedancer
$ ./facedancer-umass.py /Marge/linuxmint-18.3-cinnamon-64bit.iso

Complete Fail: You Can Recover from This!

There’s a function of the greatfet class called onboard_flash. Do not play around with this function. I didn’t closely read the doc comments on this and just went along my merry way writing “Hackaday” to the first address space of this flash. Of course, as soon as I rebooted the board it no longer enumerated. This function allows you to overwrite the firmware programmed to the board. Looking back on this mistake, there is clearly a warning about this:

The short version of the story: there’s a DFU mode built into the greatfet Python package installed on your computer. The long version is that I compiled from source and was not able to get my binary to resurrect the board. After opening an issue on GitHub I was directed to the proper commands. First, the code you shouldn’t run:

In [46]: #Do Not Run This Code!
In [47]: arr = bytes("Hackaday",'utf-8')
In [48]: gf.onboard_flash.write(arr,address=0x00,erase_first=True)
In [49]: gf.onboard_flash.read(0x00,8)
Out[49]: array('B', [72, 97, 99, 107, 97, 100, 97, 121,])

If you happen to do this, or any other foolish thing to brick your board, the NXP chip has a hardware DFU mode that’s dead simple and will have you up and running again in about twenty seconds. Just hold the “DFU” button on the board, press and release the “RESET” button, then run the following command:

$ gf fw --dfu --autoflash
dfu-util: Invalid DFU suffix signature
dfu-util: A valid DFU suffix will be required in a future dfu-util release!!!
Opening DFU capable USB device...
ID 1fc9:000c
Run-time device DFU version 0100
Claiming USB DFU Interface...
Setting Alternate Setting #0 ...
Determining device status: state = dfuIDLE, status = 0
dfuIDLE, continuing
DFU mode device DFU version 0100
Device returned transfer size 2048
Copying data from PC to DFU device
Download [=========================] 100% 52736 bytes
Download done.
dfu-util: unable to read DFU status after completion
Trying to find a GreatFET device...
libgreat compatible board in flash-stub mode found. (Serial number: 000057cc67e630187657)
Writing data to SPI flash...
Written 84812 bytes of 84812.
Write complete!
Resetting GreatFET...
Reset complete!

This Tool Does It All, If You Choose to Use It

What’s the final verdict on the GreatFET? If you decide to go all in and make this the board you have in your kit, it will live up to its name and be a great tool. That’s true of almost any bench tool, right? If you’re really good at using your oscilloscope, but then need to perform some advanced tricks using a friend’s scope by a different manufacturer, it’s going to take you some time to figure things out. The GreatFET has a learning curve, but if you put in the time and make this your go-to tool, the sky is the limit on what you can do with it. However, if you rarely pick it up, you’ll need to glance back at the tutorials each time. For me, this tool makes a ton of sense because I’m a frequent user of Python on the desktop. I can commit my GreatFET scripts to a repo and look back to them as my own tutorials. This is much preferred to my previous go-to tool, the Bus Pirate, which has its own live terminal language that I have to keep relearning without the same easy ability to save and reference previous hacking sessions. And the extensibility of Python is so vast that any data processing, logging, or IO manipulation you need to do is both possible and relatively easy. The power of the chip on this board is insane, but truthfully why wouldn’t you go for a beefy chip in terms of IO, speed, memory, etc. It’s quite possible you will never outgrow the functionality of this chip.
The labelling of the board, and interface methods are well thought out and well executed. I don’t see a way the hardware could be any better on the GreatFET. My only hesitation is the state of the documentation and API. Right now I don’t see a way to read from the ADC, and I’m not sure if there is just one ADC pin or if you can multiplex to a number of pins. This is just one example of the alpha state of the tutorials and quick start information. But I fully expect this to improve. The project is completely open source, the team is good about responding to GitHub issues, and I think a good set of user-contributed examples will grow as more people begin using the board. 39 thoughts on “Hands-On: GreatFET Is An Embedded Tool That Does It All” put it in a box and you’ll be forced into an awkward design to avoid the microusb port being too recessed. of course, we all have 3d printers shrug STOP! Stop right there! Hats, capes, shields,and so forth, are names given to peripheral interfaces attached to development boards. NOTE: they follow a convention of being named after “something” worn by a person. Neighbor VIOLATES said convention and must changed! May I suggest “cufflink”? B^) We still need a …MASK oh, i’m doing it with a neighbor and i’m having lots of fun… Not “broads”, boards! B^) What do neighbors do? Get together, help each other out? Silently judge you for not having kept your lawn cut from their side of the fence? yall will have to go back in time, this has been in development for a long long time So… it’s a bog-standard microcontroller. With the hardware accessible through libraries. I’m … amazed. Really. What a breakthrough. I think the libraries are kind of the whole point here… :-) We don’t publish sponsored content. This is a look at a new tool, not an advertisement. Sorry you don’t find a need for it on your bench, at least now you know it’s not for you ;-) New ? Maybe if you have been under a rock for the past year. 
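One small illustration of the benefit of driving the tool from Python: the byte values in the Out[49] array from the flash mishap above are simply the ASCII codes of the string that was written, which is easy to verify in the same shell with no hardware attached:

```python
# "Hackaday" encoded as UTF-8 bytes yields exactly the values that were
# read back from the GreatFET's SPI flash in the session above.
written = bytes("Hackaday", "utf-8")
print(list(written))  # [72, 97, 99, 107, 97, 100, 97, 121]
```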
If it’s not an Ad then it’s just low quality recycled content, typical for HaD these days. I would like to see some experiments about the speed it can operate those gpio pins. Liike, say, interfacint to a NAND or NOR chip ( normal variety, not the spi ones ) . Or even spi, in things that need speed I cant say anything about this one but I have a few boards in the same power range and directly its almost faster than a toggle switch Via program its better course now you just defeated tge point of having realtime control The Python code snippet seems to ignore the expressiveness of the language. I would be a bit more pedantic and write it like this (please note that it is untested): — Code as Base64 to avoid formatting issues: ZnJvbSBncmVhdGZldCBpbXBvcnQgR3JlYXRGRVQKCmdyZWF0ZmV0ID0gR3JlYXRGRVQoKQoKc3Nk MTMwNl9pbml0ID0gWzB4YTgsIDB4MWYsIDB4ZDMsIDB4MDAsIDB4NDAsIDB4YTAsIDB4YzAsIDB4 ZGEsCiAgICAgICAgICAgICAgICAweDAyLCAweDgxLCAweDdmLCAweGE0LCAweGE2LCAweDIwLCAw eDAwLCAweDIxLAogICAgICAgICAgICAgICAgMHgwMCwgMHhmZiwgMHgyMiwgMHgwMCwgMHgwMywg MHhkNSwgMHg4MCwgMHg4ZCwKICAgICAgICAgICAgICAgIDB4MTQsIDB4YWZdCgpzY3JlZW5fYnVm ZmVyID0gWzB4Y2MsIDB4Y2MsIDB4MzMsIDB4MzNdICogMTI4CgphZGRyZXNzZXMgPSBbYWRkcmVz cwogICAgICAgICAgICAgZm9yIGFkZHJlc3MsIHJlc3BvbnNlCiAgICAgICAgICAgICBpbiBlbnVt ZXJhdGUoZ3JlYXRmZXQuaTJjLnNjYW4oKSkKICAgICAgICAgICAgIGlmIHJlc3BvbnNlWzBdXSAg IyBDYW4gd2UgY2hhbmdlIHRoZSBjb25kaXRpb24gdG8gYW55KHJlc3BvbnNlKT8KCmlmIGFkZHJl c3NlczoKICAgIGxhc3RfYWRkcmVzcyA9IGFkZHJlc3Nlc1stMV0KICAgIGdyZWF0ZmV0LmkyYy53 cml0ZShsYXN0X2FkZHJlc3MsIFsweDgwXSArIHNzZDEzMDZfaW5pdCkKICAgIGdyZWF0ZmV0Lmky Yy53cml0ZShsYXN0X2FkZHJlc3MsIFsweDQwXSArIHNjcmVlbl9idWZmZXIpCgo= I see that the code on the article was updated to include some of my suggestions, replacing an obnoxious `while` loop by the handy multiplication operator. However, I keep missing a bit of (sane) PEP8 -the official Python code style guide, for the uninitiated- and the usage of idiomatic features like list comprehensions to replace traditional loops. 
Discussing about code style is not the best way of making friends, nor the best way of making good code. Said that, I would like to know if the less expressive way of writing the original script was devised so all of us could grasp it, or if that simple demonstration was made quickly to illustrate the article and doesn’t need that level of nitpicking. Also, I would like to add that the conditional `if response[0]` in the list comprehension could be more explicit if required by expanding it to `if response[0] != False` to compare against all non-falsey types, or `if response[0] is not False` to filter all the items that are explicitly set to `False`. The idea of using `if any(response)` was to check that either write response (`response[0]`) or read response (`response[1]`) are `True`. To check for both, like if it was an `and` operator, you can use `if all(response)`.
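Decoded, the Base64 block above boils down to the refactor described in this thread. A runnable sketch with the hardware call gf.i2c.scan() replaced by mocked data (the mocked scan result, one write/read response pair per address, is an assumption for illustration):

```python
# Mocked stand-in for gf.i2c.scan(): one (write_response, read_response)
# pair per 7-bit address; only 0x3c (a typical SSD1306 address) responds.
scan_result = [(False, False)] * 0x3c + [(True, True)] + [(False, False)] * 3

# The index loop from the article, rewritten as a list comprehension:
addresses = [address
             for address, response in enumerate(scan_result)
             if response[0]]

print(addresses)  # [60], i.e. 0x3c
```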
I come back to it after not using it for awhile, and immediately struggle with remembering all the syntax and nuances. So something that you can run with Python, which comparatively is hard to forget, would be a big improvement. As for the price, certainly we’d all like to see it a bit lower. But there’s something of a premium you pay when you’re dealing an open hardware gadget produced in relatively low numbers. That said, just like with the HackRF and even Bus Pirate, it’s a good bet that we’ll see overseas “clones” before too long. So buy it now if you want to support Great Scott Gadgets, or else take your chances on the cheaper import version down the line. I’ve been using a SPIDriver recently, and found it really helpful exactly because I can save and annotate the Python scripts. I was mulling over a Bus Pirate to do some i2c stuff but you’re right, I’d miss that. The GreatFET looks interesting and more flexible but pricey in comparison. Might be a bit overkill for my personal needs, but it’s a tempting option over an I2CDriver. You can also look at the boards based on the FT232H with pyftdi. They can do a lot of similar things. The greatfet is definitely more powerful, but FT232H boards can be had for cheap from the usual sources. Considering it costs almost as much as an Artix 7 FPGA development board with the USB 3.0 option, I would have expected something like a higher end ICE40 (if an open source toolchain is important) connected to a pair of FX2 chips and some RAM. Then add in a Python library that abstracts away the complexity of FPGA programming if the user just wants a few fast common interfaces, while allowing developers to add support for more interfaces to support more devices. Sounds like you want a Glasgow … (I do too!) I have to agree, ftdi board and jlink mini are under $30 and you can do more with them. Not to mention the greatfet is over 3 years old… it even had a black hat talk in 2016. 
But I guess when the only research HaD writers do is browse the Adafruit blog, it might look "NEW NEW NEW!"

The IR-capable Gladiolus Neighbor is what caught my attention and is winning me over for sure. Like the HackRF's capability to be hacked for improved performance, with even better quality components reflowed in where possible… I've already had some thoughts about improvements to hack onto the Gladiolus Neighbor.

I'm in the market to get a Shikra for dumping firmware out of chips, but the damn board isn't available and the Bus Pirate is too slow for that. Would the GreatFET be a good tool for that?

There are a number of equivalent boards available from AliExpress etc. You miss the neat pin labelling, etc., but it's essentially the same board. e.g.

Not exactly equivalent to the GreatFET due to the lack of GPIO, but it can be used in the most common cases where a Bus Pirate would suffice.

Interesting, so it's a 1:1 Shikra clone then?

I'll admit to not knowing what a Shikra is, but this board looks like a straightforward implementation of the FT232 reference design. Or is it the reference design minus the EEPROM? The EEPROM is nice for storing config stuff in.

I use such a board from time to time. It does great fast SPI and is good as an OpenOCD JTAG adapter. It's faster/better than a Bus Pirate in every way, except that you're left doing a lot of the work yourself. (Which I enjoy.) But the point of the GreatFET is that a) it does even more, esp. USB man-in-the-middling, and b) it's a lot more configurable / scriptable / flexible, since most everything is essentially puppeted from the computer side in Python. Mike's bit about whether it's a hardware device for software folks or vice versa is spot on.

Out of curiosity, what does "DFU" stand for? I can only think of "Dammit! Fucked Up!" but I'm pretty sure that's not right.
Device Firmware Update

Thanks, I too was wondering (along the same lines as [mb]).

Dallas Finance University
Download Flying Units
Digitally Fixed Units
Dampen Future Umpires
Darken Fluorescent Uranium

I say "Didn't Fux Up", because you enter DFU, fix it, and say you meant to do that.

I have been waiting for some kind of Bus Pirate that is actually user friendly (as in, has an actual GUI) basically forever, for the same reason — life is too short to "learn" console commands you use once in a blue moon and forget before even closing the terminal window. No, having a "help" list of what is supposed to do what isn't any better. No, having the same happen only with Python instead of console commands isn't any better either (but is far more expensive, apparently). I can launch a GUI after two decades and recognize _literally instantly_ "this is the 'SPI tool' tab, this is the text box I type my hex data to send in, this other one is where I get my reply, and this 'Send' button is what I click to make it happen". NOTHING else can do that. Only a GUI. Which is of course why absolutely not a single usable one exists for any cheap Bus Pirate class device I ever heard of, naturally: nothing personal, the Universe just happens to hate your fucking guts. Luckily, it should apparently be trivial to code one myself if I don't like this — presumably in the Python I'm failing to remember how to use at all in the first place. It's the standard punishment for just wanting to get on with your electronics, or so I've heard, foolishly failing to aspire to join the GUI App Coder Master Race.
Because if for any reason you can't juggle three dozen GUI frameworks / support libs requiring support libs based on support libs / build systems that set up build systems that configure build systems (different ones each week), tying it all together with something trendy and hip like Rust or Haskell, in your sleep with your hands tied behind your back, then you're clearly just a waste of perfectly good oxygen. Clearly.

Do I detect just a smidgen of sarcasm in your comment? B^)

Why not just use a cheap STM32 Nucleo with MicroPython???

Okay, so I'm still a novice in embedded engineering, but how is this different from a standard MicroPython dev board, or really any dev kit with an interpreter firmware loaded on it?

Pretty sure this is Python running on the desktop, with fast communication over USB to the board. Full desktop Python has obvious advantages over MicroPython, including being able to save/do actual complicated projects, millions of Python packages, and not having to learn the quirks of a weird embedded implementation of Python. Probably faster, too, because the implementations of the microcontroller's features are native rather than interpreted.

How is a 32-bit ARM microcontroller (STM32F4) with MicroPython, USB, I2C, SPI, LCD etc. different from this?
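Circling back to the list-comprehension discussion at the top of the thread: the `any`/`all`/`!= False`/`is not False` variants behave differently on edge cases. A quick desktop-Python demonstration with made-up (write, read) response tuples (illustrative data, not the original script's):

```python
# Hypothetical (write_ok, read_ok) response tuples; 0 is included to
# show where `!= False` and `is not False` diverge (0 == False in Python).
responses = [(True, True), (True, False), (False, False), (0, True)]

# `if any(r)`: keep tuples where either element is truthy.
either_ok = [r for r in responses if any(r)]

# `if all(r)`: the `and`-like variant; both elements must be truthy.
both_ok = [r for r in responses if all(r)]

# Filtering on the first element only, the two explicit spellings:
not_equal = [r for r in responses if r[0] != False]      # drops 0 too, since 0 == False
not_ident = [r for r in responses if r[0] is not False]  # drops only the literal False

print(either_ok)  # [(True, True), (True, False), (0, True)]
print(both_ok)    # [(True, True)]
print(not_equal)  # [(True, True), (True, False)]
print(not_ident)  # [(True, True), (True, False), (0, True)]
```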
https://hackaday.com/2019/07/02/hands-on-greatfet-is-an-embedded-tool-that-does-it-all/
Top posting, to discuss post-specific questions about NEP 47 and partially the start on implementing it in:

There are probably many more that will crop up. But for me, each of these is a pretty major difficulty without a clear answer as of now.

1. I still need clarity on how a library is supposed to use this namespace when the user passes in a NumPy array (mentioned before). The user must get back a NumPy array after all. Maybe that is just a decorator, but it seems important.
2. `np.result_type` special cases array-scalars (the current PR), NEP 47 promises it will not. The PR could attempt to work around that using `arr.dtype` in `result_type`; I expect there are more details to fight with there, but I am not sure.
3. Now that I looked at the above, I do not feel it's reasonable to limit this functionality to numeric dtypes. If someone uses a NumPy rational-dtype, why should a SciPy function currently implemented in pure NumPy reject that? In other words, I think this is the point where trying to be "minimal" is counterproductive.
4. The PR makes no attempt at handling binary operators in any way, aside from greedily coercing the other operand.
5. What happens with a mix of array-likes or even array subclasses like `astropy.quantity`?
6. Is there any provision on how to deal with mixed array-like inputs? CuPy+NumPy, etc.?

I don't think we have to figure out everything up-front, but I do think there are a few very fundamental questions still open, at least for me personally.

Cheers,

Sebastian

On Sun, 2021-02-21 at 17:30 +0100, Ralf Gommers wrote:
> Hi all,
>
> Here is a NEP, written together with Stephan Hoyer and Aaron Meurer, for discussion on adoption of the array API standard ( ). This will add a new numpy.array_api submodule containing that standardized API. The main purpose of this API is to be able to write code that is portable to other array/tensor libraries like CuPy, PyTorch, JAX, TensorFlow, Dask, and MXNet.
>
> We expect this NEP to remain in draft state for quite a while, while we're gaining experience with using it in downstream libraries, discuss adding it to other array libraries, and finishing some of the loose ends (e.g., specifications for linear algebra functions that aren't merged yet, see ) in the API standard itself.
>
> See for an initial discussion about this topic.
>
> Please keep high-level discussion here and detailed comments on . Also, you can access a rendered version of the NEP from that PR (see PR description for how), which may be helpful.
>
> Cheers,
> Ralf
>
>
> Abstract
> --------
>
> We propose to adopt the `Python array API standard`_, developed by the `Consortium for Python Data API Standards`_. Implementing this as a separate new namespace in NumPy will allow authors of libraries which depend on NumPy as well as end users to write code that is portable between NumPy and all other array/tensor libraries that adopt this standard.
>
> .. note::
>
>    We expect that this NEP will remain in a draft state for quite a while. Given the large scope we don't expect to propose it for acceptance any time soon; instead, we want to solicit feedback on both the high-level design and implementation, and learn what needs describing better in this NEP or changing in either the implementation or the array API standard itself.
>
>
> Motivation and Scope
> --------------------
>
> Python users have a wealth of choice for libraries and frameworks for numerical computing, data science, machine learning, and deep learning. New frameworks pushing forward the state of the art in these fields are appearing every year. One unintended consequence of all this activity and creativity has been fragmentation in multidimensional array (a.k.a. tensor) libraries - which are the fundamental data structure for these fields.
> Choices include NumPy, Tensorflow, PyTorch, Dask, JAX, CuPy, MXNet, and others.
>
> The APIs of each of these libraries are largely similar, but with enough differences that it's quite difficult to write code that works with multiple (or all) of these libraries. The array API standard aims to address that issue, by specifying an API for the most common ways arrays are constructed and used. The proposed API is quite similar to NumPy's API, and deviates mainly in places where (a) NumPy made design choices that are inherently not portable to other implementations, and (b) where other libraries consistently deviated from NumPy on purpose because NumPy's design turned out to have issues or unnecessary complexity.
>
> For a longer discussion on the purpose of the array API standard we refer to the `Purpose and Scope section of the array API standard <>`__ and the two blog posts announcing the formation of the Consortium [1]_ and the release of the first draft version of the standard for community review [2]_.
>
> The scope of this NEP includes:
>
> - Adopting the 2021 version of the array API standard
> - Adding a separate namespace, tentatively named ``numpy.array_api``
> - Changes needed/desired outside of the new namespace, for example new dunder methods on the ``ndarray`` object
> - Implementation choices, and differences between functions in the new namespace with those in the main ``numpy`` namespace
> - A new array object conforming to the array API standard
> - Maintenance effort and testing strategy
> - Impact on NumPy's total exposed API surface and on other future and under-discussion design choices
> - Relation to existing and proposed NumPy array protocols (``__array_ufunc__``, ``__array_function__``, ``__array_module__``)
> - Required improvements to existing NumPy functionality
>
> Out of scope for this NEP are:
>
> - Changes in the array API standard itself.
>   Those are likely to come up during review of this NEP, but should be upstreamed as needed and this NEP subsequently updated.
>
>
> Usage and Impact
> ----------------
>
> *This section will be fleshed out later, for now we refer to the use cases given in* `the array API standard Use Cases section <>`__
>
> In addition to those use cases, the new namespace contains functionality that is widely used and supported by many array libraries. As such, it is a good set of functions to teach to newcomers to NumPy and recommend as "best practice". That contrasts with NumPy's main namespace, which contains many functions and objects that have been superseded or we consider mistakes - but that we can't remove because of backwards compatibility reasons.
>
> The usage of the ``numpy.array_api`` namespace by downstream libraries is intended to enable them to consume multiple kinds of arrays, *without having to have a hard dependency on all of those array libraries*:
>
> .. image:: _static/nep-0047-library-dependencies.png
>
> Adoption in downstream libraries
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The prototype implementation of the ``array_api`` namespace will be used with SciPy, scikit-learn and other libraries of interest that depend on NumPy, in order to get more experience with the design and find out if any important parts are missing.
>
> The pattern to support multiple array libraries is intended to be something like::
>
>     def somefunc(x, y):
>         # Retrieves standard namespace. Raises if x and y have different
>         # namespaces. See Appendix for possible get_namespace implementation
>         xp = get_namespace(x, y)
>         out = xp.mean(x, axis=0) + 2*xp.std(y, axis=0)
>         return out
>
> The ``get_namespace`` call is effectively the library author opting in to using the standard API namespace, and thereby explicitly supporting all conforming array libraries.
>
> The ``asarray`` / ``asanyarray`` pattern
> ````````````````````````````````````````
>
> Many existing libraries use the same ``asarray`` (or ``asanyarray``) pattern as NumPy itself does; accepting any object that can be coerced into a ``np.ndarray``. We consider this design pattern problematic - keeping in mind the Zen of Python, *"explicit is better than implicit"*, as well as the pattern being historically problematic in the SciPy ecosystem for ``ndarray`` subclasses and with over-eager object creation. All other array/tensor libraries are more strict, and that works out fine in practice. We would advise authors of new libraries to avoid the ``asarray`` pattern. Instead they should either accept just NumPy arrays or, if they want to support multiple kinds of arrays, check if the incoming array object supports the array API standard by checking for ``__array_namespace__`` as shown in the example above.
>
> Existing libraries can do such a check as well, and only call ``asarray`` if the check fails. This is very similar to the ``__duckarray__`` idea in :ref:`NEP30`.
>
>
> .. _adoption-application-code:
>
> Adoption in application code
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The new namespace can be seen by end users as a cleaned up and slimmed down version of NumPy's main namespace. Encouraging end users to use this namespace like::
>
>     import numpy.array_api as xp
>
>     x = xp.linspace(0, 2*xp.pi, num=100)
>     y = xp.cos(x)
>
> seems perfectly reasonable, and potentially beneficial - users get offered only one function for each purpose (the one we consider best-practice), and they then write code that is more easily portable to other libraries.
>
>
> Backward compatibility
> ----------------------
>
> No deprecations or removals of existing NumPy APIs or other backwards incompatible changes are proposed.
>
> High-level design
> -----------------
>
> The array API standard consists of approximately 120 objects, all of which have a direct NumPy equivalent. This figure shows what is included at a high level:
>
> .. image:: _static/nep-0047-scope-of-array-API.png
>
> The most important changes compared to what NumPy currently offers are:
>
> - A new array object which:
>
>   - conforms to the casting rules and indexing behaviour specified by the standard,
>   - does not have methods other than dunder methods,
>   - does not support the full range of NumPy indexing behaviour. Advanced indexing with integers is not supported. Only boolean indexing with a single (possibly multi-dimensional) boolean array is supported. An indexing expression that selects a single element returns a 0-D array rather than a scalar.
>
> - Functions in the ``array_api`` namespace:
>
>   - do not accept ``array_like`` inputs, only NumPy arrays and Python scalars,
>   - do not support ``__array_ufunc__`` and ``__array_function__``,
>   - use positional-only and keyword-only parameters in their signatures,
>   - have inline type annotations,
>   - may have minor changes to signatures and semantics of individual functions compared to their equivalents already present in NumPy,
>   - only support dtype literals, not format strings or other ways of specifying dtypes
>
> - DLPack_ support will be added to NumPy,
> - New syntax for "device support" will be added, through a ``.device`` attribute on the new array object, and ``device=`` keywords in array creation functions in the ``array_api`` namespace,
> - Casting rules that differ from those NumPy currently has. Output dtypes can be derived from input dtypes (i.e. no value-based casting), and 0-D arrays are treated like >=1-D arrays.
> - Not all dtypes NumPy has are part of the standard. Only boolean, signed and unsigned integers, and floating-point dtypes up to ``float64`` are supported.
>   Complex dtypes are expected to be added in the next version of the standard. Extended precision, string, void, object and datetime dtypes, as well as structured dtypes, are not included.
>
> Improvements to existing NumPy functionality that are needed include:
>
> - Add support for stacks of matrices to some functions in ``numpy.linalg`` that are currently missing such support.
> - Add the ``keepdims`` keyword to ``np.argmin`` and ``np.argmax``.
> - Add a "never copy" mode to ``np.asarray``.
>
>
> Functions in the ``array_api`` namespace
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> Let's start with an example of a function implementation that shows the most important differences with the equivalent function in the main namespace::
>
>     def max(x: array, /, *,
>             axis: Optional[Union[int, Tuple[int, ...]]] = None,
>             keepdims: bool = False
>     ) -> array:
>         """
>         Array API compatible wrapper for :py:func:`np.max <numpy.max>`.
>         """
>         return np.max._implementation(x, axis=axis, keepdims=keepdims)
>
> This function does not accept ``array_like`` inputs, only ``ndarray``. There are multiple reasons for this. Other array libraries all work like this. Letting the user do coercion of lists, generators, or other foreign objects separately results in a cleaner design with less unexpected behaviour. It's higher-performance - less overhead from ``asarray`` calls. Static typing is easier. Subclasses will work as expected. And the slight increase in verbosity because users have to explicitly coerce to ``ndarray`` on rare occasions seems like a small price to pay.
>
> This function does not support ``__array_ufunc__`` nor ``__array_function__``. These protocols serve a similar purpose as the array API standard module itself, but through a different mechanism. Because only ``ndarray`` instances are accepted, dispatching via one of these protocols isn't useful anymore.
>
> This function uses positional-only parameters in its signature. This makes code more portable - writing ``max(x=x, ...)`` is no longer valid, hence if other libraries call the first parameter ``input`` rather than ``x``, that is fine. The rationale for keyword-only parameters (not shown in the above example) is two-fold: clarity of end user code, and it being easier to extend the signature in the future with keywords in the desired order.
>
> This function has inline type annotations. Inline annotations are far easier to maintain than separate stub files. And because the types are simple, this will not result in a large amount of clutter with type aliases or unions like in the current stub files NumPy has.
>
>
> DLPack support for zero-copy data interchange
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> The ability to convert one kind of array into another kind is valuable, and indeed necessary when downstream libraries want to support multiple kinds of arrays. This requires a well-specified data exchange protocol. NumPy already supports two of these, namely the buffer protocol (i.e., PEP 3118), and the ``__array_interface__`` (Python side) / ``__array_struct__`` (C side) protocol. Both work similarly, letting the "producer" describe how the data is laid out in memory so the "consumer" can construct its own kind of array with a view on that data.
>
> DLPack works in a very similar way. The main reasons to prefer DLPack over the options already present in NumPy are:
>
> 1. DLPack is the only protocol with device support (e.g., GPUs using CUDA or ROCm drivers, or OpenCL devices). NumPy is CPU-only, but other array libraries are not. Having one protocol per device isn't tenable, hence device support is a must.
> 2. Widespread support. DLPack has the widest adoption of all protocols, only NumPy is missing support.
>    And the experiences of other libraries with it are positive. This contrasts with the protocols NumPy does support, which are used very little - when other libraries want to interoperate with NumPy, they typically use the (more limited, and NumPy-specific) ``__array__`` protocol.
>
> Adding support for DLPack to NumPy entails:
>
> - Adding a ``ndarray.__dlpack__`` method
> - Adding a ``from_dlpack`` function, which takes as input an object supporting ``__dlpack__``, and returns an ``ndarray``.
>
> DLPack is currently a ~200 LoC header, and is meant to be included directly, so no external dependency is needed. Implementation should be straightforward.
>
>
> Syntax for device support
> ~~~~~~~~~~~~~~~~~~~~~~~~~
>
> NumPy itself is CPU-only, so it clearly doesn't have a need for device support. However, other libraries (e.g. TensorFlow, PyTorch, JAX, MXNet) support multiple types of devices: CPU, GPU, TPU, and more exotic hardware. To write portable code on systems with multiple devices, it's often necessary to create new arrays on the same device as some other array, or check that two arrays live on the same device. Hence syntax for that is needed.
>
> The array object will have a ``.device`` attribute which enables comparing devices of different arrays (they only should compare equal if both arrays are from the same library and it's the same hardware device). Furthermore, ``device=`` keywords in array creation functions are needed. For example::
>
>     def empty(shape: Union[int, Tuple[int, ...]], /, *,
>               dtype: Optional[dtype] = None,
>               device: Optional[device] = None) -> array:
>         """
>         Array API compatible wrapper for :py:func:`np.empty <numpy.empty>`.
>         """
>         return np.empty(shape, dtype=dtype, device=device)
>
> The implementation for NumPy may be as simple as setting the device attribute to the string ``'cpu'`` and raising an exception if array creation functions encounter any other value.
>
> Dtypes and casting rules
> ~~~~~~~~~~~~~~~~~~~~~~~~
>
> The supported dtypes in this namespace are boolean, 8/16/32/64-bit signed and unsigned integer, and 32/64-bit floating-point dtypes. These will be added to the namespace as dtype literals with the expected names (e.g., ``bool``, ``uint16``, ``float64``).
>
> The most obvious omissions are the complex dtypes. The rationale for the lack of complex support in the first version of the array API standard is that several libraries (PyTorch, MXNet) are still in the process of adding support for complex dtypes. The next version of the standard is expected to include ``complex64`` and ``complex128`` (see `this issue <>`__ for more details).
>
> Specifying dtypes to functions, e.g. via the ``dtype=`` keyword, is expected to only use the dtype literals. Format strings, Python builtin dtypes, or string representations of the dtype literals are not accepted - this will improve readability and portability of code at little cost.
>
> Casting rules are only defined between different dtypes of the same kind. The rationale for this is that mixed-kind (e.g., integer to floating-point) casting behavior differs between libraries. NumPy's mixed-kind casting behavior doesn't need to be changed or restricted, it only needs to be documented that if users use mixed-kind casting, their code may not be portable.
>
> .. image:: _static/nep-0047-casting-rules-lattice.png
>
> *Type promotion diagram. Promotion between any two types is given by their join on this lattice. Only the types of participating arrays matter, not their values. Dashed lines indicate that behaviour for Python scalars is undefined on overflow.
> Boolean, integer and floating-point dtypes are not connected, indicating mixed-kind promotion is undefined.*
>
> The most important difference between the casting rules in NumPy and in the array API standard is how scalars and 0-dimensional arrays are handled. In the standard, array scalars do not exist and 0-dimensional arrays follow the same casting rules as higher-dimensional arrays.
>
> See the `Type Promotion Rules section of the array API standard <>`__ for more details.
>
> .. note::
>
>    It is not clear what the best way is to support the different casting rules for 0-dimensional arrays and no value-based casting. One option may be to implement this second set of casting rules, keep them private, mark the array API functions with a private attribute that says they adhere to these different rules, and let the casting machinery check for that attribute.
>
>    This needs discussion.
>
>
> Indexing
> ~~~~~~~~
>
> An indexing expression that would return a scalar with ``ndarray``, e.g. ``arr_2d[0, 0]``, will return a 0-D array with the new array object. There are several reasons for that: array scalars are largely considered a design mistake which no other array library copied; it works better for non-CPU libraries (typically arrays can live on the device, scalars live on the host); and it's simply a consistent design. To get a Python scalar out of a 0-D array, one can simply use the builtin for the type, e.g. ``float(arr_0d)``.
>
> The other `indexing modes in the standard <>`__ do work largely the same as they do for ``numpy.ndarray``. One noteworthy difference is that clipping in slice indexing (e.g., ``a[:n]`` where ``n`` is larger than the size of the first axis) is unspecified behaviour, because that kind of check can be expensive on accelerators.
>
> The lack of advanced indexing, and boolean indexing being limited to a single n-D boolean array, is due to those indexing modes not being suitable for all types of arrays or JIT compilation. Their absence does not seem to be problematic; if a user or library author wants to use them, they can do so through zero-copy conversion to ``numpy.ndarray``. This will signal correctly to whomever reads the code that it is then NumPy-specific rather than portable to all conforming array types.
>
>
> The array object
> ~~~~~~~~~~~~~~~~
>
> The array object in the standard does not have methods other than dunder methods. The rationale for that is that not all array libraries have methods on their array object (e.g., TensorFlow does not). It also provides only a single way of doing something, rather than having functions and methods that are effectively duplicates.
>
> Mixing operations that may produce views (e.g., indexing, ``nonzero``) in combination with mutation (e.g., item or slice assignment) is `explicitly documented in the standard to not be supported <>`__. This cannot easily be prohibited in the array object itself; instead this will be guidance to the user via documentation.
>
> The standard currently does not prescribe a name for the array object itself. We propose to simply name it ``ndarray``. This is the most obvious name, and because of the separate namespace should not clash with ``numpy.ndarray``.
>
>
> Implementation
> --------------
>
> .. note::
>
>    This section needs a lot more detail, which will gradually be added when the implementation progresses.
>
> A prototype of the ``array_api`` namespace can be found in . The docstring in its ``__init__.py`` has notes on completeness of the implementation. The code for the wrapper functions also contains ``# Note:`` comments everywhere there is a difference with the NumPy API.
> Two important parts that are not implemented yet are the new array object and DLPack support. Functions may need changes to ensure the changed casting rules are respected.
>
> The array object
> ~~~~~~~~~~~~~~~~
>
> Regarding the array object implementation, we plan to start with a regular Python class that wraps a ``numpy.ndarray`` instance. Attributes and methods can forward to that wrapped instance, applying input validation and implementing changed behaviour as needed.
>
> The casting rules are probably the most challenging part. The in-progress dtype system refactor (NEPs 40-43) should make implementing the correct casting behaviour easier - it is already moving away from value-based casting for example.
>
>
> The dtype objects
> ~~~~~~~~~~~~~~~~~
>
> We must be able to compare dtypes for equality, and expressions like these must be possible::
>
>     np.array_api.some_func(..., dtype=x.dtype)
>
> The above implies it would be nice to have ``np.array_api.float32 == np.array_api.ndarray(...).dtype``.
>
> Dtypes should not be assumed to have a class hierarchy by users, however we are free to implement it with a class hierarchy if that's convenient. We considered the following options to implement dtype objects:
>
> 1. Alias dtypes to those in the main namespace. E.g., ``np.array_api.float32 = np.float32``.
> 2. Make the dtypes instances of ``np.dtype``. E.g., ``np.array_api.float32 = np.dtype(np.float32)``.
> 3. Create new singleton classes with only the required methods/attributes (currently just ``__eq__``).
>
> It seems like (2) would be easiest from the perspective of interacting with functions outside the main namespace. And (3) would adhere best to the standard.
>
> TBD: the standard does not yet have a good way to inspect properties of a dtype, to ask questions like "is this an integer dtype?".
> Perhaps this is easy enough to do for users, like so::
>
>     def _get_dtype(dt_or_arr):
>         return dt_or_arr.dtype if hasattr(dt_or_arr, 'dtype') else dt_or_arr
>
>     def is_floating(dtype_or_array):
>         dtype = _get_dtype(dtype_or_array)
>         return dtype in (float32, float64)
>
>     def is_integer(dtype_or_array):
>         dtype = _get_dtype(dtype_or_array)
>         return dtype in (uint8, uint16, uint32, uint64, int8, int16, int32, int64)
>
> However it could make sense to add this to the standard. Note that NumPy itself currently does not have a great API for asking such questions, see `gh-17325 <>`__.

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion at python.org
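The `get_namespace` helper used in the NEP's `somefunc` example is deferred to an appendix that is not quoted here. A minimal sketch of what such a helper could look like — the `__array_namespace__` method name comes from the array API standard, but the body and the stand-in array class are illustrative assumptions, not the NEP's actual implementation:

```python
def get_namespace(*xs):
    """Sketch: return the single array API namespace shared by all inputs.

    Each conforming array exposes its namespace module via
    __array_namespace__(); raise if inputs disagree or none conform.
    """
    namespaces = {
        x.__array_namespace__() for x in xs
        if hasattr(x, "__array_namespace__")
    }
    if not namespaces:
        raise TypeError("no inputs are array API conforming arrays")
    if len(namespaces) != 1:
        raise TypeError("inputs belong to different array namespaces")
    return namespaces.pop()


# Demonstration with a stand-in "array" class; a real conforming array
# (e.g. the proposed numpy.array_api ndarray) would return its own module.
import math

class FakeArray:
    def __array_namespace__(self, api_version=None):
        return math  # stand-in namespace, for illustration only

xp = get_namespace(FakeArray(), FakeArray())
print(xp is math)  # True
```

A library-side `somefunc` would then call `xp = get_namespace(x, y)` and use only functions from `xp`, as in the NEP's example above.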
https://mail.python.org/pipermail/numpy-discussion/2021-March/081598.html
Background

The aim of this project is to demonstrate how two LoPy modules can communicate directly with each other via LoRa, without using LoRaWAN. One LoPy will send out "Ping" messages, which are received by another LoPy module that will in turn reply with a "Pong" message. When either LoPy receives a message it will light up a string of WS2812 addressable LEDs and play a spooky sound effect.

Required libraries:

- wave - Used to decode WAV files
- chunk - Required by wave to decode the WAV header
- ws2812alt - Library for controlling the WS2812 LEDs via the LoPy's SPI peripheral

Assembly

Setting up the boards: connect the LoPy modules to the expansion boards, ensuring that the LED of the module is facing the same side as the micro-USB connector. Once connected, you need to make sure the LoPy firmware is up to date. Instructions on how to update the firmware can be found here.

Circuit

The above diagram illustrates the complete circuit required for each pumpkin. A speaker is connected to the LoPy via an amplifier on pin P21. The LoPy module has a digital-to-analog converter (DAC) available on pins P21 and P22, which will be used to generate an audio signal. The WS2812 LEDs need to be connected into a single chain as shown in the diagram. For simplicity (and better aesthetics), Adafruit Jewel modules are used for the eyes and nose. If you don't carve a nose into your pumpkin you will only require two of these. For the power connections it is recommended to use a terminal block like so:

Pumpkin Carving

This step is where you let your creativity shine; the exact design you use is not critical so long as light is able to pass through the pumpkin in places. A guide on carving pumpkins can be found on wikiHow.

Code

LoRa Communications

Communicating between two LoPy modules using LoRa is quite trivial. This project used the example code found here to achieve this.
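Before wiring in the radio, the ping/pong exchange described in the Background boils down to a small piece of message logic. The sketch below is illustrative only (the function name and behaviour are assumptions based on the project description); on the LoPy, the packet bytes would come from a raw LoRa socket's recv() and the returned reply would go to send(), as in the Pycom example code mentioned above:

```python
def handle_packet(packet):
    """Return the reply to transmit, or None if no reply should be sent."""
    if packet == b"Ping":
        # A "Ping" was heard: this pumpkin lights up, plays its sound
        # effect, and answers with a "Pong".
        return b"Pong"
    if packet == b"Pong":
        # Our "Ping" was answered: light up and play the sound, but do
        # not reply; the pinging side drives the exchange.
        return None
    return None  # ignore unrelated traffic

print(handle_packet(b"Ping"))  # b'Pong'
```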
Please make sure you set the frequency to match the LoRa frequency in your region (868 MHz for Europe, 915 MHz for North America or 433 MHz for Asia).

LEDs

For the LEDs I used the ws2812alt library by Gadgetoid. The WS2812 LEDs require very accurate timing; this clever library utilises the SPI peripheral of the ESP32 found on the LoPy module to accurately generate the required signal. To initialise this library we used the following code:

    # 3 jewels for both eyes and nose, 16 LED strip for the mouth
    num_pixels = 7 * 3 + 16
    chain = WS2812(spi_bus=0, led_count=num_pixels)

For this project, rather than changing the colour of the LEDs in a pattern, we use the library's intensity attribute to fade a fixed pattern in and out. The fixed pattern for the LEDs is set up with the following code:

    data = ([(0, 255, 0) for x in range(7 * 2)] +    # Green eyes
            # a 4-tuple is used below because this jewel is an RGBW one
            [(0, 255, 255, 0) for x in range(7)] +   # Cyan nose
            [(255, 0, 0) for x in range(16)])        # Red mouth

To fade this pattern in and out we can vary the intensity attribute from 0 to 1. The following code changes the intensity and writes the values out to the LEDs:

    for x in [x * 0.1 for x in range(11)]:
        chain.intensity = x
        chain.show(data)
        time.sleep_ms(50)

The LEDs can then be faded back out by using a similar loop with a reversed range. If you want to animate the LEDs with different colours you will need to alter the values within data.

Sound Effect

For the spooky sound effect we will be using WAV files due to their simplicity. WAV files contain a header describing the format and amount of audio samples, followed by the raw, uncompressed audio samples. Due to the lack of compression, it is very simple to parse a WAV file. The first thing we need to do is set up the DAC on pin P21:

    # Audio output DAC
    dac_audio = machine.DAC('P21')

The wave library provided by a MicroPython example will handle parsing the WAV file for us and return to us the raw audio samples.
These samples can then be output to the DAC to generate audio. Note: the DAC present on the ESP32 is not really meant for audio output, and as such you won't be able to reproduce audio sampled at more than 2 kHz via MicroPython.

    def play_wav(filepath, dac_pin):
        f = wave.open(filepath, 'r')
        max_frames = f.getnframes()
        sample_rate = f.getframerate()
        for _ in range(max_frames):
            sample = f.readframes(1)
            sample = struct.unpack("h", sample)[0]
            sample = scale(sample, (-32768, 32767), (0, 1))
            dac_pin.write(sample)
            time.sleep(1.0 / sample_rate)

So long as the WAV file is kept small, FTP can be used to upload the WAV file directly to the LoPy. If a long WAV file is used, it is best saved onto a micro SD card which can be plugged into the expansion board.

The values in most WAV files come in the form of signed 16-bit integers, while the DAC expects a floating point value in the range 0 to 1. This scaling is quite trivial and can be achieved via the following function:

    def scale(val, src, dst):
        return ((val - src[0]) / (src[1] - src[0])) * (dst[1] - dst[0]) + dst[0]
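To sanity-check the mapping, the scale function can be run on a desktop Python interpreter against the extremes and midpoint of the signed 16-bit sample range; the values below are just illustrative inputs.

```python
def scale(val, src, dst):
    # linear map of val from the range src=(lo, hi) onto the range dst=(lo, hi)
    return ((val - src[0]) / (src[1] - src[0])) * (dst[1] - dst[0]) + dst[0]

# extremes and midpoint of a signed 16-bit audio sample
for sample in (-32768, 0, 32767):
    print(round(scale(sample, (-32768, 32767), (0, 1)), 3))
```

This prints 0.0, 0.5 and 1.0, confirming the full input range lands inside the DAC's expected 0-to-1 interval.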
https://www.hackster.io/user83346052/raw-lora-communication-with-pumpkins-f02630
CC-MAIN-2020-10
refinedweb
898
68.91
vxtrace(7)                     VxVM 3.5                     vxtrace(7)
                              1 Jun 2002

NAME
    vxtrace - VERITAS Volume Manager I/O Tracing Device

SYNOPSIS
    /dev/vx/trace

DESCRIPTION
    The vxtrace device implements the VERITAS Volume Manager (VxVM) I/O
    tracing and the error tracing. An I/O tracing interface is available
    that users or processes can use to get a trace of I/Os for specified
    sets of kernel objects. Each separate user of the I/O tracing
    interface can specify the set of desired trace data independent of
    all other users. I/O events include regular read and write
    operations, special I/O operations (ioctls), as well as special
    recovery operations (for example, recovery reads).

    A special tracing mechanism exists for getting error trace data. The
    error tracing mechanism is independent of any I/O tracing and is
    always enabled for all pertinent kernel I/O objects. It is possible
    for a process to get both a set of saved errors and to wait for new
    errors.

IOCTLS
    The format for calling each ioctl command is:

        #include <sys/types.h>
        #include <vxvm/voltrace.h>

        struct tag arg;
        int ioctl (int fd, int cmd, struct tag arg);

    The first argument fd is a file descriptor which is returned from
    opening the /dev/vx/trace device. Each tracing device opened is a
    cloned device which can be used as a private kernel trace channel.
    The value of cmd is the ioctl command code, and arg is usually a
    pointer to a structure containing the arguments that need to be
    passed to the kernel. The return value for all these ioctls is 0 if
    the command was successful, and -1 if it was rejected. If the return
    value is -1, errno is set to indicate the cause of the error.

    The following ioctl commands are supported:

    VOLIOT_ERROR_TRACE_INIT
        This command accepts no argument. The VOLIOT_ERROR_TRACE_INIT
        ioctl initializes a kernel trace channel to return error trace
        data. The trace channel will be initialized to return any
        previously accumulated error trace data that has not yet been
        discarded.
        The accumulated trace data can be skipped by issuing
        VOLIOT_DISCARD on the channel. This call can be issued on a
        trace channel that was previously initialized either for error
        tracing or for regular I/O tracing. In this case, the channel is
        effectively closed down and then reinitialized as described
        above. To get the error trace data, issue the read(2) system
        call.

        The error trace data consists of a set of variable length trace
        event records. The first byte of each record indicates the
        length, in bytes, of the entire record (including the length
        byte); the second byte indicates the type of the entry (which
        can be used to determine the format of the entry). Each call to
        read() returns an integral number of trace event records, not to
        exceed the number of bytes requested in the read() call; the
        return value from read() will be adjusted to the number of bytes
        of trace data actually returned. If the O_NONBLOCK flag is set
        on the trace channel, and no trace data is available, EAGAIN
        will be returned; otherwise, the read will block interruptibly
        until at least one trace record is available. When some trace
        data is available, the available unread trace records will be
        returned, up to the limit specified in the call to read(). If
        more trace records are available, subsequent reads will return
        those records.

    VOLIOT_IO_TRACE_INIT
        The VOLIOT_IO_TRACE_INIT ioctl initializes a kernel trace
        channel to return I/O trace data. This command accepts bufsize
        as the argument. Initially, no objects are selected for I/O
        tracing. To select objects to trace, issue the VOLIOT_IO_TRACE
        ioctl. The bufsize argument specifies the kernel buffer size to
        use for gathering events. A larger size reduces the chance that
        events are lost due to scheduling delays in the event reading
        process. A bufsize value of 0 requests a default size which is
        considered reasonable for the system.
        The value of bufsize will be silently truncated to a maximum
        value to avoid extreme use of system resources. A bufsize value
        of (size_t)-1 will yield the maximum buffer size.

    VOLIOT_IO_TRACE, VOLIOT_IO_UNTRACE
        The VOLIOT_IO_TRACE and VOLIOT_IO_UNTRACE ioctls enable and
        disable, respectively, I/O tracing for particular sets of
        objects on an I/O tracing channel. They both accept a
        voliot_want_list structure tracelist as the argument. The
        tracelist argument specifies object sets. The voliot_want_list
        structure specifies an array of desired object sets. Each object
        set is identified by a union of structures (the voliot_want_set
        union), each representing different types of object sets. See
        the declaration of these structures in voltrace.h for more
        detail.

FILES
    /dev/vx/trace

SEE ALSO
    vxintro(1M), vxtrace(1M), vxvol(1M), ioctl(2), read(2), vxconfig(7),
    vxiod(7)
http://modman.unixdev.net/?sektion=7&page=vxtrace&manpath=HP-UX-11.11
CC-MAIN-2017-13
refinedweb
809
62.98
Email has a longer history than the Web. Even now, it is a very widely used service on the Internet. Almost all programming languages support sending and receiving e-mail, but wait a minute: before we start coding, it's necessary to figure out how e-mail works on the Internet.

1. How Email Works On The Internet.

1.1 MUA (Mail User Agent).

Assuming our own e-mail address is [email protected] and the other party's e-mail address is [email protected], we now use software like Outlook to write the e-mail, fill in the other party's e-mail address, click "Send", and the e-mail will be sent out. Such e-mail software is called an MUA: Mail User Agent.

1.2 MTA (Mail Transfer Agent).

Email sent from an MUA does not go directly to the other party's computer, but to an MTA: Mail Transfer Agent, which is run by email service providers such as Google, Yahoo and so on. Since our own e-mail address is at google.com, the e-mail is first delivered to the MTA provided by Google, and then from Google's MTA to the other provider's MTA, such as Yahoo's. There may be other MTAs in the process, but we don't care about the specific route.

1.3 MDA (Mail Delivery Agent).

After the email arrives at Yahoo's MTA, because the other party uses a mailbox at @yahoo.com, Yahoo's MTA will send the email to the final destination of the mail, the MDA: Mail Delivery Agent. When the email arrives at the MDA, it lies quietly on one of Yahoo's servers, stored in a file or a special database. We call this place where mail is stored for a long time an e-mail box.

Similar to ordinary mail, email will not directly reach the other party's computer, because the other party's computer may not be turned on, or may not be connected to the internet. In order for the other party to access the mail, the mail must be transferred from the MDA to their own computer through an MUA.

2. How To Write Code To Send/Receive Email.

With these basic concepts, writing programs to send and receive e-mails just needs to follow two steps.

- Write an MUA to send mail to an MTA.
- Write an MUA to receive mail from an MDA.

When sending mail, the protocol used between the MUA and the MTA is SMTP: Simple Mail Transfer Protocol, and the later transfer from one MTA to another MTA also uses the SMTP protocol.

When receiving mail, the MUA and the MDA use one of two protocols: POP: Post Office Protocol (the current version is 3, commonly known as POP3) and IMAP: Internet Message Access Protocol (the current version is 4). The advantage of IMAP is that not only can it get mail, it can also directly operate on the mail stored on the MDA, such as moving a message from the inbox to the garbage bin, and so on.

When email client software sends mail, it will let you configure the SMTP server first, that is, which MTA you want to send to. To prove that you are a user of this mail server, the SMTP server also requires you to fill in the mailbox address and password, so that the MUA can send email to the MTA through the SMTP protocol normally.

When receiving mail from the MDA, the MDA server also requires validation of your mailbox password to ensure that no one pretends to be you to collect your mail. Therefore, mail clients such as Outlook will require you to fill in the POP3 or IMAP server address, mailbox address and password, so that the MUA can smoothly retrieve mail from the MDA through the POP or IMAP protocol.

3. Send Email Through SMTP In Python.

3.1 Send SMTP Email In Python.

SMTP is a protocol for sending email. Python has built-in support for SMTP. It can send plain text mail, HTML mail and mail with attachments. Python supports SMTP with two modules: email and smtplib. The email module is responsible for constructing mail, and the smtplib module is responsible for sending mail. Below is python code that can construct the simplest plain text message.

    from email.mime.text import MIMEText

    msg = MIMEText('hello world from Python...', 'plain', 'utf-8')

Notice that when constructing a MIMEText object, the first parameter is the mail body, the second parameter is the subtype of MIME, and the input 'plain' represents plain text. The final MIME type is 'text/plain'. Finally, the utf-8 coding charset must be used to ensure multilingual compatibility.
Then you can send the above email message using the SMTP protocol like below.

    # input sender email address and password:
    from_addr = input('From: ')
    password = input('Password: ')
    # input receiver email address.
    to_addr = input('To: ')
    # input smtp server ip address:
    smtp_server = input('SMTP server: ')

    # import smtplib module
    import smtplib
    # create smtp server object, the default smtp protocol port number is 25.
    server = smtplib.SMTP(smtp_server, 25)
    # set debug level to 1 to print out all interaction data between this program and the SMTP server
    server.set_debuglevel(1)
    # login to the smtp server with the provided sender email and password.
    server.login(from_addr, password)
    # send the email to the smtp server and quit. to_addr is a list of email
    # addresses, so one email can be sent to multiple receivers.
    server.sendmail(from_addr, [to_addr], msg.as_string())
    server.quit()

3.2 Send SMTP Email With Complete Information In Python.

If you look at the above email as received in your mailbox, you will find that the email does not have a subject, the recipient's name is not displayed as a friendly name such as Trump<[email protected]>, and the recipient list is not displayed either. This is because the mail subject and the display names of the sender and recipient are not sent to the MTA through the SMTP protocol itself, but are contained in the text sent to the MTA. Therefore, we must add From, To and Subject headers to the MIMEText object to send an email with complete information.

    from email import encoders
    from email.header import Header
    from email.mime.text import MIMEText
    from email.utils import parseaddr, formataddr
    # import python smtplib module
    import smtplib

    # this function will parse the email address first to get the user's real name and address.
    # it will then encode the user name with utf-8 to avoid encoding errors,
    # and finally call the formataddr function to construct the email address again.
    def _re_format_addr(s):
        # parse email to get user real name and email address.
        name, addr = parseaddr(s)
        # formataddr expects a single (name, addr) pair
        return formataddr((Header(name, 'utf-8').encode(), addr))

    # get sender email, password, receiver email and smtp server ip address.
    from_addr = input('From: ')
    password = input('Password: ')
    to_addr = input('To: ')
    smtp_server = input('SMTP server: ')

    # create MIMEText object
    msg = MIMEText('hello world email from Python', 'plain', 'utf-8')
    # add from, to and subject to the MIMEText object.
    msg['From'] = _re_format_addr('Trump <%s>' % from_addr)
    msg['To'] = _re_format_addr('Admin <%s>' % to_addr)
    msg['Subject'] = Header('This email sent from Python code', 'utf-8').encode()

    server = smtplib.SMTP(smtp_server, 25)
    server.set_debuglevel(1)
    server.login(from_addr, password)
    server.sendmail(from_addr, [to_addr], msg.as_string())
    server.quit()

Above is just a simple example of how to send a pure text email through SMTP in python. We will introduce more detailed email sending options, such as sending HTML email, emails containing images, and emails with attachments, in later articles.

2 thoughts on "Python SMTP Send Email Example"

    error from utils.py #89
    name, address = pair
    ValueError: too many values to unpack (expected 2)
https://www.code-learner.com/python-smtp-send-email-example/
CC-MAIN-2021-43
refinedweb
1,214
63.7
std::strchr

From cppreference.com

Finds the first occurrence of the character static_cast<char>(ch) in the byte string pointed to by str. The terminating null character is considered to be a part of the string.

Parameters

Return value

Pointer to the found character in str, or a null pointer if no such character is found.

Example

    #include <iostream>
    #include <cstring>

    int main()
    {
        const char *str = "Try not. Do, or do not. There is no try.";
        char target = 'T';
        const char *result = str;

        while ((result = std::strchr(result, target)) != NULL)
        {
            std::cout << "Found '" << target << "' starting at '" << result << "'\n";

            // Increment result, otherwise we'll find target at the same location
            ++result;
        }
    }

Output:

    Found 'T' starting at 'Try not. Do, or do not. There is no try.'
    Found 'T' starting at 'There is no try.'
http://doc.bccnsoft.com/docs/cppreference2015/en/cpp/string/byte/strchr.html
CC-MAIN-2018-47
refinedweb
139
68.87
TL;DR – C++ vectors handle dynamic data elements by working as sequence containers for stored elements.

What is C++ Vector: STL Basics

Vector is a template class in the STL (Standard Template Library) of C++. C++ vectors are sequence containers that store elements. Specifically used to work with dynamic data, C++ vectors may expand depending on the elements they contain. That makes them different from a fixed-size array.

C++ vectors can automatically manage storage. This is efficient if you add and delete data often. Bear in mind, however, that a vector might consume more memory than an array.

Why Use Vectors in C++

Vectors in C++ are preferable when managing ever-changing data elements. They are handy if you don't know how big the data is beforehand, since you don't need to set the maximum size of the container. Since it's possible to resize C++ vectors, they offer better flexibility to handle dynamic elements.

C++ vectors offer excellent efficiency. Vector is a template class, which means no more typing in the same code to handle different data. If you use vectors, you can copy and assign other vectors with ease. There are different ways to do that: using the iterative method, the assignment operator =, an in-built function, or passing a vector as a constructor argument.

In C++ vectors, automatic reallocation happens whenever the allocated memory is used up. This reallocation relates to how the size and capacity functions work.

How to Create C++ Vectors

Vectors in C++ are created by declaring them in the program that uses them. The common syntax looks like this:

    vector <type> variable (elements)

For example:

    vector <int> rooms (9);

Let's break it down:

- type defines a data type stored in the vector (e.g., <int>, <double> or <string>)
- variable is a name that you choose for the data
- elements specifies the number of elements for the data

It is mandatory to determine the type and variable name. However, the number of elements is optional.
Basically, all the data elements are stored in contiguous storage. Whenever you want to access or move through the data, you can use iterators. The data elements in C++ vectors are inserted at the end. Use modifiers to insert new elements or delete existing ones.

Theory is great, but we recommend digging deeper!

Iterators

An iterator allows you to access the data elements stored within the C++ vector. It is an object that functions as a pointer. There are five types of iterators in C++: input, output, forward, bidirectional, and random access. C++ vectors support random access iterators. vector::cend() is similar to vector::end() but can't modify the content.

Modifiers

As its name suggests, you can use a modifier to change the meaning of a specified type of data. Here are some modifiers you can use in C++ vectors:

- vector::push_back() pushes elements from the back.
- vector::insert() inserts new elements at a specified location.
- vector::pop_back() removes elements from the back.
- vector::erase() removes a range of elements from a specified location.
- vector::clear() removes all elements.

Breaking It Down With Examples

There are many ways to initialize C++ vectors. You can use them depending on your preferences or the size of your data.

Start with default value

    int main() {
        // Vector with 5 integers
        // Default value of integers will be 0.
        std::vector<int> vecOfInts(5);
        for (int x : vecOfInts)
            std::cout << x << std::endl;
    }

Start with an array

    int main() {
        // Array of string objects
        std::string arr[] = { "first", "sec", "third", "fourth" };
        // Vector with a string array
        std::vector<std::string> vecOfStr(arr, arr + sizeof(arr) / sizeof(std::string));
        for (std::string str : vecOfStr)
            std::cout << str << std::endl;
    }

Start with a list

    int main() {
        // std::list of 4 string objects
        std::list<std::string> listOfStr;
        listOfStr.push_back("first");
        listOfStr.push_back("sec");
        listOfStr.push_back("third");
        listOfStr.push_back("fourth");
        // Vector with std::list
        std::vector<std::string> vecOfStr(listOfStr.begin(), listOfStr.end());
        for (std::string str : vecOfStr)
            std::cout << str << std::endl;
    }

Start by copying from another vector

    int main() {
        std::vector<std::string> vecOfStr;
        vecOfStr.push_back("first");
        vecOfStr.push_back("sec");
        vecOfStr.push_back("third");
        // Vector copied from another string vector
        std::vector<std::string> vecOfStr3(vecOfStr);
    }

When using certain initializations, you might need to check the size of the vector. Size refers to the number of elements a vector contains. It is not the same as capacity, which is the number of elements the vector can hold before it needs to reallocate. To check it, you can use the size() function:

    using namespace std;

    int main() {
        vector<int> v { 1, 2, 3, 4, 5 };
        int n = v.size();
        cout << "Size of the vector is :" << n;
    }

You can also use the max_size() function like this:

    using namespace std;

    int main() {
        vector<int> v { 1, 2, 3, 4, 5 };
        std::cout << v.max_size() << std::endl;
    }

Most of the time, you will need to access a specified element in a C++ vector.
To do that, you can use the [] selector function as shown below:

    using namespace std;

    int main() {
        vector<int> v { 1, 2, 3, 4, 5 };
        for (int i = 0; i < v.size(); i++)
            cout << v.operator[](i) << " ";
    }

If you want to replace a certain value with a new one, you can use the = operator:

    using namespace std;

    int main() {
        vector<char> v { 'C', '#' };
        vector<char> v1;
        v1.operator=(v);
        for (int i = 0; i < v.size(); i++)
            std::cout << v[i];
    }

C++ Vector: Useful Tips

- It is recommended to use a C++ vector if your data elements are not predetermined.
- As a template class, C++ vectors offer better efficiency and reusability.
- Compared to arrays, there are more ways to copy vectors in C++.
https://www.bitdegree.org/learn/c-plus-plus-vector
CC-MAIN-2020-16
refinedweb
931
56.05
C# Programming/Data structures

There are various ways of grouping sets of data together in C#.

Enumerations

An enumeration is a data type that enumerates a set of items by assigning to each of them an identifier (a name), while exposing an underlying base type for ordering the elements of the enumeration. The underlying type is int by default, but can be any one of the integral types except for char.

Enumerations are declared as follows:

    enum Weekday { Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday };

The elements in the above enumeration are then available as constants:

    Weekday day = Weekday.Monday;

    if (day == Weekday.Tuesday)
    {
        Console.WriteLine("Time sure flies by when you program in C#!");
    }

If no explicit values are assigned to the enumerated items, as in the example above, the first element has the value 0, and successive values are assigned to each subsequent element. However, specific values from the underlying integral type can be assigned to any of the enumerated elements (note that the variable must be type cast in order to access the base type):

    enum Age { Infant = 0, Teenager = 13, Adult = 18 };

    Age myAge = Age.Teenager;
    Console.WriteLine("You become a teenager at an age of {0}.", (int)myAge);

The underlying values of enumerated elements may go unused when the purpose of an enumeration is simply to group a set of items together, e.g., to represent a nation, state, or geographical territory in a more meaningful way than an integer could. Rather than define a group of logically related constants, it is often more readable to use an enumeration.

It may be desirable to create an enumeration with a base type other than int. To do so, specify any integral type besides char, as with base class extension syntax, after the name of the enumeration, as follows:

    enum CardSuit : byte { Hearts, Diamonds, Spades, Clubs };
By calling the .ToString() method on the enumeration, will output the enumerations name (e.g. CardSuit.Hearts.ToString() will output "Hearts"). Structs[edit] Structures (keyword struct) are light-weight objects. They are mostly used when only a data container is required for a collection of value type variables. Structs are similar to classes in that they can have constructors, methods, and even implement interfaces, but there are important differences. - Structs are value types while classes are reference types, which means they behave differently when passed into methods as parameters. - Structs cannot support inheritance. While structs may appear to be limited with their use, they require less memory and can be less expensive, if used in the proper way. - Structs always have a default constructor, even if you don't want one. Classes allow you to hide the constructor away by using the "private" modifier, whereas structures must have one. A struct can, for example, be declared like this: struct Person { public string name; public System.DateTime birthDate; public int heightInCm; public int weightInKg; } The Person struct can then be used like this: Person dana = new Person(); dana.name = "Dana Developer"; dana.birthDate = new DateTime(1974, 7, 18); dana.heightInCm = 178; dana.weightInKg = 50; if (dana.birthDate < DateTime.Now) { Console.WriteLine("Thank goodness! 
Dana Developer isn't from the future!"); } It is also possible to provide constructors to structs to make it easier to initialize them: using System; struct Person { string name; DateTime birthDate; int heightInCm; int weightInKg; public Person(string name, DateTime birthDate, int heightInCm, int weightInKg) { this.name = name; this.birthDate = birthDate; this.heightInCm = heightInCm; this.weightInKg = weightInKg; } } public class StructWikiBookSample { public static void Main() { Person dana = new Person("Dana Developer", new DateTime(1974, 7, 18), 178, 50); } } There is also an alternative syntax for initializing structs: struct Person { public string Name; public int Height; public string Occupation; } public class StructWikiBookSample2 { public static void Main() { Person john = new Person { Name = "John", Height = 182, Occupation = "Programmer" }; } } Structs are really only used for performance reasons or, if you intend to reference it by value. Structs work best when holding a total equal to or less than 16 bytes of data. If in doubt, use classes. Arrays[edit] Arrays represent a set of items all belonging to the same type. The declaration itself may use a variable or a constant to define the length of the array. However, an array has a set length and it cannot be changed after declaration. // an array whose length is defined with a constant int[] integers = new int[20]; int length = 0; System.Console.Write("How long should the array be? "); length = int.Parse(System.Console.ReadLine()); // an array whose length is defined with a variable // this array still can't change length after declaration double[] doubles = new double[length];
https://en.wikibooks.org/wiki/C_Sharp_Programming/Data_structures
CC-MAIN-2014-10
refinedweb
787
54.73
10 Areas Where Tooling Makes Node.js Developers More Productive, Part 1

Maybe you are a Node.js developer, JavaScript community veteran, passionate supporter, or generally into modern software development. Or perhaps you want to learn how to be more productive with various tools that successful Node.js professionals are using. Either way, you've come to the right place. In this blog series, we will go over the most important categories of tools that are closely related to all aspects of successful nodestering: from development environments, frameworks, and test and build tools to continuous integration, delivery, and monitoring. So, without further ado, let's start our journey through the world of Node.js tools.

Disclaimer: You will probably notice that the tools showcased are a bit pro-backend or DevOps-oriented. I apologize for that, but keep in mind that many of the tools and their applications go beyond just backend/frontend/desktop/mobile: their purpose is to build good quality software in general.

1) IDEs

What is a developer without a good integrated development environment (IDE)? The answer is simple: unproductive. A good IDE will not let you leave its windows, menus, shortcuts, etc. ever (except to take a look at the browser). Code completion, support for various tools, plugins, debuggers, good performance, and good themes are simply the prerequisites for a good IDE, whether it's free or a paid product.

Since Node.js development on Windows is no longer that uncommon, .NET developers-turned-nodesters will appreciate Visual Studio or any of the many general (cross-platform) editors like Sublime Text, Atom, Brackets, Visual Studio Code, or even NetBeans.
They all have the basics right and are good tools. Give them a try. But there is one that has it all and much more: WebStorm. It's a well-done commercial IDE with support for most modern frameworks, build tools, and a load of plugins to make your work easier and more enjoyable. It is a de facto JavaScript cross-platform powerhouse for developing everything from backend services to intricate web apps. Some of the most interesting features for Node.js developers might be the ability to easily do memory heap analyses and CPU profiling with visual representation via flame charts. Read more about it here.

2) Build Tools

As with any other software project, you need to have some tools to help you build your app/service/project, prepare it for a release (or release for a specific environment like the load environment), or simply run test suites. The Node.js (and JavaScript) ecosystem is rich with tools to jump in. Some of them, like Gulp, Grunt, Brunch, or webpack, are so well-known and rooted that they are featured in some IDEs, and you don't need to be a command line ninja to use them. They allow you the flexibility to write code and make your life much easier.

Grunt declares itself a JavaScript task runner, but it's really a powerhouse that allows you to do a lot with little code, especially if you are using some of the plugins from its rich plugin ecosystem. Gulp touts itself as an easy-to-learn and easy-to-use streaming build system and is very popular for its flexibility. It aligns really well with Node.js streams and their ability to pipe. In general, there is a lot of overlap in terms of satisfying developer needs for both of these, but for a more generalized understanding, we can say that Gulp is more focused on writing code and Grunt on writing configurations. It's up to you to evaluate your needs and try to find a good fit for your projects.

Another approach to builds and automation in the Node.js world is using npm-scripts.
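An illustrative package.json scripts section is sketched below; the script names and the eslint/mocha/webpack commands are placeholders, not taken from the article (npm's pre/post hook naming, however, is a real npm feature):

```json
{
  "name": "example-app",
  "scripts": {
    "lint": "eslint src/",
    "pretest": "npm run lint",
    "test": "mocha --reporter=spec test/",
    "build": "webpack --mode=production"
  }
}
```

With this in place, npm test would first run the lint script via the pretest hook and then invoke mocha, with no globally installed tools required.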
This is due to its simplicity (remember the KISS principle?) and tight integration with Node's faithful counterpart, the famous node package manager (npm). Simply add them to your package.json file and run them via npm run <script_name>. In addition, npm provides hooks, and you can run local modules you might have, so you don't need to install them globally.

3) Transpilation

If you need or want to write your code in a superset of JavaScript and use something like TypeScript, CoffeeScript, or Spider, or if you just want to have the latest ES2015 or ES.next features at your fingertips, you must have heard of transpilation. Transpilation is the process of compiling your code, via a transpiler, into code that can run safely, in this case, in a Node.js environment and also in the modern browser (like the newer Chrome and Firefox browsers and IE10+). Node.js does not run CoffeeScript code out of the box. It runs JavaScript, and the transpiler does the conversion of CoffeeScript to JavaScript. Transpilation from a newer version of JavaScript to an older one is a common practice, too (e.g. you write your code in ES6 and transpile it to ES5 so it can safely run in browsers or under Node.js), and it gives you tomorrow's features of the language today. The king of transpilation in the Node.js world for JavaScript-to-JavaScript transpilation is a tool called Babel.

    #!/usr/bin/env bash
    # install babel cli and preset
    npm install --save-dev babel-cli babel-preset-es2015

    # do the transpiling of original.js file to transpiled.js
    babel original.js --out-file=transpiled.js

    // small example
    let tests = ['one', '2', 'III'];
    const test = () => console.log(...tests);
    test();

    'use strict';

    var tests = ['one', '2', 'III'];

    var test = function test() {
      var _console;
      return (_console = console).log.apply(_console, tests);
    };

    test();

But it is not the only one. Check out Bublé or Traceur.

4) REST API Frameworks and Services

If you are using Node.js to build REST API services (microservices, anyone?)
and haven't opted for a more esoteric framework, then you are left with a few well-known and battle-tested frameworks: Restify, HAPI, and Express.js.

Express.js is probably the most widespread and well-known framework, used for all sorts of web (app) development, including API development. Not so long ago, it was considered too bloated for API development, but in its latest incarnation (version 4) this is no longer the case. It has been seriously re-architected to be light, and all the niceties you may remember from previous versions that were available almost out of the box are now available as separate modules that you plug in and use depending on your needs (as with anything else in Node's module ecosystem).

Restify is a popular choice for REST APIs. It is a very straightforward framework with API development in mind only. It has some features that set it apart from the others, like out-of-the-box support for DTrace and built-in REST clients for easy and fast consumption of other services (very useful if you are building an API gateway microservices component).

HAPI is another great framework that focuses on simplicity and performance, but it is not as widespread as the previous ones and might lack in ecosystem richness and widespread production maturity. Definitely worth consideration when building an API with performance in mind.

StrongLoop LoopBack is another open-source framework that could help you out in this area. It's a more comprehensive toolset that generates a lot of the code for you (models, database migrations) and can get you very far in a short time. But as with any bigger framework, you delegate your sense of security and knowledge of the code, and risk a possible lock-in that might be costly in the end. Surely this is a good contender, as it is backed by IT giants like IBM, and the tooling being developed for and around it is, and will be, more and more impressive, maybe even enterprise-production grade.
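At their core, all of these frameworks dispatch an incoming method-and-path pair to a handler function and turn the handler's result into an HTTP response. A tiny, framework-free sketch of that dispatch idea (the route names and response shapes here are illustrative, not any framework's actual API) might look like this:

```javascript
// Minimal sketch of the routing/dispatch core that Express, Restify, and
// HAPI wrap with real routing, middleware, and content negotiation.
// Route names and response shapes are illustrative only.
const routes = {
  'GET /ping':  () => ({ status: 200, body: { pong: true } }),
  'GET /users': () => ({ status: 200, body: [{ id: 1, name: 'ada' }] }),
};

function handle(method, path) {
  const handler = routes[`${method} ${path}`];
  return handler ? handler() : { status: 404, body: { error: 'not found' } };
}

console.log(handle('GET', '/ping').status);  // 200
console.log(handle('GET', '/nope').status);  // 404
```

A real framework adds path parameters, middleware chains, and serialization on top of this lookup, but the request-in, handler, response-out shape is the same.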
To test your REST API services, you will need a good REST client. If you like curl, fine, knock yourself out, but for those of us who like something more intuitive, visual, and full of saving/categorizing/sharing features, Postman is a perfect tool.

5) Test Frameworks and Tools

Node.js, being powered by JavaScript, a weakly typed language, makes automated testing an even more important and time-consuming part of software development. Enter Mocha and Chai. Mocha is a flexible testing framework (an enabler of test suites, so to speak) that is usable in a browser or Node.js and is very widespread in combination with Chai, an assertion library. This combination is mostly used in backend development testing and, with a set of plugins, can be used to fulfill the needs of almost any type of test (unit, integration, functional, smoke).

#!/usr/bin/env bash
# make sure you have mocha installed
npm i mocha -g
# run tests with mocha
mocha --reporter=spec test.spec.js

var chai = require('chai');
var expect = chai.expect;
var assert = chai.assert;

var testObj = {
  name: "test",
  sub: { name: 'test sub' },
  numbers: [1, 2, 3, 4],
  hasNumbers: true
};

describe('Test Suite', function () {
  describe('expect tests', function () {
    it('should be a valid testObject', function () {
      expect(testObj).to.be.an('object').and.is.ok;
      expect(testObj).to.have.property('sub').that.is.an('object').and.is.ok;
      expect(testObj.sub).to.have.property('name').that.is.a('string').and.to.equal('test sub');
      expect(testObj).to.have.property('numbers').that.deep.equals([1, 2, 3, 4]);
      expect(testObj).to.have.property('hasNumbers', true);
    });
  });

  describe('assert tests', function () {
    it('should be a valid testObject', function () {
      assert.isOk(testObj);
      assert.isObject(testObj);
      assert.propertyVal(testObj, 'name', 'test');
      assert.property(testObj, 'sub');
      assert.propertyVal(testObj.sub, 'name', 'test sub');
      assert.deepEqual(testObj.numbers, [1, 2, 3, 4]);
      assert.typeOf(testObj.hasNumbers, 'boolean');
      assert.isTrue(testObj.hasNumbers);
    });
  });
});

On the frontend side, it is more common to see a combination of Karma.js and Jasmine. Karma.js is a framework-agnostic test runner that lets you run tests in real browser environments or in a headless PhantomJS instance. Jasmine is a framework for testing JavaScript code that provides the ability to define tests and perform expectation checks in a behavior-driven development style (much like Chai’s BDD flavor). For a more comprehensive example of how to install, configure, and use Jasmine and Karma, take a look at this AngularJS testing article.

It's good to know that the JUnit "standard" for describing test results (an XML file) is widely accepted, tool-rich, and widespread, so these test tools either gravitate to it or have strong support (via plugins or modules) to give you that desirable JUnit output with little effort.

Was This List Useful?

Stay tuned for more recommendations for Node.js developers in Part 2.

Published at DZone with permission of Mihovil Rister, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/10-areas-where-tooling-makes-nodejs-developers-mor
Today is the day of unhelpful post lines. This particular post is about creating a new project from the void – where there was nothing, adding some content and then interacting with the feature that we have created. I am in the process of determining if I can remove one of our modifications, known as the Vessel Modification – this series of posts will demonstrate the steps and the logic that I go through in order to a) establish that the SDK is an appropriate tool for this, and b) prove that various functionality I need in the Vessel Modification can be duplicated. You’ll note that in this post I am creating some pretty useless code – but this is all about discovery and proof – don’t consider it polished or finished. Design of a coherent solution will come later. For those that don’t have the SDK, it shows how over a period of an hour or so you can install, create and run your own code.

I’m going to assume that you have had a good skim through the Developer Guide, and be aware that some of what I am posting has already been covered on the lawsonsmartoffice.com blog. As mentioned in part 1 () , the format of this sequence of posts is to illustrate how I have explored using the SDK. I will be the first to admit I haven’t read the Developers Guide from start to finish – the chances are I never will; reference it, certainly 🙂

The first thing that we need to do is create a new project in Visual Studio. Under the Installed Templates, select the Visual C# root and you should see a couple of Lawson Smart Office templates. We need to select the Lawson Smart Office Client. We will be adding a feature to the project, the feature being our code. I’m going to call the project LSOTest.

We’ll now add our feature project. Right click on the solution and select Add -> New Project. We will add a “Lawson Smart Office Feature” and I am going to call it “MyInitialTest”. Now let us run it and see what we have. We have to log in and then we get to the base canvas.
Notice that we have something missing from the Navigator – M3 Transactions 😦 If during our development or testing we want to launch M3 programs, we need to add the mforms assembly to our client project (LSOTest). Right click on the References and select “Add Reference”, then browse to your SDK directory (or you can type in the %LSOSDKBin% that you set in the SDK installation steps), which should take you directly to your LSO SDK's bin directory. In it we have our mforms.dll assembly; double click on it, then click Add and finally Close. Under LSOTest I now have a new entry under the references which says MForms. Now run the application. Navigator now has M3 Transactions and we can actually run normal M3 programs.

We also need to add a reference back to our MyInitialTest project. Again, right click on the References, select Add Reference -> Projects -> Solution and select “MyInitialTest”.

The next thing that I want to be able to do is launch MyInitialTest. And I want the URI prefix to be sac – so I would run my program by typing sac://MyInitialTest. To do this, in my solution I go down to the MyInitialTest project and open up the MyInitialTest.manifest file. I am interested in changing the Application’s “scheme”.

<Application scheme="tempscheme" applicationGroup="SmartClient" assemblyName="MyInitialTest" factoryClass="MyInitialTest.MyInitialTestApplication" />

You can see that we have a scheme of tempscheme. I will change that to sac:

<Application scheme="sac" applicationGroup="SmartClient" assemblyName="MyInitialTest" factoryClass="MyInitialTest.MyInitialTestApplication" />

I then run my solution and in the Navigator I type sac://myinitialtest and I get Pretty nifty really! If we take a look at the code itself, we will see that the template has created the MyInitialTestApplication.cs with code that gives us a really good start on doing something useful.
Now because I am looking at proving that I can achieve certain base goals at the moment, I’m not really interested in aesthetics – I just want to prove a few things before I go about design considerations. So, with that in mind, we will dump some controls into our example. I’m going to add a user control – under MyInitialTest I’ll right click and select Add New Item and I’ll go down to WPF -> User Control. I’m just going to leave the name as is (UserControl1.xaml).

<Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="45,40,0,0" Name="button1" VerticalAlignment="Top" Width="75" />
<TextBox Height="23" HorizontalAlignment="Left" Margin="10,10,0,0" Name="textBox1" VerticalAlignment="Top" Width="120" />

Now I shall actually change the code which will add my newly created control to the window, under the MyInitialTestApplication.cs LaunchTask() function. This will add my control to a StackPanel, and it will also keep the template’s code which displays the “MyInitialTest”.

StackPanel spStackPanel = new StackPanel();
spStackPanel.Orientation = Orientation.Vertical;
spStackPanel.Children.Add(new UserControl1());
spStackPanel.Children.Add(content);
host.HostContent = spStackPanel;

And we shall run the application and load up my feature. We don’t have any events created yet so not a lot will happen. So we will now add an event.
If we go to the XAML of my UserControl1.xaml, against the button I am going to add a handler for the Click event; Visual Studio will create a button1_Click handler for me and also add the code for the handler to UserControl1.cs.

<Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="45,40,0,0" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" />

And in UserControl1.cs, the event handler with a call to display a MessageBox with the text in the TextBox:

private void button1_Click(object sender, RoutedEventArgs e)
{
    MessageBox.Show("'" + textBox1.Text + "' was entered in to the Textbox");
}

A quick run and we can see everything is working according to plan. Don’t forget that Smart Office provides its own range of ‘MessageBox’es – they should be what is used when you are deploying your solution. MessageBox.Show() is just a lot easier to remember 😉

We’ve now shown we can add our own controls and event handlers. Can we do something actually useful now? Let us launch a program based on what is entered into our TextBox if it is prefixed by mforms:// This requires us to add the assemblies to our UserControl1.cs file and then call LaunchTask. Assemblies added:

using Mango.Services;
using Mango.UI;
using Mango.UI.Core;
using Mango.UI.Services;
using Mango.UI.Services.Mashup;

private void button1_Click(object sender, RoutedEventArgs e)
{
    MessageBox.Show("You clicked a button and the text in the textbox was" + Environment.NewLine + textBox1.Text);
    if (textBox1.Text.StartsWith("mforms://") == true)
    {
        DashboardTaskService.Current.LaunchTask(new Uri(textBox1.Text));
    }
}

We run our solution and type mforms://mms001 into our TextBox and hit enter; we then have success. So, from the perspective of my Vessel Modification, I have access to what appears to be almost all of the M3 functionality I need. I know through APIs and WebServices (wrapped SQL queries) that I can extract and update the information that I need for the Vessel Modification.
On the basis of what I have discovered so far, the SDK is a good fit for what I am hoping to achieve. The really nice thing being that I can write my application's data back to the same tables as the real Vessel Modification – meaning that my existing reports and processes won’t need any modification. In the next post we will start having a good look at the actual solution to replacing the modification itself.

For those of you that use JScripts – the help files associated with the SDK are very helpful. I spent many many hours writing code to search for and validate assumptions I had made. It’s very useful indeed! 🙂

Hi. Can you help me please. The version of LSO I’m running is 10.2.1.037. I loaded the Infor Smart Office SDK – M3 10.2. I followed all the steps and I ran Samples.sln successfully and I can debug the solution. My problem is that I can’t see forms in the designer: “Invalid markup”. If I add a usercontrol and add some controls I can see them without any problem. But, for example in DesignSystemSample I can’t see the controls of ControlSamplePanel.xaml. In XAML I have errors, for example:

the name “DesignSystemWindow” does not exist in the namespace “clr-namespace:Mango.DesignSystem;assembly=DesignSystem”
the name “ThemeIconButton” does not exist in the namespace “clr-namespace:Mango.DesignSystem;assembly=DesignSystem”
the name “ThemeIconButton” does not exist in the namespace “clr-namespace:Mango.DesignSystem;assembly=DesignSystem”
…

I added references to all the dlls in the bin directory without any result. Thank you, Salem

Does it compile and work? If so, then you can add a resource dictionary to your user control so you can see the Smart Office controls and see the formatting as it would be displayed in Smart Office. When you have finished adjusting the formatting of the xaml, I comment out: The errors will return and the formatting will look bad, but it will compile.
This way, your application will load the default theme of Smart Office, rather than the explicit style we set. This is quite noticeable with ThemeIconButtons. If your code doesn’t compile with these errors then you probably have a syntax error in your xaml or are potentially missing an assembly declaration in the xaml. Cheers, Scott

Sorry, I couldn’t post the xaml in the comments – let's try that again…

<UserControl.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source=”/Mango.Skin;Component/Resources/FrogPond/FrogPondStyle.xaml” />
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</UserControl.Resources>

And I would comment out this line

<ResourceDictionary Source=”/Mango.Skin;Component/Resources/FrogPond/FrogPondStyle.xaml” />

and compile before deploying.

Thank you Scott. I added the xaml and I deleted the directory \AppData\Local\Microsoft\VisualStudio\12.0\Designer\ShadowCache (). It works, thank you 🙂

Hi Salem, you shouldn’t query the database directly. You should use MWS to wrap select statements in webservices. In the Smart Office profile there is a setting that you can retrieve which should be populated with the WebService address – though few installers seem to. (The Smart Office SDK documentation has a section on retrieving values from the Smart Office profile.)

FWIW: when I need to store configuration settings for scripts, I will typically store them in an xml file on a webserver – usually under a directory I create beneath the mne directory, as we can retrieve the MNE runtime URL from the profile – however given that the data that has traditionally been stored in the mne directory is getting moved into a database, it’s probably not the best long term choice. The SDK mentions other methods of storing application settings, though I haven’t pursued it due to a caveat in the documentation that it was likely to change. Cheers, Scott

Hi and thanks for this useful post.
I’m following your steps but, in the beginning, after I’ve added both client and feature projects, when trying to launch the application and get the base canvas displayed, a message to relaunch the application is always shown and, after that, the splash screen with “bouncing balls” rests without opening anything. Could you help me, please? Thanks, JL

Hi JL, you need to install Microsoft SQL Server Compact 3.5 SP1. But I guess you figured this out as your post is from May 🙂 Regards, Jesper
https://potatoit.kiwi/2013/02/24/smart-office-sdk-first-project-part-2-in-the-beginning-there-was-nothing/
Check out this quick tour to find the best demos and examples for you, and to see how the Felgo SDK can help you to develop your next app or game! This tutorial is contributed by Martin, one of our customers: Learning to build games with Felgo.

Visual items in Felgo are generally derived from the Item base class – this is the base class for all visual objects (objects that can, but not necessarily have to, be displayed). In the development of a Felgo app, Item objects are usually declared and then specialised. Here's an example containing several of those specialisations:

LabelBox.qml

import Felgo 3.0
import QtQuick 2.0

Rectangle {
    id: labelBox
    // ...
    Component.onCompleted: {
        console.debug("LabelBox has been constructed")
    }
}
https://felgo.com/doc/felgo-qml-tree-basics/
Figure 1 shows how an AWTEventMulticaster is used to construct a list of listeners. A static factory method (add()) is used to create individual multicaster instances -- you can't create them with new. As you can see in Figure 1, this factory method is passed the current head-of-list reference and the listener to add. If either of these is null, it just returns the other, effectively creating a one-element list. If neither argument is null, then an AWTEventMulticaster object, essentially a binary-tree node, is created and initialized so that one child points to the existing list and the other points to the newly added listener. It continues in this way, building a binary tree whose interior nodes are all AWTEventMulticasters and whose leaf nodes are standard AWT or Swing listener objects. Notification is done with a simple, recursive tree traversal. (Which can be a problem. You'll note in Figure 1 that the tree will most likely degrade into a linked list, and recursive traversal of a linked list can use up a lot of runtime stack.) As I mentioned earlier, the AWTEventMulticaster implements all of the standard listener interfaces (actionPerformed(), mouseClicked(), and so on). The multicaster's overrides do nothing but pass the message on to their children, using instanceof in the case of the leaf nodes to ensure that the message can be received by the actual object.

Immutable objects and blank finals

The most important feature of the AWTEventMulticaster is not actually evident in the diagram. The multicaster is an immutable object: all of its fields are declared final, but are initialized in the constructor rather than the declaration, like this:

public class AWTEventMulticaster implements ComponentListener, ...
{
    protected final EventListener a, b;

    protected AWTEventMulticaster(EventListener a, EventListener b)
    {
        this.a = a;
        this.b = b;
    }
    //...
}

Since everything's final, the object's state can't change once it's created.
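Since Figure 1 itself isn't reproduced here, a minimal sketch of the same idea may help: an immutable binary-tree node built only through a static factory, with notification as a recursive traversal. The names below are simplified and illustrative; this is not the real java.awt.AWTEventMulticaster source.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the AWTEventMulticaster idea (illustrative names):
// an immutable tree node whose leaves are the actual listeners.
interface Listener { void onEvent(String event); }

final class Multicaster implements Listener {
    private final Listener a, b;   // blank finals: set once, in the constructor

    private Multicaster(Listener a, Listener b) { this.a = a; this.b = b; }

    // Factory method, like AWTEventMulticaster.add(): you never 'new' one directly.
    static Listener add(Listener head, Listener toAdd) {
        if (head == null)  return toAdd;   // one-element "list"
        if (toAdd == null) return head;
        return new Multicaster(head, toAdd);
    }

    // Notification is a simple recursive tree traversal.
    @Override public void onEvent(String event) {
        a.onEvent(event);
        b.onEvent(event);
    }
}

public class MulticasterDemo {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Listener head = null;
        head = Multicaster.add(head, e -> log.add("first:" + e));
        head = Multicaster.add(head, e -> log.add("second:" + e));
        head = Multicaster.add(head, e -> log.add("third:" + e));
        head.onEvent("click");
        System.out.println(log);   // [first:click, second:click, third:click]
    }
}
```

Because each add() wraps the old head in a new left child, the tree leans to one side, which is why the article notes it tends to degrade into a linked list.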
A Java String is immutable, for example. Immutable objects have a significant advantage over normal objects in multithreaded environments: you never have to synchronize access to them, because they can't change state. Consequently, replacing the Collection in the earlier listings with a multicaster eliminates the synchronization overhead entirely. A final field that is initialized in the constructor rather than the declaration is called a blank final. You should use them whenever you can, which is unfortunately not as often as you'd like. An oft-reported bug that's been around since the early beta versions of the JDK 1.1 compiler is still there in the Java 2 compiler: the compiler sometimes incorrectly spits out the hard error "Blank final may not have been initialized," even when the field in question has been initialized. This bug tends to manifest only when you use a lot of inner classes. Because I don't want to corrupt the global or package-level name space with class names that the user could care less about, I usually prefer removing the final to moving the inner class out to the global level.

Using the multicaster

The multicaster exhibits a lot of very desirable behavior. For example, you can safely add listeners to the list while traversals are in progress because the tree is constructed from the bottom up. The new listeners will not be notified by any of the in-progress notification threads, but adding a listener will not damage the list either. Listener removal elicits the following question: how do you remove nodes from a tree whose nodes can't be modified? The flip answer is that you build another tree. Figure 2 shows the easy scenario. To effectively delete node C, you create a new root node and make its child pointers reference node D and node C's right subtree. If you overwrite subscribers with new_list -- the normal case -- there will be no more references to the gray-colored nodes, and they will eventually be garbage collected.
Figure 3 shows the more difficult process of deleting a node that's further down in the tree (C again). Here, you have to build a second tree that looks exactly like that part of the original tree that's above the deleted node. As before, the gray-colored nodes are subject to garbage collection once subscribers is overwritten (and any traversals positioned in that part of the tree complete).

The first time I looked at the AWTEventMulticaster source, I thought the code was weird and unnecessarily complex. On reflection, though, I realized that there's a lot of neat stuff here. The more I looked at it, the more I liked it. Multiple threads can happily add and remove nodes without conflicting either with each other or with threads that are traversing the tree, and multiple threads can traverse the tree simultaneously. Moreover, absolutely no synchronization is required to do any of this. The data structure that's used is no larger than the doubly linked list that I used earlier, and the overhead of copying part of the tree when you do deletions is a lot less than the overhead of copying the entire subscriber list every time you publish an event.

So, not being one to look a gift algorithm in the mouth, I wrote up my own general-purpose version of the multicaster, based on the AWTEventMulticaster. It's in Listing 9. It's remarkable how little code there is here. (Ahh, the miracle of recursion!) All the complexity, in fact, is in the removal. (I'll leave it as an exercise to the reader to puzzle through how it works -- opacity is the down side of most recursive algorithms.) This implementation is very general purpose in that it can publish any Object to any class that implements the generic Subscriber interface (from Listing 2). The final task is to modify our publisher to use the multicaster. I've done that in Listing 10 (for this month's final entry in the worlds-most-complicated-way-to-print-"hello world" contest).
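Listing 9 itself isn't reproduced in this excerpt, so here is a minimal sketch of the remove-by-rebuilding idea described above. The names are simplified and illustrative (a generic Subscriber interface in the spirit of the article's Listing 2); this is not the original code.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "removal builds a new tree": nodes are immutable, so remove()
// returns a rebuilt tree that simply omits the target. Illustrative names;
// this is NOT the article's Listing 9.
interface Subscriber { void receive(Object event); }

final class GenericMulticaster implements Subscriber {
    private final Subscriber a, b;   // blank finals: the node never changes

    private GenericMulticaster(Subscriber a, Subscriber b) { this.a = a; this.b = b; }

    static Subscriber add(Subscriber head, Subscriber s) {
        if (head == null) return s;
        if (s == null)    return head;
        return new GenericMulticaster(head, s);
    }

    // Rebuilds only the interior nodes on the path to the removed leaf;
    // untouched subtrees are shared with the old tree.
    static Subscriber remove(Subscriber head, Subscriber target) {
        if (head == null || head == target) return null;
        if (!(head instanceof GenericMulticaster)) return head;  // some other leaf
        GenericMulticaster m = (GenericMulticaster) head;
        Subscriber a2 = remove(m.a, target);
        Subscriber b2 = remove(m.b, target);
        if (a2 == m.a && b2 == m.b) return head;  // target not in this subtree
        return add(a2, b2);                        // new interior node
    }

    @Override public void receive(Object event) {  // recursive traversal
        a.receive(event);
        b.receive(event);
    }
}

public class RemoveDemo {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Subscriber s1 = e -> log.add("s1:" + e);
        Subscriber s2 = e -> log.add("s2:" + e);
        Subscriber s3 = e -> log.add("s3:" + e);

        Subscriber subscribers = GenericMulticaster.add(
                GenericMulticaster.add(s1, s2), s3);
        subscribers = GenericMulticaster.remove(subscribers, s2);  // rebuilds the tree
        subscribers.receive("hello");
        System.out.println(log);   // [s1:hello, s3:hello]
    }
}
```

Note how remove() never mutates a node: threads traversing the old tree keep seeing the old (still valid) structure, while the caller swaps in the new root, which is the property the article is praising.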
This version is built on the one-thread-per-publication model discussed earlier.

Wrapping up

So that's it for this month. The Observer pattern is enormously useful when you need to write code that can be used in any context. Since the publisher knows nothing about the subscribers (except that they implement the Subscriber interface), the publisher is truly reusable in the technical OO sense. AWT/Swing obviously leverages Observer in its event model, but I use this design pattern heavily in non-UI situations as well. Next month I'll present the last of the class-related solutions to multithreading problems: a reader-writer lock that lets you control access to a shared global resource. I'll also discuss Critical Sections and Singletons. Subsequent columns will look at a few architectural solutions to threading problems such as Synchronous Dispatchers and Active Objects.
https://www.javaworld.com/article/2076375/core-java/programming-java-threads-in-the-real-world--part-6.amp.html
@@ -5,6 +5,8 @@
WISH? monotonic clocks times/GetTickCount for coarse corrections? -).

3.48 Thu Oct 30 09:02:37 CET 2008
- further optimise away the EPOLL_CTL_ADD/MOD combo in the epoll

@@ -11,6 +11,8 @@
libev - a high performance full-featured event loop written in C

// a single header file is required
#include <ev.h>
#include <stdio.h> // for puts

// every watcher type has its own typedef'd struct
// with the name ev_TYPE
ev_io stdin_watcher;

@@ -41,10 +41,6 @@
#include <stdlib.h>
#include <assert.h>
#ifndef WIN32
# include <sys/time.h>
#endif
#ifdef EV_EVENT_H
# include EV_EVENT_H
#else

@@ -50,6 +50,12 @@
extern "C" {

/* we need sys/time.h for struct timeval only */
#if !defined (WIN32) || defined (__MINGW32__)
# include <time.h> /* mingw seems to need this, for whatever reason */

struct event_base;

#define EVLIST_TIMEOUT 0x01
https://git.lighttpd.net/mirrors/libev/commit/7c44ec668f74c5de62708bf0a5265d49df11a6aa
The world's most viewed site on global warming and climate change

Thanks again for all the great support we receive, both in emails and comments, and of course for your generous financial contributions as well. Anthony Watts, Charles Rotter

Maybe you could incorporate as a 501(c)(3) educational foundation. Then those of us whose employers match charitable gifts would be able to double our money!

Me too. My DAF only allows donations of $250 or more—and it has to be to a 501C3. It should be easy, but …

If you become a 501(c)(3) organization, you come under strict IRS rules and are subject to audits and to official claims (just or false) being filed against you. There are also rules about having a Board of Directors, meetings, keeping certain records, filing annual tax returns, etc. In addition, if the organization is headquartered in California and registered as a California Nonprofit Public Benefit Corporation (equivalent to IRS 501(c)(3)), it likewise falls under a bunch of California statutes on what it must, and must not, do. Please give very careful consideration to the $ advantage (bestowed to donors) being worth the hassle to those running the WUWT business. There may be better paths for WUWT . . . a good lawyer would know.

Trust them as they’re from the Gummint and they’re here to help. Yeah riiiiight! And around 2010 the IRS and FBI got together to stop 501c3’s who had non-progressive leanings (Tea Parties)… You know “facts and some science” are non-progressive.

How about exactly 100/year in equal monthly installments?

It is much harder to get people to make monthly than one-time pledges, even if the monthly ones are 1/12 of what they’re giving one-off. But it’s the best sustainability model.

That’s the best use of the word sustainability I’ve seen for a long time.
I agree a monthly contribution is easy to set up most of us have direct debits set up with our banks, it’s easy and once set up your contribution is automatic and won’t need reminders. At age 83 I don’t make long term commitments but will respond with one time donations when asked and given a mailing address for checks. I only use PayPal or send written checks as needed. PayPal cancels sites for politics. @jdgalt, yep they do. Paypal just stopped processing payments to the TimWorstall.com site based on “inappropriate content” when all Tim does is comment and give opinion on news stories mainly about economics. I’m 84 and in the same boat. One-time donations so my wife, who spends near-zero time on the internet, doesn’t have to hunt down and terminate any long-term commitments. You can use Debit or Credit card too. I can’t match your 83, but similarly, I don’t make long term commitments, but intend to make one time donations on an annual basis. I’ll miss the ads for solar panels, but happy to contribute at least once per year. I have gained greatly from WUWT articles and the many reader comments. Thanks! badEnglish That raises a good point…your click through rates are probably much lower than other climate change focused sites, since we are much less gullible, nor are we interested in the latest green scams. That could be why your ad rates are falling. In any case, the nice thing about an ad-funded site is that these rent-seeking, green-washing companies that advertise are wasting their money…and that’s a good thing. Perhaps we all should click on a few of those green’s ads, push up the advertising revenue and allow them to subsidise the site. Then you’d be able to tell them that you are on the Big Green teat, like them. This is exactly what I do plus I make sure my ad-blocker is off for WUWT is this site a non-profit 501c-3? If possible I would like to donate by check. Gene -is this site a non-profit 501c-3? 
I doubt WUWT would want the enormous regulator burden involved. It is not a simple matter of a single National approval. All states require their own annual reporting regime. You wouldn’t have to register in states where your 501(c)3 has no physical presence. But you would have to file an annual return (Form 990) which is then public information. The 990s of every nonprofit in the US can be viewed at guidestar.org. The form doesn’t require listing all members but you do have to name major donors and the major things you spend the money on. I don’t think it’s worth it. You may be interested in following Charles Ortel is CLOSING IN- a video broadcast with Jason Goodman – crowdsourcethetruth.com Like the majority of fully independent broadcasters, they are being constantly disrupted by Big Tek. Charles Ortel, has a pedigree, in exposing established charities-frauds. I don’t mind ads if they aren’t so invasive. I resent them taking over my browser and moving stuff around on the screen. Or covering half of it. They are VERY invasive here. I’m currently using MS Edge, and the pop-ups show at the bottom of the screen. I just hit the “X” to clear them. Not that onerous, really. I use edge on my work computer, and the few times I’ve visited the site from it, hitting the “X” (when it’s available) only temporarily removes the add, 15 seconds or so later another ad gets served. And not all the ads have an “X” I use Firefox with uBlock Origin and see no ads at all… Of course, that’s one reason why ad revenue is falling. I hope you’ve been donating time or money regularly. I use an adblocker. But it’s possible to temporarily exclude WUWT each session or permanently. Perhaps if enough of us with an ad blocker allow allowed WUWT ads at times … (I just “paused” on this site.) I also use Adblock Latitude on the Pale Moon browser, but I also have (and will continue to) donate. I allowed ads here a few hours ago. Left the site, did other stuff. I just came back and still no ads. 
I’m not complaining about no ads but I wonder why? Why? Few or no accumulated tracking cookies. Folks that are annoyed by ads might clean out their cookies and compare the experience before and after. I use an ad blocker now after ads had become massively intrusive on WUWT and other sites. I cleared cookies after almost every internet browsing session, but it made no difference. Same here, i did it to save data being stolen from my free data allowance, otherwise i have to pay, but a free 5 gigs a week on the sim card i use in my dongle in my laptop is all i need. I tend to get some set of default ads, which seem to be the most obnoxious, when my cookies are cleared. I WISH they would be more relevant. More and more sites detect ad blockers and instruct you to turn it off or leave. And the ad blockers don’t seem to do much anymore. I had to use an ad-blocker for my sanity! The intrusive nature of so many ads, especially those with endlessly moving or changing graphics are just maddeningly distracting. After Firefox ‘upgraded’ me, I lost the ad-blocker; got really fed-up searching for a replacement that would work and ended up uninstalling Firefox. Installed DDG, thank goodness. Chiefio has a couple of ads, easily deleted while reading. Those ads aren’t relevant for me. We have donated here in the past and would like to again, the problem is that we are pensioners and there are several sites we’d like to support. As a rule, what we can manage goes to Jo Nova. I should have added, any request/instruction to turn off the ad blocker results in my leaving a site. I appreciate why they ask but if they have important info they are keen to have disseminated, then they need to have a rethink. A lot of ads are too large, too bright, too ‘busy’ or too salacious and tasteless for me to be willing to tolerate them! The reason some people use ad blockers, is that some ads load adware onto unsuspecting users’ computers. 
If ads were known to be safe, that would be one reason for allowing them. Second, if one could pick ads about things they are interested in, they would probably want to look at them. Just a thought… And yes, I would consider donating once per year.

Indeed. The bad apples (adware/malware-infested ads) spoil the bunch.

I don’t see why ad revenue should be different whether I don’t see the ads thanks to my ad blocker, or just see but ignore them. However, I have to agree that currently available ad blockers are very shoddy pieces of programming, because “the other side” so easily recognizes them and then demands or forces one to turn them off. Done properly, there should be no way for the advertiser to find out that an ad blocker is being used at all. In fact, a well-designed ad blocker should also incorporate routines that simulate occasional clicking on ads, so that advertising revenue is generated on the pages one browses using the ad blocker, to benefit the makers of those sites, because the advertiser gets the signals he is paying for, namely that his ad has been noticed and reacted to. In the end, the advertiser pays for “clicks” and “views” – it is said nowhere that those must be carried out by a human, and AFAIK the advertiser has no right to snoop on who is doing the clicking and looking anyway! All this must of course happen without ever transmitting actual personal or financial data of the human user, or letting that user see any of the ads or the robotically clicked-through links. So to speak, a second silent browser window dealing with the ads on one (hidden) screen automatically by “clicking” on them, “looking” at the linked ad content for a few seconds, and then “closing” them again, while the real content of the page is forwarded to another screen that the human reader uses.
A robotic equivalent of a secretary who opens the page for me, deals with the ads, and forwards me only the interesting bits (and as the latter option – delegating the filtering and sorting-out of ads and content to another human in my employment – surely cannot be considered illegal, a piece of software doing exactly the same is no less legitimate, as the situation is no different from the choice between employing someone to wash my dishes vs. using a dishwashing machine). I’m not a programmer myself, but an app/tool like this should be an immensely satisfying task for any expert in the field!

“I don’t see why revenue should be different whether I don’t see the ads thanks to my adblocker, or just see but ignore them.”

Alex, I don’t think you understand the purpose of ads. The reason revenue is different is because advertisers buy ads for eyeballs – no eyeballs, no ad revenue. They want your eyes on what they’re trying to sell you. If you ain’t seeing it, then that defeats the purpose of the ad. If you don’t see it, they have no shot at getting your attention and thus won’t get your potential business. If you are seeing it, even if you generally “ignore” the ads, then they have a shot at getting your attention (with an ad that stands out in some way) and thus potentially your business.

Dunno if Facebook pays, but if you have their damned Like button on a page, they skim OUR info regardless, I am told. Another reason I block, and use Blur too.

I’m a software engineer, Alex, and I want to examine a couple of your claims. First, indeed, ad blockers are typically shoddily coded because they are written by low-skill software developers, or they are written in a short period of time. Or they are morsels intended to get you to pay for more or better features. Or sometimes it’s because they have a poorly designed user interface. Second, advertisers can detect if an ad has been removed in numerous ways.
One way is, the programming code (JavaScript) can periodically check if the ad is there (by querying for it in the HTML) and confirm to the ad agency that it showed for a certain amount of time. Naive ad blockers will simply remove the HTML containing the ad. But the code controlling the ads can detect that it disappeared and simply add it back in. Google is particularly bad about this. It’s a cat-and-mouse game with ads on their search results.

Third, clicking with code is indeed possible but easy to detect and reject. This is because there’s one way to move and click a mouse in code, and that way is easy to intercept and examine for ad blockers trying to emulate clicks in place of a human.

Above all, ad agencies can track you across the internet based on your IP address. And they can track your behavior in simple and complex ways. For example, simple tracking can answer questions like “what websites does this device visit?” and “how often does it visit them?” This can be done by simply counting the number of times a single IP address requests an ad. When simple tracking is integrated with social media, ad agencies can resolve the “who” behind the device: you.

Ad blocking is an arms race, so to speak, because developers at ad agencies are always finding loopholes to exploit, and ad block developers have to try to keep up with it. My solution is a combination of different things. First is something called a Pi-hole. It’s a Raspberry Pi computer with software designed to check network traffic and reject it if it’s coming from ad agencies or a nefarious source. Second is a VPN that I use occasionally. Third is ad blockers at the browser level. And fourth, when I really want to go “stealth”, I can use a text-based browser like Lynx, or just raw HTTP requests which get the HTML of the site without actually executing any code.
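The “simple tracking” described above (counting how often a single IP address requests an ad) can be sketched in a few lines. This is an illustrative toy only, not any ad network’s actual code; the log entries, IP addresses, and site names are all invented:

```python
from collections import Counter, defaultdict

# Hypothetical ad-request log entries: (ip_address, site_serving_the_ad).
# In reality these would come from an ad network's server logs.
requests = [
    ("203.0.113.7", "example-news.com"),
    ("203.0.113.7", "example-news.com"),
    ("203.0.113.7", "weather-blog.net"),
    ("198.51.100.2", "example-news.com"),
]

# "What websites does this device visit, and how often?" is answered
# by counting ad requests per (ip, site) pair.
visits = defaultdict(Counter)
for ip, site in requests:
    visits[ip][site] += 1

print(visits["203.0.113.7"].most_common())
# -> [('example-news.com', 2), ('weather-blog.net', 1)]
```

Resolving the “who” behind an IP is a separate (and far more invasive) step; the point here is just that the basic bookkeeping is trivial, which is part of why it is so pervasive.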
Thing is, for those of us with limited net gigs, stopping ads running also means we can last the month without running out of credit; they chew a lot.

Exactly. I get by on 5 gigs free a week on the SIM in my dongle without ads, and I play poker online 10 hours a day. I’ve been dealt 3 million hands of cards in the last 10 years, I reckon, or pretty close to it.

I refuse to use Firefox. I don’t support left-wing organizations.

Did you write your own browser then? Seriously – which one is made by a group that isn’t seriously left-leaning? Certainly not Chrome or Safari.

I do the same… no ads for me to see.

I never see an ad on WUWT. I use a script blocker/ad blocker. Works perfectly. I prefer one-time, yearly payments. I don’t do PayPal.

Agree, but what really burns me are the ads that chew up all my 4-processor computer’s time and 8 GB of memory doing who knows what (animations maybe). That’s why I have an ad blocker. $5/month.

I wouldn’t donate unless the anti-vaccine stuff goes – I’m not clear how that’s related to climate and the lies of CAGW. But I love the site otherwise. I’d be willing to give $5/month.

It was timely and relevant because we saw some of the same disinformation methods used in the “climate wars” in COVID-19. But now that’s waning and you’ll see less of it.

Fair enough. I can see the parallel in methods, although I am (generally) on the other side as far as my opinion regarding the virus and vaccination.

I’m sorry, Mr Watts, but it is certainly not waning below the line.

It’s been said many times that CAGW was never really about climate; it was just a vehicle to achieve globalization. It has also been said that COVID is being used to the same end. If that’s true, then maybe COVID-19 is relevant. Just a thought. Happy to make a regular donation; whatever you decide is fit for discussion.

Good point. The goal of the climate nonsense was to end or curtail fossil fuel use, replacing it with renewables.
The method of achieving the goal is globalized organizations: NGOs, GOs, UNGOs, … transnational corps. Since it began it seems to have got a life of its own: imagine a zombie raised with a voodoo spell which now thinks it’s a proper person – that all the other living people are somehow illegitimate because they call it a zombie. Like religion, it begins with spreading the word (of God), and invariably mutates into self-interested clerics doling out dispensations for loot.

The goal of the climate nonsense is control. With control comes power; with power comes money. That’s pretty much it.

A few more thoughts on COVID. I’m an average Joe, not a scientist. I’ve learnt more about COVID from WUWT than I have from my government or from the MSM. WUWT is a valuable resource when trying to see through the confusion. With over 442,000,000 total hits and the amount of comments a COVID article generates, I doubt I am alone in my thinking. The ads never bothered me; whatever you decide is fine with me.

The modus operandi of how “climate change” and COVID are sold to the public is very similar. They both prey on FEAR. They both engage in fudging the data. We are constantly kept in fear – repeatedly. Propaganda is a big lie repeated frequently (and usually backed up with retribution towards those that refuse to buy in to the narrative) until people believe it out of fear. The propaganda is still a pack of lies. Plugging windmills and solar to save the planet sounds eerily similar to “take the untested, experimental vaccine for which we can not be sued so you can have the freedoms we confiscated back.” As yet, there is no solid evidence at all that the 3-5% increase in atmospheric CO2 levels that is of human origin is the dominant driver of catastrophic or dangerous global warming / “climate change.” As yet, there is no evidence that the vaccine prevents the spread of the virus or even makes you immune.
Note, the PCR test is a DNA multiplier – it was never intended as a test for a virus – the test itself comes up with too many false positives. BTW, if it is secreted in saliva and faeces, why do the test right near the blood/brain barrier?? We are likely to see masks still being worn and the insane virtue signalling of swabbing things down, even after people get the jab. Please note, being experimental, the untested, experimental vaccine needs fully informed consent; otherwise the entire rollout is in contravention of the Nuremberg Code. Yes, the rollout is in violation of international agreements because there is a level of coercion and a lack of full disclosure. Covering up adverse side effects and misrepresentation of the dangers of the disease makes the whole vaccine thing very suspect – like the whole CAGW thing. Dr Fauci, in his 2008 paper on the Spanish flu, came to the conclusion that most of the deaths from the Spanish flu were from bacterial pneumonia from wearing masks! What do you think – vaccine/COVID discussion does belong on WUWT.

“Vaccine/COVID discussion does belong on WUWT.” Apparently not in the opinion of people who disagree with what’s being said. And on a skeptic-oriented site, I find that quite sad.

Yeah, the luke-warmers of covid1984.

Sites that stray off topic often find themselves abandoned by all. People are here because of climate change issues, and to move to COVID just because the propaganda techniques or goals are similar is a mistake.

“High Treason: Note, the PCR test is a DNA multiplier – it was never intended as a test for a virus – the test itself comes up with too many false positives.” That is indeed correct. Even Mullis, the person who created it back in the 70’s, has stated as much. Makes you wonder why they chose to use it as the standard.
“Dr Fauci, in his 2008 paper on the Spanish flu, came to the conclusion that most of the deaths from the Spanish flu were from bacterial pneumonia from wearing masks!” There ain’t a lot of data wrt 1918 mask wearing, so no one would be able to come to such a conclusion. Fauci, dumb-assery notwithstanding, would not have referenced masks in such a way … in a written paper. And he wouldn’t have made up such garbage unless there was good political reason to do so (in 2008 there wasn’t). The actual paper from 2008: a ‘search’ of the paper shows NO MENTION OF or NO REFERENCE TO masks or masking. None. It concludes the pneumonia caused by bacteria normally found in the upper respiratory tract was probably a major cause of death in the Spanish Flu pandemic. It is not surprising to me, since penicillin, the first antibiotic, had not yet been discovered. The ‘caused by masks’ claim made by ‘TREASON’ was a made-up add-on, probably copied mindlessly from somewhere else. Two wrongs don’t make a right.

I don’t believe all “fact checks”, but on the Fauci Spanish flu paper… Verdict: False. While Fauci did co-author a 2008 study about the causes of Spanish Influenza deaths, it mentions nothing about masks. The study found that a majority of the deaths were caused by secondary bacterial pneumonia related to influenza infection.

Perhaps a reminder about the site is in order. As time went on, a meteorologist came to focus on CAGW and the lack of science supporting it. That led to “Why the hype?” and the politics behind the hype. Now we have hype about COVID. How much is science and how much is politics? Actions based on the hype likely put Harris … er … Biden in the White House. Asparagus is not needed to open a discussion on something that doesn’t smell right.
“Plugging windmills and solar to save the planet sounds eerily similar to ‘take the untested, experimental vaccine for which we can not be sued so you can have the freedoms we confiscated back.’” Could Nuremberg trials end the incursion of industrial-scale wind turbines and solar panels?

Do you have a link to that paper?

Found that paper: it never mentions masks or implicates them at all. (BTW, I’m very anti-mask and am familiar with the scientific literature, which clearly concludes they are between useless and harmful – a position Fauci had at the beginning of this.)

Oh dear, we have a covid1984 luke-warmer in our midst.

About Watts Up With That? “News and commentary on puzzling things in life, nature, science, weather, climate change, technology, and recent news.” Gee, you think maybe vaccine stuff is recent news? Nah… fool.

I’ve always considered WUWT to be a science website, not limited just to “climate change”. Lots of great topics covered here if you’ve followed over the years. Although, admittedly, bolstering my ability to argue that “humans are not the primary driver of climate change, and changing human behavior is unlikely to have any measurable effect on climate” is what keeps me coming back. I’m pretty cheap when it comes to internet subscriptions but would be happy to make an annual donation. $100 seems a little steep, but if the paying readership is small then I guess that’s what we’re looking at. I deliberately make an effort to not notice the advertising, which seems targeted specifically at me to piss me off rather than to enlighten me.

You didn’t have an option to keep advertising 🙁 Just kidding. These embarrassing ads have prevented me from forwarding links to great articles. Eliminating annoying ads may improve growth of WUWT by wider distribution of forwards.

As I’ve told users emailing about this, I don’t see these kinds of ads on my PC or phone. I see targeted ads related to my Internet browsing.
Much more likely to see an ad for an air fryer than any of the soft porn.

So any advice on how to not get the air fryers?

Spend more time on porn! The ads will be targeted accordingly. Or search for “Hot Pork!” sites. (But that gives you ads for both … never mind.)

Hmmm … in a comment above I said I allowed ads on WUWT but still didn’t see any. But I also run something that clears all my cookies (she looks like a nice one). Maybe clearing my cookies is why I haven’t seen ads?

That is one good-looking air fryer. Where can I get one?

Isn’t that Chrissy Teigen’s? Something there is definitely fried!

No, that’s Naomi Oreskes from a few years ago, when she let her hair down.

I don’t see those kinds of ads on my home PC. My work PC, however, does (on the few occasions I’ve visited WUWT from work) get those NSFW ads – which is why I try not to visit WUWT from work all that often anymore (wasn’t always the case; the ads used to be completely inoffensive). Outside of work-related sites, my work PC only visits a handful of “news”-type sites; certainly it never visits any of the “sleazy” sites that your post implies those seeing the ads must be visiting (i.e. “targeted ads related to my internet browsing”). None of the sites visited by my work PC, nor any of the searches made on it, would indicate a “targeted” interest for such ads.

You definitely are a male… a very horny one… dude. 🙂 What were you thinking!

Hey wait, I’ve never looked at porn on my phone and it’s the only way I’ve accessed this site for 7 years. Still, I get weird porn like “hottest redheads available” and stuff like that, even though I’d never look at redhead porn. The old site let me see what my wife was searching due to our Google/Amazon accounts being intertwined. So I knew when she was looking up new patio umbrellas and divorce lawyers in Greeley. Oh wait, I think I’ve just figured out the nude redheads…

Just asking for a friend, but he gets the naked redhead too. Not complaining.
Er, I mean, HE’s not complaining. But seriously, no porn browsing to explain it.

Possibly, if your only activity is RealClearPolitics and WUWT, they figure you need something to spice up your life. He might have just done a search for a beach vacation spot. They don’t call it “click bait” for nothing.

Yeah, I searched once for a J Lopez on the beach, followed soon after by another, D Lama on the mount. You wouldn’t want to know what whip ads I got for quite a while 🙂.

The person you forward the link to will probably not see the same ads you do. Go ahead and forward!

For me, it depends on how much the cost is. It also requires a price (and performance) comparison against those who claim they can remove all ads. At the moment, WUWT is, for intrusive ads, by far the worst site I visit.

I’ve been a part of the WUWT “family” for 12 years now, and Anthony and Charles have even been kind enough to re-publish a dozen or so of my articles. But my resources are limited, even more so than in preceding years. So I can only give when I can. I’ll hit the tip jar (in a small way) in a few minutes.

You might consider a model like ZeroHedge: they have equally intrusive ads, but if you subscribe, then you can access their site free of advertising. WUWT brings such value by accumulating writings on issues I cannot find addressed elsewhere that I would willingly pay $250/year to subscribe ad-free.

That’s a good idea, and I would even go further and say that in order to comment, one must be a subscriber. That would probably cut moderation by 90% and also get rid of most trolls. And it would hopefully make the comment section more productive, not having to deal with some of the snark from a few people that make the site look bad.
ZH’s new look is ok; most items are still accessible. The other issue is, for us overseas readers, the exchange rate and fees PayPal bites us for make donations costlier than Anthony actually might ask for. I.e., the average US$ to AU$ is 30c down for me, and fees on a recent overseas PayPal purchase added about $10 more to my costs, so a $160 buy cost me nearer $180.

I use WISE.COM to send $ to our daughter in Northern Ireland. They used to be TransferWise.

Hey, what about a special rate for Griff, maybe $10,000 per month, and that not Trinidadian $ either. US, Australian, or Canadian $s would do.

I like the concept of tiered rates – one for the Griffs of the commenting readers, something less for those more supportive.

It’s engagement that matters. Griff should probably get in for free as a regular contributor. He does inspire thought. Mainly, “Er… What?” But answering ‘what’ is useful.

I agree re Griff for free. It gives us an idea of what lefties actually think, and where they are getting their “knowledge”, i.e. misinformation. When you see the CNN/MSNBS/WAPOO fact checks that are always off base, you can understand, since they are written by Griff-type leftists. At least we here can see where the basic incorrect information is coming from.

I’ll say it before others point it out more vehemently: I am a lefty too. The last thing I want is a safe space for climate realists where we are never challenged. I’m always surprised by the lack of “alarmist” views here, considering it’s the dominant doxa.

Agreed. We want as much serious alarmist input as possible, all the better for coming up with effective arguments against their dogma. The problem is most of the alarmists we do get tend to be more troll than serious commentators.

Griff has to change his name to Grift.

We would lose Griff, as those on the Left want everything, but paid for with OPM.

Nah, Griff should come here for free. If we didn’t have a Griff to keep us amused, we’d have to invent a Griff-bot.

I thought it was one.
Never backs up its mindless, erroneous, bot-like comments.

I think we need to pay Griff for his entertaining comments. If we didn’t have Griff, this site would be a little stuffy.

But seriously, how much does it cost to run WUWT? How many readers are there? I don’t want anyone to get rich off this site, but I also want to continue to read the fine posts and comments. The price point for some of the smaller news sites that I’ve seen seems to be about $10/month, so that must be the point marketers think people will pay. If we can have no ads for such a great site, I think many readers will be willing to pay. I like Martin Gibson’s idea of no ads for subscribers but ads for others, because it would still allow us to forward articles to friends to persuade them to start reading WUWT.

Um, who sees advertising on the internet?

What about some merchandise? Maybe a t-shirt with “My carbon credits go to WATTSUPWITHTHAT” or something witty.

I suggested previously a t-shirt: “Stop Climate Fear, Warmer is Better.”

I’m not sure what has annoyed me more – the crap ads that massively detract from the blog, or some of the poor-quality posts that have been showing up. I’ve spent a lot less time here than I have in the past (and there are reference pages I really need to update). A lot of that is due to focusing on some personal interests, some due to spending way too much time on Facebook. Suffice it to say there have been several posts I haven’t linked to on other science pages because of the bogus ads. “This Discovery Leaves Doctors Speechless.” “A friend of Kim Kardashian …” (never finished that headline). “… Will the Stock Market Crash …?” I’ve been thinking about talking to Anthony about the ads… I’ll donate!

There are ads?

I, too, am very cheap regarding Internet subscriptions, but WUWT is certainly worth at least $100 per year. I prefer one payment per year.

I’m the same way, happy to contribute a minimum of $100 per year, but prefer a one-off payment.
I don’t see any ads and was not aware that there was a problem. However, I would donate on a regular basis if necessary to keep the site going. And FYI, on the second vote above I clicked “other” but no option to leave a comment appeared.

Consider placing a contributor or supporter label on posts, as done with the editors.

I don’t mind PayPal, but I would like the option to send a donation via eTransfer.

I have been donating $10/mo for a while; I will increase it as I can. It’s certainly worth at least that. I don’t need any incentive like 501(c)(3).

I donate without the 501(c)(3). But it sends a thrill up my leg to think I could force Microsoft Corporation to match my gifts to WUWT!

Some of the ads are for malware, to the extent I had to stop visiting the site. Let us know what you need for an annual budget, and show progress towards the goal. Private schools find a rich donor to offer matching (to a point) in a certain period; this really helps with fund drives.

I don’t pay monthly to any YouTuber or website. I pay when the place is needing it or when I have some spare cash. I usually pay in the region of $20 or so.

Anthony, et al: Fine idea to go with donation only… suggest you somehow “tag” those that periodically donate… I would be happy to pony up an extra donation now and again when your budget is stretched. Note that I was going to make a donation right now… but I’m on my Android phone, and there’s no obvious donate button… will do once I am back on my PC. Should be a quick mobile platform edit. Also, would prefer to have a choice other than PayPal… or even a physical address to send a check, or info to make an electronic transfer… EFT. Perhaps consider posting a monthly budget target with a donation progress bar… then if you want to increase it… provide a bit of an explanation… and the new target. I have to believe WUWT has enough support to fund any reasonable budget.
Regards, Ethan Brand

“Also, would prefer to have a choice other than PayPal… or even a physical address to send a check, or info to make an electronic transfer… EFT.” Me, too. I still write checks.

I made my donation for this year ($500). I don’t like monthly installments; maybe it’s because I’m older and just don’t like things that resemble bills.

I won’t necessarily donate monthly, but I will on a regular basis. Thanks to Anthony, Charles and all the contributors to postings at this website. I have learned quite a bit about climate and energy issues.

I would be happy to see those ads go, because their headlines/bylines are (in my opinion) an insult to one’s intelligence. They treat you as though you have the IQ of a child or teenager. I would gladly be a regular contributor to see them go.

“They treat you as though you have the IQ of a child or teenager.” Or an alarmist. I shouldn’t have said that! 🙂

To me, this is like a magazine subscription; I’d donate $25.00 a year to keep it going.

The donate button did show up, but in an unexpected place. Make it obvious at the top of the home page. Best, Ethan

Use both methods and you will get a better response. Also, if you supply a bank a/c deposit number as well, it’s much easier from my end and I find it less intrusive. Each month I do my bill payments, and if I have your a/c number in front of me it’s easy to add a donation without providing too much of my personal info.
Linux Mint – free – gives a 10-year-old laptop the performance of a 1-year-old laptop – zero bloat.
Brave – free Linux browser – no tracking, very fast, and zero popup ads unless you opt in to their ad program (for rewards).
LibreOffice – free MS Office-equivalent suite of products – fast, efficient.
No more BSOD or annoying ‘strokes’ that Windows frequently suffers from.

If WUWT goes donations, then a one-time payment is preferable.

I’ve commented here a few times over the last decade+ that I’ve been aware of this site, so I am probably more representative of lurkers than an active participant. I use an RSS feed that updates daily, so most of the time all I look at are headlines. I don’t actually click through an article that often. It’s become clear to me that AGW is a post-scientific topic, and no amount of evidence will sway the True Believers. I do very much appreciate that there are other topics covered on WUWT, and if I look at the things I’ve archived in the RSS feed, almost none of them over the last year or so relate to AGW (nor to the COVID panic).

All that rambling aside, I am one of the many who use an ad blocker. I’ve had too many bad experiences with malicious ads served on sites that I frequent to leave that attack vector open. I’d be willing to participate in a monthly or annual rate, but, to be honest, even $10/month is more than I can justify for the time I spend on WUWT. Since that’s not a helpful answer – at the level I participate with WUWT, I would be willing to contribute $25/year as a “Lurker Level” member. Maybe I don’t get to participate in the debates in the comments section, but at least I’m not entirely a freeloader.

I’m with you. These ads don’t pop up or interject themselves into posts. They aren’t intrusive. I say, leave them.
It isn’t so much the intrusiveness (though the ads that pin themselves to the bottom are somewhat intrusive); it’s the NSFW content of some of the ads that’s the biggest problem with them, IMHO (see the ad image posted by another user further up the thread for an example of what I’m talking about).

“Ad image posted by another user further up the thread.” Unfortunately, I never get those sorts of images 🙁

I don’t have a problem with adverts, but ideally they should be ghettoed in a limited area of the screen and should not delay page loading by more than about 20%. Modern adverts become ever more intrusive. A good example is how TV sound volume ratchets up during the advert break. Targeted adverts are actually good. The thing that really annoys me is page-loading delays due to pages waiting on advertising content. Finally, when the content arrives, it’s often not even customized. So one wonders – if it’s just standard advert content, why weren’t they caching it better? All in all, the most annoying jerks seem to get jobs in advertising and delight in making the user experience ever more miserable.

Funny thing you mention the sound level of TV ads. The other week I was watching a show on Tubi on my Roku and noticed that the ads were actually noticeably quieter than the show I was watching. Not sure if it was Tubi or Roku that was responsible (or just that the show was particularly louder than normal), but it had me thinking “why can’t ads always be like that!”

While I have donated to some of the special causes here in the past, I was really hoping for a professional version of WUWT that had exclusive content not available in a ‘free’ version of WUWT. I think it would be a big hit, and I would especially subscribe to something like that, maybe paying $250 a year for a professional subscription. That way, maybe Griff and the other trolls would have to pay to join, or they would only be able to access the free version of WUWT.
That could run in parallel with some of the pro content, but the professional version would perhaps have much more content, and perhaps layman courses in specific related climate subjects by educated authors. Or ask an expert a question, and everyone on the pro site gets access to the answer. I have 1001 questions. Considering what I have learned here in the many years I have been hanging around, the best thing I have learnt is critical thinking. And to practise my writing skills. I have to admit that much of my life I was influenced by many sources that I now know to be corrupted, because I hadn’t honed my critical thinking skills. That is probably the most important thing I have learned here, so I will continue to donate from time to time. But I also see so much more opportunity for WUWT to offer some type of professional version that would bring enough funding to continue a free site without advertising for the masses.

A professional version would be a bubble. Try the intellectual agility of academia if you want that. No way that I would get in, because I am not from the prevailing political paradigm of the site.

This site isn’t like the typical lefty sites you’re used to. Unlike those sites, there’s no “cancelling” people for their political views going on here. You’d get in; politically you’d be in the minority, but you’d have no problems getting in.

I wasn’t even aware that there were ads on WUWT. Apparently using Firefox lets me avoid them. As I continue to learn a lot from the site, it’s definitely worth paying a subscription fee in the future. I just remitted $120 to compensate for lost ad revenues in the past.

Keep it free and open to all!!! If you need more revenue, try having a Patron Membership. I would join for a reasonable annual fee, say $200.

I don’t see any ads on this page. And when I do see them, they are relatively unobtrusive.
Even Patreon is removing certain sites/blogs etc. that aren’t PC enough for the few moaners.

Patreon gives in to cancel culture. Even if WUWT manages to get a Patreon account, I doubt it would be long before Patreon cancelled them. Best to avoid Patreon.

If I don’t want ads, like on news sites, I use the Brave browser.

First and probably last posting. The current version of Malwarebytes kills most advertising. I read WUWT every day and I never see any ads here.

I’d be willing to give 50 notes a year, once a year; PayPal is ok for me. Also keep in mind PayPal can, if pushed, close down payments to individuals and organisations.

I don’t contribute much, but I sure learn a lot. I see the ads, but if the content of what I’m reading is interesting, they’re not a problem.

On the ads: I had to switch browsers on my iMac to viewing WUWT only in Firefox, with Firefox set to “strict” privacy and all pop-ups blocked in the preferences settings, to stop all the pop-up ads on WUWT. Using Apple’s Safari browser to view WUWT, even with all the settings I could lock down and restrict, I could not block all the popup ads, especially the annoying one at the bottom of the screen that usually covered content. I would click it away and then it kept coming back. Since I went to using Firefox with the “strict” setting and blocked popups, the ads that make it through are just a few in the sidebar.

There is a huge advantage to using two separate browsers in this way, since I use Gmail (logged in with my Gmail credentials) and watch some YouTube vids occasionally. It means Google can’t ad-track me across browsers, since I don’t log into Gmail or YouTube on the Firefox browser; I remain an anonymous visitor anywhere I browse using Firefox. When I’m on Safari, Google can track me on any site I visit, since that is where my Gmail and YouTube log-in resides with those tracking cookies.

As for sending you money once a year, yeah, probably okay.
I know there are many, many visitors here who would not risk their anonymity by making a donation via credit card, PayPal, etc. So I'm not sure that (pay to play) would work for you, Anthony and Charles, with the time you have to spend on the moderation and upkeep.

"I know there are many many visitors here who would not risk their anonymity by making a donation via credit card, paypal, etc. So I'm not sure that (pay to play) would work for you"

Well, if Anthony will provide an address, I would be happy to send cash as long as Anthony would verify that he received it via email. I would probably prefer doing it that way.

Go with $5 a month for ad-free. You would be surprised by how many would. Perhaps an added incentive would be that only people who are members can comment. Perhaps $20 a month for some other benefit, like a gold name plate in comments.

I am just a permanent lurker that likes to read people sometimes much smarter than me argue about things I can only begin to grasp. I just like to look at all the pretty graphs. With that being said, I would be more than happy to occasionally put a few bucks into a pot if it helps. Would be one-time only, every few months.

I miss the Israeli tourism ads, with that spectacular woman model… it made me want to visit Israel, though my wife could never see the attraction. The other ads, with the horrific photos that are somehow supposed to be clickbait, I could absolutely do without. I experimented with various adblockers and photoblockers on this site, just so I wouldn't get physically sick at the sight of toenail fungus, or some other such disgusting thing. Nothing worked satisfactorily. But I'm Jake with a subscription on this site. Not on most other sites, but this one, definitely.

Download and use the Firefox browser for this site and this site only. Set the privacy restrictions to "Strict" and also check "block pop-ups" and you won't see that stuff. Use your other browser for other websites where you have user logins like email etc.
Opera (the fastest browser in the West) blocks everything you tell it to, and remembers set-up exceptions for individual websites if you choose to use them.

Yeah, I had to stop using Safari because it would hang all the time on here, but not on a bunch of news sites I frequent. With Firefox it runs smoothly, and I don't have the strict privacy setting set, yet. The apparent "default" ads (when there's no browser history to use) seem to be the most disgusting things they can provide. I'm a supporter of websites making money, even if via ads: it takes effort to maintain something like this. But the toenail fungus, poop ads, and huge globs of earwax are what prompted me to install an ad blocker.

I have made donations to specific projects here but not regular contributions. The ads don't bother me as I run the extension from adnauseam.io, so it clicks on everything but ditches the responses.

I'm already donating $5/month, and support multiple sites/YouTube channels; seems to be a sustainable model. Expressing an opinion is part of the package, in my estimation.

At 55 I learn new things every day, because I know what I don't know (a lot), which is the problem with people like Griff. "Everything" is the biggest word in the English language: by definition, if you know everything you cannot learn anything new. An object lesson for all.

I used to think AGW was settled science. Then I learned, much of it primarily here. So I would donate. But I now use an adblocker on Chrome and no longer have issues with ads.

I was on a donation-based web site for a while and saw the good and the bad. The good was that it allowed periodic automatic donations and one-time donations. This allowed the member to contribute when and how they wanted to. The bad was that part of the donations weren't reported and the expenses/needs were a complete mystery.
It was especially bad for that site because the owner was living off the money, and people had no idea if they were funding a lavish lifestyle or if ends were barely meeting. I knew most of the time the owner had a pretty lean life, but unfortunately she would post pictures along with descriptions that suggested otherwise. Honest accounting, so people know how their money is spent, will encourage donations. This site is more trusted than that site, but there may still be some who are unsure how their money will be used.

Whatever amount is chosen, I think there should be pro-rated options: annual, semi-annual, quarterly or monthly pay options. Example: if the donation is set at $100 annually, it would be $55 semi-annually, $30 per quarter and $12 monthly.

"Whatever amount is chosen"

The amount should be voluntary. Each person should give as they are able.

Yes, real socialism. Give what you can, take what you need.

Fine, except when the entity providing what you take can no longer sustain itself because too many people do not give what they can. Then everyone loses.

Maybe a couple of "standard" options/suggestions and an "other" option, i.e. $5/month, $10/month, "other".

I would chip in $5 a month. I suggest you use a service like SubscribeStar that doesn't cancel people for politics (vs Patreon, which does).

In the end, I see no good reason to either have intrusive ads or ask for donations. The makers of this site are the ones who want to tell the world something. THEY have to finance it, and put their money where their mouth is. Not third parties advertising unrelated products. And certainly not those whom they want to educate, which would mean an echo chamber and endless preaching to the converted (already a problem here…). The message must get out to those who currently wouldn't dream of paying for it, those who disdain, mock, and hate what is published here on this site. It must be pressed on the population by brute force.
A site like this must not generate revenue but invest money. If anything, WUWT must PLACE advertisements everywhere on the Net, not carry others' ads. AFAIK the makers of the site do have real jobs where they do productive work. Writing a blog, even a highly informational and educational one like WUWT, is NOT an honest source of income but something one spends money on. Artists and musicians being paid for making records and appearing in the media, instead of having to pay for the privilege, has caused the total decadence and ruin of the arts within less than a century already. Scientists asking for donations and grants rather than spending their personal income and wealth on their research has corrupted science no less. I fear that WUWT might already be further down this slippery slope than it can afford.

Must be nice to be independently wealthy, Alex. Unfortunately most people who have "real jobs where they do productive work" aren't so blessed. They have bills to pay.

Only rich people and big corporations get to have a say, I guess.

I discovered WUWT when this site was mediating the funding of Climate Audit. Since then, I have gotten far more from WUWT than I have given (maybe $400 over all those years). I guess I'd feel pretty foolish if it were the other way around! I'm not sure of the years, but if n=15, at $100 a year, I should cough up that dough; it was worth it. Good luck trying to get that out of me. So now, every time I click a link, the WaPo or NYT or the Grauniad (okay, not them) or Forbes etc. wants me to give them a yearly stipend so that I can see what the other side is thinking. I have tried that in the past and stopped because it wasn't worth it. If WUWT required $300 a year, I would probably balk, but if you had a way for me to click you a Loonie (yes, Canadian Tire money) each day that I visited, you would probably get $350 out of me without any trouble.
As for the other approaches, my frugality would vary inversely with my sense that my contribution was significant; I actually have no idea what it takes to make all this happen.

Happy to pay US$100 a year. Newbies have to be free, somehow.

Anthony, my recommendation is that you consider setting up a Patreon account and ask $5/month from those who frequent WUWT. On Patreon you can be more responsive to your Patreon subscribers' requests/comments/questions there too.

I suggest avoiding Patreon like the plague. It has a reputation for cancelling people with political views they disagree with. SubscribeStar has a better reputation in that regard.

I must have donated $500 to WUWT over the last 10 years, and don't regret a penny of it. Keep up the good work.

I read regularly on about 6 sites. I often give $25 per year to 3 or 4 of them. With all the other needs I contribute to, that is all I can do now. I do agree that ads have gotten intrusive compared to the beginning of such on WUWT. I began at WUWT in 2008. Best to all, and thanks. John

I use the Opera browser and Adblock+ and there are no ads. Adblock is free.

A few years ago I made a payment to PayPal using an AMEX card. I later discovered that PayPal had debited the card 4 x £99 for no reason [presumably £100 would have triggered some security breach]. It took me several months to get a refund after a lot of correspondence. I do not do PayPal.

I heart WUWT. I have a fixed low income and can't afford much, but I make regular donations. I prefer mailing a check, and I know where to send it; I won't give the address out because I respect the privacy. However, in thinking about this, it might be worthwhile for WUWT to get a PO Box just for mail-in donations, with an address you could post online safely, for those of us who don't do PayPal but still wish to contribute. I'm ambivalent about yearly or monthly.
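Stepping back to the pro-rated tiers proposed earlier ($100 annually, $55 semi-annually, $30 per quarter, $12 monthly): those numbers imply a growing premium for shorter billing periods, a common subscription-pricing pattern. A minimal Python sketch of that arithmetic; the function name and the premium rates are assumptions reverse-engineered to match the commenter's example, not anything WUWT actually uses:

```python
# Hypothetical pro-rated pricing: shorter billing periods carry a premium
# over the annual base rate (premium rates here are illustrative only).

def prorated_price(annual, periods_per_year, premium):
    """Price per billing period, given an annual base price and a premium rate."""
    return round(annual / periods_per_year * (1 + premium))

ANNUAL = 100  # the $100/year example from the comment above

print(prorated_price(ANNUAL, 2, 0.10))   # semi-annual: 10% premium -> 55
print(prorated_price(ANNUAL, 4, 0.20))   # quarterly:   20% premium -> 30
print(prorated_price(ANNUAL, 12, 0.44))  # monthly:     44% premium -> 12
```

Worked backwards, the example tiers charge $110, $120 and $144 per year respectively, so the "discount" for paying annually is really a surcharge for paying often.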
I'm happy to contribute to a blog that you have been running for such a long time and that has been valuable worldwide. I think something in the order of a magazine subscription cost, but somewhat less because there is no paper magazine to publish/post out. Don't know if that would generate sufficient cash flow. All the best for the future.

Anthony: You did make me realize that I have read and enjoyed your site for several years and have not contributed to continuing your excellent site. I just rectified that to the best of my ability. Hope it helps keep you going. Keep up the good work.

I love the site. Have no problems with ads. Don't know if it's because my son added Ad Blocker.

I would find $100 a year rather a lot to find. That is my thought; paying a regular amount per month would be hard for me to sustain.

I actually click on adverts here and on other blogs just to create income for the site. I don't find adverts intrusive, as most news sites have them and it's just part of life on the internet. I suppose that I shouldn't expect internet content to be free, but I'd actually find more advertising acceptable, or free content being available only after a few days' delay after subscribers get access.

Anthony and Charles, thank you for providing the most interesting and educational site that I know of! I apologize profusely for not biting the bullet sooner, but you now are part of my monthly expenses and I will increase the amount when feasible. The ads are obnoxious, but I don't really mind them all that much; I tend to scroll right past them, even though that redhead does remind me of my first wife! What would be really great is some merch! There are several charts and graphs, like Hansen's 2008 GSTA or Nahle's 570Mya record of CO2 and temps, that would make great educational tools for enlightening those who are still capable of thinking!
A full-page, laminated chart with some irrefutable facts about geologic history or ocean cycles might really shake the faith of some of the alarmists! Small collections by particular authors or on certain subjects could be another way to attract new people to the site, and T-shirts with the WUWT logo and a catchy slogan would be fun!

I just use AdBlock and don't see ads here or anywhere else. I would keep the advertising even if the returns are small; it's so easy to get rid of them.

Easier to install adblockers, surely. Us, I mean 😉 If I had to subscribe to all the sites I visit, and that's not many, I simply couldn't go there.

CLEAN OUT YOUR COOKIE FILES REGULARLY – DAILY.

By way of comparison, the WSJ subscription is $40 per month. I choose not to subscribe, but the NYT appears to be $14.00 per month and WaPo is $10.00 per month. I pay YouTube $15.00 per month to avoid ads. WUWT has a unique value that is different from most other news sources. I learn a great deal from the news articles, but the scientific expertise of many of the commentators is also very valuable to me. I can easily justify $20 per month, or $240 per year. Do you have data on income levels of your subscribers? Some may find WUWT valuable information but have less income to work with. Perhaps a discount for your most qualified commenters would make sense, with an understanding that their contribution to WUWT is the value of their comments. I would be okay paying more than some, because I can contribute cash more readily than expertise. Perhaps those with expertise get a lower price, to keep them as commentators, since their commentary is an important part of the value that I willingly pay for. Just a thought that a one-size-fits-all structure may not be the best approach, because WUWT is such a unique resource. Many non-profits have a hierarchy of levels (Admiral, Vice Admiral, Captain etc.), perhaps with a hat, coffee cup etc. at each level. Perhaps some version of that is worth considering.
You're wasting $15/month. I pay YouTube $0 and I don't see any ads there (effective ad blockers are easy to find and don't cost anything). Suggest instead of paying YouTube for something you can accomplish for free, you take that money and donate to WUWT.

I got it, I got it! Unfortunately I doubt there's any blog host that supports it. I'd be happy to pay some two bits ($0.25!) per comment I make. (Perhaps with "scholarships" for useful or broke participants.) That might make Griff put his money where his mouth is. 🙂 And cut down on the subsequent flame fests that have helped discourage me from wading through all the crap comments over the last few years.

The only "ad" type stuff I get is in the "recommended" column on the right. Very easy to ignore.

I don't necessarily mind advertisements, even when they make scrolling/reading inconvenient sometimes; sadly I've become desensitized to it. However, the number of ads featuring scantily clad or near-naked women is a problem. I'm no prude, but viewing this site at work (many topics relate directly to my work) has become impossible lest someone be "offended" and question what I am viewing. I use the feedback function to report each such ad as inappropriate, but they continue to come up week after week.

I feel your pain. The only time I see those ads is on my work PC (where I'm blocked from installing software and add-ons of my choosing, so I can't use any of my usual ad-blockers). One workaround is to move your browser window so the parts occupied by those ads are off the screen as much as possible.

I've never seen an advert on this site; I use Ad-blocker, so that's probably why. Seriously, I never knew there were ads on this site; this article came as a surprise to me.

I use the Brave browser, which also blocks all advertisements. My worry is that by blocking the ads, WUWT gets no benefit from me frequenting the site.

The ads at the bottom always cover content.
It's been my experience that you just need to scroll to get to the content being covered. At least on the PC; can't speak for the experience on a phone.

Could leave as is. I run AdGuard adblocker and uBlock Origin on Firefox, so ads never bug me. If you leave it as is, those bothered can just run an ad blocker and those OK with it can keep on keeping on. I do disable them on this site 1 or 2 days a week and click some stuff, then re-enable. I can't commit to anything, VA disability is my only income, but I would try to do some one-time things; even if it's a small amount, it should help.

Clicks on the ads are supposedly covering some of the cost of running the blog; "there ain't no such thing as a free lunch".

Just keep the ads. I'd rather sift through a few ads than be hounded for donations. I see and hear ads on TV and radio every day, big deal.

Try offering an ad-free experience for a donation and keep the ads for those who don't donate.

I am going to break my comments down into two separate posts. First post: the ads. I'm not bothered by the existence of ads. At home I don't usually see them (thanks to various ad-blocking tools, depending on the browser I'm using at the time). The problem is the content of some of the ads. The "scantily clothed young ladies" ads are problematic for those using their work PCs (where the ad-blocking tools they use at home may not be an option). They're what's known as NSFW (Not Safe For Work).

Second post: Before you ask for donations, how much is required for WUWT to stay afloat (reported as two categories, 501C3 and non-501C3)? I would probably make a one-time donation anyway. If you are looking at 501C3 status: Good luck!!!! 😎

For me the most annoying ads are the ones that are endlessly repeated in between every paragraph.

To make your WUWT experience even better, you can also install the 'I don't Care About Cookies' extension to rid yourself of that accursed EU cookie warning.
You still get the cookies, but this nifty bit of magic accepts them without you having to hit that stupid 'accept' button every time you come to this site, or any other site; it works for all but one site that I use.

I clear my browser info every day, so I get the cookie warning every day… well, I used to….

A couple of points on the need for ad removal and on the donation/other vote. Ads don't bother me one bit. I view on a desktop PC with a modest monitor. Donations? Hmm, I suspect that anything at the levels you are suggesting is going to substantially reduce traffic, especially the sceptical-curious who are the people one needs to draw in. By sceptical-curious I mean knowledge-seeking, default-consensus people. I certainly would never have come here initially if there was a paywall. Others have suggested a two-tier access: free with ads, or donation-supported ad-free. That might be a suitable optimisation.

Don't confuse donation (which is entirely voluntary) with pay-to-play (a paywall). As I understand it, Anthony is suggesting a donation model, not a paywall. (For those in the US or familiar with US television, think "PBS pledge drive": PBS is mostly ad-free and you don't have to donate to get access to PBS; however, periodically PBS will "encourage" you to donate via pledge drives.)

Ah, thanks John. As you were 🙂

That's what it looks like to me, and if regular donation gets rid of the ads I'm all for it. However, I disagree with your statement that "PBS is mostly ad-free": if you account for the inter-show ads for their own shows and the block at the beginning and end for the sponsors, you end up with shows about the same length as regular ad-driven TV. Just not broken up in the middle.

I donate when I feel a bit richer and the markets are up.

I think donation is the way to go; then you can scrap 90% of the cookies as well.

Two clicks to shut them off. Ads are fine.

I just set up a $10 monthly contribution via PayPal.
skinflint.[me]

Ads can be inappropriate and intrusive at times, but if needed for site finance, so be it. 3 different demographics. The general public is very important but will not pay. Suggested rates are far too high, sorry. Will try to get the energy up to make a donation as you do a very good job. Thanks for coming back to it and all the effort you, Charles, Willis and the rest of the team put in.

Pretty much agree with you there, angech. As I mentioned elsewhere, I'm not particularly bothered by ads, so intrusiveness isn't much of an issue. It's the inappropriate ads that I take issue with. Not out of offense; like most healthy straight males, I don't mind seeing the occasional "scantily clothed young ladies" (as one poster described the ads), but rather because most places of employment are not so keen on such images appearing on their screens. In short, if it wasn't for the inappropriate ads, I'd be all for keeping the ads; they're easy enough to ignore or block most of the time. As for donations, I see nothing wrong with the suggested pre-set amounts so long as there's also a "one-time donation" option that lets one pay any amount they want, so that those who can't or won't do a recurring donation (be it monthly or yearly) can donate as much as they want (or can afford) whenever they want.

I have personally benefitted from discussions on WUWT in refining my understanding of the surface temperature control processes that regulate Earth's energy balance. That has enabled me to clearly identify the glaring failure of climate models. Specifically, every model is making ridiculous hindcast cooling of the Nino34 region to match the current temperature while still maintaining a warming trend where there is no warming trend. Beyond that, what can WUWT offer? Does WUWT have any scientific merit that policy makers should view as valuable and be paying for? Who is willing to pay for a scientific understanding of the climate rather than "modelled" predictions?
Is WUWT an educational blog? What is the endgame for WUWT; how will it evolve? How can WUWT develop robust funding? I do get some entertainment value and am willing to part with AUD100/year for that. My only problem with the ads is when a paragraph that I have spent time composing gets wiped. I think that occurs with pop-ups, but it may be the result of other posts occurring while I am composing.

Your advertising revenue has fallen because your visitors are skeptics, in the best sense of the word. "Invest in Amazon: with Just $250 You Could Get an Extra Income." Sorry, I'm skeptical. "Men: Forget the Blue pill …." Sorry, I'm skeptical. "Granny removes wrinkles with $5 tip?" Still skeptical. Your readers like this site because they get to read a lot of background information on an important issue. There are graphs and even formulas. That's okay; we like to study before making big decisions. We think about our decisions and weigh the evidence. Advertisers hate people like us. The only way to sell us something is to provide a well-made product at a reasonable cost. That's no way to make a fast buck.

A one-time annual payment at the most for me. I don't do subscriptions and haven't for a long, long time. They are too easy to forget about, and too many made it extremely difficult to cancel. You might want to consider the model some sites use where you pay for an ad-free experience but free viewing with ads is still allowed. If you go pure subscription you'll lose the curious visitors who then get hooked on the site.

Sign up as a Brave Browser rewards creator () and visitors can contribute while surfing.

Just donated $50 by using the Donate button at the top right corner of this page. It took me just 2 minutes to do it, and it was easy too, using the debit card.

I am currently giving 50 twice a year. I can switch to monthly if it adds up to 100.

I had to use an ad blocker because the ads were slowing me down to where this site was acting like molasses on a cold day.
In the past they have also tried to slip in ads that would try to get me to download and install software. Thankfully I never agreed, no matter how alarming the message was.

Most of my ads are scantily clad redheads, what's not to like??

"No plan of operations extends with certainty beyond the first encounter with the enemy's main strength."

Charles, Anthony: AUD$100 annually is OK with me (age 67). I don't like monthly payments. I do NOT like PayPal. Debit card, credit card, or better still BPAY would be fine. I believe you need to make a distinction between financial supporters, who can post comments, and the impoverished or occasional visitors, who can read and learn but not contribute.

Well, I live in South Africa, on a rapidly diminishing (because of inflation) pension, and we're on the wrong end of the Rand/dollar exchange rate. I intend to donate a lump whenever there is some free cash, but although I really appreciate your blog, I am unable to commit to any regular payment. Right now, after paying lump-sum insurance, plus whatever will be demanded as excess after my last hospitalisation, I seem to be suffering from "too much month at the end of the money" syndrome. Let's see what I can do in future months, because I frequently refer to "wattsupwiththat" when local warmists demand sources for my 'denialism'.

Anthony, the only ads that I don't like on this site are the Google ads that show up in a box at the bottom of the page. These ads make it harder for me to read the article and I always X them out. But it is a pain to do so.

I've seen proposals to only allow "members" to comment, and some comments about "only if this type of post stops". I fully disagree with both of those. One of the best things about WUWT is the ability for anyone to freely comment and the extremely light touch on moderating. Another great thing is the way it's open to sharing many different positions and opinions, opening them up for discussion.
Moving away from that model would, IMO, ruin the site. I've also seen a proposal for something like an ad-removal pass. I would be 100% behind that and would get on board. $5/month for no ads wouldn't even be a question for me.

Lots of comments. And sifting through them, I think it comes down to voluntary contributions as the way forward. I've gone on record as not liking subscriptions, and some of the subscription vehicles are problematic anyway. Over the last few years, as I've disengaged from making a living and started paying more attention to what's going on, I've stepped up my monetary support for things that affect me. Political campaigns, candidates, political pressure groups. So, where before I might send $25 or $50, I now send $200 or $500. Because it makes a difference. I have friends of the same political ilk who send nothing. They can afford to, but they are tight; I sometimes wonder if they have ever bought a round. But they talk, talk, talk about how crappy it all is. I suggest to them that they send some money and they look at me like I just grew a third ear. Among other causes I send WUWT money on a semi-regular basis, more this year because of the platform difficulties. In the future I plan to donate at an increased level. I think it's an important blog, and I want it to stay healthy. I don't care about the ads, I just ignore them. If the income from them is so marginal as to be useless, I'm sure we won't see them. It's time to support what you care about. I hate to call what we are going through a war, but we are in a great struggle, and it's time for people to pick a side and do something.

If it were me, I would offer multiple tiered amounts/frequencies if possible: barrier-free support.

I voted "other" with the understanding that I could leave a comment. But the opportunity to leave a comment was not apparent in the voting procedure. Some of us are living on a fixed income.
So, I would suggest you have an annual fundraising period (like PBS) where you ask for donations to meet a goal. Then even the poorest of us could make a pledge. Otherwise, I guess it's the same old story of "money talks, poverty walks".

That is a good idea. However, what are the legal requirements for WUWT for this? Are there any restrictions, or does WUWT have to comply with being a non-profit to do so?

I'm not sure of the laws in CA since leaving, but at the federal level even a "for-profit" business can solicit donations. It's no different from any other source of revenue. "Non-profits" get certain tax benefits.

The big deal about the 501(c)(3) is that it's the only type of organization (or one of a very few types) that allows the donations to be deducted from taxes.

I get all that; my question was more about how WUWT is structured as a business for income tax purposes. Is it a business or a personal blog?

Could be both.

I was considering sending a contribution through my Brave browser, but it appears that WUWT is not set up to receive tips at this time. I do not see ads, but I would make up for it if I could tip. I probably won't get around to donating otherwise.

The ads don't annoy me as much as the format change did. I just find the new one completely less appealing, especially the font change.

I vote for solicited once-a-year donations, say $20. OR, for small monthly donations, say $2 to $5. I subscribe to the idea that small amounts times very many donations will always be larger than large donations from a few people. I also believe that open and honest expression of need catches more fish. Tell us how much you need: to run the site; to pay your expenses; to make a profit? Lay out the business case. I've helped in the past and am willing to help again. Please keep up the good work.

The ads on this site are the most aggressive I've seen anywhere, especially the one that splatters itself across the bottom of the screen.
I don't mind if there are some on the side of the screen as long as they don't flash at you. I don't really think it costs all that much to run a site like this, but maybe I'm wrong.

So essentially you have been driven out of the public arena. Hidden behind a pay screen. Another great victory for the alarmists.

Where do you get that from? There may be COMMENTS suggesting restricting access, but that's not mentioned in the article.

Just donated. Keep up the good work, either with or without ads. You guys keep me sane in a world gone mad.

I've donated when I could and in amounts I can handle. As a fixed-income retiree, with a government consumer price index that does not track increased prices at the consumer level, I cannot expect any benefit from consumer price index increases to payments. If you decide to exclude us poorer folks, so be it.

"In actuality, our ad partner is serving more ads than ever before for even less returns. It's seemingly the law of diminishing returns in action."

Fewer beginners, newbies or terminally stupid people are clicking ads. Increasing ad frequency is part of the path to a total burnout of that ad revenue. It is exactly the type of decision made by marketing majors, instead of aiming for higher-quality products in their ads. What's worse in the ad-stream is that a greater frequency of ads generally reflects cheaper pricing for the ads. Lower pricing, greater frequency of even more absurd advertisements promising everything from better health, instant wealth, amazing increases in sexual prowess and attractiveness. Next will be psychic predictions from California, amazing Weddell seals predicting more global warming, polar bears drinking Coca-Cola, cannibal penguins and great white sharks that want to communicate with Biden… Especially alarming is whether any of the techie semi-deities, e.g. googly, are tampering with the revenue clicks, which they are known to do; e.g. shadow banning, revenue blocking, etc.
None of which allows WUWT a decent revenue source. Is there any way to charge bot owners for every incursion their bots make? I like the idea of charging googly, faucebook, binged, twitty and others for their invasive data-collecting software… A penny per character, a dollar for every image, entire articles for 1500 bucks?

Set the smallest monthly fee which might support your budget needs. Hint: you are not worth nearly as much as Netflix.

A trip down memory lane.

I haven't noticed a problem, however: Good luck. Dunno if your Vote button is working either.

Perhaps Anthony could get speaking engagements. Alex Epstein was getting paid, a few years ago at least.
;;;; Lab 11 - Due 11/29
;;;;
;;;; Filename: lab11.scm
;;;;
;;;; Name(s):
;;;;
;;;;
;;;; This file is organized as follows. The file is broken into parts for
;;;; each problem. The first part is an empty skeleton of the procedures
;;;; you are to write for that problem. After this the test cases
;;;; are defined, followed by a line that should look like
;;;;
;;;; ;(do-tests ...)
;;;;
;;;; Uncomment this line to run the test cases for that problem
;;;; and display the resulting output. You are encouraged
;;;; to use this mechanism and add additional test cases of your
;;;; own.

(define (reload)  ; type (reload) into interpreter to reload this file
  (load "lab11.scm"))

;;; Set to #t if running in Dr. Scheme:
(define dr-scheme? #f)

;;; Code used for testing - just ignore this
(define (do-tests n)
  ;; Multi-argument display
  (define (display+ . args)
    (for-each display args))
  ;; Eval that works in MIT and STk
  (define (eval+ expression)
    (if (and (not dr-scheme?) (eqv? '#() '#()))
        (eval expression user-initial-environment)
        (eval expression)))
  (define (pretty-eval+ expression)
    (let ((return (eval+ expression)))
      (if (and (pair? expression) (eq? (car expression) 'define))
          'define-completed
          return)))
  (let* ((test-string (string-append "test-cases-step-" (number->string n)))
         (test-cases (eval+ (string->symbol test-string)))) ...
https://www.coursehero.com/file/6071181/lab11/
CC-MAIN-2017-04
refinedweb
234
67.96
I created two script files. One script file will take user input as a number, and I have another script which has to be run that many times, per the number given by the user. Please help me with this. I have tried to call that script by using the subprocess.Popen method, but I am not getting any output; it is getting terminated.

The scripts would help to narrow down issues. Why the need to run that way, when you can simply run a function (imported from another script/module) any number of times you want?

Can you please elaborate or provide an example? Actually, these scripts are the toolbox scripts for the creation of tools in ArcMap.

I am using Pro as the demo, but the principle is the same... in pictorial form. The tool in action... allow the user to input many names into the toolbox, increment a counter, calling a function that is stored in the 'helper' script. It worked.

What you have to do:
- Assign a script to the tool.
- Set up the parameters... we have a single parameter that allows for multiple names to be input.
- Now everything... scripts and toolbox are stored in the same folder.

Here is the code. Notice in line 16, I import a function from dummy_helper and use it to print some names with a counter:

# -*- coding: UTF-8 -*-
"""
:Script:   dummy_main.py
:Author:   Dan.Patterson@carleton.ca
:Modified: 2018-xx-xx
:Purpose:  tools for working with numpy arrays
:Useage:
:
:References:
:
:---------------------------------------------------------------------:
"""
# ---- imports, formats, constants ----
import sys
import arcpy
from dummy_helper import dumb_demo


def tweet(msg):
    """Print a message for both arcpy and python.
    : msg - a text message
    """
    m = "\n{}\n".format(msg)
    arcpy.AddMessage(m)
    print(m)


if len(sys.argv) == 1:
    testing = True
    names = ['Hello There', 'How are you', 'Goodbye']
else:
    names = sys.argv[1].split(";")

if names is None:
    tweet("no names")

cnt = 0
for i in names:
    cnt += 1
    tweet(dumb_demo(cnt, i))

# ----------------------------------------------------------------------
# __main__ .... code section
if __name__ == "__main__":
    """Optionally...
    : - print the script source name.
    : - run the _demo
    """
    # print("Script... {}".format(script))

Here is the helper script.... now!!! I tend to use a helper script that contains functions that I use all the time, so you don't have to replicate them in every toolbox that you use.

# -*- coding: UTF-8 -*-
"""
:Script:   dummy_helper.py
:Author:   Dan.Patterson@carleton.ca
:Modified: 2018-xx-xx
:Purpose:  tools for working with numpy arrays
:Useage:
:
:References:
:
:---------------------------------------------------------------------:
"""
# ---- imports, formats, constants ----
import sys


def dumb_demo(cnt, name):
    """
    : docs
    """
    return "({}) {}".format(cnt, name)

# ----------------------------------------------------------------------
# __main__ .... code section
if __name__ == "__main__":
    """Optionally...
    : - print the script source name.
    : - run the _demo
    """
    # print("Script... {}".format(script))

Hope you get the drift.

Thank you so much for your help, sir. I have a graphical user interface in the second script also. Here in your script it is executing a function, but I also need to call the GUI in the second script. Please help us.

Code needed... and if you aren't using a GUI that plays nice with ArcMap or Pro, then you will have some problems. I would advise against it and either use the standard toolbox approach or use Python toolboxes. The subprocess module would be an option in Pro, but I don't know what you would gain.

Perhaps a full description of your desired workflow with the functional code would help.

Actually, we have designed the user interface using a script in an ArcMap toolbox (.tbx). So I am sending the screenshots and the code which has to be called many times.

Input 1: It will take the number of parameters needed, and this will be the count of times that the second tool needs to be called.

Input 2: It has to be executed as per the input given in the first UI.

Code for the second UI (i.e. the second image):

# Import arcpy module
import arcpy

# Local variables:
filename = arcpy.GetParameterAsText(0)
Field = arcpy.GetParameterAsText(1)
Select = arcpy.GetParameterAsText(2)
max = arcpy.GetParameterAsText(3)
min = arcpy.GetParameterAsText(4)

expression = "getClass(!{0}!,{1},{2})".format(Select, min, max)
codeblock = """
def getClass(i, min, max):
    a = 123
    if (i > min and i < max):
        return 1
    return 0"""

# Process: Add Field
arcpy.AddField_management(filename, Field, "LONG", "", "", "", "",
                          "NULLABLE", "NON_REQUIRED", "")
arcpy.CalculateField_management(filename, Field, expression,
                                "PYTHON_9.3", codeblock)

Your second dialog will not run X times with different inputs for each loop. You can make your parameters all multivalue and assemble the inputs prior to running. I am not sure why your proposed workflow is any different than running the tool X times, since you still have to provide the inputs. I would suggest dumping your first dialog so you can assemble the inputs, then run it.

Thank you so much, sir. I need one more help: is there any way to get the values for each file of the multiple files in the second script? For example, if there are 3 files: for the first file it has to take its values, and after that, when the second file has been given as input, it must take the other values, and the same for the third file also. Can you please help us with the logic or explain how to do it? Regards in advance.

Note that this is a cross-post of three additional questions on GIS StackExchange, all of which were closed for lack of code.
The best way to get help with code is to provide code. - V
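In that spirit, here is a minimal, arcpy-free sketch of the multivalue approach recommended in the thread above: the tool parameter arrives as one ';'-separated string (as it would from arcpy.GetParameterAsText), gets split, and the helper runs once per entry. The helper name dumb_demo and the ';' separator follow the examples in the thread; run_for_each and the sample names are illustrative only.

```python
def dumb_demo(cnt, name):
    """Helper from dummy_helper.py: tag each name with a counter."""
    return "({}) {}".format(cnt, name)


def run_for_each(multivalue):
    """Split a multivalue tool parameter and call the helper per entry.

    In a real script tool `multivalue` would come from
    arcpy.GetParameterAsText(0); here it is a plain string so the
    logic can run anywhere.
    """
    names = multivalue.split(";")
    results = []
    for cnt, name in enumerate(names, start=1):
        results.append(dumb_demo(cnt, name))
    return results


print(run_for_each("Hello There;How are you;Goodbye"))
# -> ['(1) Hello There', '(2) How are you', '(3) Goodbye']
```

The loop body is where you would call your second tool's logic once per entry, instead of launching the script X times via subprocess.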
https://community.esri.com/t5/python-questions/how-can-i-call-one-script-tool-as-many-types-as/m-p/292891
CC-MAIN-2021-39
refinedweb
895
67.25
One of the most common tasks during UI design is handling user clicks, usually on buttons. Android provides two alternative ways to handle user clicks: one using code and the other using XML during the layout definition. To make things very simple, let's suppose we have a button in our layout and we want to handle user clicks. As said before, there are two ways:

- implement View.OnClickListener
- declare the method in the XML file

Let's suppose we have an XML layout file like this:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" >

    <Button
        android:id="@+id/button1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="TextButton" />

</LinearLayout>

If we use the first method, we have to implement an interface called View.OnClickListener, or we can use an anonymous interface implementation that looks like this:

// We retrieve the button reference inside the layout
Button b = (Button) findViewById(R.id.button1);
b.setOnClickListener(new View.OnClickListener() {

    @Override
    public void onClick(View v) {
        // Implement our code to handle the user click
    }
});

So as you can notice, we first get the button reference using the method findViewById and then set the listener. These lines of code have to be repeated for every button inside the layout that we want to handle.

There is another method that can be used, which is to declare that our Activity implements the interface View.OnClickListener. In this case we have:

public class SurvivingClickActivity extends Activity implements View.OnClickListener {
    ...

    Button b = (Button) findViewById(R.id.button1);
    b.setOnClickListener(this);

    @Override
    public void onClick(View v) {
        int id = v.getId();

        switch (id) {
            case R.id.button1:
                // execute our code
                ...
        }
    }
}

The View.OnClickListener interface requires us to implement a method called onClick, where we have to determine which button was clicked by the user. To do it, we first get the View id and then we compare it with the buttons we want to handle.
The last strategy is to declare the method that has to be called when the user clicks on the button directly inside the XML, like this:

<Button
    ...
    android:onClick="executeHello" />

Using the attribute android:onClick we declare the name of the method that has to be present in the parent activity. So we have to create this method inside our activity like this:

public void executeHello(View v) {
    Toast.makeText(this, "Hello World", Toast.LENGTH_LONG).show();
}

Analyzing the three methods shown above, the last one is of course the most elegant and the shortest too. The other two methods require more code to be implemented and make the activity class code messier.
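All three variants end up in the same place: a callback invoked when the framework detects a tap. Here is a plain-Java sketch of the pattern that setOnClickListener wires up; FakeButton and the listener interface below are simplified stand-ins for illustration, not the real android.view classes.

```java
// Minimal stand-ins for the Android classes, just to show the pattern.
interface OnClickListener {
    void onClick(String viewId);
}

class FakeButton {
    private final String id;
    private OnClickListener listener;

    FakeButton(String id) { this.id = id; }

    void setOnClickListener(OnClickListener l) { this.listener = l; }

    // In Android, the framework invokes the listener when the user taps.
    void performClick() {
        if (listener != null) listener.onClick(id);
    }
}

public class Main {
    static String run() {
        final String[] result = {"no click yet"};
        FakeButton b = new FakeButton("button1");
        // The anonymous-class form from the article, minus Android.
        b.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(String viewId) {
                result[0] = "clicked " + viewId;
            }
        });
        b.performClick();  // simulate the user's tap
        return result[0];
    }

    public static void main(String[] args) {
        System.out.println(Main.run());  // clicked button1
    }
}
```

Whichever of the three declaration styles you pick, this register-then-callback flow is what actually runs.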
https://www.survivingwithandroid.com/2012/09/how-to-handle-button-click-java-code-vs.html
CC-MAIN-2017-26
refinedweb
429
53.71
Tip #2: Proper Exception Handling in .Net and in General

"Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live." You can never know when this phrase will turn into horrible reality, so you'd better be prepared!

Tip #2: Proper Exception Handling – Part 1

This time on "Code Tips That Will Save Your Life": Exceptions. We all know them, and sadly we all have to deal with them, but are you doing it right? To most it seems like a very straightforward thing: Try, Catch, maybe Finally, and that's it. But code with incorrectly written exception handling can be frustrating to maintain and hard to debug on a good day, and downright dangerous on a bad one. In this two-part post I will share 8 tips that will make your code amazingly easier to maintain, not to mention log and debug, and might even make it more stable.

1. Getting Up to Scratch

As I already said in a previous post about application stability: "Most Software Developers Are Lazy". I still stand behind this phrase, which is the only reason I'm not skipping this tip. So let's just make sure we are all on the same page here. Think about a piece of code you wrote lately: an algorithm, a method or a class. Now look at this snippet:

try
{
    // Do amazing things
}
catch
{
    // Things didn't go as planned, time for plan B
}
finally
{
    // Release resources and other awesome stuff
}

Does your code follow this pattern? If your answer was yes, then kudos, +2 points. If your answer was no… well, you'd better have a good reason for it. There are very few cases in which you would not want to catch rising exceptions. Even if you don't really have anything to do with the exception at that place in the application flow, it's usually good practice to catch it for logging purposes, resource management and for enriching it with information about the cause of the exception. Obviously, as with any rule, there are exceptions (pun not intended).
If there really is nothing to log, and no information to add, there is no reason to catch an exception for the sole purpose of throwing it again. But make sure that is the case. Whatever you do, the most important rule for you to remember is: Never, ever swallow exceptions without doing anything! Bubble them up, write to a log, do anything; just make sure your catch clause is not empty, or someday some poor sap who will need to maintain your code will have a very, very bad day.

2. Being Specific With Exceptions

Let's put exceptions aside for a second. Imagine yourself at the office. You are working on that very, very important project your boss expected you to finish yesterday, when suddenly your IDE crashes, leaving you with a completely generic error message.

More or less, you have nothing to grab on to. You have no idea what went wrong. More importantly, you have no idea how to recover all your lost work. You are left with nothing to do but contemplate your unplanned, sudden career change.

Now with that in mind, imagine yourself using or maintaining someone else's code, or maybe even code you wrote a long time ago. Then while debugging it you get this:
For example: if a specific parameter you received for a method is null, make sure you specify the exact parameter that was sent as null; don't just throw a "Null reference exception".

2. Use the correct type of exceptions, and use custom exceptions if needed. Exceptions come in many flavors: NullReferenceException, InvalidOperationException, IOException… and the list goes on. Usually you will find one that is relevant for you, or at least one you can derive from. But please use the basic System.Exception only as a last resort.

On the flip side, you should also separate different exception types when catching exceptions, using several catch clauses. Separating exceptions by type will enable you to handle different exceptions differently and log them appropriately. For example:

try
{
    RunLogicOfGreatImportance(argument);
}
catch (TimeoutException timeoutException)
{
    // Action timed out, perform retry policy
}
catch (ArgumentException argException)
{
    // Argument sent was invalid, prompt the argument creator
}
/* etc. */
catch (Exception ex)
{
    // Something else that I didn't expect went wrong, log and handle
}

As you can see, it's always important to finally catch System.Exception. This is in case an exception of a type you didn't expect was thrown.

P.S: I heard some people talk about performance implications of using multiple catch blocks. That's simply wrong, and in no way an excuse to skip this good practice. For more information check out this Benchmark on try/catch Performance.

3. Custom Exceptions

O.K., so here is the deal. Sometimes the exceptions that are part of the .Net framework are not specific enough, or not relevant to the type of exceptions you need to throw from your code. This is where the concept of custom exceptions comes into play. Basically what it means is that you create your own exception types by deriving from the exception types that are already in the framework. The simplest way of doing this is using the exception snippet.
Simply press CTRL + Space and start writing the word "Exception". Select Exception from the list and press TAB twice. What you will get is a lovely snippet of code along these lines:

[Serializable]
public class MyException : Exception
{
    public MyException() { }
    public MyException(string message) : base(message) { }
    public MyException(string message, Exception inner) : base(message, inner) { }
    protected MyException(
        System.Runtime.Serialization.SerializationInfo info,
        System.Runtime.Serialization.StreamingContext context) : base(info, context) { }
}

The snippet basically lets you select the type name prefix (leaving the word "Exception" at the end as a standard) and the exception type it derives from (System.Exception being the default). Now you can add whatever you want to this class. Usually it's good practice to add properties with more information about the exception reason, and maybe a GUID or some other type of ID for a log record regarding the exception.

To the question of "What exception type should I derive from?" there are two possible approaches:

Approach #1: Derive from the exception type that most closely matches the one you wish to create. Basically what this means is taking existing framework exceptions and making them more specific. For example, you are writing a piece of code which needs to read a given configuration file and load it. In case the given file does not exist, you want to raise an exception more specific than FileNotFoundException, so you derive from it and create your own exception type: ConfigFileNotFoundException. The advantage here is that even if the person using your code doesn't know that it throws your custom exception, he will still catch it if handling the derived types.

Approach #2: Make one main exception type for your system/application or for the current application module (for larger projects) and have it derive from System.Exception (not from ApplicationException!). Now have every new custom exception you create derive from that type. The advantage here, unlike the previous approach, is that you have all your custom exceptions deriving from one application-wide custom exception type, and you can even manage them all under the same namespace.

It's a matter of preference; I personally prefer the second approach.
When working on large projects, especially ones involving several developers, it's a good standard to maintain among each other. It's also generally a good method to separate internally exceptions that were raised by the CLR from ones that were raised by the application code. This is good both for maintenance and logging.

4. Correctly Bubbling Exceptions

Here is a little test. Read the following 4 lines of code performing exception bubbling, which you would usually find inside a catch block, and see if you can spot the differences (for the sake of this test, "ex" is the exception instance caught):

1. throw;
2. throw ex;
3. throw new MyCustomException("Exception Message");
4. throw new MyCustomException("Exception Message", ex);

So obviously, syntax-wise they are all different from each other, but what actually is the difference in what happens when they execute? If your explanation included "Inner Exception", that's +1 point. If your explanation included "Call Stack" or "Stack Trace", that's +3 points. And if you have no idea, well, there is nothing to be ashamed of, but stay tuned. These fine differences are the kind of things that even seasoned .Net developers are just not familiar with. There is little to no documentation about this, especially on MSDN, where you'd think it should be under Exception Handling Fundamentals. So let's go over each one and understand what it does and when we would like to use it.

1. throw;

What it does: Re-throws the just-caught exception, preserving the call stack leading to the exception being caught.

When will I use it: When you wish to catch the exception only to log it or do some kind of rollback logic, but bubble the original exception with the full stack trace.

2. throw ex;

What it does: Same as the last one, only this time not preserving the call stack; it actually clears it and starts it from the current place in the code.

When will I use it: Possibly never.
The only good reason I see to do this is if you want to actually hide the call trace leading to the exception from the person using your code. This could be because you already logged the full trace and don't want to have it written again to the log. Another reason is that you provide a third-party DLL and don't want the exception to reveal all your calling trace. In both cases I find the next example a better option.

3. throw new MyCustomException("Exception Message");

What it does: Creates a new exception instance, with your own exception message, in the process tossing away the call stack gathered up to this point and creating a new one. Therefore, not preserving the call stack.

When will I use it: In the same instances described in the previous example. Only this time you have the added benefit of having the newly thrown exception be the one you created, with your own exception message.

4. throw new MyCustomException("Exception Message", ex);

What it does: Creates a new exception instance, same as the last time, but this time using the caught exception as the inner exception. What it means is that all the information gathered in the exception that was caught is maintained inside the newly created exception, in the process preserving the call stack.

When will I use it: This is probably the most useful one of the lot. It gives you the ability to maintain the gathered trace, but also to add your own information, and, if you wish, to have the new exception take the form of your own custom exception type.

In any case, you should not catch an exception just so you can re-throw it, unless you have a good reason. Among these good reasons are: enriching or editing the exception details, logging, performing rollback and resource cleanup logic, and so on.

That's it for Part I. Part II of this post will cover: Resource Management, Multithreading, Documentation and Design Decisions regarding Exceptions. So stay tuned. I hope this post helped you.
If you have any comments, questions or other relevant or irrelevant things to say, please leave a message in the comment section below.

Josef.

Previously on "Code Tips That Will Save Your Life": Tip #1 - Code Documentation
http://blogs.microsoft.co.il/jgold/2012/04/05/code-tips-that-will-save-your-life-2-part-i/
CC-MAIN-2018-47
refinedweb
2,068
62.58
In a few previous articles, I have explained how we can read JSON data in C# and how to read an Excel file in C#. Now in this article, I have provided code samples using a console application to show how to read a text file in C# line by line, or how to read an entire text file as a string in one go using C#.

Let's take a look at each of the example codes: one in which the text file is read and converted into a string, i.e. using System.IO.File.ReadAllText(), and another reading the text file line by line using System.IO.File.ReadAllLines(), which returns an array of lines that we can loop over to print each line of the text file.

Read File in .NET Framework 4.5 Console application

Reading file in C# line by line

In this example, we will read a text file line by line using System.IO.File.ReadAllLines() in a console application. So, if you are new to C# or Visual Studio, you can create a new console application by opening Visual Studio, navigating to "New" -> "Project" -> selecting "Windows Classic" from the left pane and "Console app (Windows Application)" -> giving your project the name "ReadInCSharp" and clicking "OK".

Now, inside Program.cs, we will write our code:

using System;
using System.IO;

namespace ReadInCSharp
{
    class Program
    {
        static void Main(string[] args)
        {
            //file in disk
            var FileUrl = @"D:\testFile.txt";

            //file lines
            string[] lines = File.ReadAllLines(FileUrl);

            //loop through each file line
            foreach (string line in lines)
            {
                Console.WriteLine(line);
            }
        }
    }
}

Output:

This is test file.
To Read text file in C# Sample.

In the above code, we are using a foreach loop to read all lines of a string array.

Reading a text file all lines at once in C#

Let's take a look at C# code to read all lines of a text file at once.

using System;
using System.IO;

namespace ReadInCSharp
{
    class Program
    {
        static void Main(string[] args)
        {
            //file in disk
            var FileUrl = @"D:\testFile.txt";

            // Read entire text file content in one string
            string text = File.ReadAllText(FileUrl);
            Console.WriteLine(text);
        }
    }
}

Output:

This is test file.
To Read text file in C# Sample.

Reading Text file using StreamReader

There is one more way to read the lines of a text file in C#, which is using StreamReader. The StreamReader class implements a TextReader that reads characters from a byte stream in a particular encoding.

using System;
using System.IO;

namespace ReadInCSharp
{
    class Program
    {
        static void Main(string[] args)
        {
            //file in disk
            var FileUrl = @"D:\testFile.txt";

            try
            {
                // Create an instance of StreamReader to read from a file.
                // The using statement also closes the StreamReader.
                using (StreamReader sr = new StreamReader(FileUrl))
                {
                    string line;
                    //read the file line by line and print each line
                    while ((line = sr.ReadLine()) != null)
                    {
                        Console.WriteLine(line);
                    }
                }
            }
            catch (Exception e)
            {
                // Something went wrong.
                Console.WriteLine("The file could not be read:");
                //print error message
                Console.WriteLine(e.Message);
            }
        }
    }
}

The output is the same as above. In the above code, we are using a StreamReader instance to read text from the file. As you can see, we are feeding the file URL to the StreamReader class object, and then we are reading the file line by line using sr.ReadLine(), which gives us one line at a time from the text file. Then, using Console.WriteLine(), we are printing the value of that line in the console application.

Read File in .NET Core Console application

In the above example, we were reading a file using the .NET Framework, but you can also read files using StreamReader in .NET Core. Here is a working example.

Before I show you the example, I have created a new console application using .NET Core in Visual Studio 2019 (open Visual Studio -> click on "Create a new project" -> select "Console App (.NET Core)" from the templates -> click "Next", give your project the name "ReadFileInNetCore" -> click "Create").

Considering you have a text file at location "D:\testFile.txt", you can use the below C# code in .NET Core to read the text file line by line.
using System;
using System.IO;

namespace ReadFileInNetCore
{
    class Program
    {
        static void Main(string[] args)
        {
            FileStream fileStream = new FileStream(@"D:\testFile.txt", FileMode.Open);

            //read file line by line using StreamReader
            using (StreamReader reader = new StreamReader(fileStream))
            {
                string line = "";
                while ((line = reader.ReadLine()) != null)
                {
                    //print line
                    Console.WriteLine(line);
                }
            }

            Console.WriteLine("Press any key to continue");
            Console.ReadKey();
        }
    }
}

If you look at the above code, you will notice there isn't any difference in the C# code when working with .NET 4.5 or .NET Core.

Output:

To read the whole file at once, you can use "ReadAllText" as mentioned for the .NET Framework: System.IO.File.ReadAllText("YourFileLocation.txt");

Note: If you are working with .NET Core 3 in a web application, and you want to read a file from the wwwroot location, you can locate the "wwwroot" folder as below:

private readonly IWebHostEnvironment _webHostEnvironment;

public YourController(IWebHostEnvironment webHostEnvironment)
{
    _webHostEnvironment = webHostEnvironment;
}

public IActionResult Index()
{
    string webRootPath = _webHostEnvironment.WebRootPath;
    string contentRootPath = _webHostEnvironment.ContentRootPath;

    string path = "";
    path = Path.Combine(webRootPath, "yourFolder");
    //or
    path = Path.Combine(contentRootPath, "wwwroot", "yourFolder");

    return View();
}

You may also like to read: Read PDF file in C# using iTextSharp
https://qawithexperts.com/article/c-sharp/read-file-in-c-text-file-example-using-console-application/262
CC-MAIN-2021-39
refinedweb
847
66.64
One error you may encounter when using NumPy is:

TypeError: 'numpy.float64' object is not iterable

This error occurs when you attempt to perform some iterative operation on a float value in NumPy, which isn't possible. The following example shows how to address this error in practice.

How to Reproduce the Error

Suppose we have the following NumPy array:

import numpy as np

#define array of data
data = np.array([1.3, 1.5, 1.6, 1.9, 2.2, 2.5])

#display array of data
print(data)

[1.3 1.5 1.6 1.9 2.2 2.5]

Now suppose we attempt to print the sum of every value in the array:

#attempt to print the sum of every value
for i in data:
    print(sum(i))

TypeError: 'numpy.float64' object is not iterable

We received an error because we attempted to perform an iterative operation (taking the sum of values) on each individual float value in the array.

How to Fix the Error

We can avoid this error in two ways:

1. Perform a non-iterative operation on each value in the array.

For example, we could print each value in the array:

#print every value in array
for i in data:
    print(i)

1.3
1.5
1.6
1.9
2.2
2.5

We don't receive an error because we didn't attempt to perform an iterative operation on each value.

2. Perform an iterative operation on a multi-dimensional array.

We could also avoid an error by performing an iterative operation on an array that is multi-dimensional:

#create multi-dimensional array
data2 = np.array([[1.3, 1.5],
                  [1.6, 1.9],
                  [2.2, 2.5]])

#print sum of each element in array
for i in data2:
    print(sum(i))

2.8
3.5
4.7

We don't receive an error because it made sense to use the sum() function on a multi-dimensional array.
In particular, here's how NumPy calculated the sum values:

- 1.3 + 1.5 = 2.8
- 1.6 + 1.9 = 3.5
- 2.2 + 2.5 = 4.7

Additional Resources

The following tutorials explain how to fix other common errors in Python:

How to Fix KeyError in Pandas
How to Fix: ValueError: cannot convert float NaN to integer
How to Fix: ValueError: operands could not be broadcast together with shapes
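Both fixes reduce to one rule: only call sum() on something iterable. A small stdlib-only guard makes that rule explicit (safe_sum is a hypothetical helper for illustration, not part of NumPy):

```python
def safe_sum(value):
    """Return sum(value) when value is iterable, else value unchanged.

    Calling sum() on a bare float (or numpy.float64) raises the
    TypeError discussed above, so we fall back to the scalar itself.
    """
    try:
        return sum(value)   # works for lists, tuples, rows of a 2-D array
    except TypeError:       # a single scalar: nothing to iterate over
        return value


print(safe_sum(1.3))          # a scalar passes through untouched
print(safe_sum([1.6, 1.9]))   # a row of values gets summed
```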
https://www.statology.org/numpy-float64-object-is-not-iterable/
CC-MAIN-2021-39
refinedweb
397
51.28
Steaming Heap of Quickies 119

I've been so busy on the code frenzy that I've been behind on the quickies! Tragic! First let's get the serious quickies out of the way: chris sent us the Atlanta Linux Showcase Tutorial and Conference program for the 3rd Annual ALS, coming up October 12-16, 1999, in Atlanta, Georgia. Registration is open. Bl0w0ff noted that The dockapp warehouse has been upgraded and redesigned. k-rist sent us SimShatner. Here is a site selling a video history of Atari with interviews with the guys that did Pac-Man and all that early stuff. Someone sent us a link to another place you don't want to see a BSOD. Want some Blair Witch parodies? irishmikev sent us a South Park parody and stairs sent The Blair Family Circus Project. How about a pair of strange places to put a server? Gareth Walwyn sent us one in a potted plant and GFD noted that Linux Today has a story about a box that runs in a real Pizza Hut box. If strange Linux boxes ain't your bag, someone submitted Apple Fritter, which contains strange cases for Apples (Legos, radios, and more). Jade wrote in with how to apply for the position of Sith Apprentice, and rjh pointed us to the iMaul (seems like a lot of stuff is coming in pairs today). Evan Vetere noticed that despair.com has new de-motivators. Matthew McCabe sent us tuxtiles, which is taking votes on designs for "Linux blankets". Since we're mentioning merchandise, I gotta plug Think Geek, which is the first place I've seen with good stuff. They mailed us a box of freebies, but I actually woulda bought most of the stuff they sent me (mugs with #include <beer.h> and some sweet Perl shirts and other cool stuff). Most of the "geek" sites just sell crap, but most of this was actually clever. We probably should also note that Copyleft finally has the new Slashdot shirts from our contest winners; they look great. ralphb was the first to say that Time Digital has an article on Slashdot.
Re:Could be, but it's neither a BSOD nor NT [nt] (Score:1)
Under NT, a BSOD only appears when the processor has halted. That's why it's called a blue screen of _death_. I believe the term originated within Microsoft.

Atari Videos (Score:1)
And I'm not anonymous, I can't remember my password. (craig@ic.net)

Oh my god! Subscription caffeine at Think Geek!!! (Score:1)

Windows 9x (Score:1)
Another thing, those thinkgeek shirts (the few I saw) are the best geek shirts out there. It'd be neat if you could get any phrase you wanted put in binary. I'd like to walk up to my boss with a shirt that said "Mr. is an idiot"

Rob, you are too mean (Score:1)
Maybe the guys at ThinkGeek need to hire a geek. :) -Davidu

Problems with SimShatner (Score:1)

Wow (Score:1)

CPU temps (Score:1)
Hehe, that's believable. Until last night, I could overclock my dual celery system only if I had the case off (only got through two to four kernel compiles). Last night I installed two case fans (it previously only had the power supply fan in addition to the cpu fans), one in the front and one in the back (my case came with two fan locations, woohoo), and it happily got through 11 compiles, top -d1, procinfo -d1, and ping -f (to another machine). Now, I wonder how well it will work when summer hits (I'm in the southern hemisphere). Anyway, to paraphrase Gecko: fans are good, fans work. Or better yet: airflow is good, airflow works.

Tux Case? (Score:1)

pull my finger... (Score:1)
settle down, taco! a truly great shirt--wonder where rob got it?

Not quite BSOD's, but still worth noting. (Score:1)
I've seen several PC's in unlikely places with problems booting up. We were at Frankenmuth in this store, and the person's cash register was an old 486 that needed some TLC in the setup screen. My girlfriend had to hold me back to not push F1 and jump right on to help. The second one we saw was at the Macomb Mall.
There is a photo booth there that does a bunch of photo effects to your picture (like making it look like it was hand drawn). One day they must've lost power, because when we went by it, it had the power-on screen, and "No Keyboard Found. Press F1 to continue." There was also a wedding gift registry at Hudson's that had a BSOD on it. Unfortunately, no CTRL-ALT-DELETE on the membrane keyboard. :) Re:BSOD in an unlikely place (Score:1) Sorry, the position has been filled...... (Score:1) Ya'd think by now that a webmaster would prepare.. (Score:1) Oh, Come on, bruce...Ya know that rob.... (Score:1) This post should be in the SLASHDOT Hall of Fame (Score:1) CmdrTaco, who normally produces nice short and to-the-point posts, has turned out this behemoth. All I can say is "Cool". Mr. Malda, we salute you! (Now all you need is the code to add yourself to the HOF page) ------- Cool Linux Project of the Week [xoom.com] Re:Tuxtiles (Score:1) Blair Witch parodies (Score:1) He said how about a 'Really scary movie about three white people at the "Blair Witch Projects"' "Oh my god, that is the third time we've been to the same urine-smelly elevator" Hmmm... I guess you would have had to see it yourself, but it was pretty funny. Even More Quickies! (Score:1) Yours truly gets interviewed in Upside Today [upside.com]. See their Open Season [upside.com] feature. Bruce Re:Ya'd think by now that a webmaster would prepar (Score:1) I'm not silly enough to post any URL under my control to slashdot itself. No server I have a hand in could handle it. Yet today, in a small way, it's doing so anyway. I consider myself fortunate that it was only in the middle of a big fat pack of quickies, so my 100-hits-a-day site is only having to put up with, say, 6000 hits today instead of the real Slashdot Effect. (For the idly curious: I host the "job opening: apprentice Sith Lord" bit. Wheee. I didn't write it, it just showed up.) mmm...
/.'ed (Score:1) Re:A bit more on the BSOD (a cynics view) (Score:1) I used to run WfWG 3.11 and MS Word 6.0, and they crashed all the time. Then I moved the machine (an IBM PS/2 Model 80, circa 1989) onto NT 3.51 and the 32-bit version of Word, and have never had a crash. Ever. NT4 on the other hand doesn't seem as stable for me. Maybe Win2k will go back to the 'good old days' of NT stability. Re:Sorry, that's NOT a BSOD (Score:1) You are right, but the term seems to have filtered downwards to WinDOS. Take a look at BSOD Properties [pla-netx.com], which lets you have a Red Screen of Death, etc., on WinDOS. Try running the DOS binary [ta.jcu.cz] of XaoS [paru.cas.cz]. There is a secure server there. (Score:1) "This is a secure document that uses a high-grade encryption key for U.S. domestic use only (RC4, 128-bit)." I don't know if there was one when you visited earlier, but there definitely is an https secure server right now. Re:Could be, but it's neither a BSOD nor NT [nt] (Score:1) Whether or not the system was down... (Score:1) Ben Depends on your POV (Score:1) When you're late for a flight and you need to know what gate to run for. Re:Tuxtiles (Score:1) Demotivators (Score:1) These are a WHOLE lot better than those campy motivational posters I've seen hanging around my office... Re:BSOD in an unlikely place (Score:1) Could be, but it's neither a BSOD nor NT [nt] (Score:1) Hey, don't blame me for your reading this, I said "no text." :P Cheers, ZicoKnows@hotmail.com How many /.ers? (Score:1) How many Where do I send my resume? (Score:1) Re:BSOD in an unlikely place (Score:1) The problem is that this station will often be broadcasting this image for MONTHS!!! 24 hrs a day of Guru meditation. You'd think someone would come in and Ctrl-Amiga-Amiga... BSOD at the Airport (Score:1) That case... (Score:1) That's not really a pot plant.
(Score:1) It would have been much more 31337 if the guy had realized his original goal of creating the first online server inside somebody's ass. The bottleneck is explained at... (Score:1) Good golly (Score:1) Re:Linux Fish (Score:1) char *stupidsig = "this is my dumb sig"; Slogger (Score:1) OC, I don't wear hats. Never really did. Got 'em all lying around the house here somewhere. Well, except for the fedora. It's just too good a hat not to sit on top of my system. --- "Who pill da cubby custar?" Include alcohol???? (Score:1) Onion attacks Columbine (Score:1) Woo Hoo! T-Shirts! (Score:1) Re:amusing timeout on ThinkGeek (Score:1) Re:A bit more on the BSOD (a cynics view) (Score:1) This is not exactly true. The first couple of lines can tell you a lot if you care to spend a few minutes looking them up in the Microsoft Knowledge Base. They may not be exact (you may see half a dozen possibilities per message), but it gives you a good starting point. You should also look at the list of drivers it gives you; the problem is usually with one of the first few on the list. My laptop has BSOD'd on me twice in 3 months, and both times were within the first week and caused by a bad NIC driver. If I don't put in the PCMCIA card, the driver fails, but if I put the card in without plugging the cable into it, the driver crashes the whole machine! I will probably be flamed for this, but I have found NT to be _very_ stable for me as long as the hardware is stable. The key is to use quality hardware that isn't running on beta-quality drivers. Most of my problems have come from either using no-brand hardware (you get what you pay for) or using the latest greatest thing with driver version 1.0.1. NT drivers don't seem to come out of the beta phase until they've hit 2.0 or so. I am not, of course, speaking for the security of NT in any way, and I have seen problems like you mentioned about memory being maxed out after a while.
What I am trying to say is that in 2 years of administering a network I have seen that 99% of the instability of NT comes from beta-quality drivers. In fact, after tracking down all the crappy hardware and drivers on the problematic machines, we've averaged less than 1 BSOD a month total for 25 computers. (I know, I know... Linux may crash less than that, but that's not too shabby for what I was given to work with.) amusing timeout on ThinkGeek (Score:1) The ? is...was that timeout message there before this quickie was posted? Another ?...should it have been? Any enterprising virus writers want to attack the error message files on web servers? -t Re:BSOD at the Airport (Score:1) The only thing stupider than running flight information on NT is running it on Windows 9x. ThinkGeek gets slashdotted (Score:1) In any case, my guess is that it'll be days before I get to see what's inside : ) Re:BSOD in an unlikely place (Score:1) It must have been for one of those hotel information channels, and the computer got reset... I can see how it would be kinda useful to use a computer that's designed to output to a TV, but a TRS-80? In BASIC? Actually, it wouldn't especially surprise me if TRS-80s were used in other places, judging from the horrible graphics those channels usually have. -- Malda Synchronicity? (Score:1) The Atari video game history is produced by Howard Scott Warshaw, programmer behind "Yars' Revenge" and "E.T." for the 2600. On his website he takes joking credit for the collapse of the video game industry [netcom.com], saying "Rarely is one given the opportunity to topple a billion dollar industry single handedly. Yet according to the May '95 issue of New Media magazine (p. 27) this was my shot." In my Demotivators 2000 calendar, Despair, Inc. [despair.com] includes the November 1982 date that Howard Scott Warshaw's "E.T." was released, saying in full: "E.T." game release for Atari 2600; hastens collapse of the videogame industry.
Over 1 million copies end up buried in a New Mexico landfill. Freak coincidence, or is Rob listening to too many old Police albums? Smirkleton Re:Think Geek. (Score:1) No https at ThinkGeek? (Score:1) Proceed to checkout, and Bad. No order for you. Random, Isolated BSOD (Score:1) Warpstock '99 right after Atlanta Linux Showcase (Score:1) #include ? Oh, brother.... (Score:1) I can see it now... -- Moondog oh, so the conclusion... (Score:1) Speaking of Episode 1 Parodies... (Score:1) Please do leave comments at the "Contact Us" section! Regards, Ryan Mannion (FWIW, our parody is strictly non-profit, etc., etc.) Re:How many /.ers? (Score:1) Note to Rob - I'd like a way to change my user name, without having to register a new one. My 'old' one had Karma=7, which I'd like to keep. What is BSOD? (Score:1) Think Geek. (Score:1) We put the 'o' in .org (Score:2) BSOD in an unlikely place (Score:2) My take on SimShatner (Score:2) I think the web page version of Shatner features slightly better acting, and much, much better hair. But the meatware version has better sound quality than my PowerBook does (though it's close), and has gotten to hang out with Heather Locklear. My vote: SimShatner. After all, even with better sound in the original, you'll only want to listen for so long - not to mention that one of these days, Heather's going to start aging. And she's held it off so long that it'll be catastrophic when it happens. There's just no room for another Dick Clark, female or not. - -Josh Turiel BSOD in Austin ABIA (Score:2) This seems to me like the perfect place for Open Source. Who knows how much each airport pays for this app? I bet it's a BUNDLE. Charge 'em 70% of the going rate to write a GPL'd version, and write it for Linux. Phenomenal uptime, multi-headed monitors (soon) and suddenly travellers across the world start seeing Linux in airports. And some GPL programmers make some money. And why not release something like this under the GPL? 
It's not like people choose their airports based on the features of the departure/arrival screens... no competitive advantage there. Take your cameras to the airport. Let's start a whole gallery of these things. Mirror of BSOD (Score:2) New Demotivators (Score:2) In case anyone's wondering why no lithographs for the new Demotivators, I mailed them last week to ask. Apparently, something like 50% of the last batch accounted for 90% of their sales, so they're seeing what's popular amongst the new ones before producing the big lithographs. On a totally unrelated note, check out this [yeongyang.com] for a weird case. Not so much an Apple as a Bean. Re:Sorry, that's NOT a BSOD (Score:2) Yeah, and about every politician I know thinks "nuclear" is pronounced "nuke-ya-lur," and every Linux zealot I know that calls himself 31337 still can't figure out how to download only the patches to their latest kernel. Doesn't mean it's correct. Further, seeing as so many people here bristle whenever a journalist uses "hacker" when he could've used "cracker," I would think that they'd be in favor of choosing one's words more carefully. I don't know when the original definition was coined, but it's an NT-only thing -- when you get a blue screen in Win9x, it doesn't mean death is certain; in NT, you have no other choice but to reboot. Just because they're both blue doesn't mean that they're the same thing, no more than I'd confuse a computer running DOS 3.2 with my Linux box just because on my screen I get a prompt and white text on a black background. As to your other questions, yes, Microsoft employees occasionally use the term, but I don't remember any official documentation referring to it as anything other than a "Stop Error" or a "Blue Screen." Some of their publications, like MSDN stuff, will use "BSOD" from time to time. Cheers, ZicoKnows@hotmail.com Burnout for NT4 (Score:2) Burnout Attitudes are Contagious. Mine Might Kill You.
Perfect for: - Anyone Looking to Get Fired - Anyone consigned to use NT - Disaffected college students Re:#include beer (Score:2) /* #include "braap.h" #include "p.h" #include "cas.h" #define MAX_ALE 5 void DrinkBeer(int pints) { } int main(void) { } */ A bit more on the BSOD (a cynics view) (Score:2) The problem of the BSOD (one of them) is the lack of info about what exactly caused the problem (unless you read hex). Combine this with NT's overall attempt to hide the hardware and you get the legend of the BSOD. I've had random BSODs; perhaps it's my inexperience, but nonetheless all my users know what it is and what it means: turn it off and reboot again. It's a backlash against the advertised ease of NT administration and the reality ($$$ for software, $$$ for support). Just wait until the horror stories of 2000 overwhelming admins start to surface and you'll understand. (BTW my NT Server 4.0 (file, web) lasts about 35 days, until its memory is totally maxed out (256) and needs to be resurrected) Blue screens after-effects (Score:2) When WinNuke was all the rage, it would blue-screen Win95 and NT. Win95 was 'recoverable'. NT was not. However, when you tried to use a TCP/IP connection after acknowledging the blue screen, you found that it didn't work. Reboot. So (I guess) the difference is that you get to save your work in Win95 (unless it's on a TCP/IP-connected server!!) With NT you're just SOL. Also, many times that Win95 has BSOD'd for me, I can't just acknowledge it and keep going. The damn messages just keep coming until I summon mighty RESET. "Windows is busy waiting for a close program dialog to appear. To continue waiting, press any key. To reboot your computer, press CTRL-ALT-DEL again..." Or something like that. Sound familiar? Re:ThinkGeek gets slashdotted (Score:2) I love these guys. They even offer Jolt in flavors I never knew existed! Mmmmm... Citrus Climax or Cherry Bomb... I may never drink regular soda again!
Did you notice they even have the relative caffeine content listed? Cool! And scheduled delivery to boot!! Guys, sorry 'bout that /. effect... it was worth it for all the orders, right? Re:Could be, but it's neither a BSOD nor NT [nt] (Score:2) In any case, it's not a sight that instills warm fuzzies. Pot Plant? (Score:2) Please realise I am not saying that doing such a creative form of casing for a PC is impossible, or that this guy didn't do it. I just expected to see more regarding the construction. As it is, it just seems like a bunch of old parts in a bucket... Re:ThinkGeek gets slashdotted (Score:2) Tuxtiles (Score:2) >How long is the vote open? >Voting will stop on September 1, 1999, so we can >notify our manufacturer and get a sample made. Figures... SimShatner (Score:2) New /. t-shirt idea (Score:2) -- Yes, that *is* a real email address... YES: https at ThinkGeek? (Score:2) My order's going in now. so is the Pizza Box... (Score:2) Google [google.com] uses Squid or some other proxy/cache to harvest all of its web pages, *then* indexes them. If a link is dead, you can use their cached version instead (and see the headers) -- it's great for all the bad links you find in web searches. Sorry, that's NOT a BSOD (Score:3) For those who don't know, a BSOD is specific to NT and is equivalent to a kernel panic on most *nix variants. The NT kernel drops to the console (which is 80x40), prints a header and some debug information followed by a hex dump of the processor state and (I think) the stack. Just like a kernel panic, a BSOD is unrecoverable. In my four years of experience administering NT boxes, every BSOD I've seen has been caused by NT not liking a particular combination of hardware devices or drivers. When they do appear, they appear regularly until you resolve the conflict either by swapping hardware or updating drivers. I've yet to see an isolated, random BSOD.
It seems like some people who don't have any NT admin experience have heard the term BSOD and interpreted it to mean anytime Windows 3.X/9X/NT prints a blue screen. That's not the original meaning of the term. Re:ThinkGeek gets slashdotted (Score:3) BSOD (Score:3) If memory serves me correct, that BSOD picture was shot by none other than our very own Alan Cox. -- #include beer (Score:3) Checking for gtk... yes Checking for ESD... no WARNING: Esound library not found. Will compile without sound. Checking for imlib... yes Checking for lager_ale in _fridge... no Checking for any_kind_of_ale in _fridge.. no WARNING: We were unable to locate any ale in your refrigerator. We suggest you fix this problem immediately. --- i've seen similar things too in other places.. something, i can't remember what, i think it was windowmanager, displayed during Checking for life_signs in Kenny... no Oh my God!! They killed Kenny!! You bastards!! --- The miracle of open source software. yep (Score:3) Despair.com Y2K Calendar dates (Score:4) 1) January 1st, 2000 - Largest collective hangover in human history. 2) January 7th, 1943 - Nikola Tesla, inventor of radio, AC power and wireless communication, dies penniless in New York. 3) January 8th, 1992 - President Bush shares dinner with Japanese Prime Minister Kiichi Miyazawa. 4) January 14th, 1990 - Homer Simpson first utters "D'oh!", aiding millions in articulating a precise feeling of self-inflicted stupidity. 5) January 19th, 1983 - Apple introduces the world's first "user-friendly" computer, the 52 lb., $10,000 Lisa. 6) January 25th, 1996 - FDA approves Olestra. 7) February 10th, 1996 - Chess legend Gary Kasparov is defeated by IBM's "Deep Blue" supercomputer. 8) March 9th, 1999 - Al Gore tells CNN, "I took the initiative in creating the Internet". MIT's Dr. 
Larry Roberts makes a voting decision for the 2000 election. 9) April 29th, 1983 - "Kilroy Was Here", a concept album about a rock band's descent into self-parody, is certified platinum. 10) December 9th, 1997 - Stroboscopic effects in TV show "Pokemon" trigger seizures in over 600 Japanese children. Media exacerbates the problem by replaying clips while covering the story. Funny video game errata, pretty obscure: "E.T." game release for Atari 2600 hastens collapse of the videogame industry. Over 1 million copies end up buried in a New Mexico landfill. And August 8, 1997 - Lord British assassinated while addressing his subjects in Britannia. I know where I am buying 90% of my friends for Christmas now. Smirkleton
https://slashdot.org/story/99/09/10/1621242/steaming-heap-of-quickies
Introduction

C# (pronounced "See Sharp") is a simple, modern, object-oriented, and type-safe programming language. C# has its roots in the C family of languages and will be immediately familiar to C, C++, and Java programmers. C# is standardized by ECMA International as the ECMA-334 standard and by ISO/IEC as the ISO/IEC 23270 standard. Microsoft's C# compiler for the .NET Framework is a conforming implementation of both of these standards.

C# is an object-oriented language, but C# further includes support for component-oriented programming. Contemporary software design increasingly relies on software components in the form of self-contained and self-describing packages of functionality. Key to such components is that they present a programming model with properties, methods, and events; they have attributes that provide declarative information about the component; and they incorporate their own documentation. C# provides language constructs to directly support these concepts, making C# a very natural language in which to create and use software components.

C# has a unified type system. All C# types, including primitive types such as int and double, inherit from a single root object type. Thus, all types share a set of common operations, and values of any type can be stored, transported, and operated upon in a consistent manner. Furthermore, C# supports both user-defined reference types and value types, allowing dynamic allocation of objects as well as in-line storage of lightweight structures.

To ensure that C# programs and libraries can evolve over time in a compatible manner, much emphasis has been placed on versioning in C#'s design. Many programming languages pay little attention to this issue, and, as a result, programs written in those languages break more often than necessary when newer versions of dependent libraries are introduced.
Aspects of C#'s design that were directly influenced by versioning considerations include the separate virtual and override modifiers, the rules for method overload resolution, and support for explicit interface member declarations.

The rest of this chapter describes the essential features of the C# language. Although later chapters describe rules and exceptions in a detail-oriented and sometimes mathematical manner, this chapter strives for clarity and brevity at the expense of completeness. The intent is to provide the reader with an introduction to the language that will facilitate the writing of early programs and the reading of later chapters.

Hello world

The "Hello, World" program is traditionally used to introduce a programming language. Here it is in C#:

using System;

class Hello
{
    static void Main() {
        Console.WriteLine("Hello, World");
    }
}

C# source files typically have the file extension .cs. Assuming that the "Hello, World" program is stored in the file hello.cs, the program can be compiled with the Microsoft C# compiler using the command line

csc hello.cs

which produces an executable assembly named hello.exe. While instance methods can reference a particular enclosing object instance using the keyword this, static methods, such as Main, operate without reference to a particular object. The example

using System;

namespace Acme.Collections
{
    public class Stack
    {
        Entry top;

        public void Push(object data) {
            top = new Entry(top, data);
        }

        public object Pop() {
            if (top == null) throw new InvalidOperationException();
            object result = top.data;
            top = top.next;
            return result;
        }

        class Entry
        {
            public Entry next;
            public object data;

            public Entry(Entry next, object data) {
                this.next = next;
                this.data = data;
            }
        }
    }
}

declares a class named Stack in a namespace called Acme.Collections. The fully qualified name of this class is Acme.Collections.Stack. The class contains several members: a field named top, two methods named Push and Pop, and a nested class named Entry. The Entry class further contains three members: a field named next, a field named data, and a constructor. Assuming that the source code of the example is stored in the file acme.cs, the command line

csc /t:library acme.cs

compiles the example as a library (code without a Main entry point) and produces an assembly named acme.dll. Assemblies contain executable code in the form of Intermediate Language (IL) instructions, and symbolic information in the form of metadata.
Before it is executed, the IL code in an assembly is automatically converted to processor-specific code by the Just-In-Time (JIT) compiler of the .NET Common Language Runtime.

Because an assembly is a self-describing unit of functionality containing both code and metadata, there is no need for #include directives and header files in C#. The public types and members contained in a particular assembly are made available in a C# program simply by referencing that assembly when compiling the program. For example, this program uses the Acme.Collections.Stack class from the acme.dll assembly:

using System;
using Acme.Collections;

class Test
{
    static void Main() {
        Stack s = new Stack();
        s.Push(1);
        s.Push(10);
        s.Push(100);
        Console.WriteLine(s.Pop());
        Console.WriteLine(s.Pop());
        Console.WriteLine(s.Pop());
    }
}

If the program is stored in the file test.cs, when test.cs is compiled, the acme.dll assembly can be referenced using the compiler's /r option:

csc /r:acme.dll test.cs

This creates an executable assembly named test.exe, which, when run, produces the output:

100
10
1

C# permits the source text of a program to be stored in several source files. When a multi-file C# program is compiled, all of the source files are processed together, and the source files can freely reference each other; conceptually, it is as if all the source files were concatenated into one large file before being processed. Forward declarations are never needed in C# because, with very few exceptions, declaration order is insignificant. C# does not limit a source file to declaring only one public type, nor does it require the name of the source file to match a type declared in the source file.

Types and variables

There are two kinds of types in C#: value types and reference types. Variables of value types directly contain their data, whereas variables of reference types store references to their data, the latter being known as objects. With reference types, it is possible for two variables to reference the same object, and thus possible for operations on one variable to affect the object referenced by the other. With value types, the variables each have their own copy of the data, and it is not possible for operations on one to affect the other (except in the case of ref and out parameter variables).
C#'s value types are further divided into simple types, enum types, struct types, and nullable types, and C#'s reference types are further divided into class types, interface types, array types, and delegate types. The following table provides an overview of C#'s type system. The eight integral types provide support for 8-bit, 16-bit, 32-bit, and 64-bit values in signed or unsigned form. The two floating point types, float and double, are represented using the 32-bit single-precision and 64-bit double-precision IEEE 754 formats. The decimal type is a 128-bit data type suitable for financial and monetary calculations. C#'s bool type is used to represent boolean values—values that are either true or false. Character and string processing in C# uses Unicode encoding. The char type represents a UTF-16 code unit, and the string type represents a sequence of UTF-16 code units. The following table summarizes C#'s numeric types. C# programs use type declarations to create new types. A type declaration specifies the name and the members of the new type. Five of C#'s categories of types are user-definable: class types, struct types, interface types, enum types, and delegate types. A class type defines a data structure that contains data members (fields) and function members (methods, properties, and others). Class types support single inheritance and polymorphism, mechanisms whereby derived classes can extend and specialize base classes. A struct type is similar to a class type in that it represents a structure with data members and function members. However, unlike classes, structs are value types and do not require heap allocation. Struct types do not support user-specified inheritance, and all struct types implicitly inherit from type object. An interface type defines a contract as a named set of public function members. A class or struct that implements an interface must provide implementations of the interface's function members. 
An interface may inherit from multiple base interfaces, and a class or struct may implement multiple interfaces. A delegate type represents references to methods with a particular parameter list and return type; delegates make it possible to treat methods as entities that can be assigned to variables and passed as parameters. Class, struct, interface and delegate types all support generics, whereby they can be parameterized with other types.

An enum type is a distinct type with named constants. Every enum type has an underlying type, which must be one of the eight integral types. The set of values of an enum type is the same as the set of values of the underlying type.

C# supports single- and multi-dimensional arrays of any type. Unlike the types listed above, array types do not have to be declared before they can be used. Instead, array types are constructed by following a type name with square brackets. For example, int[] is a single-dimensional array of int, int[,] is a two-dimensional array of int, and int[][] is a single-dimensional array of single-dimensional arrays of int.

Nullable types also do not have to be declared before they can be used. For each non-nullable value type T there is a corresponding nullable type T?, which can hold an additional value null. For instance, int? is a type that can hold any 32-bit integer or the value null.

Values of value types are treated as objects by performing boxing and unboxing operations. In the following example, an int value is converted to object and back again to int.

using System;

class Test
{
    static void Main() {
        int i = 123;
        object o = i;       // Boxing
        int j = (int)o;     // Unboxing
    }
}

When a value of a value type is converted to type object, an object instance, also called a "box," is allocated to hold the value, and the value is copied into that box. Conversely, when an object reference is cast to a value type, a check is made that the referenced object is a box of the correct value type, and, if the check succeeds, the value in the box is copied out. C#'s unified type system effectively means that value types can become objects "on demand." Because of the unification, general-purpose libraries that use type object can be used with both reference types and value types.
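The value/reference distinction and the int? nullable type described above can be sketched in a few lines. This is an illustrative example, not taken from the chapter; the names PointValue, PointRef, and Demo are made up for the sketch:

```csharp
using System;

struct PointValue        // a value type: each variable holds its own copy
{
    public int x;
}

class PointRef           // a reference type: variables share the object
{
    public int x;
}

class Demo
{
    static void Main() {
        PointValue a = new PointValue { x = 1 };
        PointValue b = a;      // copies the data
        b.x = 2;               // a.x is unaffected

        PointRef c = new PointRef { x = 1 };
        PointRef d = c;        // copies the reference only
        d.x = 2;               // the change is also visible through c

        int? n = null;         // nullable type: any int, or null
        Console.WriteLine("{0} {1} {2}", a.x, c.x, n.HasValue);
        // prints "1 2 False"
    }
}
```

Changing b.x leaves a.x alone because the struct assignment copied the whole value, while c and d name the same heap object, so the write through d is observable through c.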
There are several kinds of variables in C#, including fields, array elements, local variables, and parameters. Variables represent storage locations, and every variable has a type that determines what values can be stored in the variable, as shown by the following table.

Expressions

Expressions are constructed from operands and operators. The operators of an expression indicate which operations to apply to the operands. Examples of operators include +, -, *, /, and new. Examples of operands include literals, fields, local variables, and expressions.

When an expression contains multiple operators, the precedence of the operators controls the order in which the individual operators are evaluated. For example, the expression x + y * z is evaluated as x + (y * z) because the * operator has higher precedence than the + operator.

Most operators can be overloaded. Operator overloading permits user-defined operator implementations to be specified for operations where one or both of the operands are of a user-defined class or struct type.

The following table summarizes C#'s operators, listing the operator categories in order of precedence from highest to lowest. Operators in the same category have equal precedence.

Statements

The actions of a program are expressed using statements. C# supports several different kinds of statements, a number of which are defined in terms of embedded statements.

A block permits multiple statements to be written in contexts where a single statement is allowed. A block consists of a list of statements written between the delimiters { and }.

Declaration statements are used to declare local variables and constants.

Expression statements are used to evaluate expressions. Expressions that can be used as statements include method invocations, object allocations using the new operator, assignments using = and the compound assignment operators, increment and decrement operations using the ++ and -- operators and await expressions.
Selection statements are used to select one of a number of possible statements for execution based on the value of some expression. In this group are the if and switch statements. Iteration statements are used to repeatedly execute an embedded statement. In this group are the while, do, for, and foreach statements. Jump statements are used to transfer control. In this group are the break, continue, goto, throw, return, and yield statements. The try... catch statement is used to catch exceptions that occur during execution of a block, and the try... finally statement is used to specify finalization code that is always executed, whether an exception occurred or not. The checked and unchecked statements are used to control the overflow checking context for integral-type arithmetic operations and conversions. The lock statement is used to obtain the mutual-exclusion lock for a given object, execute a statement, and then release the lock. The using statement is used to obtain a resource, execute a statement, and then dispose of that resource. 
Below are examples of each kind of statement Local variable declarations static void Main() { int a; int b = 2, c = 3; a = 1; Console.WriteLine(a + b + c); } Local constant declaration static void Main() { const float pi = 3.1415927f; const int r = 25; Console.WriteLine(pi * r * r); } Expression statement static void Main() { int i; i = 123; // Expression statement Console.WriteLine(i); // Expression statement i++; // Expression statement Console.WriteLine(i); // Expression statement } if statement static void Main(string[] args) { if (args.Length == 0) { Console.WriteLine("No arguments"); } else { Console.WriteLine("One or more arguments"); } } switch statement static void Main(string[] args) { int n = args.Length; switch (n) { case 0: Console.WriteLine("No arguments"); break; case 1: Console.WriteLine("One argument"); break; default: Console.WriteLine("{0} arguments", n); break; } } while statement static void Main(string[] args) { int i = 0; while (i < args.Length) { Console.WriteLine(args[i]); i++; } } do statement static void Main() { string s; do { s = Console.ReadLine(); if (s != null) Console.WriteLine(s); } while (s != null); } for statement static void Main(string[] args) { for (int i = 0; i < args.Length; i++) { Console.WriteLine(args[i]); } } foreach statement static void Main(string[] args) { foreach (string s in args) { Console.WriteLine(s); } } break statement static void Main() { while (true) { string s = Console.ReadLine(); if (s == null) break; Console.WriteLine(s); } } continue statement static void Main(string[] args) { for (int i = 0; i < args.Length; i++) { if (args[i].StartsWith("/")) continue; Console.WriteLine(args[i]); } } goto statement static void Main(string[] args) { int i = 0; goto check; loop: Console.WriteLine(args[i++]); check: if (i < args.Length) goto loop; } return statement static int Add(int a, int b) { return a + b; } static void Main() { Console.WriteLine(Add(1, 2)); return; } yield statement static IEnumerable<int> 
Range(int from, int to) { for (int i = from; i < to; i++) { yield return i; } yield break; } static void Main() { foreach (int x in Range(-10,10)) { Console.WriteLine(x); } } throw and try statements static double Divide(double x, double y) { if (y == 0) throw new DivideByZeroException(); return x / y; } static void Main(string[] args) { try { if (args.Length != 2) { throw new Exception("Two numbers required"); } double x = double.Parse(args[0]); double y = double.Parse(args[1]); Console.WriteLine(Divide(x, y)); } catch (Exception e) { Console.WriteLine(e.Message); } finally { Console.WriteLine("Good bye!"); } } checked and unchecked statements static void Main() { int i = int.MaxValue; checked { Console.WriteLine(i + 1); // Exception } unchecked { Console.WriteLine(i + 1); // Overflow } } lock statement class Account { decimal balance; public void Withdraw(decimal amount) { lock (this) { if (amount > balance) { throw new Exception("Insufficient funds"); } balance -= amount; } } } using statement static void Main() { using (TextWriter w = File.CreateText("test.txt")) { w.WriteLine("Line one"); w.WriteLine("Line two"); w.WriteLine("Line three"); } } Classes and objects Classes are the most fundamental of C#'s types. A class is a data structure that combines state (fields) and actions (methods and other function members) in a single unit. A class provides a definition for dynamically created instances of the class, also known as objects. Classes support inheritance and polymorphism, mechanisms whereby derived classes can extend and specialize base classes. New classes are created using class declarations. A class declaration starts with a header that specifies the attributes and modifiers of the class, the name of the class, the base class (if given), and the interfaces implemented by the class. The header is followed by the class body, which consists of a list of member declarations written between the delimiters { and }. The following is a declaration of a simple class named Point: public class Point { public int x, y; public Point(int x, int y) { this.x = x; this.y = y; } } Instances of classes are created using the new operator, which allocates memory for a new instance, invokes a constructor to initialize the instance, and returns a reference to the instance.
The following statements create two Point objects and store references to those objects in two variables: Point p1 = new Point(0, 0); Point p2 = new Point(10, 20); The memory occupied by an object is automatically reclaimed when the object is no longer in use. It is neither necessary nor possible to explicitly deallocate objects in C#. The members of a class are either static members or instance members. Static members belong to classes, and instance members belong to objects (instances of classes). The following table provides an overview of the kinds of members a class can contain. Accessibility Each member of a class has an associated accessibility, which controls the regions of program text that are able to access the member. There are five possible forms of accessibility. These are summarized in the following table. Type parameters A class definition may specify a set of type parameters by following the class name with angle brackets enclosing a list of type parameter names. The type parameters can then be used in the body of the class declaration to define the members of the class. In the following example, the type parameters of Pair are TFirst and TSecond: public class Pair<TFirst,TSecond> { public TFirst First; public TSecond Second; } A class type that is declared to take type parameters is called a generic class type. Struct, interface and delegate types can also be generic. When the generic class is used, type arguments must be provided for each of the type parameters: Pair<int,string> pair = new Pair<int,string> { First = 1, Second = "two" }; int i = pair.First; // TFirst is int string s = pair.Second; // TSecond is string A generic type with type arguments provided, like Pair<int,string> above, is called a constructed type. Base classes A class declaration may specify a base class by following the class name and type parameters with a colon and the name of the base class. Omitting a base class specification is the same as deriving from type object.
In the following example, the base class of Point3D is Point, and the base class of Point is object: public class Point { public int x, y; public Point(int x, int y) { this.x = x; this.y = y; } } public class Point3D: Point { public int z; public Point3D(int x, int y, int z): base(x, y) { this.z = z; } } A class inherits the members of its base class. Inheritance means that a class implicitly contains all members of its base class, except for the instance and static constructors, and the destructors of the base class. A derived class can add new members to those it inherits, but it cannot remove the definition of an inherited member. In the previous example, Point3D inherits the x and y fields from Point, and every Point3D instance contains three fields, x, y, and z. An implicit conversion exists from a class type to any of its base class types. Therefore, a variable of a class type can reference an instance of that class or an instance of any derived class. For example, given the previous class declarations, a variable of type Point can reference either a Point or a Point3D: Point a = new Point(10, 20); Point b = new Point3D(10, 20, 30); Fields A field is a variable that is associated with a class or with an instance of a class. A field declared with the static modifier defines a static field. A static field identifies exactly one storage location. No matter how many instances of a class are created, there is only ever one copy of a static field. A field declared without the static modifier defines an instance field. Every instance of a class contains a separate copy of all the instance fields of that class. 
In the following example, each instance of the Color class has a separate copy of the r, g, and b instance fields, but there is only one copy of the Black, White, Red, Green, and Blue static fields: public class Color { public static readonly Color Black = new Color(0, 0, 0); public static readonly Color White = new Color(255, 255, 255); public static readonly Color Red = new Color(255, 0, 0); public static readonly Color Green = new Color(0, 255, 0); public static readonly Color Blue = new Color(0, 0, 255); private byte r, g, b; public Color(byte r, byte g, byte b) { this.r = r; this.g = g; this.b = b; } } As shown in the previous example, read-only fields may be declared with a readonly modifier. Assignment to a readonly field can only occur as part of the field's declaration or in a constructor in the same class. Methods A method is a member that implements a computation or action that can be performed by an object or class. Static methods are accessed through the class. Instance methods are accessed through instances of the class. Methods have a (possibly empty) list of parameters, which represent values or variable references passed to the method, and a return type, which specifies the type of the value computed and returned by the method. A method's return type is void if it does not return a value. Like types, methods may also have a set of type parameters, for which type arguments must be specified when the method is called. Unlike types, the type arguments can often be inferred from the arguments of a method call and need not be explicitly given. The signature of a method must be unique in the class in which the method is declared. The signature of a method consists of the name of the method, the number of type parameters and the number, modifiers, and types of its parameters. The signature of a method does not include the return type. Parameters Parameters are used to pass values or variable references to methods. 
The parameters of a method get their actual values from the arguments that are specified when the method is invoked. There are four kinds of parameters: value parameters, reference parameters, output parameters, and parameter arrays. A value parameter is used for input parameter passing. A value parameter corresponds to a local variable that gets its initial value from the argument that was passed for the parameter. Modifications to a value parameter do not affect the argument that was passed for the parameter. Value parameters can be optional, by specifying a default value so that corresponding arguments can be omitted. A reference parameter is used for both input and output parameter passing. The argument passed for a reference parameter must be a variable, and during execution of the method, the reference parameter represents the same storage location as the argument variable. A reference parameter is declared with the ref modifier. The following example shows the use of ref parameters. using System; class Test { static void Swap(ref int x, ref int y) { int temp = x; x = y; y = temp; } static void Main() { int i = 1, j = 2; Swap(ref i, ref j); Console.WriteLine("{0} {1}", i, j); // Outputs "2 1" } } An output parameter is used for output parameter passing. An output parameter is similar to a reference parameter except that the initial value of the caller-provided argument is unimportant. An output parameter is declared with the out modifier. The following example shows the use of out parameters. using System; class Test { static void Divide(int x, int y, out int result, out int remainder) { result = x / y; remainder = x % y; } static void Main() { int res, rem; Divide(10, 3, out res, out rem); Console.WriteLine("{0} {1}", res, rem); // Outputs "3 1" } } A parameter array permits a variable number of arguments to be passed to a method. A parameter array is declared with the params modifier. 
Only the last parameter of a method can be a parameter array, and the type of a parameter array must be a single-dimensional array type. The Write and WriteLine methods of the System.Console class are good examples of parameter array usage. They are declared as follows. public class Console { public static void Write(string fmt, params object[] args) {...} public static void WriteLine(string fmt, params object[] args) {...} ... } Within a method that uses a parameter array, the parameter array behaves exactly like a regular parameter of an array type. However, in an invocation of a method with a parameter array, it is possible to pass either a single argument of the parameter array type or any number of arguments of the element type of the parameter array. In the latter case, an array instance is automatically created and initialized with the given arguments. This example Console.WriteLine("x={0} y={1} z={2}", x, y, z); is equivalent to writing the following. string s = "x={0} y={1} z={2}"; object[] args = new object[3]; args[0] = x; args[1] = y; args[2] = z; Console.WriteLine(s, args); Method body and local variables A method's body specifies the statements to execute when the method is invoked. A method body can declare variables that are specific to the invocation of the method. Such variables are called local variables. A local variable declaration specifies a type name, a variable name, and possibly an initial value. The following example declares a local variable i with an initial value of zero and a local variable j with no initial value. using System; class Squares { static void Main() { int i = 0; int j; while (i < 10) { j = i * i; Console.WriteLine("{0} x {0} = {1}", i, j); i = i + 1; } } } C# requires a local variable to be definitely assigned before its value can be obtained. 
For example, if the declaration of the previous i did not include an initial value, the compiler would report an error for the subsequent usages of i because i would not be definitely assigned at those points in the program. A method can use return statements to return control to its caller. In a method returning void, return statements cannot specify an expression. In a method returning non-void, return statements must include an expression that computes the return value. Static and instance methods A method declared with a static modifier is a static method. A static method does not operate on a specific instance and can only directly access static members. A method declared without a static modifier is an instance method. An instance method operates on a specific instance and can access both static and instance members. The instance on which an instance method was invoked can be explicitly accessed as this. It is an error to refer to this in a static method. The following Entity class has both static and instance members. class Entity { static int nextSerialNo; int serialNo; public Entity() { serialNo = nextSerialNo++; } public int GetSerialNo() { return serialNo; } public static int GetNextSerialNo() { return nextSerialNo; } public static void SetNextSerialNo(int value) { nextSerialNo = value; } } Each Entity instance contains a serial number (and presumably some other information that is not shown here). The Entity constructor (which is like an instance method) initializes the new instance with the next available serial number. Because the constructor is an instance member, it is permitted to access both the serialNo instance field and the nextSerialNo static field. The GetNextSerialNo and SetNextSerialNo static methods can access the nextSerialNo static field, but it would be an error for them to directly access the serialNo instance field. The following example shows the use of the Entity class.
using System; class Test { static void Main() { Entity.SetNextSerialNo(1000); Entity e1 = new Entity(); Entity e2 = new Entity(); Console.WriteLine(e1.GetSerialNo()); // Outputs "1000" Console.WriteLine(e2.GetSerialNo()); // Outputs "1001" Console.WriteLine(Entity.GetNextSerialNo()); // Outputs "1002" } } Note that the SetNextSerialNo and GetNextSerialNo static methods are invoked on the class whereas the GetSerialNo instance method is invoked on instances of the class. Virtual, override, and abstract methods When an instance method declaration includes a virtual modifier, the method is said to be a virtual method. When no virtual modifier is present, the method is said to be a non-virtual method. When a virtual method is invoked, the run-time type of the instance for which that invocation takes place determines the actual method implementation to invoke. In a nonvirtual method invocation, the compile-time type of the instance is the determining factor. A virtual method can be overridden in a derived class. When an instance method declaration includes an override modifier, the method overrides an inherited virtual method with the same signature. Whereas a virtual method declaration introduces a new method, an override method declaration specializes an existing inherited virtual method by providing a new implementation of that method. An abstract method is a virtual method with no implementation. An abstract method is declared with the abstract modifier and is permitted only in a class that is also declared abstract. An abstract method must be overridden in every non-abstract derived class. The following example declares an abstract class, Expression, which represents an expression tree node, and three derived classes, Constant, VariableReference, and Operation, which implement expression tree nodes for constants, variable references, and arithmetic operations. 
(This is similar to, but not to be confused with the expression tree types introduced in Expression tree types). using System; using System.Collections; public abstract class Expression { public abstract double Evaluate(Hashtable vars); } public class Constant: Expression { double value; public Constant(double value) { this.value = value; } public override double Evaluate(Hashtable vars) { return value; } } public class VariableReference: Expression { string name; public VariableReference(string name) { this.name = name; } public override double Evaluate(Hashtable vars) { object value = vars[name]; if (value == null) { throw new Exception("Unknown variable: " + name); } return Convert.ToDouble(value); } } public class Operation: Expression { Expression left; char op; Expression right; public Operation(Expression left, char op, Expression right) { this.left = left; this.op = op; this.right = right; } public override double Evaluate(Hashtable vars) { double x = left.Evaluate(vars); double y = right.Evaluate(vars); switch (op) { case '+': return x + y; case '-': return x - y; case '*': return x * y; case '/': return x / y; } throw new Exception("Unknown operator"); } } The previous four classes can be used to model arithmetic expressions. For example, using instances of these classes, the expression x + 3 can be represented as follows. Expression e = new Operation( new VariableReference("x"), '+', new Constant(3)); The Evaluate method of an Expression instance is invoked to evaluate the given expression and produce a double value. The method takes as an argument a Hashtable that contains variable names (as keys of the entries) and values (as values of the entries). The Evaluate method is a virtual abstract method, meaning that non-abstract derived classes must override it to provide an actual implementation. A Constant's implementation of Evaluate simply returns the stored constant. 
A VariableReference's implementation looks up the variable name in the hashtable and returns the resulting value. An Operation's implementation first evaluates the left and right operands (by recursively invoking their Evaluate methods) and then performs the given arithmetic operation. The following program uses the Expression classes to evaluate the expression x * (y + 2) for different values of x and y. using System; using System.Collections; class Test { static void Main() { Expression e = new Operation( new VariableReference("x"), '*', new Operation( new VariableReference("y"), '+', new Constant(2) ) ); Hashtable vars = new Hashtable(); vars["x"] = 3; vars["y"] = 5; Console.WriteLine(e.Evaluate(vars)); // Outputs "21" vars["x"] = 1.5; vars["y"] = 9; Console.WriteLine(e.Evaluate(vars)); // Outputs "16.5" } } Method overloading Method overloading permits multiple methods in the same class to have the same name as long as they have unique signatures. When compiling an invocation of an overloaded method, the compiler uses overload resolution to determine the specific method to invoke. Overload resolution finds the one method that best matches the arguments or reports an error if no single best match can be found. The following example shows overload resolution in effect. The comment for each invocation in the Main method shows which method is actually invoked. 
class Test { static void F() { Console.WriteLine("F()"); } static void F(object x) { Console.WriteLine("F(object)"); } static void F(int x) { Console.WriteLine("F(int)"); } static void F(double x) { Console.WriteLine("F(double)"); } static void F<T>(T x) { Console.WriteLine("F<T>(T)"); } static void F(double x, double y) { Console.WriteLine("F(double, double)"); } static void Main() { F(); // Invokes F() F(1); // Invokes F(int) F(1.0); // Invokes F(double) F("abc"); // Invokes F(object) F((double)1); // Invokes F(double) F((object)1); // Invokes F(object) F<int>(1); // Invokes F<T>(T) F(1, 1); // Invokes F(double, double) } } As shown by the example, a particular method can always be selected by explicitly casting the arguments to the exact parameter types and/or explicitly supplying type arguments. Other function members Members that contain executable code are collectively known as the function members of a class. The preceding section describes methods, which are the primary kind of function members. This section describes the other kinds of function members supported by C#: constructors, properties, indexers, events, operators, and destructors. The following code shows a generic class called List<T>, which implements a growable list of objects. The class contains several examples of the most common kinds of function members. public class List<T> { // Constant... const int defaultCapacity = 4; // Fields... T[] items; int count; // Constructors... public List(int capacity = defaultCapacity) { items = new T[capacity]; } // Properties... public int Count { get { return count; } } public int Capacity { get { return items.Length; } set { if (value < count) value = count; if (value != items.Length) { T[] newItems = new T[value]; Array.Copy(items, 0, newItems, 0, count); items = newItems; } } } // Indexer... public T this[int index] { get { return items[index]; } set { items[index] = value; OnChanged(); } } // Methods... 
public void Add(T item) { if (count == Capacity) Capacity = count * 2; items[count] = item; count++; OnChanged(); } protected virtual void OnChanged() { if (Changed != null) Changed(this, EventArgs.Empty); } public override bool Equals(object other) { return Equals(this, other as List<T>); } static bool Equals(List<T> a, List<T> b) { if (a == null) return b == null; if (b == null || a.count != b.count) return false; for (int i = 0; i < a.count; i++) { if (!object.Equals(a.items[i], b.items[i])) { return false; } } return true; } // Event... public event EventHandler Changed; // Operators... public static bool operator ==(List<T> a, List<T> b) { return Equals(a, b); } public static bool operator !=(List<T> a, List<T> b) { return !Equals(a, b); } } Constructors C# supports both instance and static constructors. An instance constructor is a member that implements the actions required to initialize an instance of a class. A static constructor is a member that implements the actions required to initialize a class itself when it is first loaded. A constructor is declared like a method with no return type and the same name as the containing class. If a constructor declaration includes a static modifier, it declares a static constructor. Otherwise, it declares an instance constructor. Instance constructors can be overloaded and can have optional parameters. For example, the List<T> class declares a single instance constructor with an optional int parameter. Instance constructors are invoked with the new operator. The following statements allocate two List<string> instances, one using the constructor's default capacity and one supplying the optional argument. List<string> list1 = new List<string>(); List<string> list2 = new List<string>(10); Unlike other members, instance constructors are not inherited, and a class has no instance constructors other than those actually declared in the class. If no instance constructor is supplied for a class, then an empty one with no parameters is automatically provided. Properties A property is a member that provides access to a characteristic of an object or of a class. Examples of properties include the length of a string, the size of a font, the caption of a window, the name of a customer, and so on. Properties are a natural extension of fields: both are named members with associated types, and the syntax for accessing fields and properties is the same. However, unlike fields, properties do not denote storage locations. Instead, properties have accessors that specify the statements to be executed when their values are read or written. A property is declared like a field, except that the declaration ends with a get accessor and/or a set accessor written between the delimiters { and } instead of ending in a semicolon. A property that has both a get accessor and a set accessor is a read-write property, a property that has only a get accessor is a read-only property, and a property that has only a set accessor is a write-only property. A get accessor corresponds to a parameterless method with a return value of the property type. Except as the target of an assignment, when a property is referenced in an expression, the get accessor of the property is invoked to compute the value of the property. A set accessor corresponds to a method with a single parameter named value and no return type.
When a property is referenced as the target of an assignment or as the operand of ++ or --, the set accessor is invoked with an argument that provides the new value. The List<T> class declares two properties, Count and Capacity, which are read-only and read-write, respectively. The following is an example of use of these properties. List<string> names = new List<string>(); names.Capacity = 100; // Invokes set accessor int i = names.Count; // Invokes get accessor int j = names.Capacity; // Invokes get accessor Similar to fields and methods, C# supports both instance properties and static properties. Static properties are declared with the static modifier, and instance properties are declared without it. The accessor(s) of a property can be virtual. When a property declaration includes a virtual, abstract, or override modifier, it applies to the accessor(s) of the property. Indexers An indexer is a member that enables objects to be indexed in the same way as an array. An indexer is declared like a property except that the name of the member is this followed by a parameter list written between the delimiters [ and ]. The parameters are available in the accessor(s) of the indexer. Similar to properties, indexers can be read-write, read-only, and write-only, and the accessor(s) of an indexer can be virtual. The List class declares a single read-write indexer that takes an int parameter. The indexer makes it possible to index List instances with int values. For example List<string> names = new List<string>(); names.Add("Liz"); names.Add("Martha"); names.Add("Beth"); for (int i = 0; i < names.Count; i++) { string s = names[i]; names[i] = s.ToUpper(); } Indexers can be overloaded, meaning that a class can declare multiple indexers as long as the number or types of their parameters differ. Events An event is a member that enables a class or object to provide notifications. 
An event is declared like a field except that the declaration includes an event keyword and the type must be a delegate type. Within a class that declares an event member, the event behaves just like a field of a delegate type (provided the event is not abstract and does not declare accessors). The field stores a reference to a delegate that represents the event handlers that have been added to the event. If no event handlers are present, the field is null. The List<T> class declares a single event member called Changed, which indicates that a new item has been added to the list. The Changed event is raised by the OnChanged virtual method, which first checks whether the event is null (meaning that no handlers are present). The notion of raising an event is precisely equivalent to invoking the delegate represented by the event—thus, there are no special language constructs for raising events. Clients react to events through event handlers. Event handlers are attached using the += operator and removed using the -= operator. The following example attaches an event handler to the Changed event of a List<string>. using System; class Test { static int changeCount; static void ListChanged(object sender, EventArgs e) { changeCount++; } static void Main() { List<string> names = new List<string>(); names.Changed += new EventHandler(ListChanged); names.Add("Liz"); names.Add("Martha"); names.Add("Beth"); Console.WriteLine(changeCount); // Outputs "3" } } For advanced scenarios where control of the underlying storage of an event is desired, an event declaration can explicitly provide add and remove accessors, which are somewhat similar to the set accessor of a property. Operators An operator is a member that defines the meaning of applying a particular expression operator to instances of a class. Three kinds of operators can be defined: unary operators, binary operators, and conversion operators. All operators must be declared as public and static.
The List<T> class declares two operators, operator== and operator!=, and thus gives new meaning to expressions that apply those operators to List instances. Specifically, the operators define equality of two List<T> instances as comparing each of the contained objects using their Equals methods. The following example uses the == operator to compare two List<int> instances. using System; class Test { static void Main() { List<int> a = new List<int>(); a.Add(1); a.Add(2); List<int> b = new List<int>(); b.Add(1); b.Add(2); Console.WriteLine(a == b); // Outputs "True" b.Add(3); Console.WriteLine(a == b); // Outputs "False" } } The first Console.WriteLine outputs True because the two lists contain the same number of objects with the same values in the same order. Had List<T> not defined operator==, the first Console.WriteLine would have output False because a and b reference different List<int> instances. Destructors A destructor is a member that implements the actions required to destruct an instance of a class. Destructors cannot have parameters, they cannot have accessibility modifiers, and they cannot be invoked explicitly. The destructor for an instance is invoked automatically during garbage collection. The garbage collector is allowed wide latitude in deciding when to collect objects and run destructors. Specifically, the timing of destructor invocations is not deterministic, and destructors may be executed on any thread. For these and other reasons, classes should implement destructors only when no other solutions are feasible. The using statement provides a better approach to object destruction. Structs Like classes, structs are data structures that can contain data members and function members, but unlike classes, structs are value types and do not require heap allocation. A variable of a struct type directly stores the data of the struct, whereas a variable of a class type stores a reference to a dynamically allocated object. 
Struct types do not support user-specified inheritance, and all struct types implicitly inherit from type object. The use of structs rather than classes for small data structures can make a big difference in the number of memory allocations a program performs. For example, the following program creates and initializes an array of 100 points. With Point implemented as a class, 101 separate objects are instantiated—one for the array and one each for the 100 elements. class Point { public int x, y; public Point(int x, int y) { this.x = x; this.y = y; } } class Test { static void Main() { Point[] points = new Point[100]; for (int i = 0; i < 100; i++) points[i] = new Point(i, i); } } An alternative is to make Point a struct. struct Point { public int x, y; public Point(int x, int y) { this.x = x; this.y = y; } } Now, only one object is instantiated—the one for the array—and the Point instances are stored in-line in the array. Struct constructors are invoked with the new operator, but that does not imply that memory is being allocated. Instead of dynamically allocating an object and returning a reference to it, a struct constructor simply returns the struct value itself (typically in a temporary location on the stack), and this value is then copied as necessary. With classes, it is possible for two variables to reference the same object and thus possible for operations on one variable to affect the object referenced by the other variable. With structs, the variables each have their own copy of the data, and it is not possible for operations on one to affect the other. For example, the output produced by the following code fragment depends on whether Point is a class or a struct. Point a = new Point(10, 10); Point b = a; a.x = 20; Console.WriteLine(b.x); If Point is a class, the output is 20 because a and b reference the same object. If Point is a struct, the output is 10 because the assignment of a to b creates a copy of the value, and this copy is unaffected by the subsequent assignment to a.x. The previous example highlights two of the limitations of structs.
First, copying an entire struct is typically less efficient than copying an object reference, so assignment and value parameter passing can be more expensive with structs than with reference types. Second, except for ref and out parameters, it is not possible to create references to structs, which rules out their usage in a number of situations. Arrays An array is a data structure that contains a number of variables that are accessed through computed indices. The variables contained in an array, also called the elements of the array, are all of the same type, and this type is called the element type of the array. Array types are reference types, and the declaration of an array variable simply sets aside space for a reference to an array instance. Actual array instances are created dynamically at run-time using the new operator. The new operator automatically initializes the elements of an array to their default value, which, for example, is zero for all numeric types and null for all reference types. The following example creates an array of int elements, initializes the array, and prints out the contents of the array. using System; class Test { static void Main() { int[] a = new int[10]; for (int i = 0; i < a.Length; i++) { a[i] = i * i; } for (int i = 0; i < a.Length; i++) { Console.WriteLine("a[{0}] = {1}", i, a[i]); } } } This example creates and operates on a single-dimensional array. C# also supports multi-dimensional arrays. The number of dimensions of an array type, also known as the rank of the array type, is one plus the number of commas written between the square brackets of the array type. The following example allocates a one-dimensional, a two-dimensional, and a three-dimensional array. int[] a1 = new int[10]; int[,] a2 = new int[10, 5]; int[,,] a3 = new int[10, 5, 2]; The element type of an array can be any type, including an array type. An array with elements of an array type is sometimes called a jagged array because the lengths of the element arrays do not all have to be the same. The following example allocates an array of arrays of int: int[][] a = new int[3][]; a[0] = new int[10]; a[1] = new int[5]; a[2] = new int[20]; The first line creates an array with three elements, each of type int[] and each with an initial value of null. The subsequent lines then initialize the three elements with references to individual array instances of varying lengths. The new operator permits the initial values of the array elements to be specified using an array initializer, which is a list of expressions written between the delimiters { and }. The following example allocates and initializes an int[] with three elements. int[] a = new int[] {1, 2, 3}; Note that the length of the array is inferred from the number of expressions between { and }. Local variable and field declarations can be shortened further such that the array type does not have to be restated. int[] a = {1, 2, 3}; Both of the previous examples are equivalent to the following: int[] t = new int[3]; t[0] = 1; t[1] = 2; t[2] = 3; int[] a = t; Interfaces An interface defines a contract that can be implemented by classes and structs. An interface can contain methods, properties, events, and indexers.
An interface does not provide implementations of the members it defines—it merely specifies the members that must be supplied by classes or structs that implement the interface. Interfaces may employ multiple inheritance. In the following example, the interface IComboBox inherits from both ITextBox and IListBox.

    interface IControl
    {
        void Paint();
    }
    interface ITextBox: IControl
    {
        void SetText(string text);
    }
    interface IListBox: IControl
    {
        void SetItems(string[] items);
    }
    interface IComboBox: ITextBox, IListBox {}

Classes and structs can implement multiple interfaces, and an instance of an implementing type can be implicitly converted to any of the interface types it implements.

Enums

An enum type is a distinct value type with a set of named constants. The following example declares and uses an enum type named Color with three constant values, Red, Green, and Blue.

    using System;
    enum Color
    {
        Red,
        Green,
        Blue
    }
    class Test
    {
        static void PrintColor(Color color) {
            switch (color) {
                case Color.Red:
                    Console.WriteLine("Red");
                    break;
                case Color.Green:
                    Console.WriteLine("Green");
                    break;
                case Color.Blue:
                    Console.WriteLine("Blue");
                    break;
                default:
                    Console.WriteLine("Unknown color");
                    break;
            }
        }
        static void Main() {
            Color c = Color.Red;
            PrintColor(c);
            PrintColor(Color.Blue);
        }
    }

Each enum type has a corresponding integral type called the underlying type of the enum type. An enum type that does not explicitly declare an underlying type has an underlying type of int. An enum type's storage format and range of possible values are determined by its underlying type. The set of values that an enum type can take on is not limited by its enum members. In particular, any value of the underlying type of an enum can be cast to the enum type and is a distinct valid value of that enum type. The following example declares an enum type named Alignment with an underlying type of sbyte.

    enum Alignment: sbyte
    {
        Left = -1,
        Center = 0,
        Right = 1
    }

As shown by the previous example, an enum member declaration can include a constant expression that specifies the value of the member. The constant value for each enum member must be in the range of the underlying type of the enum.
When an enum member declaration does not explicitly specify a value, the member is given the value zero (if it is the first member in the enum type) or the value of the textually preceding enum member plus one. Enum values can be converted to integral values and vice versa using type casts. For example

    int i = (int)Color.Blue;    // int i = 2;
    Color c = (Color)2;         // Color c = Color.Blue;

The default value of any enum type is the integral value zero converted to the enum type. In cases where variables are automatically initialized to a default value, this is the value given to variables of enum types. In order for the default value of an enum type to be easily available, the literal 0 implicitly converts to any enum type. Thus, the following is permitted.

    Color c = 0;

Delegates

A delegate type represents references to methods with a particular parameter list and return type, making it possible to treat methods as entities that can be assigned to variables and passed as parameters. The following example declares and uses a delegate type named Function.

    using System;
    delegate double Function(double x);
    class Test
    {
        static double Square(double x) {
            return x * x;
        }
        static double[] Apply(double[] a, Function f) {
            double[] result = new double[a.Length];
            for (int i = 0; i < a.Length; i++) result[i] = f(a[i]);
            return result;
        }
        static void Main() {
            double[] a = {0.0, 0.5, 1.0};
            double[] squares = Apply(a, Square);
            double[] sines = Apply(a, Math.Sin);
        }
    }

Attributes

Programs can attach additional declarative information to types, members, and other entities by defining and using attributes. The following example declares a HelpAttribute attribute that can be placed on program entities to provide links to their associated documentation.

    using System;
    public class HelpAttribute: Attribute
    {
        string url;
        string topic;
        public HelpAttribute(string url) {
            this.url = url;
        }
        public string Url {
            get { return url; }
        }
        public string Topic {
            get { return topic; }
            set { topic = value; }
        }
    }

All attribute classes derive from the System.Attribute base class provided by the .NET Framework. Attributes can be applied by giving their name, along with any arguments, inside square brackets just before the associated declaration. If an attribute's name ends in Attribute, that part of the name can be omitted when the attribute is referenced. For example, the HelpAttribute attribute can be used as follows.
    [Help("")]
    public class Widget
    {
        [Help("", Topic = "Display")]
        public void Display(string text) {}
    }

This example attaches a HelpAttribute to the Widget class and another HelpAttribute to its Display method. The following example shows how attribute information for a given program entity can be retrieved at run-time using reflection.

    using System;
    using System.Reflection;
    class Test
    {
        static void ShowHelp(MemberInfo member) {
            HelpAttribute a = Attribute.GetCustomAttribute(member, typeof(HelpAttribute)) as HelpAttribute;
            if (a == null) {
                Console.WriteLine("No help for {0}", member);
            }
            else {
                Console.WriteLine("Help for {0}:", member);
                Console.WriteLine("   Url={0}, Topic={1}", a.Url, a.Topic);
            }
        }
        static void Main() {
            ShowHelp(typeof(Widget));
            ShowHelp(typeof(Widget).GetMethod("Display"));
        }
    }

When a particular attribute is requested through reflection, the constructor for the attribute class is invoked with the information provided in the program source, and the resulting attribute instance is returned. If additional information was provided through properties, those properties are set to the given values before the attribute instance is returned.
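C#'s attribute-plus-reflection pattern has a rough analogue in Python, where a decorator can attach metadata to a function or class and a lookup helper can read it back at run time. This is an illustrative sketch, not part of the specification; the `help_url` decorator, the `show_help` helper, and the example URLs are all invented names.

```python
# Sketch: attaching declarative metadata to program entities, in the
# spirit of the HelpAttribute example above. All names are invented.

def help_url(url, topic=None):
    """Decorator that records help metadata on the decorated object."""
    def attach(obj):
        obj._help = {"url": url, "topic": topic}
        return obj
    return attach

def show_help(obj):
    """Return a help string for obj, or a fallback when none was attached."""
    meta = getattr(obj, "_help", None)
    if meta is None:
        return "No help for {}".format(obj.__name__)
    return "Help for {}: Url={}, Topic={}".format(
        obj.__name__, meta["url"], meta["topic"])

@help_url("http://example.com/widget")
class Widget:
    @help_url("http://example.com/widget/display", topic="Display")
    def display(self, text):
        pass

print(show_help(Widget))
print(show_help(Widget.display))
```

Unlike the C# version, where the attribute instance is constructed on demand by the reflection call, here the metadata dictionary is built once when the decorator runs at definition time.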
https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/introduction
cleanupOrphaned

Definition

Changed in version 4.4.

Starting in MongoDB 4.4, chunk migrations and orphaned document cleanup are more resilient to failover. The cleanup process automatically resumes in the event of a failover. You no longer need to run the cleanupOrphaned command to clean up orphaned documents. Instead, use this command to wait for orphaned documents in a chunk range from a shard key's minKey to its maxKey for a specified namespace to be cleaned up from a majority of a shard's members.

In MongoDB 4.2 and earlier, cleanupOrphaned initiated the cleanup process for orphaned documents in a specified namespace and shard key range.

Determine Range

Starting in MongoDB 4.4, the value of this field is not used to determine the bounds of the cleanup range. The cleanupOrphaned command waits until all orphaned documents in all ranges in the namespace are cleaned up from the shard before completing, regardless of the presence or value of startingFromKey.

Required Access

On systems running with authorization, you must have clusterAdmin privileges to run cleanupOrphaned.

Output

Return Document

Each cleanupOrphaned command returns a document containing a subset of the following fields:
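Since cleanupOrphaned is an administrative command addressed to a shard's primary, driver code mostly just assembles the command document. A hedged sketch of that construction in Python (the namespace `test.info` is hypothetical; actually running the command requires pymongo and a connection to a shard member, so only the document assembly is shown here):

```python
# Sketch only: assemble the cleanupOrphaned command document. In a live
# deployment it would be sent against a shard's primary, e.g. with
# pymongo: client.admin.command(cleanup_orphaned_command("test.info"))

def cleanup_orphaned_command(namespace, starting_from_key=None):
    cmd = {"cleanupOrphaned": namespace}
    # startingFromKey is accepted for compatibility, but since MongoDB 4.4
    # its value no longer bounds the cleanup range (see "Determine Range").
    if starting_from_key is not None:
        cmd["startingFromKey"] = starting_from_key
    return cmd

print(cleanup_orphaned_command("test.info"))
# {'cleanupOrphaned': 'test.info'}
```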
https://docs.mongodb.com/v5.0/reference/command/cleanupOrphaned/
A. How about an RSS feed of the builds?

@Aaorn Not sure about that really. If you're registered in our Confluence installation, you can subscribe to watch updates to the Nightly Builds page.

Is there any performance improvement? the 5.0 is totally a disaster.

@Kevin, The easiest way to know is to just try and see. What's the scenario in which you experience low performance?

@Kevin R# 5.0's performance is really good, compared with earlier versions. Especially when using lots of lambda expressions. I have to defend jetbrains here, they've done a wonderful job.

I'll add my vote for a real RSS feed for nightly builds. RSS updates are easy and convenient, they don't require registration (and yet one more password to remember), they're grouped together with all the other sorts of things I watch via RSS, and (since I assume watching in Confluence means getting the updates via e-mail instead of in a feed reader) RSS updates never have to worry about getting lost by e-mail spam filters.

@Aaorn, @Kevin Added a request about an RSS feed: Vote for it if you like.

When will there be new builds? All I can see is from May 20. and that build didn't work for me (I couldn't make it see the references to the System.Linq namespace, which made it pretty much unusable for me).

@Kenneth A new build is expected to be available today (really soon according to our QA engineer). Subsequent builds will come out more frequently than this time.

Hi, I get an exception in ReSharper 4.5 when I create a test fixture across partial classes (a cheap way to create a shared fixture). The exception browser isn't submitting the bug, which is why I'm posting here, but that fault is more than likely at my end. Here is an example, hope the code posts ok.
------------------------------------------------------------

    using System.Data;
    using MbUnit.Framework;

    namespace PartialClassesCheapSharedFixture
    {
        [TestFixture]
        public partial class DataLayerTests
        {
            public DataTable TestData = new DataTable("name");
        }
    }

------------------------------------------------------------

    using MbUnit.Framework;

    namespace PartialClassesCheapSharedFixture
    {
        public partial class DataLayerTests
        {
            [Test]
            [Category("Employee DAL Tests")]
            public void EmployeeSavesOk()
            {
                Assert.IsNotNull(this.TestData);
            }
        }
    }
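The partial-class trick in the post above is a C#-specific way to share one fixture across files. For comparison, a hedged Python sketch of the same idea, where a base class owns the shared state and test classes in other modules inherit it (all names here are invented):

```python
import unittest

class DataLayerFixture(unittest.TestCase):
    """Base class owning the shared fixture state, analogous to the
    partial class that declares TestData in the MbUnit snippet above."""

    @classmethod
    def setUpClass(cls):
        # Shared "TestData" built once per inheriting test class
        cls.test_data = {"name": "employees", "rows": []}

class EmployeeTests(DataLayerFixture):
    # In a real project this class would live in another file; it shares
    # the fixture through inheritance rather than through partial classes.
    def test_employee_saves_ok(self):
        self.assertIsNotNone(self.test_data)
```

Running `python -m unittest` against the module executes `test_employee_saves_ok` with the inherited fixture already populated.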
http://blog.jetbrains.com/dotnet/2010/05/20/reshaper-51-early-access-program-is-open/
Hello guys, in the last post I explained the basics of inverters along with their types, the inverter topology (in other words, the working of inverters), and then we discussed the major components of inverters. You should also read the Modified Sine Wave Design with Code and the Sine Wave Inverter Simulation in Proteus, which will be quite helpful if you are designing a pure sine wave inverter.

Pure Sine-Wave Inverter

Explanation for PWM in AVR

- The AVR is acting as the brain of the pure sine wave inverter.
- Here's the programming code for the pure sine wave inverter:

    #include <stdlib.h>
    #include <avr/io.h>
    #include <util/delay.h>
    #include <avr/interrupt.h>
    #include <avr/sleep.h>
    #include <math.h>
    #include <stdio.h>

    // Sine lookup table and sample index (declarations restored; the
    // values are as printed in the original listing, which appears
    // truncated - InitSinTable regenerates the full table at startup)
    uint8_t wave[256] = {
        0x80, 0x83, 0x86, 0x89, 0x8C, 0x90, 0x93, 0x96,
        0x99, 0x9C, 0x9F, 0xA2, 0xA5, 0xA8, 0xAB, 0xAE,
        0xB1, 0xB3, 0xB6, 0xB9, 0xBC, 0xBF, 0xC1, 0xC4,
        0xC7, 0xC9, 0xCC, 0xCE, 0xD1, 0xD3, 0xD5, 0xD8,
        0xDA, 0xDC, 0xDE, 0xE0, 0xE2, 0xE4, 0xE6, 0xE8,
        0xEA, 0xEB, 0xED, 0xEF, 0xED, 0xEB, 0xEA, 0xE8,
        0xE6, 0xE4, 0xE2, 0xE0, 0xDE, 0xDC, 0xDA, 0xD8,
        0xD5, 0xD3, 0xD1, 0xCE, 0xCC, 0xC9, 0xC7, 0xC4,
        0xC1, 0xBF, 0xBC, 0xB9, 0xB6, 0xB3, 0xB1, 0xAE,
        0xAB, 0xA8, 0xA5, 0xA2, 0x9F, 0x9C, 0x99, 0x96,
        0x93, 0x90, 0x8C, 0x89, 0x86, 0x83, 0x80, 0x7D,
        0x7A, 0x77, 0x74, 0x70, 0x6D, 0x6A, 0x67, 0x64,
        0x61, 0x5E, 0x5B, 0x58, 0x55, 0x52, 0x4F, 0x4D,
        0x4A, 0x47, 0x44, 0x41, 0x3F, 0x3C, 0x39, 0x37,
        0x34, 0x32, 0x2F, 0x2D, 0x2B, 0x28, 0x26, 0x24,
        0x22, 0x20, 0x1E, 0x1C, 0x1A, 0x18, 0x16, 0x15,
        0x13, 0x11, 0x13, 0x15, 0x16, 0x18, 0x1A, 0x1C,
        0x1E, 0x20, 0x22, 0x24, 0x26, 0x28, 0x2B, 0x2D,
        0x2F, 0x32, 0x34, 0x37, 0x39, 0x3C, 0x3F, 0x41,
        0x44, 0x47, 0x4A, 0x4D, 0x4F, 0x52, 0x55, 0x58,
        0x5B, 0x5E, 0x61, 0x64, 0x67, 0x6A, 0x6D, 0x70,
        0x74, 0x77, 0x7A, 0x7D
    };

    volatile uint8_t sample = 0;

    void InitSinTable()
    {
        // sin period is 2*Pi (step is in radians)
        const float step = (2*M_PI)/(float)256;
        float s;
        float zero = 128.0;
        for (int i = 0; i < 256; i++)
        {
            s = sin( i * step );
            // calculate OCR value (in range 0-255, timer0 is 8 bit)
            wave[i] = (uint8_t) round(zero + (s*127.0));
        }
    }

    void InitPWM()
    {
        /*
        TCCR0 - Timer Counter Control Register (TIMER0)
        -----------------------------------------------
        BIT 7 : FOC0   Force Output Compare
        BIT 6 : WGM00  Wave form generation mode [SET to 1]
        BIT 5 : COM01  Compare Output Mode [SET to 1]
        BIT 4 : COM00  Compare Output Mode [SET to 0]
        BIT 3 : WGM01  Wave form generation mode [SET to 1]
        BIT 2 : CS02   Clock Select [SET to 0]
        BIT 1 : CS01   Clock Select [SET to 0]
        BIT 0 : CS00   Clock Select [SET to 1]

        Timer Clock = CPU Clock (No Pre-scaling)
        Mode = Fast PWM
        PWM Output = Non Inverted
        */
        TCCR0 |= (1<<WGM00)|(1<<WGM01)|(1<<COM01)|(1<<CS00);
        TIMSK |= (1<<TOIE0);
        // Set OC0 PIN as output. It is PB3 on ATmega16/ATmega32
        DDRB |= (1<<PB3);
    }

    ISR(TIMER0_OVF_vect)
    {
        OCR0 = wave[sample];
        sample++;
        if( sample >= 255 ) sample = 0;
    }

H-Bridge Circuit

- The H-Bridge circuit is acting as the main core of the pure sine wave inverter.
- Understanding the working of the H-Bridge is very essential if you want to work on a pure sine wave inverter.
- Let's have a look at the working of the pure sine wave inverter.

So, that's all for today. I hope you guys have enjoyed this Pure Sine Wave Inverter Project. You should also look at the Proteus simulation of a pure sine wave and the Introduction to Multilevel Inverters, because simulations help a lot in designing hardware projects. So, take care and have fun !!! 🙂

77 Comments

would you mind helping me by sending me the details and the whole circuit diagrams of pure sine wave inverter 12Vdc to 220Vac? if you wouldn't please send to my email xyre.zinc@gmail.com. THx xyre

please send the circuit to y eail @godsonprince5@gailco

Its not for free, you have to buy it from the shop. Thanks.

@ taohidlatif@gmail.com? worrier_s@yahoo.com … innoachukwu5@gmail.com Thanks.

what kind of transformer did you use?
can u email me all circuit details on asadmujtaba91@gmail.com I have sent you email …. check it and reply me accordingly …. thanks. and codes also if u dont mind.. This comment has been removed by the author. Can u please send me complete schematic on haisan_1985@yahoo.com ? hello, syed zain nasir, could u please me relaize the working of the circuit of a sine wave control block diagram for an air conditioner. Since i dont have ur mail id pls leave a message at the following mail id madan.k.naidu20@gmail.com,——–gajjarshravan18@gmail.com HI I am designing half bridge inveter for 450w using atmega16 can u send. suggest me circuit and code for the inveter. My Mail id: dasarathaneie@gmail.com : a-sd-s@hotmail.com If I could get the full circuit with the code I will be very glad. Gavivinagadogbe@gmail.com is my email. You can buy the whole project from the shop. can u plz plz send me the code + circuit diagrams of pure sine wave inverter 12Vdc to 220Vac … Thanku ( osama1_91@hotmail.com ). syedimbran26@gmail.com…. same here, maen6677@gmail.com. help@theengineeringproject.com ,? (vilo.alternativ@gmail.com) I tried the posted by you , but the sin table missing some syntactic pieces, I guess :S:P Anyways this blog is still so helpfull! Thanks all! plz send me all circuitry and code at rozu08@hotmail.com mubshir_88@hotmail can you send me the hex file and diagram to kenneth4kamalu@yahoo.com. basharatali@iiee.edu.pk Irfan_haider011@yahoo.com Regards Syed Irfan sir kindly send me the details of this project on beast_im@yahoo.com circuits and all the data plz send me all circuitry and code at rus_tec@mail.ru sagahaditama48@gmail.com , im from Indonesia, please help me. Thankyou Can I still buy this project? Hi, Yeah you can buy the project from shop. Thanks. Respected sir .. plz give me the detail circuit of this project … my email is mian.mubshir@gmail.com…: 201332180@student.uj.ac.za... 
edil.lsandrade@gmail can u plz plz send me the code + circuit diagrams of pure sine wave inverter 12Vdc to 220Vac … Thanku ahmadali1238888@gmail.com am form syria and i need this project Thank you so much Hi sir. i am going to design sine wave UPS. for which i need sine wave inverter project detail. i can’t find the link of your shop as you mentioned that buy from shop. please humbly requested reply on email.. “adreeskhan1995@gmail.com” Sir is this controlled inverter or not?? and also sir plz upload the sine wave inverter using ATMEGA16 Nice tutorial. I have one concern though. In your Mode 1, you said a HIGH signal reaches M1 and a LOW reaches M4 gates. Now M1 is a P channel MOSFET which should ideally be turned ON by a LOW signal, and M4 an N channel, which should be turned ON by a HIGH signal. It seems to me like you reversed something somewhere. Please put me through if I am getting it all wrong. Thanks. Th output of microcontroller is spwm signal and is it 50hz frequency ? I am working on a 5kva inverter by nick zoein but the problem is on filtering the modified sine wave into a pure sine wave. Any suggestion with regard to the filter design to obtain a pure sine wave
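The InitSinTable routine shown earlier fills the lookup table at startup; the same values can be generated and sanity-checked offline. Here is a quick Python rendering of the same formula: 256 samples of one sine period, scaled to unsigned 8-bit OCR values around a midpoint of 128 (the function name is ours, not the article's):

```python
import math

def build_sine_table(n=256, zero=128.0, amplitude=127.0):
    """Mirror of the article's InitSinTable: one full sine period mapped
    onto n unsigned 8-bit OCR values centred on `zero`."""
    step = (2 * math.pi) / n
    return [int(round(zero + amplitude * math.sin(i * step))) for i in range(n)]

wave = build_sine_table()
print(wave[0], max(wave), min(wave))  # 128 255 1
```

Note the peak (255) lands at index 64 (a quarter period) and the trough (1) at index 192, which is a handy check that the table really spans one full cycle.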
https://www.theengineeringprojects.com/2012/11/pure-sine-wave-inter-design-with-code.html
CodePlex Project Hosting for Open Source Software

Hello, I am wondering if I am doing something wrong here. I added Newtonsoft.Json.dll to my references, it is in my bin folder and everything is looking good, but when I try to import the namespace I get an error. Here is the code:

    using Common.Utilities.DataAccess2;
    using Newtonsoft.Json;

    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            //string output = Json
        }
    }

and here is the error I get:

    The type or namespace name 'Newtonsoft' could not be found (are you missing a using directive or an assembly reference?)

Make sure you used a version of the Newtonsoft DLL that is appropriate for the targeted framework version.
http://json.codeplex.com/discussions/214264
I’m rewriting our rhinoscript in python and I stumbled upon this.

    import rhinoscriptsyntax as rs
    AllPD = rs.WindowPick((-140,-190,0),(99999,-99999,0), None, False, False)

Dunno, I can’t see this here. I don’t know where the “Resolving curve endpoints…” message on the command line is coming from - Rhino is trying to figure something out, but I don’t know what. Do you have any plug-ins loaded that might be trying to do something? Or maybe it’s file related - does it do this with any file? Something special in your block instances?

To test I created a file with 1089 block instances, 3267 curves, 1089 Leaders, 1089 angular dimensions, 1089 linear dimensions, 1089 radius dimensions. V6 rs.WindowPick takes about 0.125 seconds to get them all. Funny, V7 takes slightly longer here at around 0.175 sec. No command line messages in the process except for the print statement. Here’s the test file and the code I used to test: WindowPickTest.zip (516.6 KB)

    import rhinoscriptsyntax as rs
    import time
    st = time.time()
    AllPD = rs.WindowPick((0,0,0),(700,700,0), None, False, False)
    et = round(time.time()-st,3)
    print "{} objects found. Elapsed time={} sec".format(len(AllPD),et)

Edit - if I enlarge the window it does take a bit longer:

    AllPD = rs.WindowPick((-9999,-9999,0),(9999,9999,0), None, False, False)

takes about 0.3 seconds.

Edit 2: VB Rhinoscript is still faster than the Python equivalent though - about 3.5 times as fast in my test file, with an average of about 0.035 sec for the first example… And making the window larger doesn’t seem to slow it down as significantly as the Python equivalent, 0.043 seconds vs 0.3, so over 7 times faster.
    Option Explicit

    Call Main()

    Sub Main()
        Dim st, AllPD, et
        st = Timer
        AllPD = Rhino.WindowPick(Array(-9999, -9999, 0), Array(9999, 9999, 0), , False, False)
        et = Cstr(Round(Timer - st, 3))
        Call Rhino.Print("Found " & Cstr(Ubound(AllPD) + 1) & " objects. Elapsed time=" & et)
    End Sub

Spreadsheet tester_.3dm (8.1 MB) Here’s the file I run it in if you wanna take a look. Everything I run is stock except appearance. Happens in every file.

Anyone? I imagine the extra time it takes is partially because of the “Resolving curve endpoints” messages being printed to the command line…

I had to play with it for awhile to find what elements are causing this behavior. I found the culprit - but I have no idea why it actually is doing that. If you ungroup everything then use _FindText to find all instances of “Cut Face Side Down” and hide them, there are no more command line messages and WindowPick runs in about 1/5 of the time here. Looking further, it seems that the font is causing the problem; if I change all the instances of that from Lucida Handwriting to Arial, there are no more messages either…

That is very peculiar that a font causes it. Also, the text you have in Arial is actually supposed to be “ISOCPEUR” which does NOT make WindowPick do weird curve calculations.

What version of Rhino are you guys using? I don’t see any command line messages with 6.27

Current public RC (6.26.20126.12201, 05-May-20)

I don’t know what’s going on. I installed 6.26.20126.12201, 05-May-20 and ran the script above on Spreadsheet tester_.3dm and saw no commandline echo.

    Successfully read file “C:\Users\Lowell\Desktop\Spreadsheet tester_.3dm”
    Command: RunPythonScript
    81 objects found. Elapsed time=0.072 sec

Rhino 6 SR26 2020-5-5 (Rhino 6, 6.26.20126.12201, Git hash:master @ 6ae0a13b2720e46ba01aad8d1e99a9471bf59ccf) License type: Educational, build 2020-05-05 License details: Cloud Zoo. In use by: 123 () Windows 10.0 SR0.0 or greater (Physical RAM: 16Gb) Machine name: 123 Non-hybrid graphics.
Primary display and OpenGL: NVIDIA GeForce GTX 1080 (NVidia) Memory: 8GB, Driver date: 4-3-2020 (M-D-Y). OpenGL Ver: 4.6.0 NVIDIA 445.87 Secondary graphics devices. Intel® HD Graphics 4600 (Intel) Memory: 1GB, Driver date: 3-8-2017 (M-D-Y). OpenGL Settings Safe mode: Off Use accelerated hardware modes: On Redraw scene when viewports are exposed: On Anti-alias mode: None Mip Map Filtering: Linear Anisotropic Filtering Mode: Height Vendor Name: NVIDIA Corporation Render version: 4.6 Shading Language: 4.60 NVIDIA Driver Date: 4-3-2020 Driver Version: 26.21.14.4587 Maximum Texture size: 32768 x 32768 Z-Buffer depth: 24 bits Maximum Viewport size: 32768 x 32768 Total Video Memory: 8 GB Rhino plugins C:\Program Files\Rhino 6\Plug-ins\Commands.rhp “Commands” 6.26.20126.12201\RPC.rhp “RPC” C:\Program Files\Rhino 6\Plug-ins\IdleProcessor.rhp “IdleProcessor” C:\Program Files\Rhino 6\Plug-ins\RhinoRender.rhp “Rhino Render” C:\Program Files\Rhino 6\Plug-ins\rdk_etoui.rhp “RDK_EtoUI” 6.26.20126.12201 C:\Program Files\Rhino 6\Plug-ins\rdk_ui.rhp “Renderer Development Kit UI” C:\Program Files\Rhino 6\Plug-ins\NamedSnapshots.rhp “Snapshots” C:\Program Files\Rhino 6\Plug-ins\Alerter.rhp “Alerter” C:\Program Files\Rhino 6\Plug-ins\RhinoCycles.rhp “RhinoCycles” 6.26.20126.12201 C:\Program Files\Rhino 6\Plug-ins\Toolbars\Toolbars.rhp “Toolbars” 6.26.20126.12201 C:\Program Files\Rhino 6\Plug-ins\3dxrhino.rhp “3Dconnexion 3D Mouse” C:\Program Files\Rhino 6\Plug-ins\Displacement.rhp “Displacement” C:\Program Files\Rhino 6\Plug-ins\Calc.rhp “Calc” Make sure you use some coordinates that include the whole range of the drawing like AllPD = rs.WindowPick((-2000,-1200,0),(2600,0,0), None, False, False) and make sure you have Lucida Handwriting installed as a font so that it is not substituted by something else. Thanks, I didn’t have the window big enough to get everything. I can repeat it now. Unfortunately it’s more tangled up than I hoped, but I’ll take a closer look pretty soon.
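The ad-hoc `time.time()` bracketing used throughout the thread can be factored into a small reusable helper. Since rhinoscriptsyntax only exists inside Rhino, this sketch times an arbitrary callable instead; the helper name is ours:

```python
import time

def timed(func, *args, **kwargs):
    """Run func(*args, **kwargs) and return (result, elapsed_seconds),
    mirroring the st = time.time() ... round(time.time() - st, 3)
    pattern from the thread."""
    start = time.time()
    result = func(*args, **kwargs)
    return result, round(time.time() - start, 3)

# Example: time a stand-in for rs.WindowPick
picked, elapsed = timed(lambda: [i for i in range(100000)])
print("{} objects found. Elapsed time={} sec".format(len(picked), elapsed))
```

Inside Rhino the lambda would simply be replaced by a call such as `lambda: rs.WindowPick((0,0,0), (700,700,0), None, False, False)`.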
https://discourse.mcneel.com/t/rs-windowpick-is-incredibly-slow-vs-rhino-windowpick-instant/101454
Reset IPython namespace and reload specified packages

Package Description

Duster is an IPython extension that selectively clears the IPython session namespace, automatically preserving a list of ignored variables and reloading specified packages.

Installation

The package may be installed as follows:

    pip install duster

After installation, the extension may be loaded within an IPython session with

    %load_ext duster

Configuration

Modules to be reloaded when duster is invoked should each be listed as a tuple containing the module name and the name to which it should be imported (or '' if it should be imported as-is). For example:

    c.DusterMagic.modules = [('numpy', 'np'), ('scipy', 'sp'), ('sys', '')]

Variables to be ignored when resetting the IPython namespace should be listed by name as follows:

    c.DusterMagic.ignore = ['varname0', 'varname1']

The above settings may be viewed and/or modified within an IPython session using

    %config DusterMagic

License

This software is licensed under the BSD License. See the included LICENSE.rst file for more information.
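Under the hood, an extension like this boils down to two operations: pruning names from the interactive namespace while skipping an ignore list, and calling importlib.reload on the configured modules. A hedged sketch of both halves, operating on a plain dict rather than a live IPython shell (the function names are illustrative, not duster's actual internals):

```python
import importlib

def prune_namespace(ns, ignore):
    """Remove user-defined names from ns except those in `ignore`.
    Dunder names are kept, mimicking what a namespace reset must preserve."""
    for name in list(ns):
        if name.startswith("__") or name in ignore:
            continue
        del ns[name]
    return ns

def reload_modules(modules):
    """Re-import each (module_name, alias) pair, matching the
    c.DusterMagic.modules format; alias '' means import as-is."""
    out = {}
    for mod_name, alias in modules:
        module = importlib.reload(importlib.import_module(mod_name))
        out[alias or mod_name] = module
    return out

ns = {"__name__": "__main__", "keep_me": 1, "temp": 2}
prune_namespace(ns, ignore=["keep_me"])
print(sorted(ns))  # ['__name__', 'keep_me']
```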
https://pypi.org/project/duster/
Hi Jens, Am Sonntag, den 22.04.2012, 17:19 +0900 schrieb Jens Petersen: > Anyway thoughts on how to proceed? the error message looks as if you are actually building on what Debian calls armhf, with hard floating point support. According to the Debian changelog: ghc (7.4.1-2) unstable; urgency=low [ Iain Lane ] * Two new patches (backported by Iulian Udrea) to fix armel build failures. See upstream bug #5824. - fix-ARM-s-StgCRun-clobbered-register-list-for-both-A - fix-ARM-StgCRun-to-not-save-and-restore-r11-fp-regis * Use dh_autoreconf{,_clean} to autoreconf, mainly so we can have proper clean support. * armhf support (thanks to Jani Monoses): - debian/patches/ARM-VFPv3D16: Use vfp3-d16 FPU for ARM builds. -: ifeq (armhf,$(DEB_HOST_ARCH)) echo "SRC_HC_OPTS += -D__ARM_PCS_VFP" >> mk/build.mk endif It seems that these patches, or rather the fixes, were already submitted upstream, but need more cleanup () FTR, here: <>
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-April/022276.html
Please watch this short video for details on the new Universal apps support.

In this post we’ll see how we can create physics-enabled environments mixed with character animation for gaming using babylon.js and WebGL. The post covers Keyboard Input, Virtual (Touch) Joysticks, Sound, Adding Physics, and Getting Help and Support.

Trackpad – Back to Basics

Now you will see the Synaptics Control Panel. Be sure to visit all three tabs here and uncheck all of the annoying gestures to your needs…

Function Keys:

Auto-Adjust Screen Brightness:

The Display:

The only fix I have found for these issues is to kick down the Yoga’s resolution when needed – for example, if you’re doing some Camtasia recording, it might be a good idea to lower the res before doing so.

In Summary.

Step 1: Get the Tools

In this post, we’ll be using a few free tools and libraries:

- Kinect for Windows SDK (and Developer Toolkit): Download the latest SDK and Developer Toolkit to enable scanning using Kinect Fusion.
- Blender: We’ll use Blender, a free 3D Design tool, to optimize our mesh scans and prepare them for Web sharing.
- BabylonJS: This free JavaScript library makes it easy to create 3D scenes and games using WebGL.
- BabylonJS / Blender Export Plug-in: This plugin for Blender allows you to export a Blender 3D Scene to Babylon format. Download the plug-in and read the install instructions from the link to make it available in Blender.

Step 2: 3D Scan using Kinect Fusion

Step 3: Optimize the Mesh using Blender

2)
3) a) Browse to the PLY file that you exported in Step 2 above. b) This might take a little while, the scans are large!
4)
5) a)
6) a) Show the original mesh again by making it visible.
8) a) select the original mesh b) add a new material c) under shading, select "Shadeless" d) under options, check "Vertex Color Paint"
9)
10)
11) a) select the copied mesh b) add a new material c) under shading, select "Shadeless"
12) a) in the left-side Image editor, create a New Image b) Give the image a name and uncheck Alpha, then click OK.
14) a) Select the Scene Panel in Blender b) Under Color Management, set Display Device to None.
15) a) check "Selected to active" b) uncheck "Clear" c) Click the Bake button
17)
18) a) Select Image/Save as Image from the Image Editor menu…
19) a) select the (copied) optimized mesh b) add a new texture c) set type = image or movie d) open the map image file e) go to mapping and select Coordinates: UV f) select Map: UVMap
20) a) Select the original mesh and delete it by selecting Object/Delete.
21) a) Select File/Export/Babylon JS (if you do not see this option, then go back to the Step 1: Downloads step and read the plugin section) b) The export should create two files: a *.babylon and a *.png (texture map) file.

Step 4: Load and Display with BabylonJS

Now that we have a *.babylon scene file and a *.png texture map, we can easily load and display our 3D Scan using BabylonJS.

1) Add the required MIME types to your web server configuration:

    <staticContent>
        <mimeMap fileExtension=".fx" mimeType="application/shader" />
        <mimeMap fileExtension=".babylon" mimeType="application/babylon" />
    </staticContent>

2)
3)

Conclusion

- Open Blend for Visual Studio 2013 and create a new HTML (Windows Store) app, using the Blank App template:
- Let's add in an image to animate. From the Projects Panel, expand the images subfolder and drag the image "logo.scale-100.png" to the artboard.
- With the img element selected, go to the CSS Properties Panel and ensure that "inline style" is selected. Then expand the Animation category.
(note: normally it is a best practice to create a class for the animation instead of using an inline style, but there are issues with this in the Blend 2013 RC Build - these issues will be fixed by RTM). - In the Animation Pane, click the "+" button to add in a new animation. Enter the following values: duration: 1s iteration-count: infinite play-state: running - Next, we can record the actual animation key frames. Click the "Edit Animation" button in the Animation Pane. At the bottom of the Blend workspace, you will see a timeline, and at the top of the artboard you will see a message "Animation recording is on." Note that any changes you make to the CSS properties for the selected object will record a key frame in the animation. If at any time you want to stop keyframe recording, just click the red "record" button to the left of this message. - In the Animation Pane, advance the yellow marker to the 1 second mark. - In the CSS Properties Panel, expand the Transform Category and add a new Skew Transform using the dropdown selection: - Set the Skew x to 90deg and the y to 45deg. - Add a second transform, this time select a Translate Transform using the dropdown selection. Set the Translate x property to 500px and the y property to 300px. - In the Animation pane, click the "play" button to preview the animation. You can also "scrub" the animation by moving the timeline. - To stop recording the animation, click the "Animation recording is on" button at the top of the artboard. - Run the app by hitting F5, and witness the css animation! Adding Behaviors Now that we have an animation, let's add some interactivity to it so that the user clicks the image to start the animation. We can do this without writing any code using Behaviors. - First, let's change our animation so that it is in the Paused state. With the image selected, go to the CSS Properties Panel and expand the Animation category. Set the play-state property of the animation to "paused." 
- Set the iteration-count property to "1" so that the animation only executes once.
- Go to the Assets Panel and select the Behaviors category. Then select the SetStyleAction.
- Drag the SetStyleAction onto the element on the artboard.
- In the CSS Properties Panel, expand the Behaviors Category and you will see a new EventTriggerBehavior and SetStyleAction have been added. Select the EventTriggerBehavior and note that the default event is "click" - so this action will be executed when the user clicks it.
- Select the SetStyleAction and set the styleProperty to "animation-play-state" and the styleValue to "running"
- Run the app by pressing F5. Then click the img to execute the SetStyleAction and run the animation.

Summary.

Last week, my app "Physamajig" was a featured app for Windows 8. This means that it had a dedicated tile on the main Store hub.

Get:

- PubCenter ads which show on the main hub and play screen
- In-App purchases (which unlock additional games and remove ads).

Daily Ad Revenue (pubcenter)

- First, I used SQL Server Management Studio to connect to my two databases: my legacy shared db, and the Azure SQL db.
- Then I used the Generate Scripts Wizard to migrate the data from the legacy db to Azure SQL. These steps are outlined for you here. Note that you can use the "Fully Qualified DNS Name" of your Azure SQL database from the management portal under Database.
- I then used the VS Templates for Azure to create a Windows Azure Cloud Service and WCF Service Web Role. Since I had a pre-existing service, I just brought the code over into the WCF Web Role.
- Physamajig has thumbnail images that allow users to quickly shuffle through the available online creations. These thumbnails are generated by a moderator when they approve creations for online use. For these thumbnails, I used Azure Storage in a public folder.
This involved using the CloudStorageAccount class to add a CloudBlobContainer consisting of a CloudBlob (file) for each thumbnail.

Your app must comply with the following privacy-related requirements:

App Reviews can impact a user's decision to download or buy your app, and provide valuable feedback for enhancements and issues for v.next. It's important to make it apparent and easy for users to add a review for your app so you are sure to gather as many of these reports as possible. Let's explore three ways that a user can get to the review page for your app:
- Through the Windows Store. This one is pretty obvious, but if a user visits the Reviews link on the page for your app in the Windows Store, they can choose "Write a Review," which brings them to the "Write a Review" page.
- Through the Settings Pane. This one is provided automatically for every app on the Windows Store - but note that you will not see this link while you are developing/debugging your app! It will only be visible after your app has passed certification and has been installed from the store. If a user swipes from the right side of the screen and chooses the Settings Charm, they will see a Rate and Review link.
- Through a Link you Provide. While the built-in Settings Pane support for Rate and Review is great, there are times when you want the Rate and Review option to be a bit more apparent to the user. Maybe after so many days of using the app, you would like to try and coax the user to write a review through a link you provide. To create this link you'll first need the Package family name for your app. Open the Package.appxmanifest file and go to the Packaging tab. Copy out the Package family name. Now you can create a button or other control in your app which launches the following URI:

Windows.System.Launcher.LaunchUriAsync(new Uri("ms-windows-store:REVIEW?PFN=MY_PACKAGE_FAMILY_NAME"));

(Just replace MY_PACKAGE_FAMILY_NAME with your app's PFN.)
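As a quick sketch of the URI format described above (the helper function and the sample PFN below are hypothetical, not part of the original post), building the review URI is plain string concatenation:

```javascript
// Hypothetical helper: builds the Windows Store review URI for a given
// Package family name (PFN). The resulting string is what gets passed to
// Windows.System.Launcher.LaunchUriAsync inside the app.
function buildReviewUri(packageFamilyName) {
    return "ms-windows-store:REVIEW?PFN=" + packageFamilyName;
}

console.log(buildReviewUri("Contoso.SampleApp_8wekyb3d8bbwe"));
// ms-windows-store:REVIEW?PFN=Contoso.SampleApp_8wekyb3d8bbwe
```

Keeping the PFN in one place like this avoids scattering the hard-coded string through the app.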
We've heard about how the Windows 8 Store will be the "largest developer opportunity, ever," given the sheer number of Windows 7 licenses and the expected number of people upgrading and purchasing Windows 8 tablets. I wanted to share some of my metrics so far with the Windows Store... Physamajig for Windows 8 has reached a bit of a milestone with over 100,000 downloads on the Store! It took a little over 3 months to reach this milestone... but note that these are just the preview versions of Windows 8! Here are a few more metrics from Physamajig, showing download peaks, market, and age group...

Windows 8 Store vs Windows Phone 7 Marketplace

Compare Physamajig to one of my more popular Windows Phone 7 apps, Boss Launch 2, which took over a year in the WP7 App Store to reach 100,000 downloads (it currently is at 115k+ downloads). But that was a year after the release of Windows Phone 7 - whereas Windows 8 is still in preview.

Submitting your App

As of now, the Windows 8 Store is still closed for general submission, but if you create a great app or game, you can apply for a Windows Store token by following the steps outlined in this blog post.

Physamajig was selected as one of the winning entries of the Windows 8 First Apps Contest! Microsoft announced the 8 winners today in Barcelona while unveiling the Consumer Preview of Windows 8. [read the Windows Store blog post] [see the video!]

UPDATE! DISCLAIMER: THIS BLOG POST DISCUSSES DEVELOPMENT UNDER A PRE-BETA VERSION OF WINDOWS 8! THINGS CAN AND WILL CHANGE IN THE FUTURE.

With the release of Windows 8 Developer Preview, one seemingly simple goal I had was to port my 2D physics apps that run on Windows Phone 7 and Silverlight to Windows 8. Since many of my games use vector graphics, they could scale up or down cleanly depending on the device they were deployed to.
Today I am a bit closer to my goal with an updated version of the Physics Helper XAML project - which now supports both Windows 8 Metro and Windows Phone 7 development, using a single set of XAML design files and logic. [HOME] [DOWNLOAD] [DOCUMENTATION]

Multi-targeting WP7 + Windows 8 Metro

You would think this would be a simple thing to do, right? I mean these platforms are both properties of a single company, and you use Visual Studio to create apps for both of them. And when WP7 was released, it was a cinch to port Silverlight Web applications to the phone - in some cases you didn't even need to recompile your assemblies! Well, with Windows 8 Metro, it's unfortunately not so simple. Microsoft is a huge company and it is apparently impossible for all divisions to agree on a single development framework that allows creation of client solutions across all of their properties. So we have fragmentation and breaking changes with Windows 8 Metro. Lots of them. But with a little work, it is certainly possible to support both Windows 8 and Windows Phone 7 apps with a single set of design files and code. Here is a list of some of the issues I ran into while trying to multi-target these two platforms with a single code base:
- xmlns changes. To reference an external library in XAML, we need to use the xmlns attribute. WP7 and Win8/Metro handle this differently right now (hopefully this is remedied in the future!). WP7 and Silverlight look like this: xmlns:xx="clr-namespace:xx" while Win8/Metro looks like this: xmlns:xx="using:xx". This was a problem in my apps because I wanted to maintain one set of resources for both platforms. My solution was to create a utility named CleanUsingXmlns which can be added to the prebuild step of your projects to change between the two syntaxes. Note that CleanUsingXmlns was thrown together very quickly and is not the most elegant solution, but it gets the job done.
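To illustrate the substitution involved (the real CleanUsingXmlns is a separate C# command-line tool; this one-liner is only a sketch of the text rewrite, using the two xmlns forms shown above):

```javascript
// Sketch only: rewrite Silverlight/WP7 xmlns declarations into the
// Win8/Metro "using:" form. The actual CleanUsingXmlns utility also
// supports the reverse direction and restricts the rewrite to a
// caller-supplied list of namespaces.
function toMetroXmlns(xaml) {
    return xaml.replace(/clr-namespace:/g, "using:");
}

console.log(toMetroXmlns('xmlns:xx="clr-namespace:xx"'));
// xmlns:xx="using:xx"
```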
To use CleanUsingXmlns, go to the Build Events tab and add a call to the utility in the format:

CleanUsingXmlns [addusing|removeusing] [folder] [namespace1,namespace2] ...

where [addusing] will add in the using clause for Metro, and [removeusing] will add in the clr-namespace syntax for WP7 and Silverlight. [folder] is the directory containing the XAML files you wish to convert, and [namespaceX] is a list of namespaces you wish to be involved in the replacement. As an example, the Metro demo projects contain the following in their prebuild steps to add the "using" clause into XAML files:

$(ProjectDir)..\..\CleanUsingXmlns\bin\Debug\CleanUsingXmlns addusing $(ProjectDir) Demo.Advanced,Spritehand

- Namespace Changes. Sometimes you wonder if the architects over at Microsoft are just trying to push your buttons, you know what I mean? Take for example the shuffling of all of the Silverlight namespaces we know and love. This forces us to write code like the following when we try to multi-target:

#if METRO
using System.Threading.Tasks;
using Windows.Foundation;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Input;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Media.Animation;
using Windows.UI.Xaml.Shapes;
#else
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Media.Imaging;
#endif

... but with Win8/Metro, this is just the beginning. There are lots of little breaking changes all over the place which mostly seem to be there to make you bang your head on the keyboard and force you to litter compiler directives all over your code. Which brings me to my final note:
- HINT: Target Lowest Profile! If you find yourself like me, trying to create a project that targets both Win8/Metro and Windows Phone/Silverlight, one important strategy is to target the lowest common profile.
In this case, that lowest profile is Metro. By doing this, you have a much lower chance of introducing unsupported APIs into your code. This table, from the BUILD conference, gives a good overview of the size of three .NET profiles:

Conclusion

While it isn't as easy as you might expect, it is certainly possible to create apps with a single set of design and code files that multi-target WP7 and Win8/Metro. Remember that this is a pre-beta release of Windows 8, and perhaps this story will improve in the beta. If you're interested in creating your own 2D physics apps for WP7 and Win8/Metro, check out the Physics Helper XAML project on CodePlex.

Today I released an initial Alpha version of Physics Helper XAML - which allows you to easily create 2D physics based games and simulations for Windows 8 Metro apps using C# or VB. It is a port of my previous Physics Helper project and uses the Farseer Physics Engine. [HOME] [DOWNLOAD] [DOCUMENTATION]

As you'll see, Physics Helper XAML is a rewrite of the Physics Helper project for the following reasons:
- When I initially started porting the Physics Helper to Metro, I assumed I would be able to fix any compatibility issues by using compiler directives as I have done in the past with WPF, Silverlight, and WP7. But as I quickly found out, there are so many little differences that the code quickly became cluttered. Additionally, there are changes on the XAML side which are not easily danced around because there are no compiler directives for XAML at this time.
- Behaviors were used extensively in the original Physics Helper, but are not yet available for Metro app development. In fact, there is no version of Blend available yet for Metro apps - so who knows for sure if Behaviors will make the cut for Blend 5?
- This gave me a great opportunity to clean up and simplify the code!
The Physics Helper has been around since Silverlight 2 and has gathered some baggage through the addition of Behaviors, changes to Silverlight, and the addition of other platforms. So this was a great chance to... er.. "re-imagine" the code.

Getting Started

I encourage you to read the Documentation for details on using the new Physics Helper XAML classes. I think you'll find them very simple to use and quite performant.

What's Next

My personal goals are to port some of my Windows Phone 7 Silverlight games to Metro using this new version of the Physics Helper. A secondary goal I have is to back-port Physics Helper XAML to Windows Phone 7 and Silverlight, with the hopes of having a single code base again to support these other platforms. Again, because of all of the changes in Metro and WinRT, I am not positive this will have a great outcome - but it is definitely a goal I will strive for! In the meantime, I hope you have some fun with this new set of controls!

At the Build Conference this week, Microsoft handed out 5000 pre-configured Samsung tablets which included Windows 8 Developer Preview plus the development tools for building Metro style apps. They also made available an ISO containing the same bits. I suppose in a pinch you could develop apps on an underpowered tablet, but does anyone really want to write code on a tiny screen with an underpowered CPU? No worries though. We can still use a more powerful dev machine to run Visual Studio 11 and then use Remote Debugging to deploy and debug on the tablet. Here is a pic of the environment I set up - to the left is an ExoPC slate running Windows 8, sitting on some handy tablet stands, with Bluetooth keyboard and mouse. On the right is one of my dev machines, which has been booted to a VHD image of Windows 8.
We can use Remote Debugging inside VS11 by selecting Project Properties and entering the name of the slate machine on the network. Here are some helpful hints to get an environment like this set up:
- If you were not lucky enough to go to Build and get one of the Samsung tablets, think about buying a Windows 7 slate. This blog post lists a bunch that have been tested in Microsoft's labs with Windows 8. I would look for something with a minimum display resolution of 1366x768, 2GB RAM, and at least a 32GB SSD.
- To install Windows 8 on your slate, you can create a bootable USB flash drive and copy the Windows 8 Dev Preview to it. I outlined the steps for the ExoPC slate here, but other slates will be similar and you can search around for blog posts where people have set up their slate for Windows 8.
- To install Windows 8 on my development machine, I created a bootable VHD by following this great guide here by Mister Goodcat. As far as I know, this is the only way to get the initial release of Visual Studio 11 and Blend 5 for Metro app development.
- On the Windows 8 slate, there is an included Remote Debugging Monitor - you'll find this under the Metro UI icons next to VS and Blend. Fire that up, because it will tell you if remote debugging is working.
- Make sure you can ping your slate on the network from your dev machine. Then on the dev machine, set up the project properties for debugging by entering the slate's network name (see screenshot above).
- That's it - you should now have a somewhat more powerful dev environment for playing with the Windows 8 dev tools.
2006-05-22 05:10:40> *** nfsbot has joined #nfs 2006-05-22 05:10:40> *** nfsbot has joined #nfs 2006-05-22 05:11:08> <jafo> Hello nfsbot. 2006-05-22 05:12:00> <jafo> Ok, does anyone want the IRC logs on the web live as they are happening, or can I just wait until we're done and publish then? 2006-05-22 05:13:41> *** blais has joined #nfs 2006-05-22 05:14:20> <rjones> whatever :) 2006-05-22 05:15:31> *** dalke has joined #nfs 2006-05-22 05:15:44> <jafo> dalke: Hey there. 2006-05-22 05:19:42> <effbot> anyone here? 2006-05-22 05:19:59> <effbot> have everyone found anything to do, or ... 2006-05-22 05:23:09> <jafo> I'm just building my task list now. 2006-05-22 05:26:33> *** jbenedik has quit IRC 2006-05-22 05:39:00> *** jbenedik has joined #nfs 2006-05-22 05:41:53> *** ymmit has quit IRC 2006-05-22 05:41:59> <jafo> Is anyone working on trying to convert to 64-bit ints on 32-bit platforms and see how it impacts pybench? 2006-05-22 05:45:35> <jafo> holdenweb_: I still think we need to have a group discussion on the slow-down from 2.4 to 2.5 that you saw. 
2006-05-22 05:48:34> *** blais has quit IRC 2006-05-22 05:56:51> *** holdenweb_ has quit IRC 2006-05-22 06:03:40> *** jbenedik_ has joined #nfs 2006-05-22 06:06:36> *** runarp has joined #nfs 2006-05-22 06:07:00> *** kristjan_ has joined #nfs 2006-05-22 06:07:13> *** ymmit has joined #nfs 2006-05-22 06:07:35> *** sholden__ has joined #nfs 2006-05-22 06:08:22> *** etrepum_ has joined #nfs 2006-05-22 06:09:45> *** holdenweb_ has joined #NFS 2006-05-22 06:11:54> *** jbenedik_ has quit IRC 2006-05-22 06:15:44> *** gbrandl has quit IRC 2006-05-22 06:15:49> *** jbenedik has quit IRC 2006-05-22 06:15:52> *** rjones has quit IRC 2006-05-22 06:15:52> *** grunar has quit IRC 2006-05-22 06:16:05> *** etrepum has quit IRC 2006-05-22 06:16:08> *** gbrandl has joined #nfs 2006-05-22 06:16:14> *** sholden_ has quit IRC 2006-05-22 06:16:18> *** sholden has quit IRC 2006-05-22 06:16:22> *** dalke has quit IRC 2006-05-22 06:16:39> *** kristjan has quit IRC 2006-05-22 06:16:47> *** sholden has joined #nfs 2006-05-22 06:20:17> *** effbot has quit IRC 2006-05-22 06:21:32> *** uncletimmy has joined #nfs 2006-05-22 06:21:43> *** effbot has joined #nfs 2006-05-22 06:21:56> <effbot> I've posted my slowdown results here: 2006-05-22 06:21:57> <effbot> 2006-05-22 06:22:26> <effbot> The big ones seem to be import and try/except. 2006-05-22 06:22:28> <jafo> effbot: I'm just running tests now. 2006-05-22 06:25:17> *** jack_diederich has joined #nfs 2006-05-22 06:27:08> <holdenweb_> anyone using svn tortoise can show me how to set up authentication? 
2006-05-22 06:32:15> *** ymmit has quit IRC 2006-05-22 06:40:30> *** etrepum has joined #nfs 2006-05-22 06:40:52> *** gbr_ has joined #nfs 2006-05-22 06:41:25> *** kristjan has joined #nfs 2006-05-22 06:42:10> *** holdenweb has joined #NFS 2006-05-22 06:45:36> *** runarp has quit IRC 2006-05-22 06:55:56> *** uncletimmy has quit IRC 2006-05-22 06:58:36> *** jack_diederich has quit IRC 2006-05-22 06:58:39> *** etrepum_ has quit IRC 2006-05-22 06:58:43> *** sholden_ has joined #nfs 2006-05-22 06:58:58> *** sholden__ has quit IRC 2006-05-22 06:58:58> *** jack_diederich has joined #nfs 2006-05-22 06:59:00> *** sholden has quit IRC 2006-05-22 06:59:11> *** kristjan_ has quit IRC 2006-05-22 06:59:20> *** sholden has joined #nfs 2006-05-22 06:59:42> *** effbot has quit IRC 2006-05-22 07:00:24> *** gbrandl has quit IRC 2006-05-22 07:01:36> *** holdenweb_ has quit IRC 2006-05-22 07:39:38> *** effbot has joined #nfs 2006-05-22 07:44:50> *** grunar has joined #nfs 2006-05-22 07:56:08> *** runarp has joined #nfs 2006-05-22 08:00:10> *** rxe has joined #nfs 2006-05-22 08:01:14> <effbot> I have gobby running on 192.168.0.103 port 6522 if anyone wants to try. 2006-05-22 08:11:05> <holdenweb> effbot: gobby 0.3 or 0.4? 2006-05-22 08:14:54> *** grunar has quit IRC 2006-05-22 08:20:08> <effbot> gobby 0.4 2006-05-22 08:36:52> <holdenweb> so i discovered when I ran 0.3 :_) 2006-05-22 08:43:48> *** kristjan_ has joined #nfs 2006-05-22 08:47:06> *** ymmit has joined #nfs 2006-05-22 08:51:15> <holdenweb> effbot: how come I get an "overhead" column on my pybench output where you have a "diff *)" column? 2006-05-22 09:02:49> *** kristjan has quit IRC 2006-05-22 09:13:36> *** jbenedik has joined #nfs 2006-05-22 09:17:59> *** blais has joined #nfs 2006-05-22 09:19:35> *** ymmit has quit IRC 2006-05-22 09:23:37> *** ymmit has joined #nfs 2006-05-22 09:24:11> <jafo> I'm going to take a look at switching ints to 64 bits on 32-bit platforms. 
2006-05-22 09:42:40> *** blais has quit IRC 2006-05-22 09:42:51> *** blais has joined #nfs 2006-05-22 09:46:49> *** b-_-d has joined #nfs 2006-05-22 09:47:04> <b-_-d> anyone use nfs? how do i get locking? 2006-05-22 09:54:15> <jafo> b-_-d: See the channel topic. 2006-05-22 09:58:57> *** kristjan_ has quit IRC 2006-05-22 10:06:51> <jafo> How does Python decide to do the up-convert from int to long? I've converted all the types, I think, to "long long", but 1<<32 is going to L. 2006-05-22 10:12:02> *** doctorwells has joined #nfs 2006-05-22 10:13:09> <ymmit> jafo: for 1<<32, see intobject.c's int_lshift. 2006-05-22 10:13:20> <jafo> Thanks. 2006-05-22 10:13:57> <ymmit> You're probably see that the platform LONG_BIT is #define'd as 32, so left shift figures it _needs_ a Python long. 2006-05-22 10:14:35> <b-_-d> channel name is deciving 2006-05-22 10:14:59> <b-_-d> #Python would work 2006-05-22 10:15:10> *** b-_-d has left #nfs 2006-05-22 10:15:11> <jafo> b-_-d: No, it wouldn't. 2006-05-22 10:30:48> <jbenedik> doctorwells: you at keys? 2006-05-22 10:31:57> <doctorwells> yup 2006-05-22 10:34:21> *** uncletimmy has joined #nfs 2006-05-22 10:39:37> *** ymmit has quit IRC 2006-05-22 10:41:05> <jbenedik> i'm working on psyco dicts right now 2006-05-22 10:44:14> <doctorwells> cool, what part? virtualized keys(), values() and items() ? 2006-05-22 10:44:37> <jbenedik> we'll see how much - ideally yes 2006-05-22 10:44:40> <jbenedik> faster iteration 2006-05-22 10:44:44> <jbenedik> virtualized access 2006-05-22 10:46:11> <jafo> Ok, this is extremely weird. I converted the int stuff to long long on 32-bit platforms, and I'm getting: 2006-05-22 10:46:13> <jafo> Average round time: 4081.70 ms -100.00% 2006-05-22 10:46:26> <jafo> The -100.00% is consistent. 
2006-05-22 10:47:02> <doctorwells> sounds a lot faster 2006-05-22 10:48:08> <jafo> But the original is: Average round time: 4037.00 ms 2006-05-22 10:48:25> <jafo> Oh, wait, obviously there's a problem in my conversion to long long... 2006-05-22 10:48:34> <rxe> doctorwells: joining the sprint remotely? :-) 2006-05-22 10:48:44> <doctorwells> sumthin' like that 2006-05-22 10:50:19> <jafo> So, it looks like on pybench the difference is 1.11% slow-down by going to long long. 2006-05-22 10:51:03> <jafo> So, do we want to suck it up, or give it a pass? Anyone? 2006-05-22 10:51:26> <jbenedik> What kind of speedup do you get when you overflow? any? 2006-05-22 10:51:40> <doctorwells> does pybench do anything with ints in the 32 - 64 bit range? 2006-05-22 10:51:53> <jafo> Don't know and don't know. 2006-05-22 10:55:17> <jafo> Looks like 1<<35 is around 34% faster. 2006-05-22 11:13:24> *** runarp has quit IRC 2006-05-22 11:13:24> *** jafo has quit IRC 2006-05-22 11:13:30> *** runarp has joined #nfs 2006-05-22 11:13:30> *** jafo has joined #nfs 2006-05-22 11:37:12> *** kristjan has joined #nfs 2006-05-22 11:49:22> <kristjan> CS:EIP Symbol + Offset Thread ID Samples Total % CPU0 32 bit 64 bit 2006-05-22 11:49:22> <kristjan> 0x1e00f070 PyEval_EvalFrameEx 12839 49.94 12839 12839 0 2006-05-22 11:49:22> <kristjan> 0x1e01aa80 lookdict_string 738 2.87 738 738 0 2006-05-22 11:49:22> <kristjan> 0x1e00ea20 call_function 526 2.05 526 526 0 2006-05-22 11:49:22> <kristjan> 0x1e033560 rangeiter_next 338 1.31 338 338 0 2006-05-22 11:49:24> <kristjan> 0x1e0135d0 tupledealloc 333 1.3 333 333 0 2006-05-22 11:49:26> <kristjan> 0x1e0120d0 PyObject_GenericGetAttr 269 1.05 269 269 0 2006-05-22 11:49:30> <kristjan> 0x1e015320 list_dealloc 259 1.01 259 259 0 2006-05-22 11:49:40> <kristjan> profile data, functions over one % 2006-05-22 11:59:19> *** jbenedik has quit IRC 2006-05-22 12:09:53> <runarp> is anyone else looking at the longpatch? 
2006-05-22 12:17:31> *** jbenedik has joined #nfs 2006-05-22 12:28:30> *** holdenweb has left #nfs 2006-05-22 12:31:12> <jafo> Room 332 now has wireless in it, essid=tummy, feel free to use it if you can get it. 2006-05-22 12:37:30> *** Klipsch has joined #nfs 2006-05-22 12:41:34> *** uncletimmy has left #nfs 2006-05-22 12:41:55> *** effbot has quit IRC 2006-05-22 12:44:53> *** kristjan has quit IRC 2006-05-22 12:49:10> <Klipsch> is there a way to mount and ignore all hidden files like .hidden 2006-05-22 12:56:15> *** sholden_ has quit IRC 2006-05-22 12:56:18> *** sholden has quit IRC 2006-05-22 12:57:29> *** etrepum has quit IRC 2006-05-22 13:13:31> <gbrandl> Klipsch: this is not a Network File System channel 2006-05-22 13:17:43> <Klipsch> oh 2006-05-22 13:18:06> *** Klipsch has left #nfs 2006-05-22 13:22:24> *** jack_diederich has quit IRC 2006-05-22 13:23:26> *** jbenedik has quit IRC 2006-05-22 13:27:05> *** rxe has quit IRC 2006-05-22 13:38:04> *** blais has quit IRC 2006-05-22 13:38:46> *** gbrandl has quit IRC 2006-05-22 13:43:34> *** runarp has quit IRC 2006-05-22 15:54:39> *** grunar has joined #nfs 2006-05-22 16:15:16> *** doctorwells has left #nfs 2006-05-22 16:22:17> *** blais has joined #nfs 2006-05-22 16:55:05> *** blais has quit IRC 2006-05-22 17:00:52> <jafo> Hey folks. Anyone up for hacking in the lobby for a bit more? 2006-05-22 17:16:13> *** efm has joined #nfs 2006-05-22 17:16:34> <efm> SteveH: don't forget the Big Visible Charts for measuring progress 2006-05-22 17:29:10> <jafo> holdenweb is absent until around 3am your time. 2006-05-22 17:29:19> <jafo> Sounds like people would like to say hi though. 2006-05-22 17:29:34> <jafo> And, of course, nfsbot is logging this. 2006-05-22 17:30:17> <jafo> So far, 'we've had fewer people asking about network filsystems than I expected. 2006-05-22 17:35:42> <efm> hi All sprinters from warm and sunny Colorado! 
2006-05-22 18:05:57> *** efm has quit IRC 2006-05-22 18:32:28> *** runarp has joined #nfs 2006-05-22 18:32:40> *** grunar has quit IRC 2006-05-22 18:44:03> *** grunar has joined #nfs 2006-05-22 18:55:18> *** efm has joined #nfs 2006-05-22 18:57:45> *** runarp has quit IRC 2006-05-22 19:08:37> *** grunar has quit IRC 2006-05-22 19:11:58> *** grunar has joined #nfs 2006-05-22 20:23:54> *** grunar has quit IRC 2006-05-23 02:07:12> *** grunar has joined #nfs 2006-05-23 02:13:03> *** jack_diederich has joined #nfs 2006-05-23 02:37:23> *** grunar has quit IRC 2006-05-23 02:59:07> *** TomasOsmena has joined #nfs 2006-05-23 02:59:12> <TomasOsmena> hi all anybody here know what is the tool to print the ip of connected nfs client from our nfs server? 2006-05-23 02:59:24> <jafo> TomasOsmena: See the topic. 2006-05-23 03:00:20> <TomasOsmena> sorry 2006-05-23 03:00:24> <jafo> No problem. 2006-05-23 03:00:56> <TomasOsmena> any help which channel my question belongs to? 2006-05-23 03:02:26> <jafo> Don't know, sorry. 2006-05-23 03:05:51> *** grunar has joined #nfs 2006-05-23 03:06:23> *** ymmit has joined #nfs 2006-05-23 03:08:49> *** TomasOsmena has left #nfs 2006-05-23 03:10:12> <jack_diederich> my jotlive username is "jackdied" 2006-05-23 03:10:18> *** holdenweb has joined #NFS 2006-05-23 03:11:09> *** gbrandl has joined #nfs 2006-05-23 03:13:25> *** effbot has joined #nfs 2006-05-23 03:20:49> *** jbenedik has joined #nfs 2006-05-23 03:21:21> *** ccpRichard2 has joined #nfs 2006-05-23 03:22:30> *** etrepum has joined #nfs 2006-05-23 03:23:04> *** blais has joined #nfs 2006-05-23 03:23:32> <etrepum> jafo: jotlive user name is etrepum 2006-05-23 03:23:48> <jafo> done 2006-05-23 03:25:31> <grunar> my jotlive is runar 2006-05-23 03:26:08> <jafo> Done. 2006-05-23 03:29:38> <gbrandl> jafo: my name's gbrandl 2006-05-23 03:30:02> <jafo> Done. 2006-05-23 03:32:51> <ccpRichard2> jafo: mine is richardtew 2006-05-23 03:33:10> <jafo> Done. 
2006-05-23 03:44:30> *** ymmit has quit IRC 2006-05-23 03:45:19> <etrepum> the nnorwitz "speed up function calls" patch is 2006-05-23 03:52:31> <blais> jafo: add me "blais" 2006-05-23 03:52:34> <blais> to that jot thing 2006-05-23 03:52:36> <blais> ja 2006-05-23 03:52:48> <jbenedik> jafo: invite mrjbq7 2006-05-23 03:54:32> *** rxe has joined #nfs 2006-05-23 04:18:50> <etrepum> What's the PyMember type for Py_ssize_t? Do we need to add one? 2006-05-23 04:19:04> <etrepum> T_LONGLONG? 2006-05-23 04:20:59> <gbrandl> etrepum: no, there isn't one yet. I actually wrote to python-dev about it, and Martin (vL) said he'd like to see a use case ;) 2006-05-23 04:22:27> <etrepum> well my use case is that I have fields that count things, and it seems like it would make sense to use the appropriate Python type for counting things 2006-05-23 04:22:54> <etrepum> I guess I'll just use int 2006-05-23 04:22:56> <gbrandl> I'd say just add it in your branch 2006-05-23 04:23:13> <etrepum> I'm developing this as an extension module, not a branch of Python 2006-05-23 04:23:37> <etrepum> I don't need to recompile Python to add an API to an extension :) 2006-05-23 04:25:58> *** ccpRichard2 has quit IRC 2006-05-23 04:31:04> <gbrandl> etrepum: FWIW, the thread is at 2006-05-23 04:38:38> *** runarp has joined #nfs 2006-05-23 04:38:38> *** grunar has quit IRC 2006-05-23 04:39:46> * jafo stabs jotlive in the face over the Internet. 2006-05-23 04:44:50> *** effbot has quit IRC 2006-05-23 04:46:35> <etrepum> 2006-05-23 04:57:54> *** ccpRichard2 has joined #nfs 2006-05-23 04:58:49> <jafo> runarp: So, on the string offset idea. Adding string offsets to a function seems like the camels nose under the tent, if you know the expression. It seems like there are a lot of places it would be nice to have that. 
I wonder if it might be possible to do something like have a string sub-class that is a StringOffset(s, start, length) sort of thing, that would be light a light weight wrapper on top of string, which some functions would be specialized to 2006-05-23 04:58:49> <jafo> handle. 2006-05-23 04:59:52> <jafo> In the case of functions that don't understand it, the class could look like a regular string and provide a slice. For functions that do understand it, they could bypass the slicing. 2006-05-23 04:59:56> <jafo> Thoughts? 2006-05-23 05:16:44> *** ymmit has joined #nfs 2006-05-23 05:32:16> <ccpRichard2> jafo: Can you add KristjanJonsson please :) 2006-05-23 05:32:47> <jafo> Done. 2006-05-23 05:34:05> *** kristjan has joined #nfs 2006-05-23 05:46:38> <jafo> runarp: It sounds like the bute buffer stuff we're talking about out here would prevent the issues I was trying to solve above. Martin has more information. 2006-05-23 06:50:26> *** jbenedik has quit IRC 2006-05-23 06:55:21> *** blais has quit IRC 2006-05-23 06:56:37> *** ymmit has quit IRC 2006-05-23 06:57:11> *** gbrandl has quit IRC 2006-05-23 06:58:20> *** kristjan_ has joined #nfs 2006-05-23 06:58:52> *** holdenweb_ has joined #NFS 2006-05-23 06:59:03> *** jbenedik has joined #nfs 2006-05-23 07:01:17> *** etrepum_ has joined #nfs 2006-05-23 07:03:56> *** grunar has joined #nfs 2006-05-23 07:09:36> *** thingie56 has joined #nfs 2006-05-23 07:14:42> *** runarp has quit IRC 2006-05-23 07:14:58> *** jack_diederich has quit IRC 2006-05-23 07:15:05> *** rxe has quit IRC 2006-05-23 07:15:27> *** ccpRichard2 has quit IRC 2006-05-23 07:15:32> *** holdenweb has quit IRC 2006-05-23 07:15:33> *** etrepum has quit IRC 2006-05-23 07:16:51> *** etrepum has joined #nfs 2006-05-23 07:16:58> *** ccpRichard2 has joined #nfs 2006-05-23 07:17:09> *** holdenweb has joined #NFS 2006-05-23 07:20:16> *** kristjan has quit IRC 2006-05-23 07:20:21> *** runarp has joined #nfs 2006-05-23 07:23:39> *** jbenedik_ has joined #nfs 2006-05-23 
07:33:20> *** kristjan_ has quit IRC 2006-05-23 07:34:08> *** jbenedik has quit IRC 2006-05-23 07:34:11> *** holdenweb_ has quit IRC 2006-05-23 07:34:15> *** thingie56 has quit IRC 2006-05-23 07:34:40> *** grunar has quit IRC 2006-05-23 07:36:19> *** jack_diederich has joined #nfs 2006-05-23 07:42:10> *** grunar has joined #nfs 2006-05-23 07:42:53> *** etrepum_ has quit IRC 2006-05-23 07:43:41> *** holdenweb_ has joined #NFS 2006-05-23 07:46:05> *** gbrandl has joined #nfs 2006-05-23 07:51:27> *** ccpRichard2 has quit IRC 2006-05-23 07:53:13> *** holdenweb has quit IRC 2006-05-23 07:56:34> *** blais has joined #nfs 2006-05-23 07:59:51> *** holdenweb_ has quit IRC 2006-05-23 08:01:01> *** runarp has quit IRC 2006-05-23 08:23:02> *** ymmit has joined #nfs 2006-05-23 08:24:05> *** jbenedik_ has quit IRC 2006-05-23 08:31:07> *** jbenedik has joined #nfs 2006-05-23 08:41:52> *** amk_ has joined #nfs 2006-05-23 08:43:28> *** ccpRichard2 has joined #nfs 2006-05-23 08:54:16> *** synic has joined #nfs 2006-05-23 08:54:19> *** synic has left #nfs 2006-05-23 09:16:06> *** kristjan has joined #nfs 2006-05-23 09:19:20> *** ymmit has quit IRC 2006-05-23 09:23:35> *** ymmit has joined #nfs 2006-05-23 09:35:39> *** amk_ has quit IRC 2006-05-23 09:45:36> <kristjan> on windows, you want GetProcessTimes() or GetThreadTimes() 2006-05-23 09:46:07> <kristjan> 2006-05-23 09:48:29> *** jbenedik has quit IRC 2006-05-23 09:49:56> <kristjan> works on NT 2006-05-23 10:09:21> *** unkatimmy has joined #nfs 2006-05-23 10:14:47> *** ymmit has quit IRC 2006-05-23 10:27:59> <blais> some pics here: 2006-05-23 10:28:00> <blais> 2006-05-23 10:44:23> *** grunar has quit IRC 2006-05-23 10:47:59> <etrepum> - struct module performance enhancements 2006-05-23 10:52:33> *** bcannon has joined #nfs 2006-05-23 10:52:54> <bcannon> Morning folks. 2006-05-23 10:53:40> <bcannon> Actually, I just realized it's 3:00 over there so I bet no one is even logged in. 
=) 2006-05-23 10:54:44> <bcannon> OK, when you guys start to sprint again, shoot me an email and I will try to pop back on to answer questions about possibilities as to why the addition of new-style classes for exceptions could have added a performance hit. 2006-05-23 10:55:14> <bcannon> This is Brett in case people don't recognize the IRC nick. =) 2006-05-23 10:55:17> *** bcannon has quit IRC 2006-05-23 11:02:12> <blais> hey brett, wazzup 2006-05-23 11:07:22> <kristjan> sprinting is well and truly underway 2006-05-23 11:07:46> *** bcannon has joined #nfs 2006-05-23 11:07:51> <jafo> Hey there. 2006-05-23 11:08:11> <bcannon> I just realized my math was off because I subtracted instead of added 7 hours. =) 2006-05-23 11:08:15> <jafo> Still kind of scratching our heads as to where the performance change is. 2006-05-23 11:08:32> <jafo> guin:msglog$ TZ=GMT date 2006-05-23 11:08:32> <jafo> Tue May 23 17:08:27 GMT 2006 2006-05-23 11:08:50> <jafo> I don't do math when I check the time here. 2006-05-23 11:08:54> <bcannon> So a possibility is the PyException_* macros that are used to verify that an object is acceptable as a macro. 2006-05-23 11:09:40> <bcannon> Those are at several key points in the code path and they do several checks so there are more 'if' checks and have more memory accesses. 2006-05-23 11:09:58> <bcannon> I don't think object creation is done in a stupid way so I don't think it is from object creation, but it is a possibility. 2006-05-23 11:10:39> <bcannon> Otherwise I can't think of any specific points where the differences were huge. 2006-05-23 11:10:41> <jafo> Ok, a few things. 2006-05-23 11:10:56> <bcannon> Unless checking for string exceptions for possible warnings is just ridiculously costly. 2006-05-23 11:11:02> <jafo> We seem to still be seeing performance issues, even if we've already created the object. 2006-05-23 11:11:10> <bcannon> OK. 
2006-05-23 11:11:24> <jafo> We're looking at the macros now, and it seems like they would be getting called for both new and old style objects. 2006-05-23 11:11:47> <bcannon> So this is a performance hit in 2.5 for both new-style and classi? 2006-05-23 11:11:54> <bcannon> I thought this was a 2.4 -> 2.5 issue. 2006-05-23 11:12:10> <jafo> Kind of. 2006-05-23 11:12:28> <jafo> In 2.5, old style classes if thrown as an exception show no performance drop from 2.4.3. 2006-05-23 11:12:34> <bcannon> OK 2006-05-23 11:12:43> <jafo> New style classes show like a 100% drop. 2006-05-23 11:13:46> <bcannon> Unfortunately I am at work and just discovered svn is not installed so I can't check the code out and look. 2006-05-23 11:14:03> <jafo> Ok. 2006-05-23 11:14:15> <jafo> There is, of course: 2006-05-23 11:14:31> <bcannon> Going from memory, the differences in terms of code path is that obviously that new-style exceptions are new-style classes (but you mostly showed that is not needed). 2006-05-23 11:14:33> <jafo> We can dig into the macros and see if it seems that that is it. 2006-05-23 11:15:12> <bcannon> I remember there is a place where divergence happens in the code path based on type; give me a minute to try to find it. 2006-05-23 11:15:30> <jafo> Based on old or new style classes you mean? 2006-05-23 11:15:33> <jafo> We haven't seen that. 2006-05-23 11:16:09> <bcannon> Yeah, I think there is one spot; but my memory could be foggy since I had spent the previous day at the sprints fixing a bitch of a bug to track down and so my brain was frazzled. 2006-05-23 11:16:24> <jafo> Ah. 2006-05-23 11:20:13> <jafo> Thanks for popping in. I'm trying dikeing out some of the macros for our limited tests and seeing if that has an impact. 
2006-05-23 11:20:34> <bcannon> OK, what I was remembering is that block change in errors.c 2006-05-23 11:20:52> <jafo> haven't looked in errors.c 2006-05-23 11:24:09> <bcannon> When benchmarking, is it purely in raising an exception, or are you also viewing the exception in any way to trigger the __repr__ or __unicode__ methods? 2006-05-23 11:24:28> <jafo> Purely raising the exception. 2006-05-23 11:24:30> <bcannon> OK 2006-05-23 11:24:37> <jafo> try: raise ValueError 2006-05-23 11:24:40> <jafo> except: pass 2006-05-23 11:25:26> <bcannon> And no difference if you ``raise ValueError()`` or catching the specific exception? 2006-05-23 11:25:53> <jafo> Have only tried "raise ValueError, 5". 2006-05-23 11:26:05> <jafo> I can try the except ValueError in a few. 2006-05-23 11:26:09> *** grunar has joined #nfs 2006-05-23 11:26:39> <bcannon> Actually, try it so that the object creation is outside of the timing code and raise the instance. 2006-05-23 11:26:43> <jafo> Ok, if I change PyExceptionClass_Check to "1", pybench doesn't show any impact. 2006-05-23 11:26:55> <bcannon> OK 2006-05-23 11:28:38> <jafo> Trying outside of the timing code. 2006-05-23 11:28:50> *** jbenedik has joined #nfs 2006-05-23 11:30:09> <jafo> It's looking like by moving it out of the loop it's faster than 2.4.3 by an amazing amount. 2006-05-23 11:30:18> <jafo> 50%. 2006-05-23 11:30:19> <bcannon> =) 2006-05-23 11:30:29> <jafo> I guess it's not that amazing, but yeah, faster. 2006-05-23 11:30:31> <bcannon> So it is new-style instantiation. 2006-05-23 11:30:46> <jafo> Wait a moment. 2006-05-23 11:30:46> <bcannon> Or at least a good chunk of it. 2006-05-23 11:30:49> <bcannon> OK 2006-05-23 11:31:01> <jafo> I need to re-run the baseline. 2006-05-23 11:31:07> <bcannon> ok 2006-05-23 11:32:55> <jafo> Ok. With the exception outside the timing loop, it's the same speed on 2.4.3 and 2.5a2. 2006-05-23 11:33:02> <jafo> So, yeah, instance creation. 2006-05-23 11:33:05> <bcannon> Figures. 
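[Editor's note] The isolation step jafo and bcannon converge on — move the exception's construction out of the timed code and raise a pre-built instance — can be reproduced in miniature with `timeit`. This is only the shape of the experiment, not the sprint's actual pybench run; absolute numbers are machine-dependent:

```python
import timeit

N = 20_000

# Instance created inside the timed statement (what pybench measures)
t_fresh = timeit.timeit(
    "try:\n"
    "    raise ValueError(5)\n"
    "except ValueError:\n"
    "    pass",
    number=N)

# Instance built once in setup, re-raised each iteration; any remaining
# cost here is the raise machinery itself, not object creation
t_cached = timeit.timeit(
    "try:\n"
    "    raise exc\n"
    "except ValueError:\n"
    "    pass",
    setup="exc = ValueError(5)",
    number=N)

print(f"fresh instance: {t_fresh:.4f}s  pre-built: {t_cached:.4f}s")
```

If the gap between the two collapses when construction is hoisted out, the slowdown lives in instance creation — exactly the conclusion jafo reaches at 11:33.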
2006-05-23 11:33:14> <jafo> I guess there's nothing that can be done there. 2006-05-23 11:33:20> <bcannon> =) 2006-05-23 11:33:31> <bcannon> Can always work your voodoo on instance creation. =) 2006-05-23 11:34:01> <jafo> What makes you think I have voodoo? 2006-05-23 11:34:13> <bcannon> Or perhaps there is a better way to define the class than how it is currently. 2006-05-23 11:34:41> <jafo> I don't honestly know. 2006-05-23 11:35:15> <bcannon> Could possibly be better to define the class all in C with a proper struct instead of using a PyMethodDef for the magic methods. 2006-05-23 11:35:26> <bcannon> That should help take out some overhead. 2006-05-23 11:35:52> *** ccpRichard2 has quit IRC 2006-05-23 11:35:54> <bcannon> Assuming that doesn't break code somewhere for some odd reason. 2006-05-23 11:36:24> <bcannon> But basically that is the best I can think of. Might not be bad in terms of cleanup of the code anyway. 2006-05-23 11:36:38> <bcannon> Planning to do that at some point. 2006-05-23 11:36:53> <bcannon> But it might be a few years. =) 2006-05-23 11:37:08> <jafo> Are you talking about for the BaseException class? 2006-05-23 11:37:13> <bcannon> Yep. 2006-05-23 11:37:32> <jafo> Ok, I can try that. 2006-05-23 11:37:38> <bcannon> OK, cool. 2006-05-23 11:37:44> <jafo> Thanks for the help. 2006-05-23 11:37:51> <bcannon> No problem. Glad I could help. 2006-05-23 11:38:06> <bcannon> Everything going well over there? 2006-05-23 11:44:27> <blais> (c-add-style 2006-05-23 11:44:27> <blais> "python-new" 2006-05-23 11:44:27> <blais> '((indent-tabs-mode . nil) 2006-05-23 11:44:27> <blais> (fill-column . 78) 2006-05-23 11:44:27> <blais> (c-basic-offset . 4) 2006-05-23 11:44:28> <blais> (c-offsets-alist . ((substatement-open . 0) 2006-05-23 11:44:30> <blais> (inextern-lang . 0) 2006-05-23 11:44:32> <blais> (arglist-intro . +) 2006-05-23 11:44:34> <blais> (knr-argdecl-intro . +))) 2006-05-23 11:44:36> <blais> (c-hanging-braces-alist . 
((brace-list-open) 2006-05-23 11:44:38> <blais> (brace-list-intro) 2006-05-23 11:44:42> <blais> (brace-list-close) 2006-05-23 11:44:44> <blais> (brace-entry-open) 2006-05-23 11:44:46> <blais> (substatement-open after) 2006-05-23 11:44:48> <blais> (block-close . c-snug-do-while))) 2006-05-23 11:44:50> <blais> (c-block-comment-prefix . "")) 2006-05-23 11:44:52> <blais> ) 2006-05-23 11:44:54> <blais> (add-to-list 'c-default-style '(c-mode . "python-new")) 2006-05-23 11:45:29> <bcannon> And then Martin scares me away with Emacs Lisp. =) 2006-05-23 11:45:43> <bcannon> Well all, continue the great work. 2006-05-23 11:45:54> *** bcannon has quit IRC 2006-05-23 11:47:42> *** unkatimmy has quit IRC 2006-05-23 12:30:51> *** kristjan has quit IRC 2006-05-23 12:32:53> *** grunar has quit IRC 2006-05-23 12:46:24> *** jbenedik has quit IRC 2006-05-23 13:02:58> *** stakkars has joined #nfs 2006-05-23 13:03:33> <stakkars> hola 2006-05-23 13:03:41> <jafo> j0 2006-05-23 13:04:33> <jafo> WTF are you? 2006-05-23 13:05:23> <stakkars> I'm downstairs, locked in a somewhat hairy problem. 2006-05-23 13:05:32> <stakkars> this way should be faster to get out 2006-05-23 13:05:56> <jafo> Mmm. Hair. 2006-05-23 13:05:58> <stakkars> amazing that I have IRC, here 2006-05-23 13:07:00> <stakkars> WTF == Who or where? "who" == 'chris' 2006-05-23 13:07:11> <jafo> What The F**k 2006-05-23 13:07:43> <stakkars> Mr. Stackless, of course 2006-05-23 13:07:59> <jafo> Sorry, I meant Where. 2006-05-23 13:08:02> <jafo> Got distracted. 2006-05-23 13:08:09> <stakkars> oki :-) 2006-05-23 13:08:23> <jafo> [13:02:58] --> stakkars (n=tismer@213.213.135.203) has joined #nfs 2006-05-23 13:08:26> <jafo> I got the whom. 2006-05-23 13:08:55> <stakkars> if somebody is asking, I will show up, of course 2006-05-23 13:09:21> <jafo> I was just wondering because I didn't think I saw you up here, and you were online... 
2006-05-23 13:09:44> <stakkars> amazing, that tiny HUB 2006-05-23 13:11:06> <jafo> You are associated with the NFS APs, not the lobby one? 2006-05-23 13:12:26> <stakkars> no idea. I'm connected to needforspeed 2006-05-23 13:13:14> * stakkars going back into deep thought mode 2006-05-23 13:16:57> *** jack_diederich has left #nfs 2006-05-23 13:32:59> *** grunar has joined #nfs 2006-05-23 13:51:55> *** etrepum has quit IRC 2006-05-23 14:02:27> *** grunar has quit IRC 2006-05-23 14:10:05> *** stakkars has quit IRC 2006-05-23 14:13:50> *** blais has quit IRC 2006-05-23 14:18:20> *** gbrandl has quit IRC 2006-05-23 14:42:20> *** gecjr has joined #nfs 2006-05-23 14:42:29> *** gecjr has left #nfs 2006-05-23 16:45:34> *** efm has quit IRC 2006-05-23 17:53:09> *** efm has joined #nfs 2006-05-23 19:20:42> *** efm has quit IRC 2006-05-23 20:14:47> *** efm has joined #nfs 2006-05-23 20:27:54> *** Syron has joined #nfs 2006-05-23 20:28:15> <Syron> I know this isn't the network filesystems channel but do you know what channel that is? 2006-05-23 20:31:59> <efm> Syron: Finally, there is a public IRC channel #linux-nfs on the server irc.oftc.net, which may be used for discussing Linux NFS development, testing, and related topics. 2006-05-23 20:35:54> <Syron> thanks, the channel list was a bit long. :) 2006-05-23 20:36:36> *** Syron has left #nfs 2006-05-23 21:44:01> *** bthornton has joined #nfs 2006-05-23 21:44:10> *** bthornton has left #nfs 2006-05-23 22:04:44> *** efm has quit IRC 2006-05-23 22:32:10> *** efm has joined #nfs 2006-05-24 02:20:17> <jafo> j0 par-tay ppl. 
2006-05-24 03:12:38> *** grunar has joined #nfs 2006-05-24 03:16:48> *** gbrandl has joined #nfs 2006-05-24 03:19:02> *** runarp has joined #nfs 2006-05-24 03:27:21> *** etrepum has joined #nfs 2006-05-24 03:29:51> *** ccpRichard2 has joined #nfs 2006-05-24 03:37:50> *** grunar has quit IRC 2006-05-24 03:53:08> *** blais has joined #nfs 2006-05-24 04:02:14> *** stakkars has joined #nfs 2006-05-24 04:22:53> *** jbenedik has joined #nfs 2006-05-24 04:23:45> *** runarp has quit IRC 2006-05-24 04:25:49> *** etrepum has quit IRC 2006-05-24 04:37:58> *** mwh has joined #nfs 2006-05-24 04:39:39> <mwh> ok, so my recent mail to python-dev is wrong :) 2006-05-24 04:40:07> <jafo> mwh: If you were using Outlook you could send a retraction. :-) 2006-05-24 04:40:21> <jafo> Like the hotel did to us when we were at PyCon... 2006-05-24 04:40:47> *** etrepum has joined #nfs 2006-05-24 04:41:06> <jafo> <Reply All> "Wait till those suckers see what we're going to charge them next year." <Send> "Oh, wait..." 2006-05-24 04:41:25> <mwh> oops 2006-05-24 04:42:09> <stakkars> mike, do you have the rights to create a codespead account for John? 2006-05-24 04:42:27> *** rxe has joined #nfs 2006-05-24 04:42:29> <mwh> yes 2006-05-24 04:42:32> *** jack_diederich has joined #nfs 2006-05-24 04:42:41> <stakkars> we would like to work on a psyco branch and need to share for debugging 2006-05-24 04:42:50> <stakkars> that would be great! 2006-05-24 04:42:54> <mwh> who is john? 2006-05-24 04:43:09> <stakkars> John Benediktsson from EWT LLC 2006-05-24 04:43:19> <mwh> ah, jbenedik ? 2006-05-24 04:43:23> <stakkars> jup 2006-05-24 04:43:40> <stakkars> alternatively I could use my account, but well 2006-05-24 04:47:29> *** jack_diederich has quit IRC 2006-05-24 04:48:38> *** jack_diederich has joined #nfs 2006-05-24 04:52:29> *** stakkars has quit IRC 2006-05-24 04:53:32> *** stakkars has joined #nfs 2006-05-24 04:54:50> <stakkars> thanks, Mike! 
2006-05-24 04:55:53> *** blais has joined #nfs 2006-05-24 04:56:17> <blais> hey 2006-05-24 04:56:18> <blais> more pics 2006-05-24 04:56:20> <blais> 2006-05-24 04:56:49> *** hpk has joined #nfs 2006-05-24 04:57:23> <jafo> I put up Steve's picture of unkatimmy from yesterday at 2006-05-24 04:57:36> <jafo> Of course, more photos at jafo.ca 2006-05-24 05:02:58> <mwh> jafo: i'm not seeing anything like the slowdown you report with new-style exceptions 2006-05-24 05:03:25> <mwh> i'm seeing about 20% 2006-05-24 05:04:01> <mwh> and the time in all cases seems to be dominated by building the exception 2006-05-24 05:04:51> <jafo> Hrmm. 2006-05-24 05:04:55> <jafo> What platform? 2006-05-24 05:04:59> <mwh> os x 2006-05-24 05:05:06> <jafo> Sorry, we were also trying to figure out lunch plans. 2006-05-24 05:05:12> <mwh> fair enough 2006-05-24 05:05:41> <jafo> Are you using the pybench from trunk? 2006-05-24 05:05:49> <mwh> i'm using timeit 2006-05-24 05:06:01> <jafo> Hmm. Can you get me your test? 2006-05-24 05:06:24> <mwh> 2006-05-24 05:06:37> <mwh> then just 2006-05-24 05:06:39> <mwh> ./python.exe -m timeit -s 'import t' 't.main()' 2006-05-24 05:07:57> *** grunar has joined #nfs 2006-05-24 05:08:18> <jafo> zsh: no such file or directory: ./python.exe 2006-05-24 05:08:44> <mwh> it's freshly built from svn head 2006-05-24 05:08:58> <mwh> comparing vs python2.4 2006-05-24 05:09:09> <jafo> Nah, just giving you shit. 2006-05-24 05:09:35> <mwh> i generally assume a brain on behalf of my readers :) 2006-05-24 05:10:05> <jafo> So you expect them not to be running on a platform that has .exe extensions in other words? 
;-) 2006-05-24 05:10:26> <mwh> oh don't even start 2006-05-24 05:11:16> <jafo> :-) 2006-05-24 05:18:45> <jafo> guin:Python-2.4.3$ python -m timeit -s 'import t' 't.main()' 2006-05-24 05:18:45> <jafo> 10 loops, best of 3: 37.9 msec per loop 2006-05-24 05:19:52> <jafo> guin:python-trunk$ python -m timeit -s 'import t' 't.main()' 2006-05-24 05:19:53> <jafo> 10 loops, best of 3: 37.9 msec per loop 2006-05-24 05:20:00> <jafo> And these are best of like a dozen runs. 2006-05-24 05:20:08> <mwh> your machine is faster than mine! 2006-05-24 05:20:30> <mwh> pybench stikes again 2006-05-24 05:20:37> <mwh> ^r 2006-05-24 05:20:39> <jafo> I don't know... 2006-05-24 05:39:22> <gbrandl> etrepum: I assigned a struct bug to you, perhaps you can look into it 2006-05-24 05:40:07> <etrepum> gbrandl: as long as it is reproducible on 32-bit.. I don't have any 64 bit machines 2006-05-24 05:44:31> *** ericvrp-lunch has joined #nfs 2006-05-24 05:47:03> <jafo> mwh: What about Fredrik's results? 2006-05-24 05:47:41> <jafo> That was done in 2.5a2, I'm told. 2006-05-24 05:48:30> <mwh> yikes 2006-05-24 05:48:48> *** jbenedik has quit IRC 2006-05-24 05:51:39> *** rxe has quit IRC 2006-05-24 05:52:49> <gbrandl> etrepum: was there some mention of 64-bit in it? I don't really recall... 2006-05-24 05:53:05> <etrepum> gbrandl: I don't know, I still don't know which bug it is 2006-05-24 05:53:13> <gbrandl> oh 2006-05-24 05:53:17> <gbrandl> 1229380 2006-05-24 05:53:31> <etrepum> gbrandl: but I know there are bugs in 64-bit because the int size is different and the struct module works on C types 2006-05-24 05:53:39> <gbrandl> isn't it on the list in "My SF"? 2006-05-24 05:54:41> <etrepum> gbrandl: ah, yeah, I see it now. They moved that stuff around since I've last used it 2006-05-24 05:55:25> <gbrandl> they don't seem to employ any usability experts 2006-05-24 06:02:14> *** rxe has joined #nfs 2006-05-24 06:31:07> <jafo> mwh: Ok, Tim solved the mystery of why it was the same. 
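[Editor's note] The `t` module mwh linked isn't preserved in the log; from the surrounding discussion it was a raise-and-catch loop along these lines. This is a hypothetical reconstruction, not the actual file:

```python
# Hypothetical reconstruction of mwh's benchmark module "t"; the real
# file was linked in the channel and is not preserved here.
def main(loops=10_000):
    for _ in range(loops):
        try:
            raise ValueError
        except ValueError:
            pass
```

Driven exactly as in the log: `./python -m timeit -s 'import t' 't.main()'` — jafo's identical 37.9 msec readings for 2.4.3 and trunk turn out (at 06:31) to be because `python` rather than `./python` was on his PATH both times.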
I was running "python" instead of "./python". I'm absolutely seeing a difference on Linux, Tim is seeing a difference on Windows. 2006-05-24 06:37:14> *** grunar has quit IRC 2006-05-24 07:08:12> *** grunar has joined #nfs 2006-05-24 07:26:40> *** ericvrp has left #nfs 2006-05-24 07:32:41> <blais> emacs users 2006-05-24 07:32:47> <blais> if you edit document strings 2006-05-24 07:32:51> <blais> you will liek this 2006-05-24 07:32:54> <blais> (kjust wrote it) 2006-05-24 07:32:59> <blais> ;; For fixing up multi-line strings embedded in C code. 2006-05-24 07:33:00> <blais> (defun c-multiline-string-fixup (beg end) 2006-05-24 07:33:00> <blais> (interactive "r") 2006-05-24 07:33:00> <blais> (let ((mbeg (set-marker (make-marker) beg)) 2006-05-24 07:33:00> <blais> (mend (set-marker (make-marker) end))) 2006-05-24 07:33:00> <blais> ;; Remove the current postfixes. 2006-05-24 07:33:02> <blais> (goto-char mbeg) 2006-05-24 07:33:04> <blais> (while (< (point) mend) 2006-05-24 07:33:06> <blais> (when (re-search-forward "\\\\n\\\\" (line-end-position) t) 2006-05-24 07:33:09> <blais> (goto-char (match-beginning 0)) 2006-05-24 07:33:10> <blais> (delete-char 3)) 2006-05-24 07:33:12> <blais> (forward-line 1)) 2006-05-24 07:33:14> <blais> ;; Add the postfixes back in. 2006-05-24 07:33:16> <blais> (goto-char mbeg) 2006-05-24 07:33:18> <blais> (while (< (point) mend) 2006-05-24 07:33:20> <blais> (end-of-line) 2006-05-24 07:33:24> <blais> (insert "\\n\\") 2006-05-24 07:33:26> <blais> (forward-line 1)) 2006-05-24 07:33:28> <blais> ;; Clear markers. 2006-05-24 07:33:30> <blais> (dolist (x (list mbeg mend)) (set-marker x nil)) 2006-05-24 07:33:32> <blais> )) 2006-05-24 07:33:34> <blais> in c code 2006-05-24 07:33:36> <blais> will adjust the \n\ markers at the end of strings for ya 2006-05-24 07:35:45> <jafo> blais: The jotlive.com page has a link to pastebin.de, you should use that for that sort of thing. 
2006-05-24 07:35:46> <gbrandl> jack_diederich: try out str(Decimal(222222222222222222222222222222222222222222222222222)) 2006-05-24 07:36:45> <blais> here, even better, this versoin you can remove the \n\ first, then edit, then have them put back 2006-05-24 07:36:51> <blais> (defun c-multiline-string-fixup (beg end) 2006-05-24 07:36:51> <blais> "Replace or remove (with prefix arg) trailing \n\ chars within the region. 2006-05-24 07:36:51> <blais> This is useful for editing multi-line strings in C." 2006-05-24 07:36:51> <blais> (interactive "r") 2006-05-24 07:36:55> <blais> (let ((mbeg (set-marker (make-marker) beg)) 2006-05-24 07:36:55> <blais> (mend (set-marker (make-marker) end))) 2006-05-24 07:36:56> <blais> ;; Remove the current postfixes. 2006-05-24 07:36:58> <blais> (goto-char mbeg) 2006-05-24 07:37:00> <blais> (while (< (point) mend) 2006-05-24 07:37:02> <blais> (when (re-search-forward "\\\\n\\\\" (line-end-position) t) 2006-05-24 07:37:04> <blais> (goto-char (match-beginning 0)) 2006-05-24 07:37:06> <blais> (delete-char 3)) 2006-05-24 07:37:08> <blais> (forward-line 1)) 2006-05-24 07:37:10> <blais> (unless current-prefix-arg 2006-05-24 07:37:12> <blais> ;; Add the postfixes back in. 2006-05-24 07:37:14> <blais> (goto-char mbeg) 2006-05-24 07:37:16> <blais> (while (< (point) mend) 2006-05-24 07:37:18> <blais> (end-of-line) 2006-05-24 07:37:20> <blais> (insert "\\n\\") 2006-05-24 07:37:24> <blais> (forward-line 1))) 2006-05-24 07:37:26> <blais> ;; Clear markers. 2006-05-24 07:37:28> <blais> (dolist (x (list mbeg mend)) (set-marker x nil)) 2006-05-24 07:37:30> <blais> )) 2006-05-24 07:37:32> <blais> (define-key c-mode-map [(control c)(\\)] 'c-multiline-string-fixup) 2006-05-24 07:37:34> <blais> enjoy 2006-05-24 07:41:00> <mwh> blais: pastebin.de 2006-05-24 07:51:53> <jafo> Calling Mr. Blivious. Mr. Martin O. Blivious. 
2006-05-24 07:56:14> *** jbenedik has joined #nfs 2006-05-24 08:27:49> *** etrepum has quit IRC 2006-05-24 08:30:24> *** etrepum has joined #nfs 2006-05-24 08:54:57> *** ccpRichard2 has quit IRC 2006-05-24 08:57:11> *** jbenedik has quit IRC 2006-05-24 08:59:34> *** jbenedik has joined #nfs 2006-05-24 09:04:50> *** etrepum has left #nfs 2006-05-24 09:08:55> *** jbenedik has quit IRC 2006-05-24 09:28:49> *** jbenedik has joined #nfs 2006-05-24 09:29:37> *** runarp has joined #nfs 2006-05-24 09:38:52> *** blais_ has joined #nfs 2006-05-24 09:39:10> *** gbr_ has joined #nfs 2006-05-24 09:39:49> *** gbrandl has quit IRC 2006-05-24 09:40:16> *** rxe has quit IRC 2006-05-24 09:40:32> *** grunar has quit IRC 2006-05-24 09:40:33> *** stakkars has quit IRC 2006-05-24 09:40:34> *** blais has quit IRC 2006-05-24 09:41:16> *** jack_diederich has quit IRC 2006-05-24 09:42:51> *** etrepum_ has joined #nfs 2006-05-24 10:01:07> *** jbenedik has quit IRC 2006-05-24 10:11:33> <gbrandl> blais_: do you want to copyright your next code file to me? 2006-05-24 10:12:12> *** jbenedik has joined #nfs 2006-05-24 10:21:23> *** Stargazers has joined #nfs 2006-05-24 10:21:30> *** Stargazers has left #nfs 2006-05-24 10:23:23> <blais_> huh? 2006-05-24 10:28:08> *** tiny has joined #nfs 2006-05-24 10:28:15> *** tiny has left #nfs 2006-05-24 10:32:48> *** Yhg1s has joined #nfs 2006-05-24 10:33:01> <Yhg1s> heh, getting a lot of NFS questions here, I guess? ;) 2006-05-24 10:37:40> <gbrandl> blais_: you copyrighted your test file to Greg P. Smith ;) 2006-05-24 10:46:18> <etrepum> Is it likely to break anything if I return int instead of long from struct when they're small enough to fit? 2006-05-24 10:47:05> <etrepum> creating and working with long is awfully slow 2006-05-24 10:56:53> <Yhg1s> etrepum: no, it shouldn't. 
2006-05-24 10:57:17> <Yhg1s> (assuming we're talking about Python ints and longs; C ints and longs should be automatically converted, and not slow anyway) 2006-05-24 11:03:43> <etrepum> C ints and longs are the same thing on most architectures... 2006-05-24 11:04:06> <blais_> brnadl: you can see how much I care abouw copyrights 2006-05-24 11:08:13> *** stakkars has joined #nfs 2006-05-24 11:15:31> <Yhg1s> etrepum: yes (except for me, as I mostly have em64t machines), hence 'not slow' 2006-05-24 11:46:50> *** bcannon has joined #nfs 2006-05-24 11:47:05> <bcannon> exit 2006-05-24 11:47:07> *** bcannon has quit IRC 2006-05-24 12:01:58> *** gbrandl has quit IRC 2006-05-24 12:02:16> *** jbenedik has quit IRC 2006-05-24 12:02:25> *** etrepum has quit IRC 2006-05-24 12:02:36> *** runarp has quit IRC 2006-05-24 12:02:58> *** stakkars has quit IRC 2006-05-24 12:06:02> *** blais_ has quit IRC 2006-05-24 12:20:13> *** blais has joined #nfs 2006-05-24 12:23:03> *** gbrandl has joined #nfs 2006-05-24 12:26:42> *** jbenedik has joined #nfs 2006-05-24 12:44:46> *** stakkars has joined #nfs 2006-05-24 13:37:31> *** jbenedik has quit IRC 2006-05-24 13:39:16> *** gbrandl has quit IRC 2006-05-24 13:41:00> *** blais has quit IRC 2006-05-24 13:55:34> *** stakkars has quit IRC 2006-05-24 18:31:37> *** efm has quit IRC 2006-05-24 19:28:50> *** ferringb has joined #nfs 2006-05-24 21:16:46> *** efm has joined #nfs 2006-05-25 01:51:34> <jafo> j0 2006-05-25 01:59:07> *** hpk has quit IRC 2006-05-25 02:24:22> *** rjones has joined #nfs 2006-05-25 02:55:34> *** rjones has quit IRC 2006-05-25 02:56:05> *** The_Ball has joined #nfs 2006-05-25 02:56:31> <The_Ball> is there a network filesystem channel? 2006-05-25 03:02:22> <ferringb> not here ;) 2006-05-25 03:10:51> <jafo> The_Ball: Yeah, no idea. Sorry. 2006-05-25 03:11:34> <jafo> When we set up the channel on Sunday, it didn't exist for at least the previous day. 
2006-05-25 03:12:18> <The_Ball> i see 2006-05-25 03:12:49> *** The_Ball has left #nfs 2006-05-25 04:32:46> <Yhg1s> jafo: will you sprinters be doing any sightseeing in iceland? I can highly recommend it, it's an extremely beautiful island (although my favourite spots are more than 7 hours drive away from reykjavik ;P) 2006-05-25 06:21:46> <ccpRichard> They went out to the Blue Lagoon this morning 2006-05-25 07:41:46> *** jack_diederich has joined #nfs 2006-05-25 07:43:52> <jafo> Yhg1s: We ewnt to a viking dinner last night, and a today we just got back from the Blue Lagoon. 2006-05-25 07:48:11> *** grunar has joined #nfs 2006-05-25 07:48:57> *** gbrandl has joined #nfs 2006-05-25 07:57:00> *** jbenedik has joined #nfs 2006-05-25 08:04:59> *** stakkars has joined #nfs 2006-05-25 08:05:23> *** mwh has quit IRC 2006-05-25 08:11:43> *** blais has joined #nfs 2006-05-25 08:15:30> *** etrepum has joined #nfs 2006-05-25 08:19:55> *** pico has joined #nfs 2006-05-25 08:20:13> *** pico has left #nfs 2006-05-25 08:32:01> <blais> t 2006-05-25 08:32:07> <jack_diederich> est 2006-05-25 08:35:03> <blais> 2006-05-25 08:35:04> <blais> new pics 2006-05-25 08:35:07> <blais> from the lagoon 2006-05-25 08:35:09> <blais> and last nite 2006-05-25 08:35:13> <blais> enjoy 2006-05-25 08:38:37> <efm> thanks blais 2006-05-25 08:43:45> <blais> hey evelyn wazzup, sitting here next to jafo 2006-05-25 08:45:46> <jafo> Hi, efm. 2006-05-25 08:46:05> <efm> just getting up. It's another beautiful day in Colorado. Kitties are wonderful. 2006-05-25 08:46:29> <efm> Looks like you're having a great time. I've wanted to visit the Blue Lagoon for many years. I hear it's great. 2006-05-25 08:46:41> <efm> hi jafo 2006-05-25 09:10:46> <Yhg1s> the blue lagoon is nice, but not as nice as the parts of Iceland that aren't as close to Reykjavik :) 2006-05-25 09:11:15> <Yhg1s> warmer than Jokulsarlon though. 
2006-05-25 09:11:46> <Yhg1s> (sorry, Jökulsárlón) 2006-05-25 09:14:59> *** rxe has joined #nfs 2006-05-25 09:18:36> <jafo> Yeah, but, you know, we're working. This was just a morning trip. 2006-05-25 09:20:48> <Yhg1s> good thing I'm not there, I don't know if I'd been able to resist the temptation to keep driving :) 2006-05-25 09:21:36> <jafo> :-) 2006-05-25 09:27:43> *** ymmit has joined #nfs 2006-05-25 10:20:08> *** jbenedik has quit IRC 2006-05-25 10:29:39> *** jbenedik has joined #nfs 2006-05-25 10:55:55> *** uncletimmy has joined #nfs 2006-05-25 11:01:41> *** ymmit has quit IRC 2006-05-25 11:18:23> *** uncletimmy has left #nfs 2006-05-25 11:27:00> *** jack_diederich has quit IRC 2006-05-25 11:30:11> *** jbenedik has quit IRC 2006-05-25 11:35:51> *** jbenedik has joined #nfs 2006-05-25 11:49:17> *** jack_diederich has joined #nfs 2006-05-25 11:49:35> *** grunar has quit IRC 2006-05-25 11:54:28> *** jbenedik has quit IRC 2006-05-25 11:59:45> *** jbenedik has joined #nfs 2006-05-25 12:02:05> *** uncletimmy has joined #nfs 2006-05-25 12:03:37> *** etrepum has quit IRC 2006-05-25 12:13:35> *** holdenweb has joined #nfs 2006-05-25 12:16:18> *** rjones has joined #nfs 2006-05-25 12:16:35> <jafo> :w 2006-05-25 12:16:39> <jafo> Ugh. Sorry. 2006-05-25 12:29:34> <holdenweb> ls 2006-05-25 12:57:52> *** jbenedik has quit IRC 2006-05-25 13:01:05> *** tuv has joined #nfs 2006-05-25 13:01:22> *** tuv has left #nfs 2006-05-25 13:04:05> *** uncletimmy has quit IRC 2006-05-25 13:17:05> *** krang has joined #nfs 2006-05-25 13:17:38> <krang> hey hey, anyone got a URL for a good tutorial on automounting home directories with NFS+LDAP? 2006-05-25 13:18:13> <krang> I don't seem to be able to find one 2006-05-25 13:18:33> <efm> krang: you'll want to try #linux-nfs on irc.oftc.net 2006-05-25 13:19:02> <krang> efm: cheers 2006-05-25 13:19:35> <krang> which *is* the best network for general linuxry? 
2006-05-25 13:19:36> *** grunar has joined #nfs 2006-05-25 13:20:03> *** rjones has quit IRC 2006-05-25 13:20:30> <efm> It depends krang. I hang out on the channel for my local linux users group, and then use other networks for specific questions 2006-05-25 13:21:14> *** jamwt has joined #nfs 2006-05-25 13:21:27> <krang> I'm basically trying to get my sysadmin skills together, and have lots of questions. 2006-05-25 13:21:39> <krang> where's best for that? 2006-05-25 13:21:45> <efm> then I'd suggest hooking up with people locally. 2006-05-25 13:22:13> <krang> lol, I'm in rural Canada and had to shoot internet 5km with a wireless link. I think I'm SOL on that count :-) 2006-05-25 13:22:42> <efm> krang you can follow me to community.tummy.com 6667 #hackingsociety 2006-05-25 13:22:46> <krang> regardless, is there a directory for local groups somewhere? 2006-05-25 13:22:59> <krang> cheers dude 2006-05-25 13:24:23> <efm> krang: correction irc.community.tummy.com 6667 #hackingsociety 2006-05-25 13:24:32> <krang> gotcha 2006-05-25 13:24:54> *** etrepum has joined #nfs 2006-05-25 13:27:44> *** uncletimmy has joined #nfs 2006-05-25 13:30:46> *** blais has quit IRC 2006-05-25 13:31:44> *** stakkars has quit IRC 2006-05-25 13:36:56> <jafo> I know this music. 2006-05-25 13:37:53> <jafo> krang/efm: Please take the discussion elsewhere. See the topic. 2006-05-25 13:38:36> <efm> yes jafo 2006-05-25 13:39:27> <krang> jafo: soz dude 2006-05-25 13:39:45> *** holdenweb has quit IRC 2006-05-25 13:40:16> <krang> back onto the topic, have any of you fine people seen a good tutorial on how to use NFS/LDAP for automounting home dirs? 2006-05-25 13:40:32> <efm> krang: you are on the wrong channel 2006-05-25 13:40:47> <jafo> krang: This is not the correct place to ask that. 2006-05-25 13:41:02> <krang> oh crap, i have to start reading topics 2006-05-25 13:41:05> <krang> sorry! 2006-05-25 13:41:18> <jafo> Thanks. 
2006-05-25 13:41:29> *** krang has left #nfs 2006-05-25 13:52:20> *** grunar has quit IRC 2006-05-25 14:21:17> *** etrepum_ has joined #nfs 2006-05-25 14:24:10> *** uncletimmy has quit IRC 2006-05-25 14:27:22> *** gbrandl has quit IRC 2006-05-25 14:29:13> *** etrepum has quit IRC 2006-05-25 14:29:18> *** gbrandl has joined #nfs 2006-05-25 14:30:14> *** rxe has quit IRC 2006-05-25 14:31:04> *** jack_diederich has quit IRC 2006-05-25 14:35:48> *** holdenweb has joined #nfs 2006-05-25 14:42:44> <jafo> svn+ssh://pythondev@svn.python.org/python/trunk/ 2006-05-25 14:56:34> *** stakkars has joined #nfs 2006-05-25 15:01:29> *** blais has joined #nfs 2006-05-25 15:05:35> *** jbenedik has joined #nfs 2006-05-25 15:12:26> <etrepum> that strange little test_float regression is fixed 2006-05-25 15:14:17> *** jbenedik has quit IRC 2006-05-25 15:15:53> *** jbenedik has joined #nfs 2006-05-25 15:42:26> *** jbenedik has quit IRC 2006-05-25 15:51:00> *** ferringb has left #nfs 2006-05-25 15:52:25> *** jbenedik has joined #nfs 2006-05-25 15:54:34> <gbrandl> * mwh is tempted to reply to /F on python-dev with "are you drunk?" 2006-05-25 16:26:53> <etrepum> uh.. wtf.. check timemodule.c 2006-05-25 16:26:59> <etrepum> 46261 tim.peters #if defined(MS_WINDOWS) && !defined(__BORLANDC__) 2006-05-25 16:26:59> <etrepum> 15913 fdrake /* Win32 has better clock replacement 2006-05-25 16:26:59> <etrepum> 7713 guido #undef HAVE_CLOCK /* We have our own version down below */ 2006-05-25 16:26:59> <etrepum> 46261 tim.peters #endif /* MS_WINDOWS && !defined(__BORLANDC__) */ 2006-05-25 16:27:14> <etrepum> the comment isn't closed... so #undef HAVE_CLOCK never happens 2006-05-25 16:27:17> <etrepum> that can't be intentional can it? 
2006-05-25 16:54:52> *** jbenedik has quit IRC 2006-05-25 17:10:43> *** etrepum has quit IRC 2006-05-25 17:27:18> *** holdenweb has quit IRC 2006-05-25 18:20:07> *** stakkars has quit IRC 2006-05-25 18:30:37> *** gbrandl has quit IRC 2006-05-25 20:07:23> *** blais has quit IRC 2006-05-26 02:51:20> *** holdenweb has joined #nfs 2006-05-26 02:52:38> *** grunar has joined #nfs 2006-05-26 03:16:51> *** stakkars has joined #nfs 2006-05-26 03:21:08> *** jack_diederich has joined #nfs 2006-05-26 03:30:04> <holdenweb> Nice benchmarks on .../Successes, John! 2006-05-26 03:31:33> *** grunar has quit IRC 2006-05-26 03:40:06> *** jbenedik has joined #nfs 2006-05-26 03:40:59> *** runarp has joined #nfs 2006-05-26 03:54:24> *** etrepum has joined #nfs 2006-05-26 03:54:37> <etrepum> runarp: 2006-05-26 04:25:58> *** rxe has joined #nfs 2006-05-26 04:36:06> *** blais has joined #nfs 2006-05-26 04:45:55> *** stakkars has quit IRC 2006-05-26 04:46:07> *** stakkars has joined #nfs 2006-05-26 04:51:49> *** jbenedik has quit IRC 2006-05-26 04:53:33> *** jbenedik has joined #nfs 2006-05-26 05:19:01> *** kristjan has joined #NFS 2006-05-26 05:26:37> *** kristjan_ has joined #NFS 2006-05-26 06:27:39> *** blais has quit IRC 2006-05-26 06:28:54> *** blais has joined #nfs 2006-05-26 07:05:29> *** amk_ has joined #nfs 2006-05-26 07:16:34> <amk_> Martin and Bob: test_struct.py now does things like 's = struct.Struct(fmt)'. 2006-05-26 07:16:42> <amk_> Is the intention that the Struct class is now part of the public interface? 2006-05-26 07:25:53> *** etrepum has quit IRC 2006-05-26 07:27:38> <jafo> That's crazy talk! 2006-05-26 07:33:02> *** etrepum has joined #nfs 2006-05-26 07:34:01> <etrepum> amk_: yes, it's public API. There's a doc patch, but it needs to be revised. 2006-05-26 07:39:06> <amk_> OK; I'll mention it in the what's-new, then. Thanks! 2006-05-26 07:53:40> <amk_> Will you also be adding a pack_to() module-level function? 
2006-05-26 07:54:50> <etrepum> I suppose we should for symmetry 2006-05-26 07:55:19> <etrepum> but it's not going to be commonly useful, there are very few objects that implement the write buffer protocol 2006-05-26 07:57:28> <etrepum> is there an IRC bot that announces python commits? 2006-05-26 08:00:47> <jafo> There's one that criticizes python commits... 2006-05-26 08:02:00> <etrepum> true 2006-05-26 08:09:46> <holdenweb> pybench wasn't distributed with 2.4, right? 2006-05-26 08:12:11> <amk_> Steve: correct. 2006-05-26 08:20:31> *** jbenedik has quit IRC 2006-05-26 08:26:13> *** jbenedik has joined #nfs 2006-05-26 08:27:03> *** etrepum has quit IRC 2006-05-26 09:01:49> <blais> john / bob: 2006-05-26 09:01:53> <blais> some new results 2006-05-26 09:01:59> <blais> profiling 2006-05-26 09:02:07> <blais> a single funcall per message makes it slower... 2006-05-26 09:02:10> <blais> (python funcal) 2006-05-26 09:02:36> <blais> so we go from 28s to 22s with the hot bufferola 2006-05-26 09:04:22> <jafo> "You just spent a week saving 6 seconds, tell our viewers how you fee." 2006-05-26 09:04:31> <blais> hehe 2006-05-26 09:05:01> <blais> well that's on a 10MB file rather than a 5G 2006-05-26 09:05:24> <jafo> "You're WAY above our viewers heads." 2006-05-26 09:09:29> *** etrepum has joined #nfs 2006-05-26 09:11:57> <blais> etrepum 2006-05-26 09:12:08> <blais> john / bob: 2006-05-26 09:15:30> <blais> jafo: here's how i fee, or whatever you mean, 2006-05-26 09:16:37> <etrepum> blais: excellent 2006-05-26 09:16:56> <etrepum> blais: is hotbuf on the trunk? 2006-05-26 09:17:02> <jafo> blais: Excellent. 2006-05-26 09:17:25> <blais> hotbuf in /sandbox/trunk/hotbuf 2006-05-26 09:17:28> <blais> the example isn't though 2006-05-26 09:17:44> <blais> johnny: can I add your blobxxx.py in the python sandbox? 
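[Editor's note] The `struct.Struct` class amk asks about did become public API in 2.5: compiling the format string once amortizes its parsing cost over repeated calls. The buffer-writing counterpart discussed here as `pack_to()` shipped under the name `pack_into()`. A short sketch using the released API:

```python
import struct

# Compile the format once; "<ih" = little-endian int (4 bytes) + short (2)
s = struct.Struct("<ih")

packed = s.pack(1234, 56)
assert s.unpack(packed) == (1234, 56)

# pack_into writes in place into any writable buffer (the "write buffer
# protocol" etrepum mentions), e.g. a bytearray
buf = bytearray(s.size)
s.pack_into(buf, 0, 1234, 56)
assert s.unpack_from(buf, 0) == (1234, 56)
```

etrepum's point stands in the released API too: `pack_into` is only useful with objects that accept in-place writes, which is a much smaller set than the read-buffer objects `unpack_from` accepts.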
2006-05-26 09:19:15> <blais> jafo: no, not excellent, on a larger buffer hotbuf is about the same
2006-05-26 09:19:23> <blais> will run oprofile
2006-05-26 09:23:53> *** jbenedik has quit IRC
2006-05-26 09:26:36> *** jbenedik has joined #nfs
2006-05-26 09:26:40> <etrepum> sandbox/trunk/hotbuffer
2006-05-26 09:26:50> <blais> ohnny: can I add your blobxxx.py in the python sandbox?
2006-05-26 09:27:08> <blais> svn+ssh://pythondev@svn.python.org/sandbox/trunk/hotbuffer
2006-05-26 09:27:37> <blais> sorry about that, i just moved it before (it's now an extension module)
2006-05-26 09:31:13> *** efm has quit IRC
2006-05-26 09:44:50> <jafo> I noticed yesterday that when I went down the stairs there was this huge rush of air up. Now I'm sitting by the door to the door by the elevators. It's clear that the doors being open up here is sucking the air out of the rest of the hotel.
2006-05-26 09:45:17> <jafo> Hopefully, the rest of the guests can live in a perfect vacuum.
2006-05-26 09:52:06> *** blais has quit IRC
2006-05-26 09:53:51> <jafo> holdenweb: Scotty, where's that new trunk pybench?
2006-05-26 09:55:17> *** blais has joined #nfs
2006-05-26 09:58:33> <holdenweb> slight problem with the dilithium crystal, cap'n
2006-05-26 09:59:05> <holdenweb> ah cannae hold them together and they'll no process ther options. come help me debug!
2006-05-26 10:26:22> <etrepum> blais: I committed the unpack implementation to hotbuf and added some tests for the pack and unpack methods of hotbuf
2006-05-26 10:28:19> <holdenweb> trunk pybench now checked in
2006-05-26 10:32:48> <jafo> Yay!
2006-05-26 11:08:20> *** efm has joined #nfs
2006-05-26 11:19:34> <blais> etrepum: your tab setting is at 5
2006-05-26 11:19:52> <etrepum> no it's not
2006-05-26 11:21:32> <etrepum> I love the C API docs
2006-05-26 11:21:32> <etrepum> XXX blah, blah.
2006-05-26 11:26:21> <blais> dang
2006-05-26 11:26:25> <blais> it's still slowr
2006-05-26 11:26:40> <blais> i made restore() take an arg for advancing
2006-05-26 11:27:04> <blais> bob: it's merged BTW in case you're coding your descrpitor thingy
2006-05-26 11:27:14> <blais> (I mean committed)
2006-05-26 11:28:49> <blais> jbenedik: can I merge your ewt blobxxx.py into the python sandbox? would you me rather not? it could serve as a use case
2006-05-26 11:28:55> <blais> (for slowing down your programs, that is ;-))
2006-05-26 11:29:44> <jbenedik> heh, it is an internal api and i think i'd prefer not right now. if you want to generate some test message structures (for exercising netstring parsing and struct), thats fine
2006-05-26 11:30:10> <blais> noworries
2006-05-26 11:31:58> <blais> bob: it's still the dict lookups from getattr that kill it
2006-05-26 11:34:28> <blais> whoa
2006-05-26 11:34:37> <blais> caching the methods before the loop results in a big improvment
2006-05-26 11:37:41> <etrepum> blais: what is mark_position and mark_limit?
2006-05-26 11:37:54> <blais> the saved position and limit
2006-05-26 11:38:24> <jafo> You mean the marked position and limit.
2006-05-26 11:38:27> <etrepum> blais: should those even be exposed? should it be a 2-tuple instead?
2006-05-26 11:40:16> *** runarp has quit IRC
2006-05-26 11:47:19> <etrepum> blais: ok I rewrote the tp_members as tp_getset.. so buf.position = n should work
2006-05-26 11:47:38> <etrepum> blais: currently untested, I'm going to run through the code and look for ssize_t errors first.. there were some
2006-05-26 11:54:28> <blais> bobby: just commit when it's neat
2006-05-26 11:54:53> <blais> pos+limit: I suppose I don't need to expose them indeed
2006-05-26 11:55:45> *** stakkars has quit IRC
2006-05-26 11:59:26> *** kristjan_ has quit IRC
2006-05-26 11:59:26> *** kristjan has quit IRC
2006-05-26 12:01:33> <blais> bob: your changes don't help the latest version, I don't set members anymore, just function calls (cached)
2006-05-26 12:01:38> <blais> there are still too many funcalls
2006-05-26 12:02:02> <blais> maybe I should implement some iteration protocol that works with the kind of length + msg format that EWT has
2006-05-26 12:19:51> *** runarp has joined #nfs
2006-05-26 12:27:47> *** blais has quit IRC
2006-05-26 13:12:10> *** rower has joined #nfs
2006-05-26 13:12:26> *** rower has left #nfs
2006-05-26 13:13:51> *** rower has joined #nfs
2006-05-26 13:19:56> *** holdenweb has quit IRC
2006-05-26 13:43:45> <amk_> /leave #nfs
2006-05-26 13:43:51> *** amk_ has left #nfs
2006-05-26 13:43:51> <jafo> Toodles
2006-05-26 14:21:25> *** jack_diederich has left #nfs
2006-05-26 14:25:40> *** rxe has left #nfs
2006-05-26 14:33:39> *** etrepum has quit IRC
2006-05-26 14:33:57> *** jbenedik has quit IRC
2006-05-26 14:54:42> *** runarp has quit IRC
2006-05-26 17:00:50> *** efm has quit IRC
2006-05-27 03:41:57> *** jack_diederich has joined #nfs
2006-05-27 03:52:37> *** stakkars has joined #nfs
2006-05-27 03:56:25> *** holdenweb has joined #NFS
2006-05-27 04:01:21> *** runarp has joined #nfs
2006-05-27 04:03:12> *** runarp_ has joined #nfs
2006-05-27 04:18:48> *** jbenedik has joined #nfs
2006-05-27 04:21:04> *** runarp has quit IRC
2006-05-27 04:59:25> *** etrepum has joined #nfs
2006-05-27 06:55:10> <etrepum> s = s[:length - 1]
2006-05-27 06:55:11> <etrepum> return s + '\x00' * (length - len(s))
2006-05-27 07:00:30> *** jbenedik has quit IRC
2006-05-27 07:10:36> *** runarp_ has quit IRC
2006-05-27 07:20:31> *** etrepum_ has joined #nfs
2006-05-27 07:20:47> *** etrepum has quit IRC
2006-05-27 07:35:45> *** stakkars has quit IRC
2006-05-27 07:36:10> *** stakkars has joined #nfs
2006-05-27 07:49:40> *** runarp has joined #nfs
2006-05-27 08:26:29> *** holdenweb_ has joined #NFS
2006-05-27 08:29:44> *** jbenedik has joined #nfs
2006-05-27 08:35:26> *** holdenweb has quit IRC
2006-05-27 08:45:17> <etrepum> cc: Info: ../Objects/exceptions.c, line 384: Extraneous semicolon. (extrasemi)
2006-05-27 08:45:24> <etrepum> cc: Warning: ../Modules/posixmodule.c, line 5451: In this statement, the referenced type of the pointer value "&status" is "int", which is not compatible with "union wait". (ptrmismatch)
2006-05-27 09:10:55> <etrepum> Program received signal SIGFPE, Arithmetic exception.
2006-05-27 09:10:55> <etrepum> 0x0000000160418568 in bu_double (p=0x12049d29c "", f=0x0) at /house/etrepum/src/python-46462/Modules/_struct.c:219
2006-05-27 09:10:56> <etrepum> 219 if (x == -1.0 && PyErr_Occurred())
2006-05-27 09:46:31> *** mwh has joined #nfs
2006-05-27 09:48:11> *** jbenedik has quit IRC
2006-05-27 09:49:14> *** jbenedik has joined #nfs
2006-05-27 09:51:15> <mwh> etrepum: re bug 1496032
2006-05-27 09:51:27> <mwh> i take it freebsd alpha starts up with fpu traps enabled?
2006-05-27 09:51:52> <etrepum> I have no idea, first time I've touched one
2006-05-27 09:52:18> <mwh> i see
2006-05-27 09:52:28> <mwh> do you have an account on one then?
2006-05-27 10:02:27> <etrepum> yeah I got a HP testdrive account so I could fix my damn regressions
2006-05-27 10:02:39> <etrepum> I couldn't find another way to get quick access to a 64-bit platform
2006-05-27 10:02:40> <mwh> oh right
2006-05-27 10:02:46> <mwh> i had one of them once
2006-05-27 10:04:20> <mwh> i wonder if my password still works
2006-05-27 10:07:13> <mwh> huh, apparently
2006-05-27 10:09:01> <mwh> or not
2006-05-27 10:11:59> * mwh hunts for the 'reset password' button
2006-05-27 10:14:40> <etrepum> SIGFPE looks fun
2006-05-27 10:15:55> <mwh> i can tell you about it on darwin/ppc :)
2006-05-27 10:18:48> <mwh> etrepum: look at Modules/main.c
2006-05-27 10:19:01> <mwh> maybe printf the result of fpgetmask?
2006-05-27 10:23:41> <etrepum> FreeBSD td149.testdrive.hp.com 6.0-RELEASE FreeBSD 6.0-RELEASE #0: Thu Nov 3 01:10:43 UTC 2005 root@ds10.freebie.xs4all.nl:/usr/obj/usr/src/sys/GENERIC alpha
2006-05-27 10:23:41> <etrepum> fpgetmask() = 0
2006-05-27 10:24:29> <mwh> bleh
2006-05-27 10:25:05> <etrepum> 0x000000016025a568 in bu_double (p=0x1203be14c "", f=0x0) at /house/etrepum/src/python-46462/Modules/_struct.c:219
2006-05-27 10:25:06> <etrepum> 219 if (x == -1.0 && PyErr_Occurred())
2006-05-27 10:25:06> <etrepum> (gdb) p/x fpgetmask()
2006-05-27 10:25:06> <etrepum> $1 = 0x0
2006-05-27 10:25:06> <mwh> what happens if you type 1/1e-310 interactively?
2006-05-27 10:25:27> <etrepum> >>> 1/1e-310
2006-05-27 10:25:27> <etrepum> Floating exception (core dumped)
2006-05-27 10:25:38> <mwh> right
2006-05-27 10:25:42> <etrepum> Program received signal SIGFPE, Arithmetic exception.
2006-05-27 10:25:42> <etrepum> 0x000000012005c28c in _Py_HashDouble (v=5.5511151231257827e-17) at ../Objects/object.c:995
2006-05-27 10:25:42> <etrepum> 995 if (fractpart == 0.0) {
2006-05-27 10:25:47> <mwh> so fpgetmask() is lying, it would seem...
2006-05-27 10:25:51> <etrepum> sweet
2006-05-27 10:26:05> <mwh> gdb too, if it really thinks it crashed on that line...
2006-05-27 10:26:19> <etrepum> well it wasn't a debug build
2006-05-27 10:26:49> <mwh> oh right
2006-05-27 10:27:03> <etrepum> does --with-pydebug imply -O0 ?
2006-05-27 10:27:07> <mwh> that line is _PyHash_Double though, very odd
2006-05-27 10:27:09> <mwh> yes it does
2006-05-27 10:27:47> <etrepum> why would it be hashing a double to do division?
2006-05-27 10:28:21> <mwh> i doubt it is
2006-05-27 10:29:01> <etrepum> 1e-310 by itself dumps core
2006-05-27 10:29:16> <mwh> urk
2006-05-27 10:29:40> <mwh> it's a denorm, i guess Underflow is getting signalled
2006-05-27 10:29:45> <etrepum> core dump at the same place with gdb --args ./python -c '1e-310'
2006-05-27 10:30:11> <etrepum> IEEEtastic
2006-05-27 10:30:19> <mwh> what does the stack look like?
2006-05-27 10:30:32> <mwh> ieee mandates starting up in non-stop mode, not their fault here :)
2006-05-27 10:31:13> <etrepum> oh! it's in the compiler
2006-05-27 10:31:24> <mwh> ah haha
2006-05-27 10:31:29> <etrepum>
2006-05-27 10:31:30> <mwh> yes, for co_consts
2006-05-27 10:31:32> <etrepum> yup
2006-05-27 10:31:41> <mwh> try 1e-308/20 then :)
2006-05-27 10:31:51> <mwh> i think 1e-308 should be normalized
2006-05-27 10:32:19> <etrepum> 1e-308 dumps
2006-05-27 10:32:32> *** jbenedik has quit IRC
2006-05-27 10:32:45> *** jack_diederich has left #nfs
2006-05-27 10:32:47> <mwh> odd
2006-05-27 10:32:49> <etrepum> 1e-307 doesn't dump
2006-05-27 10:32:56> <mwh> oh right
2006-05-27 10:33:03> <mwh> 1e-307/1e10 ?
2006-05-27 10:33:14> <etrepum> >>> 1e-307/1e10
2006-05-27 10:33:15> <etrepum> 0.0
2006-05-27 10:33:18> *** jbenedik has joined #nfs
2006-05-27 10:33:33> <mwh> huh
2006-05-27 10:33:39> <etrepum> >>> 1e-307/1e-307
2006-05-27 10:33:39> <etrepum> 1.0
2006-05-27 10:33:39> <etrepum> >>> 1e-307/1e-306
2006-05-27 10:33:39> <etrepum> 0.099999999999999992
2006-05-27 10:33:42> <mwh> >>> 1e-307/1e10
2006-05-27 10:33:42> <mwh> 1.0000002306925374e-317
2006-05-27 10:33:54> <etrepum> >>> 1e-307/1e1000
2006-05-27 10:33:55> <etrepum> Floating exception (core dumped)
2006-05-27 10:34:05> <mwh> maybe it would be better to not try to pretend the alpha does ieee arithmetic
2006-05-27 10:34:15> <mwh> etrepum: i bet 1e1000 on its own will do that
2006-05-27 10:34:16> <etrepum> haha
2006-05-27 10:34:37> <mwh> are you compiling with -mieee ?
2006-05-27 10:34:48> <mwh> i forget if that's relevant any more
2006-05-27 10:35:31> <etrepum> I didn't specify anything special
2006-05-27 10:36:09> <etrepum> and I don't see anything special in the compile..
2006-05-27 10:36:09> <etrepum> gcc -pthread -fPIC -fno-strict-aliasing -g -Wall -Wstrict-prototypes -I. -I/house/etrepum/src/python-46462/./Include -I../Include -I. -I/usr/local/include -I/house/etrepum/src/python-46462/Include -I/house/etrepum/src/python-46462/_freebsd_debug -c /house/etrepum/src/python-46462/Modules/_struct.c -o build/temp.freebsd-6.0-RELEASE-alpha-2.5/house/etrepum/src/python-46462/Modules/_struct.o
2006-05-27 10:36:16> <mwh> harum
2006-05-27 10:36:30> <mwh> googling is suggesting kernel bugs...
2006-05-27 10:37:52> <mwh> how long do builds take on this machine?
2006-05-27 10:37:55> <etrepum> a while
2006-05-27 10:38:04> <etrepum> like 30 min maybe
2006-05-27 10:38:16> <mwh> cause i think adding -mieee to CFLAGS might help after all
2006-05-27 10:38:18> <mwh>
2006-05-27 10:38:25> <mwh> that's from 2002 though
2006-05-27 10:38:49> *** jbenedik has quit IRC
2006-05-27 10:39:02> <mwh> hum
2006-05-27 10:39:06> <mwh> oh i don't know
2006-05-27 10:39:15> <mwh> who actually cares about freebsd/alpha? :)
2006-05-27 10:40:17> <etrepum> I don't
2006-05-27 10:40:43> <mwh> me neither
2006-05-27 10:40:57> <mwh> i'll stop worrying about it then, i think
2006-05-27 10:48:08> *** runarp has quit IRC
2006-05-27 10:51:41> *** holdenweb_ has left #nfs
2006-05-27 10:52:00> <etrepum> there's probably some compiler flags that'll make it work
2006-05-27 10:52:20> <etrepum> apparently alpha just doesn't do denorm stuff and you have to deal with it in software
2006-05-27 10:53:00> *** etrepum has quit IRC
2006-05-27 11:16:21> *** stakkars has quit IRC
2006-05-27 11:42:35> *** ruied has joined #nfs
2006-05-27 11:42:39> *** ruied has left #nfs
2006-05-27 12:46:18> *** ccpRichard has quit IRC
2006-05-27 15:55:54> <mwh> so something caused a LOT of leaks...
2006-05-27 15:56:15> <mwh> i'm guessing it's exception related...
2006-05-27 16:41:09> <Yhg1s> mwh: not trivially reproduced, though.
2006-05-27 16:41:24> <Yhg1s> simple exception tossing doesn't reproduce it, but running the same simple test through doctest does.
2006-05-27 16:54:40> <mwh> i found that test.test_support.check_syntax reliably leaks 5 references
2006-05-27 16:55:23> <mwh> as does compile('1=1', '', 'exec') in fact
2006-05-27 16:58:24> <mwh> it's leaking a tuple containing two Nones a string and an int, i think
2006-05-27 17:00:19> <mwh> oh well, i guess sean and richard know what they changed...
2006-05-27 20:29:24> *** mwh_ has joined #nfs
2006-05-27 20:40:09> *** mwh has quit IRC
2006-05-28 02:58:05> *** mwh has quit IRC
2006-05-28 03:04:18> <jafo> NfS article I wrote is up on lwn.net
2006-05-28 05:57:38> *** xorAxAx has joined #nfs
2006-05-28 07:50:45> *** mwh has joined #nfs
2006-05-28 18:36:14> *** JZA has joined #nfs
2006-05-28 18:36:21> <JZA> hi anyone help me share some folders
2006-05-28 18:37:39> <JZA> anyone here
2006-05-28 23:46:46> *** JZA has left #nfs
2006-05-29 02:25:38> *** runarp has joined #nfs
2006-05-29 02:28:39> <runarp> made it back to LA. Thanks everyone for a great sprint.
2006-05-29 02:48:39> *** stakkars has joined #nfs
2006-05-29 03:00:41> *** runarp has quit IRC
2006-05-29 03:24:21> *** stakkars has left #nfs
2006-05-29 09:43:11> *** stakkars has joined #nfs
2006-05-29 09:43:34> <stakkars> hi! this thing is going to be long-lived?
2006-05-29 09:49:18> <mwh> i would guess not
2006-05-29 09:49:26> <mwh> i'm just a loiterer :)
2006-05-29 10:18:49> *** runarp has joined #nfs
2006-05-29 10:20:19> *** runarp has left #nfs
2006-05-29 21:26:57> *** goffa has joined #nfs
2006-05-29 21:27:20> *** goffa has left #nfs
2006-05-30 01:43:37> <jafo> Well, nfsbot and I are going to vacate the channel. Thanks for a great time everyone.
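The struct.Struct class that amk_ asks about in the log did become public API in Python 2.5. As a sketch (modern syntax, not taken from the log itself), precompiling a format object this way parses the format string once instead of on every pack/unpack call:

```python
import struct

# Precompile the format once, as discussed above; the format string
# ">HI" (big-endian unsigned short + unsigned int) is parsed a single time.
header = struct.Struct(">HI")

packed = header.pack(7, 1024)
assert header.size == 6          # 2 bytes + 4 bytes, no padding

kind, length = header.unpack(packed)
print(kind, length)              # -> 7 1024
```

The same round trip is also available through the module-level struct.pack/struct.unpack functions; the Struct object simply caches the compiled format.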
http://wiki.python.org/moin/NeedForSpeed/IRCLog
CC-MAIN-2013-20
refinedweb
14,152
81.67
Floating-point or real numbers are used when evaluating expressions that require fractional precision. For instance, calculations such as square root, or transcendentals such as sine and cosine, result in a value whose precision requires the floating-point type. Java implements the standard (IEEE 754) set of floating-point types and operators. There are two floating-point types, float and double, which represent single-precision and double-precision numbers, respectively. The following table shows the width and range of the floating-point types:

Name     Width in bits     Approximate range
double   64                4.9e-324 to 1.8e+308
float    32                1.4e-45 to 3.4e+38

Now, let's take a look at these (double and float) floating-point types one by one.

The type float specifies a single-precision value that uses 32 bits of storage. Single precision is faster on some processors and takes half as much space as double precision, but it becomes inaccurate when the values are either very large or very small. Variables of type float are useful when you need a fractional component but do not require a large degree of precision. For instance, float can be useful when representing dollars and cents. Below is an example of float variable declarations:

float hightemp, lowtemp, rad;

Double precision is denoted by the keyword double, which uses 64 bits to store a value. Double precision is actually faster than single precision on several modern processors that have been optimized for high-speed mathematical calculations. All transcendental math functions, such as sin(), cos(), and sqrt(), return double values. When you need to maintain accuracy over many repetitive calculations, or are manipulating large-valued numbers, double is the best choice.
Following is a short and simple program which uses double variables to compute the area of a circle:

/* Java Program Example - Java Floating-Point Types
 * Compute the area of a circle
 */
public class JavaProgram {
    public static void main(String args[]) {
        double pi, r, a;

        r = 10.8;       // radius of the circle
        pi = 3.1416;    // approximate value of pi
        a = pi * r * r; // compute area of the circle

        System.out.println("Area of Circle is " + a);
    }
}

When the above Java program is compiled and run, it prints the computed area (approximately 366.436224).
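The precision difference between the two types can be seen directly. The following short program is an illustration of my own (not one of the original article's examples) comparing how the same fraction, 1/3, is stored at single and double precision:

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        // The same fraction, stored at single and double precision.
        float f = 1.0f / 3.0f;   // about 7 significant decimal digits
        double d = 1.0 / 3.0;    // about 15-16 significant decimal digits

        System.out.println("float:  " + f);   // prints 0.33333334
        System.out.println("double: " + d);   // prints 0.3333333333333333
    }
}
```

The float result already rounds off after eight digits, which is exactly the kind of drift the article warns about when accuracy must be maintained over many repeated calculations.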
https://codescracker.com/java/java-floating-point-types.htm
11 May 2011 16:11 [Source: ICIS news]

LONDON (ICIS)--Petrochemical companies which use the river Rhine in Germany face rising transport costs as water levels drop to a record low for May.

According to Florian Krekel, a spokesman for river authority Wasser- und Schifffahrtsamt Bingen, water levels on the Rhine have fallen unusually early this year.

"We have an extremely dry springtime this year – it is unusual," Krekel said.

Petrochemical companies may not have seen the worst yet, however, as Krekel added: "We expect water levels to drop further."

Krekel said the water level in the Rhine was recorded at 71cm at Kaub.

"We have seen lower water levels but usually in autumn. The lowest water level we had ever was 35cm in August/September 2003," he added.

Krekel added that larger ships are currently unable to carry product at full capacity, with some unable to carry half their usual load limit.

"The efficiency of the transport is decreasing. The specific costs per tonne are rising," he said.

"We still have a lot of traffic but we will see what happens with further dropping water levels," Krekel added.
http://www.icis.com/Articles/2011/05/11/9459022/river-rhine-water-level-drops-to-record-may-low.html
Putting a Draft.js WYSIWYG Editor Into Your React Project

In my last project, Marketing MGMT, I wanted to enable users to write comments that could be bold or in a list format, and maybe even add some emojis. Knowing I wanted to do this, I knew that using a standard text area or text input field wouldn't meet this demand. I knew I wanted to use a WYSIWYG editor and started to search the internet for something that would work within React. In the end, I stumbled across a library called React Draft Wysiwyg. I used this library to allow users to create comments on the front end, but along the way I ran into some difficulties, so in this blog post I am going to go through a step-by-step process to add this WYSIWYG editor to your project.

Little Background on How Draft.js Works

Draft.js was created by Facebook, and it came about after their team was having difficulty implementing all the functionality they wanted bug free. There is a great video about the creation of Draft.js if you want to look into it. Draft.js allows for more functionality in its text form fields and creates what is called a RichText editor. It does this by representing the text as JSON objects instead of a string. Draft.js groups together text that has the same styling, and once the text editor changes to bold or a list, it creates a new key within the JSON object, and the text written in that format is saved there. For a more comprehensible breakdown of how Draft.js breaks down text, see this blog post.

Inserting Draft.js and the WYSIWYG Editor Into Your Site

To get the WYSIWYG editor, a few libraries will need to be installed. To do this, run npm install in the CLI of the React application folder. Draft.js needs to be installed along with the WYSIWYG editor, so both of these commands need to run:

npm install draft-js --save
npm install react-draft-wysiwyg --save
After these libraries have loaded, you will need to add a few imports to the top of the file you are using a Draft.js form field in. These imports are:

import { EditorState, convertFromRaw, convertToRaw } from 'draft-js';
import { Editor } from 'react-draft-wysiwyg';
import 'react-draft-wysiwyg/dist/react-draft-wysiwyg.css';

There is a lot going on in these imports, but to break it down:

- EditorState is the Draft.js text editor JSON object.
- convertFromRaw and convertToRaw are functions we are going to use to convert entered data into or out of JSON.
- Editor is the WYSIWYG editor that we are going to be using. This Editor has built-in logic that uses the EditorState from draft-js.
- Last is the CSS styling of the text editor.

From there, we need to set up the React component's state that will utilize EditorState. This will record the data being entered into the Editor.

state = {
  editorState: EditorState.createEmpty()
}

Then we need to add the Editor to the markup that the component is rendering, with the code below:

<Editor
  editorState={this.state.editorState}
  wrapperClassName="demo-wrapper"
  editorClassName="editer-content"
  onEditorStateChange={this.onChange}
/>

As you can see, the editorState is being utilized here. We also need an onChange function to update the editorState when something is entered into the editor:

onChange = (editorState) => this.setState({editorState});

Handling the Data Submission

Now that the form is functional, we need to handle the form submission. When users type into the form, the state will change, but it does so within the EditorState object. In order to store this data on the backend, we need to change the EditorState information to JSON. This is where the convertToRaw function comes in.
Use the function below to get the data back as JSON:

convertToRaw(this.state.editorState.getCurrentContent())

From here you will just need to do a simple fetch POST request to your backend. To see the full form file, take a look at this page.

Rendering the Data Submitted

In my project, the comment that was created via Draft.js now needs to be displayed on the page. In order to do this, we are going to need to use the convertFromRaw function along with a library that will help us change the EditorState content into HTML. I used a library called draft-js-export-html, so this needs to be installed as well:

npm install draft-js-export-html

Then, in the file where you are rendering the Draft.js content, you will need these imports:

import { stateToHTML } from 'draft-js-export-html';
import { convertFromRaw } from 'draft-js';

From there, you will need to translate the data being fetched from the backend into HTML. To do this, I created the function below:

convertCommentFromJSONToHTML = (text) => {
  return stateToHTML(convertFromRaw(JSON.parse(text)))
}

This changes the data into an HTML string. From there, I am putting the data into the div attribute that React names "dangerouslySetInnerHTML".

<div id="comment-div">
  <div dangerouslySetInnerHTML={{ __html: this.convertCommentFromJSONToHTML(this.props.comment.content) }}>
  </div>
</div>

Then this is what the output looks like:

The name dangerouslySetInnerHTML was not something I was completely comfortable having in my code. But if my comments will always be handled by the Draft.js text editor, then I don't have anything to be worried about: my data will always be formatted in the correct way for this div. Once I change the way that comment data is being formatted, I will then have an issue.

That about wraps it up. Reach out if you have any questions on doing any of this.
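To give a feel for what convertToRaw actually produces, here is a small plain-JavaScript sketch of my own (the object shape mirrors Draft.js's raw content format of blocks plus an entityMap; the helper function and the sample comment are invented for illustration, not from the article). It pulls the plain text back out of a comment stored on the backend as JSON:

```javascript
// A simplified raw content object, shaped like what convertToRaw returns:
// an array of blocks, each carrying its text plus type/style metadata.
const rawComment = {
  blocks: [
    { text: "This launch looks great!", type: "unstyled",
      inlineStyleRanges: [{ offset: 5, length: 6, style: "BOLD" }] },
    { text: "Ship it Monday", type: "unordered-list-item",
      inlineStyleRanges: [] },
  ],
  entityMap: {},
};

// Hypothetical helper: recover the plain text of a stored comment.
function plainTextFromRaw(json) {
  const raw = JSON.parse(json);
  return raw.blocks.map((block) => block.text).join("\n");
}

const stored = JSON.stringify(rawComment);
console.log(plainTextFromRaw(stored));
```

This also shows why the styling survives a round trip to the backend: the bold range lives in inlineStyleRanges as offsets into the block text, so serializing to JSON and back loses nothing.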
https://medium.com/@rjmascolo/putting-a-draft-js-wysiwyg-editor-into-your-react-project-8f82493a742
How to Upgrade a Sencha Touch App to Ext JS 6 Modern Toolkit – Part 1

Previously, I wrote a blog post on how to create great-looking universal apps with Ext JS. However, we also have lots of customers who currently have a mobile (phone or tablet) application and want to upgrade it to Ext JS 6. In this tutorial, I will show you how you can upgrade your app, and why you should consider taking this step.

I used my existing tutorial files, the "Do I need my umbrella?" weather application, which I wrote a couple of years ago with Sencha Touch 2. You can find the original tutorial here. You can download the tutorial files here. You don't have to use these tutorial files; you can also just read through this guide and try it out with your own existing Sencha Touch 2 app.

Ext JS 6 Modern Toolkit and Sencha Touch

Ext JS has more (advanced) classes and features than Sencha Touch. You can create advanced enterprise desktop applications, and now you can also create advanced mobile applications or even advanced cross-platform apps. We incorporated concepts from Sencha Touch 2 and merged them into Ext JS 5 as "the modern toolkit", with the modern core (class system, MVVM pattern, etc.) and many updated classes. From a theming perspective, the Ext JS 6 modern toolkit has been updated and looks different than Sencha Touch. When you're looking for an enterprise solution to create mobile apps, whether it's a universal app or just mobile, there are many reasons why you'd choose the Ext JS 6 Modern toolkit. I will explain these benefits to you in this article. Then, I will take an example Sencha Touch 2 application and migrate it to Ext JS 6 with the Ext JS 6 Modern toolkit.

What's Different in Ext JS 6 Modern Toolkit

Here's an overview of new features in Ext JS 6 compared to Sencha Touch.
Basic Upgrade (No change to the MVVM pattern)

This upgrade allows you to use:

- the latest mobile framework version, and support for the latest OS & browser versions
- running your mobile application on your desktop computer too
- controlling lists with your mouse scroll and keyboard (besides touch support)
- the new packages / theme packages structure
- the new Neptune and Triton (universal) themes
- fast theme compilation with Fashion
- cleaning up your models, by writing less code
- JavaScript promises, for asynchronous code
- out-of-the-box Font Awesome integration
- one of the new components/classes:
  - data grid
  - data tree
  - navigation tree list
  - SOAP, AMF0, AMF3 proxies
  - new charts
  - form placeholders

Advanced Upgrade (Change to MVVM architecture pattern)

This upgrade allows you to use:

- ViewModels and ViewControllers
- Databinding – bind to data or component states. It allows you to do advanced things by writing less code.

Universal Upgrade

This upgrade allows you to:

- Create cross-platform apps, for mobile phones and tablets, but also desktop computers, by supporting both the modern (lightweight) component set and the classic rich desktop component set.
- Support legacy browsers, like Internet Explorer 8, as well as the latest modern (mobile) browsers.

Things That Changed in the API

You can read a complete list of Ext JS 6 API changes here. The ones that I faced when upgrading the weather utility app are:

- Sencha Touch has Ext.app.Controller.launch() methods; in the Ext JS 6 Modern toolkit, it's Ext.app.Controller.onLaunch().
- In Sencha Touch, you had to define everything in a config block; in the Ext JS 6 Modern toolkit, you just put properties in config blocks that need the magic methods (get, set, apply, and update). Although you don't have to, you can clean up the config blocks.
- There are changes in the way you wire up stores, which you can read about in these docs:
  - Ext JS 6 Ext.app.Controller-cfg-stores
  - Sencha Touch 2.4.2 Ext.app.Controller-cfg-stores
- Also, in Ext JS 6, Stores don't automatically set the storeId to the name of the class.
- Sencha Touch validations are now called validators.
- The Sencha Touch default theme was replaced by the Ext JS 6 Modern toolkit themes – Neptune and Triton.
- The default icon set that is being used is Font Awesome, instead of Pictos.

Basic Mobile Upgrade

For the basic, easy upgrade, we will stick with the existing MVC pattern. You will see that it won't take many steps. However, you won't be taking full advantage of Ext JS 6: you will have the latest framework, with all its features and classes, but you won't be using the new MVVM pattern.

1. Download Ext JS 6 (trial version).

2. Look in your Sencha Touch project (app.js, for example), and note the namespace that was used. For example, for the Weather Application, the namespace is "Dinmu".

3. Generate an Ext JS 6 modern app. Navigate to the ext framework folder, and generate a project with sencha generate app -modern. For example:

ext> sencha generate app -modern Dinmu ../dinmu1

4. Go to the project in your browser; you should see the new Ext JS 6 demo app.

5. In your file system, rename the <myproject>/app folder to something else (like app-backup).

6. Do the same for the app.js file; rename it to app-backup.js.

7. Then, copy the app folder and the app.js from your Sencha Touch project, and paste them into your new Ext JS 6 project. In case you are loading external JS or CSS files via app.json, you can manually merge those lines into the new app.json. My advice is to do these kinds of steps at the end, after you have your app running.

8. Run the following command from the command line:

sencha app refresh

- You might run into build errors here, because of API changes. For the Dinmu app, there was an error because Ext.device.Geolocation has been deprecated.
- When you have lots of custom components, you may run into problems here. The best way to solve them is to read the messages from the CLI, and open the Modern toolkit API docs to search for the classes that fail. In my case, it was the geolocation class that failed. In the docs, I noticed that there are no device APIs anymore. In Sencha Touch, these classes were wrappers for PhoneGap/Cordova support that would fall back to the HTML5 API feature, if available in the browser. There is Ext.util.Geolocation, so I changed the code to use it. After I changed the line, I ran sencha app refresh again to check for more errors. See the results here.

9. When you don't have any more errors, you can try to run the application in the browser. When I ran my app, I got a console error in my app.js launch method:

Ext.fly('appLoadingIndicator').destroy();

Basically, this is an error that tells you that you can't destroy the "appLoadingIndicator" element, simply because it's not there. The index.html is just different. You don't want to replace the index.html file with the Sencha Touch one, because the calls to the microloader are different. It's up to you whether you remove this destroy line in the app.js launch method, or take over the <style> and <body> tags from the Sencha Touch app. I liked Sencha Touch's simple CSS preloader, which you will see before loading any JS or CSS, so that's why I took over those tags. After fixing this problem, I was able to open my Ext JS 6 app in the browser.

10. The application was running a bit oddly. By inspecting my application, I noticed that my Sencha Touch application has controllers with launch methods. Launch methods on controllers don't exist in Ext JS 6; instead they're called onLaunch. So I renamed them, and ran the application again.

11. This time I had a problem with the store.
The store manager couldn’t find Ext.getStore('Settings'), because it was wired up to the controller like this: Dinmu.store.Settings. Instead, the store manager has to access it via the full class name. I fixed it in the controller, instead of wiring up the full path, and I just passed in the Store name. 12. The settings button was not visible; this was due the changes in the icon sets. I used the default font-awesome settings icon, and changed the iconCIs in the Settings button in Main.js to: x-fa fa-cog 13. Last step was to run a build to make sure that I was able to build my application. I expected it to work, because the sencha app refresh command did not fail. And that’s it. After this last step, I was able to run the Weather Application as a full working Ext JS 6 mobile app. Coming Up Next In the next article in this series, I’ll show you how to do the advanced upgrade, where we will switch to the new MVVM pattern, and we can also clean up some code.
https://www.leeboonstra.com/developer/how-to-upgrade-a-sencha-touch-app-to-ext-js-6-modern-toolkit-part-1/
Adding physics to your app
In this section, you will learn how to set up the physics calculations that run in the background of your app. The data we receive from these calculations allows your hero to navigate and interact with the game level appropriately. Cocos2d-html5 allows for the standard update method, where those calculations are scheduled for every animation frame. We want to get smoother results in this app, so we're going to combine two important ideas instead: Box2DWeb and web workers. Box2DWeb is a physics engine that handles concepts such as gravity, collisions, rotation, and so on, so that we don't need to figure them out. A web worker is a script you can use to run your physics calculations in an application thread separate from the main thread, where all of your UI elements are rendered. By separating out this mathematical computation, we can avoid having the physics impact frame rates and the responsiveness of the app. Instead of performing calculations once for every animation frame, the web worker lets your main application thread get the most up-to-date calculations available and use those. This section is a bit more involved, so let's get started. First, we need to set up a skeleton web worker and populate it with the physics engine. We can then use the physics engine data to update your scene.
Set up Box2DWeb and a web worker
Unzip the Box2DWeb framework you downloaded earlier. Rename the JavaScript file Box2D.js and place it in your js folder. In the same folder, create a new file and name it Box2dWebWorker.js. Your main implementation will go in this file, but before we can give it any contents, we need to go back and modify SceneStart.js.
Create a global namespace
Open SceneStart.js in your text editor. At the top of the file, create a global namespace. The _g indicates that the namespace is global. Create a variable with the arbitrary name LayerStart.
We will use it to reference your root layer later, when we listen for messages from the web worker. var _g = { LayerStart: null }; Also add the placeholder variable physics for your web worker, above the variables already in place for background, hero, and coins. physics: null, We want to initialize the global namespace as early in the code as possible. In this case, we place it a few lines down, immediately below where we call the super function. _g.LayerStart = this; Create a web worker Now that the preliminary initialization is complete, we can create a web worker to send and receive messages. We assign the placeholder variable we added earlier to be a new web worker. The finished Box2dWebWorker.js file initializes that web worker, so you can send messages from SceneStart.js to Box2dWebWorker.js. Locate the tmx reference to 0-0.xml that we added earlier, and below it, add the following: this.physics = new Worker('./js/Box2dWebWorker.js'); Next, we initialize the web worker so that it can receive messages. This initialization allows you to send messages from Box2dWebWorker.js to SceneStart.js, permitting two-way communication between your main application thread and the web worker. Note that we can transmit variables such as strings, numbers, and complex objects, but not functions. this.physics.addEventListener('message', function (e) { }); Now that the web worker is ready, we send a message to Box2dWebWorker.js to indicate that we want it to initialize itself, and we provide some values. We create four variables with the names msg, walls, coins, and portals. To send and receive messages, we use the msg variable to indicate the action we want to take. The remaining variables are object sets from your tile map that we will use to construct your physics world. 
this.physics.postMessage({ msg: 'init', walls: tmx.getObjectGroup('walls').getObjects(), coins: tmx.getObjectGroup('coins').getObjects(), portals: tmx.getObjectGroup('portals').getObjects() }); Schedule an update function We now need to set up a scheduled update function to send information about user input to the web worker, so that we can create physics forces on your hero. Locate the code we used to add portals and coins sprites earlier. Below the code that loads the finish portal, but above the return true;, add the following: this.schedule(this.update); Below the return true;, we need to place a comma after the existing } to separate the previous ctor function from our new update function. }, update: function () { } Save the file. Implement the web worker For the changes in SceneStart.js to work, we need to implement the web worker in Box2dWebWorker.js. Open the empty file in your text editor. First, we import an external script. In this case, we use the Box2D.js script, which contains all of the physics functionality we need. importScripts('./Box2D.js'); Next, we add an event listener, which is a required function for almost all web workers, so that the file can receive incoming messages. As we saw earlier, we rely on a msg variable to indicate our actions. We’re already familiar with the 'init' message from when we set it in SceneStart.js. We will set the 'ApplyImpulse' message in that file later, to provide user input data on impulse from the main application thread. In this file, self essentially means this. self.addEventListener('message', function (e) { if (e.data.msg === 'ApplyImpulse') { self.hero.j = e.data.j; } else if (e.data.msg === 'init') { self.init(e.data); } }); We add the init function, which accepts objects that are passed in from the message. In this case, the objects are the walls, coins, and portals. We also define four placeholder variables for fixtures, bodies, individual objects, and a counter. 
We can apply the fixtureDef class to an object to define its physics properties, such as density, friction, and restitution. We can apply the bodyDef class to an object to set its position and velocity, and apply impulse. self.init = function (objects) { var fixtureDef, bodyDef, object, n; Next, we set some global world values. We create the world and provide a 2-D vector that describes the direction of gravity. We also set a global scale value. This value is important because the physics world exists on a smaller scale than our screen dimensions, and in this case it's 32 times smaller. If the physics world was not scaled down, we would have massive objects and the physics might not behave how we want them to. For most apps, a global scale value around 30 will be ideal, but you may need to adjust this. We also create an empty array, where we can store objects we want the app to remove from the scene. In this case, coins are removed when the hero touches them. We don't want to remove an object in the middle of a physics calculation, and this array keeps a queue of items to remove between calculations. self.world = new Box2D.Dynamics.b2World( new Box2D.Common.Math.b2Vec2(0.0, 24.0), true ); self.world.scale = 32.0; self.remove = []; Now we set some global physics properties for fixtures. Note that setting the friction to 0.0 prevents your hero from sticking to walls. It also causes the hero to slide continually along surfaces, which we'll correct later. The restitution is a bounce value, which we also set to 0.0 because we don’t want the hero to bounce when it hits a wall or surface. Finally, we create a new body variable, which we will reuse to generate physics objects. fixtureDef = new Box2D.Dynamics.b2FixtureDef(); fixtureDef.density = 1.0; fixtureDef.friction = 0.0; fixtureDef.restitution = 0.0; bodyDef = new Box2D.Dynamics.b2BodyDef(); The first physics objects we generate are walls. 
We cycle through the tile map objects and, based on the dimensions and position of each object, we create a corresponding object in the physics world. By using the b2_staticBody body type, we indicate that these objects are not affected by forces, such as gravity, and remain stationary. We also assign a rectangular shape.

for (n = 0; n < objects.walls.length; n = n + 1) {
    object = objects.walls[n];
    bodyDef.type = Box2D.Dynamics.b2Body.b2_staticBody;
    bodyDef.position.x = (object.x + object.width / 2.0) / self.world.scale;
    bodyDef.position.y = -(object.y + object.height / 2.0) / self.world.scale;
    fixtureDef.shape = new Box2D.Collision.Shapes.b2PolygonShape();
    fixtureDef.shape.SetAsBox(object.width / 2.0 / self.world.scale,
        object.height / 2.0 / self.world.scale);
    self.world.CreateBody(bodyDef).CreateFixture(fixtureDef).SetUserData({});
}

Generating coins is very similar, but when we create the body here, we call SetUserData and assign a tagName and index, which helps us keep track of coins as the hero collides with them.

for (n = 0; n < objects.coins.length; n = n + 1) {
    object = objects.coins[n];
    bodyDef.position.x = (object.x + object.width / 2.0) / self.world.scale;
    bodyDef.position.y = -(object.y + object.height / 2.0) / self.world.scale;
    fixtureDef.shape = new Box2D.Collision.Shapes.b2PolygonShape();
    fixtureDef.shape.SetAsBox(object.width / 2.0 / self.world.scale,
        object.height / 2.0 / self.world.scale);
    self.world.CreateBody(bodyDef).CreateFixture(fixtureDef)
        .SetUserData({ tagName: 'coin', index: n });
}

Speaking of the hero, we need to add a hero to the physics world. Instead of defining a b2_staticBody body type here, we define a b2_dynamicBody one, so that the body we create is affected by the world physics. Remember that in the previous section you created a sprite only for the finish portal. Here, we use the data of the start portal to determine where your hero is added to the world. We want to access the hero from multiple locations in this web worker, so we create a reference to it as self.hero. We also create a self.hero.j array, which holds the horizontal (x) and vertical (y) impulses that act on the hero. Because your hero starts in midair, we initialize the number of objects in contact with the hero at 0.
bodyDef.type = Box2D.Dynamics.b2Body.b2_dynamicBody; bodyDef.position.x = (objects.portals[0].x + objects.portals[0].width / 2.0) / self.world.scale; bodyDef.position.y = -(objects.portals[0].y + objects.portals[0].height / 2.0) / self.world.scale; fixtureDef.shape = new Box2D.Collision.Shapes .b2PolygonShape(); fixtureDef.shape.SetAsBox(28.0 / 2.0 / self.world.scale, 28.0 / 2.0 / self.world.scale); self.hero = self.world.CreateBody(bodyDef); self.hero.CreateFixture(fixtureDef).SetUserData({}); self.hero.j = []; self.hero.contacts = 0; Let’s look at how we can listen for these collisions. First, we create a new Box2D.Dynamics.b2ContactListener and then define the BeginContact function, which is triggered on any new collision. If an object in the collision is a coin, we add that coin to the queue in the removal array. Note that we call postMessage to notify your main application thread that the coin object is being removed from the physics world, because we also need to remove the coins sprite being rendered in the Cocos2d-html5 framework. self.listener = new Box2D.Dynamics.b2ContactListener(); self.listener.BeginContact = function (contact) { self.hero.contacts++; if (contact.m_fixtureB.GetUserData().tagName === 'coin') { self.remove.push(contact.m_fixtureB.GetBody()); self.postMessage({ msg: 'remove', index: contact.m_fixtureB.GetUserData().index }); } else if (contact.m_fixtureA.GetUserData().tagName === 'coin') { self.remove.push(contact.m_fixtureA.GetBody()); self.postMessage({ msg: 'remove', index: contact.m_fixtureA.GetUserData().index }); } }; We also implement EndContact and decrement the collisions counter any time the hero is no longer touching an object. By incrementing when BeginContact is triggered and decrementing when EndContact is triggered, we know the hero is in midair when it is touching 0 objects. Knowing this number helps us later, when we set limits on the hero's ability to jump at any particular moment. 
In this case, we want the hero to jump only when it's in contact with a surface or wall.

self.listener.EndContact = function () {
    self.hero.contacts--;
};

Now that the contact listeners are defined, we need to set a contact listener for the world, so that our functions are triggered.

self.world.SetContactListener(self.listener);

Finally, we make two separate calls to setInterval (note that setInterval takes its delay in milliseconds). The first call updates the physics 60 times per second, that is, roughly every 16.7 milliseconds. The second call removes any queued objects from the world and is made even more frequently, to make sure removals are addressed as needed.

    setInterval(self.update, 16.7);
    setInterval(self.cleanup, 11.1);
};

We define how cleanup occurs by selecting any queued objects and invoking the DestroyBody function on them.

self.cleanup = function () {
    var n;
    for (n = 0; n < self.remove.length; n = n + 1) {
        self.world.DestroyBody(self.remove[n]);
        self.remove[n] = null;
    }
    self.remove = [];
};

Run the physics calculations

self.update = function () {
    self.world.Step(
        0.0167, 20, 20
    );
    self.world.ClearForces();
    self.hero.j = [0.0, 0.0];
};

For this app, we need to add a little more to the function to make your hero behave the way we want in the physics world. We make four key modifications:
- Prevent vertical impulse if the hero is not in contact with any objects, to make sure the hero can't jump while it's in midair.
- Apply the adjusted impulse to the hero to provide its movement.
- Set the horizontal velocity to 0 when the user is not providing touch input. Because we removed the friction from our static objects earlier, this setting is needed to prevent the hero from sliding on surfaces.
- Restrict horizontal velocity to between -5.0 and 5.0 units. When the user is providing touch input, the impulse acts continuously on the hero, so this restriction prevents the hero from gaining too much velocity.
if (self.hero.contacts === 0) { self.hero.j[1] = 0.0; } self.hero.ApplyImpulse( new Box2D.Common.Math.b2Vec2(self.hero.j[0], self.hero.j[1]), self.hero.GetWorldCenter() ); if (self.hero.j[0] === 0) { self.hero.SetLinearVelocity( new Box2D.Common.Math.b2Vec2( 0.0, self.hero.GetLinearVelocity().y ) ); } self.hero.SetLinearVelocity( new Box2D.Common.Math.b2Vec2( Math.max(-5.0, Math.min(self.hero.GetLinearVelocity() .x, 5.0)), self.hero.GetLinearVelocity().y ) ); self.postMessage({ hero: { x: self.hero.GetPosition().x * self.world.scale, y: -self.hero.GetPosition().y * self.world.scale, r: self.hero.GetAngle() } }); }; Save the file. Send impulse information We can now go back into SceneStart.js to complete the code. Locate the update function we set up earlier. Between the {}, we now fully implement the scheduled update function to send impulse information to your web worker. Until we implement the touch controls, the impulse is always zero. this.physics.postMessage({ msg: 'ApplyImpulse', j: this.hero.j }); this.hero.j[1] = 0.0; this.physics.addEventListener('message', function (e) { }); Between the {}, we listen for any messages the web worker sends to the main application thread, and then apply the attached data to the hero. The first message we need to check for is the presence of a hero object. If that object exists, we know that we have new position or rotation data and we call the setPosition and setRotation functions on our hero sprite. if (e.data.hero) { _g.LayerStart.hero.setPosition(new cc.Point( e.data.hero.x, e.data.hero.y )); _g.LayerStart.hero.setRotation(e.data.hero.r / (Math.PI * 2.0) * 360.0); } else if (e.data.msg === 'remove') { _g.LayerStart.removeChild(_g.LayerStart.coins .sprites[e.data.index]); _g.LayerStart.coins.sprites[e.data.index] = null; _g.LayerStart.coins.sprites.count = _g.LayerStart .coins.sprites.count - 1; if (_g.LayerStart.coins.sprites.count === 0) { _g.LayerStart.finish.runAction(cc.FadeTo .create(2.0, 255.0)); } } Save the file. 
Test your app in the Ripple emulator
Good work! Your Cocos2d-html5 app should now run on a working physics engine. Return to your start page in the Ripple emulator. The only visible difference from the last time we checked your progress is that your hero is no longer partially concealed in the corner of the screen, and instead appears a few blocks above the lower-left surface. We still can't interact with the hero, but in the final section of this tutorial, you will set up touch controls to make the game playable.
Last modified: 2014-03-10
https://developer.blackberry.com/html5/documentation/v1_0/adding_physics_to_your_app.html
The PimEvent class holds the data of a calendar event. More...
#include <qtopia/pim/event.h>
Inherits PimRecord.
List of all member functions.
This data includes descriptive data of the event and scheduling information. See also Qtopia PIM Library.
This enum type defines how an event repeats. See also frequency(), weekOffset(), showOnNearest(), and repeatOnWeekDay().
This enum type defines what kind of sound is made when an alarm occurs for an event. The currently defined types are: See also setAlarm() and alarmSound().
Returns the number of minutes before the event to activate the alarm for the event. See also setAlarm().
Returns the type of alarm to sound. See also setAlarm() and SoundTypeChoice.
See also setAlarm().
Returns the description of the event. See also setDescription().
Returns when the first occurrence of the event ends. See also endInCurrentTZ() and setEnd().
See also end().
The exceptions are returned as a list of EventExceptions, with the form:
struct EventException { QDate date; QUuid eventId; };
The date field is the date for which a repeating event would otherwise occur. For instance, if an event would normally occur on July 1st every year, and date is July 1st, 2000, then the event exception indicates that the repeating event does not occur on July 1st, 2000. The date for each EventException is unique within the list of exceptions returned.
If eventId is not null, then eventId describes an alternate event that occurs instead of the normal occurrence for the date specified by the date field. The event it identifies does not have to occur on the same date as the date field. To extend the above example, the event for eventId might occur on July 2nd, 2000. If the eventId is null, it describes an exception where no alternate event occurs.
Returns how often the event repeats. See also setFrequency().
Returns TRUE if there is an alarm set for the event; otherwise returns FALSE. See also setAlarm().
See also isException() and seriesUid().
Returns FALSE if the event has repeat type NoRepeat; otherwise returns TRUE. See also setRepeatType() and RepeatType.
Returns TRUE if the event is an all-day event; otherwise returns FALSE. See also setAllDay().
For example, if a daily event at 10am starts at 11am one day, the 11am occurrence would represent an exception to the 10am repeating event. See also hasExceptions() and seriesUid().
Returns the location of the event. See also setLocation().
If ok is non-NULL, *ok is set to TRUE if the event occurs on or after from, and FALSE if the event does not occur on or after from.
Returns the notes of the event. See also setNotes().
See also writeVCalendar().
See also repeatTill() and setRepeatForever().
See also setRepeatOnWeekDay().
See also repeatForever() and setRepeatTill().
See also repeatTill().
Returns the RepeatType of the event. See also setRepeatType() and RepeatType.
See also hasExceptions() and isException().
See also clearAlarm(), hasAlarm(), alarmSound(), and alarmDelay().
See also isAllDay() and setTimeZone().
See also description().
See also end() and endInCurrentTZ().
See also frequency().
See also location().
See also notes().
See also repeatForever().
The event will always repeat on the day of the week that it started on. See also repeatOnWeekDay().
See also repeatTill() and repeatForever().
See also repeatType(), hasRepeat(), and RepeatType.
An example would be a repeating event that occurs on the 31st of each month. Setting showOnNearest to TRUE will have the event show up on the 30th in months that do not have 31 days (or the 28th/29th in the case of February). See also showOnNearest().
See also start() and startInCurrentTZ().
Setting the time zone to an invalid TimeZone will cause the event to have no associated time zone. See also timeZone() and isAllDay().
Returns whether the event should be shown on the nearest match of an occurrence if the exact date the event would occur is not a valid date. See also setShowOnNearest().
Returns when the first occurrence of the event starts. See also startInCurrentTZ() and setStart().
See also start().
See also setTimeZone() and isAllDay().
if (weekOffset() == 1), the event occurs in the first week of the month.
if (weekOffset() == 3), the event occurs in the third week of the month.
if (weekOffset() == -1), the event occurs in the last week of the month.
Returns 0 if there is no week offset for the event.
See also readVCalendar().
See also readVCalendar().
This file is part of the Qtopia platform, copyright © 1995-2005 Trolltech, all rights reserved.
http://doc.trolltech.com/qtopia2.2/html/pimevent.html
This page uses content from Wikipedia and is licensed under CC BY-SA. The RfC discussion to eliminate portals was closed May 12, with the statement "There exists a strong consensus against deleting or even deprecating portals at this time." This was made possible because you and others came to the rescue. Thank you for speaking up. By the way, the current issue of the Signpost features an article with interviews about the RfC and the Portals WikiProject. I'd also like to let you know that the Portals WikiProject is working hard to make sure your support of portals was not in vain. Toward that end, we have been working diligently to innovate portals, while building, updating, upgrading, and maintaining them. The project has grown to 80 members so far, and has become a beehive of activity. Our two main goals at this time are to automate portals (in terms of refreshing, rotating, and selecting content), and to develop a one-page model in order to make obsolete and eliminate most of the 150,000 subpages from the portal namespace by migrating their functions to the portal base pages, using technologies such as selective transclusion. Please feel free to join in on any of the many threads of development at the WikiProject's talk page, or just stop by to see how we are doing. If you have any questions about portals or portal development, that is the best place to ask them. If you would like to keep abreast of developments on portals, keep in mind that the project's members receive updates on their talk pages. The updates are also posted here, for your convenience. Again, we can't thank you enough for your support of portals, and we hope to make you proud of your decision. Sincerely, — The Transhumanist 11:12, 25 May 2018 (UTC) P.S.: if you reply to this message, please {{ping}} me. Thank you. -TT Hi, I'm SkyGazer 512. MoonyTheDwarf, thanks for creating Mindustry! I've just tagged the page, using our page curation tools, as having some issues to fix. 
Thank you for creating this article! However, I'm not sure it meets our notability guidelines for games. Would you be able to maybe add some more secondary sources to the article that cover the topic? The tags can be removed by you or another editor once the issues they mention are addressed. If you have questions, you can leave a comment on my talk page. Or, for more editing help, talk to the volunteers at the Teahouse. SkyGazer 512 Oh no, what did I do this time? 13:47, 1 September 2018 (UTC)
https://readtiger.com/wkp/en/User_talk:MoonyTheDwarf
C++ vector

C++'s built-in array supports the container mechanism, but it doesn't support the semantics of the container abstraction. To solve this problem, standard C++ implements the container vector. Container vector is also a class template. To use the vector type, the required header is: #include <vector>. vector is a class template, not a data type; vector<T> is a data type. The storage of a vector is contiguous, while list is not contiguous storage.

Definition and initialization

vector<T> v1; // empty by default, so the assignment v1[0] = 5; is wrong
vector<T> v2(v1); or v2 = v1; or vector<T> v2(v1.begin(), v1.end()); // v2 is a copy of v1, so v2.size() equals v1.size()
vector<T> v3(n, i); // v3 contains n elements with value i
vector<T> v4(n); // v4 contains n elements with value 0
int a[5] = {0, 1, 2, 3, 3}; vector<int> v5(a, a + 5); // v5's size is 5, initialized with the 5 values of a; the second pointer points one past the last element to be copied
vector<T> v6(v5); // v6 is a copy of v5

General form: vector<type> identifier(element count, initial value);

Notes:
1> If no element initializer is specified, the standard library provides a value to initialize each element.
2> If the stored element is of a class type with a constructor, the standard library uses that type's constructor.
3> If the stored element is of a class type without a declared constructor, the standard library creates a value-initialized object and uses it to initialize the new elements.

The most important operations on vector objects
1. v.push_back(t) adds an element with value t at the end of the container, making the container larger.
(std::list additionally has a push_front() function, which inserts at the front end; the subscripts of the following elements increase accordingly.)
2. v.size() returns the number of elements in the container; size returns a value of the size_type defined by the corresponding vector class. v.resize(2 * v.size()) or v.resize(2 * v.size(), 99) doubles the size of v (and, in the second form, initializes the new elements to 99).
3. v.empty() tests whether the vector is empty.
4. v[n] returns the element at position n in v.
5. v.insert(pointer, content) inserts content before the position the iterator points to in v. There is also the range form v.insert(pointer, first, last), which inserts the elements in the range [first, last); for example, v.insert(p, a + 2, a + 5) inserts the three elements a[2], a[3], a[4].
6. v.pop_back() removes the last element of the container without returning it.
7. v.erase(pointer1, pointer2) deletes the elements in the range [pointer1, pointer2) (including pointer1, excluding pointer2). After deleting an element from a vector, the later elements each move forward one position; although the iterator itself does not advance, it now refers to what was the next element.
8. v1 == v2 tests whether v1 equals v2.
9. !=, <, <=, >, >= keep their usual meanings.
10. vector<T>::iterator p = v1.begin(); initializes p to point to the first element of v1; *p is the value of the element it points to. For a const vector<T>, you can only use a vector<T>::const_iterator.
11. p = v1.end(); makes p point one past the last element of v1.
12. v.clear() deletes all elements in the container.
Common algorithms in #include <algorithm>
Searching: find(), search(), count(), find_if(), count_if()
Sorting: sort(), merge()
Removing: unique(), remove()
Generating and mutating: generate(), fill(), transform(), copy()
Relational: equal(), min(), max()

sort(v1.begin(), v1.begin() + v1.size() / 2); // sorts the first half of the elements of v1
list<T>::iterator pmiddle = find(clist.begin(), clist.end(), value); // returns an iterator to the first occurrence of value; otherwise end() is returned

vector<typename>::size_type is the size type defined by vector<typename>; it can be used as a loop counter.

Programmers coming to C++ might think that the subscript operation of a vector can add elements; in fact:

vector<int> ivec; // empty vector
for (vector<int>::size_type ix = 0; ix != 10; ++ix)
    ivec[ix] = ix; // disaster: ivec has no elements

The above program attempts to insert 10 new elements into ivec, with element values 0 through 9. However, ivec is an empty vector object, and a subscript can only be used to access an existing element. The correct version of this loop is:

for (vector<int>::size_type ix = 0; ix != 10; ++ix)
    ivec.push_back(ix); // ok: add new element with value ix

Warning: only an existing element may be indexed with the subscript operator. Assigning through a subscript does not add elements; subscripting can only be used to access existing elements.

Memory management and efficiency
1. Use the reserve() function to set the capacity in advance, avoiding the inefficiency caused by repeated capacity expansions. One of the most compelling features of STL containers is that they grow automatically to fit the data you put into them, as long as they don't exceed their maximum size. To learn this maximum, simply call the member function max_size().
For vector and string, when more space is needed, the size is increased in a realloc-like fashion. The vector container supports random access, so for efficiency it is implemented internally as a dynamic array. The internal buffer grows by an exponential factor each time more space is required. When adding an element with insert or push_back, if the dynamic array's memory is insufficient, a new memory area 1.5 to 2 times the current size is dynamically allocated and the contents of the original array are copied over. So, in general, its access speed is similar to an ordinary array, but performance drops whenever such a reallocation occurs.
When a pop_back operation is performed, capacity does not decrease as the number of elements in the vector decreases; it keeps the size it had before the operation. For a vector container, if you have a large amount of data to push_back, you should use the reserve() function to set its capacity in advance; otherwise repeated reallocations will hurt performance. The reserve member function lets you minimize the number of reallocations, so you can avoid the overhead of real allocation and the invalidation of iterators, pointers, and references. But before I explain why reserve can do this, let me briefly introduce the four related, sometimes confused, member functions. In the standard containers, only vector and string provide all of these functions.
(1) size() tells you how many elements are in the container. It does not tell you how much memory the container has allocated for itself.
(2) capacity() tells you how many elements the container can hold in the memory it has already allocated. That is how many total elements the container can hold in that block of memory, not how many more it can take. If you want to know how much unoccupied memory a vector or string has, subtract size() from capacity().
If size and capacity return the same value, there is no remaining space in the container, and the next insertion (via insert or push_back) will trigger the reallocation described above.
(3) resize(Container::size_type n) forces the container to hold n elements. After resize is called, size will return n. If n is smaller than the current size, the elements at the end of the container are destroyed. If n is larger than the current size, newly constructed elements are added to the end of the container. If n is larger than the current capacity, a reallocation occurs before the elements are added.
(4) reserve(Container::size_type n) forces the container to change its capacity to at least n, provided n is no less than the current size. This typically forces a reallocation, because the capacity needs to be increased. (If n is less than the current capacity, vector ignores it and does nothing; string may reduce its capacity to the larger of size() and n, but the string's size certainly doesn't change. In my experience, using reserve to trim excess capacity from a string is generally less effective than the "swap trick", which is the subject of Item 17.)
This overview shows that a reallocation (including the raw memory allocation and deallocation it entails, the copying and destruction of objects, and the invalidation of iterators, pointers, and references) happens whenever an element must be inserted and the container's capacity is insufficient. So the key to avoiding reallocation is to use reserve as early as possible to set the container's capacity sufficiently large, preferably right after the container is constructed.
For example, suppose you want to create a vector<int> that holds the values 1 to 1000. Without using reserve, you can do it like this:

vector<int> v;
for (int i = 1; i <= 1000; ++i) v.push_back(i);

In most STL implementations, this code causes 2 to 10 reallocations during the loop. (10 is nothing strange; remember that vector generally doubles its capacity when a reallocation occurs, and 1000 is roughly 2^10.
) Change the code to use reserve, and we get this: vector<int> v; v.reserve(1000); for (int i = 1; i <= 1000; ++i) v.push_back(i); No reallocation will happen during the loop. The relationship between size and capacity lets us predict when an insertion will cause a vector or string to reallocate, and therefore when iterators, pointers, and references will be invalidated. For example, given this code: string s; ... if (s.size() < s.capacity()) { s.push_back('x'); } the call to push_back cannot invalidate any iterator, pointer, or reference into the string, because the string's capacity is greater than its size. If, instead of push_back, the code performed an insert at an arbitrary position in the string, we could still guarantee that no reallocation occurs during the insertion. Back to this clause: there are usually two situations in which you use reserve to avoid unnecessary reallocation. The first is when you know exactly, or approximately, how many elements will eventually appear in the container. In that case, as in the vector code above, you simply reserve the appropriate amount of space in advance. The second is to reserve the maximum space you might ever need and then, once you have added all the data, trim off any excess capacity. 2. Use the swap technique to trim a vector's excess space/memory. There is a way to reduce a vector's capacity from its current maximum to just what it now needs. Doing so is often referred to as "shrink to fit". The method requires only one statement: vector<int>(ivec).swap(ivec); The expression vector<int>(ivec) creates a temporary vector that is a copy of ivec: the vector's copy constructor does this job. However, the copy constructor allocates only the memory required by the copied elements, so this temporary vector has no excess capacity. We then let the temporary vector and ivec exchange data; after the swap, ivec has the trimmed capacity of the temporary, and the temporary holds the bloated capacity that used to be in ivec.
At this point (the end of that statement), the temporary vector is destroyed, so the memory previously used by ivec is released, and ivec keeps only the trimmed allocation. 3. The memory occupied by an STL vector can be freed with the swap method:

template <class T>
void ClearVector( vector<T>& v )
{
    vector<T> vtTemp;
    vtTemp.swap( v );
}

For example:

vector<int> v;
v.push_back(1);
v.push_back(3);
v.push_back(2);
v.push_back(4);
vector<int>().swap(v);
// or: { std::vector<int> tmp = v; v.swap(tmp); }  // the braces make tmp destruct on exit
// (the one-liner v.swap(vector<int>()); seen in some articles does not compile on
// conforming compilers, because swap takes a non-const reference)

4. A behavior test of the vector memory-management member functions. The C++ STL vector is widely used, but there are many guesses about how it manages memory. The following code was used to understand how its memory is actually managed:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    vector<int> iVec;
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 0 elements, capacity 0
    iVec.push_back(1);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 1 element, capacity 1
    iVec.push_back(2);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 2 elements, capacity 2
    iVec.push_back(3);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 3 elements, capacity 4
    iVec.push_back(4);
    iVec.push_back(5);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 5 elements, capacity 8
    iVec.push_back(6);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 6 elements, capacity 8
    iVec.push_back(7);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 7 elements, capacity 8
    iVec.push_back(8);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 8 elements, capacity 8
    iVec.push_back(9);
    cout << "container size:     " << iVec.size() << endl;
    cout << "container capacity: " << iVec.capacity() << endl;  // 9 elements, capacity 16
    /* On VS2005/08 the capacity growth is not doubling: e.g. 9 elements give
       capacity 9, and 10 elements give capacity 13. */
    /* Test the special swap() trick from Effective STL: */
    cout << "current vector size:     " << iVec.size() << endl;
    cout << "current vector capacity: " << iVec.capacity() << endl;
    vector<int>(iVec).swap(iVec);
    cout << "temporary vector<int> size:     " << (vector<int>(iVec)).size() << endl;
    cout << "temporary vector<int> capacity: " << (vector<int>(iVec)).capacity() << endl;
    cout << "after swap, vector size:     " << iVec.size() << endl;
    cout << "after swap, vector capacity: " << iVec.capacity() << endl;
    return 0;
}

5. Other member functions of vector:
c.assign(beg, end) — assigns the data in the range [beg, end) to c.
c.assign(n, elem) — assigns n copies of elem to c.
c.at(idx) — returns the element at index idx; throws an out_of_range exception if idx is out of bounds.
c.back() — returns the last element, without checking whether one exists.
c.front() — returns the first element, without checking whether one exists.
get_allocator — returns a copy of the container's allocator.
c.rbegin() — returns the first element of the reverse sequence.
c.rend() — returns the position past the last element of the reverse sequence.
c.~vector<Elem>() — destroys all elements and frees the memory.

The following is a supplement from other users: 1. Basic operations. (1) Header file: #include <vector>. (2) Creating a vector object: vector<int> vec; (3) Inserting at the tail: vec.push_back(a); (4) Accessing an element with a subscript: cout << vec[0] << endl; remember that subscripts start at 0. (5) Accessing elements with an iterator:

vector<int>::iterator it;
for(it=vec.begin();it!=vec.end();it++)
    cout<<*it<<endl;

(6) Inserting an element: vec.insert(vec.begin()+i, a); inserts a before the (i+1)-th element. (7) Deleting elements: vec.erase(vec.begin()+2); deletes the third element. vec.erase(vec.begin()+i, vec.begin()+j); deletes the interval [i, j-1]; positions count from 0. (8) Vector size: vec.size(); (9) Clearing: vec.clear(). 2. The elements of a vector can be not only int, double, or string, but also structures. Note that the structure must be defined globally; otherwise, there will be an error.
Here is a short piece of code:

#include<stdio.h>
#include<algorithm>
#include<vector>
#include<iostream>
using namespace std;

typedef struct rect
{
    int id;
    int length;
    int width;

    // When the elements of a vector are structures, the comparison function can
    // be defined inside the structure; this one sorts by id, length, and width
    // in ascending order.
    bool operator < (const rect &a) const
    {
        if(id!=a.id)
            return id<a.id;
        else
        {
            if(length!=a.length)
                return length<a.length;
            else
                return width<a.width;
        }
    }
}Rect;

int main()
{
    vector<Rect> vec;
    Rect rect;
    rect.id=1;
    rect.length=2;
    rect.width=3;
    vec.push_back(rect);
    vector<Rect>::iterator it=vec.begin();
    cout<<(*it).id<<' '<<(*it).length<<' '<<(*it).width<<endl;
    return 0;
}

3. Algorithms. (1) Reversing elements with reverse: header file #include <algorithm>; reverse(vec.begin(), vec.end()); (in vector functions that take two iterators, the range is half-open: the element the end iterator points at is never included). (2) Sorting with sort: #include <algorithm>; sort(vec.begin(), vec.end()); (the default order is ascending). You can sort in descending order by supplying your own comparison function to sort, defined like this: bool Comp(const int &a, const int &b) { return a > b; } and then calling sort(vec.begin(), vec.end(), Comp).
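The two central claims of the article — reserve eliminates reallocations, and the swap trick trims excess capacity — can both be checked directly, since the buffer address returned by data() changes whenever a reallocation happens. The sketch below is my own illustration, not from the original article; the function names are invented for the demo, and note that since C++11 there is also shrink_to_fit(), which the standard makes only a non-binding request, so the swap trick remains the guaranteed-portable spelling.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Count how many times push_back (re)allocates while filling a vector
// with 1..1000, by watching the buffer address returned by data() change.
// The count includes the very first allocation.
int count_reallocations(bool use_reserve) {
    std::vector<int> v;
    if (use_reserve) v.reserve(1000);   // single up-front allocation
    int reallocations = 0;
    const int* last = v.data();         // typically null for an empty vector
    for (int i = 1; i <= 1000; ++i) {
        v.push_back(i);
        if (v.data() != last) {         // buffer moved: a (re)allocation happened
            ++reallocations;
            last = v.data();
        }
    }
    return reallocations;
}

// Reserve far more capacity than needed, then trim it: shrink_to_fit()
// (C++11, a non-binding request) followed by the swap trick from the text.
// Returns {size, capacity} after trimming.
std::pair<std::size_t, std::size_t> trim_demo() {
    std::vector<int> v;
    v.reserve(1000);                    // capacity is now at least 1000
    for (int i = 0; i < 10; ++i) v.push_back(i);
    v.shrink_to_fit();
    std::vector<int>(v).swap(v);
    return {v.size(), v.capacity()};
}
```

On typical libstdc++ or libc++ builds, count_reallocations(false) reports around eleven (re)allocations, count_reallocations(true) reports zero, and the capacity returned by trim_demo() drops to the size.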
Almost every practical programming language has a type system that specifies how to assign types to various constructs in the language and how constructs of those types interact with each other. Most programmers characterize type systems with two sets of properties. One has to do with when the rules of the type system are enforced (aka type checking): dynamic or static. The other has to do with how much safety guarantee the type system provides: strong or weak. This is a confusing topic. I've run into articles on multiple otherwise reputable websites claiming static typing means variables need to be declared before use, and dynamic typing means otherwise. This is very misleading. Haskell is a statically-typed language, yet programmers don't need to declare the type of each name1. Types are inferred from how the names are used. Its optional type annotation is mostly for the benefit of human readers rather than the compiler. I've also seen articles that equate dynamic with weak and static with strong. This is also wrong. A dynamically-typed language can be more strongly-typed than a statically-typed language. To understand the distinction, we need to separate names and values. A name is the identifier you use in a program to refer to entities like objects and functions. Values are the entities themselves. A name can refer to different values at different times, and a value can be referred to by different names. In statically-typed languages, names have types, and the type of a name typically cannot change within its scope. A name of a certain type can only refer to values of that type. In a statement like a = sum(b, c), a, b, c, and sum all have types. The compiler checks to see if the types of b and c match the parameter types of sum and if the result type of sum matches the type of a. If not, it emits a type error and refuses to compile the program. In dynamically-typed languages, names themselves do not have types, but the values they refer to do.
In the last example a = sum(b, c), when the program runs, the language runtime looks at the values b and c and makes sure they are of a type to which sum can be applied. For example, if sum(b, c) is implemented as b + c, the language runtime checks whether b and c refer to values of types for which the operator + is defined. If not, it throws an exception. Let's turn to the strength of type safety by looking at some examples. Both C and Rust are statically-typed, but they provide different levels of type safety. Consider the following C program:

#include <stdio.h>

int main() {
    int numbers[] = {0, 1, 2};
    printf("%d", numbers[6]);
    return 0;
}

It has an obvious problem, but it can be compiled successfully. Depending on what compiler you use, you might see a warning, but it's just a heuristic added by the compiler to assist programmers. The C language standard allows the program to be compiled. When I run this program on my Mac, it prints 32766, which is just whatever gibberish happened to be at that memory location. If this were a complex program, it would likely be a frustrating bug. The following Rust program attempts to do the same thing:

fn main() {
    let numbers = [0, 1, 2];
    println!("{}", numbers[5]);
}

But when it is compiled, the following error is emitted. It won't get a chance to run.

error: index out of bounds: the len is 3 but the index is 5
 --> array.rs:3:20
  |
3 |     println!("{}", numbers[5]);

This is because in Rust, the length is part of the type of an array literal, so [0, 1] and [0, 1, 2] are actually different types. In this case the compiler can detect illegal access to an array literal just by looking at the type.
To verify this, add a line to the Rust code:

fn main() {
    let numbers = [0, 1, 2];
    let foo: () = numbers; // <- add
    println!("{}", numbers[5]);
}

You will see the following error from the compiler:

error[E0308]: mismatched types
 --> array.rs:3:19
  |
3 |     let foo: () = numbers;
  |                   ^^^^^^^ expected (), found array of 3 elements
  |
  = note: expected type `()`
             found type `[{integer}; 3]`

This is an intentional type-mismatch error we add to show the type of numbers. The last line says it's [{integer}; 3], in which 3 is the length of the array. For variable-size vectors, Rust uses a Result type to force programmers to check for the possibility of out-of-bound errors. The infamous type systems of Perl and PHP have tormented countless programming souls; the official PHP manual itself documents the lenient comparison and conversion rules at length. PHP and Perl are extremely lenient to type mismatches and go to great lengths to massage arguments into whatever types are required. For quick-and-dirty scripts, they may allow the programmer to get things done with as little code as possible, but for large projects they are good at burying bugs. Most other languages in wide use today require explicit conversion between unrelated types, whether they are dynamically typed or statically typed. For example, in Clojure, you'd need to call Integer/parseInt to parse a string to an integer. Both static type checking and a strong type system help to uncover bugs as early as possible, usually at the expense of making programs more verbose, but they are different things. Discussion: Never been a fan of dynamically and weakly typed languages. I don't really want my code to be quirky in ways that eventually turn into more spaghetti code. Great article. I wish you had shown a graph of the learning curve and complexity of statically and dynamically typed languages. To put it simply, static type checking means the ability to discover type mismatches at compile time, while dynamic type checking can only detect type mismatches at runtime.
However, dynamic type checking can be strengthened by using a linting tool that enforces stricter typing. Clojure is a compiled language; it's dynamic due to its ability to compile on the fly. Whether it's strong or weak depends on how you use it. Hi Hong, great overview! Static vs dynamic and weak vs strong get a lot of people confused all the time! :D
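The compile-time versus runtime distinction raised in the comments above can be shown in one more statically-typed language. The C++ sketch below is my own addition (the names sum and total are invented for the demo): the type error is rejected by the compiler before the program ever runs, mirroring the Rust example.

```cpp
#include <string>

// sum is declared for int arguments only; the compiler enforces this
// at every call site, so a mismatched argument never reaches runtime.
int sum(int a, int b) { return a + b; }

int total() {
    int b = 20, c = 22;
    // std::string s = "oops";
    // sum(s, c);   // uncommenting this is a compile-time type error,
    //              // not a runtime exception
    return sum(b, c);
}
```

This is the statically-checked analogue of the dynamic case described earlier, where the runtime inspects the values of b and c only when sum is actually called.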
In this section, you will learn about zip files. You will also learn how to create a zip file from any file through a Java program. A zip file is a compressed-format file and takes less space than the original file. Many files available for download on the internet are stored in "ZIP" files. When the original files are needed, the user can "extract" them from the ZIP file using a ZIP file program. It is possible to compress and decompress data using tools such as WinZip, gzip, and Java Archive (or jar); these tools are used as standalone applications. It is also possible to zip and unzip files from your Java applications. This program shows you how to zip any file through a Java program. Our program accepts the file name and the output file name for creating the archive. The program compresses the file using the default compression level. The compression level can be set using the setLevel(Deflater.DEFAULT_COMPRESSION) method of the ZipOutputStream class. Constructors and methods of ZipOutputStream: ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFileName)); The code above creates an instance of the ZipOutputStream class by passing it a FileOutputStream instance. The name of the zip file to be created is given in the FileOutputStream() constructor. out.setLevel(Deflater.DEFAULT_COMPRESSION); The setLevel() method of the ZipOutputStream class sets the compression level; the argument Deflater.DEFAULT_COMPRESSION specifies the default level. FileInputStream in = new FileInputStream(filesToZip); The code above creates an instance of FileInputStream, taking the input file name as a parameter. out.putNextEntry(new ZipEntry(filesToZip)); This is the putNextEntry() method of the ZipOutputStream class, which is used to add the files to be zipped, one entry at a time.
This method closes the previous zip entry, if one is active, and begins the next entry, taking an instance of the ZipEntry class that holds the name of the file to be zipped. Here is the code of the program:

import java.io.*;
import java.util.zip.*;

public class ZipCreateExample{
  public static void main(String[] args) throws IOException{
    System.out.println("Example of ZIP file creation.");
    System.out.println("Please enter file name to zip : ");
    BufferedReader input = new BufferedReader(new InputStreamReader(System.in));
    String filesToZip = input.readLine();
    File f = new File(filesToZip);
    if(!f.exists()) {
      System.out.println("File not found.");
      System.exit(0);
    }
    System.out.print("Please enter zip file name with extension .zip : ");
    String zipFileName = input.readLine();
    byte[] buffer = new byte[18024];
    try{
      ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zipFileName));
      out.setLevel(Deflater.DEFAULT_COMPRESSION);
      FileInputStream in = new FileInputStream(filesToZip);
      out.putNextEntry(new ZipEntry(filesToZip));
      int len;
      while ((len = in.read(buffer)) > 0){
        out.write(buffer, 0, len);
      }
      out.closeEntry();
      in.close();
      out.close();
    }
    catch (IllegalArgumentException iae){
      iae.printStackTrace();
      System.exit(0);
    }
    catch (FileNotFoundException fnfe){
      fnfe.printStackTrace();
      System.exit(0);
    }
    catch (IOException ioe){
      ioe.printStackTrace();
      System.exit(0);
    }
  }
}
#include <wx/richtext/richtextbuffer.h> Stores selection information. The selection does not have to be contiguous, though currently non-contiguous selections are only supported for a range of table cells (a geometric block of cells can consist of a set of non-contiguous positions). The selection consists of an array of ranges, and the container that is the context for the selection. It follows that a single selection object can only represent ranges with the same parent container. Copy constructor. Creates a selection from a range and a container. Default constructor. Adds a range to the selection. Copies from sel. Returns the container for which the selection is valid. Returns the number of ranges in the selection. Returns the range at the given index. Returns the first range if there is one, otherwise wxRICHTEXT_NO_SELECTION. Returns the selection ranges. Returns the selection ranges. Returns the selection appropriate to the specified object, if any; returns an empty array if none at the level of the object's container. Returns true if the selection is valid. Assignment operator. Equality operator. Index operator. Resets the selection. Sets the selection. Sets the selections from an array of ranges and a container object. Sets the container for which the selection is valid. Sets a single range. Sets the selection ranges. Returns true if the given position is within the selection. Returns true if the given position is within the selection. Returns true if the given position is within the selection range. Returns true if the given range is within the selection range.
Configuration of USB 3.0 midrel.tchouta Aug 18, 2017 3:31 PM hello people, I am working on the configuration of USB 3.0 with the EZ-USB FX3. I have two questions about it, please. 1) Configuration mode: there are two interesting kinds of configuration, slavefifo2bits and slavefifo5bits, but I don't know which one to choose. In AN68829, page 3, Cypress says that slavefifo5 is recommended to reduce socket latencies, and also that it is used when more than 4 sockets are needed for GPIF II. In my case, can I just use slavefifo2bits, or should I change? 2) When I download files from the firmware examples, libraries are missing, such as #include "cyu3system.h" #include "cyu3os.h" #include "cyu3dma.h" #include "cyu3error.h" #include "cyu3usb.h" #include "cyu3uart.h" Is it normal to get examples without the libraries? Can you give the reasons, or point me to the best example about this, please? Otherwise, thank you for helping me understand and configure the EZ-USB FX3. 1. Re: Configuration of USB 3.0 Madhu Lakshmipathy Mar 16, 2015 9:06 AM (in response to midrel.tchouta) Hi, 1) Please let us know what your application is. Based on that, we will recommend either 2-bit or 5-bit SlaveFifo. By the way, 2-bit SlaveFifo is useful in most cases. 2) The libraries are present by default in the FX3 installation path, so the firmware project files need not contain these libraries. It is normal. For beginning with FX3, I recommend this application note: Regards, -Madhu Sudhan 2. Re: Configuration of USB 3.0 midrel.tchouta Mar 16, 2015 9:28 AM (in response to midrel.tchouta) 1) My application is simple: it is just a configuration of the EZ-USB FX3 to take data (a sort of video stream) from the FPGA and send it out the USB 3.0 port (without CPU intervention — auto DMA). Zynq (master) ====> FPGA (slave) 2) OK, but why do I get some errors when I put the example project into Eclipse? Do I need to put all projects in the specific path (the FX3 installation path)? 3.
Re: Configuration of USB 3.0 midrel.tchouta Mar 16, 2015 9:29 AM (in response to midrel.tchouta) 1) The right scheme: FPGA (master) <======> FX3 (slave) 4. Re: Configuration of USB 3.0 Madhu Lakshmipathy Mar 16, 2015 12:12 PM (in response to midrel.tchouta) Hi, 1) For this application the 2-bit SlaveFifo is best. Please refer to the AN65974 application note. 2) Can you please attach a snapshot of Eclipse showing the error? Regards, - Madhu Sudhan 5. Re: Configuration of USB 3.0 midrel.tchouta Mar 17, 2015 4:44 AM (in response to midrel.tchouta) Hi, Madhu 1) Thanks for your response. Just to know, please: if I download the example firmware for the synchronous slave FIFO 2-bit mode, is it possible to program it directly onto my FX3 device? 2) I've attached 3 snapshots of my Eclipse in a PDF file so you can look at the errors. I don't know why, but the problem with my libraries is solved; however, there are a few errors that say "Symbol XXXX could not be resolved", as you can see in the PDF file. - snapshot.pdf 99.6 K 6. Re: Configuration of USB 3.0 Madhu Lakshmipathy Mar 17, 2015 5:51 AM (in response to midrel.tchouta) Hi, 1) Yes, the example project firmware comes along with the image (*.img) files. You can program them directly into FX3 using the Cypress Control Center tool easily. 2) To solve the errors, can you please try this: right-click your project in the Project Explorer in Eclipse, then select Index -> Rebuild. Please see the attached snapshot. Let me know if this fixed your issue. - Untitled15.png 195.1 K
In fact in the (page 7) , they told me that with 32 bits I will get 403 Mo/s. the length of data influence the data rate ? regarsds, 8. Re: Configuration of USB 3.0Madhu Lakshmipathy Mar 17, 2015 7:50 AM (in response to midrel.tchouta) The cause of that issue was Eclipse was not refreshed when that projetc was loaded. This is a minor issue with eclipse. Doing Index->rebuild is like refreshing eclipse workspace. 1) We strongly recommend to use GPIF II designer for generating the cyfxgpif2config.h file, even if you want it to be similar to another project. By doing so, you can confirm if the state machine is correct and meets your requirements, before you proceed to generate the cyfxgpif2config.h file 2) Yes. If you use 16 Bit you will get only half the capacity of FX3. You should use 32 Bit to exploit the full capability of FX3 Regards, - Madhu Sudhan 9. Re: Configuration of USB 3.0midrel.tchouta Mar 17, 2015 9:45 AM (in response to midrel.tchouta) Ok. Thanks you for your help
Extension to Google Maps vs. MSN Virtual Earth | UnderstandingWebServices.com/mapping Community Wiki now live Related link: As mentioned in the intro, I have taken a few moments to set up a community wiki so that we can begin to work together to develop a one-of-a-kind resource regarding web-service-based mapping technologies such as Google Maps, MSN Virtual Earth, and Yahoo! Maps. Located at the first paragraph reads:… UPDATE: For reasons of consolidation and to avoid confusion I have moved everything under the openunderstanding.com domain. I have updated the links accordingly. I am in the process of moving the content from the old server to the new server. If you attempt to access these links and they don't seem to work, the move is still in progress. Thanks in advance for your patience! At the moment, I invite anyone and everyone with interest to take part in this community-based effort to begin documenting and sharing with each other all that we can regarding the latest and greatest extension projects, the underlying base APIs, links to books and online resources to learn more, etc. To kick things off I have created a "services" namespace and added two pages to begin keeping track of each and every extension project for MSN Virtual Earth and Google Maps. You can go directly to either of these by following one of the links below: MSN Virtual Earth API Extensions Projects and Google Maps API Extension Projects Now we're back on even ground :) If you have something to say and would like to document your own findings, posting images, links, etc. to help back up your comments, you are now completely enabled to do just that in a central location from which we all can benefit. Oh, and I still plan a follow-up to this post in another day or two, but wanted to get the ball rolling with this ASAP so that you all can get involved in a way that sounds a lot less like my voice and a lot more like yours. Or better said, ours :) With that, Enjoy!
Please visit the wiki and post your comments there. Thanks! :)
Hi Karsten I had updated prior to trying. Re-updating made no difference. Is the CVS buggered again? David On Wed, 2002-07-17 at 19:24, Karsten Hilbert wrote: > > Richard / Ian > > > > What do I need to do here? > You need to get the latest CVS. > > It is saying that it can't find the _() function which is used > in i18n. > > The "new" i18n that is in CVS puts this function into the main > namespace thereby obsoleting the need to define _() in each > and every module as long as gmI18n has been imported > beforehand (as is the case in the "new" gnumed.py) > > > address@hidden terry]$ ./gnumed.py > Just curious: What machine are you on ? :-) > > Karsten > -- > GPG key ID E4071346 @ wwwkeys.pgp.net > E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 > > _______________________________________________ > Gnumed-devel mailing list > address@hidden > --
This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.

On Thu, Feb 15, 2001 at 10:14:01PM -0800, Tim Prince wrote:
>
> It's still a problem on linux as well as cygwin.  I know that cygwin is not
> a supported target, and I'm grateful to the extent that people have helped
> make it work.  The problem is the same old one which started early in
> gcc-2.96 with the g77 sequence (all typed double precision).  It has always
> worked correctly in the 2.95 series.
>
>       if(x /= y)then
>         if( x * y >=0)then
>           a=abs(x)
>           b=abs(y)
>           c=max(a,b)
>           d=min(a,b)
>           w=1-d/c
>         else
>           w=1
>         endif
>       else
>         w=0
>       endif
>
> which, when compiled with g77-2.97 -O, -Os, or -O2, along with the
> combination -march=pentiumpro -ffast-math, sets w to 0., regardless of the
> values of x and y.

Okay, I can now reproduce this. There's a complete Fortran test case appended (requires user input). It looks like a general problem to me. Jan, this appears to be right in your corner: it's a problem with reg-stack and conditional moves. Here is the code generated for the innermost if clause by the gcc-3.0 branch as of earlier today, at -Os -march=pentiumpro -ffast-math. On entry to this block, x and y are in %st(0) and %st(1) respectively.

        fabs
        fxch    %st(1)
        fabs
        fcomi   %st(1), %st
        fld     %st(0)
        fcmovbe %st(2), %st
        fxch    %st(1)
***     fcmovbe %st(2), %st
        fstp    %st(2)
        fdivrp  %st, %st(1)
        fsubrl  .LC3            ; constant 1.0D
        fstpl   -24(%ebp)

Let's say x and y are 12. and 23. Top of stack is at right.
fabs                  12         --  12
fxch %st(1)           23 12      --  12 23
fabs                  23         --  23
fcomi %st(1),%st                 --
fld %st(0)            12 23      --  12 23 23
fcmovbe %st(2),%st               --
fxch %st(1)                      --
fcmovbe %st(2),%st               --
fstp %st(2)           12 23 23   --  23 23
fdivrp %st,%st(1)     23 23      --  1
fsubrl .LC3           1          --  0

If I swap x and y (23 and 12), we get instead

fld %st(0)            23 12      --  23 12 12
fcmovbe %st(2),%st    23 12 12   --  23 12 23
fxch %st(1)           12 23      --  23 12
fcmovbe %st(2),%st    23 23 12   --  23 23 23
fstp %st(2)           23 23 23   --  23 23

The intended result of the stack shuffle is to put 'd' at %st(0) and 'c' at %st(1): fdivrp %st,%st(1) means divide %st(0) by %st(1). We can get this right by changing one instruction: the one I marked with three stars. It needs to be 'fcmovnbe' (not below or equal). This is reg-stack's fault. Right before reg-stack, the insns which will become fcmovs are

(insn 208 228 211 {*movdfcc_1}
  (set (reg:DF 10 st(2))
       (if_then_else:DF (ge (reg:CCFP 17 flags) 0)
                        (reg:DF 8 st(0))
                        (reg:DF 10 st(2)))))

(insn 211 208 115 {*movdfcc_1}
  (set (reg:DF 9 st(1))
       (if_then_else:DF (unle (reg:CCFP 17 flags) 0)
                        (reg:DF 8 st(0))
                        (reg:DF 9 st(1)))))

After reg-stack, they are instead

(insn 208 228 270 {*movdfcc_1}
  (set (reg:DF 8 st(0))
       (if_then_else:DF (le (reg:CCFP 17 flags) 0)
                        (reg:DF 10 st(2))
                        (reg:DF 8 st(0)))))

(insn 211 270 271 {*movdfcc_1}
  (set (reg:DF 8 st(0))
       (if_then_else:DF (unle (reg:CCFP 17 flags) 0)
                        (reg:DF 10 st(2))
                        (reg:DF 8 st(0)))))

Notice how the 'ge' in the first insn has mysteriously become an 'le' after reg-stack. It is NOT trying to reverse the sense of the comparison here; it has got confused while trying to reorder operands to fit the 387 register stack. fcmov can only move to top-of-stack. N.B. 'unle' means 'UNordered or less or equal', not 'not less or equal'. That should be enough analysis for someone to find the bug on.

zw

      program test
      implicit none
      double precision a,b,c,d,w,x,y
      read *,x,y
      if(x /= y)then
         if( x * y >=0)then
            a=abs(x)
            b=abs(y)
            c=max(a,b)
            d=min(a,b)
            w=1-d/c
         else
            w=1
         endif
      else
         w=0
      endif
      print *,w
      end
The Processing Interface Attach the Arduino-based RFID reader to your computer before you start, because this sketch interfaces with it via a USB-to-serial link. When you run the Processing sketch that follows, you'll get a graphic interface that allows you to read tags, delete and print the reader's database, and upload the tags to the O'Reilly Emerging Tech database and retrieve the user associated with the tag. Initially, the screen looks like Figure 1: After you scan a few tags, you'll see a list in the Tags to upload field: Finally, if you click upload, the GUI will retrieve a record from the O'Reilly site, like so: The Processing Code Now that you've got a picture of the basic functionality, here's the code. It's divided into three tabs in the Processing sketch. The main tab provides functionality to communicate with the Arduino module. The buttons tab provides methods for making and drawing user buttons on the screen. It's basically the same as the buttons tab in the RFID writer tutorial. The profiler tab provides the functions necessary to make the HTTP request to the PHP scripts below, and to parse the XML record that comes back from the request. You'll need to import the serial library to access the serial port. In addition, there are a few global variables at the top of the main tab to do some housekeeping, like manage the text on the screen, keep track of the tags you've read from the reader, and save your own tag to a file on your computer so the sketch knows who you are.
import processing.serial.*;

Serial myPort;                    // the serial port
int fontHeight = 14;              // font for drawing on the screen
String messageString;             // the main display string
int lineCount = 0;                // count of lines in messageString
int maxLineCount = 5;             // largest that lineCount can be
String tagsToUpload = "";         // CSV string of hex-encoded RFID tags
boolean identifyingSelf = false;  // whether you're scanning your own tag
String myTagId = "";              // your tag ID

The setup() method initializes the serial port and the font for drawing text on the screen, makes the user interface buttons, and checks to see if there's a file with your RFID tag saved in the sketch folder.

void setup() {
  // set the window size:
  size(600,400);
  // list all the serial ports:
  println(Serial.list());
  // based on the list of serial ports printed from the
  // previous command, change the 0 to your port's number:
  String portnum = Serial.list()[0];
  // initialize the serial port:
  myPort = new Serial(this, portnum, 9600);
  // clear the serial buffer:
  myPort.clear();
  // only generate a serialEvent() when you get a newline:
  myPort.bufferUntil('\n');
  // create a font with the second font available to the system:
  PFont myFont = createFont(PFont.list()[2], fontHeight);
  textFont(myFont);
  // initialize the message string:
  messageString = "waiting for reader to reset\n";
  // make the UI buttons:
  makeButtons();
  // get the user's ID if it's saved:
  String[] savedData = loadStrings("yourID.txt");
  if (savedData != null) {
    if (savedData.length > 0) {
      myTagId = savedData[0];
    }
  }
}

The draw() method is very simple. It calls two routines to draw the user interface buttons and the user profile if one has been retrieved from the web, and draws the text on the screen to let you know what's been communicated to and from the reader.
void draw() { // clear the screen: background(0); // draw the UI buttons: drawButtons(); // show the last obtained user profile: showProfile(); // draw the message string and tags to upload: textAlign(LEFT); text("Your tag ID:" + myTagId, 10, 30); text(messageString, 10, 50,300, 130); text("Tags to upload:", 10, 220); text(tagsToUpload, 10, 240,500, 130); } The serialEvent() method is called automatically every time there is new serial data available. The serial library buffers the serial data until a newline character (\n) is received, then it generates the serialEvent(). The method itself calls another method called parseForTag() to look for an RFID tag string in the incoming data. If it finds one, it checks to see if you requested that this be saved as your own tag. If so, it saves the tag to a file in the sketch folder. If not, it adds it to a string of received tags: void serialEvent(Serial myPort) { // read the serial buffer: String inputString = myPort.readStringUntil('\n'); // if there's something there, act: if (inputString != null) { // see if the newest line contains an RFID tag: String newTag = parseForTag(inputString); // if you got a tag, add it to the list to upload: if (newTag != null) { if (identifyingSelf) { myTagId = newTag; identifyingSelf = false; String[] dataToSave = new String[1]; dataToSave[0] = myTagId; saveStrings("yourID.txt", dataToSave); } else { // add a comma if there's already text in the string: if (tagsToUpload != "") { tagsToUpload += ","; } tagsToUpload += newTag; } } // display the incoming lines, 5 lines at a time: if (lineCount < maxLineCount) { messageString += inputString; lineCount++; } else { messageString = inputString; lineCount = 0; } } } The aforementioned parseForTag() method scans each line of text received and looks for a colon. It assumes anything after the colon is an RFID tag string.
When it finds a tag, it returns it: String parseForTag(String thisString) { String thisTag = null; // separate the string on the colon: String[] tagElements = split(thisString, ":"); // if you have at least 2 elements, extract the parts: if (tagElements.length > 1) { // get the record number: int recordNumber = int(tagElements[0]); // if the tag ID is not "0000", get it: if (!tagElements[1].substring(0,4).equals("0000")) { thisTag = tagElements[1]; thisTag = trim(thisTag); } } return thisTag; } A method called buttonPressed() is called when the user releases the mouse button over one of the user interface buttons. It determines which button was pressed and takes the appropriate action: void buttonPressed(RectButton thisButton) { // get the button number from the button passed to you: int buttonNumber = buttons.indexOf(thisButton); // do different things depending on the button number: switch (buttonNumber) { case 0: // get tags from reader: tagsToUpload = ""; myPort.write("p"); break; case 1: // upload tags to net: if (tagsToUpload.equals("")) { messageString = "No tags to upload."; } else { String[] theseTags = split(tagsToUpload, ","); for (int thisTag = 0; thisTag< theseTags.length; thisTag++) { makeRequest(theseTags[thisTag]); } // after you upload, clear tagsToUpload: tagsToUpload = ""; } break; case 2: // delete tags from reader: messageString = "deleting reader databasen"; myPort.write("c"); break; case 3: // scan your own tag identifyingSelf = true; messageString = "Waiting for your personal tag"; } } That’s the end of the main tab. The profiler tab contains the functions necessary to upload a tag to the web and retrieve the results. 
It starts with some global variables to store the URL of the PHP script and the results of the retrieved user record: // the URL of the PHP script that passes the tag to the database: String myUrl = " PImage photo; // String containing the photo URL String[] profile; // String array containing the HTML of the profile String username; // String of the username String affiliation; // String of the affiliation String twitter; // String of the twitter username String tagNumber; // String of the user tag number String country; // String of the country int profileX = 350; // horizontal position of the profile int profileY = 150; // vertical position of the profile The makeRequest() method adds the RFID tag string to the end of the URL string, makes the HTTP call, and waits for the results. Then it saves the results in a file. Finally, it parses the results for a valid XML record using the parseRecord() method: // This method makes the HTTP request to the PHP script // that calls the O'Reilly database and stores it // in a file: void makeRequest(String whichTag) { // make HTTP call: String thisUrl = myUrl + whichTag; // save the result in an array, then write it to a file: String[] result = loadStrings(thisUrl); saveStrings("person.xml", result); // parse the results: parseRecord("person.xml"); } The parseRecord() method does just that: it opens the file created by the previous method and parses it for the XML data. Processing’s XMLElement object always reads from a file, so that’s why the two methods use a file to exchange the record. Before parsing the XML, this method checks to see that the file begins with an XML header. If it doesn’t, the method stops, clears the record variables and returns. It does this because the O’Reilly API sometimes returns HTTP error messages instead of XML, for example, if you give it a tag that’s not in the database. void parseRecord(String filename) { // get the first line, make sure it's an XML file.
// if not, skip the rest of the substring and return: String firstLine = loadStrings(filename)[0]; if (!(firstLine.substring(0, 5).equals("<?xml"))) { // clear the profile variables: photo = null; profile = null; username = null; affiliation= null; twitter = null; tagNumber = null; country = null; return; } // open the XML record: XMLElement xml = new XMLElement(this, filename); int lines = xml.getChildCount(); // parse the record line by line: for (int i = 0; i < lines; i++) { XMLElement thisRecord = xml.getChild(i); String fieldName = thisRecord.getName(); String content = thisRecord.getContent(); if (fieldName.equals("photo")) { photo = loadImage(content); } if (fieldName.equals("profile")) { profile = loadStrings(content); } if (fieldName.equals("name")) { username = content; } if (fieldName.equals("affiliation")) { affiliation = content; } if (fieldName.equals("twitter")) { twitter = content; } if (fieldName.equals("rfid")) { tagNumber = content; } if (fieldName.equals("country")) { country = content; } } } The final method in the profiler tab, showProfile(), draws the profile information to the screen. It’s called by the draw() method: void showProfile() { // text color: fill(0); int lineNumber = profileY; textAlign(LEFT); // show profile results if the profile variables are populated: if (photo != null) { image(photo, profileX, lineNumber); lineNumber = lineNumber + 130; } // not displaying profile because it's all HTML and I'm too lazy to // strip out the bio div. 
It's a good goal for someone else if (username != null) { text(username, profileX, lineNumber); // increment the line vertical position: lineNumber = lineNumber + fontHeight+4; } if (affiliation != null) { text(affiliation, profileX, lineNumber, width - profileX, 40); // increment the line vertical position: lineNumber = lineNumber + 3*(fontHeight+4); } if (twitter != null) { text(twitter, profileX, lineNumber); // increment the line vertical position: lineNumber = lineNumber + fontHeight+4; } if (tagNumber != null) { text(tagNumber, profileX, lineNumber); // increment the line vertical position: lineNumber = lineNumber + fontHeight+4; } if (country != null) { text(country, profileX, lineNumber); // increment the line vertical position: lineNumber = lineNumber + fontHeight+4; } } That’s the main code for the sketch. The final tab in the sketch generates the buttons. It’s based on Processing‘s buttons example (in the File menu –> Examples –> Topics –> GUI –> Buttons). For more details, see that example. Here’s the sketch in its entirety: The PHP script The PHP script that the Processing sketch calls is pretty simple. It uses PHP’s cURL library to access the O’Reilly database and return the results. It doesn’t do any parsing, just passes the results straight back to the Processing sketch. It’s divided up into two files, to keep the password and username more secure.
The access file looks like this: <?php $username=urlencode('you@yourserver.com'); // the email address you registered with $password=urlencode('l33+-h4x0r'); // your password ?> The main PHP script is as follows: <?php include_once('access.php'); // Where's the database: $url=" // get the tag from the call to this script: $url .= $_GET["tag"]; // initialize curl: $ch = curl_init(); // set up the URL: curl_setopt($ch, CURLOPT_URL, $url); curl_setopt ($ch, CURLOPT_SSL_VERIFYHOST, 0); curl_setopt ($ch, CURLOPT_SSL_VERIFYPEER, 0); // prepare to make a POST call: curl_setopt($ch, CURLOPT_POST, 1); // set up authentication: curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC); curl_setopt($ch, CURLOPT_USERPWD, "$username:$password"); /* censored line goes here */ // print the result: if (curl_errno($ch)) { echo "CURL Error: " . curl_error($ch); } else { // Show the result var_dump($response); curl_close($ch); } ?> You may notice a comment there that says /* censored line goes here */. For reasons beyond my control, I cannot enter the curl command that goes there. It could be that WordPress censors this line, or it could be that dreamhost does it. Either way, you’ll have to work out the line for yourself, but here’s a hint: $response equals curl underscore exec ($ch) semicolon If you want to duplicate this on your own server, you’ll need to change the URLs in the Processing sketch and the PHP script, of course, and you’ll have to make a database to replace the O’Reilly one that generates its own XML records. The XML records look like this: <?xml version="1.0" encoding="UTF-8"?> <attendee> <photo> <profile> <name>Tom Igoe</name> <affiliation>Interactive Telecommunications Program, NYU</affiliation> <twitter>tigoe</twitter> <rfid>66bd00c0</rfid> <country>US</country> </attendee> That’s it! Happy tagging.
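As a quick sanity check on that record format: the attendee XML parses with any standard XML library. Here is an illustrative check in Python, using the sample field values from the record above; the photo and profile URLs are hypothetical placeholders, since the originals aren't reproduced here:

```python
import xml.etree.ElementTree as ET

# The sample attendee record from above; the photo/profile URLs are
# placeholder values (the real ones are not shown in the article).
record = b"""<?xml version="1.0" encoding="UTF-8"?>
<attendee>
  <photo>http://example.com/photo.jpg</photo>
  <profile>http://example.com/profile.html</profile>
  <name>Tom Igoe</name>
  <affiliation>Interactive Telecommunications Program, NYU</affiliation>
  <twitter>tigoe</twitter>
  <rfid>66bd00c0</rfid>
  <country>US</country>
</attendee>"""

root = ET.fromstring(record)
# Mirror the parseRecord() loop: one field per child element.
fields = {child.tag: child.text for child in root}
print(fields["name"], fields["rfid"])  # Tom Igoe 66bd00c0
```

Processing's XMLElement walks the same tree shape with getChildCount(), getName() and getContent(), as shown in the parseRecord() method earlier.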
https://www.tigoe.com/pcomp/code/PHP/377/
CC-MAIN-2022-21
refinedweb
2,061
59.74
simplevfs 0.2.1 Minimalist virtual file system for game development To use this package, run the following command in your project's root directory: Manual usage Put the following dependency into your project's dependencies section: SimpleVFS Introduction SimpleVFS is a fork of D-GameVFS that updates it to the latest changes of the language and attempts to polish and finish the previous work. D:GameVFS, and by extension SimpleVFS, is a minimalist open source virtual file system library for the D programming language oriented at game developers. Provided functionality is very basic - files and directories can be created, read and written, but not deleted. There are no security features - e.g. SimpleVFS can't handle a situation where a file it's working with is deleted outside the program. Only files in a physical file system are supported at the moment. There is no archive support right now. Features - File system independent, easy to use API for file/directory manipulation. - No external dependencies. - Seamless access to multiple directories as if they were a single directory. - Easy to extend with a custom file system backend. - There is no support for ZIP or similar archive formats at the moment. - There is no support for deleting files/directories, and none is planned. - There are no security features and none are planned. Directory structure =============== ======================================================================= Directory Contents =============== ======================================================================= ./ This README file, utility scripts. ./docs API documentation ./source Source code. ./examples Code examples. =============== ======================================================================= Getting started ^^^^^^^^^^^^^^^^^^^^^^^^ Install the DMD compiler ^^^^^^^^^^^^^^^^^^^^^^^^ The Digital Mars D compiler, or DMD, is the most commonly used D compiler.
You can find its newest version here. Download the version of DMD for your operating system and install it. .. note:: Other D compilers exist, such as GDC and LDC. ^^^^^^^^^^^^^^^^^^^^^^^^ Simple SimpleVFS project ^^^^^^^^^^^^^^^^^^^^^^^^ Create a directory for your project. To have something for D:GameVFS to work with, create subdirectories main_data and user_data in the project directory. In these directories, create some random files or subdirectories. Create a file called main.d in your project directory. Paste the following code into the file: .. code-block:: d import std.stdio; import std.typecons; import dgamevfs; void main() { // Two filesystem directories, one read-only and the other read-write. auto main = new FSDir("main", "main_data/", No.writable); auto user = new FSDir("user", "user_data/", Yes.writable); // Stack directory where "user" overrides "main". auto stack = new StackDir("root"); stack.mount(main); stack.mount(user); // Iterate over all files recursively, printing their VFS paths. foreach(file; stack.files(Yes.deep)) { writeln(file.path); } VFSFile file = stack.file("new_file.txt"); // Creates "new_file" in "user" (which is on top of "main" in the stack). file.output.write(cast(const void[])"Hello World!"); // Read what we've written. auto buffer = new char[file.bytes]; file.input.read(cast(void[]) buffer); writeln(buffer); } Code for this example can be found in the examples/getting_started directory. See the API documentation for more code examples. ^^^^^^^^^^^^^^^^^^^^^^^ Explanation of the code ^^^^^^^^^^^^^^^^^^^^^^^ We start by importing dgamevfs._ which imports all needed D:GameVFS modules. D:GameVFS uses the Flag template instead of booleans for more descriptive parameters (such as Yes.writable instead of true). You need to import std.typecons to use Flag.
We create two FSDirs - physical file system directory objects, which will be called main and user in the VFS and will represent the main_data and user_data directories which we've created in our project directory. We construct main as a non-writable directory - it's read-only for the VFS. Next, we create a StackDir and mount() our directories to it. StackDir works with mounted directories as if they were a single directory - for instance, reading file.txt from the StackDir will first try to read user_data/file.txt, and if that file does not exist, main_data/file.txt. Files in directories mounted later take precedence over those mounted earlier. StackDir makes it possible, for example, to have a main game directory with common files and a mod directory overriding some of those files. Then we iterate over all files in the StackDir recursively (using the Yes.deep argument) - including files in subdirectories. The path of each file in the VFS is printed. You should see in the output that the files' paths specify stack as their parent since main and user are mounted to stack. (Note that the paths will refer to stack as parent even if iterating over main and user - as those are now mounted to stack.) Then we get a VFSFile - a D:GameVFS file object - from the stack directory. This file does not exist yet (unless you created it). It will be created when we write to it. To obtain write access, we get the VFSFileOutput struct using the VFSFile.output() method. VFSFileOutput provides basic output functionality. It uses reference counting to automatically close the file when you are done with it. Since we just want to write some simple text, we call its write() method directly. VFSFileOutput.write() writes a raw buffer of data to the file, similarly to fwrite() from the C standard library. Note that we're working on a file from a StackDir. StackDir decides where to actually write the data.
In our case, the newest mounted directory is user, which is also writable, so the data is written to user_data/new_file.txt. In the end, we read the data back using the VFSFileInput class - input analog of VFSFileOutput - which we get with the VFSFile.input() method. We read with the VFSFileInput.read() method, which reads data into the provided buffer, up to the buffer length. We determine how large a buffer we need to read the entire file with the VFSFile.bytes() method. The buffer might also be larger than the file - read() reads as much data as available and returns the part of the buffer containing the read data. For more details about the SimpleVFS API, see the documentation. ^^^^^^^^^ Compiling ^^^^^^^^^ We're going to use dub, which we installed at the beginning, to compile our project. Create a file called dub.json with the following contents: .. code-block:: json { "name": "getting-started", "targetType": "executable", "sourceFiles": ["main.d"], "mainSourceFile": "main.d", "dependencies": { "simplevfs": { "version": "~>0.2.1" } } } This file tells dub that we're building an executable called getting-started from a D source file main.d, and that our project depends on SimpleVFS 0.2.1 or any newer bugfix release of SimpleVFS 0.2. DUB will automatically find and download the correct version of SimpleVFS when the project is built. Now run the following command in your project's directory:: dub build dub will automatically download SimpleVFS and compile it, and then it will compile our program. This will generate an executable called getting-started or getting-started.exe in your directory. License D:GameVFS was created by Ferdinand Majerech aka Kiith-Sa kiithsacmp[AT]gmail.com . SimpleVFS is a fork created by Luis Panadero Guardeño aka Zardoz luis.panadero[AT]gmail.com . The API was inspired by the VFS API of the Tango library. D:GameVFS was created using Vim and DMD on Debian, Ubuntu and Linux Mint as a VFS library in the D programming language.
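The StackDir override rule described in the walkthrough — newest mount wins on reads, writes land in the writable mount on top — is easy to state language-neutrally. Here is a rough Python sketch of just that rule; it is purely illustrative and not the D API (directory contents are modelled as plain dicts):

```python
class Stack:
    """Mounted directories, oldest first; later mounts override earlier ones."""
    def __init__(self):
        self.mounts = []

    def mount(self, directory):
        self.mounts.append(directory)

    def read(self, name):
        # Search the newest mount first, then fall back to older ones.
        for directory in reversed(self.mounts):
            if name in directory:
                return directory[name]
        raise FileNotFoundError(name)

    def write(self, name, data):
        # Writes land in the newest mount (assumed writable here).
        self.mounts[-1][name] = data

main_dir = {"file.txt": "from main_data"}   # read-only in the analogy
user_dir = {}                               # writable, mounted on top
stack = Stack()
stack.mount(main_dir)
stack.mount(user_dir)

print(stack.read("file.txt"))        # falls through to main_data
stack.write("file.txt", "from user_data")
print(stack.read("file.txt"))        # user_data now overrides
```

This mirrors the getting-started example above: the first read falls back to the older mount, and after a write the newer mount shadows it — the "mod directory overrides game directory" pattern.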
- Registered by Luis Panadero Guardeño - 0.2.1 released 2 years ago - Zardoz89/SimpleVFS - github.com/Zardoz89/SimpleVFS - Boost 1.0 - Authors: - - Sub packages: - simplevfs:getting-started, simplevfs:vfsfile - Dependencies: - none - Versions: - Show all 6 versions - Download Stats: 0 downloads today 0 downloads this week 0 downloads this month 2 downloads total - Score: - 0.0 - Short URL: - simplevfs.dub.pm
https://code.dlang.org/packages/simplevfs
On Mon 12 Mar 2018 at 15:20, Arash Esbati <ar...@gnu.org> wrote: > Following up myself, I think I found the problem. I installed this > patch which replaces `case' macro from `cl' with `cl-case' from > `cl-lib'. > > Can you please update your repo, compile the files and try it again? Thanks, that seems to have done it, though like I said it's an on-again, off-again problem. I'll let you know if it pops up again though. Sorry for missing that the first time. I went through and rechecked for any pesky cl functions and found one. Patch attached that removes it. Thanks, Alex >From f402fdac4af17512e02defa87273e6a041e89b05 Mon Sep 17 00:00:00 2001 From: Alex Branham <bran...@utexas.edu> Date: Tue, 13 Mar 2018 08:23:14 -0500 Subject: [PATCH] * tex.el: prefer 'cl-return' over 'return' --- tex.el | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tex.el b/tex.el index b7891a14..d9a0b934 100644 --- a/tex.el +++ b/tex.el @@ -2399,7 +2399,7 @@ this variable to \"<none>\"." (when (with-current-buffer buf (and (equal dir default-directory) (stringp TeX-master))) - (return (with-current-buffer buf TeX-master)))))) + (cl-return (with-current-buffer buf TeX-master)))))) (defun TeX-master-file-ask () "Ask for master file, set `TeX-master' and add local variables." -- 2.16.2 _______________________________________________ auctex-devel mailing list auctex-devel@gnu.org
https://www.mail-archive.com/auctex-devel@gnu.org/msg11682.html
chroot() Change the root directory Synopsis: #include <unistd.h> int chroot( const char *path ); Since: BlackBerry 10.0.0 Arguments: - path - The name of the new root directory. This can include a network root (e.g. /net/node_name). Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The chroot() function causes the path directory to become the root directory, the starting point for path searches for path names beginning with /. The user's working directory is unaffected. The .. entry in the root directory is interpreted to mean the root directory itself. Thus, you can't use .. to access files outside the subtree rooted at the root directory. In order to change the root directory, your process must have the PROCMGR_AID_CHROOT ability enabled. For more information, see procmgr_ability(). Errors: - EACCES - Search permission is denied for a component of path. - EBADF - The descriptor isn't valid. - EFAULT - The path argument points to an illegal address. - EINTR - A signal was caught during the chroot() function. - EIO - An I/O error occurred while reading from or writing to the filesystem. - ELOOP - Too many symbolic links were encountered in translating path. - EMULTIHOP - Components of path require hopping to multiple remote machines, and the filesystem type doesn't allow it. - ENAMETOOLONG - The length of the path argument exceeds {PATH_MAX}, or the length of a path component exceeds {NAME_MAX} while {_POSIX_NO_TRUNC} is in effect. - ENOENT - The named directory doesn't exist or is a null pathname. - ENOLINK - The path points to a remote machine and the link to that machine is no longer active. - ENOTDIR - Any component of the path name isn't a directory. - EPERM - The effective user of the calling process isn't the superuser. Classification: Last modified: 2014-06-24
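A minimal usage sketch: Python's os.chroot() is a thin wrapper over this call, so the behaviour above can be demonstrated without writing C. Note the explicit chdir() — as described, chroot() leaves the working directory untouched — and note that without the PROCMGR_AID_CHROOT ability (or root privileges on Unix systems generally) the call fails with EPERM. The /tmp path is an arbitrary demo choice:

```python
import os

def enter_jail(path):
    """Try to confine the process to 'path'; return False on EPERM."""
    try:
        os.chroot(path)            # fails with EPERM if unprivileged
    except PermissionError:
        return False
    os.chdir("/")                  # chroot() does not move the cwd
    return True

# Only attempt when unprivileged, so a root shell isn't jailed by accident.
confined = enter_jail("/tmp") if os.geteuid() != 0 else None
print(confined)                    # False when run as a normal user
```

After a successful chroot(), path lookups beginning with / start at the new root, and .. in that root refers to the root itself — so the process can't climb back out via relative paths.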
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/c/chroot.html
In this post, I think about reuse and extension, in the context of the Application Programming Model for SAP Cloud Platform, and mindful of Björn Goerke’s SAP TechEd 2018 keynote message in Barcelona – “keep the core clean”. Last week saw the Barcelona edition of SAP TechEd 2018, where SAP CTO Björn Goerke and a great team of role models on stage gave us a keynote with something for everyone – technical and business alike. During the keynote, I tweeted: My three keywords from the #SAPTechEd keynote so far: Open (standards, protocols, APIs) Reuse (important superpower of @sapcp application programming model) Clean (keep the core clean by extending outside of it) I want to think about the “reuse” and “clean” keywords, because in many ways they’re complementary, in that reuse (and by association, extension) can help to achieve the goal of a clean core. Of course, there’s a lot more to it than that, but reusing & extending definitions and services is a key part of building outside of the core, whether for net new applications or to extend existing solutions. That implies that the application programming model, which has reuse as a “superpower”, is a very useful model to know about. So I thought I’d look into an example of reuse and extension that exist for us to meditate upon and learn from. cloud-samples-itelo Earlier this year Oliver Welzel wrote “ITelO – A Sample Business Application for the new Application Programming Model for SAP Cloud Platform” in which he described an application with ra product catalog, and reviews, for the fictitious company ITelO. The data model is in three layers, with each building on the one beneath it. This diagram from the post provides a nice summary of that: The component overview, showing how the data model is built up in layers (Perhaps before continuing with this post, it might be worth you going and taking a read of Oliver’s post. Don’t forget to come back, though!) 
Multiple layers The idea is that there are core artefacts in the “foundation” layer, the “product-catalog” layer builds on top of that, and then there’s the “itelo” specific application layer at the top. Each layer is represented by a repository in GitHub, so all the source is available to study. If we start at the top, and look at the data model definition at the “itelo” layer, this is what we see*, specifically in the db/model.cds source: namespace clouds.itelo; using clouds.products.Products from '@sap/cloud-samples-catalog'; using clouds.foundation as fnd from '@sap/cloud-samples-foundation'; ); } entity Reviews: fnd.BusinessObject { product: Association to Products @title: '{i18n>product}'; reviewer: Association to Reviewers @title: '{i18n>reviewer_XTIT}'; title: String(60) @title: '{i18n>reviewTitle}'; message: String(1024) @title: '{i18n>reviewText}'; rating: Decimal(4, 2) @title: '{i18n>rating}'; helpfulCount: Integer @title: '{i18n>ratedHelpful}'; helpfulTotal: Integer @title: '{i18n>ratedTotal}'; } annotate Reviews with { ID @title: '{i18n>review}'; } entity Reviewers: fnd.Person, fnd.BusinessObject { } annotate Reviewers with { ID @title: '{i18n>reviewer_XTIT}'; } *I’m specifically using the “rel-1.0” branch in each case, because that’s what’s also used in the dependency references that we’ll see shortly, and represents a stable version that we can examine. Reuse through “using” statements Looking at the first few lines, we see some “using” statements: using clouds.products.Products from '@sap/cloud-samples-catalog'; using clouds.foundation as fnd from '@sap/cloud-samples-foundation'; So this is already interesting. Is this reuse in action? It is. But what does it mean, exactly? Let’s investigate. Taking the first “using” statement, something called “clouds.products.Products” is being used from something called “@sap/cloud-samples-catalog”. 
In the Model Reuse section of the documentation on the SAP Help Portal, we can see that this is effectively an import of a definition from another CDS model. OK, which one? Well, we can recognise the “cloud-samples-catalog” name as it’s one of the layers in the diagram we looked at earlier. But how is that resolved? For that, we have to look in the “itelo” layer project’s package.json file, where, amongst other things, we see some dependencies defined: "dependencies": { "@sap/cloud-samples-foundation": "", "@sap/cloud-samples-catalog": "" } Ooh, well that’s exciting, for a start! The package.json file is from the Node Package Manager (NPM) world and the dependencies section is where one defines dependencies to other packages, typically ones like “express”, if you’re building services that handle HTTP requests, for example. But what do we have here? Well, we can see the names referenced in the “using” statements earlier, in other words “@sap/cloud-samples-catalog” and “@sap/cloud-samples-foundation”. But instead of simple package names, they’re mapped to GitHub URLs. And not just any GitHub URLs, but URLs that refer to specific repositories, and indeed specific branches! Taking the URL for the “@sap/cloud-samples-catalog” name, we have: which refers to the rel-1.0 branch of the cloud-samples-catalog repository belonging to SAP. The “product-catalog” layer Looking there, we see a fully formed application – the middle “product-catalog” layer that we saw earlier, with app, srv and db folders representing each of the three components of a typical fully fledged solution based on the application programming model. 
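As an aside, those dependency values are just npm-style Git URLs with an optional "#<committish>" fragment, and that fragment is what pins the rel-1.0 branch. A toy Python illustration of the split — the URL shown is a hypothetical stand-in, since npm accepts several Git URL forms and the exact values aren't reproduced above:

```python
def parse_git_dependency(url):
    """Split an npm-style Git dependency into (repository, ref).

    With no '#ref' fragment, npm falls back to the repository's
    default branch (represented here as None).
    """
    repo, _, ref = url.partition("#")
    return repo, ref or None

repo, ref = parse_git_dependency(
    "git+https://github.com/SAP/cloud-samples-catalog.git#rel-1.0")
print(repo, ref)  # the repository URL, plus the pinned ref "rel-1.0"
```

Pinning a branch (or better, a tag) this way is what gives a dependent project a stable layer to build on, which is exactly why the rel-1.0 branch is used throughout this example.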
In the db folder we see the model.cds file, which starts like this: namespace clouds.products; using clouds.foundation as fnd from '@sap/cloud-samples-foundation'; using clouds.foundation.CodeList; entity Products: fnd.BusinessObject { // general info key ID: String(36); name: localized String @( title: '{i18n>name}', Common.FieldControl: #Mandatory, Capabilities.SearchRestrictions.Searchable ); description: localized String @( title: '{i18n>description}', Common.FieldControl: #Mandatory ); [...] Fractals In a wonderfully fractal way, we notice immediately that this model definition also refers to another package with a “using” statement, but let’s resist descending deeper just at this moment. Instead, we can concentrate on looking at what’s going on with the “using” statement we’ve seen in the consuming definition earlier, which looked like this: using clouds.products.Products from '@sap/cloud-samples-catalog'; We realise that “clouds.products.Products” refers to the Products entity in the “cloud.products” namespace, which is defined here with the “entity” definition: entity Products: fnd.BusinessObject { ... } But what is that “fnd.BusinessObject” sitting between the entity name and the block definition in curly braces? Why, it’s more reuse, this time of the underlying “foundation” layer. Just above in the same file, we can see that this layer is referenced in a “using” statement, this time with a local alias “fnd” defined: using clouds.foundation as fnd from '@sap/cloud-samples-foundation'; So now let’s briefly descend into the fractal. The reference to “fnd.BusinessObject” is to an entity defined in the “foundation” layer, which we can see if we follow the dependency reference in the “product-catalog” layer’s package.json: (It’s worth observing that in this layer we only have data definitions — in the form of “.cds” files — rather than a full blown solution with app, srv and db folders.) 
In this repository (again, branch “rel-1.0”) we can find the definition of the BusinessObject entity in the common.cds file looking like this: abstract entity BusinessObject : ManagedObject { key ID : UUID @( title: '{i18n>uuid}', Common.Text: {$value: name, "@UI.TextArrangement": #TextOnly} ); } Note in passing that here the “BusinessObject” entity is defined as “abstract” which means that it’s just a type declaration rather than something for which instances should exist. Note also that it’s further defined, using a similar pattern to where we saw the “fnd.BusinessObject” reference, by another abstract entity definition “ManagedObject” (you can find this definition of ManagedObject also in the common.cds file). Extension through “extend” statements Moving back up the layers for some air, we see that directly following the “using” statements, there is this: ); } With the “extend” aspect, entity definitions can be repurposed with extra properties, for example. In this case, the existing Products entity (from the “product-catalog” layer’s data definition) is extended with three properties: “reviews”, “averageRating” and “numberOfReviews”. Note that the “reviews” property is an association to a Reviews entity at this (itelo) application layer, defined expressly for this purpose. Moreover, some of the properties in the Reviews entity are also defined as associations to further entities therein, such as the reviewer property which points to the Reviewers entity, which has no properties of its own, but in a beautiful way inherits from some of the definitions (Person and BusinessObject) at the “foundation” layer: entity Reviewers: fnd.Person, fnd.BusinessObject { } Wrapping up That might be a lot to take in, in one sitting. It has become quite clear to me that the facilities afforded by the CDS language in the application programming model are very rich when it comes to reuse and extensions.
Not only at the definition level, but also in the simplicity of how package based references are realised. While at first I thought it was a little odd to see the GitHub repository & branch URLs, and indeed to realise that the package.json mechanism was fundamental to how artefacts in the application programming model are related, I’ve come to think that it’s a natural way to do it, and a celebration of adopting an approach that’s already out there in the world beyond our SAP ecosphere. What’s more, we haven’t even touched on how annotations work and what we are able to do in terms of reuse there too. But I’ll leave that for another time, instead leaving you with the suggestion that reuse is indeed an important superpower of the application programming model, and demonstrably so. And keeping the core clean – well, the more extension and reuse we can achieve, the closer we can get to a cleaner core. This post was brought to you by a chilly Monday morning, by Pact Coffee’s Asomuprisma in my SAP Coffee Corner Radio mug, and by a Spotify mix designed for concentration. Read more posts in this series here: Monday morning thoughts. This is way too heavy reading for a Monday morning. So I will be honest and say I did go to Oliver’s post. I then read your post “lightly”. What does that mean? It means I didn’t dig into the tech behind it. <Sigh> I’m always feeling I’m running as fast as I can and the super train is flying by me! One step at a time – Right? I’ll come back later to look at this one! Michelle Whoa, sorry Michelle! Thanks for being honest and letting me know. Perhaps a second reading will be more fruitful, and I can use your comment as useful input as to how deep or technical I should go for other posts. But yes, one step at a time. Let me know if you have any questions after you’ve come back for a second go. Cheers! Nice post DJ, thanks. I think that ‘smart’ reuse is key enabler of success. 
By 'smart' I mean not making things too complicated, and only making things re-usable if they are likely to be re-used. Often it's hard to tell this ahead of time, but experience does help somewhat. I wonder how we will navigate all of the thousands of CDS artifacts that will be created in the coming years. We could do with some kind of smart browser to show us the artifacts and how they fit together. Otherwise duplication is quite likely, I fear. I think I would get a lot out of a one-day code jam on the Cloud Platform Application Programming Model, given a good teacher/coach. Is there anything planned? For any Australasian harbour-side cities that begin with S?

Cheers Mike, appreciated. Yes, with many things, including this subject, as well as, say, cloud functions, there's a management and orchestration aspect that is important. Hand in hand with that is thoughtful design, too. Currently we don't have application programming model CodeJam events, but I think it's definitely worth considering. I'll reach out to my colleagues to start the conversation. Good suggestion!

Great blog, DJ! Maybe this is a little off-topic, but I have been wondering about a reference in cloud-sample-spaceflight which I almost thought I got the answer for reading your blog 🙂 In cloud-sample-spaceflight there is a reference to '@sap/cds/common' in cloud-sample-spaceflight/db/common.cds. It's not defined in dependencies. So where is it defined? Is it some kind of implicit declaration?
Regards, Henrik

Hey Henrik, thanks for the praise. Glad you like it, and this question is certainly not off-topic, so don't worry! The dependencies are defined in the repo's top-level package.json file, rather than in any of the child directories. The one you're looking at above is in the db directory.
If you look in package.json at the top level, you'll see the dependency declared there. Following the trail to `@sap/cds` with the command "cds version", we can see where cds is installed, and in there we find the "common" definition, which contains, amongst other things, the "managed" definition. Good question, btw (I'm still learning too!)

Thanks for taking the time to reply so even I get it 🙂
Regards, Henrik
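The screenshots in that reply were lost in this extract; the top-level dependency declaration being described would look roughly like this (the version range is illustrative, not taken from the actual repository):

```json
{
  "dependencies": {
    "@sap/cds": "^2.10.0"
  }
}
```

Node's standard module resolution then finds `@sap/cds/common` under node_modules, which is why db/common.cds can reference it without declaring anything itself.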
https://blogs.sap.com/2018/10/29/monday-morning-thoughts-exploring-reuse/
Using findleak.py, attached to #30642, modified to always print the filename, I discovered that test_parenmatch also flashes (along with test_searchbase). The leak test runs each test_xyz module 9 times, so is a good way to see flashing.

Adding 'root.withdraw' stops the single flash when running test_parenmatch once. However, it does not stop flashing when running 9 times with -R: refleak test. Flash continues even if the class is reduced to

class ParenMatchTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.root = Tk()
        cls.root.withdraw()

    @classmethod
    def tearDownClass(cls):
        cls.root.destroy()
        del cls.root

    def test_dummy(self):
        pass

This is a puzzle since the same skeleton is in several other files. In fact, at least one, test_autocomplete, does not even have root.withdraw. In any case, I will add it here.
https://bugs.python.org/msg295946
> insmod i82365.o

Did you have ideas on how to handle naming? "poll_status", for example, seems fairly generic. Two different approaches come to me immediately: 1) all modules/drivers share the same name space, and 2) all such environment variables have an implied prefix unique to each driver. The first is the most obvious and easiest to write, but name collisions have the potential to do very rude things. I think that we would end up with the second approach, but the prefixes would be explicit (insmod i82365.o opts="i82365-irq_mask=0xeff,i82365-poll_status"). I believe loaded modules cannot share the same name, and if this is so then the module name would be a clear and easy way to generate a prefix that could easily be added by the kernel functions. Preferably, one of the call's parameters would specify whether or not to automatically add the prefix.

Even with implied prefixes we have the same problem. If the getkenv call adds the prefix, then a driver cannot access variables outside its namespace. If getkenv does not add the prefix, then two different drivers may read and write the same environment variable, leading to chaos.

-Mike
kujawa@cs.ucf.edu
http://lkml.org/lkml/1996/12/13/36
One of the measures of success of a Beowulf cluster is the number of people waiting in line to run their code on the system. Build your own low-cost supercomputer, and your cycle-starved colleagues will quickly become your new best friends. But, when they all get accounts and start running jobs, they'll soon find themselves battling each other for the limited resources of the machine. At that time, you'll need a package that automatically schedules jobs and allocates resources on the cluster — something akin to the batch job queuing facilities used on the business systems and mainframes of yesteryear. Batch facilities would line up jobs, execute them in turn as the appropriate resources became available, and deliver the output of each job back to the submitter.

Clustered systems and emerging grid technologies have driven the need for new job scheduling packages in the computational science realm. Two scheduling packages that are increasingly being used are OpenPBS (Portable Batch System) and Sun Microsystems's Grid Engine. OpenPBS is an open source package offered and supported by Veridian Systems. (Veridian also offers an enhanced commercial version called PBS Pro.) The source for Grid Engine is also available at no cost. Both packages run on a wide variety of Unix and Linux systems, and may be used for both serial and parallel job control. This month's column will focus on implementing OpenPBS on a typical Beowulf cluster.

Introducing OpenPBS

OpenPBS consists of three primary components — a job server, a job executor, and a job scheduler — and a set of commands and X-based tools for submitting jobs and monitoring queues. The job server (pbs_server) handles basic queuing services such as creating and modifying a batch job and placing a job into execution when it's scheduled to be run. The job executor (pbs_mom) is the daemon that actually runs jobs. The job scheduler (pbs_sched) is another daemon.
It knows the site's policies and rules about when and where jobs can be run. In the simplest implementation, pbs_server and pbs_sched are run only on the front-end node, while pbs_mom is run on every node of the cluster that can run jobs, including the front-end node.

Figure One presents a typical eight-node Beowulf cluster running OpenPBS. In the figure, every node, including the front-end machine, node01, runs the pbs_mom daemon, while only the front-end node runs the pbs_server and pbs_sched daemons. (By the way, a Mom is a node running pbs_mom; the Server is the node running pbs_server and pbs_sched.) OpenPBS commands (qsub, qstat, etc.) are available on every node. In the configuration shown, an external machine, climate.ornl.gov, can access the queues and monitor the batch system remotely.

A variety of schedulers are available for OpenPBS, but only the default scheduler, called fifo, will be discussed here. (Don't worry. Jobs are not run first-in, first-out by this scheduler, despite what its name suggests.) Third party schedulers may also be used in combination with OpenPBS. For instance, Maui is one scheduler often used with OpenPBS on Beowulf clusters.

Configuring OpenPBS

Source for OpenPBS is available on its website, but you must register on the website and await approval before being allowed to download the code and documentation. RPMs are also available on the site, and these may be used for the implementation described here. All components may be built using the standard configure, make, and make install procedure. A host of configure options are described in the OpenPBS Administrator's Guide, along with instructions for building and installing OpenPBS on various Unix systems like the Cray, SGI, and IBM SP. Once the code is built and installed on the front-end node, the pbs_mom and command components should be installed on the remaining cluster nodes.
You can install the full suite on every node in the cluster if disk space is ample and you don't get confused about which node is the server. Run-time information is stored in subdirectories under $PBS_HOME, which is assumed to be /usr/spool/PBS, the default location.

Before starting any daemons, a few configuration files need to be created or updated. First, each node needs to know what machine is running the server. This is conveyed through the $PBS_HOME/server_name file, which, for our configuration, should contain the following line:

node01

Second, the pbs_server daemon must know which nodes are available for executing jobs. This information is kept in a file called $PBS_HOME/server_priv/nodes, and the file appears only on the front-end node where the job server runs. You can set various properties for each node listed in the nodes file, but for this simple configuration, only the number of processors is included. $PBS_HOME/server_priv/nodes should contain the following lines:

node01 np=2
node02 np=2
node03 np=2
node04 np=2
node05 np=2
node06 np=2
node07 np=2
node08 np=2

Third, each pbs_mom daemon needs some basic information to participate in the batch system. This configuration information is contained in $PBS_HOME/mom_priv/config on every node. The following lines should be in this file for the example configuration:

$logevent 0x0ff
$clienthost node01
$restricted climate.ornl.gov

The $logevent directive specifies what information should be logged during operation. A value of 0x0ff causes all messages except debug messages to be logged, while 0x1ff causes all messages, including debug messages, to be logged. The $clienthost directive tells each Mom where the Server is — in this case it's on node01. $restricted details which hosts are allowed to connect to Mom directly. Hosts allowed to connect can make internal queries of Mom using monitoring tools such as xpbsmon.
In this case, climate.ornl.gov is a system external to the cluster that can monitor the batch system (again, see Figure One for the topology).

Starting Up OpenPBS

Once all of the configuration files are ready, the component daemons can be started. It's easiest if the Moms are started first, so they are ready to communicate with the Server once it's launched. For this configuration, pbs_mom should be started on all computational nodes, including node01, as follows:

[root@node01 root]# pbs_mom
[root@node02 root]# pbs_mom
[root@node03 root]# pbs_mom
…
[root@node07 root]# pbs_mom
[root@node08 root]# pbs_mom

Next, the Server should be started on node01. The first time you run pbs_server, start it with the -t create flag to initialize the server configuration. Once the Server is running, qmgr can be used to construct one or more job queues and their properties. Using the commands shown in Figure Two, we create a single execution queue, called penguin_exec, for jobs that run for more than one second and less than 48 hours. The default time for jobs put in the queue is set to thirty minutes. Once the queue is created, we enable and start the queue. The Server is then provided a list of managers, forrest@climate.ornl.gov in this case. Finally, server scheduling is set to true, which causes jobs to be scheduled.
The Server configuration can be saved to a file as follows:

[root@node01 root]# qmgr -c "print server" > /root/server.config

Listing Two: jobscript.csh

#!/bin/csh
#PBS -N Hello_job
#PBS -l nodes=2:ppn=2
#PBS -l walltime=05:00
#PBS -m be
#
echo "The nodefile is ${PBS_NODEFILE} and it contains:"
cat ${PBS_NODEFILE}
echo ""
#
time /usr/local/bin/mpirun -nolocal -machinefile ${PBS_NODEFILE} \
  -np `wc -l ${PBS_NODEFILE} | awk '{print $1}'` hello_waster

Figure Two: Launching the Server and creating a queue

[root@node01 root]# pbs_server -t create

Then run qmgr to configure the server:

[root@node01 root]# qmgr
Max open servers: 4
Qmgr: create queue penguin_exec
Qmgr: set queue penguin_exec queue_type = execution
Qmgr: set queue penguin_exec resources_max.cput = 48:00:00
Qmgr: set queue penguin_exec resources_min.cput = 00:00:01
Qmgr: set queue penguin_exec resources_default.cput = 00:30:00
Qmgr: set queue penguin_exec enabled = true
Qmgr: set queue penguin_exec started = true
Qmgr: set server managers = forrest@climate.ornl.gov
Qmgr: set server scheduling = true
Qmgr: quit

The Server configuration can later be fed back into qmgr to recreate the configuration as follows:

[root@node01 root]# qmgr < /root/server.config

Finally, the default scheduler (fifo) is started using the default configuration (provided with the scheduler):

[root@node01 root]# pbs_sched

The scheduler may be configured with various policies by editing the $PBS_HOME/sched_priv/sched_config file. You can specify job sorting methods, assign individual users elevated priorities, and establish prime-time and holiday scheduling policies. After the Server has been manually started and configured as described above, scripts should be written and placed in appropriate /etc/rc.d directories to automatically start the three daemons on the front-end node and Mom on all of the nodes at boot. Alternatively, these daemons can be started out of the rc.local file.
Submitting Jobs to OpenPBS

Both serial and parallel jobs can be submitted to OpenPBS queues, but the package has no way of enforcing the use of allocated nodes. A special script file can be used to ensure that the intentions of OpenPBS are met. First, however, let's write some parallel code to test the batch queue. The hello_waster.c code shown in Listing One is a standard "Hello World" program that prints the MPI process rank and processor name, and then wastes time so that the job will run for a few minutes.

Listing One: hello_waster.c

#include <stdio.h>
#include <math.h>
#include "mpi.h"

To run the program, you submit it to the batch queue with qsub and a special script like the one shown in Listing Two. jobscript.csh is just a shell script, but the lines that start with #PBS (ignored by csh as comments) are OpenPBS directives that describe the job to the Server. PBS directives provide a way of specifying job attributes in addition to qsub command line options. In this example, the job is named "Hello_job" (#PBS -N Hello_job), two nodes are requested with two processors per node (#PBS -l nodes=2:ppn=2), wall-clock time is guessed to be 5 minutes (#PBS -l walltime=05:00), and mail should be sent at the beginning and end of the job (#PBS -m be).

The script prints the list of nodes that the Scheduler allocates for the job (contained in $PBS_NODEFILE), then executes the program by calling mpirun. To be sure that all of the allocated nodes are used, the -machinefile flag is passed to mpirun with the name of the file which contains the node list. The -nolocal flag is passed to mpirun so that only the nodes listed in $PBS_NODEFILE are used and no local process is created. The -np flag of mpirun is used to specify the number of processes. In this example, the number of requested processes is obtained by counting the number of lines in the $PBS_NODEFILE.
As a result, the number of processes can be changed for future submissions of this script by simply changing the nodes and ppn values at the top of the script. The code should be compiled using mpicc, and the job script submitted to the penguin_exec queue using the qsub command as follows:

[forrest@node01 forrest]$ mpicc -o hello_waster hello_waster.c -lm
[forrest@node01 forrest]$ qsub -q penguin_exec jobscript.csh
21.node01

The status of queues and jobs can be monitored using the qstat command, as shown in Figure Three.

Figure Three: Monitoring queues with qstat

[forrest@node01 forrest]$ qstat

The output from this job is saved in a file called Hello_job.o21, shown here:

node02
node02
node01
node01

As requested in the script file, email is sent to the user to inform him that the job has been scheduled, and a second message is sent upon completion of the job. An example of the second email message is shown here:

Date: Tue, 16 Jul 2002 23:06:11 -0400
From: adm <adm@node01>
To: forrest@node01
Subject: PBS JOB 21.node01

PBS Job Id: 21.node01
Job Name: Hello_job
Execution terminated
Exit_status=0
resources_used.cput=00:00:00
resources_used.mem=4372kb
resources_used.vmem=10404kb
resources_used.walltime=00:04:46

Scheduling jobs and managing resources on a Beowulf cluster can be challenging when more than a few users want to run parallel codes there. Job queuing and scheduling packages like OpenPBS, Grid Engine, and others make job handling much more manageable. This month we installed OpenPBS and learned how to create, submit, and control jobs. But that's just the beginning. OpenPBS is quite powerful. For example, you can define rules such as execution order, synchronization, and conditional execution between batch jobs. Future columns will cover other features of job scheduling facilities useful on Beowulf clusters.
http://www.linux-mag.com/id/1184/
Designing a schema

About schemas and fields

The schema specifies the fields of documents in an index. Each document can have multiple fields, such as title, content, url, date, etc. Some fields can be indexed, and some fields can be stored with the document so the field value is available in search results. Some fields will be both indexed and stored. The schema is the set of all possible fields in a document. Each individual document might only use a subset of the available fields in the schema. For example, a simple schema for indexing emails might have fields like from_addr, to_addr, subject, body, and attachments, where the attachments field lists the names of attachments to the email.

Built-in field types

Whoosh provides some useful predefined field types:

whoosh.fields.TEXT
This type is for body text. It indexes (and optionally stores) the text and stores term positions to allow phrase searching. TEXT fields use StandardAnalyzer by default. To specify a different analyzer, use the analyzer keyword argument to the constructor, e.g. TEXT(analyzer=analysis.StemmingAnalyzer()). See About analyzers. By default, TEXT fields store position information for each indexed term, to allow you to search for phrases. If you don't need to be able to search for phrases in a text field, you can turn off storing term positions to save space. Use TEXT(phrase=False). By default, TEXT fields are not stored. Usually you will not want to store the body text in the search index. Usually you have the indexed documents themselves available to read or link to based on the search results, so you don't need to store their text in the search index. However, in some circumstances it can be useful (see How to create highlighted search result excerpts). Use TEXT(stored=True) to specify that the text should be stored in the index.

whoosh.fields.KEYWORD
This field type is designed for space- or comma-separated keywords. This type is indexed and searchable (and optionally stored).
To save space, it does not support phrase searching. To store the value of the field in the index, use stored=True in the constructor. To automatically lowercase the keywords before indexing them, use lowercase=True. By default, the keywords are space separated. To separate the keywords by commas instead (to allow keywords containing spaces), use commas=True. If your users will use the keyword field for searching, use scorable=True.

whoosh.fields.ID
The ID field type simply indexes (and optionally stores) the entire value of the field as a single unit (that is, it doesn't break it up into individual terms). This type of field does not store frequency information, so it's quite compact, but not very useful for scoring. Use ID for fields like url or path (the URL or file path of a document), date, category – fields where the value must be treated as a whole, and each document only has one value for the field. By default, ID fields are not stored. Use ID(stored=True) to specify that the value of the field should be stored with the document for use in the search results. For example, you would want to store the value of a url field so you could provide links to the original in your search results.

whoosh.fields.STORED
This field is stored with the document, but not indexed and not searchable. This is useful for document information you want to display to the user in the search results, but don't need to be able to search for.

whoosh.fields.NUMERIC
This field stores int, long, or floating point numbers in a compact, sortable format.

whoosh.fields.DATETIME
This field stores datetime objects in a compact, sortable format.

whoosh.fields.BOOLEAN
This simple field indexes boolean values and allows users to search for yes, no, true, false, 1, 0, t or f.

whoosh.fields.NGRAM
TBD.

Expert users can create their own field types.
Creating a Schema

To create a schema:

from whoosh.fields import Schema, TEXT, KEYWORD, ID, STORED
from whoosh.analysis import StemmingAnalyzer

schema = Schema(from_addr=ID(stored=True),
                to_addr=ID(stored=True),
                subject=TEXT(stored=True),
                body=TEXT(analyzer=StemmingAnalyzer()),
                tags=KEYWORD)

If you aren't specifying any constructor keyword arguments to one of the predefined fields, you can leave off the brackets (e.g. fieldname=TEXT instead of fieldname=TEXT()). Whoosh will instantiate the class for you.

Alternatively you can create a schema declaratively using the SchemaClass base class:

from whoosh.fields import SchemaClass, TEXT, KEYWORD, ID, STORED

class MySchema(SchemaClass):
    path = ID(stored=True)
    title = TEXT(stored=True)
    content = TEXT
    tags = KEYWORD

You can pass a declarative class to create_in() or create_index() instead of a Schema instance.

Modifying the schema after indexing

After you have created an index, you can add or remove fields to the schema using the add_field() and remove_field() methods. These methods are on the Writer object:

writer = ix.writer()
writer.add_field("fieldname", fields.TEXT(stored=True))
writer.remove_field("content")
writer.commit()

(If you're going to modify the schema and add documents using the same writer, you must call add_field() and/or remove_field() before you add any documents.)

These methods are also on the Index object as a convenience, but when you call them on an Index, the Index object simply creates the writer, calls the corresponding method on it, and commits, so if you want to add or remove more than one field, it's much more efficient to create the writer yourself:

ix.add_field("fieldname", fields.KEYWORD)

In the filedb backend, removing a field simply removes that field from the schema – the index will not get smaller; data about that field will remain in the index until you optimize.
Optimizing will compact the index, removing references to the deleted field as it goes:

writer = ix.writer()
writer.add_field("uuid", fields.ID(stored=True))
writer.remove_field("path")
writer.commit(optimize=True)

Because data is stored on disk with the field name, do not add a new field with the same name as a deleted field without optimizing the index in between:

writer = ix.writer()
writer.delete_field("path")
# Don't do this!!!
writer.add_field("path", fields.KEYWORD)

(A future version of Whoosh may automatically prevent this error.)

Dynamic fields

Dynamic fields let you associate a field type with any field name that matches a given "glob" (a name pattern containing *, ?, and/or [abc] wildcards). You can add dynamic fields to a new schema using the add() method with the glob keyword set to True:

schema = fields.Schema(...)
# Any name ending in "_d" will be treated as a stored
# DATETIME field
schema.add("*_d", fields.DATETIME(stored=True), glob=True)

To set up a dynamic field on an existing index, use the same IndexWriter.add_field method as if you were adding a regular field, but with the glob keyword argument set to True:

writer = ix.writer()
writer.add_field("*_d", fields.DATETIME(stored=True), glob=True)
writer.commit()

To remove a dynamic field, use the IndexWriter.remove_field() method with the glob as the name:

writer = ix.writer()
writer.remove_field("*_d")
writer.commit()

For example, to allow documents to contain any field name that ends in _id and associate it with the ID field type:

schema = fields.Schema(path=fields.ID)
schema.add("*_id", fields.ID, glob=True)
ix = index.create_in("myindex", schema)

w = ix.writer()
w.add_document(path=u"/a", test_id=u"alfa")
w.add_document(path=u"/b", class_id=u"MyClass")
# ...
w.commit()

qp = qparser.QueryParser("path", schema=schema)
q = qp.parse(u"test_id:alfa")

with ix.searcher() as s:
    results = s.search(q)

Advanced schema setup

Field boosts

You can specify a field boost for a field.
This is a multiplier applied to the score of any term found in the field. For example, to make terms found in the title field score twice as high as terms in the body field:

schema = Schema(title=TEXT(field_boost=2.0), body=TEXT)

Field types

The predefined field types listed above are subclasses of fields.FieldType. FieldType is a pretty simple class. Its attributes contain information that defines the behavior of a field. The constructors for most of the predefined field types have parameters that let you customize these parts. For example:

- Most of the predefined field types take a stored keyword argument that sets FieldType.stored.
- The TEXT() constructor takes an analyzer keyword argument that is passed on to the format object.

Formats

A Format object defines what kind of information a field records about each term, and how the information is stored on disk. For example, the Existence format stores only the fact that a term appears in a document, whereas the Positions format also records the positions at which the term appears. The indexing code passes the unicode string for a field to the field's Format object. The Format object calls its analyzer (see text analysis) to break the string into tokens, then encodes information about each token.

Whoosh ships with the following pre-defined formats. The STORED field type uses the Stored format (which does nothing, so STORED fields are not indexed). The ID type uses the Existence format. The KEYWORD type uses the Frequency format. The TEXT type uses the Positions format if it is instantiated with phrase=True (the default), or Frequency if phrase=False. In addition, further formats are implemented for the possible convenience of expert users, but are not currently used in Whoosh.

Vectors

The main index is an inverted index. It maps terms to the documents they appear in. It is also sometimes useful to store a forward index, also known as a term vector, that maps documents to the terms that appear in them.
For example, imagine an inverted index for a field, mapping each term to the documents it appears in. The corresponding forward index, or term vector, would map each document to the terms that appear in it. If you set FieldType.vector to a Format object, the indexing code will use the Format object to store information about the terms in each document. Currently by default Whoosh does not make use of term vectors at all, but they are available to expert users who want to implement their own field types.
https://whoosh.readthedocs.io/en/latest/schema.html
Red Hat Bugzilla – Bug 75287 <ostream> vs <ostream.h> Last modified: 2007-04-18 12:47:15 EDT

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.2b) Gecko/20021002

Description of problem:
The following file compiles:

#include <ostream.h>
ostream& operator<< (ostream&, int);

but gives the warning:

In file included from /usr/include/c++/3.2/backward/ostream.h:31,
                 from try.C:1:
/usr/include/c++

If I change <ostream.h> to <ostream> however, I get:

try.C:3: syntax error before `&' token

Version-Release number of selected component (if applicable):

How reproducible: Always

Steps to Reproduce:
1. c++ -c file.C with the above file

Actual Results: See above

Expected Results: Should compile?

Additional info: None

The compiler is right, your program is not ISO C++.

#include <ostream>
std::ostream& operator<< (std::ostream&, int);

is. Or you could add

using namespace std;

or

using std::ostream;

after your includes.

Wow, that was quick. Thanks. I see now that converting my entire tree to new style headers is going to involve more than just a few stream edits.
https://bugzilla.redhat.com/show_bug.cgi?id=75287
At least for v2 and earlier, mscorlib.dll is a special case. That causes it and its types to be loaded differently from other assemblies. ‘official’ mscorlib Assembly will always be returned – regardless of the version that was requested. The same thing goes for loading by static reference to. Reflecting On Another Mscorlib.dll: - Reflect on the ‘official’ mscorlib.dll. You could change the version of the CLR that your app loads in order to reflect on the desired mscorlib version. - Use the unmanaged metadata API to get the desired info. - Create an asmmeta file for the desired mscorlib.dll and parse it at runtime. Could you take a look at my comment here about Assembly.Load? Also, is the source code in the SSCLI for the threading & networking services the same in both the SSCLI and the .NET framework? Oops I meant the comment in your post on Assembly.Codebase and Location. Sure. Sorry, my responses are sometimes delayed, since I’m busy working on the product, not technical support. If you need a quicker response, you may want to consider talking to Microsoft’s official tech support channels. Microsoft’s official newsgroups are also good, since some non-MS MVP’s may also answer your questions quickly (in addition to the MS people who answer). Anyway, yes, the code for that is basically the same for both (though, of course, it takes time for changes from the CLR to propagate to SSCLI). Thanks Can you give me some pointers (articles, links to examples) about the unmanaged metadata API you mentioned at the end of your article, please? What newsgroups or forums I could ask about these kind of problems? Many thanks Cris Try ms-help://MS.MSDNQTR.2003OCT.1033/dnmag00/html/metadata.htm from MSDN, or search for ".NET Unmanaged Metadata API" on the web. For newsgroups, try microsoft.public.dotnet.* (the CLR newsgroup is best for questions about metadata). Thanks Suzanne! I am working in vb.net. 
I am getting an eror "Object reference not set to an instance of an object " which its error source in mscorlib. I am getting this error while working with the farpoint grid which has one of a columns as checkbox .When the statement .value = 1 (which checks the checkbox) it throws this error plz help me out Binal Dalal, please keep comments here related to the loader. See for general support. i have visual C++ 6 and not exist mscorlib… Why; help please It’s not there because the CLR is not installed. VS6 is pre-.NET. VS.NET starts with version 7. I’m still reading through your archives, but what is a asmmeta file? Google returns 0 hits, and I can’t find it in MSDN. Hello! I have unmanaged C++ code which I want to be managed, and than I want to make web service from that managed code. The best way to do it? Thanks! i got this problem, after downloading the sample code from msdn/ I am a programmer of vb6 . now i am using VB.Net . i am trying to communicate the machine on serial port using vb.net. But i am facing a Error "Exception from HRESULT: 0x800A1F45". Remember i am using MSCOMM32 for serial communication. as we did in vb6.0 File Include as Refenece (1)Interop.MSCommLib (2)mscorlib Plz Help me as soon as posible Regards Saqib I had faced this type of exception And i have no idea for what reason this will propogate.. Please do reply me, ——————————————– Exception #1 Source: mscorlib Invalid length for a Base-64 char array. at System.Convert.FromBase64String(String s) at System.Web.UI.LosFormatter.Deserialize(String input) at System.Web.UI.Page.LoadPageStateFromPersistenceMedium() ——————————————– What exactly is mscorlib all about? Is this the base building block for the primary namespaces in .Net? Can I open it and view it’s code? I am havin visual studio1.1. Can i use System.Collections.Generic Namespace?? i am using .NET 2003 ,i install to my computer crystal report 10 and then all my reports in the project return to crystal report 10. 
And when I create a setup project, every file's dependency is mscorlib. What can I do to solve this problem?

Hi, while using Crystal Reports XI in ASP.NET there is an error: "Not enough memory for operation."

By "Frameworks assemblies," I mean the assemblies that ship with the CLR. But I'm not counting mscorlib.dll.

I'm getting this error:
Description: Input string was not in correct format
Source: mscorlib
Can you tell me what this might be about?

Hi, I am using VB.NET. Whenever I call the ***.dll I get the following error from mscorlib: Could not find file 'C:Outputdebug.ddd'. And also: Exception has been thrown by the target of an invocation.

I want to know, when we double-click a .NET exe, how does the Hosting API get called to load the CLR into the process? Thanks and regards, Saurabh Jain itsaurabh@yahoo.com

mscorlib works in the application at development time but not after publishing on the same machine. Please give me a suitable solution for this. Thanks & regards, Ramesh H. Sahoo

I see that mscorlib is integrated closely with the CLR. That must be why errors thrown by mscorlib are so hard to trace. One such error that we face at our client's site is the infamous 'Handle is invalid' error. Given below is the stack trace.

mscorlib
The handle is invalid.
at System.IO.__Error.WinIOError(Int32 errorCode, String str)
at System.IO.FileStream.ReadCore(Byte[] buffer, Int32 offset, Int32 count)
at System.IO.FileStream.Read(Byte[] array, Int32 offset, Int32 count)
at System.Xml.XmlStreamReader.Read(Byte[] data, Int32 offset, Int32 length)
at System.Xml.XmlScanner.Read()
at System.Xml.XmlScanner.ScanName()
at System.Xml.XmlScanner.ScanMarkup()
at System.Xml.XmlScanner.ScanToken(Int32 expected)
at System.Xml.XmlTextReader.SetElementValues()
at System.Xml.XmlTextReader.ParseElement()
at System.Xml.XmlTextReader.Read()
at System.Xml.XmlValidatingReader.ReadNoCollectTextToken()
at System.Xml.XmlValidatingReader.Read()
at System.Xml.XmlLoader.LoadCurrentNode()

This happens occasionally while reading an XML document and seems to have no particular cause, since at other times the same code executes successfully.

The following error message results after an attempt to save a VB 2005 form: "Could not load type 'System.Byte' from assembly 'mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' because the format is invalid." Can you tell me what is causing this error? (The software is on the hard drive and the application is stored on the server.) And I want to know how to correct this problem.
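The stack trace above shows the XML reader failing when the underlying FileStream's OS handle becomes invalid mid-parse (for example, closed by another component or a dropped network share). The failure mode — a lazy reader pulling from a stream whose handle has gone bad — can be illustrated with a hedged Python sketch; this is an analogy only, not the .NET code path:

```python
import io
import xml.etree.ElementTree as ET

def parse_stream(stream):
    # The parser pulls data from the stream lazily, so a handle that goes
    # bad before or during the parse surfaces as an I/O error deep inside
    # the XML reader, much like FileStream.ReadCore failing under
    # XmlTextReader.Read() in the trace above.
    return ET.parse(stream).getroot()

ok = parse_stream(io.StringIO("<root><child/></root>"))
print(ok.tag)  # root

bad = io.StringIO("<root/>")
bad.close()  # simulate the handle being invalidated before the read
try:
    parse_stream(bad)
except ValueError as err:
    print("read failed:", err)
```

The intermittent nature reported in the comment fits this pattern: the code is correct, but it only fails on the occasions when something else has invalidated the handle first.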
https://blogs.msdn.microsoft.com/suzcook/2003/06/30/mscorlib-dll/
Adding Python module to MLE which is used by APEX
Mahmoud_Rabie, Dec 21, 2018 8:11 AM

Hello Everybody,

First of all, I would like to thank joelkallman-Oracle for this awesome article: Oracle Database + APEX + JavaScript/Python = Awesome!

I have tested running Python code from SQL Commands of the early-adopter instance. It is amazing! However, I was wondering how to add Python modules to the MLE engine that would be used by APEX. Say I have an xyz module which is added by:

pip3 install xyz

Regards,
Mahmoud

1. Re: Adding Python module to MLE which is used by APEX
Bastian Hossbach-Oracle, Dec 22, 2018 5:51 PM (in response to Mahmoud_Rabie) — 1 person found this helpful

Hi Mahmoud,

The APEX + MLE hosted preview is all about running JavaScript and Python code snippets in APEX. For deploying JavaScript and Python modules, we recommend getting our latest preview release of MLE (0.3.0).

Note that our current support for running third-party Python code is very limited (in contrast to third-party JavaScript code, which should all run fine), but we are working hard on it. Most third-party Python modules cannot be used yet and, thus, we do not encourage users to try it out. However, if you really want to give it a try, here is how you can do it. Simply place the third-party Python code (e.g., the module "aegon") in a directory "moduledeps" beside your own code (e.g., "myconv.py"). Finally, put everything into a single zip file and deploy it (see).

Here is one example that can be run in MLE 0.3.0:

$ mkdir myconv
$ cd myconv
$ mkdir moduledeps
$ pip install --target=./moduledeps aegon
$ cat myconv.py
from aegon import Measurement, Length

def doconv(inch_in: float) -> str:
    distance = Measurement(inch_in, Length.inches)
    distance.convert_to(Length.meters)
    return str(distance)

exports['to_meter'] = doconv
$ zip -r myconv.zip *

Best regards,
Bastian

2. Re: Adding Python module to MLE which is used by APEX
Mahmoud_Rabie, Dec 23, 2018 4:51 AM (in response to Bastian Hossbach-Oracle)

Awesome! Thanks a lot.
3. Re: Adding Python module to MLE which is used by APEX
Mahmoud_Rabie, Jun 22, 2019 11:51 AM (in response to Mahmoud_Rabie)

Dear joelkallman-Oracle,
Please tell us: when will MLE be released with APEX?
Regards,
Mahmoud

4. Re: Adding Python module to MLE which is used by APEX
joelkallman-Oracle, Jun 24, 2019 12:03 AM (in response to Mahmoud_Rabie) — 1 person found this helpful

Hi Mahmoud,
It will be in some future database version. I'm really unable to provide a specific date or version. It could be Database 20. It could be a version after that.
Joel

5. Re: Adding Python module to MLE which is used by APEX
Mahmoud_Rabie, Jun 24, 2019 1:00 AM (in response to joelkallman-Oracle)

Thanks a lot!
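The myconv.py in Bastian's reply depends on two things that only exist inside an MLE deployment: the injected `exports` dict and the bundled third-party "aegon" module. For experimenting locally before zipping, a minimal stand-in can mimic that contract. The names and the inch-to-meter arithmetic below are assumptions for illustration, not MLE's actual runtime:

```python
# Stand-in for MLE's injected 'exports' dict; inside a real MLE deployment
# the runtime provides this, so you would not create it yourself.
exports = {}

def doconv(inch_in: float) -> str:
    # Plain arithmetic in place of the aegon Measurement/Length API,
    # which is assumed unavailable outside the deployed zip:
    # 1 inch = 0.0254 meters.
    return f"{inch_in * 0.0254:.4f} meters"

exports['to_meter'] = doconv

print(exports['to_meter'](100.0))  # 2.5400 meters
```

Registering the callable in `exports` is the part that matters: it mirrors how the deployed module publishes `to_meter` so that SQL or APEX code can call it by name.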
https://community.oracle.com/thread/4191903