New to Helpshift? Try out sample projects in 3 clicks. Or, follow these simple steps to add Helpshift in-app support to your Android app right away. For example, update your AndroidManifest.xml as described here. If you have released v7.7.0 or v7.7.1 to production, please get in touch with us via support and make sure to upgrade to v7.7.2 ASAP. Apologies for any inconvenience that this has caused. Once the SDK dependency (version 7.11.1) is added to your Gradle file, you are ready to have conversations with your users! Move on to Initializing Helpshift in your App.

If your Android project is based on IntelliJ or Eclipse, download the latest version of the Android SDK. The zip file contains the Helpshift SDK in the Android Library Project format. If your Android project is based on Maven, follow the corresponding instructions.

From SDK version 4.4.0 onwards, the Helpshift Android SDK requires a minimum API level of 14. On API levels 14 and 15 you will not receive rich push notifications, which are only supported from Android API level 16. With the transition to TLSv1.2 on Sep 20, 2021, connections to a Helpshift SDK via any previous TLS version will not work as expected.

The Helpshift Android SDK requires the WRITE_EXTERNAL_STORAGE permission. For SDK v6.2.0 and above this permission is optional, and you can choose to not include it in your app's manifest file. The behaviour of the SDK based on whether this permission is mentioned in your manifest file is detailed here.

If you are having issues with Helpshift integration, head over to the Troubleshooting section for further information. If you've successfully tested out an in-app conversation, it's time to dive into these advanced topics. If you are using Proguard, you will need to add the following to your project's proguard-project.txt file.
# For Helpshift SDK <= 4.8.1
-keepnames class * extends com.helpshift.support.fragments.MainFragment

# For Serializable classes
-keepclassmembers class * implements java.io.Serializable {
    static final long serialVersionUID;
    private static final java.io.ObjectStreamField[] serialPersistentFields;
    private void writeObject(java.io.ObjectOutputStream);
    private void readObject(java.io.ObjectInputStream);
    java.lang.Object writeReplace();
    java.lang.Object readResolve();
}

# If the app uses support libs version 23 or below
-keepclassmembernames class android.support.v4.app.Fragment {
    android.support.v4.app.FragmentManagerImpl mChildFragmentManager;
}

# Support design
-dontwarn android.support.design.**
-keep class android.support.design.** { *; }
-keep interface android.support.design.** { *; }
-keep public class android.support.design.R$* { *; }

# Appcompat
-keep public class android.support.v7.widget.** { *; }
-keep public class android.support.v7.internal.widget.** { *; }
-keep public class android.support.v7.internal.view.menu.** { *; }
-keep public class * extends android.support.v4.view.ActionProvider {
    public <init>(android.content.Context);
}

# Cardview
# Based on this issue
-keep class android.support.v7.widget.RoundRectDrawable { *; }

Proguard rules for Android support libraries are taken from the android-proguard-snippets repository, which you can review here.
https://developers.helpshift.com/android/getting-started/
a guide to releasing your first npm package

Introduction

npm (node package manager) is the world's largest software registry. According to the documentation, npm consists of three distinct components:

- the registry
- the website
- the CLI

The registry is an immense database containing all the node packages & modules ever published, complete with version info and metadata. On the website, you can search for and view packages, create an npm profile, and manage user settings. The CLI provides the primary means of interacting with npm, including the publishing of packages & modules, which we will now cover in-depth.

Prerequisites:

1. an npm account
2. a GitHub account
3. the current version of Node.js
4. the current stable version of npm

npm install npm@latest -g

Package vs Module

It is important to understand the two types of objects you can publish to the registry.

How is a package defined?

- a) A folder containing a program described by a package.json file.
- b) A gzipped tarball containing (a).
- c) A URL that resolves to (b).
- d) A <name>@<version> that is published on the registry with (c).
- e) A <name>@<tag> that points to (d).
- f) A <name> that has a latest tag satisfying (e).
- g) A git URL that, when cloned, results in (a).

How is a module defined?

Any file or directory within node_modules that can be loaded by the Node.js require() function. To comply, at least one of these formats must be used.

- A folder with a package.json file containing a "main" field.
- A folder with an index.js file in it.
- A JavaScript file.

A module can also be a package, but not all modules are packages by default.

Private vs Public

This parameter determines the visibility of your new npm package. For this article, it is assumed that you will elect to create a public package, but you may always reference the documentation for more details.
Scoped vs Unscoped

Here you will define whether you want to release your code completely to the public domain (unscoped) or publish the code within a retained namespace (scoped). Unscoped packages are always public. Private packages are always scoped. Scoped packages are private by default; you must pass a command-line flag when publishing to make them public.

The command line interprets your decision by how your package is named. I'll create examples based on my own npm account username.

PUBLIC/UNSCOPED: npm publish my-package
PUBLIC/SCOPED: npm publish @killshot13_npm/my-package --access public
PRIVATE/SCOPED: npm publish @killshot13_npm/my-package

To avoid confusing the reader, I shall refrain from much discussion about teams, organizations, and enterprise accounts. Since this article is about publishing our first npm package, I doubt these factors will be of much concern at present. Just be aware of the concepts for future knowledge.

Create | Review | Test | Publish

At last, the moment you have been waiting for! These four simple steps are what you will be using to publish your package or module to the registry.

WARNING: If you deviate from this path or omit one of these steps, chances are you will encounter errors and spend a considerable amount of time debugging your files.

Create

mkdir my-package
cd my-package

Now that you have a root directory, initiate git and npm.

git init
git remote add origin git://YOUR URL HERE
npm init
// Follow the prompts to create a package.json file.
// Consider the conventions listed above when naming your package.

Now open the directory in your favorite code editor and add a README.md file and the rest of your package code files.

Review

In this step, use whatever means are at your disposal to double-check your code for environmental variables, including passwords, API keys, and other sensitive data. If necessary, create the appropriate files and replace your existing code with default variables as needed to protect your secrets.
.gitignore | .npmignore | .env | process.env

Test

Almost there! Just one more quick (hopefully) thing to wrap up before you publish. To keep bugs out of the registry, especially when publishing publicly, you should first test your package in your own environment. Since you are already using the npm CLI, you can just run this command.

// Use the full path to your project directory.
npm install my-package

If everything functions as designed, congratulations, you are ready to publish your first package!

Publish

This is the simplest and most satisfying step. Navigate back to the root directory of your npm package.

cd /path/to/package

Are you ready?

npm publish my-package

If npm prints output confirming the publish, you are golden!

Tips & Tricks

Once you npm publish, you CANNOT change your package name! Take a moment to solidify your choice of words. Try to keep package names short and descriptive. For an example of what not to do, imagine installing this npm package. I pity the fool!

npm install --save @teambit/staged-components.component-status-resolver

Sometimes, even with an unscoped public package, npm refuses to publish unless you include a flag. Add --access public to your command and try again.

npm publish my-package --access public

If you enabled 2FA when creating your npm account, you will have to provide the one-time token in your publish command.

npm publish my-package --otp=811486

Conclusion

I hope you enjoyed this article. Feel free to add suggestions or ask questions in the comments below, and I will get back to you. Thank you for reading, and happy coding! Don't forget to 💖 this article and leave a 💭. If you're feeling extra generous, please click my name below to 🎆subscribe🎇! -- killshot13
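Before publishing, it can help to sanity-check your chosen name against npm's naming rules: lowercase, URL-safe characters, at most 214 characters, and no leading "." or "_". Here is a rough Python sketch of those rules; the regex is an approximation for illustration, and the helper name is invented here, not npm's actual validator:

```python
import re

def is_valid_npm_name(name):
    """Rough check of npm's package-naming rules (not exhaustive)."""
    if not name or len(name) > 214:
        return False
    # Scoped names look like @scope/name; both parts must be lowercase,
    # URL-safe, and must not start with "." or "_".
    pattern = r'(?:@([a-z0-9][a-z0-9._-]*)/)?([a-z0-9][a-z0-9._-]*)'
    return re.fullmatch(pattern, name) is not None
```

For example, the names used in this article pass, while an uppercase or underscore-prefixed name would be rejected.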
https://practicaldev-herokuapp-com.global.ssl.fastly.net/killshot13/npm-publish-29g1
First the code:

public class Solution {
    public static class Node {
        String word = null;
        Node[] ar = new Node[26];
    }

    public String replaceWords(List<String> dict, String sentence) {
        if (dict.size() == 0 || sentence.length() == 0) return "";
        StringBuilder sb = new StringBuilder();
        String[] words = sentence.split(" ");
        Node root = new Node();
        for (String s : dict) {
            insert(root, s.toCharArray(), 0);
        }
        for (int i = 0; i < words.length; i++) {
            sb.append(get(root, words[i].toCharArray(), 0));
            if (i < words.length - 1) sb.append(" ");
        }
        return sb.toString();
    }

    public void insert(Node node, char[] chs, int index) {
        if (node.word != null) return;
        if (index == chs.length) {
            node.word = String.valueOf(chs);
            return;
        }
        char ch = chs[index];
        if (node.ar[ch - 'a'] == null) {
            node.ar[ch - 'a'] = new Node();
        }
        insert(node.ar[ch - 'a'], chs, index + 1);
    }

    public String get(Node node, char[] chs, int index) {
        if (node.word != null) return node.word;
        if (index >= chs.length) return String.valueOf(chs);
        char ch = chs[index];
        if (node.ar[ch - 'a'] == null) return String.valueOf(chs);
        return get(node.ar[ch - 'a'], chs, index + 1);
    }
}

This is a typical trie problem, and the algorithm required to solve it is also very typical. You only need to implement insert so that you can build a trie, and then get so that you can find the root for every successor. My approach to the trie is inspired by the approach taken by Robert Sedgewick in his book Algorithms:

- Each node has 26 pointers downward. Each node also stands for a particular character at a particular index.
- Each node can store a word, which is the accumulation of all the characters on the path from the root to itself. The word is only stored on a node when, walking down from the root, we reach the end of the key at that node. So a lot of nodes actually have their word attribute as null.

This is not a hard trie problem, and the code is quite clear.
One little trick: in get, when you find that the current node has a word, you return immediately, since that word is the shortest root; this is trivially obvious. But in insert, you should actually do the same, because, for example, if you are trying to insert the key ratt and you arrive at a node whose word is rat, you can just abort there: there is no need to store ratt in some node further down, since it would never be retrieved during the get loop anyway. The performance is 24 ms (99%) @ 2017-08-04 20:10:47. The complexity IMO is O(size_of_dict) for building the trie (in units of characters, so it's the sum of all root lengths), and O(number_of_words_in_sentence * length_of_longest_root_in_dict) for the lookup.
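For readers who prefer Python, the same algorithm can be sketched as follows. This uses a dict of children instead of a fixed 26-slot array, and the names Node and replace_words are chosen here, not taken from the original post. It includes the insert-time early-abort trick described above:

```python
class Node:
    def __init__(self):
        self.word = None      # shortest root ending at this node, if any
        self.children = {}    # char -> Node

def replace_words(roots, sentence):
    trie = Node()
    for root_word in roots:
        node = trie
        for ch in root_word:
            if node.word is not None:
                break  # a shorter root already covers this prefix: abort
            node = node.children.setdefault(ch, Node())
        else:
            node.word = node.word or root_word

    def shortest(word):
        node = trie
        for ch in word:
            if node.word is not None:
                return node.word          # shortest root found
            if ch not in node.children:
                return word               # no root is a prefix of this word
            node = node.children[ch]
        return node.word or word

    return ' '.join(shortest(w) for w in sentence.split())
```

The complexity is the same as the Java version: building the trie costs the total length of all roots, and each lookup is bounded by the longest root.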
https://discuss.leetcode.com/topic/98272/java-verbose-clear-24ms-99-trie-based-solution
I2S

About

I2S stands for Inter-IC Sound; correctly written I²S, pronounced "eye-squared-ess", with the alternative notation IIS. I²S is an electrical serial bus interface standard used for connecting digital audio devices together. It is used to communicate PCM (Pulse-Code Modulation) audio data between integrated circuits in an electronic device. The I²S bus separates clock and serial data signals, resulting in simpler receivers than those required for asynchronous communications systems that need to recover the clock from the data stream. Despite the similar name, I²S is unrelated to and incompatible with the bidirectional I²C (IIC) bus.

The I²S bus consists of at least three lines:

Note: All lines can be attached to almost any pin, and this change can occur even during operation.

Bit clock line: Officially "continuous serial clock (SCK)". Typically written "bit clock (BCLK)". In this library: function parameter sckPin or constant PIN_I2S_SCK.

Word clock line: Officially "word select (WS)". Typically called "left-right clock (LRCLK)" or "frame sync (FS)". 0 = Left channel, 1 = Right channel. In this library: function parameter fsPin or constant PIN_I2S_FS.

Data line: Officially "serial data (SD)", but can be called SDATA, SDIN, SDOUT, DACDAT, ADCDAT, etc. Unlike Arduino I2S, with a single data pin switching between input and output, the ESP core driver uses separate data lines for input and output. For backward compatibility, the shared data pin is sdPin (or constant PIN_I2S_SD) when using simplex mode. When using duplex mode, there are two data lines: the output data line is called outSdPin as a function parameter (or constant PIN_I2S_SD_OUT), and the input data line is called inSdPin (or constant PIN_I2S_SD_IN).

I2S Modes

The I2S can be set up in three groups of modes:

- Master (default) or Slave.
- Simplex (default) or Duplex.
- Operation modes (Philips standard, ADC/DAC, PDM). Most of them are dual-channel; some can be single-channel.

Note: The only officially supported operation mode is I2S_PHILIPS_MODE. Other modes are implemented, but we cannot guarantee flawless execution and behavior.

Master / Slave Mode

In Master mode (default) the device generates the clock signal on sckPin and the word select signal on fsPin. In Slave mode the device listens on the attached pins for the clock signal and word select; i.e. unless externally driven, the pins will remain LOW. How to enter either mode is described in the function section.

Operation Modes

Setting the operation mode is done with the function begin (see the API section).

I2S_PHILIPS_MODE: Currently the only officially supported mode. Uses PIN_I2S_SCK, PIN_I2S_FS, PIN_I2S_SD, PIN_I2S_SD_OUT. You only need to send one channel's data, but the data will be copied for the other channel automatically; both channels then transmit the same data.

ADC_DAC_MODE: The output will be an analog signal on pins 25 (L or R?) and 26 (L or R?). Input will be received on pin inSdPin. The data are sampled in 12 bits and stored in 16 bits, with the 4 most significant bits set to zero.

PDM_STEREO_MODE: Pulse-density modulation is similar to PWM, but instead the pulses have constant width. The signal is modulated by the number of ones or zeroes in sequence.

PDM_MONO_MODE: Single-channel version of the PDM mode described above.

Simplex / Duplex Mode

Simplex mode is the default after driver initialization. Simplex mode uses the shared data pin sdPin (or constant PIN_I2S_SD) for both output and input, but can only read or write at a time. This is the same behavior as in the original Arduino library.

Duplex mode uses two separate data pins:

- Output pin outSdPin (function parameter) or constant PIN_I2S_SD_OUT
- Input pin inSdPin (function parameter) or constant PIN_I2S_SD_IN

In this mode, the driver is able to read and write simultaneously on each line and is suitable for applications like a walkie-talkie or phone.
Switching between these modes is performed simply by calling setDuplex() or setSimplex() (see the API section for details and more functions).

Arduino-ESP32 I2S API

The ESP32 I2S library is based on the Arduino I2S Library and implements a few more APIs, described in this documentation.

Initialization and deinitialization

Before initialization, choose which pins you want to use. In DAC mode you can use only pins 25 and 26 for the output.

begin (Master Mode)

int begin(int mode, int sampleRate, int bitsPerSample)

Parameters:

[in] mode: one of the above-mentioned operation modes, for example I2S_PHILIPS_MODE.

[in] sampleRate: the sampling rate in Hz. The only officially supported value is currently 16000; any other value will print a warning, but the driver will continue to operate. However, the resulting audio quality may suffer and the app may crash.

[in] bitsPerSample: the number of bits in a channel sample. The only supported value is currently 16; any other value will print a warning, but the driver continues to operate. However, the resulting audio quality may suffer and the application may crash. For ADC_DAC_MODE the only possible value will remain 16.

This function will return true on success or fail in case of failure. When it fails, an error message will be printed if subscribed.

begin (Slave Mode)

Performs initialization before use: creates buffers and the task handling underlying driver messages, and configures and starts the driver operation. This version initializes I2S in SLAVE mode (see the previous entry for MASTER mode).

int begin(int mode, int bitsPerSample)

Parameters:

[in] mode: one of the above-mentioned modes, for example I2S_PHILIPS_MODE.

[in] bitsPerSample: the number of bits in a channel sample. The only supported value is currently 16; any other value will print a warning, but the driver continues to operate. However, the resulting audio quality may suffer and the app may crash.
For ADC_DAC_MODE the only possible value will remain 16. This function will return true on success or fail in case of failure. When it fails, an error message will be printed if subscribed.

Pin setup

Pins can be changed in two ways: first via constants, second via functions.

Note: The shared data pin can be equal to any other data pin, but must not be equal to the clock pin nor the frame sync pin! Input and output pins must not be equal, but either of them can be equal to the shared data pin!

sckPin != fsPin != outSdPin != inSdPin
sckPin != fsPin != sdPin

By default, the pin numbers are defined as constants in the header file. You can redefine any of those constants before including I2S.h. This way the driver will use these new default values and you will not need to specify pins in your code. The constants and their default values are:

PIN_I2S_SCK = 14
PIN_I2S_FS = 25
PIN_I2S_SD = 26
PIN_I2S_SD_OUT = 26
PIN_I2S_SD_IN = 35

The second option to change pins is using the following functions. These functions can be called on either an initialized or an uninitialized object. If called on an initialized object (after calling begin), the pins will change during operation. If called on an uninitialized object (before calling begin, or after calling end), the new pin setup will be used on the next initialization.

setSckPin

Set and apply the clock pin.

int setSckPin(int sckPin)

This function will return true on success or fail in case of failure.

setFsPin

Set and apply the frame sync pin.

int setFsPin(int fsPin)

This function will return true on success or fail in case of failure.

setDataPin

Set and apply the shared data pin used in simplex mode.

int setDataPin(int sdPin)

This function will return true on success or fail in case of failure.

setDataInPin

Set and apply the data input pin.

int setDataInPin(int inSdPin)

This function will return true on success or fail in case of failure.

setDataOutPin

Set and apply the data output pin.

int setDataOutPin(int outSdPin)

This function will return true on success or fail in case of failure.
setAllPins

Set all pins using the given parameter values. This is simply a wrapper of the four functions mentioned above.

int setAllPins(int sckPin, int fsPin, int sdPin, int outSdPin, int inSdPin)

Set all pins to defaults, i.e. take values from the constants mentioned above. This simply calls the function with the following constants:

PIN_I2S_SCK = 14
PIN_I2S_FS = 25
PIN_I2S_SD = 26
PIN_I2S_SD_OUT = 26
PIN_I2S_SD_IN = 35

int setAllPins()

onTransmit

Register a function to be called on each successful transmit event.

void onTransmit(void(*)(void))

onReceive

Register a function to be called on each successful receive event.

void onReceive(void(*)(void))

setBufferSize

Set the size of the buffer.

int setBufferSize(int bufferSize)

This function can be called on both an initialized and an uninitialized driver. If called on an initialized driver, it will change the internal values for buffer size and re-initialize the driver with the new value. If called on an uninitialized driver, it will only change the internal values, which will be used for the next initialization.

The parameter bufferSize must be in the range from 8 to 1024, and the unit is sample words. The default value is 128.

Example: 16-bit samples, dual channel, buffer size 128. The input buffer uses 2 B per sample * 2 channels * 128 buffer size * buffer count (default 2) = 1024 B. Another 1024 B is used for the output buffer, for a total of 2 kB. This function always assumes dual channel, keeping the same size even for MONO modes.

This function will return true on success or fail in case of failure. When it fails, an error message will be printed.

Duplex vs Simplex

The original Arduino I2S library supports only simplex mode (only transmit or only receive at a time). For compatibility, we kept this behavior, but the ESP natively supports duplex mode (receive and transmit simultaneously on separate pins). By default this library is initialized in simplex mode as it would be in Arduino, switching input and output on sdPin (constant PIN_I2S_SD, default pin 26).
setDuplex

Switch to duplex mode and use separate pins:

int setDuplex()

input: inSdPin (constant PIN_I2S_SD_IN, default 35)
output: outSdPin (constant PIN_I2S_SD_OUT, default 26)

setSimplex

(Default mode.) Switch to simplex mode using the shared data pin sdPin (constant PIN_I2S_SD, default 26).

int setSimplex()

Data stream

read

Read size bytes from the internal buffer if possible.

int read(void* buffer, size_t size)

This function is non-blocking, i.e. if the requested number of bytes is not available, it will return as much as possible without waiting. Hint: use available() before calling this function.

Parameters:

[out] void* buffer: buffer into which the data read from the internal buffer will be copied. WARNING: this buffer must be allocated before use!

[in] size_t size: number of bytes to read.

Returns the number of bytes successfully read, or false in case of a reading error.

Read one sample:

int read()

peek

Read one sample from the internal buffer and return it.

int peek()

Repeated peeks return the same sample until read is called.

write

Write a single byte.

size_t write(uint8_t)

Single-sample writes are blocking, waiting until there is free space in the internal buffer to be written into. Returns the number of successfully written bytes, in this case 1. Returns 0 on error.

Write a single sample.

size_t write(int32_t)

Single-sample writes are blocking, waiting until there is free space in the internal buffer to be written into. Returns the number of successfully written bytes. Returns 0 on error. The expected return value is bitsPerSample/8.

Write a buffer of the supplied size:

size_t write(const void *buffer, size_t size)

Parameters:

[in] const void *buffer: buffer to be written

[in] size_t size: size of the buffer in bytes

Returns the number of successfully written bytes, or 0 in case of error. The expected return value is equal to size.

write

This is a wrapper of the previous function performing a typecast from uint8_t* to void*.
size_t write(const uint8_t *buffer, size_t size)

write_blocking

Core function implementing blocking write, i.e. it waits until all requested data are written.

size_t write_blocking(const void *buffer, size_t size)

WARNING: If too many bytes are requested, this can cause a WatchDog Trigger Reset!

Returns the number of successfully written bytes. Returns 0 on error.

Sample code

#include <I2S.h>

const int buff_size = 128;
int available, read;
uint8_t buffer[buff_size];

I2S.begin(I2S_PHILIPS_MODE, 16000, 16);
I2S.read(); // Switch the driver in simplex mode to receive
available = I2S.available();
if (available < buff_size) {
    read = I2S.read(buffer, available);
} else {
    read = I2S.read(buffer, buff_size);
}
I2S.write(buffer, read);
I2S.end();
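The buffer-size arithmetic from the setBufferSize section above generalizes to a one-line formula. Here it is as a small Python sketch; the helper name is invented for illustration and is not part of the library API:

```python
def i2s_buffer_bytes(bits_per_sample, buffer_size, buffer_count=2, channels=2):
    # Bytes used by one direction's buffers. The driver always assumes
    # dual channel, even in MONO modes, hence channels defaults to 2.
    return (bits_per_sample // 8) * channels * buffer_size * buffer_count
```

For the example in the text, i2s_buffer_bytes(16, 128) gives 1024 B for the input buffers, and the same again for output, 2 kB in total.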
https://docs.espressif.com/projects/arduino-esp32/en/latest/api/i2s.html
Low-level VMU filesystem driver.

#include <sys/cdefs.h>
#include <dc/maple.h>

Flags:

- Set the no-copy flag.
- Overwrite existing files.
- This file is a VMU game.

Delete a file from the VMU.

Given a previously-read directory, add a new dirent to the dir. Another file with the same name should not exist (delete it first if it does). This function will not check for dupes!

Given a VMU's root block, return the amount of space in bytes required to hold its directory.

Given a previously-read directory, locate a file by filename.

Given a previously-read directory, return the number of dirents available for new files.

Given a selected VMU's root block, read its directory. This function reads the directory of a given VMU root block. It assumes the mutex is held. There must be at least the number of bytes returned by vmufs_dir_blocks() available in the buffer for this to succeed.

Given a selected VMU's root block and dir blocks, write the dirty dir blocks back to the VMU. Assumes the mutex is held.

Given a VMU's root block, return the amount of space in bytes required to hold its FAT.

Given a previously-read FAT, return the number of blocks available to write out new file data.

Given a selected VMU's root block, read its FAT. This function reads the FAT of a VMU, given its root block. It assumes the mutex is held. There must be at least the number of bytes returned by vmufs_fat_blocks() available in the buffer for this to succeed.

Given a selected VMU's root block and its FAT, write the FAT blocks back to the VMU. This function assumes the mutex is held.

Given a previously-read FAT and directory, delete the named file. No changes are made to the VMU itself, just the in-memory structs.

Given a pointer to a directory struct and a previously loaded FAT, load the indicated file from the VMU. An appropriate amount of space must have been allocated previously in the buffer. Assumes the mutex is held.
Given a pointer to a mostly-filled directory struct and a previously loaded directory and FAT, write the indicated file to the VMU. The named file should not exist in the directory already. The directory and FAT will not be synced back to the VMU; this must be done manually. Assumes the mutex is held.

Return the number of user blocks free for file writing. You should check this number before attempting to write.

Initialize vmufs. Must be called before anything else is useful.

Lock the vmufs mutex. This should be done before you attempt any low-level ops.

Unlock the vmufs mutex. This should be done once you're done with any low-level ops.

Read a file from the VMU, using a pre-read dirent. This function is faster than vmufs_read() if you have already done the lookup, since it won't need to repeat it.

Reads a selected VMU's root block. This function assumes the mutex is held.

Writes a selected VMU's root block. This function assumes the mutex is held.

Shutdown vmufs. Must be called after everything is finished.

Write a file to the VMU. If the named file already exists, then the function checks 'flags'. If VMUFS_OVERWRITE is set, then the old file is deleted first before the new one is written (this all happens atomically). On partial failure, some data blocks may have been written, but in general the card should not be damaged.
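Conceptually, the FAT described here is a linked list stored in a table: each entry holds the number of the next block in a file's chain, and a terminator value marks the last block. A Python sketch of the chain walk; the marker constant and function name are illustrative, not the driver's actual symbols:

```python
LAST_IN_CHAIN = 0xFFFA  # illustrative end-of-chain marker

def chain_blocks(fat, first_block):
    """Follow a file's chain through the FAT, returning its block numbers."""
    blocks = []
    block = first_block
    while block != LAST_IN_CHAIN:
        blocks.append(block)
        block = fat[block]  # each FAT entry points at the next block
    return blocks
```

Reading a file is then just fetching each listed block in order, which is why the driver insists the FAT be read (with the mutex held) before any file operation.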
http://cadcdev.sourceforge.net/docs/kos-2.0.0/vmufs_8h.html
I have tried to modify the example code to make my 16x2 LCD display work with 8-bit characters in order to display the whole Swedish alphabet.

#include <LiquidCrystal.h>

//LiquidCrystal lcd(12, 11, 6, 5, 4, 3);
LiquidCrystal lcd(12, 11, 10, 9, 8, 7, 6, 5, 4, 3);

void setup() {
    // set up the LCD's number of columns and rows:
    lcd.begin(16, 2);
    lcd.print("Hello, world!");
    lcd.setCursor(0, 1);
    lcd.print("ABCÅÄÖ,abcåäö.");
}

void loop() {}

I added the four extra wires required for 8-bit mode. Please ignore the temperature sensor for now. But the output is exactly the same as if I had hooked it up for 4-bit mode. What am I missing?
https://forum.arduino.cc/t/solved-liquidcrystal-h-and-8-bits-mode/161795
Doyle Lardell, Pro Student, 888 Points

confused: looks like i did everything right

import sys

while True:
    start = input("If you want to start movie enter any button, if not press n: ")
    if start.lower() == 'n':
        sys.exit()
        elif:
            print("Enjoy the show!")

1 Answer

Jon Mirow, 9,846 Points

Hi there! So close! I suspect that your main error happened when you were editing or refactoring your code. Your if/else block currently has an elif with no condition. It's safe to use an else there in this case, and you need to unindent that to match the if higher up. Finally, this specific question doesn't need a loop. That's good thinking though: quite often, if you're asking for input from a user, you would want to use a loop to ensure that the user gives you the "right" kind of input, but in this case we don't need it. :) Hope it helps!
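Applying that advice (a plain else in place of the conditionless elif, aligned with the if, and no loop), the fixed version might look like this. The movie_prompt wrapper and the injectable get_input parameter are additions made here purely so the snippet is self-contained and testable:

```python
import sys

def movie_prompt(get_input=input):
    start = get_input("If you want to start movie enter any button, if not press n: ")
    if start.lower() == 'n':
        sys.exit()
    else:
        print("Enjoy the show!")
```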
https://teamtreehouse.com/community/confused-84
Causes the operating system to change the state of the current instance to ThreadState.Running. Once a thread is in the ThreadState.Running state, the operating system can schedule it for execution. The thread begins executing at the first line of the method represented by the System.Threading.ThreadStart or System.Threading.ParameterizedThreadStart delegate supplied to the thread constructor.

If this overload is used with a thread created using a System.Threading.ParameterizedThreadStart delegate, null is passed to the method executed by the thread. Once the thread terminates, it cannot be restarted with another call to Start.

The following example demonstrates creating a thread and starting it.

C# Example

using System;
using System.Threading;

public class ThreadWork {
    public static void DoWork() {
        for (int i = 0; i < 3; i++) {
            Console.WriteLine("Working thread ...");
            Thread.Sleep(100);
        }
    }
}

class ThreadTest {
    public static void Main() {
        ThreadStart myThreadDelegate = new ThreadStart(ThreadWork.DoWork);
        Thread myThread = new Thread(myThreadDelegate);
        myThread.Start();
        for (int i = 0; i < 3; i++) {
            Console.WriteLine("In main.");
            Thread.Sleep(100);
        }
    }
}

One possible set of output begins with "In main.", with the main and worker messages interleaved. Note that the sequence of the output statements is not guaranteed to be identical across systems.
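For comparison, the same pattern in Python; this is not part of the .NET docs. Python's threading.Thread takes a target callable rather than a delegate, and the interleaving of the two loops is likewise not guaranteed:

```python
import threading
import time

log = []  # both threads append here; list.append is thread-safe in CPython

def do_work():
    for _ in range(3):
        log.append("Working thread ...")
        time.sleep(0.01)

worker = threading.Thread(target=do_work)
worker.start()  # like Thread.Start: schedules do_work for execution
for _ in range(3):
    log.append("In main.")
    time.sleep(0.01)
worker.join()   # wait for the worker to finish before inspecting the log
```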
http://docs.go-mono.com/monodoc.ashx?link=M%3ASystem.Threading.Thread.Start
fn.QName(
    $paramURI as String?,
    $paramQName as String
) as xs.QName

Returns an xs:QName with the namespace URI given in $paramURI. If $paramURI is the zero-length string or the empty sequence, it represents "no namespace"; in this case, if the value of $paramQName contains a colon (:), an error is raised [err:FOCA0002]. The prefix (or absence of a prefix) in $paramQName is retained in the returned xs:QName value. The local name in the result is taken from the local part of $paramQName. If $paramQName does not have the correct lexical form for xs:QName, an error is raised [err:FOCA0002]. Note that unlike xs:QName, this function does not require an xs:string literal as the argument.

fn.QName("", "person") => an xs:QName with namespace URI = "", local name = "person", and prefix = "".

fn.QName("http://www.w3.org/1999/xhtml", "ht:person") => an xs:QName with namespace URI = "http://www.w3.org/1999/xhtml", local name = "person", and prefix = "ht".
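The rules above can be modeled in a few lines of Python as an illustration of the semantics. This is not MarkLogic's implementation, the function name is invented here, and it skips full lexical validation of xs:QName:

```python
def make_qname(uri, qname):
    # Empty sequence (None) or zero-length string means "no namespace".
    uri = uri or ""
    prefix, sep, local = qname.partition(":")
    if not sep:                      # no colon: no prefix
        prefix, local = "", qname
    if uri == "" and sep:            # prefixed name with no namespace
        raise ValueError("err:FOCA0002")
    return {"uri": uri, "prefix": prefix, "local": local}
```

This reproduces both documented examples, and raises on the error case the text describes (a colon in the QName with no namespace).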
http://docs.marklogic.com/fn.QName
This post and the next post will address AOP in Python. In general, AOP in Python is very simple thanks to Python's decorators. The aspects we would like to apply in this post are low-level, meaning they'll be applied to in-body instructions and not just at the method level. We're going to implement this using code weaving and rewriting. I previously blogged about a similar concept in .Net using Mono Cecil, where we tracked IL instructions. The topic will be covered by two posts: the first one will address rewriting code, and the second one will deal with replacing the original code.

Background

Motivation

The general motivation for AOP is to separate the business logic from other functional logic, like logging, security or error handling. Most of the common examples fit the pattern of wrapping the function with a new one, then performing logic before/after the method is executed. This is very useful, yet it limits our ability to change the behavior of specific instructions inside the method which are relevant to the aspect.

Example

During the post we will use a concrete simple example. Let us observe the following example (Python 2.7):

def foo(x):
    return x < 1

print foo(None)

As you probably know, this will print:

True

This is a common Python (2.7) behavior but might not be intuitive. In general, assuming we had many variables and many comparisons, we'd like to change all of them to the pattern:

VAR is not None and VAR < CONST

The goal of our process will be to transform the method to:

def foo(x):
    return x is not None and x < 1

where the aspect we're applying is "Update Comparison of None and Constants".

The required steps

The steps required by this solution are the following:

- Decorate the method – create an entry point for the mechanism which'll apply the aspect.
- Create an AST from the method – prepare a modifiable syntax tree from the original method.
- Rewrite the AST – find the instructions influenced by the aspect and modify them.
- Create bytecode – create a code object identical to the original one except for the newly generated bytecode.
- Update the method – replace the original method code with the new one.

Decorating the method

Like the common approach, we will use a decorator to modify the function. We will start from this simple decorator and build on it:

def rewrite_comparisons(original_function):
    assert isinstance(original_function, FunctionType)
    return original_function

@rewrite_comparisons
def foo(x):
    return x < 1

This decorator does nothing, so far.

Getting code from a function

The first challenge is getting the method code from a function and making it modifiable. Since Python provides only bytecode by default for a method, we will use the built-in inspect module to extract the original source code:

function_source_code = inspect.getsource(original_function)

inspect uses the code locations linked to the function and reads them from the source file. The return value is a string containing the function's source code. This is different from disassembling code from the method bytecode. We can assume that for our functions the source code is available. Otherwise, this first step will fail, and the processing will need to be at the bytecode level (which might be covered in another post). In addition, this constrains us to ensure our decorator is applied before any other decorator. Otherwise, previous decorators might be ignored, since their effect is not reflected in the original source code.

Building an AST (abstract syntax tree)

After the previous line of code extracted the source, we can parse it to an AST. The motivation for building an abstract syntax tree is that it's modifiable and we can compile it back to bytecode.

function_source_code = inspect.getsource(original_function)
node = ast.parse(function_source_code)

The node we get is the root of the parsed code. It links to all the elements in the hierarchy and represents a simplified module code.
Taking for example the foo function, the tree is:

Module
    # the method declaration (foo)
    FunctionDef
        # the arguments list (x)
        arguments
            Name
                Param
        # return instruction
        Return
            # comparison of two elements
            Compare
                # load variable (x)
                Name
                    Load
                # comparison operator (<)
                Lt
                # load constant (1)
                Num

The AST represents the function, while the decorator is omitted for simplicity. As can easily be seen, the tree represents all the content of the method, including the declaration, other methods in the context (if there are any) and more. Given the AST, we'd like to modify it to fit the needs of our aspect.

Transforming the AST

AST visitors

We will use the AST visitors as an introduction to syntax tree traversal. The node visitors follow a convention where callback names are of the pattern visit_NODETYPE(self, node), where the node type can be any of the AST node types. For example, if we want a callback on method calls, we can define one for the Call node and name it visit_Call(self, node). In our example, we can visit the compare nodes and print all the operands:

from ast import NodeVisitor

class ComparisonVisitor(NodeVisitor):
    def visit_Compare(self, node):
        operands = [node.left] + node.comparators
        print '; '.join(type(operand).__name__ for operand in operands)

For every callback, we are assured the type of the node fits the Compare node type. Given the type, we can investigate its members. Comparison in Python is composed of operators (one or more) and operands (two or more). In the case of the Compare node, the first operand is called left, and the rest are called comparators. One of the reasons for the complicated structure is to support expressions like:

0 < x < 100

Using the visitor we can query the nodes, but not modify them. If we visit the original foo function:

node = ast.parse(inspect.getsource(foo))
ComparisonVisitor().visit(node)

the result we expect is:

Name; Num

since the comparison is x < 1, where x is a Name load in the context and 1 is a constant Num in the context.
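The visitor above is Python 2; here is a self-contained Python 3 version that can be run today. Note one assumption worth flagging: modern Pythons (3.8+) parse the literal 1 into an ast.Constant node rather than the article's ast.Num, so the collected type name differs from the post's output.

```python
import ast

class ComparisonVisitor(ast.NodeVisitor):
    """Collects the operand node-type names of every comparison it visits
    (a Python 3 port of the article's visitor)."""

    def __init__(self):
        self.seen = []

    def visit_Compare(self, node):
        # operands = the left-hand side plus all comparators
        operands = [node.left] + node.comparators
        self.seen.append("; ".join(type(op).__name__ for op in operands))

tree = ast.parse("def foo(x):\n    return x < 1\n")
visitor = ComparisonVisitor()
visitor.visit(tree)
print(visitor.seen)  # ['Name; Constant'] on Python 3.8+ (older versions report Num)
```

The isinstance(part, ast.Num) check used later still works on Python 3.8–3.11 because ast.Num forwards instance checks to numeric Constant nodes, but the printed class name is Constant.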
AST transformers

Python provides transformers, which are a special type of AST visitor. The transformers, in contrast to node visitors, modify the nodes they visit. In our example, we'll look for nodes that represent comparisons between variables and numbers, and then extend them to comply with the aspect.

from ast import NodeTransformer
from _ast import BoolOp, And, Name, Compare, IsNot, Load, Num

class ComparisonTransformer(NodeTransformer):
    def visit_Compare(self, node):
        parts = [node.left] + node.comparators
        # check if any constant number is involved in the comparison
        if not any(isinstance(part, Num) for part in parts):
            return node
        # get all the "variables" involved in the comparison
        names = [element for element in parts if isinstance(element, Name)]
        if len(names) == 0:
            return node
        # create a reference to None
        none = Name(id='None', ctx=Load())
        # create for each variable a node that represents 'var is not None'
        node_verifiers = [Compare(left=name, ops=[IsNot()], comparators=[none])
                          for name in names]
        # combine the None checks with the original comparison
        # e.g. 'a < b < 1' --> 'a is not None and b is not None and a < b < 1'
        return BoolOp(op=And(), values=node_verifiers + [node])

This chunk of code is a simplified version (relaxed input type checks, no attempts at code location fixes) of a transformer that visits all nodes of type Compare. The transformer method names use the same convention as the visitors. Based on the original node, a new node is built. This node is a new Boolean expression, which requires all the variables in use to be not None and to satisfy the original comparison. Looking at the output, the AST has been modified to verify that variables are not None before they are compared. The output tree for the modified foo is:

Module
    # the method declaration (foo)
    FunctionDef
        # the arguments list (x)
        arguments
            Name
                Param
        # return instruction
        Return
            # the bool expression that combines with And:
            # 1. the original comparison
            # 2.
the new check 'VAR is not None'
            BoolOp
                And
                # the 'x is not None' comparison
                Compare
                    Name
                        Load
                    IsNot
                    Name
                        Load
                # the original comparison 'x < 1'
                Compare
                    Name
                        Load
                    Lt
                    Num

Preparing the node for recompilation

In the next phase, we're going to import the new code as a temporary module, which will cause the declaration of the new method to be executed again. In order to do so, we'd like to remove the rewriter decorator, since we don't want it to process the modified function. In addition, we rename the function, for safety, to avoid collisions between the declared function and other locals. Lastly, we ask Python to fix code locations for the new nodes so they can be compiled later on. This is done using fix_missing_locations.

from ast import fix_missing_locations

def rewrite_method(node):
    # assuming the method has a single decorator (which is the rewriter) - remove it
    node.body[0].decorator_list.pop()
    # we rename the method to ensure separation from the original one.
    # this step has no real meaning and is not really required.
    node.body[0].name = 'internal_method'
    # transform Compare nodes to fit the 'is not None' requirement
    ComparisonTransformer().visit(node)
    # let python try and fill code locations for the new elements
    fix_missing_locations(node)

Summary

During the first phase we received a function as input (through a decorator), then modified its body by traversing it at the syntactic level. Lastly, we modified its declaration and source locations so it can be safely imported as a new function. As you probably noticed, the only part of this code which is concerned with the aspect is the transformer. Meaning, if we'd like to apply a different aspect, the only part which will change is the transformer. In our example the ComparisonTransformer is hard-coded for simplicity, but in a real solution we'd provide it as an argument to the decorator.

Next phase

In the next phase we'll use the modified function to generate replacement bytecode.
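The whole pipeline — parse, transform, fix locations, recompile, execute — can be sketched end to end in a few lines. This is a Python 3 port under two assumptions flagged here: the function source is supplied as a plain string (instead of inspect.getsource, so the sketch does not depend on a source file), and ast.Constant is used where the Python 2 article used ast.Num and a Name reference to None.

```python
import ast

# Source of the function to rewrite, as a plain string.
source = "def internal_method(x):\n    return x < 1\n"

class ComparisonTransformer(ast.NodeTransformer):
    """Rewrite 'x < 1' into 'x is not None and x < 1'."""

    def visit_Compare(self, node):
        parts = [node.left] + node.comparators
        # only touch comparisons that involve a constant
        if not any(isinstance(p, ast.Constant) for p in parts):
            return node
        names = [p for p in parts if isinstance(p, ast.Name)]
        if not names:
            return node
        # build 'var is not None' for every variable in the comparison
        checks = [
            ast.Compare(left=n, ops=[ast.IsNot()],
                        comparators=[ast.Constant(value=None)])
            for n in names
        ]
        # 'x < 1'  -->  'x is not None and x < 1'
        return ast.BoolOp(op=ast.And(), values=checks + [node])

tree = ast.parse(source)
ComparisonTransformer().visit(tree)
ast.fix_missing_locations(tree)

# Compile the rewritten module AST, execute it in a scratch namespace,
# and pull the new function out by name.
namespace = {}
exec(compile(tree, filename="<ast>", mode="exec"), namespace)
rewritten = namespace["internal_method"]

print(rewritten(None))  # False -- the original foo(None) printed True in Py2
print(rewritten(0))     # True
```

The rewritten function also short-circuits on None, which is why it runs cleanly even on Python 3, where the original None < 1 would raise a TypeError.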
https://blog.elishalom.com/2015/07/25/rewrite-python-methods-body/
Quick Search Integration Tutorial

This tutorial shows you how to write a module that plugs custom items into the NetBeans Quick Search feature. For troubleshooting purposes, you are welcome to download the completed tutorial source code.

Introduction to Quick Search Integration: Creating the Module Project

In this section, we use a wizard to create the source structure that every NetBeans module requires. The source structure consists of certain folders in specific places and a set of files that are always needed. For example, every NetBeans module requires a nbproject folder, which holds the project’s metadata.

Choose File > New Project (Ctrl-Shift-N). Under Categories, select NetBeans Modules. Under Projects, select Module. Click Next. In the Name and Location panel, type NetBeansZoneSearch in Project Name. Change the Project Location to any directory on your computer. Click Next. In the Basic Module Configuration panel, type org.netbeans.modules.nbzone as the Code Name Base. Click Finish.

The IDE creates the NetBeansZoneSearch project. The project contains all of your sources and project metadata, such as the project’s Ant build script. The project opens in the IDE. You can view its logical structure in the Projects window (Ctrl-1) and its file structure in the Files window (Ctrl-2).

Using the Quick Search Provider Wizard

In this section, we use a wizard to create a stub Java class and the layer entries necessary for beginning our integration with the Quick Search feature.

Right-click the "NetBeansZoneSearch" project node and choose New > Other. In the New File dialog, choose Module Development > Quick Search Provider:

In the Quick Search Provider panel, set the following:

Provider Class Name. Specifies the class name of the stub that the wizard will generate. Type "NBZoneSearchProvider" in this field.

Package. Specifies the package where the stub class will be generated. Select "org.netbeans.modules.nbzone" from the drop-down.

Category Display Name.
Specifies the display name of the category that the stub will create. Type "NetBeans Zone" in this field.

Command Prefix. Specifies the prefix for narrowing the search to the category that the stub will create. Type "nb" in this field.

Position in Popup. Specifies the position of the new category within the Quick Search feature. Leave "0", so that the category will be first in the popup.

Integrating an External HTML DOM Parser

There are two ways to make an external JAR, such as the JTidy HTML parser used in this tutorial, available to your module. The first way is to put the JAR into a separate module, called a "library wrapper module", and have the functionality module depend on the library wrapper module, after putting both into a module suite. The advantage of having two separate modules is that, when a new version of the external JAR is released, you will only need to redistribute a small module containing only the external JAR, rather than a larger one that also contains the functionality code. The second way is to add the JAR to the functionality module, which is what is done below. The advantage of this approach is that it is more convenient in the short term only, since you only have one module to distribute, while the disadvantage is that you are mixing your external library with the functionality code, which is not a strictly modular approach.

1. Right-click the project, choose Properties, and wrap the JAR as shown below:

Look in the Files window and notice that you have your Tidy.jar in a new folder, named release/modules/ext:

Coding the Quick Search Integration
Code the "NBZoneSearchProvider" class as follows:

public class NBZoneSearchProvider implements SearchProvider {

    @Override
    public void evaluate(SearchRequest request, SearchResponse response) {
        try {
            //The URL that we are providing a search for
            //(left blank in the original listing):
            URL url = new URL("");
            //Stuff needed by Tidy:
            Tidy tidy = new Tidy();
            tidy.setXHTML(true);
            tidy.setTidyMark(false);
            tidy.setShowWarnings(false);
            tidy.setQuiet(true);
            //Get the org.w3c.dom.Document from Tidy,
            //or use a different parser of your choice.
            //(The next three lines were lost from the original listing
            //and are reconstructed here from the code that follows.)
            Document doc = tidy.parseDOM(url.openStream(), null);
            NodeList list = doc.getElementsByTagName("a");
            for (int i = 0; i < list.getLength(); i++) {
                //Get the "href" and "title" attributes
                //from the current "a" element:
                if (null != list.item(i).getAttributes().getNamedItem("title")) {
                    String href = list.item(i).getAttributes()
                            .getNamedItem("href").getNodeValue();
                    String title = list.item(i).getAttributes()
                            .getNamedItem("title").getNodeValue();
                    //If the title matches the requested text:
                    if (title.toLowerCase().indexOf(request.getText().toLowerCase()) != -1) {
                        //Add the runnable and the title to the response
                        //and return if nothing is added:
                        if (!response.addResult(new OpenFoundArticle(href), title)) {
                            return;
                        }
                    }
                }
            }
        } catch (IOException ex) {
            Exceptions.printStackTrace(ex);
        }
    }

    private static class OpenFoundArticle implements Runnable {

        private String article;

        public OpenFoundArticle(String article) {
            this.article = article;
        }

        @Override
        public void run() {
            //Body lost from the original listing; it shows the article's URL
            //in the status line (StatusDisplayer) and opens it in the IDE's browser.
        }
    }
}

Make sure the following import statements are declared at the top of the class:

import java.io.IOException;
import java.net.URL;
import org.openide.awt.StatusDisplayer;
import org.openide.util.Exceptions;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.w3c.tidy.Tidy;

Installing and Trying Out the Functionality

Let’s now install the module and then use the quick search feature integration. The IDE uses an Ant build script to build and install your module. The build script is created for you when you create the project. In the Projects window, right-click the project and choose Run. A new instance of the IDE starts up and installs the Quick Search integration module.
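The matching rule inside evaluate() — a case-insensitive substring test against the request text — can be exercised in isolation. This small class is only an illustration, not part of the tutorial's generated sources:

```java
public class TitleMatcher {

    // Mirrors the check used in evaluate(): a link title matches when it
    // contains the requested text, ignoring case.
    static boolean matches(String title, String requested) {
        return title.toLowerCase().indexOf(requested.toLowerCase()) != -1;
    }

    public static void main(String[] args) {
        System.out.println(matches("NetBeans Platform Quick Search", "quick"));   // true
        System.out.println(matches("NetBeans Platform Quick Search", "eclipse")); // false
    }
}
```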
In the top-right of the IDE, you will find your Quick Search feature:

Click an item and, if you have set a browser in the IDE, it opens, displaying the selected article.

Using the Quick Search Feature on the NetBeans Platform

Add the following tags to the layer.xml file:

Alternatively, you can show the Quick Search feature right-aligned in the menu bar:

Next Steps

For more information about creating and developing NetBeans modules, see the following resources:
https://netbeans.apache.org/tutorials/nbm-quick-search.html
On Mon, 2010-04-05 at 16:25 -0400, H. Peter Anvin wrote:
> On 04/05/2010 01:02 PM, Guenter Roeck wrote:
> >>
> >> You didn't answer my question (c).
> >>
> >> I want to know how you ended up with
> >> boot_params.screen_info.orig_video_isVGA == 1 on a system with no VGA,
> >> which seems like it would have resolved this.
> >>
> >> I am *not* inclined to add a compile-time test for what should have been
> >> handled with a runtime test already.
> >>
> > Sorry, I thought I did answer it.
> >
> You didn't. You still haven't!
> c) boot_params.screen_info.orig_video_isVGA == 1?

boot_params.screen_info.orig_video_isVGA == 0 in the problem case. As I
tried to explain below, the problem happens before setup_early_printk()
is called, and thus the value of orig_video_isVGA is irrelevant for the
problem case. Not sure how else I can explain it.

> > The problem is that early_printk() can be called prior to the call to
> > setup_early_printk(). Since early_console is currently pre-initialized
> > with early_vga_console, output can be written to VGA memory space even
> > if there is no VGA controller in the system (and even if
> > boot_params.screen_info.orig_video_isVGA == 0). This happens for all
> > early_printk() calls executed prior to the call to setup_early_printk().
>
> If boot_params.screen_info.orig_video_isVGA == 0, at least this bit of
> your patch has no effect:
>
> > > -	if (!strncmp(buf, "vga", 3) &&
> > > +	if (have_vga_console && !strncmp(buf, "vga", 3) &&
> > >	    boot_params.screen_info.orig_video_isVGA == 1) {

It does; as a result of this part of the patch, the compiler can
optimize all vga related code away. As I said, this is just an
optimization resulting in less code. It is however not important /
relevant from a functional point of view, and I don't mind taking it
out.

> Now, we have at least two ways to report a non-VGA console at runtime:
>
> boot_params.screen_info.orig_video_isVGA != 1
> boot_params.screen_info.orig_video_lines == 0
>
> The former is zero for CGA/MDA/EGA, but early_vga_write() doesn't work
> right for MDA at least, so keying on isVGA is probably right.
>
> early_printk() being called before setup_early_printk() is a problem,
> and it's not immediately obvious to me how to fix it. We can of course
> make early_vga_write() simply return if boot_params.screen_info.isVGA ==
> 0, of course, but it really is a bigger problem than that in many ways.
>

As far as I can see, boot_params.screen_info.orig_video_isVGA is set
early enough during boot, so that should at least solve the immediate
problem. However, it would result in early messages being ignored, which
might not be desirable.

Would you accept a minimized patch like this ?

 /* Direct interface for emergencies */
+#ifdef CONFIG_VGA_CONSOLE
 static struct console *early_console = &early_vga_console;
+#else
+static struct console *early_console = &early_serial_console;
+#endif
 static int __initdata early_console_initialized;

This would prevent the problem while minimizing changes, and at the same
time permit early messages to be written to the serial console.

Guenter
http://lkml.org/lkml/2010/4/5/190
Hey everyone. In recent months I have had more and more comments and personal messages about what programming is and what the different tools do. So here is my most basic explanation of programming. (Warning: this is really basic, so if you have programmed before you probably know all of this already :D) Enjoy

Step 1: HTML (the Way You Are Looking at This Right Now)

HTML was created by Tim Berners-Lee back in 1989 and it stands for Hyper Text Markup Language. This language is mainly used for programming webpages. In fact, if you press CTRL U on Firefox right now you will see the source code (the code that makes up a webpage) for the page you are on right now, unless you use Chrome, in which case get Firefox because Google is going to become Skynet (just kidding, Google's great).

Now, normally when you want to program something like an app you'll need advanced software; however, when it comes to HTML you can use any word processor (notepad) and a browser. Typical HTML code will look like this:

<.html> (starting the code by declaring it's HTML)
<.head> (declaring the start of the heading)
<.title> This is my Page Title <./title> (the title)
<./head> (declaring the end of the heading with a /)
<.body> (starting the body of the page)
This is my test page (the body of the page)
<./body> (ending the body of the page with a /)
<./html> (ending the code)

Just remember that this is a super basic example; most professional webpages actually use many different programming languages. Here is how they all work together:

Html: places text on web-based pages and gives it size/style
Css: tells the html/text on the page how to look and where to go. (Makes pretty)
Java-Script: gives your html/css animation/function

Step 2: Java

(First, please note that this is different to JavaScript.) Java is a programming language that developers use to create applications on your computer.
Chances are you've downloaded a program that required the Java runtime, and so you probably have it installed on your system. Java also has a web plug-in that allows you to run these apps in your browser. Java was created by James Gosling in 1995. Now, if you want to program apps you'll probably end up using Java in some way or another. Here is some example code:

/* HelloWorld.java (the name of the app) */
public class HelloWorld { (starting the code)
    public static void main(String[] args) {
        System.out.println("Hello World!"); (telling the app what to display)
    }
} (closing the code)

Examples of games programmed in Java would be Minecraft, many Tom Clancy games and tons more.

Step 3: Arduino

The Arduino IDE is an awesome programming language with similar commands to C. Its main uses are for programming the Arduino brand micro-controllers (as well as a few other compatible micro-controllers). The Arduino IDE is different to Java and HTML because instead of programming apps or webpages, what it does is take the code you input and turn it into machine language, which it then sends to a micro-controller. (A micro-controller is a small computer capable of controlling anything from motors to robots.)
int led = 13; (declaring that pin 13 on the Arduino will now be known as led)
void setup() { (setting up the pins as inputs or outputs)
    pinMode(led, OUTPUT); (saying that pin led (13) will be an output)
} (closing the void setup)
void loop() (starting the void loop, this is what will repeat forever)
{ (starting the void loop)
    digitalWrite(led, HIGH); (turning the led on)
    delay(1000); (keeping the led on for 1 second, 1000 milliseconds)
    digitalWrite(led, LOW); (turning the led off)
    delay(1000); (keeping the led off for 1 second, 1000 milliseconds)
} (closing the void loop)

You can find more info about the Arduino IDE and Arduino micro-controllers in my other Instructable.

Step 4: C++

C++ is defined on Google as a general-purpose programming language, and that's exactly what it is. C as a whole is used for programming things anywhere from a micro-controller (like Arduino) to making apps (like Java). However, C++ is mainly used to program things like games, office applications, video editors and even operating systems. The chances are if you are using software that doesn't connect to the internet, it's written in some form of C. I won't list an example for this because there are so many different versions; however, a quick Google search with the version you are running will bring you to pages upon pages of code.

Step 5: To Conclude

So, to conclude, many different programming languages are used for many different things. HTML is used for websites, Arduino is used for microcontrollers and Java is used for app development. So the next time you want to program something you know exactly what software to use with it.
https://www.instructables.com/id/The-basics-of-Programming/
I know ya'll have probably seen this one a thousand times but would appreciate some help. Here's the code:

Header File:
Code:
/* The game of paper, rock, scissors. */
#include <ctype.h>  /* for isspace() */
#include <stdio.h>  /* for printf(), etc */
#include <stdlib.h> /* for rand() and srand() */
#include <time.h>   /* for time() */

enum p_r_s1 {paper, rock, scissors, game, help, instructions, quit};
enum outcome {win, lose, tie, error};
typedef enum p_r_s1 p_r_s1;
typedef enum outcome outcome;

outcome compare(p_r_s1 player_choice, p_r_s1);
void report(outcome result, char *player_choice, char *machine_choice);
p_r_s1 selection_by_machine(void);
p_r_s1 selection_by_player(void);

Main File:
Code:
#include "p_r_s1.h"

int main(void)
{
    int win_cnt = 0, lose_cnt = 0, tie_cnt = 0;
    outcome result;
    p_r_s1 player_choice, machine_choice;

    srand(time(NULL)); /* seed the random number generator */
    /* ... */
            printf("\nPROGRAMMER ERROR: Cannot get to here!\n\n");
            exit(1);
    }
    prn_game_status(win_cnt, lose_cnt, tie_cnt);
    prn_final_status(win_cnt, lose_cnt);
    return 0;
}

Compare File:
Code:
#include "p_r_s1.h"

outcome compare(p_r_s1 player_choice, p_r_s1 machine_choice)
{
    outcome result;

    if (player_choice == machine_choice)
        return tie;
    switch (player_choice) {
    case paper:
        result = (machine_choice == rock) ? win : lose;
        break;
    case rock:
        result = (machine_choice == scissors) ? win : lose;
        break;
    case scissors:
        result = (machine_choice == paper) ? win : lose;
        break;
    default:
        printf("PROGRAMMING ERROR: Unexpected choice!\n\n");
        exit(1);
    }
    return result;
}

Print File:
Code:
#include "p_r_s1.h"

void prn_final_status(int win_cnt, int lose_cnt)
{
    if (win_cnt > lose_cnt)
        printf("CONGRATULATIONS - You won!
    /* ... */
    printf("the player and the machine will choose one\n"
           "of p, r, or s. If the two choices are the same,\n"
           "then the game is a tie. Otherwise:\n"
           "\n"
           "   \"paper covers the rock\"      (a win for paper)\n"
           "   \"rock crushes the scissors\"  (a win for rock)\n"
           "   \"scissors cut through paper\" (a win for scissors)\n"
           "\n"
           "There are other allowable inputs:\n"
           "\n"
           "   g  for game status    (print number of wins)\n"
           "   h  for help           (print short instructions)\n"
           "   i  for instructions   (print these instructions)\n"
           "   q  for quit           (quit the game)\n"
           "\n"
           "This game is played repeatedly until q is entered.\n"
           "\n"
           "Good Luck!\n");
}

Report File:
Code:
#include "p_r_s1.h"

void report(outcome result, int *win_cnt_ptr, int *lose_cnt_ptr, int *tie_cnt_ptr,
            p_r_s1 player_choice, p_r_s1 machine_choice)
{
    switch (result) {
    case win:
        ++*win_cnt_ptr;
        if (player_choice == paper)
            printf("%27sYou chose paper I chose rock. You win.\n", "");
        else if (player_choice == rock)
            printf("%27sYou chose rock I chose scissors. You win.\n", "");
        else if (player_choice == scissors)
            printf("%27sYou chose scissors I chose paper. You win.\n", "");
        break;
    case lose:
        ++*lose_cnt_ptr;
        if (player_choice == paper)
            printf("%27sYou chose paper I chose scissors. You lose.\n", "");
        else if (player_choice == rock)
            printf("%27sYou chose rock I chose paper. You lose.\n", "");
        else if (player_choice == scissors)
            printf("%27sYou chose scissors I chose rock. You lose.\n", "");
        break;
    case tie:
        ++*tie_cnt_ptr;
        if (player_choice == paper)
            printf("%27sYou chose paper I chose paper. We tie.\n", "");
        else if (player_choice == rock)
            printf("%27sYou chose rock I chose rock. We tie.\n", "");
        else if (player_choice == scissors)
            printf("%27sYou chose scissors I chose scissors. We tie.\n", "");
        break;
    default:
        printf("PROGRAMMER ERROR: Unexpected result!\n\n");
        exit(1);
    }
}

Select File:
Code:
#include "p_r_s1.h"

p_r_s1 selection_by_machine(void)
{
    return ((p_r_s1) (rand() % 3));
}

p_r_s1 selection_by_player(void)
{
    char c;
    p_r_s1 /* ... */;
}

I keep getting the following errors:
Code:
main.cpp(9) : warning C4244: 'argument' : conversion from 'time_t' to 'unsigned int', possible loss of data
Linking...
main.obj : error LNK2019: unresolved external symbol "void __cdecl report(enum outcome,int *,int *,int *)" (?report@@YAXW4outcome@@PAH11@Z) referenced in function _main
Debug/p_r_s1.exe : fatal error LNK1120: 1 unresolved externals

I'm told that the function prototype and definition don't match in number of items and correct types between the header and main files, but I can't see where; my eyes hurt looking for it.

Thanks
http://cboard.cprogramming.com/c-programming/59272-prs-help.html
Would Ant 1.6.2beta support an option for tasks like copy etc. wherein the destination can be a fileset? In the current implementation only the source can be a fileset, not the destination. If not supported, is there a way around this?

Thanks
Anil

-----Original Message-----
From: Antoine Levy-Lambert [mailto:antoine@gmx.de]
Sent: Friday, July 02, 2004 6:54 AM
To: Ant Developers List; Ant Users List
Subject: ant 1.6.2beta1 available

Hi,

I have the pleasure to announce the availability of ant 1.6.2beta1.

Structural innovations since ant 1.6.1 are:

* Nested elements for namespaced tasks and types may belong to the Ant default namespace as well as the task's or type's namespace,
* All exceptions thrown by tasks are now wrapped in a BuildException giving the location in the buildfile of the task.

Ant 1.6.2beta1 fixes a large number of bugs and adds a number of features which were asked for by users on Bugzilla.

Have fun with ant 1.6.2beta1.

Antoine Levy-Lambert

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
For additional commands, e-mail: user-help@ant.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org
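For readers wondering what the question refers to: in Ant's copy task, a nested <fileset> is accepted only on the source side, while the destination is a single todir directory. A common workaround for "multiple destinations" is simply to repeat the copy (or wrap it in a macrodef) once per target directory — a sketch with illustrative paths:

```xml
<project name="copy-demo" default="copy-all">
  <target name="copy-all">
    <!-- Source side: a fileset is supported. -->
    <copy todir="build/site-a">
      <fileset dir="src/pages" includes="**/*.html"/>
    </copy>
    <!-- The destination cannot be a fileset: repeat the copy
         once per target directory (or factor it into a macrodef). -->
    <copy todir="build/site-b">
      <fileset dir="src/pages" includes="**/*.html"/>
    </copy>
  </target>
</project>
```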
http://mail-archives.apache.org/mod_mbox/ant-dev/200407.mbox/%3C528598BA3961154CAFDFD3917A015EC7F2CDC1@cnfqe109.cnf.prod.cnf.com%3E
Managing in Complex Environments
QUIZ #2

Name: _________________________________ Fall semester, 2006
Professor Atkin
My lecture time is: _____ ID# _________________________________

SAMPLE QUIZ #2

PLEASE NOTE: QUESTIONS 1–12 ARE FROM CHAPTER 3; 13–28 ARE FROM CHAPTER 4; OTHERS WERE FROM CHAPTER 5, FOR WHICH YOU ARE NOT RESPONSIBLE AT THIS TIME & HENCE ARE NOT INCLUDED HERE.

This closed-book, closed-note quiz has 35 3-point questions to be answered on optical scan sheets. When done with the quiz, please complete the peer evaluation. To get a grade, you must return (a) the quiz, (b) the answer sheet, and (c) the completed peer evaluation. Good luck!

1. All other things equal, a supply-side industry definition [Answer: A]
   a. identifies more rivals than a demand-side definition
   b. is more appropriate for goods industries than for service industries
   c. both of the above
   d. none of the above

2. The goal of industry analysis is to understand why some firms do well even in unattractive industries. [Answer: B]
   a. true
   b. false

3. Most industries are [Answer: B]
   a. monopolies
   b. oligopolies
   c. perfect competitions
   d. insufficient information to determine

4. Focused firms are firms that engage in [Answer: D]
   a. related diversification
   b. unrelated diversification
   c. both of the above
   d. none of the above

5. When suppliers have relatively greater power than the industry rivals [Answer: C]
   a. raw material prices charged by the suppliers tend to increase
   b. rivals' expenditures tend to increase
   c. both of the above
   d. none of the above
https://www.coursehero.com/file/203091/2008-SAMPLE-Quiz-2-07-SAMPLE-01-Fall-2006-ANSWERS/
In software development, routing serves to map all incoming requests to handlers and generate the URLs used in responses. In ASP.NET Core, routing has been rewritten from the ground up. Previously, routing with MVC and Web API was very similar, but both were using different frameworks (and code) to do the same thing. An important difference was that Web API supported RESTful routes by default. For example, if a controller's action method name started with "Post", then invoking an HTTP POST would call that method by default. Since Microsoft decided to rebuild and unify the routing framework, what applies now for MVC also applies for Web API.

Before we dig into how to build routing, however, let's review why routing is so important for your application.

Why Routing?

SEO friendly

RESTfully configured routing facilitates the Search Engine Optimization (SEO) of your content. A site's URL is one of the top criteria that impacts site ranking. By converting an opaque, parameter-laden URL into a descriptive one such as /how-to-peel-potatoes, you encourage search engines to rank it higher for keyphrases related to "how to peel potatoes." Also, when you have a URL that is more descriptive, it is easier for users to correctly anticipate the content, leading to increased time on page, which also impacts SEO and your overall page authority.

URLs do not need to map to a file

Without routing, an incoming request would be mapped to a physical file. With routing we have full control of the request, allowing us to decide what action and controller we execute when a certain HTTP request comes in.

Long URLs and file extensions can be eliminated

Routing helps to shorten the URL in instances where many parameters and filters are in play. By eliminating the file extension, we can hide what kind of environment we are working in.

So, how do we take advantage of these benefits? Let's look at five ways you can build routing in your ASP.NET Core application.

1. Creating Default Routes

You can define the default route by convention in your project's Startup class.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }
}

With the above, we assure the essential configuration exists in our project for the standard MVC pattern of Controller + Action + ID (optional) route. You can also declare the routing pattern like this:

routes.MapRoute(
    name: "default_route",
    template: "{controller}/{action}/{id?}",
    defaults: new { controller = "Home", action = "Index" }
);

(This is how we used to do routing in ASP.NET Core.)

2. Extending Default Routes

Once we have the default route configured, we might want to extend it by adding customized routes based on specific needs. For this, we can add configurations using the MapRoute() method.

app.UseMvc(routes =>
{
    // New route
    routes.MapRoute(
        name: "about-route",
        template: "about",
        defaults: new { controller = "Home", action = "About" }
    );

    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

We added an extra route which grants access to the About action on the Home controller with an /about route. Because the default pattern route is still present, we can also access the About page with the conventional /home/about route.

3. Using Attributes

You can also configure routes using attributes in your controllers and actions.

[Route("[controller]")]
public class AnalyticsController : Controller
{
    [Route("Dashboard")]
    public IActionResult Index()
    {
        return View();
    }

    [Route("[action]")]
    public IActionResult Charts()
    {
        return View();
    }
}

In this sample we can access the controller actions with the following routes:

/Analytics/Dashboard
/Analytics/Charts

You can see the two tokens [controller] and [action] indicate that we have to refer to the controller and action name that has been declared.
In this case, “Analytics” is the name of the controller and “Charts” the name of the action, and together they form the route.

4. Building RESTful Routes

In order to declare a RESTful controller, we need to use the following route configuration:

[Route("api/[controller]")]
public class ValuesController : Controller
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "hello", "world!" };
    }

    // POST api/values
    [HttpPost]
    public void PostCreate([FromBody] string value)
    {
    }
}

Here we are telling our RESTful service to accept calls under the /api/values route. Note that we no longer use the Route attribute on actions. Instead we decorate them with the HttpGet, HttpPost, HttpPut and HttpDelete attributes. Or, we can take a look at a different scenario:

// POST api/values/5
[HttpPost("{id}")]
public void PostUpdate(int id, [FromBody] string value)
{
}

Here we have the following routes for the Values controller: an HTTP Post to the /api/values route will invoke the PostCreate([FromBody] string value) action, while an HTTP Post to /api/values/5 will invoke the PostUpdate(int id, [FromBody] string value) action.

5. Using Constraints

We can restrict the type of value that we pass to actions using constraints. For example, if we expect an argument that is a number, we have to restrict it to an integer type. Declare constraints in attributes using curly brackets, e.g. {id:int}.

[HttpGet("{id:int}")]
public string GetById(int id)
{
    return "item " + id;
}

Here, we are telling the action GetById to accept only an integer argument. Adding a question mark to the constraint, {id:int?}, indicates that the parameter is optional. Therefore with a question mark we can call /GetById/123 or /GetById without additional parameters. We can also define constraints in default routes declared in the Startup class this way:

routes.MapRoute(
    name: "getProductById",
    template: "Products/{id:int}",
    defaults: new { controller = "Products", action = "GetById" });

There are several available constraints like bool, datetime, decimal, min, max, regex, etc.
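To build intuition for what the routing middleware does with a template such as "Products/{id:int}", here is a simplified, hypothetical sketch in JavaScript (not the actual ASP.NET Core implementation): split the path into segments, match literals case-insensitively, capture parameters, and enforce an int constraint.

```javascript
// Simplified sketch of route-template matching with an :int constraint.
// Illustration only; ASP.NET Core's real implementation is far more elaborate.
function matchRoute(template, path) {
  const tSegs = template.split('/');
  const pSegs = path.replace(/^\//, '').split('/');
  if (tSegs.length !== pSegs.length) return null;

  const values = {};
  for (let i = 0; i < tSegs.length; i++) {
    const t = tSegs[i];
    const p = pSegs[i];
    const param = t.match(/^\{(\w+)(?::(\w+))?\}$/); // {name} or {name:constraint}
    if (!param) {
      // Literal segment must match exactly (case-insensitive, like ASP.NET).
      if (t.toLowerCase() !== p.toLowerCase()) return null;
      continue;
    }
    const [, name, constraint] = param;
    if (constraint === 'int' && !/^\d+$/.test(p)) return null; // constraint failed
    values[name] = constraint === 'int' ? Number(p) : p;
  }
  return values;
}

console.log(matchRoute('Products/{id:int}', '/Products/42'));  // { id: 42 }
console.log(matchRoute('Products/{id:int}', '/Products/abc')); // null
```

A request whose segment fails the constraint simply does not match the route, so the framework falls through to the next registered route (or returns a 404), which is exactly the behavior described above.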
If you’re ready to learn more about RESTful application design in ASP.NET Core, check out these resources:
http://126kr.com/article/8exk832t2yp
React's Ecosystem as a Flexible Framework

React is only a view-layer library. Thus React only enables you to build component-driven user interfaces. It comes with a couple of built-in solutions though, for instance local state management and synthetic events to make interactions happen, but after all you are only dealing with a view-layer library.

It is often said that plain React is sufficient when building applications. In the open source book The Road to learn React it is showcased that plain React suffices to build an application. But in the end, when implementing a larger application, you need a couple more libraries to have a sophisticated web application with React as its core.

Developers coming from frameworks such as Angular or Ember often have a hard time figuring out all the building blocks they will need to build a sophisticated web application with React as its core. Coming from a framework, you are used to having all necessary functionalities at your disposal. However, React is only a view-layer library, so you need to figure out all the other building blocks, to be more specific: all the other libraries that are needed to complement React.

Nevertheless, I think it is one of the crucial advantages of React that you stay flexible when choosing the libraries to complement your React application. I had this experience myself when I came from Angular to React, and it might help you to understand the reasons behind a change from a framework to a library. I would argue that React with its ecosystem is a flexible framework: you can choose your libraries to complement your React core.

The following article will give you an opinionated approach to selecting from these libraries to build a sophisticated React application. In the end, you will have an opinionated list of building blocks. Nevertheless, it is up to you to exchange them with your own preferred libraries.
After all, the article attempts to give newcomers to the React ecosystem an opinionated overview.

React's Boilerplate Decision

Even nowadays developers struggle to decide how to set up their React project when joining the React community. There are thousands of boilerplate projects to choose from, and every boilerplate project attempts to fulfil different needs. They vary from minimalistic to almost bloated projects.

The status quo in the community is to start your project with create-react-app. It comes with a zero-configuration setup and gives you a minimalistic, up-and-running React application out of the box. You can always decide to lay open the toolchain by using its eject functionality. Afterward, you can alter the underlying toolchain.

In the end, there will never be the perfect boilerplate project. You will always have to add your own tooling. That's why it makes sense, once you have a solid understanding of React itself, to start off with a minimal React boilerplate project. You will be able to understand the underlying mechanics, you will do it on your own without copying a project, and you can add your own tooling to it. When choosing a bloated React boilerplate project in the first place, you will only be overwhelmed when you want to change something in the toolchain.

An alternative in the ecosystem, similar to create-react-app, is Next.js. It is a zero-configuration React application as well, but for server-side rendered React.

Recommendations:
- create-react-app
- Next.js for server-side rendered React
- own minimal boilerplate, when having a solid understanding of React

Utility Libraries for React

JavaScript ES6 and beyond gives you tons of built-in functionalities for dealing with arrays, objects, numbers and strings. One of the most used JavaScript built-in functionalities in React is the map() method of arrays. Why? Because you always have to render a list of items in a component.
Since JSX is a mixture of HTML and JavaScript, you can use JavaScript to map over your items and return JSX.

const List = ({ list }) =>
  <div>
    {list.map(item => <div key={item.id}>{item.title}</div>)}
  </div>

However, you might come to the point where you choose a utility library that gives you more elaborate functionalities. You might even want to be more flexible when chaining these utility functions, or even compose them dynamically into each other. That's the point in time where you would introduce a utility library. My personal recommendations are two libraries.

The first recommendation is Lodash. It is the most widespread utility library in JavaScript. I guess there are people who know more about Lodash than about the native JavaScript functionalities, because people often learn libraries before learning a programming language, but also because JavaScript introduced new functionalities only in its recent versions. Nevertheless, Lodash comes with a powerful set of functions to access, manipulate and compose data.

The second recommendation is Ramda. When you lean towards functional programming (FP) in JavaScript, there is no way around this utility library. Even though Lodash comes with its own functional programming derivative (Lodash FP), I would always recommend Ramda when dealing with FP in JavaScript. It gives you a powerful set of functionalities to be productive.

So when introducing a utility library to your React core, you can make the decision between Lodash and Ramda. Whereas Lodash is the more down-to-earth library for every JavaScript developer, Ramda comes with a powerful core when functional programming comes into play.

Recommendations:
- JavaScript ES6 and beyond
- Lodash
- Ramda, when doing functional programming

Styling in React

When it comes to styling in React, things get opinionated in the React ecosystem. Not only regarding the specific solutions that are already out there, but because of the overarching philosophies.
For instance, is it okay to have inline style in JSX? Is it fine to colocate style with components?

When starting with React, it is just fine to use plain CSS. If your first project is set up with create-react-app, you will encounter only CSS and can decide to add inline style too.

const Headline = ({ children }) =>
  <h1 className="headline" style={{ color: 'lightblue' }}>
    {children}
  </h1>

In smaller applications, it can be just fine to go only with plain CSS and inline style. Once your application scales, I would advise you to have a look into CSS modules. They give you a way to encapsulate your CSS so that it doesn't leak to other parts of the application. Parts of your application can still share style while other parts don't have to get access to it. CSS modules scale well in growing applications. In React these modules are most often files colocated with your React component files.

A different approach to styling a component in React is defining a Styled Component. This approach is brought to you by a library called styled-components. It colocates styling in your JavaScript with your React components and doesn't attempt to share the styling with other components. It only styles a specific component.

Last but not least, there is one neat helper library for styling in React: classnames. It enables you to introduce conditional styling.
In plain JavaScript, it would be possible to create a React className attribute with conditionals:

const Box = ({ status, children }) => {
  let classNames = ['box'];

  if (status === 'INFO') {
    classNames.push('box-info');
  }

  if (status === 'WARNING') {
    classNames.push('box-warning');
  }

  if (status === 'ERROR') {
    classNames.push('box-error');
  }

  return (
    <div className={classNames.join(' ')}>
      {children}
    </div>
  );
}

But it is so much easier with the classnames library:

import cs from 'classnames';

const Box = ({ status, children }) => {
  let classNames = cs('box', {
    'box-info': status === 'INFO',
    'box-warning': status === 'WARNING',
    'box-error': status === 'ERROR',
  });

  return (
    <div className={classNames}>
      {children}
    </div>
  );
}

It works perfectly with CSS modules too.

import cs from 'classnames';
import styles from './style.css';

const Box = ({ status, children }) => {
  let classNames = cs('box', {
    [styles.box_info]: status === 'INFO',
    [styles.box_warning]: status === 'WARNING',
    [styles.box_error]: status === 'ERROR',
  });

  return (
    <div className={classNames}>
      {children}
    </div>
  );
}

The library is almost mandatory in React applications when it comes to conditional styling.

Recommendations:
- plain CSS and inline style
- CSS modules or Styled Components
- the almost mandatory classnames library

Asynchronous Requests in React

Beyond a todo application in React, you will pretty soon have to make a request to a third-party API. In the past, you would often have used jQuery for this kind of job. Nowadays, recent browsers implement the native fetch API to conduct asynchronous requests. It uses promises under the hood. Basically a fetch looks like the following, for instance in a React lifecycle method when a component mounts:

componentDidMount() {
  fetch('my/api/domain')
    .then(response => response.json())
    .then(result => {
      // do success handling
      // e.g. store in local state
    });
}

Basically you wouldn't have to add any other library to do the job.
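To see why a dedicated request library can still be attractive, here is a hypothetical sketch of the conveniences such a library layers on top of a bare fetch: a base URL, automatic JSON parsing, and rejection on HTTP error statuses. The injectable `transport` parameter is a device of this sketch (it lets you pass fetch in the browser or a test double elsewhere); it is not part of any real library's API.

```javascript
// Sketch of a tiny fetch wrapper, illustrating the kind of conveniences
// request libraries such as axios package up. The injectable `transport`
// is an assumption of this example, not a real library API.
function createClient(baseUrl, transport) {
  return {
    get(path) {
      return transport(baseUrl + path).then(response => {
        if (!response.ok) {
          // Unlike bare fetch, reject on HTTP error statuses.
          throw new Error('HTTP ' + response.status);
        }
        return response.json();
      });
    },
  };
}

// Usage with a fake transport standing in for fetch:
const fakeFetch = url =>
  Promise.resolve({
    ok: true,
    status: 200,
    json: () => Promise.resolve({ url, items: [1, 2, 3] }),
  });

const client = createClient('https://api.example.com', fakeFetch);
client.get('/items').then(data => console.log(data.items)); // [ 1, 2, 3 ]
```

Once requests need shared headers, error handling and retries in many components, centralizing them like this (or reaching for a library that already does it) keeps the components themselves small.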
However, there are libraries whose only purpose is to provide sophisticated asynchronous requests. They come with more powerful functionalities yet remain lightweight. One of these libraries that I would recommend is called axios. It can be used instead of the native fetch API when your application grows in size.

Recommendations:
- native fetch API
- axios

React's Higher Order Components

Eventually you get to the point where you want to abstract functionality away from your components. These opt-in functionalities can be shared across components yet leave the components themselves lightweight. That's when React's higher order components come into play. These kinds of components don't need any additional library in React.

However, there are common use cases for React's higher order components that are already solved in a library called recompose, for instance a higher order component for conditional rendering (branch). When you introduce higher order components to your React application, make sure that the use case is not already covered in recompose.

Another neat helper in the recompose library is the compose() function. It allows you to opt in multiple higher order components in an elegant way. However, you could use a utility library such as Lodash or Ramda for the compose function too.

Recommendations:
- recompose for utility higher order components
- recompose or a utility library (Lodash, Ramda) for compose

Type Checking

Fortunately React comes with its own type checking abilities. With PropTypes you are able to define the incoming props for your React components.

import PropTypes from 'prop-types';

const List = ({ list }) =>
  <div>
    {list.map(item => <div key={item.id}>{item.title}</div>)}
  </div>

List.propTypes = {
  list: PropTypes.array.isRequired,
};

Whenever a wrong type is passed to the component, you will get an error message when running the application.
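The kind of check PropTypes performs at runtime can be sketched in a few lines of plain JavaScript. This is a simplified illustration of the idea (validate props against declared expectations and report mismatches), not the actual prop-types implementation; the validator names and message wording are invented for the example.

```javascript
// Simplified sketch of runtime prop validation in the spirit of PropTypes.
// Not the real prop-types implementation.
const validators = {
  array: value => Array.isArray(value),
  string: value => typeof value === 'string',
  number: value => typeof value === 'number',
};

function validateProps(componentName, propTypes, props) {
  const errors = [];
  for (const [name, expected] of Object.entries(propTypes)) {
    if (!(name in props)) {
      errors.push(`Missing required prop '${name}' in ${componentName}`);
    } else if (!validators[expected](props[name])) {
      errors.push(`Prop '${name}' in ${componentName} should be of type '${expected}'`);
    }
  }
  return errors;
}

console.log(validateProps('List', { list: 'array' }, { list: [1, 2] })); // []
console.log(validateProps('List', { list: 'array' }, { list: 'oops' }));
// [ "Prop 'list' in List should be of type 'array'" ]
```

The important property is that these checks only run while the application is running, which is exactly the limitation that static type checkers address.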
However, in scaling React applications you can add a sophisticated type checker such as Flow or TypeScript. When using such a type checker, you get errors already at development time. You wouldn't have to start your application to find out about a bug that such type checking could have prevented. That way a type checker might improve your developer experience and avoid introducing bugs in the first place. Flow was introduced by Facebook and feels more natural in the React ecosystem than TypeScript. That's why I recommend using it in a React application over TypeScript.

Recommendations:
- React's PropTypes
- Flow (or TypeScript)

Formatting in React

Basically there are three options for having formatting rules in React. It should be quite similar to other ecosystems.

The first approach is to follow a style guide that is embraced by the community. One popular React style guide was open sourced by Airbnb. Even if you don't deliberately follow the style guide, it makes sense to read it once to get the basics of formatting in React.

The second approach is to use a linter such as ESLint. You can integrate it in your toolchain when you are at the point of introducing new tooling to your project yourself.

The third and most important approach is using Prettier. It is an opinionated code formatter. You can integrate it in your editor or IDE so that it formats your code every time you save a file or commit it with git. Perhaps it doesn't always match your taste, but at least you never need to worry again about code formatting in your own or a team's code base.

Recommendations:
- reading one popular React style guide
- Prettier

State Management

Fortunately React comes with its own local state management in components. This is why it is just fine to learn plain React first. You will only master the fundamentals of React when using this.state and this.setState() for local state management.
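Once local state is no longer enough, external state containers follow a simple, well-known pattern: a single state object that can only be changed by dispatching actions through a reducer, with subscribers notified of every change. Here is a minimal sketch of that pattern in plain JavaScript; it illustrates the idea rather than the full API of any particular library.

```javascript
// Minimal sketch of the reducer/store pattern that external state
// management libraries build on. Illustration only.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action); // every change goes through the reducer
      listeners.forEach(listener => listener());
    },
    subscribe(listener) {
      listeners.push(listener);
    },
  };
}

// A reducer describes how an action transforms the state.
const todosReducer = (state, action) => {
  switch (action.type) {
    case 'ADD_TODO':
      return { todos: state.todos.concat(action.todo) };
    default:
      return state;
  }
};

const store = createStore(todosReducer, { todos: [] });
store.subscribe(() => console.log(store.getState().todos.length));
store.dispatch({ type: 'ADD_TODO', todo: 'learn React' }); // logs 1
```

Because all updates funnel through one reducer, state changes stay predictable and easy to trace, which is the main selling point of this pattern over scattering this.setState() calls across a large component tree.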
Often newcomers make the mistake of learning React together with Redux. So don't bother too early with a state management library when you are just starting to use React. But what comes when you run into your first scaling issues with React's local state management?

There are two solutions you can choose from: Redux and MobX. Both come with their advantages and disadvantages. You can read the linked article to make a more informed decision about them.

Redux is so popular, yet such an innovative space, that it comes with its own ecosystem. When using it with React, you will certainly run into the bridging library react-redux. It is the official library to connect your view layer (React) to your state layer (Redux). A similar library comes into play when you decide to use MobX instead of Redux.

As mentioned, Redux comes with its own ecosystem. The next recommendations go far beyond a simple setup for a React application. But when you have scaled your application to a certain point, where Redux has become an inherent part of it and you are confident in using Redux, I can recommend having a look into these libraries: Redux Saga, Normalizr and Reselect. I am soon releasing a book about state management in React where these topics are taught. You can subscribe to get to know when it's released.

Recommendations:
- React's local state, when doing great with React's local state
- Redux or MobX, when needed

React's Routing

Routing is often introduced at an early stage in React applications. After all, React helps you implement a view layer that is most often used in a single-page application, so routing is a crucial part of the application. But before you introduce a heavyweight router in your application, when you are just about to learn React, you can give React's conditional rendering a shot first. It is not a valid replacement for routing, but in small applications it is often sufficient to exchange components that way.
It doesn't change the URL though, but you would still be able to map different states to your view. When introducing a sophisticated router, there are a few routing solutions out there for React, but the most widely adopted solution is React Router. It works well along with an external state management library such as Redux or MobX too.

Recommendations:
- React's conditional rendering
- React Router

So in the end, the React ecosystem can be seen as a framework for React, but it stays flexible. It is a flexible framework where you can make your own decisions about which libraries you want to opt into. You can start small and add only libraries that solve specific problems for you. You can scale your building blocks along the way as your application grows. Otherwise you can stay lightweight by using plain React. Therefore, here again is a list of libraries that could complement React as the core of an application, for different project sizes. Keep in mind that the list is opinionated, but I am keen to get your feedback too.
- Small Application
  - Boilerplate: create-react-app
  - Utility: JavaScript ES6 and beyond
  - Styling: plain CSS and inline style
  - Asynchronous Requests: fetch
  - Higher Order Components: optional
  - Formatting: none
  - Type Checking: none
  - State Management: local state
  - Routing: none or conditional rendering

- Medium Application
  - Boilerplate: create-react-app with eject
  - Utility: JavaScript ES6 + Lodash or Ramda
  - Styling: CSS modules or Styled Components
  - Asynchronous Requests: fetch or axios
  - Higher Order Components: maybe + optional recompose
  - Formatting: Prettier
  - Type Checking: none or Flow
  - State Management: local state and very optional Redux
  - Routing: React Router

- Large Application
  - Boilerplate: create-react-app with eject or own boilerplate project
  - Utility: JavaScript ES6 + Lodash or Ramda
  - Styling: CSS modules or Styled Components
  - Asynchronous Requests: axios
  - Higher Order Components: maybe + optional recompose
  - Formatting: Prettier
  - Type Checking: Flow
  - State Management: local state and Redux or MobX
  - Routing: React Router

The previous recommendations are opinionated. You can choose your own flexible framework for your ideal React application. Every "ideal" React setup is subjective to the needs of the developers and the project. After all, there is no ideal React application setup.
https://www.robinwieruch.de/essential-react-libraries-framework/
I need to ping an IP and check whether it is responding or not, so simple.

Posted 31 January 2011 - 05:12 PM

Posted 01 February 2011 - 12:28 AM

#include <winsock2.h>
#include <windows.h>
#include <stdio.h>

#pragma comment(lib, "ws2_32.lib")

int main(int argc, char const *argv[])
{
    WSADATA wsi;
    memset(&wsi, 0, sizeof(wsi));
    if (WSAStartup(MAKEWORD(2, 2), &wsi)) {
        printf("cannot open winsock?\n");
        return 1;
    }
    if (argc != 2) {
        printf("usage: testping hostname\n");
        return 1;
    }
    // ICMP needs a raw socket (the original snippet passed 0 as the type)
    int s = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
    if (s < 0) {
        printf("Could not create IPPROTO_ICMP socket: %d\n", WSAGetLastError());
        return 1;
    }
    printf("Looking up %s\n", argv[1]);
    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    struct hostent *hent = gethostbyname(argv[1]);
    if (!hent) {
        printf("Could not resolve hostname %s\n", argv[1]);
        return 1;
    }
    unsigned char addr[4];
    memcpy(addr, hent->h_addr_list[0], 4);
    printf("%s is %d.%d.%d.%d\n", argv[1], addr[0], addr[1], addr[2], addr[3]);
    sin.sin_family = AF_INET;
    memcpy(&sin.sin_addr, hent->h_addr_list[0], 4);
    char icmp[32];
    memset(icmp, 0, sizeof(icmp));
    icmp[0] = 8;    // echo request
    icmp[1] = 0;
    icmp[2] = 0xF7; // checksum: one's complement of 0x0800 (type 8, all other bytes zero)
    icmp[3] = 0xFF;
    int st = ::sendto(s, icmp, sizeof(icmp), 0, (sockaddr *)&sin, sizeof(sin));
    if (st <= 0) {
        printf("Could not send ping packet\n");
        return 1;
    }
    char buf[100];
    int sl = sizeof(sin);
    st = ::recvfrom(s, buf, 100, 0, (sockaddr *)&sin, &sl);
    if (st <= 0) {
        printf("Error receiving on ping socket\n");
        return 1;
    }
    printf("Got ping response.\n");
    return 0;
}

Posted 03 February 2011 - 07:50 AM

int result;
result = system("ping");

Posted 03 February 2011 - 08:36 AM

Posted 03 February 2011 - 09:22 AM

Can't you just try to connect to it? Pinging it doesn't prove it's usable, just that it might be contactable. In fact, ping could be blocked while TCP is allowed. Just connect. If you can't, then by all means run a ping if you think it will help the user diagnose the error.
You can try using the CreateProcess() WinAPI function, which should give you more control over the I/O of the ping program, unlike system(). Some light Googling suggests Qt might provide for this case.

Posted 03 February 2011 - 09:40 AM

Posted 03 February 2011 - 09:55 AM

Responding means different things. Your users won't care if ping works. They care about connecting to the service and doing whatever they want to do. Hence, don't test if ping works; test if the service itself works by using it! If you aren't going to connect to it, then the user doesn't care if it is "responding".

If you want to use Google, pinging isn't a great test; it doesn't tell you if you can do a search. It is simpler to connect to it on port 80, which is a much better test. That said, if the latter fails then trying the former will help you diagnose the issue. For most users it won't matter either way; network connectivity issues are beyond their power to fix.

Tell us, in a bit more detail, why you care that the server responds to a ping request? CreateProcess() is a function provided by Windows; Google for it to find out more information.
http://www.gamedev.net/topic/594412-ping-socket/
In my last blog entry I showed how to use a simple class called MultiSampleCodeTimer to measure the performance (time) of pretty much any fragment of CLR code quickly and easily. That entry had a solution packaged up as a zip file, TypePerfMeasurement.zip, that you could use to duplicate the experiment yourself. However, I did not have time to show you what is particularly interesting about those particular performance measurements (I picked them for a reason). Here I wish to complete the explanation.

When I unpack and run TypePerfMeasurement.zip (please do it yourself; detailed instructions are at the end of my last post), I get the following output.

Data units of msec resolution = 0.279365 usec
10 typeof(string)                       : count: 10000 7.677 +- 3% msec
10 typeof(string).TypeHandle            : count: 10000 0.000 +- >400% msec
10 anObj.GetType() == type              : count: 10000 7.393 +- 41% msec
10 Type.GetTypeHandle(obj).Equals(tHnd) : count: 10000 4.554 +- 9% msec
10 anObj.GetType() == typeof(string)    : count: 10000 0.103 +- 7% msec
10 (anObj is string)                    : count: 10000 0.594 +- 12% msec

Please take the time to run it yourself. You don't need Visual Studio to do so (see the readme.txt file; it is as easy as running 'buildit' and running the resulting exe). Note that if you get very different numbers based on whether you run it under VS or outside VS, you did not set your VS settings as I describe in this post. Go and do that now. Your numbers are going to be different than mine, but the relative sizes between the various rows should be reasonably close.

What do these numbers mean? We need some background first. The .NET Runtime defines a type called System.Type, which is a representation of a type in the system. EVERY class that has been defined in the system has a corresponding System.Type. The System.Type object is the gateway to exploring at runtime all the data that the user typed when defining the type.
You can get superclasses, methods, properties, fields, etc. by calling the appropriate operations on System.Type. This ability to inspect and invoke characteristics of the source code is called Reflection, and System.Type is your gateway into this capability.

While the System.Type class is very powerful, that power comes at a cost. Also, most programs do NOT need this power and should NOT use it (I need to blog about that too) because of the perf cost. Thus the .NET runtime has tried to provide alternatives to using System.Type for some common operations that CAN be implemented efficiently. One such operation is testing if an object is of EXACTLY a certain type. The 'is' operator in C# can often be used for this (and is the preferred method); however, that mechanism will also match any subtype (e.g. ("foo" is object) returns true), and it only works for literal types. If the type were held in a variable of type System.Type, you could not use it. For example:

System.Type myType = typeof(string); // in real life not a literal

// and then in other methods (and maybe in a loop)
if (anObj.GetType() == myType) { /* some op */ }

In this example we use the GetType() method on System.Object to see if 'anObj' is a string. Unfortunately both the operations used above (typeof and GetType()) are relatively expensive (as shown in my data above: 5-10 msec instead of 0.6 msec for the 'is' operator).

Type checking operations like the one above are common enough that the .NET runtime has added a special type called RuntimeTypeHandle to make them fast. RuntimeTypeHandle is a less user-friendly but fast alternative to System.Type. It happens to be simply a managed wrapper for the internal pointer (to a structure called a MethodTable) that the runtime uses internally to represent a type. As mentioned, it is not very friendly; however, it can handle the scenario above, and it is very fast.
The code using RuntimeTypeHandles looks like:

RuntimeTypeHandle myType = typeof(string).TypeHandle;

// and then in other methods (and maybe in a loop)
if (Type.GetTypeHandle(anObj).Equals(myType)) { /* some op */ }

This code is significantly faster than the previous code because no System.Type object needs to be created, only RuntimeTypeHandles, each of which is simply a value type wrapping an unmanaged pointer, and thus very lean. (The code above is sadly slower than it should be because some JIT optimizations were not done in time for V2.0, but it is still significantly faster than using System.Type.)

The first operation,

RuntimeTypeHandle myType = typeof(string).TypeHandle;

is blazingly fast, as shown by our measurements (it is so small it is in the noise). This is surprising, because this line does a typeof(string) and we know that that operation is relatively slow. Setting a breakpoint in this code shows why. That line of code compiled down to

00000000 mov dword ptr [ecx+10h],790FA3E0h

Basically the JIT compiler is smart enough to realize that to fetch the RuntimeTypeHandle, you don't need to fetch the System.Type and then fetch its TypeHandle; you can just look up the value at compile time and emit code to generate the literal. This is very fast.

Sadly the other part of the operation,

if (Type.GetTypeHandle(anObj).Equals(myType)) { /* some op */ }

should be just as fast (fetch the MethodTable pointer from the object, and test for pointer equality), but is not, due to some JIT optimizations that did not kick in appropriately (you will see that we make real calls for both GetTypeHandle and Equals). Even so, it is faster because we avoid creating a System.Type object.

As a data point of just how fast the code above could be (when the JIT is fixed), consider the final experiment in the code:

result = anObj.GetType() == typeof(string);

Given what we know already, we would expect this to be slow, and yet the measurements above show it to be FASTER than using the 'is' operator.
How can that be? The answer is that the JIT recognises this sequence and knows that while it seems like two System.Type objects need to be created and compared, all you really want is a yes-no answer to a type question that can be answered using RuntimeTypeHandles. It thus substitutes this code:

0000000f cmp dword ptr [edx],790FA3E0h
00000015 sete al
00000018 mov byte ptr [ecx+0Ch],al

EDX holds 'anObj', the method table for an object is the first field of the object, and 790FA3E0h is the method table pointer (RuntimeTypeHandle) for string. (See the previous post on using !DumpMT to determine this.) Thus in one instruction we have tested whether 'anObj' is a string. The 'sete' instruction converts the processor condition flags set by 'cmp' into a boolean value in register AL, and the last instruction sets 'result' to this value. This is pretty lean and mean!

Summary: In this entry, we have taken a look at some code generation for doing 'type reflection'. We measured its performance and got some anomalies (some operations were much faster than we expected). We looked into the disassembly for those operations and determined that the JIT compiler was doing some non-trivial optimizations that made certain operations very fast (in one case faster than the built-in 'is' operation). We have learned that a System.Type object is relatively expensive compared to RuntimeTypeHandle, and we have used techniques from the last few perf blog entries to help dig into exactly why.

I again encourage you to experiment with the TypePerfMeasurement.zip example yourself and hone your skills in measuring and investigating .NET Runtime performance. In my next blog entry I will be doing 'inventory' on the performance characteristics of the 'basic' operations of the runtime.

PingBack from

After reading your post, I started wondering about the use of GetType in the implementation of our solution, in which we cache object references in a HashTable using the Name of the Type as a Key.
I need some more time to examine your TypePerfMeasurement example, but in the meantime I wrote some code to check the performance of a few methods. I found that the TypeHandle property was actually much slower than using the GetType() method or the GetTypeHandle method. Here is sample output (using a Debug build; Release was similar but slightly faster overall):

Using Type.GetTypeHandle
Time elapsed: 1187.5ms
Using typeof().TypeHandle
Time elapsed: 5296.875ms
Using typeof().GetType()
Time elapsed: 1171.875ms

Here is the code I used to generate these results:

using System;

namespace test_typehandle
{
    internal class BaseClass {}
    internal class SubClass : BaseClass {}

    class Program
    {
        static void Main(string[] args)
        {
            int begin = 0;
            int end = 10000000;
            int counter = begin;
            DateTime start = DateTime.Now;
            string key = string.Empty;
            BaseClass baseClass = new BaseClass();
            SubClass subClass = new SubClass();

            for (counter = begin; counter < end; counter++)
            {
                key = GetKeyByTypeDotGetTypeHandle(baseClass);
                key = GetKeyByTypeDotGetTypeHandle(subClass);
            }
            OutputResults("Using Type.GetTypeHandle", start);

            start = DateTime.Now;
            for (counter = begin; counter < end; counter++)
            {
                key = GetKeyByTypeHandle<BaseClass>();
                key = GetKeyByTypeHandle<SubClass>();
            }
            OutputResults("Using typeof().TypeHandle", start);

            start = DateTime.Now;
            for (counter = begin; counter < end; counter++)
            {
                GetKeyByType<BaseClass>();
                GetKeyByType<SubClass>();
            }
            OutputResults("Using typeof().GetType()", start);
        }

        static void OutputResults(string method, DateTime start)
        {
            TimeSpan elapsed = DateTime.Now.Subtract(start);
            Console.WriteLine(method);
            Console.WriteLine(string.Format("Time elapsed: {0}ms", elapsed.TotalMilliseconds));
        }

        static string GetKeyByTypeDotGetTypeHandle(object obj)
        {
            return Type.GetTypeHandle(obj).ToString();
        }

        static string GetKeyByTypeHandle<T>()
        {
            return typeof(T).TypeHandle.Value.ToString();
        }

        static string GetKeyByType<T>()
        {
            return typeof(T).GetType().ToString();
        }
    }
}

Comment: Your conclusions apply only to x86. x64 results are dramatically different. The conclusions also differ dramatically when you apply these tests to generic type parameters, instead of hard-coded types. See my recent post where I modified your benchmark suite: higherlogics.blogspot.ca/…/clr-cost-of-dynamic-type-tests.html
https://blogs.msdn.microsoft.com/vancem/2006/10/01/drilling-into-net-runtime-microbenchmarks-typeof-optimizations/
From: Beman Dawes (bdawes_at_[hidden])
Date: 2003-05-25 07:51:55

At 01:05 PM 5/23/2003, Rob Stewart wrote:
>> - and I get a compile-time error:
>> 'abort' is not a member of std.
>
>You have to conditionally compile the using directive, but you
>won't have to conditionally compile "abort" versus "std::abort"
>when you do that.

Yes. Just to be sure that is clear, here is some sample code:

#include <boost/config.hpp>
#include <ctime>

# ifdef BOOST_NO_STDC_NAMESPACE
namespace std { using ::clock_t; using ::clock; }
# endif

...

std::clock_t my_clock = std::clock();

--Beman

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/05/48233.php
Stream Analytics

Azure Stream Analytics lets users perform real-time analytics for their Internet of Things (IoT) solutions. In this article, we will use Event Hubs as the source and a Service Bus queue as the destination for Stream Analytics. Azure Event Hubs is a hyper-scale telemetry ingestion service that can store millions of events, and Azure Service Bus is a highly reliable cloud messaging service whose queues offer First In, First Out (FIFO) message delivery to competing consumers.

Event Hubs (source)

Create an Event Hubs namespace in the Azure Portal, then add a new Event Hub to the namespace.

Service Bus Queue (destination)

Create a Service Bus namespace in the Azure Portal, then add a new queue to the namespace.

Stream Analytics job

Create a new Azure Stream Analytics job that processes messages coming through the Event Hub and stores the output in the Service Bus queue. For this post, we are using a pass-through Stream Analytics query.

Service Bus Queue Reader Logic App

The Service Bus Queue Reader Logic App reads a message from the Service Bus queue, uses Azure Functions to remove special characters that get added by Azure Stream Analytics, and then sends it to a child Logic App for processing the message. Click here to download the complete Logic App definition.

Service Bus Queue Trigger

Add a new Service Bus queue trigger to read messages from the queue and trigger the Logic App.

Special characters handling using Azure Functions

Azure Stream Analytics adds certain special (Unicode) characters to output messages, which cause failures in Logic Apps when trying to decode them (e.g. to JSON format). As a result, we need to pre-process the message using Azure Functions and remove those special characters before going ahead with regular message processing in Logic Apps. Create a new C# Webhook Function App (let's call it ParseStreamAnalyticsJSONContent) in the Azure Portal and insert the code snippet into it.
Create the Function App in the same region as the Logic Apps so that it can be called from them; it is also recommended to create it in the same Resource Group as the Logic Apps. Click here to download the source code. Then add a Function App action in Logic Apps and choose the ParseStreamAnalyticsJSONContent function app that was created in the previous step.

Call Message Processor child Logic App

Call the Message Processor child Logic App that contains the logic for processing these messages, details of which are covered in the next section.

Message Processor Logic App

The Message Processor Logic App takes event data in JSON format and sends an e-mail using the Office 365 Connector. Click here to download the complete Logic App definition.

Request Trigger

Create a request trigger and provide it the JSON schema for the incoming message. The JSON schema helps tokenize message properties, which come in handy in subsequent steps.

Office365 Send Email

Send an email using the Office 365 Connector. You can use message properties coming from the request trigger to craft the email template (subject, body etc.).

Response action

Add a response action to complete this Logic App. Having a Response action is a MUST to be able to call this Logic App from another Logic App.
https://blogs.msdn.microsoft.com/vinaysin/2017/01/17/consuming-azure-stream-analytics-output-in-azure-logic-apps/
Introduction

In this part of the tutorial, we'll take a look at how we can figure out a structure when reverse engineering a binary. First, we must write a C++ program that declares and uses the structure, so that we'll be able to reverse engineer it. The basic difference between arrays and structures is the fact that we're using an index to address consecutive elements of an array, whereas with structures we're using named members to access specific data within the structure. When working with structures, we must keep in mind that the size of the structure is declared as the sum of all its data members aligned on a word boundary in memory. What does that mean? It means that the compiler will align each data structure on a 4-byte boundary, so it can read and write member values from memory more efficiently.

Global Structures

The program written in C++ that uses global structures can be seen below:

#include <iostream>

struct s {
  int x;
  int y;
  int z;
  double value;
} mys;

int main(int argc, char **argv) {
  mys.x = 1;
  mys.y = 2;
  mys.z = 3;
  mys.value = 9.9;
  std::cout << mys.value << std::endl;
  return 0;
}

Let's compile and run the program to see what it does. We can do that by downloading the MinGW software package in Windows and issuing the two commands that can be seen on the picture below: We compiled the program with the g++ compiler, and after running it, the program outputted the number 9.9. In the source code of the program, we're first defining a structure that has four members: variable x, an integer; variable y, an integer; variable z, an integer; and variable value, a double. The structure can represent points and their values in a three-dimensional space.
We're also defining an instance of the structure named mys when declaring the structure: note that this is just a shortcut for declaring the structure in the normal way, "struct s mys." If we load the program in Ida, we can quickly find the following disassembly, which initializes the numbers 1, 2, 3 into the x, y, z members of the structure and sets the value of member 'value' to 9.9. The disassembly can be seen on the picture below: In the assembly code, we can see the direct assignment of the values 1, 2, 3 and 9.9 to certain memory locations by using the variables dword_405020, dword_405024, dword_405028 for variables x, y and z and dword_405030, dword_405034 for variable 'value'. In the assembly code, there is no math involved at all, so we really can't be sure whether a structure is involved or not. The way we see it, the program references a few global variables rather than the members of a structure.

Local Structures

First, let's present the C++ program that allocates the structure locally and declares its members. Basically, the program is the same as with the global structures, except that the structure is declared locally; all the rest is the same. The whole C++ program is as follows:

#include <iostream>

struct s {
  int x;
  int y;
  int z;
  double value;
};

int main(int argc, char **argv) {
  struct s mys;
  mys.x = 1;
  mys.y = 2;
  mys.z = 3;
  mys.value = 9.9;
  std::cout << mys.value << std::endl;
  return 0;
}

We can see that we're first declaring the structure with four members: variables x, y and z of type int and variable 'value' of type double. We can copy the program to Windows, compile it and run it. We can see that on the picture below: Okay, so the program works as expected, because it outputs the number 9.9. But we're interested in the disassembled version of the program, which we can obtain very quickly by opening up the executable in Ida Pro and finding the appropriate section of the executable.
The disassembly listing can be seen below: In the disassembled function, we can quickly figure out that we're using 0x30 bytes for local variables, which reflects the fact that we're declaring the structure locally. In the previous example, where we declared the structure globally, we only used 0x10 bytes on the stack for local variables. We can also see that this time we're no longer assigning the values 1, 2, 3 and 9.9 to global variables inside the assembly; instead, we're using the stack pointer ESP with the right offsets to access certain members of the structure. The x variable from the C++ code lies at the address [esp+30h+var_18], which means that the local variable var_18 is used to reference the x member of the structure. The same goes for the other members, where var_14 is used for member y, var_10 is used for member z, and var_8 and var_4 are used for member 'value'. This gives the impression that the function is using different local variables to hold the values assigned to them, but in reality we're defining the members of the previously defined structure 's'. When we know how a certain function uses a structure, we can rename the local variables to define the structure more clearly. This also presents another useful feature of Ida: renaming the variables automatically generated by Ida itself. The disassembly with the renamed local variables could look like the picture below: Notice that the local variables that used the offsets into the structure are now renamed to reflect their real members x, y, z and 'value'. This can be a valuable help if we would like to share our work with others; maybe putting a few comments in there wouldn't be such a bad idea either.

Heap Structures

Heap structures are basically the same as local or global structures, except that they are allocated on the heap.
I guess we should first present the program written in C++ that does exactly that: it allocates the structure on the heap, then assigns certain values to its members and prints the value stored in member 'value'. Such a C++ program can be seen below:

#include <iostream>

struct s {
  int x;
  int y;
  int z;
  double value;
};

int main(int argc, char **argv) {
  s *mys = new s;
  mys->x = 1;
  mys->y = 2;
  mys->z = 3;
  mys->value = 9.9;
  std::cout << mys->value << std::endl;
  return 0;
}

Upon compiling and running the example code above, the program will print the value 9.9, as we can see on the picture below: When we load the program with Ida Pro, we can quickly find the relevant code the above program was compiled into. The assembly version of the above program can be seen on the picture below: Notice that we're first calling the Znwj function, which corresponds to the new operator in C++. That function creates a new struct on the heap and stores the pointer to the structure in eax, which we're writing to the address on the stack [esp+20h+var_4]. Afterwards, we're using this pointer to get access to the various structure members by using the appropriate offsets into the structure: [eax], [eax+4], [eax+8], [eax+10] and [eax+14]. We're also passing the 0x18 constant to the new function, which means that the struct's size is 0x18 (24 bytes).

Defining Structures Manually in Ida

In the preceding examples we saw how the structures from C++ were translated into assembly code. Let's summarize how the structure members were accessed in each of the three examples.
When we declared the structure globally, the structure was accessed as follows:

mov ds:dword_405020, 1
mov ds:dword_405024, 2
mov ds:dword_405028, 3

When we declared the structure locally, the structure was accessed as follows:

mov [esp+30h+var_18], 1
mov [esp+30h+var_14], 2
mov [esp+30h+var_10], 3

When we declared the structure on the heap, the structure was accessed as follows:

mov eax, [esp+20h+var_4]
mov dword ptr [eax], 1
mov dword ptr [eax+4], 2
mov dword ptr [eax+8], 3

We can see that in the second and third case, we're using offsets to access certain members of the structure. We should tell Ida that we're dealing with a structure, since Ida can only detect the use of a known structure by itself; it certainly can't detect the use of a custom structure like the one in the above cases. We can open the Structures window by going to View – Open Subviews – Structures to see if Ida has detected the use of any structure in the program. Currently there are no structures in this executable, as we can see below: There are some comments presented in the Structures view that inform us of how we can use the Structures window. To create a structure, we can press the Insert key, while the Del key deletes a structure. We want to press the Insert key to insert a new structure. Upon doing that, the following dialog box will pop up: We need to enter the name of the structure, which is only the letter 's', and press OK. A new empty structure will be added to the Structures window, as can be seen on the picture below: To add members to the data structure, we must position our cursor where we want the member to be and press the letter 'd', repeatedly if necessary, until the added member is the required size. Afterwards, we can right-click the name, which is field_0 by default, and change it to something else. Using that approach, we must add all the members of the structure, which in our case are the variables x, y, z and 'value'.
We must also ensure the proper alignment of all the fields in the structure. At the end, we can even collapse the structure with the minus '-' sign to represent it in one line; the opposite operation is to expand the structure by pressing the plus '+' sign. There is an easier way to create a structure in Ida: we can import a structure written in the C/C++ programming language itself by importing a header file into Ida. We can do that by first creating the header file, which will contain only the defined structure itself and nothing else. Then we need to go to File – Load File – Parse C Header File and choose the created header file. Ida will parse and import it and then display the following notification window: The window tells us that the structure was successfully imported. After that, we should go to the Structures window and Insert a new structure with the same name as the one we imported from the header file. This will actually add the structure among all the structures in the executable. We can see the structure 's' being added to the Structures window below: To use the structure in the disassembly listing, we have to double-click on the offset that is being used to reference different members of the structure. Let's take a look at the following program disassembly: In the picture above, it's already evident that we renamed the offsets that reference different members of the structure to x, y, z, valueH and valueL. After that, we should double-click on the variable x to be taken to the actual stack frame memory address (this can be memory allocated in any section), as follows: Then we should select the first variable of the structure, in this case variable x, and select Edit – Struct Var. This will display a list of the known structures within the executable.
In our case, only the imported structure 's' is known, as we can see on the picture below: The structure will be applied to the current address and will consume as many bytes as the size of the structure. This is why we must always select the first member of the structure: the structure will be applied to that memory address and its corresponding higher memory addresses. After the structure is applied to the current memory address, the disassembly will look like the picture below: We can see that it was worth it, because now the disassembly view is much clearer and easier to read. Notice that we now have a local variable named mystruct that is used later by the function to reference different members inside it.

Conclusion

We've seen how structures are reverse engineered in the Ida disassembler and how to recognize them. But what is more important is the fact that we've looked at how to import structures into Ida and apply them to memory locations, which automatically updates the disassembly view to make it more readable and easier to understand. We should also keep in mind that Ida applies known structures from various system libraries to the executable by default when it is analyzed. Usually, different structures are used in different API functions that are part of the system. All the recognized structures will also be added to the Structures window, which we can use throughout the program analysis.

References

[1] Chris Eagle, The IDA Pro Book: The unofficial guide to the world's most popular disassembler.
https://resources.infosecinstitute.com/reverse-engineering-structures/
Paul Kimmel on VB/VB .NET: Creating Visual Studio .NET Add-Ins

Implementing the IDTExtensibility2 Interface

The Add-In wizard provides an implementation for OnConnection automatically. OnConnection initializes the applicationObject and the addInInstance references. These objects are used to create and insert a NamedCommand and add a menu item to the Tools menu on lines 34 to 67. The applicationObject reference refers to the Development Tools Environment (DTE), which is the root object representing the host IDE. The addInInstance object is a reference to the specific instance of the Add-In, ensuring that an invocation refers to a specific object instance. The other four IDTExtensibility2 interface methods are implemented as empty procedures. Add code to these procedures if you need additional code for initialization, startup, or de-initialization.

Implementing the IDTCommandTarget Interface

The IDTCommandTarget interface methods are implemented too. QueryStatus determines if the command is available, returning this state in the statusOption parameter, and Exec represents the point of invocation. Choose to implement those interface methods that you need to support your Add-In and leave the rest as empty procedures. Insert your response code between lines 76 and 77; for example, you might simply insert a call to a DoExecute method and implement your custom behavior beginning in that method. Insert the statement MsgBox("MyAddIn") on line 77 and press F5 to run and test the Add-In. Pressing F5 will run a second copy of Visual Studio .NET with the Add-In available on the Tools menu. Click the new Add-In menu item, and you will see the message box with the text MyAddIn displayed. (Keep in mind that the setup target created by the wizard will be compiled too, so be patient when you press F5. Building both the Add-In and setup project may take a couple of minutes.)
Debugging Add-Ins in Visual Studio .NET

Before you register your Add-In and put it into general use, you can debug it from VS .NET. The wizard sets debug properties indicating that VS .NET is the host application. When you press F5, VS .NET will run another instance of the IDE and allow you to test your Add-In. Set a breakpoint in the source code of the Add-In in the first instance of the IDE. When you run your Add-In from the second instance, it will halt when your breakpoint is hit. At that point you can debug your Add-In as you would any other application.

Registering Add-Ins

Add-Ins are assemblies. You can register Add-Ins as private assemblies for personal use or in the Global Assembly Cache (GAC) for shared use. Both types of registration are covered here.

Private Assembly Registration

Applications, like an Add-In DLL, have application settings that are stored in the Registry. You may have heard that .NET assemblies support xcopy deployment. This is true of .NET assemblies, but not of .NET assemblies that are used by COM-based applications. Add-Ins use System.Runtime.InteropServices, which suggests that Add-Ins are used by COM-based applications, specifically the Add-In Manager. For this reason you will need to register your Add-Ins. Additionally, you will need to add registry settings allowing the Add-In Manager to display the Add-In. There are several steps that you must perform to register your Add-In assembly. The first thing you need to do after you have tested your assembly is to run regasm.exe. The regasm.exe utility is found by default in \winnt\Microsoft.Net\Framework. The command to register an assembly is regasm <path\>myaddin.dll /codebase, where myaddin is the name of your Add-In, including the path information. The second step is to add information instructing the Add-In Manager how to make your Add-In accessible.
You can create a registry script by copying the structure of the following listing into a text file with a .reg extension:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\7.0\AddIns\MyAddIn.Connect]
"FriendlyName"="MyAddIn"
"Description"="My First AddIn"
"LoadBehavior"=dword:00000001
"CommandLineSafe"=dword:00000001
"CommandPreload"=dword:00000001

Figure 1: Registry entries describe the Add-In in the Add-in Manager.

The preceding registry script adds a key to the registry. Replace MyAddIn.Connect with the namespace and class of your Add-In. (Connect is the class name created by the Add-In wizard by default.) FriendlyName is the name of the Add-In that is displayed in the Available Add-Ins list of the Add-in Manager (see Figure 1). Description indicates the text shown in the Description field, and the three remaining keys indicate the load behavior of the Add-In.

Shared Assembly Registration

Shared assemblies are stored in the Global Assembly Cache, called the GAC. The GAC can be viewed by navigating Windows Explorer to the \winnt\assembly folder. When you navigate to this folder, the GAC snap-in is automatically loaded by Windows Explorer (see Figure 2). Figure 2: The Global Assembly Name cache folder: a plug-in used by Windows Explorer automatically when you navigate to the \winnt\assembly folder. Global assemblies can be shared, and they are distinguished by a strong name (a public key) rather than the file name. Hence you may have more than one file with an identical name in the GAC, but you will need to generate a strong name for shared assemblies. Strong names are generated by the Add-In wizard, or you can explicitly use the sn.exe utility to generate a strong name file. When you have run the Add-In wizard, added your custom code, and tested the Add-In, then you are ready to register it. The gacutil.exe program can be used to add an assembly to the GAC.
The command is gacutil /i <path\>myaddin.dll, where myaddin is the name of your Add-In. (By default the gacutil.exe utility is located at "C:\Program Files\Microsoft.NET\FrameworkSDK\Bin\gacutil.exe".) Include the complete path information for gacutil.exe and your Add-In. If you browse to the shared assembly directory (see Figure 2), you will be able to confirm that your assembly has been added to the GAC. Finally, you will need to add the registry entries necessary for the Add-in Manager to manage the Add-In. In early versions of beta 2 this process seems to be a little unreliable. This is to be expected from beta software. Expect refinements in the creation and testing of Add-Ins in release versions of Visual Studio .NET. Return to this column for more information on shared assemblies and building Add-Ins as revisions to VS .NET are made available.

This article was originally published on September 18, 2001.
https://www.developer.com/lang/other/article.php/10942_886091_2/Paul-Kimmel-on-VBVB-NET--Creating-Visual-Studio-NET-Add-Ins.htm
Contributing to PHP: How to Contribute to PHP's Manual

Why Contribute to PHP?

Why should you consider contributing to PHP? PHP is an open source project that relies upon the willingness of its community to invest their time into the project. The more people become involved, the more the community at large stands to benefit. Whether it's improving the documentation around the language or contributing bug fixes or features to the core, the cumulative efforts of every developer quickly add up. Becoming more involved with PHP will also help to take your knowledge of the language to the next level. Contributing to the documentation will give you a more thorough knowledge of the language, and contributing to the core will keep you up to date with any changes that are happening to it. Becoming a contributor will also enable you to ultimately request a php.net account, which will put you in a position to help decide what direction the language is heading in. It is therefore definitely a worthwhile thing to do if you enjoy working with PHP.

About PHP's Documentation

The documentation is maintained in the DocBook XML format. Generally speaking, little knowledge of this format is required to be able to contribute to PHP's documentation. It's easy to pick up, so you can simply follow along with the XML syntax used in other files of the documentation. The folder structure for the documentation looks as follows: The doc-base folder contains some tools for converting the XML-based documentation into other formats. You probably don't need to concern yourself much with this folder, except when creating custom entities (typically used when adding external links to the docs). The en folder is specific to the English documentation (other translations follow their respective two-letter country code names). This folder is the one you'll predominantly be working in. The reference folder contains directories that each pertain to an extension.
Each extension folder follows the convention of either having a functions folder (for procedural extensions) or folders named after the extension's classes (for object-oriented extensions). Each extension folder also contains a few other files, including a book.xml file for the extension's landing page and a versions.xml file holding versioning information for when each function was introduced. For more information on the documentation structure, see the Files structure section of the Manual sources structure page. Also, while an ongoing effort is being made to transition the documentation to Git, it is currently maintained using SVN. This means that you'll need to install SVN and have a basic familiarity with it if you want to set the docs up locally (as seen later).

A First-Time Contributor

If you're a first-time contributor, then you'll want to start off by using the online documentation editor. This provides a user interface to enable anyone to log in (using OAuth) and to submit simple patches to the documentation. It's generally best to log in with the same account each time, so that your contributions remain under a single name only (making it easier to track them if you decide to apply for a php.net account later). The online editor is almost always the starting point for new documentation contributors. By demonstrating the ability to submit patches and have them subsequently accepted, it shows competence and willingness to contribute. As a first-time contributor, you'll also want to familiarize yourself with the style guidelines of the documentation before submitting any patches.

Example

Let's resolve bug #71716 from bugs.php.net. The report says that one of the demonstrative examples is incorrect, where the MongoDB Client class is namespaced incorrectly.
After verifying this (by running the connection script locally), we can fix this using the online editor: For more information on how to use the online editor, see the wiki editor's Getting Started page.

PHP Docs Local Setup

The online editor, however, isn't the nicest way to contribute to the documentation, and it is also very limited in what it can do. It should therefore only be used for minor edits and serve as a first stepping stone to getting involved in the documentation side of the project. If you would like to do anything outside of the editor's capabilities (such as documenting a function or extension from scratch), or you are becoming a frequent contributor to the docs, then you'll want to set up the docs locally and request a php.net account. Setting up the PHP docs locally on your computer is a one-time inconvenience. It can be done with the following steps:

Create a docs directory. This will be used to hold the docs and docs-related stuff in one place:

mkdir phpdocs
cd phpdocs

The rest of the steps all assume you are in the phpdocs directory, unless stated otherwise.

Clone the docs. Use SVN to pull down a copy of the repo. In our case, we are looking to get the English manual, and so we specify the doc-en module (at the end):

svn checkout

Clone PhD. PhD is a tool that renders the DocBook XML format into different output formats.

git clone

Clone the php.net website. This will be used to view the documentation locally, so that you can see the changes as they would appear on the website before pushing them.

git clone web-php
rm -fR web-php/manual/en
ln -s rendered-docs/php-web web-php/manual/en

We need to remove the dummy docs located in web-php/manual/en, and then set up a symbolic link to here from the generated documentation files (generated via PhD) at rendered-docs/php-web.

Setup SVN Keywords. The DocBook files all have a <!-- $Revision: 338832 $ --> comment near the top of the file.
We need to configure SVN to automatically update this value whenever a file changes by configuring its automatic properties. To do this, simply add the following line to ~/.subversion/config (the automatic properties part is somewhere near the end):

*.xml = svn:eol-style=native;svn:keywords=Id Rev Revision Date LastChangedDate LastChangedRevision Author LastChangedBy HeadURL URL

The revision ID tracks file changes, which is used by docs translators to see which files need to be updated.

Add workflow file. This is optional, but you'll find it very useful to have the commands to validate, build, and view the docs locally at hand. Paste the following into a file called ref:

# Validate the DocBook XML
php ~/phpdocs/doc-en/doc-base/configure.php

# Build the docs
php ~/phpdocs/phd/render.php --docbook ~/phpdocs/doc-en/doc-base/.manual.xml --package PHP --format php --output ~/phpdocs/rendered-docs

# Run the php.net website locally
cd ~/phpdocs/web-php
php -S 0.0.0.0:4000

The above assumes that your phpdocs folder is located in your home directory; if not, then update the paths accordingly. And now you're all set up!

Docs Workflow

With everything set up, it's time to take a look at the workflow for contributing to the docs locally. For this, let's resolve bug #71866. We start by making sure the versioned repos are all up to date. This means performing an svn up in doc-en/en and doc-en/doc-base, and a git pull in web-php and phd (typically, you'll find that only the first repo, doc-en/en, needs to be updated). Next, we open up the doc-en/en/reference/mbstring/functions/mb-detect-order.xml file and rectify the function's return value description as per the bug report information:

 <refsect1 role="returnvalues">
  &reftitle.returnvalues;
  <para>
-  &return.success;
+  When setting the encoding detection order, &true; is returned on success or &false; on failure.
+ </para>
+ <para>
+  When getting the encoding detection order, an ordered array of the encodings is returned.
  </para>
 </refsect1>

We then ensure that we haven't broken the docs build by running the docs validator:

php ~/phpdocs/doc-en/doc-base/configure.php

The docs should validate successfully, and as a result, we should see a picture of a nice ASCII cat. Now, we can either view the changes locally or commit them. Typically, you will just want to commit the changes (unless you've made drastic changes and want to confirm that they look fine), but we'll view them this time for didactic reasons. We build the documentation by running the PhD tool:

php ~/phpdocs/phd/render.php --docbook ~/phpdocs/doc-en/doc-base/.manual.xml --package PHP --format php --output ~/phpdocs/rendered-docs

Sometimes this stage fails for me the first time because PHP runs out of memory; just re-run the command and it should build fine. Then change into the php.net website's directory to start the local PHP server:

cd ~/phpdocs/web-php
php -S 0.0.0.0:4000

Visiting localhost (on port 4000) and browsing to the mb_detect_order() function, we can see the changes made: Now that the changes have been made and we're happy with them, we can commit them. The following parts assume that you have a php.net account. If that's not the case, then you need not concern yourself with the rest of this section just yet (instead, you may wish to request a php.net account; see below).

cd ~/phpdocs/doc-en/en
svn ci -m "Resolve doc bug #71866" reference/mbstring/functions/mb-detect-order.xml

The commit message references the doc bug so that an automated comment can be made on our behalf on the bug report we're resolving. You also won't be able to view your changes right away on php.net, but they should typically propagate to the servers within a few hours. We can then visit bug #71866 and, assuming we're logged into our php.net account, navigate to the "Developer" tab and close the bug report. That's the basic workflow of resolving documentation bugs. Quick and easy!
Requesting a php.net account Having set the docs up locally and seen the general workflow, you may then want to consider requesting a php.net account with docs karma. (Karma gives contributors differing privileges for various parts of the PHP project. In order to contribute directly to the documentation, docs karma is required on your php.net account.) There are no concrete prerequisites that must be met prior to requesting a php.net docs account. Generally, though, accounts will only be granted to those who are looking to actively contribute to and maintain the documentation. Past contributions and current efforts are therefore looked at as evidence of competence and willingness to contribute when requesting an account. To request an account, visit the account request page and read it carefully. Then, fill out the form, submit it, and then email the PHP docs mailing list (phpdoc@lists.php.net) stating why you would like karma, what your wiki account username is, and reference any past contributions you’ve made to the project. If your request is successful, then you’ll be granted a @php.net account. Documentation to-dos Aside from fixing the numerous filed bug reports on the documentation, there are other areas of the manual that could be worked on too, including: - Translating the documentation - Expanding upon the partially documented material (typically by adding examples) – the reflection extension is a prime example of this. - Documenting the undocumented – this will involve digging into the PHP source code using the lxr tool - General page tidy-ups – rephrasing parts (such as removing any “you”s), removing PHP 4-related material, merging useful comments into the manual itself, etc. General Tips The following are some tips on contributing, some based on my experience, and others simply reiterating from above because they are important: Follow the style guidelines. 
Please make sure you’re following the style guidelines, and when contributing, try to rectify documentation that doesn’t follow it. I generally perform at least a quick search for “I ” and “you” in the file I’m working on. Check tangential things. This generally refers to resolving bug reports – don’t just fix the bug, check that related things aren’t affected too. This helps to ensure that bugs are properly resolved. For example, whilst bug #71638 only referenced the stream_filter_appendfunction, its counterpart stream_filter_prependalso needed to be rectified. Be terse. This applies to both writing and code examples. Poorly worded documentation and convoluted code examples make things more difficult to understand. Keep things short and simple. Separate example code from its output. Make sure the output of examples is not put with the examples themselves. This convention helps to keep the example clean and clearly shows its output. For example, when cleaning up the token_get_allfunction page, I had to move a commented dump of a variable and an explanation out of the example. Check the page order. Ensure that the order of content on the page is correct. This creates consistency in the manual, which enables developers to reference it easier. Again, looking at the cleaning up of the token_get_allfunction, we can see that the changelog section was initially at the bottom of the page, when it should have been above the examples section. Remove references to PHP 4. PHP 4 is long gone, and references to it are not only irrelevant now, but they convolute the documentation. Look out for any mentions of PHP 4 (I usually perform a quick grep for it in the file I’m working on) and remove any mentions of it. Properly version files. When creating a new file in the documentation, ensure that its revision id is set to nothing: <!-- $Revision$ --> Other than when creating new files or translating the documentation, you will not need to worry about the revision ID. 
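As an aside, the keyword-expansion mechanism behind the revision ID is easy to check mechanically. The sketch below is purely illustrative (in TypeScript; the PHP docs project has no such tool, and both function names are made up): it shows how one could verify that a new file carries the bare $Revision$ keyword and how an already-expanded keyword could be parsed when comparing translation freshness.

```typescript
// Hypothetical helpers for the SVN $Revision$ keyword convention described
// above. SVN expands "$Revision$" to "$Revision: NNNN $" on commit when the
// svn:keywords property is set; both function names here are invented.
function hasBareRevisionKeyword(contents: string): boolean {
  // A newly created file should contain the unexpanded keyword.
  return contents.includes("<!-- $Revision$ -->");
}

function expandedRevision(contents: string): number | null {
  // Extract the revision number from an expanded keyword, if present.
  const match = contents.match(/\$Revision:\s*(\d+)\s*\$/);
  return match ? parseInt(match[1], 10) : null;
}
```

Translators compare the expanded revision in a translated file against the English original to see which pages are stale, which is exactly why the keyword must be left bare in new files.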
If you’re unsure about something, start by checking out the docs FAQ. If that doesn’t help, then simply send an email to the php-docs mailing list for support. And just to reiterate, please check that the documentation build passes before committing any changes to the manual! Conclusion We’ve seen two workflows in this article when contributing to PHP’s documentation. The first was using the online editor to resolve bug #71716, and the second was using a local setup of the docs to resolve bug #71866. I hope this has served as a clear and pragmatic introduction on how to contribute to PHP’s documentation. In part 2 of this series, I will be showing you how to get involved with PHP’s internals. This will involve looking at how to fix bugs in the core by taking a pragmatic approach again.
https://www.sitepoint.com/how-to-contribute-to-phps-documentation/
03, 2008 05:33 PM

Databases have been gathering a significant amount of buzz lately. IBM recently invested in EnterpriseDB, which supports a Cloud edition running on Amazon EC2. Amazon released their own cloud database late last year. Google's BigTable has also been studied by the community even though it is not open source. Continuing along these lines, two open source projects, HBase and Hypertable, have leveraged the open source Map/Reduce platform Hadoop to provide Bigtable-inspired scalable database implementations. InfoQ sat down with Doug Judd, Principal Search Architect at Zvents, Inc. and Hypertable project founder, to discuss their implementation.

1. How would you describe Hypertable to someone first hearing about it?

Hypertable is an open source, high performance, scalable database, modeled after Google's Bigtable. Over the past several years, Google has built three key pieces of scalable computing infrastructure designed to run on clusters of commodity PCs. The first piece of infrastructure is the Google File System (GFS), which is a highly available filesystem that provides a global namespace. It achieves high availability by replicating file data inter-machine (and inter-rack), which makes it impervious to a whole class of hardware failures that traditional file and storage systems aren't, including failures of power supplies, memory, and network ports. The second piece of infrastructure is a computation framework called Map-Reduce that works closely with the GFS to allow you to efficiently process the massive amount of data that you have collected. The third piece of infrastructure is something called Bigtable, which is analogous to a traditional database. It allows you to organize massive amounts of data by some primary key and efficiently query the data. Hypertable is an open source implementation of Bigtable with improvements where we see fit.
If you run a website that sees high traffic volume, then you should care about scalable computing infrastructure. Your web server logs contain valuable information concerning user behavior on your site. You can run analytic calculations over this log data and use the results to provide a better service. It allows you to answer questions such as, "If a customer buys product X, what else are they likely to buy?" or "If a user lands on page Y, what is the average number of subsequent clicks they do on the site before terminating the session?" 2. Why did the team start the project? The Engineering team at Zvents started this project because we recognized the value of data and data-driven engineering. We realized that at scale, the traditional tools for storing and processing information fall down. At the time that we started on the project, an open source Bigtable implementation did not exist, so we decided to build it ourselves. The reason that we chose open source is because we felt that an open source implementation of Bigtable would be inevitable. By leading the development effort, we would have a leg up on the competition in terms of knowledge, expertise, and credibility. 3. Does Hypertable require Hadoop to run? No, Hypertable does not strictly require Hadoop to run. Hypertable is designed to run on top of an existing distributed file system like the one provided by Hadoop, HDFS. The interface to the underlying file system has been abstracted via a broker mechanism. Hypertable communicates with the underlying file system by speaking a standard protocol to a DFS broker process. This allows Hypertable to run on top of any file system, as long as a broker has been implemented for it. The primary DFS broker that we've been using is the one for HDFS, but we also have a broker for KFS, the Kosmos File System, and one called the "local broker" which just reads and writes data to and from a locally mounted filesystem. 
The "local broker" is one that we use for testing, but can also be used to run Hypertable on top of any distributed file system that is mountable via FUSE. As far as the rest of Hadoop goes, we intend to write an InputFormat class that will allow tables in Hypertable to be used as inputs to map-reduce jobs. 4. How does it compare to HBase and why not contribute to that project instead? Hypertable differs from HBase in that it is a higher performance implementation of Bigtable (editors note, an interview from the HBase team will run in the coming weeks on InfoQ). I initially started working with Jim Kellerman and some members of the Hadoop team on the HBase effort. We had some differences of opinion on how it should be developed. In particular, we disagreed on the choice of implementation language. They insisted on Java, while I pushed for C++. That's when I forked and started the Hypertable project. The following document entitled, "Why We Chose C++ Over Java" gives a technical explanation of our decision to use C++: Although we had a split early on, we're still friends with the HBase team. In fact, we all went out to lunch last week and had an enjoyable time trading implementation war stories and exchanging ideas. 5. In my novice exploration of Hadoop the M/R paradigm applies well to batch processing of data. How does Hadoop apply in a more transaction/single request based paradigm? Map-reduce is definitely an offline batch processing system. It's great for sorting and sifting through massive amounts of information (e.g. log data), offline. Often, these large scale, offline computations will generate massive tables of information to be consulted by a live service to provide a better user experience. This information might include per-user profile data, or per-query statistical information. This is where Hypertable enters the picture. It provides a scalable solution to store data indexed by a primary key that can be queried online. 6. 
What has been the best thing you've found working with Hadoop? It basically works. We've used it successfully at Zvents and many others have used it in an offline production setting. In fact, Yahoo recently migrated their WebMap processing onto Hadoop. They've described their recent accomplishment in the following press release: 7. The worst? I guess the worst part about working with Hadoop is that the project has been going on for years without a stable 1.0 release. At times, it's been a challenge to pin down a stable release. The project has gotten a lot more stable lately. The other problem that we've had with Hadoop is the lack of support for a 'sync' operation in their distributed filesystem. This makes it unsuitable for storing a commit log. They are actively working on this and should have it available by the time we release Hypertable 1.0. 8. When is 1.0 being targeted for? We are shooting for a "beta" release sometime over the next month or two. The 1.0 release should follow soon thereafter. If you would like to be notified about the "beta" and 1.0 releases, please join the Hypertable announcement mailing list: 9. What companies are using Hypertable? Given its "alpha" status, there are not yet any companies that have built their service on top of Hypertable. However, there has been a fair amount of interest and there have been over 3000 downloads of the software. We've also gotten great feedback from a wide variety of people on the mailing lists. One person from a company outside of Omaha, Nebraska reported a regression failure. After a bit of back and forth on the mailing list, he put his machine outside the firewall and gave me VNC access to debug the problem. I was able to isolate the problem, which turned out to be a timestamp conversion problem that was caused by running the software outside of the PST time zone. Another person who is a Ph.D. 
candidate at Stony Brook University in New York recreated Google's Bigtable benchmark test using Hypertable with promising results. This kind of community feedback has been a big help in solidifying the software. Last week we gave a Hypertable presentation to a group of thirty or so engineers at Facebook. They indicated that they have a need for this kind of infrastructure in their backend tool chain and would be willing to give it a try in a couple of months when the software is closer to being production ready. We expect to see real applications built on Hypertable around the time of the 1.0 release.

10. What does the future hold for Hypertable?

Near term, we're tightly focused on high availability – specifically, server recovery. As soon as that is in place and well tested, the project will move into "beta" status with a 1.0 release sometime soon after that. We expect Hypertable to replace MySQL for many web service applications. We're working to replace the acronym LAMP with LAHP, the 'H' referring to Hypertable, of course. This means rock solid reliability and continued performance improvements. That should keep the team busy for the foreseeable future.

Doug will be giving a Hypertable presentation at the SDForum Software Architecture & Modeling SIG on April 23rd in Palo Alto.
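The DFS broker design Judd describes — one interface, one broker process per underlying filesystem — is worth making concrete. The sketch below is a rough TypeScript illustration of that abstraction, not Hypertable's actual API; the interface, class, and method names are all invented, and the "local broker" here is simply backed by an in-memory map.

```typescript
// Rough sketch of the broker abstraction from the interview: the database
// layer speaks one interface, and each filesystem (HDFS, KFS, local) gets
// its own broker implementation. All names are illustrative.
interface DfsBroker {
  write(path: string, data: string): void;
  read(path: string): string;
}

// A "local broker" analogue: backs the interface with an in-memory map.
class LocalBroker implements DfsBroker {
  private files = new Map<string, string>();

  write(path: string, data: string): void {
    // Append semantics, since a commit log is append-only.
    this.files.set(path, (this.files.get(path) ?? "") + data);
  }

  read(path: string): string {
    const data = this.files.get(path);
    if (data === undefined) throw new Error(`no such file: ${path}`);
    return data;
  }
}

// Higher layers depend only on DfsBroker, so swapping in an HDFS- or
// KFS-backed broker would not change this code at all.
function appendToCommitLog(broker: DfsBroker, entry: string): void {
  broker.write("/hypertable/commit.log", entry + "\n");
}
```

This is also why the missing 'sync' operation in HDFS matters in the interview: a commit log written through a broker is only as durable as the filesystem behind it.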
http://www.infoq.com/news/2008/04/hypertable-interview
Generics in .NET

The following declarations define a simple generic class in C++/CLI, C#, and Visual Basic:

generic<typename T>
public ref class Generics
{
public:
    T Field;
};

public class Generic<T>
{
    public T Field;
}

Public Class Generic(Of T)
    Public Field As T
End Class

When you create an instance of a generic class, you specify the actual types to substitute for the type parameters. This establishes a new generic class, referred to as a constructed generic class, with your chosen types substituted everywhere that the type parameters appear. The result is a type-safe class that is tailored to your choice of types, as the following code illustrates.

static void Main()
{
    Generics<String^>^ g = gcnew Generics<String^>();
    g->Field = "A string";
    //...
    Console::WriteLine("Generics.Field = \"{0}\"", g->Field);
    Console::WriteLine("Generics.Field.GetType() = {0}", g->Field->GetType()->FullName);
}

public static void Main()
{
    Generic<string> g = new Generic<string>();
    g.Field = "A string";
    //...
    Console.WriteLine("Generic.Field = \"{0}\"", g.Field);
    Console.WriteLine("Generic.Field.GetType() = {0}", g.Field.GetType().FullName);
}

Public Shared Sub Main()
    Dim g As New Generic(Of String)
    g.Field = "A string"
    '...
    Console.WriteLine("Generic.Field = ""{0}""", g.Field)
    Console.WriteLine("Generic.Field.GetType() = {0}", g.Field.GetType().FullName)
End Sub

Generics terminology

Several terms are used to discuss generics in .NET. For example, a generic type such as Dictionary<TKey,TValue> has two type parameters, TKey and TValue. Constraints are limits placed on generic type parameters. For example, you might limit a type parameter to types that implement the System.Collections.Generic.IComparer<T> generic interface, to ensure that instances of the type can be ordered. You can also constrain type parameters to types that have a particular base class or that have a parameterless constructor.

A generic method has its own type parameter, independent of any class:

generic<typename T>
T Generic(T arg)
{
    T temp = arg;
    //...
    return temp;
}

T Generic<T>(T arg)
{
    T temp = arg;
    //...
    return temp;
}

Function Generic(Of T)(ByVal arg As T) As T
    Dim temp As T = arg
    '...
    Return temp
End Function

A generic method can appear in a non-generic class, and a method of a generic class can use the class's own type parameter without declaring one of its own:

ref class A
{
    generic<typename T>
    T G(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
};

generic<typename T>
ref class Generic
{
    T M(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
};

class A
{
    T G<T>(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
}

class Generic<T>
{
    T M(T arg)
    {
        T temp = arg;
        //...
        return temp;
    }
}

Class A
    Function G(Of T)(ByVal arg As T) As T
        Dim temp As T = arg
        '...
        Return temp
    End Function
End Class

Class Generic(Of T)
    Function M(ByVal arg As T) As T
        Dim temp As T = arg
        '...
        Return temp
    End Function
End Class

Advantages and disadvantages of generics

For example, you can create a strongly typed linked list of strings:

LinkedList<String^>^ llist = gcnew LinkedList<String^>();

LinkedList<string> llist = new LinkedList<string>();

Dim llist As New LinkedList(Of String)()

The list is then strongly typed throughout, including methods such as FindLast and Find.

Note: A nested type that is defined by emitting code in a dynamic assembly or by using Ilasm.exe (IL Assembler) is not required to include the type parameters of its enclosing types; however, if it does not include them, the type parameters are not in scope in the nested class. For more information, see "Nested Types" in MakeGenericType.

Class Library and Language Support

.NET provides a number of generic collection classes in the following namespaces: The System.Collections.Generic namespace contains most of the generic collection types provided by .NET, such as the List<T> and Dictionary<TKey,TValue> generic classes. The System.Collections.ObjectModel namespace contains additional generic collection types, such as the ReadOnlyCollection<T> class.

Related Topics: Introduction to Generics
Reference: System.Collections.Generic, System.Collections.ObjectModel, System.Reflection.Emit.OpCodes
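As a cross-language check, the same generic class and generic method shapes can be expressed in TypeScript. This is an analogy only, and the difference matters: .NET generics are reified, so type arguments exist at runtime (which is why the examples above can print `g.Field.GetType().FullName`), whereas TypeScript generics are erased after compilation.

```typescript
// TypeScript analogue of the Generic<T> class and generic method from the
// page above. Unlike .NET, TypeScript's type parameters are erased at
// runtime, so only the compile-time shape is mirrored here.
class Generic<T> {
  field: T;
  constructor(field: T) {
    this.field = field;
  }
}

// Analogue of the standalone generic method: the type parameter belongs to
// the function itself, not to any enclosing class.
function genericIdentity<T>(arg: T): T {
  const temp: T = arg;
  return temp;
}

// A constructed generic type: substituting string for T yields a type-safe
// instance, just like `new Generic<string>()` in the C# example.
const g = new Generic<string>("A string");
```

Assigning a number to `g.field` would be a compile-time error in both languages, which is the type-safety benefit the page describes.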
https://docs.microsoft.com/en-us/dotnet/standard/generics/index
All the fundamental React.js concepts, jammed into this single Medium article (updated August 2019) An introduction to learn React’s Why, What, and How This article is an adaptation of an interactive guide at jsComplete.com/react-intro. The jsComplete version has embedded code examples and links to navigate the content. Before you begin, please note that this is a beginner-friendly guide that covers the concepts I classify as fundamentals for working with React. It is not a complete guide to React but rather a complete introduction. At the end of this guide, I list a few next-level resources for you. This guide will pave the way for you to understand them. React is defined as a JavaScript library for building user interfaces. Let’s start by talking about the two different parts of this definition. React is a JavaScript “library”. It is not exactly a “framework”. It is not a complete solution and you will often need to use more libraries with React to form any solution. React does not assume anything about the other parts in any solution. Frameworks serve a great purpose, especially for young teams and startups. When working with a framework, many smart design decisions are already made for you, which gives you a clear path to focus on writing good application-level logic. However, frameworks come with some disadvantages. For experienced developers working on large codebases, these disadvantages are sometimes a deal breaker. Frameworks are not flexible, although some claim to be. A framework usually wants you to code everything a certain way. If you try to deviate from that way, the framework usually ends up fighting you about it. Frameworks are also usually large and full of features. If you need to use only a small piece of them, you have to include the whole thing anyway. Admittedly, this point is changing today but it is still not ideal. 
Some frameworks are going modular, which I think is great, but I am a big fan of the pure Unix philosophy: Write programs that do one thing and do it well. Write programs to work together. — Doug McIlroy React follows the Unix philosophy because it is a small library that focuses on just one thing and on doing that thing extremely well. That “one thing” is the second part of the React’s definition: Building User Interfaces. A User Interface (UI) is anything we put in front of users to have them interact with a machine. UIs are everywhere, from the simple buttons on a microwave to the dashboard of a space shuttle. If the device we are trying to interface can understand JavaScript, we can use React to describe a UI for it. Since Web browsers understand JavaScript, we can use React to describe Web UIs. I like to use the word describe here because that is what we basically do with React. We just tell it what we want! React will then build the actual UI, on our behalf, in the Web browser. Without React or similar libraries, we would need to manually build UIs with native Web APIs and JavaScript and that is not as easy. When you hear the statement that “React is declarative” this is exactly what it means. We describe UIs with React and tell it what we want (not how to do it). React will take care of the “how” and translate our declarative descriptions (which we write in the React language) to actual UIs in the browser. React shares this simple declarative power with HTML itself, but with React we get to be declarative for HTML UIs that represent dynamic data, not just static data. When React was released, there was a lot of buzz about its performance because it introduced the smart idea of a virtual DOM that can be used to reconcile the actual DOM (and we’ll talk about that in the next section). DOM is “Document Object Model”. It’s the browsers’ programming interface for HTML (and XML) documents that treats them as tree structures. 
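The "describe what you want, not how to do it" contrast can be made concrete with a tiny sketch. The TypeScript below is not React code — it is an illustration of the two mindsets, with both function names invented: the declarative version computes the markup for any state, while the imperative version mutates existing markup step by step.

```typescript
// Illustrative contrast (not React): declarative vs imperative UI updates.
type Todo = { body: string; done: boolean };

// Declarative: the view is a pure function of state. To "update", you just
// re-describe the outcome for the new state.
function view(todos: Todo[]): string {
  const items = todos.map((t) => `<li>${t.body}</li>`).join("");
  return `<ul>${items}</ul>`;
}

// Imperative: describe the steps. Here we splice a new node into existing
// markup by hand — and every new kind of change needs new step-by-step code.
function imperativeAdd(currentHtml: string, todo: Todo): string {
  return currentHtml.replace("</ul>", `<li>${todo.body}</li></ul>`);
}
```

Both approaches produce the same markup for this one change, but the declarative version scales to any state transition for free, which is the point the article is making.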
The DOM API can be used to change a document structure, style, and content. While React’s performance is still one of the most important reasons why it is extremely popular today, I don’t classify it as the “best” thing about it. I think React was a game changer because it created a common language between developers and browsers that allows developers to declaratively describe UIs and manage actions on their state instead of actions on their DOM elements. It’s simply the language of user interface “outcomes”. Instead of coming up with steps to describe actions on interfaces, developers just describe the interfaces in terms of a “final” state (like a function). When actions happen to that state, React takes care of updating the UIs in the DOM based on that (and it does that efficiently as we’ll see next). If someone asked you to give one reason why React is worth learning, this outcomes-based UI language is it. I call this language “the React language”. The React language Say that we have a list of TODOs like this one: const todos: [ { body: 'Learn React Fundamentals', done: true }, { body: 'Build a TODOs App', done: false }, { body: 'Build a Game', done: false }, ]; This todos array is the starting state of your UI. You’ll need to build a UI to display them and manage them. The UI will have a form to add new TODOs, a way for you to mark a TODO as complete, and a way to remove all completed TODOs. Each of these actions will require the app to do a DOM operation to create, insert, update, or delete DOM nodes. With React, you don’t worry about all of these DOM operations. You don’t worry about when they need to happen or how to efficiently perform them. You just place the todos array in the "state" of your app then use the React language to "command" React to display that state a certain way in the UI: <header>TODO List</header><ul> {todos.map(todo => <li>{todo.body}</li> )} </ul>// Other form elements... 
Don’t worry about the syntax yet but if you’re wondering what is going on we simply mapped an array of JavaScript objects into an array of React elements. More on that soon. After that you can focus on just doing data operations on that todos array! You add, remove, and update the items of that array and React will reflect the changes you make on this object in the UI rendered in the browser. This mental model about modeling the UI based on a final state is easier to understand and work with, especially when the views have lots of data transitions. For example, consider the view that tells you how many of your friends are online. That view’s “state” will be just one single number of how many friends are currently online. It does not care that a moment ago three friends came online, then one of them disconnected, and then two more joined. It just knows that at this current moment, four friends are online. React’s tree reconciliation Before React, when we needed to work with a browser’s API, which is known as the DOM API, we avoided traversing the DOM tree as much as possible and there is a reason for that. Any operation on the DOM is done in the same single thread that’s responsible for everything else that’s happening in the browser, including reactions to user events like typing, scrolling, resizing, etc. Any expensive operation on the DOM means a slow and janky experience for the user. It is extremely important that your applications do the absolute minimum operations and batch them where possible. React came up with a unique concept to help us do exactly that! When we tell React to render a tree of elements in the browser, it first generates a virtual representation of that tree and keeps it around in memory for later. Then it’ll proceed to perform the DOM operations that will make the tree show up in the browser. When we tell React to update the tree of elements it previously rendered, it generates a new virtual representation of the updated tree. 
Now React has 2 versions of the tree in memory! To render the updated tree in the browser, React does not discard what has already been rendered. Instead, it will compare the 2 virtual versions of the tree that it has in memory, compute the differences between them, figure out what sub-trees in the main tree need to be updated, and only update these sub-trees in the browser. This process is what’s known as the tree reconciliation algorithm and it is what makes React a very efficient way to work with a browser’s DOM tree. We’ll see an example of it shortly. Besides the declarative outcomes-based language and the efficient tree reconciliation, here are a few of the other reasons why I think React gained its massive popularity: - Working with the DOM API is hard. React gives developers the ability to work with a “virtual” browser that is friendlier than the real browser. React basically acts like your agent who will do the communication with the DOM on your behalf. - React is often given the “Just JavaScript” label. This means it has a very small API to learn and after that your JavaScript skills are what make you a better React developer. This is an advantage over libraries with bigger APIs. Also, the React API is mostly functions (and optionally classes if you need them). When you hear that a UI view is a function of your data, in React that’s literally the case. - Learning React pays off big-time for iOS and Android mobile applications as well. React Native allows you to use your React skills to build native mobile applications. You can even share some logic between your web, iOS, and Android applications. - The React team at Facebook tests all improvements and new features that get introduced to React right there on facebook.com, which increases the trust in the library among the community. It’s rare to see big and serious bugs in React releases because they only get released after thorough production testing at Facebook. 
React also powers other heavily used web applications like Netflix, Twitter, Airbnb, and many more. Your first React example To see the practical benefit of the tree reconciliation process and the big difference it makes, let’s work through a simple example focused on just that concept. Let’s generate and update a tree of HTML elements twice, once using the native Web API and then using the React API (and its reconciliation work). To keep this example simple, I will not use components or JSX (the JavaScript extension that’s popularly used with React). I will also do the update operation inside a JavaScript interval timer. This is not how we write React applications but let’s focus on one concept at a time. Start with this jsComplete playground session: jsdrops.com/react-dom1. In this session, a simple HTML element is rendered to the display using 2 methods: Method #1: Using the Web DOM API directly document.getElementById('mountNode').innerHTML = ` <div> Hello HTML </div> `; Method #2: Using React’s API ReactDOM.render( React.createElement( 'div', null, 'Hello React', ), document.getElementById('mountNode2'), ); The ReactDOM.render method and React.createElement method are the core API methods in a React application. In fact, a React web application cannot exist without using both of these methods. Let me briefly explain them: ReactDOM.render This is basically the entry point for a React application into the browser’s DOM. It has 2 arguments: - The first argument is WHAT to render to the browser. This is always a “React element”. - The second argument is WHERE to render that React element in the browser. This has to be a valid DOM node that exists in the statically rendered HTML. The example above uses a special mountNode2element which exists in the playground’s display area (the first mountNodeis used for the native version). What exactly is a React element? It’s a VIRTUAL element describing a DOM element. It’s what the React.createElement API method returns. 
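To make "a React element is just an object" concrete, here is a stripped-down stand-in for createElement in TypeScript. This is illustrative only: real React elements carry extra bookkeeping fields (such as key, ref, and $$typeof) and the real function accepts components as well as tag strings.

```typescript
// Stripped-down stand-in for React.createElement, to show that a "React
// element" is just a plain object describing a DOM node. Real React
// elements carry extra fields (key, ref, $$typeof, ...).
type VElement = {
  type: string;
  props: Record<string, unknown>;
  children: (VElement | string)[];
};

function createElement(
  type: string,
  props: Record<string, unknown> | null,
  ...children: (VElement | string)[]
): VElement {
  return { type, props: props ?? {}, children };
}

// Mirrors the article's example: a div containing a text child.
const hello = createElement("div", null, "Hello React");
```

Nothing here touches the DOM: building elements is pure object construction, and it is only a render call that turns the object tree into actual DOM nodes.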
React.createElement Instead of working with strings to represent DOM elements (as in the native DOM example above) in React we represent DOM elements with objects using calls to the React.createElement method. These objects are known as React elements. The React.createElement function has many arguments: - The first argument is the HTML “tag” for the DOM element to represent, which is divin this example. - The second argument is for any attributes (like id, href, title, etc.) we want the DOM element to have. The simple divwe’re using has no attributes, so we used nullin there. - The third argument is the content of the DOM element. We’ve put a “Hello React” string in there. The optional third argument and all the optional arguments after it form the children list for the rendered element. An element can have 0 or more children. React.createElementcan also be used to create elements from React components. React elements are created in memory. To actually make a React element show up in the DOM, we use the ReactDOM.render method which will do many things to figure out the most optimal way to reflect the state of a React element into the actual DOM tree in the browser. When you execute the 2 methods in this code session, you’ll see a “Hello HTML” box and a “Hello React” box: Nesting React elements We have two nodes: one being controlled with the DOM API directly and another being controlled with the React API (which in turn uses the DOM API). The only major difference between the ways we are building these two nodes in the browser is that in the HTML version we used a string to represent the DOM tree, while in the React version we used pure JavaScript calls and represented the DOM tree with an object instead of a string. No matter how complicated the HTML UI is going to get, when using React every HTML element will be represented with a React element. Let’s add more HTML elements to this simple UI. Let’s add a text box to read input from the user. 
For the HTML version, you can just inject the new element’s tag directly inside the template:

document.getElementById('mountNode').innerHTML = `
  <div>
    Hello HTML
    <input />
  </div>
`;

To do the same with React, you can add more arguments after the third argument for React.createElement above. To match what’s in the native DOM example so far, we can add a fourth argument that is another React.createElement call that renders an input element:

ReactDOM.render(
  React.createElement(
    "div",
    null,
    "Hello React ",
    React.createElement("input")
  ),
  document.getElementById('mountNode2'),
);

Let’s also render the current time in both versions. Let’s put it in a pre element (just to give it a monospace font in the playground). You can use new Date().toLocaleTimeString() to display a simple time string. Here’s what you need to do for the native DOM version:

document.getElementById('mountNode').innerHTML = `
  <div>
    Hello HTML
    <input />
    <pre>${new Date().toLocaleTimeString()}</pre>
  </div>
`;

To do the same in React, we add a fifth argument to the top-level div element. This new fifth argument is another React.createElement call, this time using a pre tag with the new Date().toLocaleTimeString() string for content:

ReactDOM.render(
  React.createElement(
    'div',
    null,
    'Hello React ',
    React.createElement('input'),
    React.createElement(
      'pre',
      null,
      new Date().toLocaleTimeString()
    )
  ),
  document.getElementById('mountNode2')
);

Both versions are still rendering the exact same HTML in the browser. As you’re probably thinking by now, using React is a lot harder than the simple and familiar native way. What is it that React does so well that is worth giving up the familiar HTML and having to learn a new API to write what can be simply written in HTML? The answer is not about rendering the first HTML view. It is about what we need to do to update any existing view in the DOM.

Updating React elements

Let’s do an update operation on the DOM trees that we have so far.
Let’s simply make the time string tick every second. We can easily repeat a JavaScript function call in a browser using the setInterval Web timer API. Let’s put all of our DOM manipulations for both versions into a function, name it render, and use it in a setInterval call to make it repeat every second. You can find the full final code for this example at jsdrops.com/react-dom2. Run it and notice how the time string is ticking every second in both versions. We are now updating our UI in the DOM. This is the moment when React will potentially blow your mind. If you try to type something in the text box of the native DOM version, you will not be able to. This is very much expected because we are basically throwing away the whole DOM node on every tick and regenerating it. However, if you try to type something in the text box that is rendered with React, you can certainly do so! Although the whole React rendering code is within the ticking timer, React is changing only the content of the pre element and not the whole DOM tree. This is why the text input box was not regenerated and we were able to type in it. You can see the different ways we are updating the DOM visually if you inspect the two DOM nodes in a Chrome DevTools elements panel. The Chrome DevTools elements panel highlights any DOM elements that get updated. You will see how the native HTML version is regenerating its entire div#mountNode container with every tick, while React is smartly only regenerating the pre tag in its div#mountNode2 container. This is React’s smart diffing algorithm in action. It only updates in the main DOM tree what actually needs to be updated while it keeps everything else the same. This diffing process is possible because of React’s virtual DOM representation that it keeps around in memory. No matter how many times the UI views need to be regenerated, React will take to the browser only the needed “partial” updates.
Not only is this method a lot more efficient but it also removes a big layer of complexity in the way we think about updating UIs. Having React do all the computations about whether we should or should not update the DOM enables us to focus on thinking about our data (state) and the way to describe a UI for it. We then manage the updates on the data state as needed without worrying about the steps needed to reflect these updates in the actual UI in the browser (because we know React will do exactly that and it will do it in an efficient way!) React is all about components In React, we describe UIs using components that are reusable, composable, and stateful. We define small components and then put them together to form bigger ones. All components small or big are reusable, even across different projects. You can think of components as simple functions (in any programming language). We call functions with some input and they give us some output. We can reuse functions as needed and compose bigger functions from smaller ones. React components are exactly the same; their input is a set of “props” and their output is a description of a UI. We can reuse a single component in multiple UIs and components can contain other components. The basic form of a React component is actually a plain-old JavaScript function. Some React components are pure but you can also introduce side effects in a component. For example, a component might change the HTML “title” of a web page when it gets mounted in the browser or it might scroll the browser view into a certain position. Most importantly, a React component can have a private state to hold data that may change over the lifecycle of the component. This private state is an implicit part of the input that drives the component’s output and that’s actually what gives React its name! Why is React named “React” anyway? 
When the state of a React component (which is part of its input) changes, the UI it represents (its output) changes as well. This change in the description of the UI has to be reflected in the device we are working with. In a browser, we need to update the DOM tree. In a React application we don’t do that manually. React will simply react to the state changes and automatically (and efficiently) update the DOM when needed. Creating components using functions A React component — in its simplest form — is a plain-old JavaScript function:

// jsdrops.com/bx1
function Button (props) {
  // Returns a DOM/React element here. For example:
  return <button type="submit">{props.label}</button>;
}

// To render a Button element in the browser
ReactDOM.render(<Button label="Save" />, mountNode);

Note how I wrote what looks like HTML in the returned output of the Button function above. This is neither HTML nor JavaScript and it is not even React. This is JSX. It’s an extension to JavaScript that allows us to write function calls in an HTML-like syntax. Go ahead and try to return any other HTML element inside the Button function and see how they are all supported (for example, return an input element or a textarea element). JSX is not HTML JSX is not understood by browsers. If you try to execute the Button function in a regular browser console, it’ll complain about the first character in the JSX part: What browsers understand (given the React library is included) is the React.createElement API calls. The same Button example can be written without JSX as follows:

// jsdrops.com/bx2
function Button (props) {
  return React.createElement(
    "button",
    { type: "submit" },
    props.label
  );
}

ReactDOM.render(
  React.createElement(Button, { label: "Save" }),
  mountNode
);

You can use React like this (without JSX). You can execute the Button function in a browser directly (after loading the React library) and things will work just fine.
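To see mechanically why passing the Button function itself (rather than calling it) works, here is a toy, hypothetical sketch — not React’s actual code: when an element’s type is a function, a renderer can call that function with the element’s props to get the element it returns:

```javascript
// Toy sketch (NOT React's implementation) of resolving an element
// whose type is a function component.
function createElement(type, props, ...children) {
  return { type, props: { ...(props || {}), children } };
}

// If the type is a function, call it with the props to get its output;
// otherwise the element already describes a plain DOM node.
function resolve(element) {
  return typeof element.type === 'function'
    ? element.type(element.props)
    : element;
}

const Button = (props) =>
  createElement('button', { type: 'submit' }, props.label);

const out = resolve(createElement(Button, { label: 'Save' }));
// out.type → 'button'
// out.props.children → ['Save']
```

This is why the renderer, not you, invokes your component functions: it decides when (and how often) to call them.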
However, we like to see and work with HTML instead of dealing with function calls. When was the last time you built a website with just JavaScript, without using any HTML? You can if you want to, but no one does that. That’s why JSX exists. JSX is basically a compromise. Instead of writing React components using the React.createElement syntax, we use a syntax very similar to HTML and then use a compiler to translate it into React.createElement calls. A compiler that translates one form of syntax into another is known as a “transpiler”. To translate JSX we can use transpilers like Babel or TypeScript. For example, the jsComplete playground uses TypeScript to transpile any JSX you put into it. When you use create-react-app, the generated app will internally use Babel to transpile your JSX. You can use babeljs.io/repl/ to see what any JSX syntax gets converted to for React, but JSX can also be used on its own. It is not a React-only thing. So a React component is a JavaScript function that returns a React element (usually with JSX). When JSX is used, the <tag></tag> syntax becomes a call to React.createElement("tag"). It’s absolutely important for you to keep this in mind while building React components. You are not writing HTML. You are using a JavaScript extension to return function calls that create React elements (which are essentially JavaScript objects). The name has to start with a capital letter Note how I named the component “Button”. The first letter being a capital one is actually a requirement since we will be dealing with a mix of HTML elements and React elements. A JSX compiler (like Babel) will consider all names that start with a lowercase letter as names of HTML elements.
This is important because HTML elements are passed as strings to React.createElement calls while React elements need to be passed as variables: Go ahead and try naming the React component “button” instead of “Button” and see how ReactDOM will totally ignore the function and render a regular empty HTML button element.

// jsdrops.com/bx3
// Wrong:
function button () {
  return <div>My Fancy Button</div>;
};

// The following will render an HTML button
// (and ignore the fancy button function)
ReactDOM.render(<button />, mountNode);

The first argument is an object of “props” Just like HTML elements can be assigned attributes like id or title, a React element can also receive a list of attributes when it gets rendered. The Button element above (jsdrops.com/bx2) received a label attribute. In React, the list of attributes received by a React element is known as props. A React function component receives this list as its first argument. The list is passed as an object with keys representing the attribute names and values representing the values assigned to them. When using a function component, you don’t have to name the object holding the list of attributes as “props” but that is the standard practice. When using class components, which we will do below, the same list of attributes is always presented with a special instance property named props. Note that receiving props is optional. Some components will not have any props. However, a component’s return value is not optional. A React component cannot return “undefined” (either explicitly or implicitly). It has to return a value. It can return “null” to cause the renderer to ignore its output. I like to use object destructuring whenever I use component props (or state, really).
For example, the Button component function can be written like this with props destructuring:

const Button = ({ label }) => (
  <button type="submit">{label}</button>
);

This approach has many benefits, but the most important one is being able to visually inspect what props are used in a component and make sure a component does not receive any extra props that are not needed. Note how I used an arrow function instead of a regular one. This is just a style preference for me personally. Some people prefer the regular function style and there is nothing wrong with that. I think what’s important is to be consistent with the style that you pick. I’ll use arrow functions here but don’t interpret that as a requirement. Expressions in JSX You can include a JavaScript expression using a pair of curly brackets anywhere within JSX:

// jsdrops.com/bx4
const RandomValue = () => (
  <div>
    { Math.floor(Math.random() * 100) }
  </div>
);

ReactDOM.render(<RandomValue />, mountNode);

Note that only expressions can be included inside these curly brackets. For example, you cannot include a regular if-statement but a ternary expression is okay. Anything that returns a value is okay. You can always put any code in a function, make it return something, and call that function within the curly brackets. However, keep the logic you put in these curly brackets to a minimum. JavaScript variables are also expressions, so when the component receives a list of props you can use these props inside curly brackets. That’s how we used {label} in the Button example. JavaScript object literals are also expressions. Sometimes we use a JavaScript object inside curly brackets, which makes it look like double curly brackets: {{a:42}}. This is not a different syntax; it is just an object literal defined inside the regular JSX curly brackets.
For example, one use case for using an object literal in these curly brackets is to pass a CSS style object to the special style attribute in React:

// jsdrops.com/bx5
const ErrorDisplay = ({ message }) => (
  <div style={ { color: 'red', backgroundColor: 'yellow' } }>
    {message}
  </div>
);

ReactDOM.render(
  <ErrorDisplay
    message="These aren't the droids you're looking for"
  />,
  mountNode
);

The style attribute above is a special one. We use an object as its value and that object defines the styles as if we are setting them through the JavaScript DOM API (camel-case property names, string values). React translates these style objects into inline CSS style attributes. This is not the best way to style a React component, but I find it extremely convenient when applying conditional styles to elements. For example, here is a component that will output its text in either green or red, randomly, about half the time:

// jsdrops.com/bx6
class ConditionalStyle extends React.Component {
  render() {
    return (
      <div style={{ color: Math.random() < 0.5 ? 'green' : 'red' }}>
        How do you like this?
      </div>
    );
  }
}

ReactDOM.render(
  <ConditionalStyle />,
  mountNode,
);

The logic for this styling is right there in the component. I like that! This is easier to work with than conditionally using a class name and then having to track down what that class name is doing in the global CSS stylesheet. JSX is not a template language Some libraries that deal with HTML provide a template language for it. You write your dynamic views with an “enhanced” HTML syntax that has loops and conditionals. These libraries will then use JavaScript to convert the templates into DOM operations. The DOM operations can then be used in the browser to display the DOM tree described by the enhanced HTML. React eliminated that step. We do not send the browser a template at all with a React application. We send it a tree of objects described with the React API.
React uses these objects to generate the DOM operations needed to display the desired DOM tree. When an HTML template is used, the library parses your application as a string. A React application is parsed as a tree of objects. While JSX might look like a template language, it really isn’t. It’s just a JavaScript extension that allows us to represent React’s tree of objects with a syntax that looks like an HTML template. Browsers don’t have to deal with JSX at all and React does not have to deal with it either! Only the compiler does. What we send to the browser is template-free and JSX-free code. For example, for the todos array we’ve seen above, if we’re to display that array in a UI using a template language, we’ll need to do something like:

<ul>
  <% FOR each todo in the list of todos %>
    <li><%= todo.body %></li>
  <% END FOR %>
</ul>

The <% %> is one syntax to represent the dynamic enhanced parts. You might also see the {{ }} syntax. Some template languages use special attributes for their enhanced logic. Some template languages make use of whitespace indentation (off-side rule). When changes happen to the todos array (and we need to update what’s rendered in the DOM with a template language) we’ll have to either re-render that template or compute where in the DOM tree we need to reflect the change in the todos array. In a React application, there is no template language at all. Instead, we use JSX:

<ul>
  {todos.map(todo =>
    <li>{todo.body}</li>
  )}
</ul>

Which, before being used in the browser, gets translated to:

React.createElement(
  "ul",
  null,
  todos.map(todo =>
    React.createElement("li", null, todo.body)
  ),
);

React takes this tree of objects and transforms it into a tree of DOM elements. From our point of view, we’re done with this tree. We don’t manage any actions on it. We just manage actions in the todos array itself. Creating components using classes React supports creating components through the JavaScript class syntax as well.
Here is the same Button component example written with the class syntax:

// jsdrops.com/bx7
class Button extends React.Component {
  render() {
    return (
      <button>{this.props.label}</button>
    );
  }
}

// Use it (same syntax)
ReactDOM.render(<Button label="Save" />, mountNode);

In this syntax, you define a class that extends React.Component, which is one of the main classes in the React top-level API. A class-based React component has to at least define an instance method named render. This render method returns the element that represents the output of an object instantiated from the component. Every time we use the Button class-based component (by rendering a <Button … />), React will instantiate an object from this class-based component and use that object’s representation to create a DOM element. It’ll also associate the DOM-rendered element with the instance it created from the class. Note how we used this.props.label inside the rendered JSX. Every component gets a special instance property named props that holds all values passed to that component’s element when it was instantiated. Unlike function components, the render function in class-based components does not receive any arguments. Functions vs classes Components created with functions used to be limited in React. The only way to make a component “stateful” was to use the class syntax. This has changed with the release of “React Hooks” beginning with React version 16.8, which was released in early 2019. The React hooks release introduced a new API to make a function component stateful (and give it many other features). With this new API, most of what is usually done with React can be done with functions. The class-based syntax is only needed for advanced and very rare cases. I believe the new API will slowly replace the old one but that’s not the only reason I want to encourage you to use it (exclusively if you can).
I’ve used both APIs in large applications and I can tell you that the new API is far superior to the old one for many reasons, but here are the ones I personally think are the most important:
- You don’t have to work with class “instances” and their implicit state. You work with simple functions that are refreshed on each render. The state is explicitly declared and nothing is hidden. All of this basically means that you’ll encounter fewer surprises in your code.
- You can group related stateful logic and separate it into self-contained, composable, and shareable units. This makes it easier to break complex components into smaller parts. It also makes testing components easier.
- You can consume any stateful logic in a declarative way and without needing to use any hierarchical “nesting” in component trees.
While class-based components will continue to be part of React for the foreseeable future, as a newcomer to the ecosystem I think it makes sense for you to start purely with just functions (and hooks) and focus on learning the new API (unless you have to work with a codebase that already uses classes). Components vs Elements You might find the words “component” and “element” mixed up in the React guides and tutorials out there. I think a React learner needs to understand the important distinctions. A React Component is a template. A blueprint. A global definition. This can be either a function or a class (with a render method). A React Element is what gets returned from components. It’s an object that virtually describes the DOM nodes that a component represents. With a function component, this element is the object that the function returns and with a class component the element is the object that the component’s render method returns. React elements are not what you see in the browser. They are just objects in memory and you can’t change anything about them. React internally creates, updates, and destroys these objects.
For function components, React just uses the invocation of the function to determine what DOM element to render. Benefits of components The term “component” is used by many other frameworks and libraries. We can even write web components natively using HTML5 features like custom elements and HTML imports. Components, whether we are working with them natively or through a library like React, have many advantages. First, components make your code more readable and easier to work with. Consider this UI:

<a href="">
  <img src="facebook.png" />
</a>

What does this UI represent? If you speak HTML, you can parse it quickly here and say, “it’s a clickable image.” If we’re to convert this UI into a component, we can just name it ClickableImage!

<ClickableImage />

When things get more complex, this parsing of HTML becomes harder, so components allow us to quickly understand what a UI represents using the language that we’re comfortable with. Here’s a bigger example:

<TweetBox>
  <TextAreaWithLimit limit="280" />
  <RemainingCharacters />
  <TweetButton />
</TweetBox>

Without looking at the actual HTML code, we know exactly what this UI represents. Furthermore, if we need to modify the output of the remaining characters section we know exactly where to go. React components can also be reused in the same application and across multiple applications. For example, here’s a possible implementation of the ClickableImage component:

const ClickableImage = ({ href, src }) => {
  return (
    <a href={href}>
      <img src={src} />
    </a>
  );
};

Having variables for both the href and the src props is what makes this component reusable. For example, to use this component we can render it with a set of props:

<ClickableImage href="" src="google.png" />

And we can reuse it by using a different set of props:

<ClickableImage href="" src="bing.png" />

In functional programming, we have the concept of pure functions.
These are basically protected against any outside state; if we give them the same input, we’ll always get the same output. If a React component does not depend on (or modify) anything outside of its definition (for example, if it does not use a global variable) we can label that component pure as well. Pure components have a better chance at being reused without any problems. We create components to represent views. For ReactDOM, the React components we define will represent HTML DOM nodes. The ClickableImage component above was composed of two nested HTML elements. We can think of HTML elements as built-in components in the browser. We can also use our own custom components to compose bigger ones. For example, let’s write a component that displays a list of search engines.

const SearchEngines = () => {
  return (
    <div className="search-engines">
      <ClickableImage href="" src="google.png" />
      <ClickableImage href="" src="bing.png" />
    </div>
  );
};

Note how I used the ClickableImage component to compose the SearchEngines component! We can also make the SearchEngines component reusable as well by extracting its data into a variable and designing it to work with that variable. For example, we can introduce a data array in a format like:

const data = [
  { href: "", src: "google.png" },
  { href: "", src: "bing.png" },
  { href: "", src: "yahoo.png" }
];

Then, to make <SearchEngines engines={data} /> work, we just map the data array from a list of objects into a list of ClickableImage components:

const SearchEngines = ({ engines }) => {
  return (
    <div className="search-engines">
      {engines.map(engine => <ClickableImage {...engine} />)}
    </div>
  );
};

ReactDOM.render(
  <SearchEngines engines={data} />,
  document.getElementById("mountNode")
);

This SearchEngines component can work with any list of search engines we give to it. What exactly are hooks? A hook in a React component is a call to a special function. All hook functions begin with the word “use”.
Some of them can be used to provide a function component with stateful elements (like useState), others can be used to manage side effects (like useEffect) or to cache/memoize functions and objects (like useCallback). Hooks are very powerful and the sky is the limit when it comes to things you can do with them. React hook functions can only be used in function components. You can’t use them in class components. To see an example of the basic useState hook, let’s make the Button component above respond to a click event. Let’s maintain the number of times it gets clicked in a "count" variable and display the value of that variable as the label of the button it renders.

const Button = () => {
  let count = 0;
  return (
    <button>{count}</button>
  );
};

ReactDOM.render(<Button />, mountNode);

This count variable will be the state element that we need to introduce to the example. It’s a piece of data that the UI will depend on (because we’re displaying it) and it is a state element because it is going to change over time. Every time you define a variable in your code you will be introducing a state and every time you change the value of that variable you are mutating that state. Keep that in mind. Before we can change the value of the count state, we need to learn about events. Responding to user events You can add an event handler with an “onEvent” property (to the button element in this case). This could be an onClick, onMouseOver, onScroll, onSubmit, etc. What we need here is an onClick event and we just define it as an attribute on the target element. For example, to make the program log a message to the console every time the button is clicked, we can do something like:

const Button = () => {
  let count = 0;
  return (
    <button onClick={() => console.log('Button clicked')}>
      {count}
    </button>
  );
};

ReactDOM.render(<Button />, mountNode);

Unlike the DOM version of the onClick attribute (which uses a string), React’s onClick attribute uses a function reference.
You specify that inside curly brackets.

function func() {}

<button onClick={func} />

Note how we passed the func reference (name) as the onClick handler. We did not invoke func in there. React will invoke func when the button gets clicked. For the onClick event in the Button component above, we "inlined" a function definition that when invoked will output a message to the console. Each time we click on the button the onClick handler (the inline arrow function) will be invoked and we’ll see that message. Note how the event name is camel-case. All DOM-related attributes (which are handled by React) need to be camel-case (and React will display an error if that’s not the case). React also supports using custom HTML attributes and those have to be in all-lowercase format. Some DOM attributes in React are slightly different from what they do in the regular DOM API. An example of that is the onChange event. In a regular browser, it’s usually fired when you click outside a form field (or tab out of it). In React, onChange fires whenever the value of a form field is changed (on every character added/removed). Some attributes in React are named differently from their HTML equivalent. An example of that is the className attribute in React which is equivalent to using the class attribute in HTML. For a complete list of the differences between React attributes and DOM attributes, see jscomplete.com/react-attributes. Reading and updating state To track state updates and trigger virtual DOM diffing and real DOM reconciliation, React needs to be aware of any changes that happen to any state elements that are used within components. To do this in an efficient way, React requires the use of special getters and setters for each state element you introduce in a component. This is where the useState hook comes into play. It defines a state element and gives us back a getter and setter for it!
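Before looking at the real API, a toy closure-based sketch may help build intuition for the getter/setter pair. This is a purely hypothetical illustration, not how React implements useState — the real hook also re-renders the component, preserves state across renders by call order, and more:

```javascript
// Toy sketch of a state element with a getter and a setter
// (illustrative only — NOT React's useState implementation).
function makeStateElement(initialValue) {
  let value = initialValue; // private state, captured in a closure
  const get = () => value;
  const set = (nextValue) => {
    value = nextValue;
    // A real library would also schedule a re-render here.
  };
  return [get, set];
}

const [getCount, setCount] = makeStateElement(0);
setCount(getCount() + 1);
setCount(getCount() + 1);
getCount(); // 2
```

The key idea is that the state variable itself stays private; the outside world only ever reads it through the getter and changes it through the setter, which is exactly the hook a library needs to know when updates happen.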
Here’s what we need for the count state element we’re trying to implement:

const [count, setCount] = React.useState(0);

The useState function returns an array with exactly two items. The first item is a value (getter) and the second item is a function (setter). I used array destructuring to give these items names. You can give them any names you want but [name, setName] is the convention. The first item “value” can be a string, number, array, or other types. In this case, we needed a number and we needed to initialize that number with 0. The argument to React.useState is used as the initial value of the state element. The second item “function” will change the value of the state element when invoked (and it will trigger DOM processing if needed). Each time the setCount function is invoked, React will re-render the Button component which will refresh all variables defined in the component (including the count value). The argument we pass to setCount becomes the new value for count. What we need to do to make the button increment its label is to invoke the setCount function within the onClick event and pass the current count value incremented by 1 to it. Here’s the full code of the label-incrementing button example:

const Button = () => {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      {count}
    </button>
  );
};

ReactDOM.render(<Button />, mountNode);

Go ahead and test that. The button will now increment its label on each click. Note how we did not implement any actions to change the UI itself. Instead, we implemented an action to change a JavaScript object (in memory)! Our UI implementation was basically telling React that we want the label of the button to always reflect the value of the count object. Our code didn’t do any DOM updates. React did. Note also how I used the const keyword to define count although it’s a value that gets changed! Our code will not change that value.
React will, when it makes a fresh call to the Button function to render the UI of its new state. In that fresh call, the useState function call will give us a new fresh count value. The useState function is available globally in the playground. This is just an alias to React.useState. In your code, you can use named imports to have useState available directly in the scope of a module:

import React, { useState } from 'react';

You’ll need a few more examples to appreciate this power. Let’s add some more features to this basic example. Let’s have many buttons and make all of them increment a single shared count value. Working with multiple components Let’s split the Button component that we have so far into two components:
- Keep a Button component to represent a button element, but with a static label.
- Add a new Display component to display the count’s value.
The new Display component will be a purely presentational one with no state or interactions of its own. That’s normal. Not every React component has to have stateful hooks or be interactive.

const Display = (props) => (
  <pre>COUNT VALUE HERE...</pre>
);

The responsibility of the Display component is to simply display a value that it will receive as a prop. For example, the fact that a pre element was used to host the value is part of that responsibility. Other components in this application have no say about that! Rendering sibling components We now have two elements to render: Button and Display. We can’t render them directly next to each other like this:

// This will not work
ReactDOM.render(<Button /><Display />, mountNode);

Adjacent elements can’t be rendered like this in React because each of them gets translated into a function call when JSX gets converted. You have a few options to deal with this issue. First, you can pass an array of elements to ReactDOM.render and insert into that array as many React elements as you wish.
Option #1

ReactDOM.render([<Button />, <Display />], mountNode);

This is usually a good solution when all the elements you’re rendering are coming from a dynamic source. However, it’s not ideal for the case we’re doing here. Another option is to make the sibling React elements the children of another React element. For example, we can just enclose them in a div element.

Option #2

ReactDOM.render(
  <div>
    <Button />
    <Display />
  </div>,
  mountNode
);

The React API supports this nesting. In fact, React has a special object if you need to enclose multiple adjacent elements like this without introducing a new DOM parent node. You can use React.Fragment:

Option #3

ReactDOM.render(
  <React.Fragment>
    <Button />
    <Display />
  </React.Fragment>,
  mountNode
);

This case is so common in React that the JSX extension has a shortcut for it. Instead of typing React.Fragment, you can just use an empty tag <></>.

Option #3+

ReactDOM.render(
  <>
    <Button />
    <Display />
  </>,
  mountNode
);

The empty tag will get transpiled into the React.Fragment syntax. I’ll use this syntax to continue with the example. However, you should always try to make the first argument to ReactDOM.render a single component call instead of the nested tree that we just did. This is essentially a code quality preference. It forces you into thinking about your components hierarchy, names, and relations. Let’s do that next. The top-level component Let’s introduce a top-level component to host both the Button and Display components. The question now is: what should we name this new parent component? Believe it or not, naming your components and their state/props elements is a very hard task that will affect the way these components work and perform. The right names will force you into the right design decisions. Take some time and think about every new name you introduce to your React apps.
Since this new parent component will host a Display with a Button that increments the displayed count, we can think of it as the count value manager. Let’s name it CountManager.

const CountManager = () => {
  return (
    <>
      <Button />
      <Display />
    </>
  );
};

ReactDOM.render(<CountManager />, mountNode);

Since we’re going to display the count’s value in the new Display component, we no longer need to show the count’s value as the label of the button. Instead, we can change the label to something like "+1".

const Button = () => {
  return (
    <button onClick={() => console.log('TODO: Increment counter')}>
      +1
    </button>
  );
};

Note that I’ve also removed the state element from the Button component because we can’t have it there anymore. With the new requirement, both the Button and Display components need access to the count state element. The Display component will display it and the Button component will update it. When a component needs to access a state element that’s owned by its sibling component, one solution is to "lift" that state element one level up and define it inside their parent component. For this case the parent is the CountManager component that we just introduced.

By moving the state to CountManager, we can now "flow" data from parent to child using component props. That’s what we should do to display the count value in the Display component:

const Display = ({ content }) => (
  <pre>{content}</pre>
);

const CountManager = () => {
  const [count, setCount] = useState(0);

  return (
    <>
      <Button />
      <Display content={count} />
    </>
  );
};

ReactDOM.render(<CountManager />, mountNode);

Note how in CountManager I used the exact same useState line that was in the Button component. We are lifting the same state element. Also note how when I flowed the count value down to the Display component via a prop, I used a different name for it (content). That’s normal. You don’t have to use the exact same name.
In fact, in some cases, introducing a new generic name is better for a child component because it makes the component more reusable. The Display component could be reused to display other numeric values besides count. Parent components can also flow down behavior to their children, which is what we need to do next.

Since the count state element is now in the CountManager component, we need a function on that level to handle updating it. Let’s name this function incrementCounter. The logic for this function is actually the same logic we had before in the handleClick function in the Button component. The new incrementCounter function is going to update the CountManager component count state to increment the value using the previous value:

const CountManager = () => {
  // ....

  const incrementCounter = () => {
    setCount(count + 1);
  }

  // ...
};

The onClick handler in the Button component now has to change. We want it to execute the incrementCounter function that’s in the CountManager component, but a component can only access its own functions. So, to make the Button component able to invoke the incrementCounter function in the CountManager component we can pass a reference to incrementCounter to the Button component as a prop. Yes, props can hold functions as well, not just data. Functions are just objects in JavaScript, and just like objects you can pass them around.

We can name this new prop anything. I’ll name it clickAction and pass it a value of incrementCounter, which is the reference to the function we defined in the CountManager component. We can use this new passed-down behavior directly as the onClick handler value. It will be a prop for the Button component:

const Button = ({ clickAction }) => {
  return (
    <button onClick={clickAction}>
      +1
    </button>
  );
};

// ...

const CountManager = () => {
  // ...

  return (
    <div>
      <Button clickAction={incrementCounter} />
      <Display content={count} />
    </div>
  );
};

Something very powerful is happening here.
This clickAction property allowed the Button component to invoke the CountManager component’s incrementCounter function. It’s like when we click that button, the Button component reaches out to the CountManager component and says, "Hey Parent, go ahead and invoke that increment counter behavior now". In reality, the CountManager component is the one in control here and the Button component is just following generic rules. If you analyze the code as it is now, you’ll realize how the Button component has no clue about what happens when it gets clicked. It just follows the rules defined by the parent and invokes a generic clickAction. The parent controls what goes into that generic behavior. This follows the concept of responsibility isolation. Each component here has certain responsibilities and gets to focus on them.

Look at the Display component for another example. From its point of view, the count value is not a state. It is just a prop that the CountManager component is passing to it. The Display component will always display that prop. This is also a separation of responsibilities. As the designer of these components, you get to choose their level of responsibilities. For example, we could have made the responsibility of displaying the count value part of the CountManager component itself and not use a new Display component for that. The CountManager component has the responsibility of managing the count state. That’s an important design decision that we made and it’s one you’re going to have to make a lot in a React application.

Where to define the state?

The practice I follow is to define a state element in a shared parent node that’s as close as possible to all the children who need to access that state element. For a small application like this one, that usually means the top-level component itself.
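The "lift the state up" pattern can be sketched without React at all: a parent function owns the state, and the children receive only the data and behavior they need. This is a toy illustration, not React code; every name in it (createCountManager and so on) is made up for the sketch.

```javascript
// Non-React sketch of lifted state: the parent closure owns `count`;
// the children only receive props (data and behavior).
function createCountManager() {
  let count = 0; // the lifted state element

  const incrementCounter = () => { count += 1; };

  // "Button" gets behavior; "Display" gets data. Neither owns the state.
  const button = { click: incrementCounter };
  const display = { content: () => count };

  return { button, display };
}

const { button, display } = createCountManager();
button.click();
button.click();
console.log(display.content()); // 2
```

The child objects here are deliberately dumb: the button does not know what clicking means, and the display does not know where its value comes from. That is the same responsibility isolation the React version achieves with props.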
In bigger applications, a sub-tree might manage its own state “branch” rather than relying on global state elements that are defined on the top-level root component. The top-level component is popularly used to manage shared application state and actions because it’s parent to all other components. Be careful about this design because updating a state element on the top-level component means that the whole tree of components will be re-rendered (in memory).

Here’s the full code for this example so far:

// jsdrops.com/bx8

const Button = ({ clickAction }) => {
  return (
    <button onClick={clickAction}>
      +1
    </button>
  );
};

const Display = ({ content }) => (
  <pre>{content}</pre>
);

const CountManager = () => {
  const [count, setCount] = useState(0);

  const incrementCounter = () => {
    setCount(count + 1);
  };

  return (
    <div>
      <Button clickAction={incrementCounter} />
      <Display content={count} />
    </div>
  );
};

Making components reusable

Components are all about reusability. Let’s make the Button component reusable by changing it so that it can increment the global count with any value, not just 1. Let’s start by adding more Button elements in the CountManager component so that we can test this new feature:

const CountManager = () => {
  // ..

  return (
    <>
      <Button clickAction={incrementCounter} /> {/* +1 */}
      <Button clickAction={incrementCounter} /> {/* +5 */}
      <Button clickAction={incrementCounter} /> {/* +10 */}
      <Display content={count} />
    </>
  );
};

All Button elements rendered above will currently have a +1 label and they will increment the count by 1. We want to make them display a different label that is specific to each button and make them perform a different action based on a value that is specific to each of them. Remember that you can pass any value to a React element as a prop. Here’s the UI I have in mind after clicking each button once:

In the screenshot above, the count started with 0.
I added 1, then 5, and then 10 to get to 16.

Before we go through this exercise, take some time to think about it and try to implement it yourself. It is mostly straightforward. Hint: you’ll need to introduce one new prop for Button. Give it a shot. Come back when you are ready to compare your solution with mine.

Adding new props

The first thing we need to do is make the +1 label in the Button component a customizable one. To make something customizable in a React component we introduce a new prop (which the parent component can control) and make the component use its value. For our example, we can make the Button component receive the amount to increment (1, 5, 10) as a new prop. I’ll name it clickValue. We can change the render method in CountManager to pass the values we want to test with to this new prop.

return (
  <>
    <Button clickAction={incrementCounter} clickValue={1} />
    <Button clickAction={incrementCounter} clickValue={5} />
    <Button clickAction={incrementCounter} clickValue={10} />
    <Display content={count} />
  </>
);

Note a couple of things about this code so far:

- I did not name the new property with anything related to count. The Button component does not need to be aware of the meaning of its click event. It just needs to pass this clickValue along when its click event is triggered. For example, naming this new property countValue would not be the best choice because now, in the Button component, we read the code to understand that a Button element is related to a count.
- I passed the clickValue values as numbers inside curly braces (clickValue={5}) and not as strings (clickValue="5"). Passing a number as a string is a common mistake in React. See this article for more React-related common mistakes.

Customizing behaviors

The other thing we need to make generic in the CountManager component is the incrementCounter action function. It cannot have a hardcoded count + 1 operation as it does now. Similar to what we did for the Button component, to make a function generic we make it receive an argument and use that argument’s value.
For example:

const incrementCounter = (incrementValue) => {
  setCount(count + incrementValue);
};

Now all we need to do is make the Button component use its clickValue prop as its label and make it invoke its onClick action with its clickValue as an argument.

const Button = ({ clickValue, clickAction }) => {
  return (
    <button onClick={() => clickAction(clickValue)}>
      +{clickValue}
    </button>
  );
};

Note how I had to wrap the onClick prop with an inline arrow function in order to make it bound to the Button’s clickValue. The JavaScript closure for this new arrow function will take care of that. The three buttons should now increment with their three different click values. You can see this example’s code at jsdrops.com/bx9.

Accepting input from the user

Imagine we need to count the characters a user types in a text area, just like Twitter’s tweet form. With each character the user types we need to update the UI with the new count of characters. Here’s a component that displays a textarea input element with a placeholder div for the character count:

// jsdrops.com/bx10

const CharacterCounter = () => {
  return (
    <div>
      <textarea cols={80} rows={10} />
      <div>Count: X</div>
    </div>
  );
};

ReactDOM.render(<CharacterCounter />, mountNode);

To update the count as the user types in the textarea, we need to customize the event that fires when the user types. This event in React is implemented as onChange. We’ll also need to use a state element for the count of characters and fire its updater function within the onChange event. In the new onChange event handler that we need to come up with, we’ll need access to the text that was typed in the textarea element. We need to read it somehow because React by default is not aware of it. As the user types, the rendered UI changes through the browser’s own state management. We did not instruct React to change the UI based on the value of the textarea element. We can read the value using two main methods.
First, we can read it by using the DOM API itself directly. We’ll need to “select” the element with a DOM selection API and once we do that we can read its value using an element.value call. To select the element we can simply give it an ID and use the document.getElementById DOM API to select it. Because React renders the textarea element we can actually do the element selection through React itself. React has a special "ref" attribute that we can assign to each DOM element and later use it to access it.

We can also access the element through the onChange event’s target object directly. Each event exposes its target, and in the case of an onChange event on a textarea the target is the textarea element. That means all we need to do is:

// jsdrops.com/bx11

const CharacterCounter = () => {
  const [count, setCount] = useState(0);

  const handleChange = (event) => {
    const element = event.target;
    setCount(element.value.length);
  };

  return (
    <div>
      <textarea cols={80} rows={10} onChange={handleChange} />
      <div>Count: {count}</div>
    </div>
  );
};

This is the simplest solution and it actually works fine. The not-ideal thing about this solution is that we’re mixing concerns. The handleChange event handler has the side effect of calling the setCount function and computing the length of the text. This is really not the concern of an event handler. The reason we needed to mix these concerns is that React is not aware of what is being typed. It’s a DOM change, not a React change. We can make it into a React change by overriding the value of textarea and updating it through React as a state change. In the onChange handler, instead of counting the characters we just set the value of what has been typed on the state of the component. Then the concern of what to do with that value becomes part of the React UI render logic.
Here’s a version of the solution that uses this strategy:

// jsdrops.com/bx12

const CharacterCounter = () => {
  const [inputValue, setInputValue] = useState('');

  const handleChange = (event) => {
    const element = event.target;
    setInputValue(element.value);
  };

  return (
    <div>
      <textarea
        cols={80}
        rows={10}
        value={inputValue}
        onChange={handleChange}
      />
      <div>Count: {inputValue.length}</div>
    </div>
  );
};

Although this is a bit more code, it has a clear separation of concerns. React is now aware of the input element state. It controls it. This pattern is known as the controlled component pattern in React. This version is also easier to extend. If we were to compute the number of words as the user types, that becomes another computed UI value. No need to add anything else to the state.

Managing side effects

Rendering a React component in the browser for the first time is referred to as “mounting” and removing it from the browser is referred to as “unmounting”. Mounting, updating, and unmounting components might need to have a “side effect”. For example, a React TODOs app might need to display the number of active TODO items in the title of the browser page. This is not something you can do directly with the React API. You need to use the DOM API for it. Similarly, when rendering an input form you might want to auto-focus a text box. That too has to be done with the DOM API. Side effects usually need to happen either before or after React’s render task. This is why React provides “lifecycle methods” in class components to let you perform custom operations before or after the render method. You can do things after a component is first mounted inside a componentDidMount class method, you can do things after a component gets an update inside a componentDidUpdate class method, and you can do things right before a component is removed from the browser inside a componentWillUnmount class method.
For function components, side effects are managed using the React.useEffect hook function, which takes two arguments: a callback function and an array of dependencies.

useEffect(() => {
  // Do something after each render
  // but only if dep1 or dep2 changed
}, [dep1, dep2]);

The first time React renders a component that has a useEffect call it’ll invoke its callback function. After each new render of that component, if the values of the dependencies are different from what they were in the previous render, React will invoke the callback function again. When a function component is updated or unmounted, React can invoke a side effect “cleanup” function. That cleanup function can be returned from the useEffect callback function. Side-effect methods are also very handy for analyzing what is going on in the application and for further optimizing the performance of React updates.

What’s next?

You’re ready to build simple but awesome apps with React. Go through this interactive lab to get comfortable with using and managing data in a React app. Then, build something bigger. Something practical. Something fun. If you’re up for the challenge, build a simple memory game! While building things with React, keep a browser window open on this list featuring the common problems beginner developers usually run into when working with React. When you run into a problem, make sure it’s not one of them. If you feel confident enough to take a deep dive into more advanced areas in React, I have a Pluralsight course, an online workshop, a book, and a few other written materials at the jsComplete library. Do you have any questions or feedback? Tweet me or find me on the jsComplete slack help channel. I offer on-site training for teams covering all levels in JavaScript, Node, React and React Native, GraphQL, PostgreSQL, MongoDB, and more. Email training@agilelabs.com if you want to book a training for your team.

I first published this article in 2017.
The following section is the original text I used back then. React has changed significantly since then, so some of this original content is outdated. This original content also focused on the class way to create React components (because React didn’t have hooks back then).

Fundamental #1: React is all about components

React is designed around the concept of reusable components. You define small components and you put them together to form bigger components. All components, small or big, are reusable, even across different projects. A React component, in its simplest form, is a plain-old JavaScript function:

// Example 1

function Button (props) {
  // Returns a DOM element here. For example:
  return <button type="submit">{props.label}</button>;
}

// To render the Button component to the browser
ReactDOM.render(<Button label="Save" />, mountNode)

The curly braces used for the button label are explained below. Don’t worry about them now. ReactDOM will also be explained later, but if you want to test this example and all upcoming code examples, the above render function is what you need. The second argument to ReactDOM.render is the destination DOM element which React is going to take over and control. In the jsComplete React Playground (jscomplete.com/react), you can just use the special variable mountNode.

Note the following about Example 1:

- The component name starts with a capital letter. This is required since we will be dealing with a mix of HTML elements and React elements. Lowercase names are reserved for HTML elements. In fact, go ahead and try to name the React component just “button” and see how ReactDOM will ignore the function and render a regular empty HTML button.
- Every component receives a list of attributes, just like HTML elements. In React, this list is called props. With a function component, you can name it anything though.
- We weirdly wrote what looks like HTML in the returned output of the Button function component above. This is neither JavaScript nor HTML, and it’s not even React.js. But it’s so popular that it became the default in React applications. It’s called JSX and it’s a JavaScript extension. JSX is also a compromise! Go ahead and try to return any other HTML element inside the function above and see how they are all supported (for example, return a text input element).

Fundamental #2: What the flux is JSX?

Example 1 above can be written in pure React.js without JSX as follows:

// Example 2 - React component without JSX

function Button (props) {
  return React.createElement(
    "button",
    { type: "submit" },
    props.label
  );
}

// To use Button, you would do something like
ReactDOM.render(
  React.createElement(Button, { label: "Save" }),
  mountNode
);

The createElement function is the main function in the React top-level API. It’s one of a total of eight things in that level that you need to learn. That’s how small the React API is. Much like the DOM itself having a document.createElement function to create an element specified by a tag name, React’s createElement function is a higher-level function that can do what document.createElement does, but it can also be used to create an element to represent a React component. We did the latter when we used the Button component in Example 2 above. Unlike document.createElement, React’s createElement accepts a dynamic number of arguments after the second one to represent the children of the created element. So createElement actually creates a tree.
Here’s an example of that:

// Example 3 - React’s createElement API

const InputForm = React.createElement(
  "form",
  { target: "_blank", action: "" },
  React.createElement("div", null, "Enter input and click Search"),
  React.createElement("input", { name: "q", className: "input" }),
  React.createElement(Button, { label: "Search" })
);

// InputForm uses the Button component, so we need that too:
function Button (props) {
  return React.createElement(
    "button",
    { type: "submit" },
    props.label
  );
}

// Then we can use InputForm directly with .render
ReactDOM.render(InputForm, mountNode);

Note a few things about the example above:

- InputForm is not a React component; it’s just a React element. This is why we used it directly in the ReactDOM.render call and not with <InputForm />.
- The React.createElement function accepted multiple arguments after the first two. Its list of arguments starting from the 3rd one comprises the list of children for the created element.
- We were able to nest React.createElement calls because it’s all JavaScript.
- The second argument to React.createElement can be null or an empty object when no attributes or props are needed for the element.
- We can mix HTML elements with React elements.
- React’s API tries to be as close to the DOM API as possible; that’s why we use className instead of class for the input element. Secretly, we all wish that React’s API would become part of the DOM API itself. Because, you know, it’s much much better.

The code above is what the browser understands when you include the React library. The browser does not deal with any JSX business. However, we humans like to see and work with HTML instead of these createElement calls (imagine building a website with just document.createElement, which you can!). This is why the JSX compromise exists.
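One way to internalize what these nested calls produce is a toy createElement that just returns plain objects. This is a simplified model of the idea only, not React's actual element shape (real React elements keep children under props.children, among other differences):

```javascript
// Toy createElement: returns a plain object describing the node
// and collects any remaining arguments as its children.
function createElement(type, props, ...children) {
  return { type, props: props || {}, children };
}

// The same nesting as Example 3, minus the React-specific parts:
const tree = createElement(
  'form',
  { target: '_blank' },
  createElement('div', null, 'Enter input and click Search'),
  createElement('input', { name: 'q' }),
  createElement('button', { type: 'submit' }, 'Search')
);

console.log(tree.type); // 'form'
console.log(tree.children.length); // 3
console.log(tree.children[0].children[0]); // 'Enter input and click Search'
```

The nested calls run inside-out, so by the time the outer call executes, its children are already plain values. That is all "createElement creates a tree" means: a nested data structure, built with ordinary function calls.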
Instead of writing the form above with React.createElement calls, we can write it with a syntax very similar to HTML:

// Example 4 - JSX (compare with Example 3)

const InputForm =
  <form target="_blank" action="">
    <div>Enter input and click Search</div>
    <input name="q" className="input" />
    <Button label="Search" />
  </form>;

// InputForm "still" uses the Button component, so we need that too.
// Either JSX or normal form would do
function Button (props) {
  // Returns a DOM element here. For example:
  return <button type="submit">{props.label}</button>;
}

// Then we can use InputForm directly with .render
ReactDOM.render(InputForm, mountNode);

Note a few things about the above:

- It’s not HTML. For example, we’re still doing className instead of class.
- We’re still considering what looks like HTML above as JavaScript. See how I added a semicolon at the end.

What we wrote above (Example 4) is JSX. Yet, what we took to the browser is the compiled version of it (Example 3). To make that happen, we need to use a pre-processor to convert the JSX version into the React.createElement version. That is JSX. It’s a compromise that allows us to write our React components in a syntax similar to HTML, which is a pretty good deal. The word “Flux” in the header above was chosen to rhyme, but it’s also the name of a very popular application architecture popularized by Facebook, the most famous implementation of which is Redux. Flux fits the React reactive pattern perfectly. JSX, by the way, can be used on its own. It’s not a React-only thing.

Fundamental #3: You can use JavaScript expressions anywhere in JSX

Inside a JSX section, you can use any JavaScript expression within a pair of curly braces.

// Example 5 - Using JavaScript expressions in JSX

const RandomValue = () =>
  <div>
    { Math.floor(Math.random() * 100) }
  </div>;

// To use it:
ReactDOM.render(<RandomValue />, mountNode);

Any JavaScript expression can go inside those curly braces.
This is equivalent to the ${} interpolation syntax in JavaScript template literals. This is the only constraint inside JSX: only expressions. So, for example, you can’t use a regular if statement, but a ternary expression is ok. JavaScript variables are also expressions, so when the component receives a list of props (the RandomValue component didn’t; props are optional), you can use these props inside curly braces. We did this in the Button component above (Example 1). JavaScript objects are also expressions. Sometimes we use a JavaScript object inside curly braces, which makes it look like double curly braces, but it’s really just an object inside curly braces. One use case of that is to pass a CSS style object to the special style attribute in React:

// Example 6 - An object passed to the special React style prop

const ErrorDisplay = ({message}) =>
  <div style={ { color: 'red', backgroundColor: 'yellow' } }>
    {message}
  </div>;

// Use it:
ReactDOM.render(
  <ErrorDisplay
    message="These aren't the droids you're looking for"
  />,
  mountNode
);

Note how I destructured only the message out of the props argument. Also note how the style attribute above is a special one (again, it’s not HTML, it’s closer to the DOM API). We use an object as the value of the style attribute. That object defines the styles as if we’re doing so with JavaScript (because we are). You can even use a React element inside JSX, because that too is an expression. Remember, a React element is essentially a function call:

// Example 7 - Using a React element within {}

const MaybeError = ({errorMessage}) =>
  <div>
    {errorMessage && <ErrorDisplay message={errorMessage} />}
  </div>;

// The MaybeError component uses the ErrorDisplay component:
const ErrorDisplay = ({message}) =>
  <div style={ { color: 'red', backgroundColor: 'yellow' } }>
    {message}
  </div>;

// Now we can use the MaybeError component:
ReactDOM.render(
  <MaybeError
    errorMessage={Math.random() > 0.5 ?
'Not good' : ''}
  />,
  mountNode
);

The MaybeError component above would only display the ErrorDisplay component if there is an errorMessage string passed to it; otherwise it displays an empty div. React considers {true}, {false}, {undefined}, and {null} to be valid element children, which do not render anything.

You can also use all JavaScript functional methods on collections (map, reduce, filter, concat, and so on) inside JSX. Again, because they return expressions:

// Example 8 - Using an array map inside {}

const Doubler = ({value=[1, 2, 3]}) =>
  <div>
    {value.map(e => e * 2)}
  </div>;

// Use it
ReactDOM.render(<Doubler />, mountNode);

Note how I gave the value prop a default value above, because it’s all just JavaScript. Note also that I outputted an array expression inside the div. React is okay with that; it will place every doubled value in a text node.

Fundamental #4: You can write React components with JavaScript classes

Simple function components are great for simple needs, but sometimes we need more. React supports creating components through the JavaScript class syntax as well. Here’s the Button component (in Example 1) written with the class syntax:

// Example 9 - Creating components using JavaScript classes

class Button extends React.Component {
  render() {
    return <button>{this.props.label}</button>;
  }
}

// Use it (same syntax)
ReactDOM.render(<Button label="Save" />, mountNode);

The class syntax is simple. Define a class that extends React.Component (another top-level React API thing that you need to learn). The class defines a single instance function render(), and that render function returns the virtual DOM element. Every time we use the Button class-based component above (for example, by doing <Button ... />), React will instantiate an object from this class-based component and use that object to render a DOM element in the DOM tree. This is the reason why we used this.props.label inside the JSX in the rendered output above.
Because every element rendered through a class component gets a special instance property called props that holds all values passed to that element when it was created. Since we have an instance associated with a single use of the component, we can customize that instance as we wish. We can, for example, customize it after it gets constructed by using the regular JavaScript constructor function:

// Example 10 - Customizing a component instance

class Button extends React.Component {
  constructor(props) {
    super(props);
    this.id = Date.now();
  }

  render() {
    return <button id={this.id}>{this.props.label}</button>;
  }
}

// Use it
ReactDOM.render(<Button label="Save" />, mountNode);

We can also define class functions and use them anywhere we wish, including inside the returned JSX output:

// Example 11 — Using class properties

class Button extends React.Component {
  clickCounter = 0;

  handleClick = () => {
    console.log(`Clicked: ${++this.clickCounter}`);
  };

  render() {
    return (
      <button id={this.id} onClick={this.handleClick}>
        {this.props.label}
      </button>
    );
  }
}

// Use it
ReactDOM.render(<Button label="Save" />, mountNode);

Note a few things about Example 11 above:

- The handleClick function is written using the class-field syntax in JavaScript. At the time of writing, this was still a stage-2 proposal, but for many reasons it’s the best option to access the component’s mounted instance (thanks to arrow functions). You need to use a compiler like Babel configured to understand stage-2 (or the class-field syntax) to get the code above to work. The jsComplete REPL has that pre-configured.
- We’ve also defined the clickCounter instance variable using the same class-field syntax. This allows us to skip using a class constructor call altogether.
- When we specified the handleClick function as the value of the special onClick React attribute, we did not call it. We passed in the reference to the handleClick function. Calling the function on that level is one of the most common mistakes when working with React.
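The reference-versus-call difference is easy to demonstrate without React at all. Here is a tiny, React-free stand-in for an event registry (all names here are made up for the sketch):

```javascript
// A toy event registry: `on` stores whatever you hand it as the handler.
const registered = [];
const on = (handler) => registered.push(handler);

let clicks = 0;
const handleClick = () => { clicks += 1; };

on(handleClick);   // right: registers the function itself
on(handleClick()); // wrong: CALLS it now and registers its return value

console.log(clicks); // 1 -- the wrong form already fired during registration
console.log(typeof registered[0]); // 'function'
console.log(registered[1]); // undefined -- nothing useful was registered
```

The second registration both fires the handler too early and stores undefined, which is exactly what happens with onClick={this.handleClick()} in React: the function runs during render, and the attribute receives its return value instead of the function.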
// Wrong:
onClick={this.handleClick()}

// Right:
onClick={this.handleClick}

Fundamental #5: Events in React: Two Important Differences

When handling events inside React elements, there are two very important differences from the way we do so with the DOM API:

- All React elements’ attributes (events included) are named using camelCase, rather than lowercase. It’s onClick, not onclick.
- We pass an actual JavaScript function reference as the event handler, rather than a string. It’s onClick={handleClick}, not onClick="handleClick".

React wraps the DOM event object with an object of its own to optimize the performance of events handling. But inside an event handler, we can still access all methods available on the DOM event object. React passes that wrapped event object to every handler call. For example, to prevent a form from the default submission action, you can do:

// Example 12 - Working with wrapped events

class Form extends React.Component {
  handleSubmit = (event) => {
    event.preventDefault();
    console.log('Form submitted');
  };

  render() {
    return (
      <form onSubmit={this.handleSubmit}>
        <button type="submit">Submit</button>
      </form>
    );
  }
}

// Use it
ReactDOM.render(<Form />, mountNode);

Fundamental #6: Every React component has a story

The following applies to class components only (those that extend React.Component). Function components have a slightly different story.

- First, we define a template for React to create elements from the component.
- Then, we instruct React to use it somewhere. For example, inside a render call of another component, or with ReactDOM.render.
- Then, React instantiates an element and gives it a set of props that we can access with this.props. Those props are exactly what we passed in step 2 above.
- Since it’s all JavaScript, the constructor method will be called (if defined). This is the first of what we call component lifecycle methods.
- React then computes the output of the render method (the virtual DOM node).
- Since this is the first time React is rendering the element, React will communicate with the browser (on our behalf, using the DOM API) to display the element there. This process is commonly known as mounting.
- React then invokes another lifecycle method, called componentDidMount. We can use this method to, for example, do something on the DOM that we now know exists in the browser. Prior to this lifecycle method, the DOM we worked with was all virtual.
- Some components' stories end here. Other components get unmounted from the browser DOM for various reasons. Right before that happens, React invokes another lifecycle method, componentWillUnmount.
- The state of any mounted element might change. The parent of that element might re-render. In either case, the mounted element might receive a different set of props. React magic happens here, and we actually start needing React at this point! Prior to this point, we did not need React at all, honestly.

The story of this component continues, but before it does, we need to understand this state thing that I speak of.

Fundamental #7: React components can have a private state

The following is also only applicable to class components. Did I mention that some people call presentational-only components dumb?

The state property is special in any React class component. React monitors every component's state for changes.
But for React to do so efficiently, we have to change the state field through another React API that we need to learn, this.setState:

// Example 13 - the setState API
class CounterButton extends React.Component {
  state = {
    clickCounter: 0,
    currentTimestamp: new Date(),
  };
  handleClick = () => {
    this.setState((prevState) => {
      return { clickCounter: prevState.clickCounter + 1 };
    });
  };
  componentDidMount() {
    setInterval(() => {
      this.setState({ currentTimestamp: new Date() });
    }, 1000);
  }
  render() {
    return (
      <div>
        <button onClick={this.handleClick}>Click</button>
        <p>Clicked: {this.state.clickCounter}</p>
        <p>Time: {this.state.currentTimestamp.toLocaleString()}</p>
      </div>
    );
  }
}

// Use it
ReactDOM.render(<CounterButton />, mountNode);

This is the most important example to understand. It will basically complete your fundamental knowledge of the React way. After this example, there are a few other small things that you need to learn, but from that point it's mostly you and your JavaScript skills.

Let's walk through Example 13, starting with its class fields. It has two of them. The special state field is initialized with an object that holds a clickCounter that starts at 0 and a currentTimestamp that starts at new Date(). The second class field is a handleClick function, which we passed to the onClick event of the button element inside the render method.

The handleClick method modifies this component instance's state using setState. Take notice of that. The other place we modify the state is inside an interval timer that we started inside the componentDidMount lifecycle method. It ticks every second and executes another call to this.setState.

In the render method, we used the two properties we have on the state with a normal read syntax. There is no special API for that.

Now, notice that we updated the state in two different ways:

- By passing a function that returned an object. We did that inside the handleClick function.
- By passing a regular object.
We did that inside the interval callback.

Both ways are acceptable, but the first one is preferred when you read and write to the state at the same time (which we do). Inside the interval callback, we're only writing to the state, not reading it. When in doubt, always use the first, function-as-argument syntax. It's safer with race conditions, because setState should always be treated as an asynchronous method.

How do we update the state? We return an object with the new value of what we want to update. Notice how in both calls to setState, we're only passing one property from the state field, not both. This is completely okay, because setState actually merges what you pass it (the returned value of the function argument) with the existing state. So, not specifying a property while calling setState means that we wish not to change that property (but not to delete it).

Fundamental #8: React will react

React gets its name from the fact that it reacts to state changes (although not reactively, but on a schedule). There was a joke that React should have been named Schedule!

However, what we witness with the naked eye when the state of any component gets updated is that React reacts to that update and automatically reflects the update in the browser DOM (if needed).

Think of the render function's input as both:

- The props that get passed by the parent
- The internal private state that can be updated at any time

When the input of the render function changes, its output might change. React keeps a record of the history of renders, and when it sees that one render is different from the previous one, it computes the difference between them and efficiently translates it into actual DOM operations that get executed in the DOM.

Fundamental #9: React is your agent

You can think of React as the agent we hired to communicate with the browser. Take the current timestamp display above as an example.
Instead of us manually going to the browser and invoking DOM API operations to find and update the p#timestamp element every second, we just changed a property on the state of the component, and React did its job of communicating with the browser on our behalf. I believe this is the true reason why React is popular. We hate talking to Mr. Browser (and the many dialects of the DOM language that it speaks), and React volunteered to do all the talking for us, for free.

Fundamental #10: Every React component has a story (part 2)

Now that we know about the state of a component, and how some magic happens when that state changes, let's learn the last few concepts about that process.

- A component might need to re-render when its state gets updated or when its parent decides to change the props that it passed to the component.
- If the latter happens, React invokes another lifecycle method, componentWillReceiveProps.
- If either the state object or the passed-in props change, React has an important decision to make: should the component be updated in the DOM? This is why it invokes another important lifecycle method here, shouldComponentUpdate. This method is an actual question, so if you need to customize or optimize the render process on your own, you have to answer that question by returning either true or false.
- If there is no custom shouldComponentUpdate specified, React defaults to a very smart behavior that's actually good enough in most situations.
- Next, React invokes another lifecycle method at this point, componentWillUpdate. React will then compute the new rendered output and compare it with the last rendered output.
- If the rendered output is exactly the same, React does nothing (no need to talk to Mr. Browser).
- If there is a difference, React takes that difference to the browser, as we've seen before.
- In any case, since an update process happened anyway (even if the output was exactly the same), React invokes the final lifecycle method, componentDidUpdate.

Lifecycle methods are actually escape hatches. If you're not doing anything special, you can create full applications without them. They're very handy for analyzing what is going on in the application and for further optimizing the performance of React updates.

That's it. Believe it or not, with what you learned above (or parts of it, really), you can start creating some interesting React applications. If you're hungry for more, check out my Learn React.js by Building Games book!

Thanks to the many readers who reviewed and improved this article: Łukasz Szewczak, Tim Broyles, Kyle Holden, Robert Axelse, Bruce Lane, Irvin Waldman, and Amie Wilt.
https://medium.com/edge-coders/all-the-fundamental-react-js-concepts-jammed-into-this-single-medium-article-c83f9b53eac2?source=collection_home---4------21-----------------------
import pygame as pg
import sys


def main():
    pg.init()
    clock = pg.time.Clock()
    fps = 60
    bg = (255, 255, 255)
    size = [400, 400]
    this = pg.display.set_mode(size)

    ay = .5
    vy = 2
    vx = 4

    player = pg.Rect(100, 50, 20, 20)
    platform = pg.Rect(0, size[1] - 20, size[0], 20)
    objs = [player, platform]
    colors = [[255, 0, 0], [0, 255, 0]]
    move = [pg.K_LEFT, pg.K_RIGHT]

    def collide(player, platform, vy):
        hit = player.colliderect(platform)
        if hit:
            if player.bottom >= platform.top:
                player.bottom = platform.top
                vy = 0

    while True:
        for event in pg.event.get():
            if event.type == pg.QUIT:
                return False
            if event.type == pg.KEYDOWN:
                if event.key == pg.K_SPACE:
                    vy -= 10

        key = pg.key.get_pressed()
        for i in range(2):
            if key[move[i]]:
                player.x += vx * [-1, 1][i]

        this.fill(bg)
        vy += ay
        player.y += vy
        collide(player, platform, vy)

        for i, obj in enumerate(objs):
            pg.draw.rect(this, colors[i], obj)

        pg.display.update()
        clock.tick(fps)

    pg.quit()
    sys.exit()


if __name__ == '__main__':
    main()

In your inner function, vy is a local variable (an argument to the function). Changing its value doesn't change the value of the vy variable in the outer function. You either need to return the new velocity (and change the calling code to vy = collide(player, platform, vy)), or change your design around so that the velocity is an attribute of some other object (maybe player, if you changed it to be an instance of a custom class).

If you were directly accessing vy in the outer namespace (without it being an argument), you could use a nonlocal statement in the inner function to let it write to the outer function's local variable. However, letting things scribble on other namespaces like this is often considered poor design. Using objects or return values is generally easier to reason about and much simpler to debug when things go wrong.
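A minimal sketch of the return-value fix described in the answer: the function returns the new velocity, and the caller rebinds vy with the result. Plain numbers stand in for the pygame Rect attributes here, and the names are illustrative:

```python
def collide(player_bottom, platform_top, vy):
    """Return the new vertical velocity instead of assigning to the local parameter."""
    if player_bottom >= platform_top:
        return 0      # landed on the platform: kill the downward velocity
    return vy         # no collision: velocity unchanged

# The caller must rebind vy with the returned value:
vy = 5
vy = collide(400, 380, vy)   # player.bottom (400) is past platform.top (380), so vy becomes 0
```

In the original program, the equivalent change is to end the inner function with return vy and call it as vy = collide(player, platform, vy).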
https://codedump.io/share/htIAEb5dGAVa/1/velocity-reassignment-for-collision-detection-is-none-responsive-when-collision-is-triggered-through-a-function
RAII (Resource Acquisition Is Initialization) is an incredibly important technique in C++ (and in D and Ada) which helps ensure that memory leaks are prevented. Without it, the developer would have to manually free used memory once it is no longer needed, which could then cause issues with exception safety. RAII is discussed in detail in [The C++ Programming Language, Bjarne Stroustrup], so I will not go into too much depth here. The idea is that the destructors of all objects residing on the stack are called as they go out of scope (perhaps due to propagation of an exception during stack unwinding). These destructors then contain any important cleanup code, such as stopping a thread, closing a file, etc.

In order to take advantage of RAII, any C libraries must be wrapped in C++ classes first. An example of this is GTK+, which provides an official C++ binding called gtkmm. I have also found that I needed to do the same with the Motif library in order to safely use it with C++ in the OpenCDE project (motifmm). This was a large amount of work and is nowhere near complete. There is also a downside in that, since other developers have started to join the project and some are more experienced with C than C++ or vice versa, it is quite awkward for code reuse to take place. For example, a C++ .ini parser can only be used by the C++ developers, or a C .ini parser can only safely be used by the C developers. What I have been looking for is a way that both the C and C++ developers can correctly use the same code.

I have been looking at two ideas which are slightly similar to what I require: the C++ auto_ptr template class and the GNU C Compiler's extension of the cleanup variable attribute. The auto_ptr is one of the earliest and simplest of the C++ smart pointers that allow for RAII to take place on objects stored on the heap, or that use file resources, etc. However, it cannot be used with C code at all, so the following would not make any sense.
std::auto_ptr<FILE> file;
file.reset(fopen("example.txt", "r"));
// Prepare for a crash!

This compiles and runs, but results in undefined behavior. What happens behind the scenes is that delete is called on our FILE pointer, which is obviously incorrect; it should have been fclose(). It probably wouldn't work even if free() correctly cleaned up the memory, since delete and free() are generally not interchangeable in the majority of C++ compilers (I have only seen it work on a few select versions of GCC). So it seems that the smart pointer needs a way of explicitly knowing how to clean up the data of the contained pointer, rather than just making an (often incorrect) assumption.

This brings us on to the GNU C Compiler extension.

#define RAII_VARIABLE(vartype,varname,initval,dtor) \
    void _dtor_ ## varname (vartype * v) { dtor(*v); } \
    vartype varname __attribute__((cleanup(_dtor_ ## varname))) = (initval)

void example_usage() {
  RAII_VARIABLE(FILE*, logfile, fopen("logfile.txt", "w+"), fclose);
  fputs("hello logfile!", logfile);
}

Though implemented with a slightly complex macro, and not portable between C compilers, this does work. The important thing to note is that an fclose() function pointer has been passed in, so the cleanup method is explicitly stated. If this can be implemented with a C++ template class, it will allow our new smart pointer class to work with most C++ compilers available, rather than being a compiler-specific extension.

I have implemented a "toy" version of a template class which should demonstrate the idea discussed above.
It has been kept deliberately simple, and as such does not throw any kind of exception when the reset() fails (i.e. if fopen() returns NULL).

#ifndef C_PTR_H
#define C_PTR_H

#include <iostream>

template<class T, class R>
class c_ptr
{
private:
  T t;
  R (*func)(T);

  void clean()
  {
    if(func != 0 && t != 0)
    {
      (*func)(t);
      t = 0;
      func = 0;
    }
  }

public:
  c_ptr()
  {
    t = 0;
    func = 0;
  }

  ~c_ptr()
  {
    clean();
  }

  void reset(T t, R (*func)(T))
  {
    clean();
    this->t = t;
    this->func = func;
  }

  T get()
  {
    return t;
  }
};

#endif

With the above template class, an example of working with files the "C way" (rather than via std::ofstream) is demonstrated. Note that the template takes both the contained type and the cleanup function's return type, so FILE* pairs with int (the return type of fclose).

c_ptr<FILE*, int> file1;
c_ptr<FILE*, int> file2;

file1.reset(fopen("example.txt", "w+"), fclose);
if(file1.get() == NULL)
{
  return;
}

file2.reset(fopen("test.txt", "w+"), fclose);
if(file2.get() == NULL)
{
  // No need to clean up file1
  return;
}

fputs("hello world!", file1.get());

// No need to do any cleanup

With this, as the smart pointer container (c_ptr) goes out of scope, fclose is called on the contained pointer rather than the more generic free() or delete. Note that a FILE* pointer is explicitly specified as the template argument rather than FILE; this is because sometimes the data is not handled via a pointer, such as an OpenGL texture (GLuint) or a sys/socket (int). The above c_ptr implementation will need improvements, such as for cleanup functions that require more than one parameter (XtDestroyWidget(XtDisplay(widget), widget)). This is made even worse because the secondary parameter (XtDisplay*) can only be derived from the primary parameter (Widget, the one that needs cleanup), but that doesn't yet exist when passing in the cleanup function and parameters. (Luckily XtDestroyWidget() only requires one parameter; I had to lie a little bit to show the worst-case scenario.)
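Since C++11, the standard library covers this exact use case: std::unique_ptr accepts a custom deleter type, so the cleanup function is stated explicitly rather than assumed to be delete. A sketch of the same idea as c_ptr using that mechanism (write_message and the file name are illustrative, not from the article):

```cpp
#include <cassert>
#include <cstdio>
#include <memory>

// Custom deleter type: tells unique_ptr to clean up with fclose, not delete.
struct FileCloser {
    void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};
using unique_file = std::unique_ptr<std::FILE, FileCloser>;

// Opens a file, writes a message, and relies on RAII for the fclose.
bool write_message(const char* path, const char* msg) {
    unique_file f(std::fopen(path, "w"));
    if (!f) return false;                    // open failed: nothing to clean up
    return std::fputs(msg, f.get()) >= 0;
}   // fclose runs here automatically, on every return path
```

Like c_ptr, this keeps the cleanup explicit; unlike the macro-based GCC extension, it is portable to any conforming C++11 compiler.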
https://www.ibm.com/developerworks/mydeveloperworks/blogs/karsten/?lang=en
You're familiar by now with the standard function foo() { return 'bar' } style of functions in JavaScript. What if I told you that ES6 introduced a new way of writing functions that's terser and more readable? (It has a few other perks that we'll go over, too.)

Note: ES6 is the newest version of JavaScript to be released. It offers some new syntax, but it is not yet supported by all browsers.

Would you believe me? Have you seen these guys:

var arrowFunction = () => {
  return 'Arrow functions are great!'
};

(If you've been checking out the test files in our labs, you've seen them :))

These are called arrow functions, in reference to the little => that characterizes them. You can call arrow functions just like regular functions.

var arrowFunction = () => {
  console.log('I was called!')
}

var regularFunction = function() {
  console.log('I was called, too!')
}

arrowFunction() // 'I was called!'
regularFunction() // 'I was called, too!'

And they take arguments in a similar way:

var arrowFunction = (arg1, arg2) => {
  console.log(arg1, arg2)
}

arrowFunction('Hey,', 'you!') // 'Hey, you!'

If an arrow function accepts exactly one argument, you can omit the parentheses.

var arrowFunction = myArg => {
  console.log(myArg)
}

arrowFunction('Hi!') // 'Hi!'

In a greater divergence from regular functions, arrow functions give us implicit returns: just omit the curly braces from around the function body.

var square = n => n * n

square(3) // 9
square(4) // 16

All arrow functions are anonymous. Regular functions take their names from their identifiers.

function iHaveAName() {}

iHaveAName.name // 'iHaveAName'

But arrow functions don't have identifiers, so they're always anonymous.

(() => {}).name // ''

We've talked about functions returning functions, and arrow functions provide a nice, terse way of performing such a task.

function nester() {
  return () => {
    return () => {
      return 'Found me!'
    }
  }
}

nester()()() // 'Found me!'

You'd agree that that's more readable than the alternative, right?
function nester() {
  return function() {
    return function() {
      return 'Found me!'
    }
  }
}

nester()()() // 'Found me!'

(It also takes way fewer keystrokes to type, which seems like it's not a big deal now, but trust us when we say that you'll come to appreciate syntactical brevity in programming.)

Sometimes you might encounter a library or function that takes a function as an argument — arrow functions are great here, too!

[1, 2, 3, 4].map(n => n * n).reduce((sum, n) => (sum + n), 0) // 30

Imagine writing that out with function all over the place!

[1, 2, 3, 4].map(function(n) {
  return n * n
}).reduce(function(sum, n) {
  return sum + n
}, 0)

Gross.

View Arrow Functions on Learn.co and start learning to code for free.
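The chained map/reduce call above can be unpacked step by step to confirm the arithmetic (1 + 4 + 9 + 16 = 30); the variable names here are illustrative, not from the lesson:

```javascript
// Each step of the chained map/reduce example, spelled out.
const squares = [1, 2, 3, 4].map(n => n * n);          // [1, 4, 9, 16]
const total = squares.reduce((sum, n) => sum + n, 0);  // 30

console.log(squares, total);
```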
https://learn.co/lessons/javascript-arrow-functions
Java runtime & environment

How to make a simple Java program with command-line tools

This example shows how to make the simplest Java program with the command-line Java tools. There are three steps.

STEP 1: Create the program file

A Java source file has the .java extension. Name the file First.java and create a public class First with a main() method:

/* This is a first java program. */
public class First
{
    public static void main( String[] args )
    {
        System.out.println( "This is a first java program" );
    }
}

STEP 2: Compile First.java with the Java compiler

From the command line (in the directory containing the file), run the compiler: javac First.java. If all is OK, a new compiled file First.class will be created in the directory.

STEP 3: Run the compiled file

Run the program by calling java with the compiled class name (without the extension) as the parameter: java First.
http://www.josefpirkl.com/javaexamplecenter/java/java_first.php
Protected (Visual Basic)

Specifies that one or more declared programming elements are accessible only from within their own class or from a derived class.

Sometimes a programming element declared in a class contains sensitive data or restricted code, and you want to limit access to the element. However, if the class is inheritable and you expect a hierarchy of derived classes, it might be necessary for these derived classes to access the data or code. In such a case, you want the element to be accessible both from the base class and from all derived classes. To limit access to an element in this manner, you can declare it with Protected.

Rules

- Declaration Context. You can use Protected only at class level. This means the declaration context for a Protected element must be a class, and cannot be a source file, namespace, interface, module, structure, or procedure.
- Combined Modifiers. You can use the Protected modifier together with the Friend (Visual Basic) modifier in the same declaration. This combination makes the declared elements accessible from anywhere in the same assembly, from their own class, and from derived classes. You can specify Protected Friend only on members of classes.

Behavior

- Access Level. All code in a class can access its elements. Code in any class that derives from a base class can access all the Protected elements of the base class. This is true for all generations of derivation. This means that a class can access Protected elements of the base class of its base class, and so on. Protected access is not a superset or subset of friend access.
- Access Modifiers. The keywords that specify access level are called access modifiers. For a comparison of the access modifiers, see Access Levels in Visual Basic.

The Protected modifier can be used in these contexts:
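As an illustrative sketch (the class and member names are mine, not from the documentation), a Protected member is reachable from a derived class but not from unrelated code:

```vb
Public Class BaseAccount
    Protected balance As Decimal   ' visible to BaseAccount and to derived classes

    Protected Sub Audit()
        ' restricted code shared with the class hierarchy
    End Sub
End Class

Public Class SavingsAccount
    Inherits BaseAccount

    Public Sub AddInterest(rate As Decimal)
        balance += balance * rate  ' OK: a derived class can access Protected fields
        Audit()                    ' OK: a derived class can call Protected methods
    End Sub
End Class

' Dim acct As New SavingsAccount()
' acct.balance = 100D   ' compile error: 'balance' is Protected
```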
https://msdn.microsoft.com/en-us/library/8050kawf(v=vs.100).aspx
Would anyone happen to have an example of a program to approximate the exponential function e^x? Thanks

Sorry, damsel - don't double post. Wait a while for someone to answer your question.

You could always use #define e whatever e is and use the pow function.

I felt bad because there was a mean face attached to it and I meant to put a happy face. Could you explain a little more your idea?

OK... You can use

#define e 2.71828 // e, to five decimal places

and then use the pow(x,y) function from math.h to do the exponent.

Would you happen to have an example program so I have something to reference?

#include <iostream.h>
#include <math.h>

int main()
{
    double x, y;
    cout << "This program will find x^y. Please enter x (number) and then y (power): ";
    cin >> x >> y; // read the two values before using them
    x = pow(x, y); // the pow function raises the first number to the power of the second
    cout << "\nx to the power of y = " << x;
    return 0;
}

That should do it for ya; post again if you still need help.

There is a function in the math header: double exp( double arg ) returns the value of e^arg. Conversely, double log( double num ) returns the natural logarithm of num. There are many other useful functions in that header; you should look at a reference for it. I suggest picking up the C/C++ Programmer's Reference (a good reference book to have).

If a tree falls in the forest, and no one is around to see it, do the other trees make fun of it?

The thing that keeps tripping me up is that I need to obtain an output as follows:

e to the x        7.38
approximation     6.333
the difference    1.056

with a keyboard input such as: 2.0. I am just not getting it.

Well... what do you base your approximation and actual answer on?

And I am just starting to learn this stuff.
All of your answers were great, thanks.

If you need to write each function and it's OK to use a little recursion, try something like:

int factorial(int a) // computes factorial
{
    if(a > 1)
        a = a * factorial(a - 1);
    else
        return 1;
    return a;
}

float power(float x, int y) // computes the power
{
    int i;
    float result = x;
    for(i = y; i > 1; i--)
        result = result * x;
    return result;
}

int main()
{
    int i;
    float x, result = 0;
    printf("\nPlease enter value for x :>");
    scanf("%f", &x);
    for(i = 1; i < 10; i++) // make the "10" bigger if you want more loops
    {
        result += power(x, i) / factorial(i);
        printf("Term %i = %f\n", i, result); // move the print outside the for() loop if you only want the final answer
    }
}

I think all that would work, I just wrote it out off the top of my head; there might be some minor issues, but that's the basics of what you'll need. Also, it's assuming you'll only be using positive, non-zero numbers. You'll have to add a couple of modifications to fix those, but it's not hard and is just more of the same. Those are fun programs to write.

What you probably want is in the <iomanip.h> header. It is scientific notation (2.55e+32) that it looks like you want. With iomanip I believe you can just denote the output format like so...

cout << dec << x; // outputs the x variable in decimal format

...The scientific-notation manipulator needs to be looked up.

My Avatar says: "Stay in School"
Rocco is the Boy! "SHUT YOUR LIPS..."
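The approximation the thread is circling around is the Taylor series e^x = 1 + x + x^2/2! + x^3/3! + ... A sketch of a cleaner, iterative version of the idea in the answers above, where each term is built from the previous one instead of calling separate power/factorial functions (approx_exp is an illustrative name, not from the thread):

```cpp
#include <cassert>
#include <cmath>

// Taylor-series approximation of e^x: sum of x^n / n! for n = 0 .. terms-1.
// Each term is derived from the previous one: x^n/n! = (x^(n-1)/(n-1)!) * x/n.
double approx_exp(double x, int terms)
{
    double sum = 1.0;   // the n = 0 term
    double term = 1.0;
    for (int n = 1; n < terms; ++n) {
        term *= x / n;
        sum += term;
    }
    return sum;
}
```

Printing exp(x), approx_exp(x, terms), and their difference then reproduces the three-line output the original poster asked for; with few terms the approximation falls short of e^x, and the gap shrinks as terms grows.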
https://cboard.cprogramming.com/cplusplus-programming/11770-damsel-distress.html
It provides good performance and can be used to develop application of any size. Its performance is also very good... and then change the MySqL connection varibales in the news.php. After that you can
http://www.roseindia.net/tutorialhelp/comment/1072
CC-MAIN-2014-35
refinedweb
2,080
65.73
The Timer component raises a Tick event at specified time intervals. The Tick event handler can then process a regularly occurring event, such as repainting the screen for animation or a clock display, updating a status report, or terminating a program based on elapsed time. The interval between Tick events, specified by the Interval property, is measured in milliseconds, with valid values between 1 and 2,147,483,647, inclusive. The maximum value corresponds to approximately 597 hours, or a little over 24 days. If there are heavy demands on the system running the application, either from the current application or from other applications, the Tick events may not be raised as often as specified by the Interval property, especially if the Interval is very short. In no case is the interval guaranteed to be accurate. If accuracy is required, the system clock should be checked as needed, especially for long intervals. Although the Interval property is in milliseconds, the system clock generates only 18 ticks per second. Therefore, the true precision of the Timer is no better than one-eighteenth of a second, which is about 55 milliseconds. Three types of timers are provided in the .NET Framework. The first is a member of the System.Threading namespace. It is used primarily for multithreaded applications and will not be covered in this book. It is not represented in any Visual Studio .NET Toolbox. The second is a member of the System.Timers namespace. It is primarily intended for server applications and is also designed for multithreaded applications. It is found on the Components tab of the Visual Studio .NET Toolbox. It too will not be covered in this book. The third type of timer component, described in this book, is a member of the System.Windows.Forms namespace. It is designed for the single-threaded environment of Windows Forms, where the UI thread controls processing.
This single-threaded timer is available from the Windows Forms tab of the Visual Studio .NET Toolbox. The Timer component is not a control, since it does not have a visual aspect. Because it is not a control, it does not have a Parent property and is not part of the Controls collection. In Visual Studio .NET, a Timer component is added to the form by dragging it from the Toolbox onto the form. However, it doesn't stay or appear on the form, but displays in the component tray at the bottom of the design window, as shown in Figure 16-6. Figure 16-6. Timer component in Visual Studio .NET The Timer component has only two properties, listed in Table 16-18. The Enabled property must be set to true in order to turn the timer function on. This can be done in the Properties window of Visual Studio .NET, in the code in the constructor (which is effectively the same), or in some other part of the program, in the event handler for a button Click. Setting the Enabled property false turns the timer off. The Timer component has two methods. The Start method starts the timer; it is equivalent to setting the Enabled property to true. The Stop method turns the timer off; it is equivalent to setting the Enabled property to false. The Timer component has a single event, Tick. It is raised every time the number of milliseconds in the Interval property has passed. The Tick event has an event argument of type EventArgs, which means that no additional properties are associated with the event. If the Enabled property is set to false in the Tick event handler, then the timer will be a one-shot deal: once the event is raised and handled, it will not be raised again until the Enabled property is toggled. If the Enabled property is not changed in the Tick event handler, then the timer will keep recurring until the property is set to false. 
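As a quick sanity check on the interval limits quoted earlier, the arithmetic can be verified in a few lines. Python is used here purely as a calculator; it is not part of the Windows Forms example:

```python
# Maximum Interval value for the Windows Forms Timer, in milliseconds
MAX_INTERVAL_MS = 2_147_483_647

hours = MAX_INTERVAL_MS / 1000 / 3600   # milliseconds -> hours
days = hours / 24                       # hours -> days

# The system clock ticks 18 times per second, so the best achievable
# resolution is 1/18 of a second, expressed here in milliseconds.
resolution_ms = 1000 / 18

print(round(hours, 1))          # 596.5 hours, i.e. "approximately 597 hours"
print(round(days, 1))           # 24.9 days, "a little over 24 days"
print(round(resolution_ms, 1))  # 55.6 ms, "about 55 milliseconds"
```

This confirms the figures in the text: the maximum interval is roughly 596.5 hours (just under 25 days), and the effective resolution is about 55 milliseconds.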
The first timer example, listed in Example 16-5 (in VB.NET only; the C# version is very similar), is a simple demonstration of a label control being used as a clock. The text value of the label is updated every 10 seconds. The result is shown in Figure 16-7.

Figure 16-7. Timer demo

Example 16-5. Timer example in VB.NET (Timers.vb)

Option Strict On
imports System
imports System.Drawing
imports System.Windows.Forms

namespace ProgrammingWinApps
   public class Timers : inherits Form
      dim lblTime as Label
      dim strFormat as String

      public sub New( )
         Text = "Timer Demo"
         Size = new Size(300,100)
         strFormat = "dddd, MMMM d, yyyy h:mm:ss tt"

         lblTime = new Label( )
         lblTime.Parent = me
         lblTime.Size = new Size(CInt(ClientSize.Width * .8), 25)
         lblTime.Location = new Point(CInt(ClientSize.Width * .1), _
                                      CInt(ClientSize.Height * .4))
         lblTime.BorderStyle = BorderStyle.FixedSingle
         lblTime.Text = DateTime.Now.ToString(strFormat)
         lblTime.TextAlign = ContentAlignment.MiddleCenter

         dim t as new Timer( )
         t.Interval = 10000   ' 10 seconds
         t.Start( )
         AddHandler t.Tick, AddressOf t_Tick
      end sub   ' close for constructor

      public shared sub Main( )
         Application.Run(new Timers( ))
      end sub

      private sub t_Tick(ByVal sender as object, _
                         ByVal e as EventArgs)
         lblTime.Text = DateTime.Now.ToString(strFormat)
      end sub
   end class
end namespace

In this example, the Timer is instantiated in the constructor with an Interval property of 10,000 milliseconds, which is equivalent to 10 seconds. The Timer Start method is called so the timer will run as soon as the form is loaded. In the Tick event handler, t_Tick, the Text property of the label is updated to display the current time, DateTime.Now, using the ToString method. The format of the label is controlled by an argument to the ToString method, a formatting string instantiated back in the constructor. The next example is a countdown timer. It is similar to the previous example in that it displays a text string with the time, in this case updated every second.
It also provides a DateTimePicker control for the user to enter a time interval to count down. The countdown begins when the user clicks the Start button, with the remaining time displayed. When the specified time elapses, a message is displayed in a label control. The resulting application looks like Figure 16-8 during countdown.

Figure 16-8. Countdown timer

In addition to using a different technique for displaying updated text strings, this example also demonstrates the use of TimeSpan objects, described earlier in this chapter. The C# version of the CountDownTimer application is listed in Example 16-6, and the VB.NET version is listed in Example 16-7. As you will see in the analysis that follows the code listings, it was necessary to jump through some DateTime and TimeSpan hoops to get the times to display properly.

Example 16-6. CountDownTimer in C# (CountDownTimer.cs)

using System;
using System.Drawing;
using System.Windows.Forms;

namespace ProgrammingWinApps
{
   public class CountDownTimer : Form
   {
      DateTimePicker dtpTotalTime;
      Button btnStart;
      Button btnStop;
      bool boolStart = false;
      DateTime dtEndTime;
      Label lblTimesUp;
      Label lblTitle;

      public CountDownTimer( )
      {
         Text = "CountDown Timer";
         Size = new Size(500,400);
         FormBorderStyle = FormBorderStyle.FixedDialog;
         Font = new Font("Arial", 12);

         Timer t = new Timer( );
         t.Interval = 1000;
         t.Start( );
         t.Tick += new EventHandler(t_Tick);

         lblTitle = new Label( );
         lblTitle.Parent = this;
         lblTitle.Font = new Font("Arial Black", 24);
         lblTitle.Text = "CountDown Timer";
         lblTitle.TextAlign = ContentAlignment.MiddleCenter;
         lblTitle.Size = new Size((int)(lblTitle.Font.Height * .7) *
                                  lblTitle.Text.Length, 35);
         lblTitle.Location = new Point(ClientSize.Width / 2 -
                                       lblTitle.Width / 2, 25);

         Label lblTotalTime = new Label( );
         lblTotalTime.Parent = this;
         lblTotalTime.Text = "Total Time (h:m:s):";
         lblTotalTime.Size = new Size((int)(Font.Height * .5) *
                                      lblTotalTime.Text.Length, 25);
         lblTotalTime.Location = new Point(ClientSize.Width / 10, 100);

         dtpTotalTime = new DateTimePicker( );
         dtpTotalTime.Parent = this;
         // the following three lines were dropped in this excerpt and are
         // restored from the analysis that follows the listings
         dtpTotalTime.Format = DateTimePickerFormat.Custom;
         dtpTotalTime.CustomFormat = "H:mm:ss";
         dtpTotalTime.Value = DateTime.Parse("00:00:00");
         dtpTotalTime.Size = new Size((int)(Font.Height * .6) *
                                      dtpTotalTime.Value.ToString("t").Length,
                                      dtpTotalTime.PreferredHeight);
         dtpTotalTime.Location = new Point(lblTotalTime.Right, 100);

         btnStart = new Button( );
         btnStart.Parent = this;
         btnStart.Text = "Start";
         btnStart.Location = new Point(ClientSize.Width / 4, 300);
         btnStart.Click += new EventHandler(btnStart_Click);

         btnStop = new Button( );
         btnStop.Parent = this;
         btnStop.Text = "Stop";
         btnStop.Location = new Point(btnStart.Right + 10, 300);
         btnStop.Click += new EventHandler(btnStop_Click);

         lblTimesUp = new Label( );
         lblTimesUp.Parent = this;
         lblTimesUp.Size = new Size(200, 35);
         lblTimesUp.Location = new Point(btnStart.Left, btnStart.Top - 75);
         lblTimesUp.Text = "";
         lblTimesUp.Font = new Font("Times New Roman Bold", 20);
      }   // close for constructor

      static void Main( )
      {
         Application.Run(new CountDownTimer( ));
      }

      private void t_Tick(object sender, EventArgs e)
      {
         Invalidate( );
      }

      protected override void OnPaint(PaintEventArgs e)
      {
         base.OnPaint(e);
         // graphics code reconstructed from the analysis that follows;
         // the exact format string in the original listing may differ
         Graphics g = e.Graphics;
         Brush b = new SolidBrush(ForeColor);
         StringFormat fmt = new StringFormat( );
         fmt.Alignment = StringAlignment.Near;
         PointF pt = new PointF(ClientSize.Width / 10, 150);
         Font fnt = new Font("Arial", 12);
         string str = "Current Time: " + DateTime.Now.ToString("F") + "\n\n";
         if (boolStart)
         {
            TimeSpan ts = dtEndTime - DateTime.Now;
            str += "Remaining Time: " +
                   DateTime.Parse(ts.ToString( )).ToString("HH:mm:ss");
         }
         g.DrawString(str, fnt, b, pt, fmt);
         if (boolStart && (dtEndTime - DateTime.Now) <= TimeSpan.Zero)
         {
            TimesUp( );
         }
      }

      private void btnStart_Click(object sender, EventArgs e)
      {
         lblTimesUp.Text = "";
         boolStart = true;
         TimeSpan ts = new TimeSpan( );
         ts = TimeSpan.Parse(dtpTotalTime.Value.Hour.ToString( ) + ":" +
                             dtpTotalTime.Value.Minute.ToString( ) + ":" +
                             dtpTotalTime.Value.Second.ToString( ));
         dtEndTime = DateTime.Now + ts;
      }

      private void btnStop_Click(object sender, EventArgs e)
      {
         boolStart = false;
      }

      private void TimesUp( )
      {
         lblTimesUp.Text = "Times Up!";
         boolStart = false;
      }
   }   // close for form class
}   // close for namespace

Example 16-7.
CountDownTimer in VB.NET (CountDownTimer.vb)

Option Strict On
imports System
imports System.Drawing
imports System.Windows.Forms

namespace ProgrammingWinApps
   public class CountDownTimer : inherits Form
      dim dtpTotalTime as DateTimePicker
      dim btnStart as Button
      dim btnStop as Button
      dim boolStart as Boolean = false
      dim dtEndTime as DateTime
      dim lblTimesUp as Label
      dim lblTitle as Label

      public sub New( )
         Text = "CountDown Timer"
         Size = new Size(500,400)
         FormBorderStyle = FormBorderStyle.FixedDialog
         Font = new Font("Arial", 12)

         dim t as new Timer( )
         t.Interval = 1000
         t.Start( )
         AddHandler t.Tick, AddressOf t_Tick

         lblTitle = new Label( )
         lblTitle.Parent = me
         lblTitle.Font = new Font("Arial Black", 24)
         lblTitle.Text = "CountDown Timer"
         lblTitle.TextAlign = ContentAlignment.MiddleCenter
         lblTitle.Size = new Size(CInt(lblTitle.Font.Height * .7) * _
                                  lblTitle.Text.Length, 35)
         lblTitle.Location = new Point(CInt(ClientSize.Width / 2 - _
                                       lblTitle.Width / 2), 25)

         dim lblTotalTime as new Label( )
         lblTotalTime.Parent = me
         lblTotalTime.Text = "Total Time (h:m:s):"
         lblTotalTime.Size = new Size(CInt(Font.Height * .5) * _
                                      lblTotalTime.Text.Length, 25)
         lblTotalTime.Location = new Point( _
            CInt(ClientSize.Width / 10), 100)

         dtpTotalTime = new DateTimePicker( )
         dtpTotalTime.Parent = me
         ' the following three lines were dropped in this excerpt and are
         ' restored from the analysis that follows the listings
         dtpTotalTime.Format = DateTimePickerFormat.Custom
         dtpTotalTime.CustomFormat = "H:mm:ss"
         dtpTotalTime.Value = DateTime.Parse("00:00:00")
         dtpTotalTime.Size = new Size(CInt(Font.Height * .6) * _
                                      dtpTotalTime.Value.ToString("t").Length, _
                                      dtpTotalTime.PreferredHeight)
         dtpTotalTime.Location = new Point(lblTotalTime.Right, 100)

         btnStart = new Button( )
         btnStart.Parent = me
         btnStart.Text = "Start"
         btnStart.Location = new Point(CInt(ClientSize.Width / 4), 300)
         AddHandler btnStart.Click, AddressOf btnStart_Click

         btnStop = new Button( )
         btnStop.Parent = me
         btnStop.Text = "Stop"
         btnStop.Location = new Point(btnStart.Right + 10, 300)
         AddHandler btnStop.Click, AddressOf btnStop_Click

         lblTimesUp = new Label( )
         lblTimesUp.Parent = me
         lblTimesUp.Size = new Size(200, 35)
         lblTimesUp.Location = new Point(btnStart.Left, _
                                         btnStart.Top - 75)
         lblTimesUp.Text = ""
         lblTimesUp.Font = new Font("Times New Roman Bold", 20)
      end sub   ' close for constructor

      public shared sub Main( )
         Application.Run(new CountDownTimer( ))
      end sub

      private sub t_Tick(ByVal sender as object, _
                         ByVal e as EventArgs)
         Invalidate( )
      end sub

      protected overrides sub OnPaint(ByVal e as PaintEventArgs)
         myBase.OnPaint(e)
         dim g as Graphics = e.Graphics
         dim b as new SolidBrush(ForeColor)
         dim fmt as new StringFormat( )
         fmt.Alignment = StringAlignment.Near
         dim pt as new PointF(CInt(ClientSize.Width / 10), 150)
         dim fnt as new Font("Arial", 12)
         dim str as string = "Current Time: " + _
            DateTime.Now.ToString("F") + vbCrLf + vbCrLf
         ' the drawing code is reconstructed from the analysis that follows;
         ' the exact format string in the original listing may differ
         if boolStart then
            dim ts as TimeSpan = _
               DateTime.op_Subtraction(dtEndTime, DateTime.Now)
            str += "Remaining Time: " + _
               DateTime.Parse(ts.ToString( )).ToString("HH:mm:ss")
         end if
         g.DrawString(str, fnt, b, pt, fmt)
         if (boolStart and _
             (TimeSpan.op_LessThanOrEqual(DateTime.op_Subtraction( _
             dtEndTime, DateTime.Now), TimeSpan.Zero))) then
            TimesUp( )
         end if
      end sub

      private sub btnStart_Click(ByVal sender as object, _
                                 ByVal e as EventArgs)
         lblTimesUp.Text = ""
         boolStart = true
         dim ts as new TimeSpan( )
         ts = TimeSpan.Parse(dtpTotalTime.Value.Hour.ToString( ) + ":" + _
                             dtpTotalTime.Value.Minute.ToString( ) + ":" + _
                             dtpTotalTime.Value.Second.ToString( ))
         dtEndTime = DateTime.op_Addition(DateTime.Now, ts)
      end sub

      private sub btnStop_Click(ByVal sender as object, _
                                ByVal e as EventArgs)
         boolStart = false
      end sub

      private sub TimesUp( )
         lblTimesUp.Text = "Times Up!"
         boolStart = false
      end sub
   end class
end namespace

The FormBorderStyle is set in the constructor to FormBorderStyle.FixedDialog, which prevents the user from resizing the form. By doing so, there is no need to anchor any of the controls or otherwise worry about how the look of the form will be affected by user interaction. Also in the constructor, the default font for the form is set to 12-point Arial.

FormBorderStyle = FormBorderStyle.FixedDialog
Font = new Font("Arial", 12)

The Timer is declared and instantiated in the constructor, with the Interval property set to one second (1,000 milliseconds) and the Start method called to enable the timer.
An event handler is added for the Tick event:

Timer t = new Timer( );
t.Interval = 1000;
t.Start( );
t.Tick += new EventHandler(t_Tick);

dim t as new Timer( )
t.Interval = 1000
t.Start( )
AddHandler t.Tick, AddressOf t_Tick

The DateTimePicker control is used here as a convenient way for the user to enter the time to count down in hours, minutes, and seconds. This use requires that the value initially be set to "00:00:00" (which is cast from a string to a DateTime object using the static DateTime.Parse method) and displayed using a custom format, "H:mm:ss." As you recall from Table 16-9, the leading uppercase H in that format string specifies a one- or two-digit hour in a 24-hour format.

dtpTotalTime.Format = DateTimePickerFormat.Custom
dtpTotalTime.CustomFormat = "H:mm:ss"
dtpTotalTime.Value = DateTime.Parse("00:00:00")

At first, this custom formatting may seem unnecessary, since the DateTimePickerFormat.Time format should provide what you are looking for. However, if the Time format is used, the DateTimePicker control displays 12:00:00, rather than the desired 00:00:00. Notice that the horizontal component of the Size property of the DateTimePicker control is calculated using the Value property of the control in conjunction with the ToString method, taking a formatting argument to retrieve the number of characters. From Table 16-10, you saw that the "t" formatting string corresponds to a short time display, which is effectively how the custom format used by the control appears.
As with all the Size calculations based on the Font.Height property, the ".6" factor is arrived at empirically:

dtpTotalTime.Size = new Size((int)(Font.Height * .6) *
                             dtpTotalTime.Value.ToString("t").Length,
                             dtpTotalTime.PreferredHeight);

dtpTotalTime.Size = new Size(CInt(Font.Height * .6) * _
                             dtpTotalTime.Value.ToString("t").Length, _
                             dtpTotalTime.PreferredHeight)

The TimesUp label is positioned, sized, and given a nice bold 20-point font, but the Text property is initially set to an empty string. The Text property will be set appropriately as necessary, as you will see in a moment. Now turn your attention to the Start button. The Click event handler for the Start button first clears the TimesUp label.

lblTimesUp.Text = ""

Next it sets a flag, boolStart, to true. This flag was initialized to false as a class member variable.

boolStart = true

Now comes a tricky part. The value in the DateTimePicker control is a DateTime object. It must be converted to a TimeSpan object so that the ending time, dtEndTime, which is a DateTime object, can be calculated. This is necessary because the DateTime Addition operator (and the DateTime Add method, as well) can only add a TimeSpan to a DateTime, not add together two DateTimes. The conversion of the DateTimePicker value to a TimeSpan is accomplished by using the static TimeSpan.Parse method, which takes a string argument. That string argument is built up by calling the ToString method against the Hour, Minute, and Second components of the DateTimePicker control's Value property.
Then the ending time can be calculated by adding the resulting TimeSpan object to the current time:

TimeSpan ts = new TimeSpan( );
ts = TimeSpan.Parse(dtpTotalTime.Value.Hour.ToString( ) + ":" +
                    dtpTotalTime.Value.Minute.ToString( ) + ":" +
                    dtpTotalTime.Value.Second.ToString( ));
dtEndTime = DateTime.Now + ts;

dim ts as new TimeSpan( )
ts = TimeSpan.Parse(dtpTotalTime.Value.Hour.ToString( ) + ":" + _
                    dtpTotalTime.Value.Minute.ToString( ) + ":" + _
                    dtpTotalTime.Value.Second.ToString( ))
dtEndTime = DateTime.op_Addition(DateTime.Now, ts)

Notice that the C# version allows the use of the + DateTime operator, and the VB.NET version does not. Instead, it uses the shared DateTime method DateTime.op_Addition. Now that you have the boolStart flag and the ending time, dtEndTime, you can handle the Tick event. The Tick event handler consists of a single line of code, which invalidates the form and causes the OnPaint method to be invoked. This OnPaint method has been overridden, so it draws the text strings containing the current time of day and the remaining time being counted down. The overridden OnPaint method first chains up to the base class:

base.OnPaint(e);

myBase.OnPaint(e)

Next it declares and instantiates several objects, which will be used shortly in the Graphics DrawString method:

// C# version (reconstructed from the VB.NET version below)
Graphics g = e.Graphics;
Brush b = new SolidBrush(ForeColor);
StringFormat fmt = new StringFormat( );
fmt.Alignment = StringAlignment.Near;
PointF pt = new PointF(ClientSize.Width / 10, 150);
Font fnt = new Font("Arial", 12);
string str = "Current Time: " + DateTime.Now.ToString("F") + "\n\n";

dim g as Graphics = e.Graphics
dim b as new SolidBrush(ForeColor)
dim fmt as new StringFormat( )
fmt.Alignment = StringAlignment.Near
dim pt as new PointF(CInt(ClientSize.Width / 10), 150)
dim fnt as new Font("Arial", 12)
dim str as string = "Current Time: " + _
   DateTime.Now.ToString("F") + vbCrLf + vbCrLf

The specified string will be drawn with every timer tick, i.e., every second. The contents of the next string, however, depend on whether the application is counting down. For this, it tests the boolStart flag, which was set in the Start button Click event handler.
If the boolStart flag is true, then another string is built up by subtracting the current time, DateTime.Now, from the ending time, dtEndTime, which was calculated in the Start Button event handler. This process is surprisingly tricky. You might think you could use the following code to display the remaining time, where the TimeSpan is calculated and then displayed using the TimeSpan ToString method.

TimeSpan ts = new TimeSpan( );
ts = dtEndTime - DateTime.Now;
str += "Remaining Time: " + ts.ToString( );

This works, but it displays the time with hours, minutes, and fractional seconds, as in 01:01:01.1234567, with the seconds displaying seven decimal digits. You should, however, display only hours, minutes, and whole seconds, as in 01:01:01. No problem, you think: I'll just add a formatting argument to the ToString method. However, this causes a compiler error. The DateTime.ToString method accepts a formatting argument, but the TimeSpan.ToString does not. So you need to convert the TimeSpan to a DateTime, using the static DateTime.Parse method. This method takes a string argument, so you give it the TimeSpan object converted to a string with ToString. Then the string for display can be built up using the DateTime ToString, which accepts the formatting argument. The complete code section for testing the boolStart flag, constructing the line that displays the remaining time, and drawing the two lines of text, is reproduced here:

// reconstructed from the description above; the exact format string
// in the original listing may differ
if (boolStart)
{
   TimeSpan ts = dtEndTime - DateTime.Now;
   str += "Remaining Time: " +
          DateTime.Parse(ts.ToString( )).ToString("HH:mm:ss");
}
g.DrawString(str, fnt, b, pt, fmt);

The final piece of the OnPaint method tests to see if time has expired. If so, it calls the TimesUp helper method:

if (boolStart && (dtEndTime - DateTime.Now) <= TimeSpan.Zero)
{
   TimesUp( );
}

if (boolStart and _
    (TimeSpan.op_LessThanOrEqual(DateTime.op_Subtraction( _
    dtEndTime, DateTime.Now), TimeSpan.Zero))) then
   TimesUp( )
end if

Again, as with the TimeSpan and DateTime operators used previously, the C# version uses the <= operator, while the VB.NET version must use the shared TimeSpan.op_LessThanOrEqual method.
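For comparison, the same end-time bookkeeping is straightforward in Python's datetime module, where timedelta plays the role of TimeSpan and supports subtraction and comparison directly. This is a side-by-side illustration, not part of the book's code, and the interval value is made up:

```python
from datetime import datetime, timedelta

# Stand-in for the value read from the DateTimePicker (hypothetical: 1m30s)
ts = timedelta(minutes=1, seconds=30)

end_time = datetime.now() + ts           # like dtEndTime = DateTime.Now + ts
remaining = end_time - datetime.now()    # DateTime - DateTime yields a timedelta
expired = remaining <= timedelta(0)      # like (dtEndTime - Now) <= TimeSpan.Zero

print(expired)  # False: the 90 seconds have not elapsed yet
```

Because subtraction and comparison are defined on the types themselves, no helper methods like op_Subtraction or op_LessThanOrEqual are needed.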
The TimesUp method is simple: it sets the Text property of the lblTimesUp label and resets the boolStart flag to false. The Stop button Click event handler is also simple: it just sets the boolStart flag to false. The next time the Tick event fires and the OnPaint method is called, the form will correctly display with the countdown stopped.
https://flylib.com/books/en/2.654.1/timer_component.html
Introduction: The sample() method is defined in the random module of Python. It is used to pick a list of unique random elements of a given length from a sequence or set. We provide the length of the final list, and it picks that many random elements from the given sequence or set. Note that the elements of the provided list need not be unique; if it contains duplicate elements, the final result may contain duplicates as well. In this post, we will learn how the sample method works with different examples.

Syntax of sample: This method is defined as below:

random.sample(data, k)

Here, data is the given sequence or set and k is the length of the list that we want to pick from data. It raises a ValueError if k is larger than the size of data.

Find a random sample from a range of integers: Let's start with a simple example. To pick a random list from a range of integers, we can use range. The advantage of using range is that it is more memory efficient than a list or tuple.

import random

print(random.sample(range(1000), 5))

If you run the above program, it will print five random numbers each time you execute it. It returns one list. You will get output like below:

[804, 188, 683, 921, 626]
[557, 589, 85, 851, 278]
[371, 443, 803, 753, 263]

Note that we need to import the random module to use this method.

Errors: random.sample() throwing a ValueError: This is a common problem with sample. If the value of k is larger than the total size of data, or if its value is negative, it will throw a ValueError.
For example:

import random

print(random.sample(range(3), 5))

If you run this program, it will give a ValueError like below:

Traceback (most recent call last):
  File "example.py", line 3, in <module>
    print(random.sample(range(3), 5))
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/random.py", line 321, in sample
    raise ValueError("Sample larger than population or is negative")
ValueError: Sample larger than population or is negative

TypeError: sample throws a TypeError if either of its parameters is missing. It will say that 1 required positional argument is missing. For example:

import random

print(random.sample(range(3)))

This program will throw a TypeError:

Traceback (most recent call last):
  File "example.py", line 3, in <module>
    print(random.sample(range(3)))
TypeError: sample() missing 1 required positional argument: 'k'

Example to get one random list from a Python dictionary using sample: We have learned how to use sample() with one simple example. Let's try it now with a dictionary. The problem is that you can't use a dictionary directly with sample(), because it works only with a sequence or set, and a dictionary is not a sequence. Instead, we can use the dictionary.items() method, which returns a view of all dictionary key-value pairs. It doesn't take any parameters. Let's take a look at the example below:

import random

data = {
    'one': 1,
    'two': 2,
    'three': 3,
    'four': 4,
    'five': 5,
    'six': 6,
    'seven': 7,
    'eight': 8,
    'nine': 9,
    'ten': 10
}

print(random.sample(data.items(), 3))

It will print one output with three items of data, like below:

[('seven', 7), ('ten', 10), ('three', 3)]

How to use python random.sample() with a tuple: We can use sample() directly with a tuple. For example:

import random

dataTuple = (1, 2, 3, 'four', 'five', 6)

print(random.sample(dataTuple, 2))

It works like the other examples. Each time you run this program, it will print two values from the tuple.
['five', 3]
[1, 3]

Using python random.sample() with a set: As I have mentioned before, sample() works with a python set as well. It works in a similar manner:

import random

dataSet = {1, 2, 3, 4, 5}

print(random.sample(dataSet, 2))

It will print similar output.

Conclusion: Python random.sample() is a useful method to get a list of random data from a list or set. It helps us to get a random data list easily. In this post, we have seen how to use sample() with different data types. If you want only a single random item, you can use the random.choice() method.
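Tying the last points together, here is a short sketch contrasting sample() with choice() and showing how seeding the generator makes the selection reproducible. The variable names are illustrative:

```python
import random

data = [1, 2, 3, 4, 5]

# random.choice picks exactly one item; sample(data, 1) returns a one-item list
item = random.choice(data)

# with k equal to the full length, sample returns a random permutation
perm = random.sample(data, len(data))

# seeding the generator makes sample reproducible between runs
random.seed(42)
first = random.sample(data, 3)
random.seed(42)
second = random.sample(data, 3)

print(item in data)     # True
print(sorted(perm))     # [1, 2, 3, 4, 5]
print(first == second)  # True
```

Seeding is handy in tests, where you want the "random" selection to be the same on every run.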
https://www.codevscolor.com/python-random-sample-example
I don't like to **bump** but when I did a google-search the first link listed was to my original post.

import win32api, win32print
# or ..
import pywin32

How can I get this to work, as I currently receive the error "no module named win32api"? Please.

Andy

I'd also like an answer to this question, if anyone could be so helpful. I'm toying with the idea of just byte compiling the win32api package and pasting it into the Python26.zip file located at C:\Program Files\Sublime Text 2.. I'm just not keen on the idea of having to install the entire VC++ 2008 toolchain.. (to compile the win32api package to work with the embedded python in the first place)

Alex

I have just the same issue here, and it's really weird, because I thought win32api would be there de facto, but it clearly seems not.
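A general-purpose aside, not from the original thread: before copying packages into the editor's bundled interpreter, it can help to confirm what that interpreter can actually see, since an embedded Python searches only its own sys.path and ignores packages installed for the system Python:

```python
import importlib.util
import sys

# An embedded interpreter searches only the directories on its own sys.path,
# so a package installed for the system Python can still be invisible here.
print(isinstance(sys.path, list))  # True

# Probe for a module without triggering "no module named win32api"
have_win32api = importlib.util.find_spec("win32api") is not None
print(type(have_win32api).__name__)  # bool
```

Running this from the editor's console (rather than a system shell) shows whether win32api is reachable from the embedded interpreter at all.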
https://forum.sublimetext.com/t/connect-to-win32api/5529/3
Gather Emails on Your Next.js Site with StaticKit and ZEIT Now

Use StaticKit and Next.js to collect emails from a landing page deployed with ZEIT Now. StaticKit is a collection of dynamic components for static sites, enabling developers to build dynamic interfaces with ease. In this guide, you will discover how to create a simple landing page to gather email addresses using StaticKit and Next.js.

Step 1: Set Up Your Next.js Project

Run the following command to create and enter a Next.js project:

npm init next-app next-landing-page && cd next-landing-page

Bootstrapping a Next.js project with create-next-app and moving into the /next-landing-page directory.

Replace the contents of the /pages/index.js file with the following code.

import React from 'react'
import Head from 'next/head'

const OptInForm = () => {
  return (
    <form>
      <p className="pb-3 font-bold text-gray-800 text-lg">
        Sign up to be notified when we launch.
      </p>
      <div className="flex flex-wrap items-center">
        <label htmlFor="email" className="hidden">
          Email Address
        </label>
        <input
          id="email"
          type="email"
          name="email"
          className="flex-grow mr-3 mb-3 p-3 rounded-lg bg-gray-200 text-gray-700 text-lg border border-gray-200 focus:outline-none focus:border-gray-500 focus:bg-white"
          placeholder="Your email address"
          required
        />
        <button
          type="submit"
          className="mb-3 px-5 py-3 rounded-lg border border-purple-700 bg-purple-700 text-lg font-bold text-white"
        >
          Notify me
        </button>
      </div>
    </form>
  )
}

const Home = () => (
  <div>
    <Head>
      <title>Vaporware</title>
      <link href="^1.0/dist/tailwind.min.css" rel="stylesheet" />
    </Head>
    <div className="mx-auto container px-8 py-16 sm:py-32 antialiased">
      <div className="max-w-lg mx-auto">
        <div className="flex flex-wrap items-center pb-4 text-5xl font-bold text-gray-800">
          <h1 className="mr-3">Vaporware</h1>
          <div className="mt-2 px-3 py-1 text-sm font-bold bg-orange-300 text-orange-800 rounded-full">
            Coming Soon
          </div>
        </div>
        <p className="pb-6 text-gray-700 text-lg">
          Vaporware is a
          fictitious app that does not yet exist. This is where you’d make a
          compelling pitch for your new product.
        </p>
        <OptInForm />
      </div>
    </div>
  </div>
)

export default Home

An example /pages/index.js file with two components.

You may have noticed that the <form> tag is missing an onSubmit property; we will address that next.

Step 2: Creating the StaticKit Form

From your StaticKit Dashboard, click Add a site... in the top navigation bar. Enter your site name and click the Add site button. You will be returned to the StaticKit Dashboard where the site you just added will now be visible. Click on Click here to create a form. Click the StaticKit logo to return to the dashboard; you will find your site now has a form with an ID. Make a note of this ID for later use. To provide your app with access to StaticKit components, run this command from the root of your project directory:

npm install @statickit/react

Adding the @statickit/react dependency to the project.

Next, import the useForm hook and bind the form onSubmit to StaticKit in your /pages/index.js file. Be sure to replace [YOUR FORM ID] with your actual form ID from StaticKit you received earlier.

import React from "react";
import Head from "next/head";
+ import { useForm } from "@statickit/react";

const OptInForm = () => {
+   const [state, submit] = useForm("[YOUR FORM ID]");
+   if (state.succeeded) {
+     return (
+       <p className="pb-3 font-bold text-gray-800 text-lg">
+         Thank you for signing up!
+       </p>
+     );
+   }
  return (
-   <form>
+   <form onSubmit={submit}>
      <p className="pb-3 font-bold text-gray-800 text-lg">
        Sign up to be notified when we launch.
      </p>

Updating the /pages/index.js file with an onSubmit hook.

You now have a working landing page; the last step is to deploy it with ZEIT Now.
When your app has deployed, it will look like the example below: The StaticKit + Next.js landing page created with this guide. If you want to deploy your StaticKit + Next.js app from a Git repository, you can use either Now for GitHub or Now for GitLab to have your project automatically deployed on every push, and aliased on push to master. For more information, you can find the source code for this example on GitHub along with the live example.
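As an aside on Step 2: the useForm hook tracks submission state for you (the state.succeeded flag checked in the component, plus submitting and errors fields). The following is a rough plain-JavaScript sketch of those state transitions, for illustration only — it is not StaticKit's actual @statickit/react implementation:

```javascript
// Hypothetical sketch of the submission states a StaticKit form moves through.
// Field names mirror the state.succeeded check used in the component above;
// this is an illustration, not the real @statickit/react internals.
const initialState = { submitting: false, succeeded: false, errors: [] };

function formReducer(state, action) {
  switch (action.type) {
    case 'SUBMIT':
      // The user pressed "Notify me"; the request is in flight.
      return { ...state, submitting: true };
    case 'SUCCESS':
      // StaticKit accepted the email address.
      return { submitting: false, succeeded: true, errors: [] };
    case 'ERROR':
      // Validation or network failure; keep the form visible.
      return { submitting: false, succeeded: false, errors: action.errors };
    default:
      return state;
  }
}
```

When succeeded flips to true, the component swaps the form for the thank-you message, which is exactly what the early return added in the Step 2 diff does.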
Building and Testing "Critical Mass" In our last installment, we started building support into the Texas Hold ’Em application for the actual game. We got to the point of proving that we could deal hole cards to players. In an attempt to get a bit of critical mass built into the application, I coded about an hour’s worth of test and code since our last installment. I’m going to expect you to be willing to go forward with this code, but we’ll want to make sure that you have a good understanding of it first. The bulk of changes I made were driven by tests in GameTest. The complete source for GameTest is shown in Listing 1. Listing 1 GameTest. package domain; import junit.framework.*; public class GameTest extends TestCase { private static final int BURN = 1; private Game game; private Deck deck; private Player player1; private Player player2; private Player player3; private static final int STAKE = 1000; private static final int SMALL = 10; private static final int BIG = 20; protected void setUp() { game = new Game(); game.setBlinds(SMALL, BIG); deck = game.deck(); player1 = new Player("a"); player1.add(STAKE); player2 = new Player("b"); player2.add(STAKE); player3 = new Player("c"); player3.add(STAKE); } public void testCreate() { assertPlayers(); } public void testAddSinglePlayer() { final String name = "Jeff"; game.add(new Player(name)); assertPlayers(name); } public void testAddMaximumNumberOfPlayers() { for (int i = 0; i < Game.CAPACITY; i++) game.add(new Player("" + i)); assertPlayers("0", "1", "2", "3", "4", "5", "6", "7", "8", "9"); } public void testDealCompleteHand() { addTwoPlayers(); game.setButton(2); game.startHand(); Card[] hole = deck.top(4); game.dealHoleCards(); assertHoleCards(player1, hole, 0, 2); assertHoleCards(player2, hole, 1, 3); int remaining = Deck.SIZE - hole.length; assertEquals(remaining, deck.cardsRemaining()); Card[] flop = deck.top(BURN + 3); game.dealFlop(); remaining -= flop.length; assertCardsDealt(remaining, flop);
CardTest.assertCards( game.community(), flop[1], flop[2], flop[3]); Card[] turn = deck.top(BURN + 1); game.dealTurn(); remaining -= turn.length; assertCardsDealt(remaining, turn); CardTest.assertCards( game.community(), flop[1], flop[2], flop[3], turn[1]); Card[] river = deck.top(BURN + 1); game.dealRiver(); remaining -= river.length; assertCardsDealt(remaining, river); CardTest.assertCards(game.community(), flop[1], flop[2], flop[3], turn[1], river[1]); } public void testDealOrderStartsFromButton() { addTwoPlayers(); game.setButton(1); game.startHand(); Card[] hole = deck.top(4); dealAllCardsInHand(); assertHoleCards(player1, hole, 1, 3); assertHoleCards(player2, hole, 0, 2); game.stopHand(); game.startHand(); hole = deck.top(4); dealAllCardsInHand(); assertHoleCards(player1, hole, 0, 2); assertHoleCards(player2, hole, 1, 3); } public void testBlinds() { addThreePlayers(); game.setButton(3); game.startHand(); assertEquals(STAKE - SMALL, player1.chipCount()); assertEquals(STAKE - BIG, player2.chipCount()); assertEquals(STAKE, player3.chipCount()); } public void testHandFlow() { addThreePlayers(); game.setButton(3); game.startHand(); dealAllCardsInHand(); game.stopHand(); assertEquals(1, game.buttonPosition()); assertNoCardsOut(); game.startHand(); dealAllCardsInHand(); game.stopHand(); assertEquals(2, game.buttonPosition()); assertNoCardsOut(); game.startHand(); dealAllCardsInHand(); game.stopHand(); assertEquals(3, game.buttonPosition()); assertNoCardsOut(); fail("need to ensure blinds are extracted properly"); } // missing tests: // - use a new deck each time! 
private void assertNoCardsOut() { for (Player player: game.players()) assertTrue(player.holeCards().isEmpty()); assertTrue(game.community().isEmpty()); } private void dealAllCardsInHand() { game.dealHoleCards(); game.dealFlop(); game.dealTurn(); game.dealRiver(); } private void addThreePlayers() { game.add(player1); game.add(player2); game.add(player3); } private void addTwoPlayers() { game.add(player1); game.add(player2); } private void assertCardsDealt(int remaining, Card[] turn) { assertDeckCount(remaining); DeckTest.assertCardsDealt(deck, turn); } private void assertHoleCards( Player player, Card[] hole, int... indices) { Card[] cards = new Card[indices.length]; for (int i = 0; i < indices.length; i++) cards[i] = hole[indices[i]]; DeckTest.assertCardsDealt(deck, cards); CardTest.assertCards(player.holeCards(), cards); } private void assertDeckCount(int expected) { assertEquals(expected, deck.cardsRemaining()); } private void assertPlayers(String... expected) { assertEquals(expected.length, game.players().size()); int i = 0; for (Player player : game.players()) assertEquals(expected[i++], player.getName()); } } Let’s step through each of the tests and see what we have. - testCreate, testAddSinglePlayer, testAddMaximumNumberOfPlayers: These three tests remain unchanged from what we built in the last installment. - testDealCompleteHand: This test grew out of testDealHoleCards, which we started in the last installment. The idea of this test is to demonstrate that cards are dealt properly to all players within a single Texas Hold ’Em hand. Paraphrased, the test says the following: - Add two players to the game. - Set the button to the second (last) player. This means that dealing starts from the first player. - Start the hand. This implies that a new deck is ready and shuffled. - Peek at the top four cards from the deck, so that we have a way of verifying the actual hole cards that get dealt. 
A Texas Hold ’Em hand starts with two cards dealt to each player in turn, starting with the player to the left of the button. The ability to peek at cards is a testing need that required some changes to the Deck class. We’ll review these changes shortly. - Deal the hole cards. We call assertHoleCards twice, to verify that player 1 received the first (0th, using Java’s zero-based indexing) and third cards dealt and player 2 received the second and fourth cards dealt. We also verify that 48 cards remain in the deck. - Peek at the top four cards, representing a burn plus the flop. The flop is three community cards—they are dealt face-up at the center of the table. Prior to dealing the flop, the dealer must "burn" (discard) a card, per Texas Hold ’Em dealing convention. - Deal the flop. Similar to the way we verified the hole cards, we compare the community cards against the cards we peeked. We also verify the number of cards remaining in the deck. - Peek at the next two cards, representing a burn and the "turn." The turn is the fourth community card dealt. We compare the turn to the result of calling dealTurn against the Game object. We also verify the number of cards remaining in the deck. - Peek at the next two cards, representing a burn and the "river." The river is the fifth and final community card dealt. We compare the river to the result of calling dealRiver against the Game object. We also verify the number of cards remaining in the deck. The DeckTest method assertCardsDealt originally started in GameTest. It makes more sense on the DeckTest class, since it deals with a deck and cards, but knows nothing about a game. Here’s what it looks like: public static void assertCardsDealt(Deck deck, Card... cards) { for (Card card: cards) assertFalse(deck.contains(card.getRank(), card.getSuit())); } The method assertCards in CardTest originally came from PlayerTest. I changed assertCards to be a static method so that other tests could use it. 
Here’s what it looks like now: public static void assertCards(List<Card> cards, Card... expected) { assertEquals(expected.length, cards.size()); int i = 0; for (Card card: expected) assertEquals(card, cards.get(i++)); } Test code in GameTest needs the capability to look at cards in the deck without dealing them. This means that our Deck class needed to change. Listing 2 shows a couple of tests in DeckTest that drove out support for peeking. Listing 2 Testing the ability to peek in DeckTest. // seeing the top card is very valuable in testing public void testTop() { Card top = deck1.top(); assertEquals(Deck.SIZE, deck1.cardsRemaining()); Card card = deck1.deal(); assertEquals(top, card); } // seeing the top N cards is very valuable in testing public void testTopN() { Card[] top = deck1.top(3); assertEquals(Deck.SIZE, deck1.cardsRemaining()); assertEquals(3, top.length); assertEquals(top[0], deck1.deal()); assertEquals(top[1], deck1.deal()); assertEquals(top[2], deck1.deal()); } These two "top" tests resulted in the production methods in Deck shown in Listing 3. Listing 3 Peek code in Deck. // primarily used for testing Card top() { return cards.get(0); } // primarily used for testing public Card[] top(int count) { Card[] results = new Card[count]; for (int i = 0; i < count; i++) results[i] = cards.get(i); return results; } Let’s step through each of the tests. - testHandFlow: In Texas Hold ’Em, one hand is almost never the entire game. It results in one player winning the pot, to which that player and others contributed during the course of the hand. Once the pot is won, a new hand begins. The purpose of testHandFlow is to demonstrate the game flow from hand to hand. We show that the button moves upon completion of each hand. Also, at the end of a hand, we show that no cards should be outstanding—no players should hold any cards, and the community should contain no cards. Note the fail method call at the very end of the test. 
We’ll discuss why this call exists later in this installment. - testDealOrderStartsFromButton: This test verifies that the player to the left of the button receives the first hole card. It does so by dealing two hands, and verifying that the deal moves appropriately with each hand. - testBlinds: In order to promote more betting action in each hand, Texas Hold ’Em requires that blinds be posted by the two players to the left of the button. Blinds are preset chip amounts. The player to the left of the button is known as the small blind; the second player in line is the big blind. Usually, but not always, the small blind is half of the big blind. Our setUp method sets the blinds: game.setBlinds(SMALL, BIG); The code in testBlinds starts a hand by calling startHand. It then verifies that each player’s chip count was appropriately decremented (or not). The code in this test required changes to the Player class to manage chips. We’ll review these changes later in this installment. The production Game code appears in Listing 4. Listing 4 Production Game code. 
package domain; import java.util.*; public class Game { public static final int CAPACITY = 10; private List<Player> players = new ArrayList<Player>(); private List<Card> community = new ArrayList<Card>(); private Deck deck = new Deck(); private int button = 1; private int smallAmount; private int bigAmount; public List<Player> players() { return players; } public void dealHoleCards() { for (int round = 0; round < 2; round++) { int dealer = button + 1; for (int position = dealer; position < dealer + players.size(); position++) { Player player = getPlayer(position); player.dealToHole(deck.deal()); } } } public void add(Player player) { players.add(player); } // needed for testing Deck deck() { return deck; } public void dealFlop() { burn(); for (int i = 0; i < 3; i++) community.add(deck.deal()); } private void burn() { deck.deal(); } public List<Card> community() { return community; } public void dealTurn() { burnAndTurn(); } public void dealRiver() { burnAndTurn(); } private void burnAndTurn() { burn(); community.add(deck.deal()); } public void setButton(int i) { button = i; } public void setBlinds(int small, int big) { this.smallAmount = small; this.bigAmount = big; } public void startHand() { collectBlinds(); } private void collectBlinds() { Player small = getPlayer(button + 1); Player big = getPlayer(button + 2); small.bet(smallAmount); big.bet(bigAmount); } public int buttonPosition() { return button; } public void stopHand() { removeAllCards(); advanceButton(); } private void removeAllCards() { for (Player player: players()) player.removeCards(); community.clear(); } private void advanceButton() { button++; if (button > players.size()) button = 1; } private Player getPlayer(int position) { int index = position - 1; if (position > players.size()) index -= players.size(); return players.get(index); } } The Game class is starting to do way too much. It’s managing both the flow of the game from hand to hand as well as the hand itself. 
That description suggests violation of the single-responsibility principle, a good class-design guideline that says classes should have one reason to change. We’ll do something about this design concern in an upcoming installment. The methods advanceButton and getPlayer have some duplicate concepts. One significant key to keeping your system clean through refactoring is to recognize duplication where it may not be obvious. Here, both methods have logic that deals with finding the next position in the ring of players. Refactoring them resulted in the slightly cleaner code shown in Listing 5. I think the dealHoleCards method is now much easier to follow. Listing 5 Refactored Game code. public void dealHoleCards() { for (int round = 0; round < 2; round++) { for (int i = 1; i <= players.size(); i++) { Player player = getPlayer(button + i); player.dealToHole(deck.deal()); } } } private void advanceButton() { button = ringPosition(button + 1); } private int ringPosition(int position) { if (position > players.size()) return position - players.size(); return position; } private Player getPlayer(int position) { return players.get(ringPosition(position) - 1); } The changes to Player were minor. In addition to the changes needed to manage the player's bankroll (chips), we need the ability to remove cards from each Player: public void testRemoveCards() { player.dealToHole(CardTest.CARD1); player.dealToHole(CardTest.CARD2); player.removeCards(); assertTrue(player.holeCards().isEmpty()); } The implementation for Player.removeCards is trivial. (Remember that the code for each installment of this series is always available for download.) A couple of tests in PlayerTest show how we manage a player's chips (see Listing 6). The production code resulting from those two tests is shown in Listing 7. Listing 6 PlayerTest.
public void testBankroll() { assertEquals(0, player.chipCount()); player.add(1000); assertEquals(1000, player.chipCount()); player.bet(200); assertEquals(800, player.chipCount()); } public void testInsufficientFunds() { try { player.bet(1); fail("expected insuff. funds exception"); } catch (InsufficientFundsException expected) { assertEquals(0, player.chipCount()); } } Listing 7 Player. public class Player { ... private int chips = 0; ... public int chipCount() { return chips; } public void add(int amount) { chips += amount; } public void bet(int amount) { if (amount > chips) throw new InsufficientFundsException(); chips -= amount; } } InsufficientFundsException is simply a RuntimeException subclass. You might want to look further through the rest of the code. I made some minor refactorings for clarity and organizational reasons.
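Pulling the bankroll pieces from Listings 6 and 7 together, here is a self-contained sketch of the chip management code, with InsufficientFundsException as the bare RuntimeException subclass described above:

```java
// Chip management from Listings 6 and 7, gathered into one runnable sketch.
// InsufficientFundsException is simply an unchecked exception, as noted above.
class InsufficientFundsException extends RuntimeException {
}

class Player {
    private int chips = 0;

    // Current size of the player's bankroll.
    public int chipCount() {
        return chips;
    }

    // Add winnings (or the initial stake) to the bankroll.
    public void add(int amount) {
        chips += amount;
    }

    // Deduct a bet; refuse it outright if the player cannot cover it.
    public void bet(int amount) {
        if (amount > chips)
            throw new InsufficientFundsException();
        chips -= amount;
    }
}
```

Because the exception is unchecked, Game.collectBlinds can call bet without declaring or handling it; a player who cannot post a blind surfaces as a runtime failure, which testInsufficientFunds in Listing 6 verifies.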
ReSharper has always wanted to speed up your code editing, refactoring and navigation. ReSharper 10 now wants to speed up your build. ReSharper Build is a new feature in ReSharper 10 that will reduce the time it takes to build your solution. It replaces Visual Studio’s build management with a system that applies heuristics to only build projects that need updating. Note that it doesn’t replace MSBuild — your projects are still built normally. They’re just not built as often. In this post, we’re going to take a deep look at how ReSharper Build works, and how you can make use of it in your own solution. The benefits of a faster build are obvious — we all want to speed up the development feedback cycle. No one wants to be interrupted when they’re in the flow, and a slow compile can be a real productivity killer. The quicker we can get a solution rebuilt, the sooner we can get feedback from our tests, and the sooner we can move on to new code, or fix up tests. And if this happens quickly enough, we stay in the zone, and remain productive. Furthermore, ReSharper Build is also an important part of another new feature — Continuous Testing. This is a feature of dotCover and ReSharper Ultimate which intelligently runs only the subset of tests that cover the code that you’ve just changed. In order to be able to do this, we need to be able to build the changed code as quickly and efficiently as possible. We’ll take a closer look at Continuous Testing in a future blog post. Some of you power users might recognize ReSharper Build’s heritage — it’s the latest evolution of a (somewhat) hidden/internal feature that we’ve been using on the development team for several years. When the uber-solution to everything that makes up ReSharper Ultimate is 600 projects, and the everyday solution for feature development can be as large as 300 projects, you need something to speed the build up! How does ReSharper Build work? 
Put simply, ReSharper Build manages the build process, and decides if each individual project needs to be built, or not. When building a solution, if a project doesn’t need to be rebuilt, it is intelligently skipped — what’s faster than doing nothing? There are three main optimisations: - All build management happens out of process. Visual Studio has been moving more of the build pipeline out of process in recent versions, but even in Visual Studio 2015, the core build management is still hosted in-process. ReSharper Build is hosted out of process, meaning that Visual Studio remains snappy and responsive while a build is running. - Efficient timestamp monitoring. MSBuild supports building incrementally, by comparing the timestamps of inputs and outputs of a target — if they are up to date, the target isn’t run. But an MSBuild process must still be run so that it can check the inputs and outputs of each target and check all of the timestamps — this gives us an overhead to do nothing! ReSharper Build maintains a dependency graph of inputs and outputs to tasks, targets and projects, and efficiently monitors the file system for changes. If a build is requested, ReSharper Build already knows if the timestamps are up to date, without having to invoke MSBuild. If it’s up to date, don’t build it. This optimisation alone can greatly improve the time your solution takes to build — but we can go one better. - Public API surface monitoring. Traditionally, if a project is edited and rebuilt, Visual Studio and MSBuild will also rebuild all other projects that reference it. If we’re changing the public API of an assembly, then this is a good thing — referencing projects need to know if we change the name of a method, add parameters or remove an interface. But if we’re only changing the internal business logic of a class, we’re rebuilding the referencing projects unnecessarily — the generated code in the referencing assemblies won’t change.
When a project is built, ReSharper will scan the just compiled output assembly. If its public API hasn’t changed, then ReSharper Build knows that it doesn’t need to build any of the referencing projects, and they are intelligently skipped. This has a huge impact on larger solutions. If a change is made to the business logic of a root assembly, then traditionally, that would require the rest of the solution to be rebuilt. ReSharper Build will only rebuild the root assembly, and skip the rest. How do you use it? It’s very easy. Once enabled, ReSharper Build replaces and overrides the standard Visual Studio build process, and is invoked whenever a build is required — building, rebuilding or cleaning the solution, running or debugging the project, or running unit tests. ReSharper Build is disabled by default, meaning you have to opt-in to the new build management (there are some edge cases that can cause issues, particularly custom build steps, so we’ve decided that you need to make an explicit step to opt-in. See below for more details). You can enable ReSharper Build by first displaying the ReSharper Build & Run tool window, from ReSharper → Windows → Build & Run. If ReSharper Build is not enabled, the tool window will show you that Visual Studio’s build process is being used, and provide a link to “Enable ReSharper Build”. You can also enable ReSharper Build and change the settings through ReSharper → Options → Build, and if you use the Save To button, you can enable/disable on a per-solution basis. Once enabled, the ReSharper Build & Run tool window will show all of the projects in the solution, as coloured boxes. The colour of each box represents the state of the project, such as not requiring build, excluded from this build, skipped, built with errors, or successfully built. There are several more states available, and hovering the mouse over a project will display a tooltip explaining the state. 
Once a build completes, any output from the build (errors, warnings, info and messages) is displayed in the ReSharper Build Results tool window, from which you can double click to navigate to the warnings and errors. We are still working on the UI during the EAP, so expect this window to change, as we will be adding support for keyboard shortcuts, and also grouping results. Update for RTM: The Build Results tool window was removed from the final RTM release of ReSharper 10.0. We didn’t get the design and feature set finalised and working well enough to be included in this release. Instead, we have rolled back to the prototype implementation, which will automatically display build results as a tab in the Find Results window. While this location might seem a little surprising, ReSharper displays the window automatically, and whenever you double click one of the projects in the Build & Run tool window. Furthermore, the Find Results implementation offers more features, including grouping, filtering and keyboard shortcuts. The standard Visual Studio F8 and Shift+F8 shortcuts will navigate the errors, as will ReSharper’s standard previous/next shortcuts Ctrl+Alt+PageUp/PageDown (Ctrl+Alt+Up/Down for IntelliJ shortcut users). We will be re-evaluating how we display build results for a future release. ReSharper Build overrides the standard Visual Studio build commands, meaning it gets invoked from the normal build keyboard shortcut, which will build the whole solution. Similarly, the rebuild and clean keyboard shortcuts will rebuild and clean the whole solution. When running tests, launching an application for debugging, or right clicking and selecting Build on the project’s context menu, ReSharper Build is invoked on a subset of the solution, and only processes those projects. Any other projects that need rebuilding are ignored, and marked in blue, showing they will be rebuilt in the future.
Similarly, ReSharper Build is available under the ReSharper → Build menu option, which allows for building, cleaning and rebuilding the solution. It also offers menu items to just build, rebuild and clean the current selection. You can also cancel the current build, if it’s taking too long (although we don’t expect this to get used much). Update for RTM: The ReSharper → Build menu item has been removed – all options are already available on Visual Studio’s Build menu, with the exception of building a selection. This can be achieved from a project’s context menu, or automatically by ReSharper when building before running tests. Configuring ReSharper Build is done from ReSharper Options (ReSharper → Options → Build, or by clicking the "spanner" options icon in the ReSharper Build & Run tool window). You can set which version of MSBuild to use (Visual Studio’s, by default) and how many MSBuild processes to run for parallel build. You can also configure which projects should never be built, and which projects should always be built. These two options can be very useful. Excluding a project means it won’t get rebuilt, ever, while always building a project means the heuristics are never applied to it (see below). Technical limitations There are a couple of technical limitations that are due to ReSharper Build being based on MSBuild: - Custom build steps are a black box. ReSharper Build doesn’t know what’s happening in a custom build step, so can’t track its inputs and outputs. Furthermore, if the project is skipped, the build step doesn’t run. Projects with custom build steps can be excluded from the heuristics in the ReSharper → Options → ReSharper Build page. Known issues As well as continuing to work on the UI, there are still a couple of known issues. If you encounter any other issues, PLEASE let us know! - NuGet package restore is not currently supported. Right now, ReSharper Build does not trigger NuGet’s package restore functionality. To restore packages, go to the NuGet package UI, or temporarily switch to the Visual Studio build process using the "cog" icon in the ReSharper Build & Run tool window. - Does this work with Copy Local? Yes.
ReSharper Build also tracks copy tasks, and will replay them if timestamps have changed. So a project that is skipped will still get the latest version of any modified referenced assemblies in its output folder. For even faster build times, we recommend building to a single bin folder (and NOT disabling Copy Local). This results in far fewer files being duplicated and copied around, and as an added bonus, reduces the number of files being monitored by ReSharper Build for timestamp changes. - Does this work for changes in resource files? Yes. ReSharper Build monitors the timestamps of all inputs to targets, rather than changes to just source files. Any resource files (images, xml files, etc.) that are modified are flagged as timestamp changes, so the project is rebuilt. - Does this work with C++, F#, etc? Yes. ReSharper Build monitors the inputs and outputs of targets and tasks, and can apply the timestamp heuristic to these projects. It can apply the surface API heuristic to any CLR based project — ReSharper Build calculates the public API surface from the output assembly, rather than the source of the project, so can monitor any CLR based assembly for API changes. However, please remember that custom build steps are a black box that ReSharper Build cannot track. If your project has a custom build step, including C++ pre- and post-build steps, ReSharper Build cannot know when they should be run. You can either rewrite the custom build steps to use proper MSBuild targets and tasks, or exclude the project from heuristics in the options. Of course, if you encounter a project that doesn’t play nicely with ReSharper Build, please let us know. - Does this work with strong naming? Yes. When referencing an assembly, the version is embedded as part of the reference. When an assembly is strong named, the version numbers must match exactly, or assembly loading will fail. This means referencing assemblies need to be recompiled, even if the public API hasn’t changed. 
ReSharper Build considers the assembly version to be part of the public API. If the version changes, referencing assemblies are recompiled. The AssemblyVersionAttribute specifies the version of the assembly. If it’s set to a static value such as "1.0.0.0", the version doesn’t change with each compile, and ReSharper Build’s optimisations can be applied. However, if the assembly version is in the form "1.0.*", the version is updated on each compile, and ReSharper Build considers the public API to have changed, and the surface API heuristic no longer applies. The default C# project template sets the assembly version attribute value to be "1.0.0.0", which fits nicely with ReSharper Build’s optimisations. However, the managed C++ template sets the attribute value to "1.0.*", which will make ReSharper Build consider the public API as having changed. It is recommended to change the assembly version attribute to be a static value in order to get the most from ReSharper Build. - Does this require Solution Wide Analysis to be enabled? No. The public API build heuristic works on the compiled assembly, and doesn’t need to analyse source code at all. If you have excluded a project from building in the options page, Solution Wide Analysis can be very useful to capture compile errors that you wouldn’t otherwise see. - How does this work with Run Configurations? ReSharper 9.2 introduced Run Configurations, which are ways of defining targets to run and debug. These can be running an executable with different command line parameters, launching a project, or even debugging a static method. Naturally, ReSharper Build integrates nicely with Run Configurations, providing a drop-down of different configurations in the ReSharper Build & Run tool window. From here, you can select the default configuration that will happen when you run or debug, or define new ones. And finally, a small tip to finish up — ReSharper Build monitors the public API surface of your assemblies. 
To reduce unnecessary builds, consider using "internal" instead of "public". If a class, interface, method, etc. isn’t actually required to be public, making it internal reduces the API surface being monitored, and will reduce the number of unnecessary rebuilds. Conclusion We are very excited to launch a new feature such as this. We’re hoping it will have as much of a positive impact on your build times, as ReSharper has had on your development experience. Please download the latest EAP and try it out with your projects, and PLEASE let us know if you encounter problems. This is why we run the Early Access Program — to get feedback on products before they’re finalised, to find issues before we release. Let us know if it’s working for you, and especially let us know if it doesn’t! D’oh! It was all sounding great till the bit about AssemblyVersion 1.0.* not being supported. We use that a lot as it makes it easy to ensure that our MSIs will always overwrite DLLs when upgrading a program. Unfortunately, assembly version is a part of the public API, and it can cause issues if we ignore it, especially with strong naming. Our suggestion is to use a static version number when developing, but specify a proper version at release build time. You can do this with something like TeamCity’s AssemblyInfo patcher build feature, which allows you to set the assembly version during CI builds, or something simple, like this: #if DEBUG [assembly: AssemblyVersion("1.0.0.0")] #else [assembly: AssemblyVersion("1.0.*")] #endif This will use a static version during development, but calculate a version number when doing a release build. Thanks, neat idea with the debug flag, we could definitely use that! I realise the proper way would be something better on the build server, but for now it’s not something we could do. Hi mark, great feature! @"Unfortunately, assembly version is a part of the public API, and it can cause issues if we ignore it, especially with strong naming.".
Have your development team thought about batching depended assemblies on binary level instead of recompiling? It’s very simple, I have done this already, so if you are interested in a code snipped you can contact me. Kind regards, Michael Hi Michael. I’m not sure what you mean by batching. Could you explain a bit more, please? Hi Matt, sorry for the wrong spelling, should mean patching. It is possible to rewrite the referenced assemblies in a compiled assembly on binary level. So if an assembly which was recompiled becomes another version (e.g. be using AssemblyVersionAttribute(“1.0.*”)) and your great build tool knowns that a dependent assembly need no recompile, you can change in dependent assembly only the reference without recompiling. If the assembly is not signed, this works great, I don’t know if this works also for signed assemblies, but I assume you must to the patch before the assembly will be signed. If you are interested in a code snipped, write me an email. How does this play along with other toolsets running in VS like Xamarin? It should work just fine, as long as the toolsets are using normal msbuild techniques that specify inputs and outputs, which allow us to track timestamps. Please let us know if you do encounter any issues – everybody’s builds are different, and we can’t necessarily test all combinations. Also, if you do encounter issues, you can mark those projects as “always build”, which means ReSharper Build doesn’t apply heuristics, and lets msbuild run the project as normal. I just loaded the Xamarin starter Snake project and it refuses to build using Resharper build because “building from the command line requires a business license”. FYI. +1 to the AssemblyVersion * support. Otherwise, this sounds fantastic. MSBuild is a terrible technology and should be gutted from the inside out. Or really, in any manner possible. Looks like it has already begun to do just that. 😛 Hi Michael. 
Take a look at the reply above about AssemblyVersion wildcard support. We need to take assembly version into account when looking at the public API, but it can be worked around during development builds. How does that sound?

Will building UWP apps using .NET Native be supported? That'd be awesome, as the .NET Native build pipeline is currently not very performant.

As long as the UWP build process is normal msbuild tasks and targets, then it will work. As I understand it, the .NET Native compilation only applies in Release mode, while Debug mode is normal .NET compilation, so building in Debug mode should be faster than Release mode even without ReSharper Build.

Fantastic! I hope JetBrains will expand even further and cover even more of what developers do all day.

Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1948

Looks great. Definitely a productivity booster. Does ReSharper Build also work with VS2013? Or only with VS2015?

We're still testing full compatibility, but ReSharper Build works with all versions of Visual Studio that ReSharper itself supports, so that's VS2010, VS2012, VS2013 and VS2015. Of course, if you encounter any issues with a particular version of Visual Studio, please let us know.

I didn't try it yet. I just wanted to ask if it is generally possible. Nice to hear that older VS versions also support ReSharper Build. I will try it as soon as ReSharper 10 is released…

Pingback: Dew Drop – October 16, 2015 (#2113) | Morning Dew

It would be nice to extend this example by providing concrete measurements on the ReSharper solution itself. If I am right, it is about 200 projects, so there is room for measurements, isn't there? I mean performance improvement measurements…

Hi Jirka. I'm afraid we don't really have any performance figures to hand – the performance improvements depend very much on the scenario of the build (rebuild, tests, public API changes, etc.).
But you can check out this gif of fixing a bug, recompiling and running tests in a ReSharper solution made up of 541 projects. It's pretty quick!

One thing I liked about the sadly defunct .NET Demon was that it automatically rebuilt each time I saved a file – I really miss this in VS2015, and even now I still forget regularly to manually build and wonder why my changes haven't been applied. Is build-on-save something that this tool will offer? Thanks; Richard Moss

Yes, although this is actually part of the Continuous Testing feature, as build on save only really makes sense if you use the compiled output for something. To enable this, go to ReSharper → Unit Tests → Show Continuous Testing Session, and change the Mode drop-down to one of the options – "On Save, Build and Run Dirty Tests" or "On Save, Build and Detect Dirty Tests". We'll look more at Continuous Testing in a forthcoming blog post. If you have any feedback on this, please let us know while the EAP is still in progress!

Thanks for the response. I don't use ReSharper's unit testing functionality (in fact I disable it in the hope of freeing some resources). For continuous testing I use NCrunch, so "Save and Build" works for me; "Save, Build and Do Something With Tests That I Don't Care About" won't. It could be that ReSharper's test stuff could supplant NCrunch in time, but it would have to be pretty exceptional to do that. I mainly use this feature for web applications – if I change server-side code that needs compiling, I used to rely on .NET Demon recompiling the DLL, so all I had to do in the web browser was refresh the page. Now I have to remember to trigger a compile 😉

Good scenario, thanks – I've added a feature request to allow enabling "build on save" without having to enable or use Continuous Testing. You can track and vote on the issue here: RSRP-449679

I'm excited and intrigued by the continuous build.
I've installed EAP 5 to try it out, but the solution-wide build tells me many of my projects failed. When I look at the messages from it, there are no errors. And if I build the individual project, it succeeds. So I'm still intrigued, but now I'm also apprehensive.

Could you log the details about the error, please? The more details that enable us to reproduce and debug, the better. Thanks!

Is it possible to use only the build feature and turn off *all* other ReSharper features?

Yes. Most features (such as code completion or inspections) can be disabled, or configured to use default Visual Studio functionality, so you can tune how ReSharper works for you. Furthermore, ReSharper allows a specific feature set or even a whole product to be disabled, in the ReSharper → Options → Products and Features options page. Feature sets can be something like navigation, unit testing or CSS support, while a product can be everything to do with dotTrace, or dotCover. ReSharper Build is a product, which means that while it shares common code with the rest of ReSharper, it can be enabled/disabled independently of the rest of the suite. So, yes, you can disable everything apart from ReSharper Build. Obviously, we'd like you to use some of the other features of the suite, too :). ReSharper is very configurable, and can usually be set up to fit most people's coding styles and habits. If there's anything you don't like, or that doesn't work for you, please let us know!

Excellent! Just what I wanted. Thanks a lot.

Pingback: F# Weekly #42, 2015 | Sergey Tihon's Blog

Pingback: Les liens de la semaine – Édition #154 | French Coding

Is this only for local builds? Or would it work for builds run through TFS?

This is only for local builds. It's an optimisation for msbuild to intelligently skip building when you don't need to, by monitoring your environment between builds. It's designed to make repeated local builds faster, rather than to speed up a CI system.
Pingback: NYC's subway needs system update, 'Star Wars' fans care about website performance, and JetBrains develops new tool—SD Times news digest: Oct. 19, 2015 - SD Times

Have you guys spoken with the MSBuild team to pass on your learnings? I dislike MSBuild and am glad someone is tackling it head on. Now that MSBuild is open-sourced, perhaps MS will be amenable to suggestions etc.? The fact that DNX doesn't use it at all hopefully means we can move on eventually and leave the XML fetish behind.

This isn't an msbuild issue. We're replacing how Visual Studio orchestrates a build with a process that optimises based on extra information that ReSharper manages. There's not much that can feed back into msbuild – nothing ReSharper Build does affects a command line build, for example.

Is there a trial or evaluation version of this functionality? It sounds pretty useful and I'd be very interested in evaluating it. We use ReSharper, although we're not on the latest version.

Yes, you can download the latest EAP of ReSharper 10. It's a trial version that will expire as new EAP versions and the final version are released. You can always uninstall the EAP and reinstall the version of ReSharper, dotTrace, dotPeek, etc. that you're currently using.

Is this present in version 9.2 or only in the 10 EAP?

This is a new feature in the 10 EAP. But you can download it and evaluate it for free.

The build process doesn't feel a lot faster; maybe we're running into some of the caveats you mentioned, so I'd like to investigate. Matt, can I see which MSBuilds are triggered? It looks like the output window stays empty.

Not as such, there isn't a log file or anything. However, you can use the ReSharper Build & Run tool window to see what projects are built or need building. The colours of each box show what has happened, and what will happen. Hovering over a project's box will show a tooltip that explains why it's been rebuilt – due to a dependency that has a change in public API, for example.
Hopefully, this will be enough to show you why a build was triggered, even if it doesn't tell you exactly what has changed – that is, you know the public API has changed, but not what in the API has changed. If you need more details, could you log an issue, please? Details of what you'd like to see would be brilliant. Also remember that compilation isn't sped up – compilation takes as long as it ever did. ReSharper Build simply tries to make it so that you don't compile when you don't need to. So if you make a change to the public API of a root project, then your whole solution will likely need to be rebuilt, and that will take as long as it ever did.

So as of EAP 7, I see the output, and from the colors too I can see that it identifies the changed project correctly. However, it still takes much, much longer to run the build (12s from the VS build vs. 50s in the R# build). I have since filed an issue.

When using ReSharper Build I seem to be getting build errors related to projects that are not set to build under my active build configuration. Compilation seems to cease when this error is encountered in an erroneous project. Is this a known issue, or am I doing something wrong here?

ReSharper Build will stop after it encounters the first project with errors – this is because it knows it can't build anything else, as this project is broken, so it stops. As for ReSharper building projects that should be excluded, do you mean they are set to "never build" in ReSharper Build's options? Would you mind logging a YouTrack issue with more details? (A repro is always an enormous help!)

Actually, what I'm saying is that in the Visual Studio solution build configuration ("Debug", "Release") I have a project set to NOT build (unchecked Build checkbox). I'm assuming at this point that ReSharper doesn't look at any of the Visual Studio build configurations to know what to build and what not to.
This is actually already marked as a bug, it seems.

In the released version the menu has changed. It is still in the Windows menu, but called "Build and run". It's missing an icon, and I'm not a fan of the name, as it sounds like an action (i.e. it will build and run if you select "build and run"). Finally, I've closed the build warnings/errors window and now I can't find a way to reopen it. Good work though, I think it will be helpful.

To reopen the warnings/errors dialog, you just have to click on one of the projects in the Build & Run window.

Thanks. I've updated the post to point out some of the bigger differences in the final RTM build.

Is it possible to get ReSharper to inform you of how long the build took, so I can compare it to the normal Visual Studio build as well as IncrediBuild?

Not at the moment, but it's a nice idea! I've added a feature request you can track, add details to and vote on. Thanks!

What about starting the build after a file is saved? .NET Demon used to do that, but Red Gate discontinued the product because some similar functionality is/would be included in VS 2015. But not everyone is switching to VS 2015.

This is implemented by dotCover's Continuous Testing feature. We'll have a blog post on this soon, but in the meantime, you can enable Continuous Testing from the Unit Tests menu, and from the Continuous Testing Session window you can select to build on save, run tests on build, etc.

Pingback: Resharper 10 MVC Problems - A Blog about Coding

The build doesn't work if an Azure host project doesn't have a default configuration. Error WAT200: No default service configuration "ServiceConfiguration.cscfg" could be found in the project. Sad, because this is how all our projects are set up, and it's a perfectly legitimate way. Not sure how Visual Studio handles it, though.

Hi, this appears to be a known issue.
Here's the YouTrack ticket that you can use to vote, track or add more details to: RSRP-450390

What happens if we are using PrivateObject in a unit test, or any other use of reflection, as is typical in unit tests?

Answering myself: it will not break anything, as any issue will be found at runtime, not compile time.

This is an amazing tool; finally I can carry on working whilst building. Currently you can't build solutions with web projects, though, even if they're not set to build. I have a large solution with one web project which I'd love to use with your tool; I never actually need to build the web project, but I do need it to remain part of the solution for ease of access to aspx files etc.

Excellent new feature for ReSharper; I can't wait to try it out. Good tip as well about making classes/interfaces internal instead of public (which we should do anyway if they're not used outside the assembly, regardless of using ReSharper Build), but does it support InternalsVisibleTo attributes, which are sometimes used to make internal methods visible to a test assembly, for instance? Would ReSharper Build know that a change to an internal method could affect the test assembly, and that it would need rebuilding as a consequence?

Unfortunately, right now it doesn't support the InternalsVisibleToAttribute. Here's an issue for you to vote on, track and add any details to: RSRP-450733

Hi Matt, is it possible to customize the layout of the project boxes in ReSharper Build? For example, change the font size, or re-group the project boxes by different rules? Cheers

There's no support for changing the layout at the moment. The best thing to do is file a feature request in our issue tracker with details about what you're after, as well as why you'd like to see those features.

Added a comment related to web site project types to the mentioned issue request.

Hello, is it possible to call ReSharper Build from the command line (maybe with a devenv.exe call, or the ReSharper Command Line Tools)?
Thanks

How can we force build order? I have a solution with a wixlib project. The output has to be embedded into another project's output. With the default VS build manager I can manually set the dependency and force the wixlib project to be built before the depending project. Build & Run ignores this and the build fails.

Great feature, but I'm missing the log output. Obviously ReSharper Build captures it, as I see the last (maybe 200) lines if the build is successful. But I cannot see it while the build is running (which gives you good feedback), nor do I see it in case of an error or a warning (which helps in analyzing some tricky errors).

THIS IS AWESOME SAUCE! I'm annoyed I have only just now found out about this. Is this available as a stand-alone console tool?!

No, although it is something we're considering.

Hi, when building and debugging an Android app on a Genymotion emulator, the ReSharper build took twice as long as the Visual Studio build. Just an FYI.

Sorry to hear that. Could you log some details about the issue for us, so we can take a look, please?

Hi, is there some way to remove the green background of the Windows taskbar when the build is done? Disabling the 'Task bar: show build progress in windows task bar' option does not remove the green background when the build is done.

I have the same problem. Any solutions?

Pingback: Visual studio performance tips – maesterz

Pingback: Why Visual Studio keeps rebuilding my projects for no good reason - Michael's Coding Spot

That does not appear to be true for e.g. vdproj (Setup projects, extensions)… it seems like ReSharper simply skips these rather than doing the described fallback; the only way to build them seems to be to disable ReSharper Build, which is a bit unfortunate. Any way to fix that?

When I create a simple example solution with two projects, it does not work. Project A uses class X from project B. Project B contains public class X, used in project A. Project B also contains internal class Y.
Both X and Y are in separate files as well. When I change the (internal) class Y, ReSharper Build still builds project A as well. Why? Class Y is not part of the public API of project B, because it is internal. So I thought ReSharper Build would be able to detect this and skip the build for project A. Using VS 2017 and ReSharper Ultimate 2018.2. Both project A and project B are .NET Standard 2.0 projects. Any idea why it doesn't work as I expected?

Just did the same with .NET Framework projects and it works! So it has something to do with .NET Core, I guess?

.NET Core and .NET Standard projects are supported by ReSharper Build, so this should work as expected. Could you log an issue and attach the test projects, please?

If I change a comment within a private method, it still builds the assembly. As I understand it, even if I change the logic within a private method, it should build the project but then just copy over the compiled DLL instead of rebuilding all the referencing projects. Is that correct?

Yes, it should still build the assembly that contains the private method, but if only a comment has changed, then it shouldn't build any other projects. If that's happening, can you log an issue so we can follow up with more details, please?

Can you also add a setting for whether you want the build to come to the foreground? I sometimes like to do a build in the background to get things up to date while I'm working on some front-end changes, so I find ReSharper Build annoying in the way it grabs focus.

I'm constantly getting the build error "Destination array was not long enough. Check destIndex and length, and the array's lower bounds." when using ReSharper Build. I have the latest version.

Could you log an issue on YouTrack with details of the exception, please?

I have. It's RSRP-473116, which hasn't been looked at since 24th Jan.
https://blog.jetbrains.com/dotnet/2015/10/15/introducing-resharper-build/
TextWriterTraceListener Class

Directs tracing or debugging output to a TextWriter or to a Stream, such as a FileStream.

Assembly: System (in System.dll)

The TextWriterTraceListener class provides the Writer property to get or set the text writer that receives the tracing or debugging output. This class also provides methods to Close the Writer so that it no longer receives tracing or debugging output, to Flush the output buffer for the Writer, and to Write a message to the Writer, as in the following example.

The following example implements an instance of the TextWriterTraceListener class that uses a StreamWriter called myOutputWriter to write to a file named TestFile.txt. First the example creates a file for output. Then it creates the StreamWriter for the first text writer, assigns it the output file, and adds it to the Listeners. Then, the code outputs one line of text to the file. Finally, the example flushes the output buffer. After running this sample, you can open the TestFile.txt file to see the output.

using System;
using System.IO;
using System.Diagnostics;

public class Sample
{
    public static int Main(string[] args)
    {
        // Create a file for output named TestFile.txt and wrap it in a StreamWriter.
        StreamWriter myOutputWriter = new StreamWriter(File.Create("TestFile.txt"));

        // Create a text writer trace listener for the StreamWriter,
        // and add it to the trace listeners.
        TextWriterTraceListener myTextListener = new TextWriterTraceListener(myOutputWriter);
        Trace.Listeners.Add(myTextListener);

        // Write output to the file.
        Trace.Write("Test output ");

        // Flush the output.
        Trace.Flush();

        return 0;
    }
}

Available since 1.1

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
https://msdn.microsoft.com/en-us/library/system.diagnostics.textwritertracelistener.aspx?cs-save-lang=1&cs-lang=csharp
Tutorial for Creating Simple DXF Drawings

Fast DXF R12 file/stream writer - creates simple DXF R12 drawings with a restricted entity set: LINE, CIRCLE, ARC, TEXT, POINT, SOLID, 3DFACE and POLYLINE. The advantage of the r12writer is its speed and low memory footprint: all entities are written directly to the file/stream without building a drawing data structure in memory.

See also: Fast DXF R12 File/Stream Writer

Create a new DXF drawing with ezdxf.new() to use all available DXF entities:

import ezdxf

dwg = ezdxf.new('R2010')  # create a new DXF R2010 drawing, official DXF version name: 'AC1024'
msp = dwg.modelspace()  # add new entities to the model space
msp.add_line((0, 0), (10, 0))  # add a LINE entity
dwg.saveas('line.dxf')

New entities are always added to layouts; a layout can be the model space, a paper space layout or a block layout.
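To make the "no in-memory drawing structure" point concrete, here is a small illustrative sketch of how a streaming DXF writer works at the tag level. This is not ezdxf's actual r12writer API — the helper functions below are hypothetical — but the group codes shown (0 for entity type, 8 for layer, 10/20 and 11/21 for a LINE's start and end points) are standard DXF codes:

```python
# Illustrative sketch only (not the real ezdxf API): a DXF file is a stream
# of (group code, value) tag pairs, so a writer can emit entities directly
# to the stream without keeping a drawing structure in memory.
import io

def write_tag(stream, code, value):
    # A DXF tag is the group code on one line and the value on the next.
    stream.write(f"{code}\n{value}\n")

def write_line(stream, start, end, layer="0"):
    # Emit a LINE entity straight to the stream.
    write_tag(stream, 0, "LINE")
    write_tag(stream, 8, layer)      # layer name
    write_tag(stream, 10, start[0])  # start point X
    write_tag(stream, 20, start[1])  # start point Y
    write_tag(stream, 11, end[0])    # end point X
    write_tag(stream, 21, end[1])    # end point Y

buf = io.StringIO()
write_tag(buf, 0, "SECTION")
write_tag(buf, 2, "ENTITIES")
write_line(buf, (0, 0), (10, 0))  # the same LINE as in the ezdxf example above
write_tag(buf, 0, "ENDSEC")
write_tag(buf, 0, "EOF")
```

Because nothing is buffered beyond the output stream itself, memory use stays flat no matter how many entities are written — which is the trade-off the r12writer makes in exchange for its restricted entity set.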
http://ezdxf.readthedocs.io/en/latest/tutorials/simple_drawings.html
Python library for configuring logs in the FIAAS way

Project description

This library configures logging according to the current FIAAS recommended format. Usage:

from fiaas_logging import init_logging

init_logging(format="json")

This would configure your application to emit JSON-formatted logs on STDOUT. All available options are keyword arguments to init_logging.

The plain format contains the fields timestamp, level name, message, logger name, and thread name. In the json format, there are more fields, with more detail.

Source distribution: fiaas-logging-0.1.1.tar.gz (12.8 kB)
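For illustration, a JSON log format of the kind described can be sketched with the standard library alone. This is not fiaas-logging's implementation, and the field names below are assumptions chosen to mirror the fields listed above (timestamp, level name, message, logger name, thread name), not the library's exact schema:

```python
# Hypothetical sketch of a JSON log formatter; fiaas-logging's actual
# field names and implementation may differ.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        # Serialize the log record's main fields as one JSON object per line.
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
            "logger": record.name,
            "thread": record.threadName,
        }
        return json.dumps(payload)

# Emit JSON-formatted logs on STDOUT, as init_logging(format="json") would.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.getLogger("demo").addHandler(handler)
logging.getLogger("demo").warning("hello")
```

Structured output like this is what makes the json format machine-readable for log aggregators, at the cost of being harder to scan by eye than the plain format.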
https://pypi.org/project/fiaas-logging/
I am not able to get a view animation working for inflated layouts. I used the following code snippet:

pageView.startAnimation(AnimationUtils.loadAnimation(this, R.anim.right_to_left_anim.xml));

and this XML:

<set xmlns:android=""
    android:shareInterpolator="false">
    <translate
        android:fromXDelta="0%"
        android:toXDelta="100%"
        android:fromYDelta="0%"
        android:toYDelta="0%"
        android:duration="700"/>
</set>

Is there anything I am missing? Thanks.

This library works perfectly, but I have a doubt. When I send a message of more than two lines to users, they can't see the whole message in the notification area. But I know that Android can do it. How can I do it for notifications from parse.com? Look at the images to explain my problem: Image1 […]

I am trying to mimic the Google Plus application in my project, as it seems to be the reference now. The ListView effect when scrolling is really nice and I would like to do something similar. I have started with the LayoutAnimationController:

LayoutAnimationController controller = AnimationUtils.loadLayoutAnimation(this, R.anim.list_layout_controller);
getListView().setLayoutAnimation(controller);

and that seems bad, as […]

Good afternoon! I have a website with an href in it which redirects me to https:

<a id="mA" href="javascript:pLogin(2)" class="login-link__link private-cab-link"><i class="icon-user"></i>Авторизация</a>

So I can click on it from JavaScript. It works in the Chrome console:

javascript:(function(){document.getElementById('mA').click();})()

Now I'm trying to do the same in a WebView by clicking my app's button.

public class RostelecomLoginActivity extends Activity { […]

I have an EditText inside an AlertDialog. It looks like this. See where it says tddjdjck and how it is indented quite a lot. That is what I want (I used setPadding with left and right set to 50), but I also want the blue line under it to be indented too.
How do I […]

I have followed this post to make an ImageButton in Android: android image button. The image appears on the button, but it has some background. My image is a PNG, and I want the button to have a transparent background. Can anyone help, please?

I have implemented the IabHelper class in my Android project and it says that 'getBuyIntentToReplaceSkus' cannot be resolved. The full method:

buyIntentBundle = mService.getBuyIntentToReplaceSkus(5, mContext.getPackageName(), oldSkus, sku, itemType, extraData);

I implemented in-app billing in my project, but I have not yet created any items to be purchased, though the rest of the methods don't […]

I am developing a player app and I am using MediaPlayer for that. Now I want to change the speed of the playing track. I've seen so many apps with this functionality. How can I do this?

How do I close a Dialog in Android programmatically, for example with a button? Imagine I have a Dialog with an OK button on it and want to close it with the OK button, but I can't do that! I googled and found nothing useful; almost all of the results are for closing an AlertDialog, not a Dialog.

I am using Espresso to do some UI testing on my app. I have a fragment with a map, and I display some items on it that I get through a call to my backend. When I click on a marker, I'm doing some UI things. Is there any way I can do unit testing […]
http://babe.ilandroid.com/page/2864
A Java class is a group of Java methods and variables. Each Java source code file can contain one public class. The name of this public class must match the name of the Java source code file. If the public class is called "HelloWorld", then the filename would be "HelloWorld.java". A sample Java class is written below:

public class HelloWorld {

    private String name;

    public HelloWorld(String name) {
        this.name = name;
    }

    public void sayHello() {
        System.out.println("Hello " + name);
    }

    public static void main(String[] args) {
        HelloWorld hw = new HelloWorld("Java Tips");
        hw.sayHello();
    }
}
http://www.java-tips.org/java-se-tips/java.lang/what-is-a-java-class.html
It's Time to Test Your Vue Components: Getting Started With Jest

Unit testing Vue components is something that has always intimidated me. I've relied on acceptance testing tools like Laravel Dusk for all my JavaScript testing needs because of its ease of use and place in the Laravel ecosystem. Although extremely useful, Dusk tests can be painfully slow to run and difficult to maintain. Having a speedier way to receive feedback while writing Vue components has been a big improvement to my front-end workflow.

This post walks through setting up a basic Vue testing suite in a Laravel app. Thanks to new developments in the JavaScript testing world, getting started is much easier than it used to be. Hopefully, you'll find the process simple and easy to wrap your head around.

After setting up the test suite, we'll also explore a powerful testing feature offered by Jest: snapshot testing. Snapshot tests are an easy way to get comprehensive test coverage of your Vue components without doing the tedious work of writing custom assertions. Towards the end of this post, we will explore what it looks like to start using this technique in place of the more typical ways you're used to writing tests.

The Tools

In the past, I've felt overwhelmed by the number of JavaScript testing tools out there and generally confused about how they fit into writing and running my actual tests. Words like karma, mocha, chai, and sinon seemed to blur together. This time around, things are a bit less overwhelming thanks to a more mature ecosystem bringing us nicely packaged tools like Jest and Vue Test Utils. If you start to feel overwhelmed by the setup at any point, remember these two utilities are the only core tools we are using, despite the number of Node.js packages we will be installing.

What Is Jest?

Jest is a JavaScript testing tool created by Facebook that handles running your tests and making assertions.
Think of it like PHPUnit for your JavaScript; as PHPUnit is to Laravel, Jest is to Vue. There are other testing tools out there besides Jest, but I prefer it because of its ease of use.

Installing Jest

First, let's grab all the dependencies we will need for the setup via npm. Run the following command from your project's root:

npm install --save-dev jest vue-jest jest-serializer-vue

We can configure Jest for our needs by adding "jest" and "babel" entries to the project's package.json file. This configuration will tell Jest where to look for components to test and how to interpret non-standard JavaScript files like ES6 syntax and .vue components.

...
"jest": {
    "moduleFileExtensions": [
        "js",
        "vue"
    ],
    "moduleNameMapper": {
        "^@/(.*)$": "<rootDir>/resources/assets/js/components/$1"
    },
    "transform": {
        "^.+\\.js$": "<rootDir>/node_modules/babel-jest",
        ".*\\.(vue)$": "<rootDir>/node_modules/vue-jest"
    },
    "snapshotSerializers": [
        "<rootDir>/node_modules/jest-serializer-vue"
    ]
},
"babel": {
    "env": {
        "test": {
            "presets": [
                ["env", { "targets": { "node": "current" }}]
            ]
        }
    }
},

That's it for our Jest setup! Let's move on to setting up the second and final testing utility we'll be using: Vue Test Utils.

What is Vue Test Utils?

Vue Test Utils is what makes our tests "Vue aware", allowing us to easily interact with Vue components in our Jest tests. Without it, we would only be able to write tests against plain JavaScript code, not Vue.

Installing Vue Test Utils

Vue Test Utils is much more straightforward to install and configure.
Run the following command to install it via npm, and we will be all set to write our first test:

npm install --save-dev @vue/test-utils

Writing Our First Test

For demonstration purposes, let's create a simple counter component with an "increment" button to write our test against:

resources/assets/js/components/Counter.vue

<template>
    <div>
        <h1>Count: {{ counter }}</h1>
        <button jest="increment-button" @click="counter += 1">+1</button>
    </div>
</template>

<script>
export default {
    data() {
        return {
            counter: 0,
        }
    }
}
</script>

Note: I added an HTML attribute to the button element called "jest". This isn't special syntax; just my own personal way of making it clear that these CSS selectors are for testing purposes. See this tutorial for more info on this technique.

Now let's write the test for our Counter.vue component:

tests/javascript/Counter.spec.js

import { mount } from '@vue/test-utils'
import Counter from '@/Counter.vue'

describe('Counter.vue', () => {
    it('increments counter', () => {
        const wrapper = mount(Counter);

        expect(wrapper.vm.counter).toBe(0);

        wrapper.find('[jest="increment-button"]').trigger('click')

        expect(wrapper.vm.counter).toBe(1);
    })
})

Notice we are wrapping our Counter component using a utility called mount, provided by Vue Test Utils. The mount() method internally mounts the Vue component and provides an interface for us to interact with and make assertions against the underlying component. For more details on what's possible with Vue Test Utils, head over to the docs for more reference.

After mounting the component and storing it in a wrapper object, we:

- Make an initial assertion about the state of the component
- Perform an action (in this case, clicking on the increment button)
- Make an assertion that the initial data property has changed

Assuming you are familiar with basic unit testing principles, the learning curve will mostly entail familiarizing yourself with what Jest and Vue Test Utils make available to you.
Like I mentioned above, I recommend skimming the docs to learn more about these tools and what you can do with them.

It's Go Time!

Pop open your terminal and run the following command from the project root:

node_modules/.bin/jest

Hooray! If everything went smoothly, you should see a passing unit test. Before we go any further, let's clean up that Jest command a bit.

Creating an npm Script to Run Your Tests

Add these npm scripts to your package.json file for a cleaner test running experience:

"scripts": {
    ...
    "test": "jest",
    "test-watch": "npm run test -- --watch"
},

npm run test will run all your Jest tests. npm run test-watch will re-run the tests when you change a component under test.

Pro Tip: Jest provides some handy features you can access by pressing "w" after running your tests in watch mode. Features like re-running only failed tests, which are sorely lacking in PHPUnit, are provided to us out of the box with Jest.

Introducing: Snapshot Testing

There's a lot you can do with your new testing setup. However, if you want to take your Vue testing a step further, let's explore another powerful testing technique Jest offers out of the box: snapshot testing.

In the previously-covered paradigm, when you act on a Vue component in your test, you write an expectation for a specific outcome. However, in the snapshot testing paradigm, every time you act on a component, Jest takes a "snapshot" of the entire rendered DOM output; in simpler terms, Jest basically converts your rendered component to a string. These snapshots are stored in plain text and used the next time you run your test suite to compare future results against. Consider them automatic assertions, which makes them a great way to put a large system under test quickly.

If it doesn't make sense to you right away, that's OK. For me, the concept was better understood through examples. Let's modify our original test to use snapshots instead of asserting against specific data attributes.
tests/javascript/Counter.spec.js

import { mount } from '@vue/test-utils'
import Counter from '@/Counter.vue'

describe('Counter.vue', () => {
    it('increments counter', () => {
        const wrapper = mount(Counter);

        expect(wrapper.html()).toMatchSnapshot()

        wrapper.find('[jest="increment-button"]').trigger('click')

        expect(wrapper.html()).toMatchSnapshot()
    })
})

Notice the new .toMatchSnapshot() statements. Now let's run the new tests with snapshot matching:

npm run test

After the first run, Jest will generate a __snapshots__ directory in the same directory as the original test. Let's take a peek at the newly generated snapshot.

tests/javascript/__snapshots__/Counter.spec.js.snap

// Jest Snapshot v1,

exports[`Counter.vue increments counter 1`] = `
<div>
    <h1>Count: 0</h1>
    <button jest="increment-button">+1</button>
</div>
`;

exports[`Counter.vue increments counter 2`] = `
<div>
    <h1>Count: 1</h1>
    <button jest="increment-button">+1</button>
</div>
`;

As you can see, Jest renders the Vue component at every step and saves its rendered state as plain text. The next time the tests are run, if any part of the rendered component changes, the test will fail and point you to the difference. If you intentionally change the behavior of your component, you will need to regenerate these snapshots. To do so, run:

npm run test -- -u

Snapshot tests are a great way to get comprehensive coverage of a component without having to test specific outcomes. Remember, though, that this approach is only useful in certain circumstances. Snapshot tests are extremely rigid; for instance, small, non-critical changes to a component, such as changing a class name, will cause the tests to fail. You also have to be careful not to allow bugs into your snapshots. If you update your snapshots, be sure to review them thoroughly before committing them.

Wrapping Up

Until recently, Vue testing has been a bit of a fuzzy concept to me.
I hope this post will bring some clarity to the topic and make it easy for you to get up and running with some simple component tests. For deeper info on what to test and how to test it, remember to take a look at the docs for Jest and Vue Test Utils. Finally, there are no more excuses for not testing your Vue components! ✌️ — Caleb
https://tighten.co/blog/its-time-to-start-testing-your-vue-components-getting-started-with-jest
CC-MAIN-2020-10
refinedweb
1,588
61.56
As a simple example to back up PJE's explanation, consider:

1. encodings becomes a namespace package
2..

Cheers, Nick.

-- Sent from my phone, thus the relative brevity :)

On May 22, 2012 4:10 AM, "PJ Eby" pje@telecommunity.com wrote:

On Mon, May 21, 2012 at 9:55 AM, Guido van Rossum guido@python.org wrote:

To do that, you just assign to __path__, the same as now, ala __path__ = pkgutil.extend_path(). The auto-updating is in the initially-assigned __path__ object, not the module object or some sort of generalized magic.

I'd like to hear more about this from Philip -- is that feature actually widely used?

Well, it's built into setuptools, so yes. ;-) It gets used any time a dynamically specified dependency is used that might contain a namespace package. This means, for example, that every setup script out there using "setup.py test", every project using certain paste.deploy features... it's really difficult to spell out the scope of things that are using this, in the context of setuptools and distribute, because there are an immense number of ways to indirectly rely on it.

This doesn't mean that the feature can't continue to be implemented inside setuptools' dynamic dependency system, but the code to do it in setuptools is MUCH more complicated than the PEP 420 code, and doesn't work if you manually add something to sys.path without asking setuptools to do it. It's also somewhat timing-sensitive, depending on when and whether you import 'site' and pkg_resources, and whether you are mixing eggs and non-eggs in your namespace packages. In short, the implementation is a huge mess that the PEP 420 approach would vastly simplify.

But... that wasn't the original reason why I proposed it. The original reason was simply that it makes namespace packages act more like the equivalents do in other languages.
While being able to override __path__ can be considered a feature of Python, its being static by default is NOT a feature, in the same way that *requiring* an __init__.py is not really a feature. The principle of least surprise says (at least IMO) that if you add a directory to sys.path, you should be able to import stuff from it. That whether it works depends on whether or not you already imported part of a namespace package earlier is both surprising and confusing. (More on this below.)

What would a package have to do if the feature didn't exist? Continue to depend on setuptools to do it for them, or use some hypothetical update API... but that's not really the right question. ;-) The right question is, what happens to package *users* if the feature didn't exist? And the answer to that question is, "you must call this hypothetical update API *every time* you change sys.path, because otherwise your imports might break, depending on whether or not some other package imported something from a namespace before you changed sys.path".

And of course, you also need to make sure that any third-party code you use does this too, if it adds something to sys.path for you. And if you're writing cross-Python-version code, you need to check to make sure whether the API is actually available. And if you're someone helping Python newbies, you need to add this to your list of debugging questions for import-related problems.

And remember: if you forget to do this, it might not break now. It'll break later, when you add that other plugin or update that random module that dynamically decides to import something that just happens to be in a namespace package, so be prepared for it to break your application in the field, when an end-user is using it with a collection of plugins that you haven't tested together, or in the same import sequence...

The people using setuptools won't have these problems, but *new* Python users will, as people begin using a PEP 420 that lacks this feature.
The key scope question, I think, is: "How often do programs change sys.path at runtime, and what have they imported up to that point?" (Because for the other part of the scope, I think it's a fairly safe bet that namespace packages are going to become even *more* popular than they are now, once PEP 420 is in place.)

But the key API/usability question is: "What's the One Obvious Way to add/change what's importable?" And I believe the answer to that question is, "change sys.path", not "change sys.path, and then import some other module to call another API to say, 'yes, I really *meant* to update sys.path, thank you very much.'" (Especially since NOT requiring that extra API isn't going to break any existing code.)

I'd really much rather not have this feature, which reeks of too much magic to me. (An area where Philip and I often disagree. :-)

My take on it is that it only SEEMS like magic, because we're used to static __path__. But other languages don't have per-package __path__ in the first place, so there's nothing to "automatically update", and so it's not magic at all that other subpackages/modules can be found when the system path changes!

So, under the PEP 420 approach, it's *static* __path__ that's really the weird special case, and should be considered so. (After all, __path__ is and was primarily an implementation optimization and compatibility hack, rather than a user-facing "feature" of the import system.)

For example, when *would* you want to explicitly spell out a namespace package __path__, and restrict it from seeing sys.path changes? I've not seen *anybody* ask for this feature in the context of setuptools; it's only ever been bug reports about when the more complicated implementation fails to detect an update.
So, to wrap up:

- The primary rationale for the feature is that "least surprise" for a new user to Python is that adding to sys.path should allow importing a portion of a namespace, whether or not you've already imported some other thing in that namespace. Symmetry with other languages and with other Python features (e.g. changing the working directory in an interactive interpreter) suggests it, and the removal of a similar timing dependency from PEP 402 (preventing direct import of a namespace-only package unless you imported a subpackage first) suggests that the same type of timing dependency should be removed here, too. (Note, for example, that I may not know that importing baz.spam indirectly causes some part of foo.wiz to be imported, and that if I then add another directory to sys.path containing a foo.* portion, my code will *no longer work* when I try to import foo.ham. This is much more "magical" behavior, in least-surprise terms!)

- The constraints on sys.path and package __path__ objects can and should be removed, by making the dynamic path objects refer to a module and attribute, instead of directly referencing parent __path__ objects. Code that currently manipulates __path__ will not break, because such code will not be using PEP 420 namespace packages anyway (and so, __path__ will be a list). (Even so, the most common __path__ manipulation idiom is "__path__ = pkgutil.extend_path(...)" anyway!)

- Namespace packages are a widely used feature of setuptools, and AFAIK nobody has *ever* asked to stop dynamic additions to namespace __path__, but a wide assortment of things people do with setuptools rely on dynamic additions under the hood. Providing the feature in PEP 420 gives a migration path away from setuptools, at least for this one feature. (Specifically, it does away with the need to use declare_namespace(), and the need to do all sys.path manipulation via setuptools' requirements API.)
- Self-contained packages (with __init__.py) and fixed __path__ lists can and should be considered the "magic" or "special case" parts of importing in Python 3, even though we're accustomed to them being central import concepts in Python 2. Modules and namespace packages can and should be the default case from an instructional POV, and sys.path updating should reflect this. (That is, future tutorials should introduce modules, then namespace packages, and finally self-contained packages with __init__ and __path__, because the *idea* of a namespace package doesn't depend on __path__ existing in the first place; it's essentially only a historical accident that self-contained packages were implemented in Python first.)
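For context: the dynamic-path behavior argued for in this thread is what PEP 420 ultimately specified, and it can be demonstrated in any Python 3.3+ interpreter. The sketch below (directory and module names are invented for illustration) builds two path entries that each contribute a portion of the same namespace package — with no __init__.py anywhere — and shows that a portion added to sys.path *after* the first import is still found:

```python
import os
import sys
import tempfile

# Build two path entries, each contributing one portion of the
# namespace package "ns" (note: no __init__.py anywhere).
root = tempfile.mkdtemp()
for entry, module in [("site_a", "alpha"), ("site_b", "beta")]:
    pkg_dir = os.path.join(root, entry, "ns")
    os.makedirs(pkg_dir)
    with open(os.path.join(pkg_dir, module + ".py"), "w") as f:
        f.write("NAME = %r\n" % module)

# Import one portion first...
sys.path.append(os.path.join(root, "site_a"))
import ns.alpha
print(ns.alpha.NAME)  # alpha

# ...then extend sys.path *after* "ns" has already been imported.
# Thanks to the dynamically computed __path__, the new portion is found.
sys.path.append(os.path.join(root, "site_b"))
import ns.beta
print(ns.beta.NAME)  # beta
```

Under the pre-420 setuptools scheme, that second import would have required re-declaring the namespace after the sys.path change; here it just works.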
https://mail.python.org/archives/list/python-dev@python.org/message/LI7SU7YPUA3SQJUDAVIIPGMIY2JOJ433/
In this article we will learn about one of the reusable object-oriented features of C#: polymorphism. We will learn it from the basics, because I wrote this article focusing on students and beginners. Before proceeding further, please refer to my previous articles for better understandability.

Let us begin learning about polymorphism with a definition.

What polymorphism is

Polymorphism means one thing having many (poly) forms. Consider this example: a vehicle is something that has various forms — two-wheeler, three-wheeler, four-wheeler and so on. So how do we differentiate each form in the preceding example? By using wheels (parameters).

There are two types of polymorphism; they are:

- Compile time polymorphism
- Run time polymorphism

Compile time polymorphism

Compile time polymorphism is done at compile time. The following are examples of compile time polymorphism:

- Method overloading
- Operator overloading

Method overloading

Creating multiple methods in a class with the same name but different parameters and types is called method overloading. Method overloading can be done in any of the following ways:

- By changing the number of parameters used.
- By changing the order of parameters.
- By using different data types for the parameters.

For example:

namespace BAL
{
    public class Methodoveloading
    {
        public int add(int a, int b)                          // Method 1
        {
            return a + b;
        }

        public int add(int a, int b, int c)                   // Method 2
        {
            return a + b + c;
        }

        public float add(float a, float b, float c, float d)  // Method 3
        {
            return a + b + c + d;
        }
    }
}

In the preceding example there are three methods with the same name but different numbers and types of parameters. This is called method overloading. In my next article we will see a method overloading program in detail.

Run time polymorphism

Run time polymorphism happens at run time; in other words, the method that executes is selected at run time. Runtime polymorphism can be done using method overriding.
Method overriding

Creating a method in a derived class with the same name, same parameters and same return type as in the base class is called method overriding.

- Method overriding is only possible in a derived class, not within the same class where the method is declared.
- Only methods declared in the base class using the virtual keyword or abstract keyword can be overridden in the derived class.

For example:

namespace BAL
{
    public class methodoverriding
    {
        public virtual int balance()
        {
            return 10;
        }
    }

    public class Amount : methodoverriding
    {
        public override int balance()
        {
            return 500;
        }
    }
}

Output: 10 and 500

In the preceding program we declare a virtual method that returns 10, and we override that same method in the class Amount using the override keyword, so that at run time it returns 500. The method's name and signature stay the same, so the example above shows runtime polymorphism. In my next article we will see a runtime polymorphism program in detail.

Difference between method overloading and method overriding

Method overloading: creating multiple methods in a class with the same name but different parameters and types.

Method overriding: creating a method in a derived class with the same name, the same parameters and the same return type as in a base class.

Difference between virtual method and abstract method

Some of the FAQs on polymorphism are:

Question: Can method overloading have the same number of parameters with different return types?
Ans: No; the return type alone does not distinguish overloads, so a conflict occurs when resolving which method the passed parameters match.

Question: What is operator overloading?
Ans: We can redefine operators like +, - and * with additional functionality.

Summary

I hope you have learned an overview of polymorphism and its types. In the next article we will learn each polymorphism type in detail. If you have any suggestions regarding this article then please contact me.
http://www.compilemode.com/2015/05/polymorphism-in-C-Sharp.html
GraphQL Node Types Creation

This documentation isn't up to date with the latest schema customization changes. Outdated areas are:

- the inferObjectStructureFromNodes function doesn't exist anymore
- setFieldsOnGraphQLNodeType has been deprecated due to the new createTypes action
- file node creation has been moved away from setFileNodeRootType

You can help by making a PR to update this documentation.

Gatsby creates a GraphQLObjectType for each distinct node.internal.type that is created during the source-nodes phase. Find out below how this is done.

GraphQL Types for each type of node

When running a GraphQL query, there are a variety of fields that you will want to query. Let's take an example, say we have the below query:

When GraphQL runs, it will query all file nodes by their relativePath and return the first node that satisfies that query. Then, it will filter down the fields to return by the inner expression, i.e. { childMarkdownRemark ... }. The building of the query arguments is covered by the Inferring Input Filters doc. This section instead explains how the inner filter schema is generated (it must be generated before input filters are inferred).

During the sourceNodes phase, let's say that gatsby-source-filesystem ran and created a bunch of File nodes. Then, different transformers react via onCreateNode, resulting in children of different node.internal.types being created.

There are 3 categories of node fields that we can query:

- Fields on the created node object
- Child/Parent fields
- Fields created by setFieldsOnGraphQLNodeType

Each of these categories of fields is created in a different way, explained below.

gqlType Creation

The Gatsby term for the GraphQLObjectType for a unique node type is gqlType. GraphQLObjectTypes are simply objects that define the type name and fields. The field definitions are created by the createNodeFields function in build-node-types.js. An important thing to note is that all gqlTypes are created before their fields are inferred.
This allows fields to be of types that haven't yet been created due to their order of compilation. This is accomplished by the use of fields being a function (basically lazy functions).

The first step in inferring GraphQL Fields is to generate an exampleValue. It is the result of merging all fields of all nodes of the type in question. This exampleValue will therefore contain all potential field names and values, which allows us to infer each field's types. The logic to create it is in getExampleValues.

With the exampleValue in hand, we can use each of its key/values to infer the Type's fields (broken down by the 3 categories above).

Fields on the created node object

These are fields on the node that were created directly by the source and transform plugins. E.g. for the File type, these would be relativePath, size, accessTime etc. The creation of these fields is handled by the inferObjectStructureFromNodes function in infer-graphql-type.js. Given an object, a field could be in one of 3 sub-categories:

- It involves a mapping in gatsby-config.js
- Its value is a foreign key reference to some other node (ends in ___NODE)
- It's a plain object or value (e.g. String, number, etc)

Mapping field

Mappings are explained in the gatsby-config.js docs. If the object field we're generating a GraphQL type for is configured in the gatsby-config mapping, then we handle it specially. Imagine the top level Type we're currently generating fields for is MarkdownRemark.frontmatter, the field we are creating a GraphQL field for is called author, and we have a mapping setup of:

The field generation in this case is handled by inferFromMapping. The first step is to find the type that is mapped to. In this case, AuthorYaml. This is known as the linkedType. That type will have a field to link by. In this case, name. If one is not supplied, it defaults to id.
This field is known as the linkedField. Now we can create a GraphQL Field declaration whose type is AuthorYaml (which we look up in the list of other gqlTypes). The field resolver will get the value for the node (in this case, the author string), and then search through the redux nodes until it finds one whose type is AuthorYaml and whose name field matches the author string.

Foreign Key reference (___NODE)

If not a mapping field, it might instead end in ___NODE, signifying that its value is an ID that is a foreign key reference to another node in redux. Check out the Source Plugin Tutorial for how this works from a user point of view. Behind the scenes, the field inference is handled by inferFromFieldName. This is actually quite similar to the mapping case above.

We remove the ___NODE part of the field name. E.g. author___NODE would become author. Then, we find our linkedNode. I.e. given the example value for author (which would be an ID), we find its actual node in redux. Then, we find its type in processed types by its internal.type. Note that, as with mapping fields, we can define the linkedField too. This can be specified via nodeFieldname___NODE___linkedFieldName. E.g. for author___NODE___name, the linkedField would be name instead of id.

Now we can return a new GraphQL Field object, whose type is the one found above. Its resolver searches through all redux nodes until it finds one with the matching ID. As usual, it also creates a page dependency, from the query context's path to the node ID.

If the foreign key value is an array of IDs, then instead of returning a Field declaration for a single field, we return a GraphQLUnionType, which is a union of all the distinct linked types in the array.
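Stripped of the GraphQL machinery, the ___NODE resolution boils down to a suffix strip plus an ID lookup. A simplified sketch (the node store and field names here are invented stand-ins for Gatsby's redux state, not its actual API):

```javascript
// Hypothetical, simplified stand-in for Gatsby's redux node store.
const nodes = {
  'author-1': { id: 'author-1', internal: { type: 'AuthorYaml' }, name: 'Jane' },
  'post-1': { id: 'post-1', internal: { type: 'MarkdownRemark' }, author___NODE: 'author-1' },
};

// Resolve a "___NODE" field: re-attach the suffix to find the raw value,
// then look the referenced node up by ID in the store.
function resolveForeignKey(node, fieldName) {
  const linkedId = node[fieldName + '___NODE'];
  return nodes[linkedId];
}

const post = nodes['post-1'];
console.log(resolveForeignKey(post, 'author').name); // Jane
```

The real resolver additionally records a page dependency and handles the array/union case, but the ID-based lookup is the core of it.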
The core of this step creates a GraphQL Field object, where the type is inferred directly via the result of typeof. E.g. typeof(value) === 'boolean' would result in type GraphQLBoolean. Since these are simple values, resolvers are not defined (graphql-js takes care of that for us). If however, the value is an object or array, we recurse, using inferObjectStructureFromNodes to create the GraphQL fields.

In addition, Gatsby creates custom GraphQL types for File (types/type-file.js) and Date (types/type-date.js). If the value of our field is a string that looks like a filename or a date (handled by the should-infer functions), then we return the appropriate custom type.

Child/Parent fields

Child fields creation

Let's continue with the File type example. There are many transformer plugins that implement onCreateNode for File nodes. These produce File children that are of their own type. E.g. markdownRemark, postsJson. Gatsby stores these children in redux as IDs in the parent's children field, and then stores those child nodes as full redux nodes themselves (see Node Creation for more). E.g. for a File node with two children, it will be stored in the redux nodes namespace as:

An important note here is that we do not store a distinct collection of each type of child. Rather, we store a single collection that they're all packed into. The benefit of this is that we can easily create a File.children field that returns all children, regardless of type. The downside is that the creation of fields such as File.childMarkdownRemark and File.childrenPostsJson is more complicated. This is what createNodeFields does.

Another convenience Gatsby provides is the ability to query a node's child or children, depending on whether a parent node has 1 or more children of that type.

child resolvers

When defining our parent File gqlType, createNodeFields will iterate over the distinct types of its children, and create their fields. Let's say one of these child types is markdownRemark.
Let's assume there is only one markdownRemark child per File. Therefore, its field name is childMarkdownRemark. Now, we must create its GraphQL resolver. The resolve function will be called when we are running queries for our pages. A query might look like:

To resolve file.childMarkdownRemark, we take the node we're resolving, and filter over all of its children until we find one of type markdownRemark, which is returned. Remember that children is a collection of IDs, so as part of this we look each node up by ID in redux too.

But before we return from the resolve function, remember that we might be running this query within the context of a page. If that's the case, then whenever the node changes, the page will need to be rerendered. To record that fact, we call createPageDependency with the node ID and the page, which is a field in the context object in the resolve function signature.

parent field

When a node is created as a child of some node (parent), that fact is stored in the child's parent field, the value of which is the ID of the parent. The GraphQL resolver for this field looks up the parent by that ID in redux and returns it. It also creates a page dependency, to record that the page being queried depends on the parent node.

Plugin fields

These are fields created by plugins that implement the setFieldsOnGraphQLNodeType API. These plugins return full GraphQL Field declarations, complete with type and resolve functions.

File types

As described in the plain object or value field section, if a string field value looks like a file path, then we infer File as the field's type. The creation of this type occurs in type-file.js setFileNodeRootType(). It is called just after we have created the gqlType for File (only called once). It creates a new GraphQL Field Config whose type is the just-created File gqlType, and whose resolver converts a string into a File object.
Here’s how it works: Say we have a data/posts.json file that has been sourced (of type File), and then the gatsby-transformer-json transformer creates a child node (of type PostsJson) Notice that the image value looks like a file. Therefore, we’d like to query it as if it were a file, and get its relativePath, accessTime, etc. The File type resolver takes care of this. It gets the value ( images/BdiU-TTFP4h.jpg). It then looks up this node’s root NodeID via Node Tracking which returns the original data/posts.json file. It creates a new filename by concatenating the field value onto the parent node’s directory. I.e data + images/BdiU-TTFP4h.jpg = data/images/BdiU-TTFP4h.jpg. And then finally it searches redux for the first File node whose path matches this one. This is our proper resolved node. We’re done!
https://www.gatsbyjs.org/docs/schema-gql-type/
getchar_unlocked — Get next character or word from input stream

Description

The fgetc function obtains the next input character (if present) from the stream pointed at by stream, or the next character pushed back on the stream via ungetc. The unlocked variants (getc_unlocked, getchar_unlocked) behave the same, except that the caller is responsible for locking the stream with flockfile before calling them. These functions may be used to avoid the overhead of locking the stream for each character, and to avoid input being dispersed among multiple threads reading from the same stream.

Example - Get next character or word from input stream

#include <stdio.h>

int main()
{
    int ch;
    FILE *input;

    if ((input = fopen("fred.txt", "rt")))
    {
        ch = getc(input);
        while (ch != EOF)
        {
            printf("%c", ch);
            ch = getc(input);
        }
        fclose(input);
    }
    return 0;
}
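A sketch of the unlocked variants in use, assuming a POSIX system (where flockfile, funlockfile and getc_unlocked are available): lock the stream once, read many characters with getc_unlocked, then unlock.

```c
/* Feature-test macro so glibc declares the POSIX unlocked-stdio functions. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>

/* Echo a file to stdout one character at a time, using the unlocked
 * getc variant. We take the stream lock once with flockfile() instead
 * of paying for an implicit lock/unlock on every character.
 * Returns the number of characters read, or -1 if the file can't be opened. */
int echo_file(const char *path)
{
    FILE *input = fopen(path, "rt");
    if (input == NULL)
        return -1;

    int ch, count = 0;
    flockfile(input);                            /* lock once... */
    while ((ch = getc_unlocked(input)) != EOF)   /* ...read without per-call locking... */
    {
        putchar(ch);
        count++;
    }
    funlockfile(input);                          /* ...unlock when done */
    fclose(input);
    return count;
}
```

A caller would use it like the getc example above: echo_file("fred.txt") prints the file and returns its length in characters.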
http://www.codecogs.com/library/computing/c/stdio.h/getc.php?alias=getchar_unlocked
Hi everyone,

Say I have a MovieClip that has nested MovieClips, for example:

person_mc > body_mc > toe_mc > toe_nail_mc > leg_mc > hand_mc

Then, I want to export this person_mc and use AS3 to move the body-part _mc instances around. How do I reference the body parts? Ideally, I am thinking there might be such a thing as:

public class ToeNail {}

public class Toe {
    public var toeNail:ToeNail;
}

// ... and Leg and Hand classes

public class Body {
    public var toe:Toe;
}

public class Person {
    public var body:Body;
    public var leg:Leg;
    public var hand:Hand;
}

Is there such a thing? Otherwise, how do I reference them? Furthermore, what happens if a nested _mc is on some frames and not on some other frames?
https://www.daniweb.com/digital-media/ui-ux-design/threads/188496/as3-export-movieclip-symbol-as-custom-class
PS: The wrapper approach DOES work, because you would then just assume the convention that getParameter() would return an absolute path of where the file was stored on disk. Yes, this has to assume you're using a DiskFileItemFactory, but if you want the simplicity for the caller, that's a reasonable limitation.

Finally, I'm not assuming anything about STRUTS. Controller and Action are general terms having to deal with any decent MVC framework. There's nothing that says you couldn't have just Servlet classes, and have a BaseServlet which you would always extend. The bottom line is that whatever pattern you use for your servlet, you can do this. All you would have to do would be to make sure to create the FileUpload, if necessary, and call the UploadManager.put method.

The wrapper class without proxies is problematic due to API changes, and because you're only interested in the few methods whose behavior you would need to override. Why should your code have to adapt if some method irrelevant to what you're doing is changed or added?

I've given two simplistic approaches. Either could be implemented within a reasonably small amount of code (1-2 classes). Why not provide them to the users, and let them use it if they want?

-----Original Message-----
From: Martin Cooper [mailto:martinc@apache.org]
Sent: Tuesday, April 06, 2004 5:32 PM
To: commons-dev@jakarta.apache.org
Subject: Re: COMMONS UPLOAD QUESTION "request.getParameter()"

"Inger, Matthew" <Inger@Synygy.com> wrote in message news:F369B5B779D1794FB19653920D62D3BD045772@kossuth.synygy.net...

> One idea:
>
> First, create a container to hold the upload instances, using a ThreadLocal
> variable to key the instances by thread, and ensure that they don't survive
> beyond the thread's lifespan. ThreadLocal is also synchronized, so all the
> main concerns have been taken care of.

Whoa! First proxies and now thread local - this just seems much more complicated than it needs to be.
As I mentioned before, I'm much more inclined to use an HttpServletRequestWrapper approach, and make that an optional add-on for Servlet 2.3 environments. Note that, with a thread local approach, you would need a way to clean up the thread local data after each request, not just when the thread goes away. Otherwise you'd still have it hanging around when another request comes in and is handled on the same thread. That's going to be error-prone.

> final class UploadManager {
>     private static ThreadLocal uploads;
>
>     static {
>         uploads = new ThreadLocal();
>     }
>
>     public static final void put(FileUpload upload) {
>         uploads.set(upload);
>     }
>
>     public static final FileUpload get() {
>         return (FileUpload)uploads.get();
>     }
> }
>
> Then, in your controller, create the FileUpload, and put
> it into the UploadManager:
>
> if (isFileUpload) {
>     FileUpload upload = ...;
>     UploadManager.put(upload);
> }
>
> Finally, in your action handler:
>
> FileUpload upload = UploadManager.get();
> if (upload != null) {
>     ...
> }
>
> Personally, I prefer the wrapper approach as it's neater. Not to
> mention, everywhere you expect an upload, you're going to have to
> have an "if" statement to pull your parameters. In the best case, you'll
> have a utility method to do it for you based on whether the upload item
> for the current thread is present or not.

The wrapper approach doesn't entirely work for uploaded file items, at least not without casting. You still need something more for file items, because getParameter() returns a String, and, well, file items aren't strings. ;-)

By the way, are you assuming a particular usage scenario? The above code looks like you're assuming Struts, because you mention a controller and action handlers. We have no particular usage model for FileUpload beyond a servlet.
-- Martin Cooper
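The cleanup issue Martin raises can be addressed by pairing the thread-local with an explicit remove step at the end of each request, e.g. in a filter's finally block. A minimal sketch, with invented names and Object standing in for FileUpload so it is self-contained:

```java
// Hypothetical holder; in real code the value would be a FileUpload.
final class UploadHolder {
    private static final ThreadLocal<Object> uploads = new ThreadLocal<Object>();

    public static void put(Object upload) {
        uploads.set(upload);
    }

    public static Object get() {
        return uploads.get();
    }

    // The step the plain ThreadLocal approach is missing: clear the slot
    // when the request ends, so a pooled thread doesn't carry the value
    // into the next request it services.
    public static void clear() {
        uploads.remove();
    }
}
```

A servlet filter would then call put(...) before dispatching the request and clear() in a finally block afterwards, which removes the stale-value hazard on pooled threads.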
http://mail-archives.apache.org/mod_mbox/commons-dev/200404.mbox/%3CF369B5B779D1794FB19653920D62D3BD04578A@kossuth.synygy.net%3E
Hey @Greg, this is possible. You can use the random module's choice function. It will choose a random element from a list or, in this case, from the dictionary's keys:

import random

previous[0] = random.choice(list(Map.keys()))

Note that in Python 3, dict.keys() returns a view object, so wrap it in list() before passing it to random.choice.
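Putting that together as a runnable sketch (the dictionary Map below is an invented stand-in for whatever word table your chain uses):

```python
import random

# Stand-in transition table: word -> words that can follow it.
Map = {
    "the": ["quick", "lazy"],
    "quick": ["brown"],
    "brown": ["fox"],
}

# dict.keys() returns a view in Python 3, so materialize it first.
start = random.choice(list(Map.keys()))
print(start)  # one of: the, quick, brown
```

From there, the chain walks forward by repeatedly choosing a random follower of the current word.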
https://www.edureka.co/community/54032/choose-a-random-starting-word-for-building-markov-chain
When you instantiate a COM object, you are actually working with a proxy known as the Runtime Callable Wrapper (RCW). The RCW is responsible for managing the lifetime requirements of the COM object and translating the methods called on it into the appropriate calls on the COM object. When the garbage collector finalizes the RCW, it releases all references to the object it was holding. For situations in which you need to release the COM object without waiting for the garbage collector to finalize the RCW, you can use the static ReleaseComObject method of the System.Runtime.InteropServices.Marshal type. The following example demonstrates how to change your MSN Instant Messenger friendly name using C# via COM Interop: // RenameMe.cs - compile with: // csc RenameMe.cs /r:Messenger.dll // Run RenameMe.exe "new name" to change your name // as it is displayed to other users. // Run TlbImp.exe "C:\Program Files\Messenger\msmsgs.exe" // to create Messenger.dll using System; using Messenger; class MSNFun { static void Main(string[ ] args) { MsgrObject mo = new MsgrObject( ); IMsgrService ims = mo.Services.PrimaryService; ims.FriendlyName = args[0]; } } You can also work with COM objects using the reflection API. This is more cumbersome than using TlbImp.exe, but is handy in cases in which it's impossible or inconvenient to run TlbImp.exe. To use COM through reflection, you have to get a Type from Type.GetTypeFromProgID( ) for each COM type you want to work with. Then, use Activator.CreateInstance( ) to create an instance of the type. To invoke methods or set or get properties, use the reflection API, which is covered in Chapter 13: using System; using System.Reflection; public class ComReflect { public static void Main( ) { object obj_msword; // Microsoft Word Application Type wa = Type.GetTypeFromProgID("Word.Application", true); // Create an instance of Microsoft Word obj_msword = Activator.CreateInstance(wa); // Use the reflection API from here on in... } }
http://etutorials.org/Programming/C+in+a+nutshell+tutorial/Part+II+Programming+with+the+.NET+Framework/Chapter+18.+Integrating+with+COM+Components/18.2+Exposing+COM+Objects+to+C/
I have code which calculates the nearest voxel (which is assigned) to a voxel (which is unassigned). That is, I have an array of voxels; some voxels already have a scalar value assigned (1, 2, 3, 4, ... etc.), and some voxels are empty (let's say a value of '0'). The code below finds the nearest assigned voxel to an unassigned voxel and assigns that voxel the same scalar. So, a voxel with a scalar '0' will be assigned a value (1, 2, 3, ...) based on the nearest voxel. This code works, but it takes too much time. Is there an alternative to this? Or do you have any feedback on how to improve it further?

# self.voxels is a 3D numpy array
def fill_empty_voxel1(self, argx, argy, argz):
    """argx, argy, argz are the voxel locations where the voxel value is zero"""
    argx1, argy1, argz1 = np.where(self.voxels != 0)  # find the assigned voxels
    a = np.column_stack((argx1, argy1, argz1))        # coordinates of assigned voxels
    b = np.column_stack((argx, argy, argz))           # coordinates of empty voxels
    tree = cKDTree(a)
    distances, ndx = tree.query(b, k=1, distance_upper_bound=self.mean)  # self.mean is a mean radius search value
    argx2, argy2, argz2 = a[ndx][:, 0], a[ndx][:, 1], a[ndx][:, 2]
    self.voxels[argx, argy, argz] = self.voxels[argx2, argy2, argz2]     # update the voxel array

Here is a self-contained version of the same approach:

import numpy as np
from scipy.spatial import cKDTree
import timeit

voxels = np.zeros((10, 10, 5), dtype=np.uint8)
voxels[1:2, :, :] = 5.
voxels[5:6, :, :] = 2.
voxels[:, 3:4, :] = 1.
voxels[:, 8:9, :] = 4.
argx, argy, argz = np.where(voxels == 0)

tic = timeit.default_timer()
argx1, argy1, argz1 = np.where(voxels != 0)
a = np.column_stack((argx1, argy1, argz1))
b = np.column_stack((argx, argy, argz))
tree = cKDTree(a)
distances, ndx = tree.query(b, k=1, distance_upper_bound=5.)
argx2, argy2, argz2 = a[ndx][:, 0], a[ndx][:, 1], a[ndx][:, 2]
voxels[argx, argy, argz] = voxels[argx2, argy2, argz2]
toc = timeit.default_timer()
timetaken = toc - tic  # elapsed time in seconds
print('\nTime to fill empty voxels', timetaken)

from mayavi import mlab
data = voxels.astype('float')
scalar_field = mlab.pipeline.scalar_field(data)
iso_surf = mlab.pipeline.iso_surface(scalar_field)
surf = mlab.pipeline.surface(scalar_field)
vol = mlab.pipeline.volume(scalar_field, vmin=0, vmax=data.max())
mlab.outline()
mlab.show()

You can parallelise the kd-tree query itself with the n_jobs argument:

distances, ndx = tree.query(b, k=1, distance_upper_bound=5., n_jobs=-1)

It would be interesting to try sklearn.neighbors.NearestNeighbors, which offers an n_jobs parameter: "The number of parallel jobs to run for neighbors search." This package also provides the Ball Tree algorithm, which you can test against the kd-tree one; however, my hunch is that the kd-tree will be better (but that again does depend on your data, so research that!).

You might also want to use dimensionality reduction, which is easy. The idea is that you reduce your dimensions, so your data contains less information, and tackling the nearest neighbour problem can be done much faster. Of course, there is a trade-off here: accuracy! You might/will get less accuracy with dimensionality reduction, but it might be worth the try. However, this usually applies in a high-dimensional space, and you are just in 3D, so I don't know if it would make sense for your specific case to use sklearn.decomposition.PCA.

A remark: if you really want high performance, you won't get it with Python; you could switch to C++ and use CGAL, for example.
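If it helps to experiment, the kd-tree fill described above can be wrapped in a small self-contained helper (the function name is made up for illustration) and sanity-checked on a toy grid:

```python
import numpy as np
from scipy.spatial import cKDTree

def fill_empty_voxels(voxels):
    """Return a copy where every zero voxel takes the value of its
    nearest non-zero voxel (Euclidean distance on the index grid)."""
    assigned = np.argwhere(voxels != 0)   # coordinates of assigned voxels
    empty = np.argwhere(voxels == 0)      # coordinates of empty voxels
    tree = cKDTree(assigned)              # index the assigned voxels once
    _, idx = tree.query(empty, k=1)       # nearest assigned voxel per empty voxel
    out = voxels.copy()
    out[tuple(empty.T)] = voxels[tuple(assigned[idx].T)]
    return out
```

Unlike the snippet in the question, this version omits distance_upper_bound, so every empty voxel always gets filled by its nearest neighbour.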
https://codedump.io/share/a84RHtMR4pbo/1/how-can-i-speed-up-nearest-neighbor-search-with-python
Last class we went over some general memory stuff -- we learned that the last address in the code segment is &etext, and the last address in the globals segment is &end. As the program runs, and memory is allocated from the heap using malloc(), the heap grows. To figure out the boundary of the heap, we must use brk() or sbrk(). Both are system calls, and you can read their man pages. We will only discuss sbrk() as it is the only call you will need.

caddr_t sbrk(int incr);

A caddr_t is a "c address pointer". It is the same as a (char *) or a (void *). This specifies for the operating system to give incr more bytes to the heap. It returns a pointer to the end of the heap before sbrk() was called. Thus, the new end of the heap after an sbrk() call is at address

sbrk(incr) + incr;

If you call sbrk(0), then it returns the current end of the heap. Now, malloc() (and the related programs realloc() and calloc()) all call sbrk() to get the memory to allocate in the heap. They are the only routines that call sbrk(). Thus, the only way that you can get memory in the heap is through malloc() or sbrk(). However, you should use malloc(), as it is more efficient.

#include <stdio.h>
#include <sys/types.h>

main()
{
  int *i1, *i2;

  printf("sbrk(0) before malloc(4): 0x%x\n", sbrk(0));
  i1 = (int *) malloc(4);
  printf("sbrk(0) after `i1 = (int *) malloc(4)': 0x%x\n", sbrk(0));
  i2 = (int *) malloc(4);
  printf("sbrk(0) after `i2 = (int *) malloc(4)': 0x%x\n", sbrk(0));
  printf("i1 = 0x%x, i2 = 0x%x\n", i1, i2);
}

This prints sbrk(0) before and after some malloc() calls. Here's the result of running it on hydra3a.

UNIX> fb2
sbrk(0) before malloc(4): 0x21ab0
sbrk(0) after `i1 = (int *) malloc(4)': 0x23ab0
sbrk(0) after `i2 = (int *) malloc(4)': 0x23ab0
i1 = 0x21ac0, i2 = 0x21ad0
UNIX>

Note that sbrk() doesn't change after the malloc() calls. Why? Because malloc() does buffering.
When it calls sbrk() it calls it with a large number -- something like 12K or 8K -- and then it doles out memory from this buffer. In other words, after i1 and i2 are allocated, there is still a whole bunch of memory -- from 0x21ad0 to 0x23ab0 -- that malloc() can use before calling sbrk() again. This is roughly 8160 bytes. Thus, in fb2a.c, when we do a malloc(8164) after allocating i1 and i2, we expect to see that sbrk() was called to get more memory, and indeed this is the case:

UNIX> fb2a
sbrk(0) before malloc(4): 0x21b68
sbrk(0) after `i1 = (int *) malloc(4)': 0x23b68
sbrk(0) after `i2 = (int *) malloc(4)': 0x23b68
i1 = 0x21b78, i2 = 0x21b88, sbrk(0)-i2 = 8160
sbrk(0) after `i3 = (int *) malloc(8164)': 0x25b68
i3 = 0x21f78
UNIX>

Now, look at fb3.c. This calls malloc(4) 10 times and prints out the memory allocated:

#include <stdio.h>

main()
{
  int j, *buf;

  for (j = 0; j < 10; j++) {
    buf = (int *) malloc(4);
    printf("malloc(4) returned 0x%x\n", buf);
  }
}

UNIX> fb3
malloc(4) returned 0x219d0
malloc(4) returned 0x219e0
malloc(4) returned 0x219f0
malloc(4) returned 0x21a00
malloc(4) returned 0x21a10
malloc(4) returned 0x21a20
malloc(4) returned 0x21a30
malloc(4) returned 0x21a40
malloc(4) returned 0x21a50
malloc(4) returned 0x21a60
UNIX>

You'll note that each return value from malloc() is 16 bytes greater than the previous one. You might expect it to be only 4 bytes greater since it is only allocating 4 bytes. What is happening is that malloc() allocates some extra bytes each time it is called so that it can do bookkeeping. These extra bytes help out when free() is called. These extra bytes are often allocated before the returned memory. You'll see why when we start to look at free().

Look at fb4.c. What this does is allocate a whole bunch of memory regions using malloc(), and then it prints out their starting addresses, and the values that are located one and two words (I use "word" to denote a 4-byte quantity) before the starting addresses.
Again, this is the kind of code which (for good reason) most programmers deem as ``unsafe''. However, it's the only way to check out these things. As you can see, two words before the return value from malloc() contains how many bytes were actually allocated. This is a little confusing, so let's look at the output of fb4 in detail. (On different systems, malloc works in different ways. If the output on your system looks different from this, please see this note.)

UNIX> fb4
sbrk(0) = 0x70f8
Allocated 4 bytes.  buf = 0x61a8, buf[-1] = 0, buf[-2] = 16, buf[0] = 1000
Allocated 8 bytes.  buf = 0x61b8, buf[-1] = 0, buf[-2] = 16, buf[0] = 1001
Allocated 12 bytes.  buf = 0x61c8, buf[-1] = 0, buf[-2] = 24, buf[0] = 1002
Allocated 16 bytes.  buf = 0x61e0, buf[-1] = 0, buf[-2] = 24, buf[0] = 1003
Allocated 20 bytes.  buf = 0x61f8, buf[-1] = 0, buf[-2] = 32, buf[0] = 1004
Allocated 24 bytes.  buf = 0x6218, buf[-1] = 0, buf[-2] = 32, buf[0] = 1005
Allocated 28 bytes.  buf = 0x6238, buf[-1] = 0, buf[-2] = 40, buf[0] = 1006
Allocated 100 bytes.  buf = 0x6260, buf[-1] = 0, buf[-2] = 112, buf[0] = 1007
sbrk(0) = 0x70f8
UNIX>

So, look at the heap after the first call to malloc(), and buf[0] is set to i = 1000:

|---------------|
|      ...      |
|               |
|      16       | 0x61a0
|               | 0x61a4
|     1000      | 0x61a8  <--------- return value
|               | 0x61ac
|               | 0x61b0
|               | 0x61b4
|      ...      |
|               |
|               |
|---------------| 0x70f8  (sbrk(0))

Now, when malloc() is called a second time (buf = malloc(8)), malloc() returns 0x61b8. After buf[0] is set to i = 1001, the heap looks as follows:

|---------------|
|      ...      |
|               |
|      16       | 0x61a0
|               | 0x61a4
|     1000      | 0x61a8
|               | 0x61ac
|      16       | 0x61b0
|               | 0x61b4
|     1001      | 0x61b8  <--------- return value
|               | 0x61bc
|               | 0x61c0
|               | 0x61c4
|      ...      |
|               |
|---------------| 0x70f8  (sbrk(0))

And so on -- when the final sbrk(0) is called, the heap looks as follows:

|---------------|
|      ...      |
|               |
|      16       | 0x61a0
|               | 0x61a4
|     1000      | 0x61a8
|               | 0x61ac
|      16       | 0x61b0
|               | 0x61b4
|     1001      | 0x61b8
|               | 0x61bc
|      24       | 0x61c0
|               | 0x61c4
|     1002      | 0x61c8
|               | 0x61cc
|               | 0x61d0
|               | 0x61d4
|      24       | 0x61d8
|               | 0x61dc
|     1003      | 0x61e0
|               | 0x61e4
|               | 0x61e8
|               | 0x61ec
|      32       | 0x61f0
|               | 0x61f4
|     1004      | 0x61f8
|               | 0x61fc
|               | 0x6200
|               | 0x6204
|               | 0x6208
|               | 0x620c
|      32       | 0x6210
|               | 0x6214
|     1005      | 0x6218
|               | 0x621c
|               | 0x6220
|               | 0x6224
|               | 0x6228
|               | 0x622c
|      40       | 0x6230
|               | 0x6234
|     1006      | 0x6238
|               | 0x623c
|               | 0x6240
|               | 0x6244
|               | 0x6248
|               | 0x624c
|               | 0x6250
|               | 0x6254
|     112       | 0x6258
|               | 0x625c
|     1007      | 0x6260
|               | 0x6264
|      ...      |
|               |
|---------------| 0x70f8  (sbrk(0))

So, malloc() calls sbrk() to get memory from the operating system into a buffer, and then it doles out the memory from that buffer as it is called. When it runs out of buffer space, it calls sbrk() to get more.

Why do malloc(4) and malloc(8) allocate 16 bytes, and malloc(12) and malloc(16) allocate 24? Because malloc() pads out the memory allocated to multiples of 8 bytes. Thus malloc(4) and malloc(8) allocate 8 bytes for the user, plus an extra 8 bytes for bookkeeping. Malloc(12) and malloc(16) allocate 16 bytes for the user, plus an extra 8 bytes for bookkeeping, for a total of 24 bytes. Malloc(100) allocates 104 bytes for the user, plus an extra 8 bytes for bookkeeping.

Why does malloc() perform this padding? So that the addresses returned will be multiples of eight, and thus will be valid for pointers of any type. Suppose malloc() didn't do this, and instead could return any pointer. Then if you did the following:

int *i;

i = (int *) malloc(4);
*i = 4;

you might generate a bus error, because malloc() may return a value that is not a multiple of 4. As it is, malloc() returns multiples of eight, so that pointers to doubles and long integers (if the architecture supports them as 8-byte quantities) will not cause bus errors.

How does malloc() know where to dole memory from? It uses a global variable or two.
For example, it may have two global variables defined as follows:

char *malloc_begin = NULL;
char *malloc_end = NULL;

When malloc() is called it first checks to see if malloc_begin == NULL. If so, it calls sbrk() to get a buffer. It uses malloc_begin and malloc_end to denote the beginning and end of that buffer. As malloc() gets called, it doles out memory from the beginning of the buffer and updates malloc_begin accordingly. If there is not enough room in the buffer, then sbrk() is called to get more buffer space, and malloc_end is incremented to denote the enlarged buffer.

Now, this describes how to write a simple malloc() with no free() calls. When free() gets called, you should have malloc() be able to reuse that memory. This means that you have to do something more sophisticated with malloc(). We'll talk about it in Malloc lecture #2. Think about it in the meantime.
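The buffering-and-bookkeeping scheme described in this lecture can be sketched as a toy bump allocator. This is illustrative only: a static array stands in for the sbrk() buffer, nothing here supports free(), and the size word is stored as a single 8-byte header rather than the lecture's two 4-byte words (the layout is equivalent).

```c
#include <assert.h>
#include <stddef.h>

#define HEAP_SIZE 4096

/* long long elements force 8-byte alignment of the buffer itself. */
static long long heap[HEAP_SIZE / sizeof(long long)];
static size_t heap_ptr = 0;   /* byte offset of the next free chunk */

/* Round n up to a multiple of 8, so returned pointers suit any type. */
static size_t round8(size_t n) { return (n + 7) & ~(size_t)7; }

/* Dole out memory like the lecture's malloc(): pad the request to a
   multiple of 8, prepend an 8-byte bookkeeping word holding the padded
   total, and hand back the address just past that word. */
void *toy_malloc(size_t n)
{
    size_t total = round8(n) + 8;            /* user bytes + bookkeeping */
    if (heap_ptr + total > HEAP_SIZE) return NULL;
    char *chunk = (char *)heap + heap_ptr;
    *(size_t *)chunk = total;                /* size word before the user block */
    heap_ptr += total;
    return chunk + 8;
}
```

With this scheme, two consecutive toy_malloc(4) calls return pointers 16 bytes apart, matching the fb3 output above.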
http://web.eecs.utk.edu/~huangj/cs360/360/notes/Malloc1/lecture.html
Get all the active namespaces (rospy | roscpp)

In my project, multiple robots may be spawned both at launch and at runtime, each under a certain namespace. Generally, that is 'robot_ID' where ID == a robot-unique integer, but nothing stops the user from selecting their own namespace. I need to programmatically retrieve a list of the namespaces currently active, from a node that is in the global namespace. That must be like that. When I launch my project, this gets printed on std rosout:

NODES
  /robot_2/   # <- NAMESPACE
    amcl (amcl/amcl)
    move_base (move_base/move_base)
    robot_state_publisher (robot_state_publisher/robot_state_publisher)
    spawn_urdf (gazebo_ros/spawn_model)
  /robot_1/   # <- NAMESPACE
    amcl (amcl/amcl)
    move_base (move_base/move_base)
    robot_state_publisher (robot_state_publisher/robot_state_publisher)
    spawn_urdf (gazebo_ros/spawn_model)
  /           # <- STANDARD NAMESPACE
    gazebo (gazebo_ros/gzserver)
    map_loader (map_server/map_server)
    rviz (rviz/rviz)

So this leads me into thinking that there is a way to do so. I'm mostly using Python, but for this purpose only I would also be willing to use a C++ node.

EDIT: I've marked what I would like to retrieve from the above piece of output.

Can you provide more details about what you want to achieve, please? If you just want to list the namespaces without processing the list, then a simple bash/python script parsing the output of rosnode list would work.

I thought it was clear, I'm sorry. I would like to retrieve a "list" of all the namespaces, i.e. all the "prefixes" currently applied to nodes. In the rosout, at launch, the summary shown above gets printed; I want to know if there is a direct/indirect way to get all the shown namespaces. Parsing the output of rosnode list is an option, but that requires some work, meaning that you have to do string comparison to find recurring pieces of string in all the nodes. Not much of a deal, but I was asking if there was a better way.
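One workable sketch is to derive the namespaces from the fully-qualified node names themselves. The helper name below is made up for illustration, and the node list is hard-coded; in a live graph it would come from parsing `rosnode list` or from the rosnode Python API:

```python
def namespaces_of(node_names):
    """Return the set of namespaces found in fully-qualified node names,
    using '/' for the global namespace."""
    namespaces = set()
    for name in node_names:
        ns = name.rsplit('/', 1)[0]   # drop the node's own name
        namespaces.add(ns if ns else '/')
    return namespaces

# Hard-coded here for illustration; in a running system this list would
# come from `rosnode list` (or rosnode.get_node_names() in Python).
nodes = [
    '/robot_1/amcl', '/robot_1/move_base',
    '/robot_2/amcl', '/robot_2/move_base',
    '/gazebo', '/map_loader', '/rviz',
]
print(sorted(namespaces_of(nodes)))   # ['/', '/robot_1', '/robot_2']
```

Nested namespaces would show up as longer prefixes (e.g. '/fleet/robot_1'), so the same split works regardless of the namespace the user picked.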
https://answers.ros.org/question/340189/get-all-the-active-namespaces-rospy-roscpp/
Hey guys, I'm working on a homework problem which requires two things:
- count the total number of clumps
- determine the longest clump

I finally made my code recognize the total number of clumps, however I still can't figure out how to determine the longest clump. Any help would be great! Thanks :)

#include <iostream>
#include <string>
using namespace std;

int countClumps(int k, string s, string & longest);

int main()
{
    string s, string2;
    int length;
    char play;

    do {
        cout << "Enter a minimum clump length of 2 or more: ";
        cin >> length;
        while (length <= 1) {
            cout << "ILLEGAL VALUE!!!!" << endl;
            cout << "Please enter a minimum clump length of 2 or more:";
            cin >> length;
        }
        cout << "Enter one or more words each having at least ";
        cout << "2 characters. When you want to quit, enter any word with fewer than 2 characters." << endl;
        cin >> s;
        cout << countClumps(length, s, string2) << endl;
        cout << "string2 is" << string2 << endl;
        cout << "play again";
        cin >> play;
    } while (play != 'n');
    return 0;
}

int countClumps(int k, string s, string & longest)
{
    longest = "";
    int i = 1, g = 0, total = 0;

    while (g < s.length()) {
        while (i < s.length() && s[g] == s[i]) {
            i++;
        }
        int y = i - g;
        int p = g;
        if (y >= k) total++;
        if (y > longest.length()) {
            longest = s[g], s[y];
        }
        g = i;
    }
    return total;
    //cout << s << "has " << total << " clumps." << endl;
}
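For the "longest clump" part, the culprit is the line `longest = s[g], s[y];`: the comma operator assigns only the single character s[g] to longest (and then discards s[y]). To copy the whole run, use substr. Here is a minimal sketch of countClumps with that fix (the loop is rewritten slightly, but the clump definition is the same as above):

```cpp
#include <cassert>
#include <string>
using namespace std;

// Count runs (clumps) of length >= k and record the longest run seen.
// Key fix versus the posted code: copy the whole run with s.substr(g, y)
// instead of the comma expression `longest = s[g], s[y];`.
int countClumps(int k, const string& s, string& longest)
{
    longest = "";
    int total = 0;
    size_t g = 0;
    while (g < s.length()) {
        size_t i = g + 1;
        while (i < s.length() && s[i] == s[g]) i++;   // extend the current run
        size_t y = i - g;                             // length of this run
        if (y >= (size_t)k) total++;
        if (y > longest.length()) longest = s.substr(g, y);
        g = i;                                        // jump to the next run
    }
    return total;
}
```

For example, countClumps(2, "aabbbcddd", longest) returns 3 and leaves longest as "bbb" (the first run of maximal length).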
https://www.daniweb.com/programming/software-development/threads/268956/how-to-determine-the-longest-clump
Editor’s note: This post was last updated 29 July 2021. Some information may still be out of date.

What is Framer Motion?

If you’re like me, your first thought when you read this headline might be, “Why do we need yet another animation library for React? This is getting tiring!” Think of Framer Motion as more of an improvement or reinvention of an existing animation library than a brand new one. Framer Motion is the successor to Pose, which was one of the most popular animation libraries used with React. Like Pose, it’s built upon Popmotion, which is a low-level, unopinionated animation library, but it provides abstractions to streamline the process. Framer Motion improves upon and simplifies the API in a way that couldn’t have been done without breaking changes and rewriting. One difference is that whereas Framer Motion only has support for React, Pose has support for React Native and Vue. If you’re currently using Pose, I would recommend updating to Framer Motion because Pose has been deprecated.

To learn more about Framer Motion, our tutorial will cover these topics:

- What are spring animations?
- Why use Framer Motion?
- How Framer Motion works
- How do you use Framer Motion in React?
- SVG animation in Framer Motion
- Dragging in Framer Motion

What are spring animations?

Framer Motion uses spring animations by default. You’ll need to customize these if you want different behavior. Spring animations have gained tremendous popularity in recent years. The alternative is easing animations, which you create with CSS, e.g., 1s ease-in. Easing animations simply don’t look natural or realistic with their behavior and set duration. Spring animations apply the laws of physics to produce smoother, more natural animations. They do this through physics principles such as momentum. They don’t simply reach and stop at the end state; they sort of bounce past and settle into place. Two values are used to define a spring animation: stiffness and damping.
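The role of these two values can be sketched with a toy damped-spring integrator in plain JavaScript (an illustration of the physics only; Framer Motion's real solver is more elaborate, and the constants here are arbitrary):

```javascript
// Minimal damped-spring simulation: x is displacement from the target,
// and each step applies the spring force (-stiffness * x) plus a
// velocity-proportional damping force (-damping * v).
function springSteps(stiffness, damping, steps) {
  let x = 1;           // start one unit away from the target
  let v = 0;           // initial velocity
  const dt = 1 / 60;   // one step per animation frame
  const out = [];
  for (let i = 0; i < steps; i++) {
    const force = -stiffness * x - damping * v;
    v += force * dt;   // semi-implicit Euler integration
    x += v * dt;
    out.push(x);
  }
  return out;
}
```

With damping below the critical value, the displacement overshoots zero and oscillates before settling, which is exactly the "bounce past and settle into place" behavior described above; higher damping kills the bounce.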
Tweaking these values will make the animation behave differently.

Why use Framer Motion?

If most animation libraries use spring-based animations, then why should you use Framer Motion? For starters, it has a great API that is simple and doesn’t fill your components with extra code. In most cases, you can simply replace your HTML element with a motion element (for example, div with motion.div), which results in the same markup but has additional props for animation.

The documentation and examples on Framer Motion’s official website are the best I’ve seen in an animation library. For most common use cases, you can find a CodeSandbox example in the documentation. Some are simple, some more complex, but they all give you an excellent understanding and enable you to tweak to build your own solution. Framer Motion can also handle SVG animations, unlike most libraries. Lastly, since it is part of Framer and integrates with the Framer X design tool, using both tools can help make your workflow smoother and more efficient (I don’t have any experience with this personally, but I imagine they are great together).

How Framer Motion works

As mentioned, Framer Motion replaces HTML elements with motion elements. There is a motion element for every HTML and SVG element (e.g., <motion.div>). These motion elements hook into Framer Motion and accept additional props which define animation behavior. The simplest of these props is animate, and it is also the one you will be reaching for most often. Any valid value you pass to animate will cause the component to animate to that state upon mount.

<motion.div animate={{ x: '100px' }}>
  Weeee I'm animated
</motion.div>

This will cause the div to slide 100 pixels to the right when loaded. If you want it to animate through a series of states, you can use keyframes, which are an array of state values.
<motion.div animate={{ x: ['100px', '0px', '100px'] }}>
  Weeee I'm animated
</motion.div>

This will slide it 100 pixels to the right, back to its original position, and then 100 pixels back to the right. By default, these all take equal amounts of time, but you can tweak that setting using the times prop if desired.

By default, an element will animate from where it is naturally styled to the state defined in animate. In some cases, you will want to define the state at which the animation starts. Take opacity, for example. Let’s say you want it to start not from 0 but rather a visible value such as 0.5. This can be defined by passing the state to the initial prop. If you pass a value of false, the initial state will be the value in animate and no initial animation will occur.

<motion.div
  initial={{ opacity: 0.5 }}
  animate={{ opacity: 1 }}
>
</motion.div>

Although I mentioned that Framer Motion is a spring-based library, that’s not technically 100 percent true. For animations that don’t involve movement, springs aren’t possible. These use tween animations, such as for opacity and color. In addition, inertia is calculated from the initial velocity and used for dragTransition. Animations can be changed to use a different type or tweaked in other ways via the transition prop. If, for example, you want to set the duration to 2 and use the tween style, you would pass the following.

<motion.div
  animate={{ x: '100px' }}
  transition={{ type: 'tween', duration: 2 }}
>
  Weeee I'm animated
</motion.div>

There are other powerful transitions available, such as staggerChildren and delayChildren. These allow you to easily perform what have traditionally been complex animations involving multiple child elements. Often you want an animation to fire when an element is clicked, hovered over, or otherwise interacted with. Thankfully, Framer Motion has built-in props called gestures. The gestures available are hover, tap, pan, and drag.
hover and tap are prefixed with while and, as you might expect, apply the animation only while hovered or tapped. When the user is no longer hovering or tapping, it will revert back to the default style with a smooth animation.

drag and pan work differently. Enabling dragging allows you to drag the element anywhere on the page, and it will remain in the location it is dragged to. The dragging feels very natural with the default physics; dragging builds up momentum and, when released, the element continues in that direction depending on how hard and fast you’re dragging.

These are what I consider the foundational building blocks of Framer Motion. With these techniques, you can build almost any type of animation you could think of. More advanced techniques are necessary to do things such as animate the unmounting of components, but we’ll cover some of these later with an example.

How do you use Framer Motion in React?

That’s enough about the library itself. Now let’s build a practical example to showcase common techniques and help us see this library in action. Starting from a basic React application (such as Create React App), we need to install Framer Motion. We will also install a package named uuid to create unique ids.

npm install framer-motion uuid

Our basic application will look like this, and we’ll write our application in App.

// index.js
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";
import "./styles.css";

ReactDOM.render(
  <React.StrictMode>
    <App />
  </React.StrictMode>,
  document.getElementById("root"),
);

styles.css has just a few rules for some basic styles to work with.

/* styles.css */
html, body, #root {
  height: 100%;
  width: 100%;
}

.app {
  height: 100%;
  width: 100%;
}

.list {
  display: flex;
  flex-wrap: wrap;
  width: 1200px;
}

img {
  /* disables dragging on img elements */
  pointer-events: none;
}

We’ll also use Bootstrap to shortcut the styling.
Add the Bootstrap stylesheet link to the head of your HTML.

Here’s our starting App.js. We’ll have a list of items/cards where additional items can be added or existing ones removed. They’ll span multiple rows and columns and you’ll be able to reorder them by dragging. I’ve included random images from Unsplash to make it more interesting.

import React, { useState } from 'react';
import { motion } from 'framer-motion';
import { v4 as uuid } from 'uuid';

const App = () => {
  const [cards, setCards] = useState(defaultCards);

  async function addCard() {
    const res = await fetch('
    if (!res.ok) return;
    const id = uuid();
    const card = { id, img: res.url };
    setCards([...cards, card]);
  }

  function removeCard(id) {
    setCards(cards.filter((card) => card.id !== id));
  }

  return (
    <div className='app'>
      <div>
        <h1>Sweet Animations!</h1>
      </div>
      <button className="btn btn-primary" onClick={addCard}>
        Add Card
      </button>
      <div className='list'>
        {cards.map(card => (
          <Card card={card} setCards={setCards} removeCard={removeCard} key={card.id} />
        ))}
      </div>
    </div>
  )
}

const Card = ({ card, setCards, removeCard }) => {
  function handleRemove() {
    removeCard(card.id);
  }

  return (
    <div className="card" style={{ width: "18rem" }}>
      <img src={card.img} />
      <div className="card-body">
        <h5 className="card-title">Cool Image</h5>
      </div>
    </div>
  );
}

const defaultCards = [
  { id: 0, img: " },
  { id: 1, img: " },
];

Let’s start by adding an animation to play when a new image is added. First, we’ll need to convert our Card component to use a motion.div rather than div so that we have access to all the animations.

const Card = ({ card, removeCard }) => {
  function handleRemove() {
    removeCard(card.id);
  }

  return (
    <motion.div>
      <img src={card.img} />
      <div className="card-body">
        <h5 className="card-title">Cool Image</h5>
      </div>
    </motion.div>
  );
};

We’ll then need to add both an initial value and an animate value to our motion.div. We’ll make it slide and fade in from the left.
initial={{ x: "-300px", opacity: 0 }}
animate={{ x: 0, opacity: 1 }}

SVG animation in Framer Motion

Animating SVGs is one of my favorite features of Framer Motion. We’ll add a remove button, which will be a motion.svg with motion.paths. You can animate the pathLength, pathSpacing, and pathOffset properties of a path element.

<motion.div>
  <motion.svg>
    <motion.path />
    <motion.path />
  </motion.svg>
  <img src={card.img} />
  <div className="card-body">
    <h5 className="card-title">Cool Image</h5>
  </div>
</motion.div>

Let’s add a couple of animations to this SVG. On mount, it will draw the two lines one after another. On hover, it will thicken the lines. To achieve this effect, we need the parent motion.svg and child motion.paths to be linked so we can stagger the animations of the child paths. We can do this using a Framer Motion prop called variants. variants accepts an object with properties of defined states. You can then pass these as the value of animation states, such as animate or initial. These values will automatically be passed down to children unless overridden.

// example usage
<motion.div variants={{ mount: { opacity: 0.5 }, rest: { opacity: 1 } }} />

We’ll focus on three states: initial, animate, and hover. Within animate, we also need to define a transition, which will determine how long the animation takes and the staggering of children. staggerChildren defines how much time should pass between each child’s animation starting. when defines the point at which these animations should begin. A value of afterChildren means that the stagger delay will occur after the child animation. beforeChildren means the opposite: waiting the defined time before the animation. The major difference here is on the first animation, whether it starts immediately (afterChildren) or waits briefly (beforeChildren). We don’t want to delay the first child’s animation, so we’ll use afterChildren.
const variants = {
  initial: {
    strokeWidth: 2,
    pathLength: 0,
  },
  animate: {
    pathLength: 1,
    transition: { duration: 1, when: "afterChildren", staggerChildren: 1 },
  },
  hover: {
    strokeWidth: 4,
  },
};

<motion.svg
  xmlns="http://www.w3.org/2000/svg"
  width="24"
  height="24"
  viewBox="0 0 24 24"
  fill="none"
  stroke="#333"
  strokeWidth="2"
  strokeLinecap="round"
  strokeLinejoin="round"
  variants={variants}
  initial="initial"
  animate="animate"
  whileHover="hover"
  onClick={handleRemove}
>
  <motion.path d="M 18 6 L 6 18" variants={variants} />
  <motion.path d="M 6 6 L 18 18" variants={variants} />
</motion.svg>

Similar to the animation we created on mount, we want to create one for the exit animation, which will slide up and fade out. When a Card is removed, React removes the component from the DOM. Because of this, we need to wrap our Card components with a special component, AnimatePresence. This allows Framer Motion to track when the component is unmounting and delay it until the animation is finished. We can actually wrap the list of components rather than wrapping each individually, but we need to make sure they are the direct children.

<div className="list">
  <AnimatePresence>
    {cards.map((card) => (
      <Card key={card.id} card={card} removeCard={removeCard} />
    ))}
  </AnimatePresence>
</div>

Once wrapped by this component, the motion.div accepts an exit prop, which will be the state animated to on exit.

<motion.div
  initial={{ x: "-300px", opacity: 0 }}
  animate={{ x: 0, opacity: 1 }}
  exit={{ y: '-300px', opacity: 0 }}
>
</motion.div>

When a Card is removed, let’s say we want the other Cards to slide across rather than jump into position. Framer Motion makes this typically difficult task a breeze. A motion component accepts a prop, layout, which will automatically animate any changes to its position in the layout.
<motion.div
  initial={{ x: "-300px", opacity: 0 }}
  animate={{ x: 0, opacity: 1 }}
  exit={{ y: '-300px', opacity: 0 }}
  layout
>
</motion.div>

One thing you may notice is that, unfortunately, the animations to the new positions won’t occur until the removed item’s animation is complete. The item still exists in the DOM until the animation is complete and, therefore, the position of the other items has yet to change. This is not an easy issue to solve, and the author of Framer Motion has acknowledged it as a difficult case. It’s unsure whether the library will provide a solution to this.

Thankfully, you can get around this by adding some conditional styling to the items, which will remove them from the layout of the DOM while the exit animation is occurring. This allows the position transitions to occur simultaneously. To achieve this, we’ll need to know when the card is being removed and its position at this time.

Framer Motion has two inbuilt hooks which are available to children of AnimatePresence: usePresence and useIsPresent. Both of these hooks return isPresent, which informs you of when the item has been removed but before any exit animations have occurred. usePresence also returns a safeToRemove function, which you have to manually call to fully remove the element from the DOM. In this scenario we only need the isPresent variable, so we will use useIsPresent and Framer Motion will handle removing the element.

To get the card’s location, we will use a ref, which is React’s method for accessing DOM nodes, and getBoundingClientRect. When removing an item, it will first update position. When isPresent is false, we’ll style the Card with position: absolute, removing it from the layout and setting left and top to the calculated positions.
const Card = ({ card, removeCard }) => { const [position, setPosition] = useState({ left: 0, top: 0 }) const cardRef = useRef(null); const isPresent = useIsPresent(); function handleRemove() { const position = cardRef.current.getBoundingClientRect(); setPosition(position); removeCard(card.id); } return ( <motion.div initial={{ x: "-300px", opacity: 0 }} animate={{ x: 0, opacity: 1 }} exit={{ y: "-300px", opacity: 0 }} layout // ... svg <div className="card-body"> <h5 className="card-title">Cool Image</h5> </div> </motion.div> ); } Dragging in Framer Motion Another common feature is the ability to reorder a list of items by dragging them. We’ll enable a drag to move a card to the left/right or up/down. First, we need to enable dragging with the drag prop. We’ll allow dragging in any direction, but if you had a vertical/horizontal-only list, you could constrain the direction with drag='x' or drag='y'. Note: Since we’re using an image, we’ll also have to disable the natural drag on an *img* element. Depending on your browser support, you can do this a few ways. Here we kept it simple with some CSS to remove pointer events, which is already done in our CSS file. <motion.div // other properties drag /> This will allow you to drag the element anywhere on the page, but it will actually stay in that position. You can define the area within which you want an element to be draggable with dragConstraints. dragConstraints takes an object with top/bottom/left/right properties. We’ll set these all to 0 so the element returns to the same position after dragging (you can still somewhat drag it beyond these bounds). If you want to tweak how far the element can drag from these constraints, you can use the dragElastic prop, which takes a value between 0 (no dragging beyond the constraints) and 1 (drag as far as you desire from the constraints).
<motion.div // other properties drag dragConstraints={{ top: 0, right: 0, bottom: 0, left: 0 }} /> There are also a few callback function props you can use to plug in to different behavior — namely, onDrag, onDragStart, onDragEnd. These pass arguments of the event and a drag object with relevant information. We’ll use onDragEnd to calculate our repositioning. <motion.div // other properties drag dragConstraints={{ top: 0, right: 0, bottom: 0, left: 0 }} onDragEnd={(e, drag) => moveCard(drag.offset, i)} // we will write the move card function in a moment /> With the drag argument passed to onDragEnd, we can get the distance the element is dragged from its natural position. In moveCard, we’ll use this point to determine the direction and whether it was dragged far enough to constitute a reorder. // App.js function moveCard(offset, index) { let reorderedCards = [...cards]; let movedFrom = index; let movedTo = index; if (offset.x <= -100) { movedTo = index - 1; } if (offset.x >= 100) { movedTo = index + 1; } if (offset.y >= 100) { movedTo = index + 4; } if (offset.y <= -100) { movedTo = index - 4; } if (movedFrom !== movedTo && cards[movedFrom] && cards[movedTo]) { reorderedCards[movedFrom] = cards[movedTo]; reorderedCards[movedTo] = cards[movedFrom]; setCards(reorderedCards); } } // don't forget to pass it down as a prop to the `Card` element. Summary Now we have a basic application that implements a variety of common features and animations. Hopefully, this tutorial gives you a solid foundation to add some great animations to your library and build upon them over time to suit all your needs.
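Because moveCard is plain data manipulation, its swap logic can be tested outside React entirely. Below is a minimal, framework-free sketch of the same rules. The 100px threshold and the hard-coded ± 4 row width mirror the function above; returning a new array instead of calling setCards is an adaptation for testing and not part of the tutorial's code.

```javascript
// Framework-free sketch of the moveCard swap logic, so it can be
// unit-tested without React. GRID_COLUMNS mirrors the hard-coded
// "index + 4 / index - 4" row width from the tutorial.
const GRID_COLUMNS = 4;
const DRAG_THRESHOLD = 100; // px the card must travel to trigger a reorder

function moveCard(cards, offset, index) {
  const movedFrom = index;
  let movedTo = index;
  if (offset.x <= -DRAG_THRESHOLD) movedTo = index - 1;
  if (offset.x >= DRAG_THRESHOLD) movedTo = index + 1;
  if (offset.y >= DRAG_THRESHOLD) movedTo = index + GRID_COLUMNS;
  if (offset.y <= -DRAG_THRESHOLD) movedTo = index - GRID_COLUMNS;

  // Only swap when both slots exist; otherwise return the list unchanged.
  if (movedFrom !== movedTo && cards[movedFrom] !== undefined && cards[movedTo] !== undefined) {
    const reordered = [...cards];
    reordered[movedFrom] = cards[movedTo];
    reordered[movedTo] = cards[movedFrom];
    return reordered;
  }
  return cards;
}

console.log(moveCard(['a', 'b', 'c', 'd', 'e'], { x: 150, y: 0 }, 0));  // → ['b', 'a', 'c', 'd', 'e']
console.log(moveCard(['a', 'b', 'c', 'd', 'e'], { x: 0, y: 120 }, 0));  // → ['e', 'b', 'c', 'd', 'a']
console.log(moveCard(['a', 'b', 'c', 'd', 'e'], { x: -150, y: 0 }, 0)); // unchanged: nothing to the left
```

In the React version, the returned array would simply be passed to setCards.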
https://blog.logrocket.com/framer-motion-tutorial/
I’m not sure how this one came about. But, it’s a story. This article is more about grokking a concept, one that’s going to help you think about your animations in a different way. It so happens that this particular example features infinite scrolling — specifically the “perfect” infinite scroll for a deck of cards without duplicating any of them. Why am I here? Well, this all started from a tweet. A tweet that got me thinking about layouts and side-scrolling content. I took that concept and used it on my site. And it’s still there in action at the time of writing. Then I got to thinking more about gallery views and side-scrolling concepts. We hopped on a livestream and decided to try and make something like the old Apple “Cover Flow” pattern. Remember it? My first thoughts for making this assumed I’d make it work without JavaScript, as it does in the demo above, in a way that uses “progressive enhancement.” I grabbed Greensock and ScrollTrigger, and off we went. I came away from that work pretty disappointed. I had something but couldn’t quite get infinite scrolling to work the way I wanted. The “Next” and “Previous” buttons didn’t want to play ball. You can see it here, and it requires horizontal scrolling. So I opened up a new thread on the Greensock forum. Little did I know I was about to open myself up to some serious learning! We solved the issue with the buttons. But, being me, I had to ask whether something else was possible. Was there a “clean” way to do infinite scrolling? I’d tried something on stream but had no luck. I was curious. I’d tried a technique like that used in this pen which I created for the ScrollTrigger release. The initial answer was that it is kinda tricky to do: The hard part about infinite things on scroll is that the scroll bar is limited while the effect that you’re wanting is not.
So you have to either loop the scroll position like this demo (found in the ScrollTrigger demos section) or hook directly into the scroll-related navigation events (like the wheel event) instead of actually using the actual scroll position. I figured that was the case and was happy to leave it “as-is.” A couple of days passed and Jack dropped a reply that kinda blew my mind when I started digging into it. And now, after a bunch of going through it, I’m here to share the technique with you. Animate anything One thing that is often overlooked with GSAP is that you can animate almost anything with it. This is often because visual things are what spring to mind when thinking about animation — the actual physical movement of something. Our first thought isn’t about taking that process to a meta-level and animating from a step back. But, think about animation work on a larger scale and then break it down into layers. For example, you play a cartoon. The cartoon is a collection of compositions. Each composition is a scene. And then you have the power to scrub through that collection of compositions with a remote, whether it’s on YouTube, using your TV remote, or whatever. There are almost three levels to what is happening. And this is the trick we need for creating different types of infinite loops. This is the main concept right here. We animate the playhead position of a timeline with a timeline. And then we can scrub that timeline with our scroll position. Don’t worry if that sounds confusing. We’re going to break it down. Going “meta” Let’s start with an example. We’re going to create a tween that moves some boxes from left to right. Here it is. Ten boxes that keep going left to right. That’s quite straightforward with Greensock. Here, we use fromTo and repeat to keep the animation going. But, we have a gap at the start of each iteration.
We’re also using stagger to space out the movement and that’s something that will play an important role as we continue. gsap.fromTo('.box', { xPercent: 100 }, { xPercent: -200, stagger: 0.5, duration: 1, repeat: -1, ease: 'none', }) Now comes the fun part. Let’s pause the tween and assign it to a variable. Then let’s create a tween that plays it. We can do this by tweening the totalTime of the tween, which allows us to get or set the tween’s playhead position while accounting for repeats and repeat delays. const SHIFT = gsap.fromTo('.box', { xPercent: 100 }, { paused: true, xPercent: -200, stagger: 0.5, duration: 1, repeat: -1, ease: 'none', }) const DURATION = SHIFT.duration() gsap.to(SHIFT, { totalTime: DURATION, repeat: -1, duration: DURATION, ease: 'none', }) This is our first “meta” tween. It looks exactly the same but we’re adding another level of control. We can change things on this layer without affecting the original layer. For example, we could change the tween ease to power4.in. This completely changes the animation but without affecting the underlying animation. We’re kinda safeguarding ourselves with a fallback. Not only that, we might choose to repeat only a certain part of the timeline. We could do that with another fromTo, like this: gsap.fromTo(SHIFT, { totalTime: 2, }, { totalTime: DURATION - 1, repeat: -1, duration: DURATION, ease: 'none' }) Do you see where this is going? Watch that tween. Although it keeps looping, the numbers flip on each repeat. But, the boxes are in the correct position. Achieving the “perfect” loop If we go back to our original example, there’s a noticeable gap between each repetition. Here comes the trick. The part that unlocks everything. We need to build a perfect loop. Let’s start by repeating the shift three times. It’s equal to using repeat: 3. Notice how we’ve removed repeat: -1 from the tween.
const getShift = () => gsap.fromTo('.box', { xPercent: 100 }, { xPercent: -200, stagger: 0.5, duration: 1, ease: 'none', }) const LOOP = gsap.timeline() .add(getShift()) .add(getShift()) .add(getShift()) We’ve turned the initial tween into a function that returns the tween and we add it to a new timeline three times. And this gives us the following. OK. But, there’s still a gap. Now we can bring in the position parameter for adding and positioning those tweens. We want it to be seamless. That means inserting each set of tweens before the previous one ends. That’s a value based on the stagger and the number of elements. const stagger = 0.5 // Used in our shifting tween const BOXES = gsap.utils.toArray('.box') const LOOP = gsap.timeline({ repeat: -1 }) .add(getShift(), 0) .add(getShift(), BOXES.length * stagger) .add(getShift(), BOXES.length * stagger * 2) If we update our timeline to repeat and watch it (while adjusting the stagger to see how it affects things)… You’ll notice that there’s a window in the middle there that creates a “seamless” loop. Recall those skills from earlier where we manipulated time? That’s what we need to do here: loop the window of time where the loop is “seamless.” We could try tweening the totalTime through that window of the loop. const LOOP = gsap.timeline({ paused: true, repeat: -1, }) .add(getShift(), 0) .add(getShift(), BOXES.length * stagger) .add(getShift(), BOXES.length * stagger * 2) gsap.fromTo(LOOP, { totalTime: 4.75, }, { totalTime: '+=5', duration: 10, ease: 'none', repeat: -1, }) Here, we’re saying tween the totalTime from 4.75 and add the length of a cycle to that. The length of a cycle is 5. And that’s the middle window of the timeline. We can use GSAP’s nifty += to do that, which gives us this: Take a moment to digest what’s happening there. This could be the trickiest part to wrap your head around. We’re calculating windows of time in our timeline. It’s kinda hard to visualize but I’ve had a go.
This is a demo of a watch that takes 12 seconds for the hands to go round once. It’s looped infinitely with repeat: -1 and then we’re using fromTo to animate a specific time window with a given duration. If you reduce the time window to, say, 2 and 6, then change the duration to 1, the hands will go from 2 o’clock to 6 o’clock on repeat. But, we never changed the underlying animation. Try configuring the values to see how it affects things. At this point, it’s a good idea to put together a formula for our window position. We could also use a variable for the duration it takes for each box to transition. const DURATION = 1 const CYCLE_DURATION = BOXES.length * STAGGER const START_TIME = CYCLE_DURATION + (DURATION * 0.5) const END_TIME = START_TIME + CYCLE_DURATION Instead of using three stacked timelines, we could loop over our elements three times where we get the benefit of not needing to calculate the positions. Visualizing this as three stacked timelines is a neat way to grok the concept, though, and a nice way to help understand the main idea. Let’s change our implementation to create one big timeline from the start. const STAGGER = 0.5 const BOXES = gsap.utils.toArray('.box') const LOOP = gsap.timeline({ paused: true, repeat: -1, }) const SHIFTS = [...BOXES, ...BOXES, ...BOXES] SHIFTS.forEach((BOX, index) => { LOOP.fromTo(BOX, { xPercent: 100 }, { xPercent: -200, duration: 1, ease: 'none', }, index * STAGGER) }) This is easier to put together and gives us the same window. But, we don’t need to think about math. Now we loop through three sets of the boxes and position each animation according to the stagger. How might that look if we adjust the stagger? It will squish the boxes closer together. But, it’s broken the window because now the totalTime is out. We need to recalculate the window. Now’s a good time to plug in the formula we calculated earlier.
const DURATION = 1 const CYCLE_DURATION = STAGGER * BOXES.length const START_TIME = CYCLE_DURATION + (DURATION * 0.5) const END_TIME = START_TIME + CYCLE_DURATION gsap.fromTo(LOOP, { totalTime: START_TIME, }, { totalTime: END_TIME, duration: 10, ease: 'none', repeat: -1, }) Fixed! We could even introduce an “offset” if we wanted to change the starting position. const STAGGER = 0.5 const OFFSET = 5 * STAGGER const START_TIME = (CYCLE_DURATION + (STAGGER * 0.5)) + OFFSET Now our window starts from a different position. But still, this isn’t great as it gives us these awkward stacks at each end. To get rid of that effect, we need to think about a “physical” window for our boxes. Or think about how they enter and exit the scene. We’re going to use document.body as the window for our example. Let’s update the box tweens to be individual timelines where the boxes scale up on enter and down on exit. We can use yoyo and repeat: 1 to achieve entering and exiting. SHIFTS.forEach((BOX, index) => { const BOX_TL = gsap .timeline() .fromTo( BOX, { xPercent: 100, }, { xPercent: -200, duration: 1, ease: 'none', }, 0 ) .fromTo( BOX, { scale: 0, }, { scale: 1, repeat: 1, yoyo: true, ease: 'none', duration: 0.5, }, 0 ) LOOP.add(BOX_TL, index * STAGGER) }) Why are we using a timeline duration of 1? It makes things easier to follow. We know the time is 0.5 when the box is at the midpoint. It‘s worth noting that easing won’t have the effect we usually think of here. In fact, easing will actually play a part in how the boxes position themselves. For example, an ease-in would bunch the boxes up on the right before they move across. The code above gives us this. Almost. But, our boxes disappear for a time in the middle. To fix this, let’s introduce the immediateRender property. It acts like animation-fill-mode: none in CSS. We’re telling GSAP that we don’t want to retain or pre-record any styles that are being set on a box. 
SHIFTS.forEach((BOX, index) => { const BOX_TL = gsap .timeline() .fromTo( BOX, { xPercent: 100, }, { xPercent: -200, duration: 1, ease: 'none', immediateRender: false, }, 0 ) .fromTo( BOX, { scale: 0, }, { scale: 1, repeat: 1, zIndex: BOXES.length + 1, yoyo: true, ease: 'none', duration: 0.5, immediateRender: false, }, 0 ) LOOP.add(BOX_TL, index * STAGGER) }) That small change fixes things for us! Note how we’ve also included zIndex: BOXES.length + 1. That should safeguard us against any z-index issues. There we have it! Our first infinite seamless loop. No duplicate elements and perfect continuation. We’re bending time! Pat yourself on the back if you’ve gotten this far! 🎉 If we want to see more boxes at once, we can tinker with the timing, stagger, and ease. Here, we have a STAGGER of 0.2 and we’ve also introduced opacity into the mix. The key part here is that we can make use of repeatDelay so that the opacity transition is quicker than the scale. Fade in over 0.25 seconds. Wait 0.5 seconds. Fade back out over 0.25 seconds. .fromTo( BOX, { opacity: 0, }, { opacity: 1, duration: 0.25, repeat: 1, repeatDelay: 0.5, immediateRender: false, ease: 'none', yoyo: true, }, 0) Cool! We could do whatever we want with those in and out transitions. The main thing here is that we have our window of time that gives us the infinite loop. Hooking this up to scroll Now that we have a seamless loop, let’s attach it to scroll. For this we can use GSAP’s ScrollTrigger. This requires an extra tween to scrub our looping window. Note how we’ve set the loop to be paused now, too. const LOOP_HEAD = gsap.fromTo(LOOP, { totalTime: START_TIME, }, { totalTime: END_TIME, duration: 10, ease: 'none', repeat: -1, paused: true, }) const SCRUB = gsap.to(LOOP_HEAD, { totalTime: 0, paused: true, duration: 1, ease: 'none', }) The trick here is to use ScrollTrigger to scrub the playhead of the loop by updating the totalTime of SCRUB.
There are various ways we could set up this scroll. We could have it horizontal or bound to a container. But, what we’re going to do is wrap our boxes in a .boxes element and pin that to the viewport. (This fixes its position in the viewport.) We’ll also stick with vertical scrolling. Check the demo to see the styling for .boxes which sets things to the size of the viewport. import ScrollTrigger from '' gsap.registerPlugin(ScrollTrigger) ScrollTrigger.create({ start: 0, end: '+=2000', horizontal: false, pin: '.boxes', onUpdate: self => { SCRUB.vars.totalTime = LOOP_HEAD.duration() * self.progress SCRUB.invalidate().restart() } }) The important part is inside onUpdate. That’s where we set the totalTime of the tween based on the scroll progress. The invalidate call flushes any internally recorded positions for the scrub. The restart then sets the position to the new totalTime we set. Try it out! We can go back and forth in the timeline and update the position. How cool is that? We can scroll to scrub a timeline that scrubs a timeline that is a window of a timeline. Digest that for a second because that’s what’s happening here. Time travel for infinite scrolling Up to now, we’ve been manipulating time. Now we’re going to time travel! To do this, we’re going to use some other GSAP utilities and we’re no longer going to scrub the totalTime of LOOP_HEAD. Instead, we’re going to update it via proxy. This is another great example of going “meta” GSAP. Let’s start with a proxy object that marks the playhead position. const PLAYHEAD = { position: 0 } Now we can update our SCRUB to update the position. At the same time, we can use GSAP’s wrap utility, which wraps the position value around the LOOP_HEAD duration. For example, if the duration is 10 and we provide the value 11, we will get back 1.
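That wrapping behaviour is plain modular arithmetic. Here is a hand-rolled stand-in for gsap.utils.wrap, for illustration only (not GSAP's actual implementation), showing the same semantics in isolation:

```javascript
// Minimal stand-in for gsap.utils.wrap(min, max): maps any value into the
// [min, max) range, handling negative inputs as well.
const wrap = (min, max) => value => {
  const range = max - min;
  return min + (((value - min) % range) + range) % range;
};

const wrapTime = wrap(0, 10); // pretend the loop head's duration is 10

console.log(wrapTime(11)); // → 1, as described above
console.log(wrapTime(25)); // → 5
console.log(wrapTime(-3)); // → 7 (scrubbing backwards past 0 wraps to the end)
```

This is exactly why the scrub can keep pushing totalTime upward forever: however large the position gets, the wrapped value always lands back inside the loop's duration.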
const POSITION_WRAP = gsap.utils.wrap(0, LOOP_HEAD.duration()) const SCRUB = gsap.to(PLAYHEAD, { position: 0, onUpdate: () => { LOOP_HEAD.totalTime(POSITION_WRAP(PLAYHEAD.position)) }, paused: true, duration: 1, ease: 'none', }) Last, but not least, we need to revise ScrollTrigger so it updates the correct variable on the SCRUB. That’s position, instead of totalTime. ScrollTrigger.create({ start: 0, end: '+=2000', horizontal: false, pin: '.boxes', onUpdate: self => { SCRUB.vars.position = LOOP_HEAD.duration() * self.progress SCRUB.invalidate().restart() } }) At this point we’ve switched to a proxy and we won’t see any changes. We want an infinite loop when we scroll. Our first thought might be to scroll to the start when we complete scroll progress. And it would do exactly that, scroll back. Although that‘s what we want to do, we don’t want the playhead to scrub backwards. This is where totalTime comes in. Remember? It gets or sets the position of the playhead according to the totalDuration which includes any repeats and repeat delays. For example, say the duration of the loop head was 5 and we got there, we won‘t scrub back to 0. Instead, we will keep scrubbing the loop head to 10. If we keep going, it‘ll go to 15, and so on. Meanwhile, we‘ll keep track of an iteration variable because that tells us where we are in the scrub. We’ll also make sure that we only update iteration when we hit the progress thresholds. 
Let’s start with an iteration variable: let iteration = 0 Now let’s update our ScrollTrigger implementation: const TRIGGER = ScrollTrigger.create({ start: 0, end: '+=2000', horizontal: false, pin: '.boxes', onUpdate: self => { const SCROLL = self.scroll() if (SCROLL > self.end - 1) { // Go forwards in time WRAP(1, 1) } else if (SCROLL < 1 && self.direction < 0) { // Go backwards in time WRAP(-1, self.end - 1) } else { SCRUB.vars.position = (iteration + self.progress) * LOOP_HEAD.duration() SCRUB.invalidate().restart() } } }) Notice how we’re now factoring iteration into the position calculation. Remember that this gets wrapped with the scrubber. We’re also detecting when we hit the limits of our scroll, and that’s the point where we WRAP. This function sets the appropriate iteration value and sets the new scroll position. const WRAP = (iterationDelta, scrollTo) => { iteration += iterationDelta TRIGGER.scroll(scrollTo) TRIGGER.update() } We have infinite scrolling! If you have one of those fancy mice with the scroll wheel that you can let loose on, give it a go! It’s fun! Here’s a demo that displays the current iteration and progress: Scroll snapping We’re there. But, there are always “nice to haves” when working on a feature like this. Let’s start with scroll snapping. GSAP makes this easy, as we can use gsap.utils.snap without any other dependencies. That handles snapping to a time when we provide the points. We declare the step between 0 and 1 and we have 10 boxes in our example. That means a snap of 0.1 would work for us. const SNAP = gsap.utils.snap(1 / BOXES.length) And that returns a function we can use to snap our position value. We only want to snap once the scroll has ended. For that, we can use an event listener on ScrollTrigger. When the scroll ends, we are going to scroll to a certain position.
ScrollTrigger.addEventListener('scrollEnd', () => { scrollToPosition(SCRUB.vars.position) }) And here’s scrollToPosition: const scrollToPosition = position => { const SNAP_POS = SNAP(position) const PROGRESS = (SNAP_POS - LOOP_HEAD.duration() * iteration) / LOOP_HEAD.duration() const SCROLL = progressToScroll(PROGRESS) TRIGGER.scroll(SCROLL) } What are we doing here? - Calculating the point in time to snap to - Calculating the current progress. Let’s say the LOOP_HEAD.duration() is 1 and we’ve snapped to 2.5. That gives us a progress of 0.5, resulting in an iteration of 2, where (2.5 - 1 * 2) / 1 === 0.5. We calculate the progress so that it’s always between 0 and 1. - Calculating the scroll destination. This is a fraction of the distance our ScrollTrigger can cover. In our example, we’ve set a distance of 2000 and we want a fraction of that. We create a new function progressToScroll to calculate it. const progressToScroll = progress => gsap.utils.clamp(1, TRIGGER.end - 1, gsap.utils.wrap(0, 1, progress) * TRIGGER.end) This function takes the progress value and maps it to the largest scroll distance. But we use a clamp to make sure the value can never be 0 or 2000. This is important. We are safeguarding against snapping to these values as it would put us in an infinite loop. There is a bit to take in there. Check out this demo that shows the updated values on each snap. Why are things a lot snappier? The scrubbing duration and ease have been altered. A smaller duration and punchier ease give us the snap. const SCRUB = gsap.to(PLAYHEAD, { position: 0, onUpdate: () => { LOOP_HEAD.totalTime(POSITION_WRAP(PLAYHEAD.position)) }, paused: true, duration: 0.25, ease: 'power3', }) But, if you played with that demo, you’ll notice there’s an issue. Sometimes when we wrap around inside the snap, the playhead jumps about. We need to account for that by making sure we wrap when we snap — but, only when it’s necessary.
const scrollToPosition = position => { const SNAP_POS = SNAP(position) const PROGRESS = (SNAP_POS - LOOP_HEAD.duration() * iteration) / LOOP_HEAD.duration() const SCROLL = progressToScroll(PROGRESS) if (PROGRESS >= 1 || PROGRESS < 0) return WRAP(Math.floor(PROGRESS), SCROLL) TRIGGER.scroll(SCROLL) } And now we have infinite scrolling with snapping! What next? We’ve completed the groundwork for a solid infinite scroller. We can leverage that to add things, like controls or keyboard functionality. For example, this could be a way to hook up “Next” and “Previous” buttons and keyboard controls. All we have to do is manipulate time, right? const NEXT = () => scrollToPosition(SCRUB.vars.position - (1 / BOXES.length)) const PREV = () => scrollToPosition(SCRUB.vars.position + (1 / BOXES.length)) // Left and Right arrow plus A and D document.addEventListener('keydown', event => { if (event.keyCode === 37 || event.keyCode === 65) NEXT() if (event.keyCode === 39 || event.keyCode === 68) PREV() }) document.querySelector('.next').addEventListener('click', NEXT) document.querySelector('.prev').addEventListener('click', PREV) That could give us something like this. We can leverage our scrollToPosition function and bump the value as we need. That’s it! See that? GSAP can animate more than elements! Here, we bent and manipulated time to create an almost perfect infinite slider. No duplicate elements, no mess, and good flexibility. Let’s recap what we covered: - We can animate an animation. 🤯 - We can think about timing as a positioning tool when we manipulate time. - How to use ScrollTrigger to scrub an animation via proxy. - How to use some of GSAP’s awesome utilities to handle logic for us. You can now manipulate time! 😅 That concept of going “meta” GSAP opens up a variety of possibilities. What else could you animate? Audio? Video? As for the “Cover Flow” demo, here’s where that went!
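As a final sanity check, the snapping arithmetic from scrollToPosition and progressToScroll can be exercised with plain numbers. The helpers below are simplified stand-ins for the GSAP utilities, not GSAP's actual code; the 10-box count and the 2000px trigger distance are the values used throughout the article:

```javascript
// Plain-number sketch of the snapping arithmetic, with hand-rolled
// stand-ins for gsap.utils.snap / clamp / wrap.
const BOX_COUNT = 10;
const TRIGGER_END = 2000;   // the '+=2000' scroll distance
const LOOP_DURATION = 1;    // pretend LOOP_HEAD.duration() === 1

const clamp = (min, max, v) => Math.min(max, Math.max(min, v));
const wrap01 = v => ((v % 1) + 1) % 1;
const snap = step => v => Math.round(v / step) * step;

const SNAP = snap(1 / BOX_COUNT); // snap to increments of 0.1

const progressToScroll = progress =>
  clamp(1, TRIGGER_END - 1, wrap01(progress) * TRIGGER_END);

// A raw position of 0.47 during iteration 0 snaps to the nearest tenth:
const snapped = SNAP(0.47);                                      // 0.5
const progress = (snapped - LOOP_DURATION * 0) / LOOP_DURATION;  // 0.5
console.log(progressToScroll(progress)); // → 1000, halfway through the 2000px distance

// Progress of exactly 0 or 1 is clamped away from the scroll limits,
// which is the safeguard against the infinite snap loop mentioned above:
console.log(progressToScroll(0)); // → 1
console.log(progressToScroll(1)); // → 1 (wraps to 0, then clamps to 1)
```

Working through the numbers like this makes it easier to see why the clamp boundaries are 1 and TRIGGER_END - 1 rather than 0 and TRIGGER_END.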
https://css-tricks.com/going-meta-gsap-the-quest-for-perfect-infinite-scrolling/
Microsoft Fund Scala for .NET Project PLUS, new IDE for Clojure. Microsoft Fund Scala for .NET Project Microsoft is funding a project to make Scala available for .Net developers. “By using Scala on .Net, developers can produce applications more quickly and have the possibility of deploying them across the two major industry platforms, JVM and .Net,” said Miguel Garcia about the project. According to Miguel, most users who have Scala programs working on the JVM only need to recompile them with the Scala.Net compiler. Please note that, currently, Scala programs cannot use libraries in .Net that are compiled using CLR generics. A Visual Studio plugin for Scala with support for basic IDE functionality (for example, code completion, code browsing and line breaks) is expected to be released in the autumn. A “how to” guide is available now (PDF). Spring Integration 2.0.5 Released Spring Integration 2.0.5 has been released. This version addresses 48 issues, including adding the ‘delimiters’ attribute to the <splitter> element, and adding wildcards support for header-filter. Other new additions include additional mail headers for the mail:header-enricher, and a cacheLevel property for jms:message-driven-channel-adapter configuration. JavaMail has been upgraded to version 1.4.4. More information on the 48 issues addressed in this release can be viewed at the Release Notes. Oracle VM VirtualBox 4.1 Supports VM Cloning Oracle have released version 4.1 of their Oracle VM VirtualBox virtualisation software package. This release introduces support for copying existing VMs. VM cloning can be used for backing up a VM, as opposed to snapshots, or giving people their own VMs to use. Linked Clones, where the existing VM is the parent of the clone, are also supported in 4.1. ‘UDP Tunneling’ has been introduced to allow users to interconnect virtual machines running on different hosts.
More information on the new features in version 4.1 can be found in the ‘What’s in Oracle VM VirtualBox 4.1?’ article. A list of bug fixes is available at the Changelog. Spring Data Redis 1.0.0 M4 Upgrades Spring Framework The fourth milestone of Spring Data Redis 1.0.0 can be downloaded now. This release introduces a Spring 3.1 cache implementation for the Redis key-value store. The pub-sub namespace has been simplified, and the Spring Framework has been upgraded to 3.1 M2 and Jedis to 2.0.0. The build system has also been changed to Gradle. More information is available at the Changelog. Apache Commons Lang Reaches 3.0 The Apache Commons Lang project has reached version 3.0. The Lang project provides helper utilities for the java.lang API, such as String manipulation methods, basic numerical methods and object reflection, and contains enhancements to java.util.Date. From this release, Commons Lang is now Java 5-based, and introduces some additional classes related to multi-threaded programming in a new concurrent package. This includes a configurable ThreadFactory implementation and utility methods. Some deprecated parts of the API, along with some features that were deemed weak or unnecessary, have been dropped for version 3.0. These include the StringEscapeUtils.escapeSql method, the JVMRandom class, and various Exception classes. Please note that Lang 3.0 is not backwards compatible. New IDE for Clojure A new IDE for Clojure, Clooj, is available from GitHub. Clooj is written in Clojure, uses a Swing-based GUI, and is cross-platform, assuming Java 1.6 is installed on the operating system. Clooj runs as both a standalone application and as a Clojure editor embedded in another Java or Clojure app. The source code editor currently features highlighting functionality (for example, mismatched or unmatched brackets are highlighted in pink) and automatic indenting.
http://jaxenter.com/microsoft-fund-scala-for-net-project-103595.html
Contents - Get in Touch... - Information for Developers - Information on Remote Desktop like Servers in Debian - Information on Remote Desktop like Clients in Debian - Remote Desktop Technologies / Protocols This area on the Debian Wiki is for everyone who seeks information about Remote Desktop applications (client side and server side) within Debian. There is a packaging team for bringing related software to Debian and maintaining it in Debian continuously: The Debian Remote Maintainers team (aka pkg-remote-team on Alioth): Get in Touch... Mailing list pkg-remote-team@lists.alioth.debian.org: Debian QA Overview page (aka DDPO) IRC: #debian-remote Channel on irc.debian.org Please note that not all mentioned applications related to Remote Desktop in the widest sense are maintained by the pkg-remote Team. However, we nonetheless do our best to collect information on those software pieces here under this wiki namespace. Information for Developers The Remote Desktop experience on a GNU/Linux desktop or terminal server can be affected by all graphical applications available. Often it happens that a change in this or that graphical application results in problems in remote sessions, but not local sessions. Please file such bugs with the following BTS header at the top of your mail: Package: <gui-package> Version: <version-with-problems> Severity: <how-bad-you-think-it-is> User: pkg-remote-team@lists.alioth.debian.org Usertags: debian-remote Information on Remote Desktop like Servers in Debian ... in alphabetical order Arctica Project FIXME: todo... Guacamole Server FIXME: todo... TightVNC FIXME: todo... X2Go Server FIXME: todo... xpra Server FIXME: todo... xRDP Support FIXME: todo... Information on Remote Desktop like Clients in Debian ... in alphabetical order RDP Clients (MS-RDP protocol) FreeRDP 1.1 / FreeRDP 2 FIXME: todo... (g)rdesktop FIXME: todo... Remmina FIXME: todo... Clients for X2Go X2Go Client FIXME: todo... PyHoca-GUI / PyHoca-CLI FIXME: todo...
Clients for NX (v3)
Listed here for completeness: there are still some NXv3 servers deployed at various sites, but they are gradually vanishing from the planet these days (written at the end of DebConf17, Montreal).
- Remmina: FIXME: todo...

Remote Desktop Technologies / Protocols
- MS-RDP protocol: FIXME: todo
- NXv3 protocol: FIXME: todo
- VNC protocol: FIXME: todo
- Plain X11: FIXME: todo
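The BTS pseudo-header from the "Information for Developers" section goes at the very top of the mail body. As a sketch, a complete bug mail sent to the BTS might look like this (the package name, version, and subject line are placeholders):

```
To: submit@bugs.debian.org
Subject: <gui-package>: rendering glitches in remote sessions only

Package: <gui-package>
Version: <version-with-problems>
Severity: normal
User: pkg-remote-team@lists.alioth.debian.org
Usertags: debian-remote

<describe the problem and how to reproduce it in a remote session>
```

The User/Usertags pair is what lets the team find these reports later on the BTS usertag overview.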
https://wiki.debian.org/DebianRemote
Exporting to XML (Report Builder and SSRS)

Topic Status: Some information in this topic is preview and subject to change in future releases. Preview information describes new features or changes to existing features in Microsoft SQL Server 2016 Community Technology Preview 2 (CTP2).

The XML rendering extension returns a report in XML format. The schema for the report XML is specific to the report, and contains data only. Layout information is not rendered and pagination is not maintained by the XML rendering extension. The XML generated by this extension can be imported into a database, used as an XML data message, or sent to a custom application. The following table describes how report items are rendered.

Reports that are rendered using the XML rendering extension also follow these rules:
- XML elements and attributes are rendered in the order that they appear in the report definition.
- Pagination is ignored. Page headers and footers are not rendered.
- Hidden items that cannot be made visible by toggling are not rendered. Initially visible items and hidden items that can be made visible through a toggle are rendered.
- Images, lines, and custom report items are ignored.
- The text box element or attribute is assigned an XSD data type based on the values that the text box displays.

The following sections describe how the XML rendering extension interprets the items within the report.

Report Body
A report is rendered as the root element of the XML document. The name of the element comes from the DataElementName property set in the Properties pane. XML namespace definitions and schema reference attributes are also included in the report element. Variables are noted in bold face type:

<Report xmlns="SchemaName" xmlns:xsi="" xsi:schemaLocation="SchemaName ReportURL&rc%3aSchema=true" Name="ReportName">

The values for the variables are as follows:

Text boxes
Text boxes are rendered as elements or attributes according to the DataElementStyle RDL property.
The name of the element or attribute comes from the TextBox.DataElementName RDL property.

Charts, Data Bars, and Sparklines
Charts, data bars, and sparklines are rendered in XML. The data is structured.

Gauges and Indicators
Gauges and indicators are rendered in XML. The data is structured.

Subreports
A subreport is rendered as an element. The name of the element is taken from the DataElementName RDL property. The TextBoxesAsElements property setting of the report overrides that of the subreport. Namespace and XSLT attributes are not added to the subreport element.

Rectangles
A rectangle is rendered as an element. The name of the element is taken from the DataElementName RDL property.

Custom Report Items
CustomReportItems (CRI) are not visible to the rendering extension. If a custom report item exists in the report, the rendering extension renders it as a conventional report item.

Images
Images are not rendered.

Lines
Lines are not rendered.

Tables, Matrices, and Lists
Tables, matrices, and lists are rendered as an element. The name of the element comes from the Tablix DataElementName RDL property.

Rows and Columns
Columns are rendered within rows.

Tablix Corner
The corner is not rendered. Only the contents of the corner are rendered.

Tablix Cells
Tablix cells are rendered as elements. The name of the element is taken from the cell's DataElementName RDL property.

Automatic Subtotals
Tablix automatic subtotals are not rendered.

Row and Column Items that Do Not Repeat with a Group
Items that do not repeat with a group, such as labels, subtotals and totals, are rendered as elements. The name of the element comes from the TablixMember.DataElementName RDL property. The TablixMember.DataElementOutput RDL property controls whether a non-repeating item is rendered.
If the DataElementName property of the Tablix member is not provided, a name for the non-repeating item is dynamically generated in this form:

RowX
For non-repeating rows, where X is a zero-based row index within the current parent.

ColumnY
For non-repeating columns, where Y is a zero-based column index within the current parent.

A non-repeating header is rendered as a child of the row or column that does not repeat with a group. If a non-repeating member has no corresponding Tablix cells, it is not rendered. This may occur when a Tablix cell spans more than one column.

Rows and Columns that Repeat with a Group
Rows and columns that repeat within a group are rendered according to Tablix.DataElementOutput rules. The name for the element is taken from the DataElementName property. Each unique value within a group is rendered as a child element of the group. The name for the element is taken from the Group.DataElementName property. If the DataElementOutput property value equals Output, a repeating item's header is rendered as a child of the detail element. If there are duplicate data element names within the same scope, the renderer displays an error message.

The XML renderer can apply a server-side XSLT transformation to the original XML data. When an XSLT is applied, the renderer outputs the transformed content instead of the original XML data. The transformation occurs on the server, not on the client. The XSLT to apply to the output is defined either in the report definition file with the DataTransform property of the report or with the XSLT DeviceInfo parameter. If either of these values is set, the transform occurs each time the XML renderer is used. When using subscriptions, the XSLT must be defined in the RDL DataTransform property.
If an XSLT file is specified by both the DataTransform definition property and the device information setting, the XSLT specified in DataTransform occurs first, followed by the XSLT set by the device information settings.

Device Information Settings
You can change some default settings for this renderer by changing the device information settings. For more information, see XML Device Information Settings.
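To make the element naming rules above concrete, here is a hypothetical fragment of rendered output for a simple report containing one table. Every element and attribute name below is assumed, standing in for whatever the corresponding DataElementName properties define:

```xml
<Report xmlns="SchemaName" Name="SalesReport">
  <Tablix1>
    <Details_Collection>
      <Details Product="Widget" Quantity="12" />
      <Details Product="Gadget" Quantity="7" />
    </Details_Collection>
  </Tablix1>
</Report>
```

Note how only data survives: no page, layout, or style information appears anywhere in the output.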
https://msdn.microsoft.com/en-us/library/dd283112.aspx
Expected input data. Augmenting an image with imgaug takes only a few lines of code. But before doing that, we first have to load the image. imgaug expects images to be numpy arrays and works best with dtype uint8, i.e. when the array's values are in the range 0 to 255. The channel-axis is always expected to be the last axis and may be skipped for grayscale images. For non-grayscale images, the expected input colorspace is RGB.

Non-uint8 data. If you work with other dtypes than uint8, such as float32, it is recommended to take a look at the dtype documentation for a rough overview of each augmenter's dtype support. The API contains further details. Keep in mind that uint8 is always the most thoroughly tested dtype.

Image loading function. As imgaug only deals with augmentation and not image input/output, we will need another library to load our image. A common choice to do that in python is imageio, which we will use below. Another common choice is OpenCV via its function cv2.imread(). Note however that cv2.imread() returns images in BGR colorspace and not RGB, which means that you will have to re-order the channel axis, e.g. via cv2.imread(path)[:, :, ::-1]. You could alternatively also change every colorspace-dependent augmenter to BGR (e.g. Grayscale or any augmenter changing hue and/or saturation). See the API for details per augmenter. The disadvantage of the latter method is that all visualization functions (such as imgaug.imshow() below) are still going to expect RGB data and hence BGR images will look broken.

Let's jump to our first example. We will use imageio.imread() to load an image and augment it. In the code block below, we call imageio.imread(uri) to load an image directly from wikipedia, but we could also load it from a filepath, e.g. via imageio.imread("/path/to/the/file.jpg") or for Windows imageio.imread("C:\\path\\to\\the\\file.jpg"). imageio.imread(uri) returns a numpy array of dtype uint8, shape (height, width, channels) and RGB colorspace.
That is exactly what we need. After loading the image, we use imgaug.imshow(array) to visualize the loaded image.

import imageio
import imgaug as ia
%matplotlib inline

image = imageio.imread("")
print("Original:")
ia.imshow(image)

Original:

Now that we have loaded the image, let's augment it. imgaug contains many augmentation techniques in the form of classes deriving from the Augmenter parent class. To use one augmentation technique, we have to instantiate it with a set of hyperparameters and then later on apply it many times. Our first augmentation technique will be Affine, i.e. affine transformations. We keep it simple here and use that technique to simply rotate the image by a random value between -25° and +25°.

from imgaug import augmenters as iaa
ia.seed(4)
rotate = iaa.Affine(rotate=(-25, 25))
image_aug = rotate.augment_image(image)
print("Augmented:")
ia.imshow(image_aug)

Augmented:

Of course, in reality we rarely just want to augment a single image. The standard scenario would rather be to have large batches of images. imgaug offers the function augment_images(images) to augment image batches. It is often significantly faster than augmenting each image individually via augment_image(image). So let's try the function with an image batch. For simplicity, we will just copy our original image several times and then feed it through augment_images(). To visualize our results, we use numpy's hstack() function, which combines the images in our augmented batch to one large image by placing them horizontally next to each other.

import numpy as np
images = [image, image, image, image]
images_aug = rotate.augment_images(images)
print("Augmented batch:")
ia.imshow(np.hstack(images_aug))

Augmented batch:
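As a quick side note, the BGR-to-RGB reorder needed after cv2.imread() can be sketched with plain numpy; no OpenCV is required for the slicing itself (the tiny two-pixel array below is made up for the demonstration):

```python
import numpy as np

# a tiny 1x2 "image" in BGR order: one pure-blue pixel, one pure-red pixel
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# reverse the channel axis, exactly what cv2.imread(path)[:, :, ::-1] does
rgb = bgr[:, :, ::-1]

print(rgb[0, 0])  # the blue pixel expressed in RGB order: [  0   0 255]
```

The slice only reverses the last axis, so shape and dtype are untouched, which is exactly the uint8 RGB layout imgaug expects.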
https://nbviewer.jupyter.org/github/aleju/imgaug-doc/blob/master/notebooks/A01%20-%20Load%20and%20Augment%20an%20Image.ipynb
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- USAGE
- CONFIGURATION
- LIMITATIONS
- PUBLIC ATTRIBUTES
- PUBLIC METHODS
- PRIVATE ATTRIBUTES
- PRIVATE METHODS
- CONTRIBUTORS
- AUTHOR

NAME
CatalystX::Controller::ExtJS::REST - RESTful interface to dbic objects

VERSION
version 2.1.3

SYNOPSIS
package MyApp::Controller::User;
use base qw(CatalystX::Controller::ExtJS::REST);
__PACKAGE__->config({ ... });
1;

# set the Accept header to 'application/json' globally
Ext.Ajax.defaultHeaders = { 'Accept': 'application/json' };

DESCRIPTION
This controller will make CRUD operations with ExtJS dead simple. Using REST you can update, create, remove, read and list objects which are retrieved via DBIx::Class.

USAGE
Set-up Form Configuration
To use this controller, you need to set up at least one configuration file per controller. If you create a controller MyApp::Controller::User:

package MyApp::Controller::User;
use Moose;
extends 'CatalystX::Controller::ExtJS::REST';
1;

Forms can be defined either in files or directly in the controller. To see how to define forms directly in the controller see "forms". If you are creating files, you need at least one file called root/forms/user.yml. For a more fine grained control over object creation, deletion, update or listing, you have to create some more files.

root/
  forms/
    user.yml
    user_get.yml
    user_post.yml
    user_put.yml
  lists/
    user.yml

Only root/forms/user.yml is required. All other files are optional. If ExtJS issues a GET request, this controller will first try to find the file root/forms/user_get.yml. If this file does not exist, it will fall back to the so called base file root/forms/user.yml. This controller tries to guess the correct model and resultset. The model defaults to DBIC and the resultset is derived from the name of the controller. In this example the controller uses the resultset $c->model('DBIC::User').
You can override these values in the form config files:

---
model_config:
  resultset: User
  schema: DBIC
elements:
  - name: username
  - name: password
  - name: name
  - name: forename

# root/forms/user_put.yml
# make username and password required when an object is created
---
load_config_file: root/forms/user.yml
constraints:
  - type: Required
    name: username
  - type: Required
    name: password

Now you can fire up your Catalyst app and you should see two new chained actions:

Loaded Chained actions:
...
| /users/... |
| /user/list |
| /user/... |
| /user/object |

Accessing objects
To access an object, simply request the controller's url with the desired method. A POST request to /user will create a new user object. The response will include the id of the new object. You can get the object by requesting /user/$id via GET or remove it by using the DELETE method. To update an object, use PUT. PUT is special since it also allows for partial submits. This means that the object is loaded into the form before the request parameters are applied to it. You only need to send changed columns to the server.

Accessing a list of objects
You can access /users or /user/list to get a list of users, which can be used to populate an ExtJS store. If you access this URL with your browser you'll get a HTML representation of all users. If you access it via an XMLHttpRequest from ExtJS, the returned value will be a valid JSON string. Listing objects is very flexible and can easily be extended. There is also built-in validation for query parameters. By default the following parameters are checked for sane defaults:

dir (either asc, ASC, desc or DESC)
limit (integer, range between 0 and 100)
start (positive integer)

You can extend the validation of parameters by providing an additional file. Place it in root/lists/ and add the suffix _options (e. g. root/lists/user_options.yml). You can overwrite or extend the validation configuration there. Any more attributes you add to the url will result in a call to the corresponding resultset.
# $c->model('DBIC::Users')->active($c)->all;

As you can see, the Catalyst context object is passed as first parameter. You can even supply arguments to that method using a comma separated list:

# $c->model('DBIC::Users')->active($c, 'arg1', 'arg2')->all;

You can chain those method calls to any length. However, you cannot access resultset methods which are inherited from DBIx::Class::ResultSet. This is a security restriction, because an attacker could call /users/delete, which would lead to $c->model('DBIC::Users')->delete. This would remove all rows from DBIC::Users! To define a default resultset method which gets called every time the controller hits the result table, set:

__PACKAGE__->config({default_rs_method => 'restrict'});

This will lead to the following chain:

# $c->model('DBIC::Users')->restrict($c)->active($c, 'arg1', 'arg2')->all;

# same for GET, POST and PUT
# $c->model('DBIC::Users')->restrict($c)->find(1234);

The default_rs_method defaults to the value of "default_rs_method". If it is not set by the configuration, this controller tries to call extjs_rest_$class (i.e. extjs_rest_user).

Handling Uploads
This module handles your uploads. If there is an upload and the name of that field exists in your form config, the column is set to an IO::File object. You need to handle this on the model side because storing a filehandle will most likely fail. Fortunately, there are modules out there which can help you with that. Have a look at DBIx::Class::InflateColumn::FS. Don't use DBIx::Class::InflateColumn::File because it is deprecated and broken. If you need a more advanced processing of uploaded files, don't hesitate to overwrite "handle_uploads".
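Returning to the query-parameter validation described under "Accessing a list of objects", a root/lists/user_options.yml override might look like the sketch below. The element names and constraints are hypothetical and assume the usual HTML::FormFu config layout; here the maximum page size is raised from 100 to 200:

```yaml
# root/lists/user_options.yml (hypothetical sketch)
---
elements:
  - name: limit
    constraints:
      - type: Number
      - type: Range
        min: 0
        max: 200
```

Anything not overridden here keeps the built-in defaults for dir, limit and start.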
This can be set to the name of a custom method which is defined in the (custom) resultset class. It needs to take the primary key as first parameter. Defaults to find.

default_rs_method
This resultset method is called on every request. This is useful if you want to restrict the resultset, e. g. only find objects which are associated to the current user. The first parameter is the Catalyst context object and the second parameter is either list (if a list of objects has been requested) or object (if only one object is manipulated). Nothing is called if the specified method does not exist. This defaults to extjs_rest_[controller namespace]. A controller MyApp::Controller::User expects a resultset method extjs_rest_user.

root_property
Set the root property used by "list", update and create which will contain the data. Defaults to data.

context_stash
To allow your form validation packages, etc, access to the catalyst context, a weakened reference of the context is copied into the form's stash.

$form->stash->{context};

This setting allows you to change the key name used in the form stash. Default value: context

form_base_path
Defaults to root/forms

forms
If you define forms in the controller, files will not be loaded and are not required. You need to have at least the default form defined. It is equivalent to the file without the request method appended. Example:

forms => {
  default => [
    { name => 'id' },
    { name => 'title' },
  ],
  get => ...
  list => ...
  options => ...
}

See t/lib/MyApp/Controller/InlineUser.pm for a working example.

limit
The maximum number of rows to return. Defaults to 100.

list_base_path
Defaults to root/lists

no_list_metadata
If set to a true value there will be no meta data sent with lists. Defaults to undef. That means the metaData hash will be sent by default.

model_config
schema
Defaults to DBIC
resultset
Defaults to "default_resultset"

namespace
Defaults to "namespace" in Catalyst::Controller

order_by
Specify the default sort order.
Examples:

order_by => 'productid'
order_by => { -desc => 'updated_on' }

list_namespace
Defaults to the plural form of "namespace". If this is the same as "namespace", list_ is prepended.

LIMITATIONS
This module is limited to HTML::FormFu as form processing engine, DBIx::Class as ORM and Catalyst as web application framework.

PUBLIC ATTRIBUTES
To change the default value of an attribute, either set it as default value

package MyApp::Controller::MyController;
use Moose;
extends 'CatalystX::Controller::ExtJS::REST';
has '+default_result' => ( default => 'MyUser' );

use the config

__PACKAGE__->config( default_result => 'MyUser' );

or overwrite the builder

sub _build_default_result { return 'MyUser' };

default_resultset
Determines the default name of the resultset class from the Model / View or Controller class if the forms contain no <model_config/resultset> config value. Defaults to the class name of the controller.

list_base_path
Returns the path in which form config files for grids will be searched.

list_base_file
Returns the path to the specific form config file for grids or the default form config file if the specific one cannot be found.

path_to_forms
Returns the path to the specific form config file or the default form config file if the specific one cannot be found.

form_base_path
base_path
Returns the path in which form config files will be searched.

form_base_file
base_file
Returns the path to the default form config file.

PUBLIC METHODS
get_form
Returns a new HTML::FormFu::ExtJS class, sets the model config options and the request type to Catalyst. The first parameter is the Catalyst context object $c and optionally a Path::Class::File object to load a config file.

list
List action which returns the data for an ExtJS grid.

handle_uploads
Handles uploaded files by assigning the filehandle to the column accessor of the DBIC row object. As an upload field is a regular field it gets set twice. First the filename is set and $row->update is called.
This is entirely handled by HTML::FormFu::Model::DBIC. After that, "handle_uploads" is called, which sets the value of an upload field to the corresponding IO::File object. Make sure you test for that, if you plan to inflate such a column. If you want to handle uploads yourself, overwrite "handle_uploads":

sub handle_uploads {
    my ($self, $c, $row) = @_;
    if (my $file = $c->req->uploads->{upload}) {
        $file->copy_to('yourdestination/' . $file->filename);
        $row->upload($file->filename);
    }
}

However, this should be part of the model. Since you cannot upload files with an XMLHttpRequest, ExtJS creates an iframe and issues a POST request in there. If you need to make a PUT request you have to tunnel the desired method using a hidden field, by using the params config option of Ext.form.Action.Submit or extraParams in Ext.Ajax.request. The name of that parameter has to be x-tunneled-method. Make sure you do not include a file field in your GET form definition. It will cause a security error in your browser because it is not allowed to set the value of a file field.

object
REST action which works with single model entities.

object_PUT
REST action to update a single model entity with a PUT request.

object_POST
REST action to create a single model entity with a POST request.

object_PUT_or_POST
Internal method for REST actions to handle the update of a single model entity with PUT or POST requests. This method is called before the form is being processed. To add or remove form elements dynamically, this would be the right place.

object_GET
REST action to get the data of a single model entity with a GET request.

object_DELETE
REST action to delete a single model entity with a DELETE request.

PRIVATE ATTRIBUTES
_extjs_config
This attribute holds the configuration for the controller. It is created by merging __PACKAGE__->config with the default values.

PRIVATE METHODS
These methods are private. Please don't overwrite those unless you know what you are doing.
begin
Run this code before any action in this controller. It sets the ActionClass to CatalystX::Action::ExtJS::Deserialize. This ActionClass makes sure that no deserialization happens if the body's content is a file upload.

end
If the request contains a file upload field, ExtJS expects the JSON response to be serialized and returned in a document with the Content-type set to text/html.

_parse_NSPathPart_attr

_parse_NSListPathPart_attr

CONTRIBUTORS
Mario Minati

AUTHOR
Moritz Onken <onken@netcubed.de>

This software is Copyright (c) 2014 by Moritz Onken.

This is free software, licensed under:

The (three-clause) BSD License
https://metacpan.org/pod/CatalystX::Controller::ExtJS::REST
The File class deals only with the file system. Java provides two special types of stream, FileInputStream and FileOutputStream, to read data from and write data into a file. These classes operate on files in the native file system. FileInputStream and FileOutputStream are subclasses of InputStream and OutputStream, respectively. Usually, we use FileInputStream (for byte streams) or FileReader (for character streams) for reading from a file, and FileOutputStream (for byte streams) or FileWriter (for character streams) for writing into a file. We can create a file stream by giving the file name as a parameter in the form of a String, a File object or a FileDescriptor object. FileWriter and FileOutputStream will create a new file by that name if the file does not already exist. The constructors of these two streams are:

• FileInputStream(String filename);
• FileOutputStream(String filename);
• FileInputStream(File fileObject);
• FileOutputStream(File fileObject);
• FileInputStream(FileDescriptor fdesObject);
• FileOutputStream(FileDescriptor fdesObject);

FileDescriptor is an object that holds information about a file. A FileInputStream object can be created in the following manner:

FileInputStream fin = new FileInputStream(filename);

Alternatively, it can be created with the following set of statements:

File f = new File(filename);
FileInputStream fin = new FileInputStream(f);

A FileOutputStream object can be created as follows:

FileOutputStream fout = new FileOutputStream(filename);

Note that in the case of FileOutputStream, if it is opened on a file that does not already exist, then a new file by that name is created. The program below, implementing the class CopyFile, uses FileInputStream and FileOutputStream to copy the contents of inputFile to outputFile; inputFile and outputFile are the command line arguments args[0] and args[1].
import java.io.*;

public class CopyFile {
    public static void main(String[] args) throws IOException {
        if (args.length < 2)
            System.out.println("\nNo sufficient parameters");
        else {
            File inputFile = new File(args[0]);
            File outputFile = new File(args[1]);
            FileInputStream in = new FileInputStream(inputFile);
            FileOutputStream out = new FileOutputStream(outputFile);
            int c;
            while ((c = in.read()) != -1)
                out.write(c);
            in.close();
            out.close();
        }
    }
}

In the program, the file inputFile is opened using FileInputStream and another file, outputFile, is opened using FileOutputStream. If inputFile does not exist, the program gives an error because FileInputStream cannot work if the file does not exist. On the other hand, if outputFile does not exist, then a file by that name will be created. The program continues to read characters from FileInputStream as long as there are inputs in the input file and writes these characters on to FileOutputStream. When all the inputs have been read, the program closes both FileInputStream and FileOutputStream. The program can also be written using FileReader and FileWriter with the small changes shown below:

File inputFile = new File(args[0]);
FileReader in = new FileReader(inputFile);

This code creates a File object that represents the named file on the native file system. File is a utility class provided by java.io. The CopyFile program uses this file object only to construct a FileReader on a file; however, the program could also use the input file to get information about the file, such as its full path name. If we run the program, an exact copy of inputFile (args[0]) will be found in a file named outputFile (args[1]) in the same directory. It should be remembered that FileReader and FileWriter read and write 16-bit characters; however, most native file systems are based on 8-bit bytes. These streams encode the characters as they operate according to the default character-encoding scheme.
The default character-encoding scheme can be found by using System.getProperty("file.encoding"). To specify an encoding scheme other than the default, we should construct an OutputStreamWriter on a FileOutputStream and specify the encoding scheme. It is important to note that the single-argument FileOutputStream constructor does not append: if the file already exists, FileOutputStream overwrites its contents. To add content at the end of an existing file instead, open the stream with the two-argument constructor FileOutputStream(filename, true), which enables append mode.
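A minimal sketch tying the last two points together: choosing an explicit encoding by wrapping a FileOutputStream in an OutputStreamWriter, and appending via the two-argument constructor (the file is a temporary one and the text is arbitrary):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class EncodedCopyDemo {
    // Writes with an explicit encoding, then appends to the same file
    // and reads it back using the same encoding.
    public static String roundTrip() throws IOException {
        File f = File.createTempFile("demo", ".txt");
        f.deleteOnExit();
        // OutputStreamWriter on a FileOutputStream lets us pick the encoding
        try (Writer out = new OutputStreamWriter(
                new FileOutputStream(f), StandardCharsets.UTF_8)) {
            out.write("hello");
        }
        // FileOutputStream(file, true) opens in append mode instead of truncating
        try (Writer out = new OutputStreamWriter(
                new FileOutputStream(f, true), StandardCharsets.UTF_8)) {
            out.write(", world");
        }
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new FileInputStream(f), StandardCharsets.UTF_8))) {
            return in.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // hello, world
    }
}
```

With ASCII text any encoding looks the same; the explicit charset matters once non-ASCII characters are written, because the reader must then decode with the same scheme the writer used.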
http://ecomputernotes.com/java/stream/file-input-stream-and-file-output-stream
Feb 15, 2019 02:43 PM|ex glider pilot|LINK

Hi, I have a user control on my master page and I am trying to cast the parent page as a strong type. I can't find the right syntax to resolve the page class reference. I have read quite a few posts that seem to skate around the issue I am having, or maybe I'm just not seeing the point. In the user control I have some code which is called from the (Default.aspx) page_init:

Public Property dbmlVBCtrl As MyDataContext
    Get
        If mdbmlVBctrl Is Nothing Then
            '' go off to see if we can load this from the parent page
            Dim pt As Type = Page.GetType()
            If pt.Name = "default_aspx" Then
                Dim st As String = pt.UnderlyingSystemType.ToString()
                '' st values {Name = "default_aspx" FullName = "ASP.default_aspx"}
                Dim parentPage As default_aspx = TryCast(Page, default_aspx) '' build error here: default_aspx is not defined
                Dim parentPage As ASP.default_aspx = TryCast(Page, ASP.default_aspx) '' build error here: ASP.default_aspx is not defined
            End If
            mdbmlVBctrl = New VBCode.MyDataContext(Tools.GetConnectionString())
            mynDbmlVBctrlLoaded = True
        End If
        Return mdbmlVBctrl
    End Get
    Set(ByVal Value As MyDataContext)
        mdbmlVBctrl = Value
    End Set
End Property

All-Star 40651 Points
Feb 15, 2019 04:00 PM|mgebhard|LINK

Just follow the same approach you would use to expose master page members (server controls) to the content page.

Feb 15, 2019 04:07 PM|jzero|LINK

When you have a Master Page, to get any control you have to use Master.FindControl. So if I want to get the text of a label in the Master Page:

Dim myLabel As Label = CType(Master.FindControl("myLabelID"), Label)
Dim lblStr As String = myLabel.Text

Feb 15, 2019 08:11 PM|ex glider pilot|LINK

?

Feb 15, 2019 08:28 PM|ex glider pilot|LINK

Thanks, but I want to access a public (VB) Property on the master page, not a control. This is where I'm having the problem.

All-Star 40651 Points
Feb 15, 2019 09:24 PM|mgebhard|LINK

ex glider pilot? Remember a user control is just a type (class).
When placed inside another class (master page) it becomes a member of that class. In general we expose members through properties, methods, and events. In this case we'll use properties.

User control where the label is hard coded for demo purposes:

<%@ Control ... %>
<asp:Label ID="Message" runat="server">Hello from the user control</asp:Label>

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace WebFormsIdentity.UserControls
{
    public partial class SimpleUc : System.Web.UI.UserControl
    {
        public string Text
        {
            get { return this.Message.Text; }
            set { this.Message.Text = value; }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}

The master page also has a property which exposes the SimpleUc Text property.

<div class="container body-content">
    <div>
        <uc1:SimpleUc ID="SimpleUc" runat="server" />
    </div>
    <asp:ContentPlaceHolder ...>
    </asp:ContentPlaceHolder>
    <hr />
    <footer>
        <p>© <%: DateTime.Now.Year %> - My ASP.NET Application</p>
    </footer>
</div>

public partial class SiteMaster : MasterPage
{
    public string SimpleUcText
    {
        get { return this.SimpleUc.Text; }
        set { this.SimpleUc.Text = value; }
    }
}

The content page grabs the SimpleUc text value and assigns the value to the Result label. Note the page directive provides intellisense access to the master page.

<%@ Page ... %>
<asp:Content ...>
    <asp:Label ID="Result" runat="server"></asp:Label>
</asp:Content>

namespace WebFormsIdentity
{
    public partial class SimplePage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Result.Text = Master.SimpleUcText;
        }
    }
}

The results are:

Hello from the user control
Hello from the user control

Now you can set the SimpleUc text from the content page like so.

namespace WebFormsIdentity
{
    public partial class SimplePage : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Result.Text = Master.SimpleUcText;
            Master.SimpleUcText = "Hello from the content page";
        }
    }
}

Results:

Hello from the content page
Hello from the user control

5 replies
Last post Feb 15, 2019 09:24 PM by mgebhard
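For the original VB question (reading a public property rather than a control), the same pattern can be sketched from the user control's side by casting Me.Page.Master to the master page's class. SiteMaster and SimpleUcText below are the names from the C# example above and stand in for whatever your master page actually defines:

```vb
' Inside the user control's code-behind (VB sketch, names assumed)
Dim mp As SiteMaster = TryCast(Me.Page.Master, SiteMaster)
If mp IsNot Nothing Then
    Dim value As String = mp.SimpleUcText ' read the master page property
End If
```

TryCast returns Nothing instead of throwing when the control is hosted on a different master, so the Nothing check keeps the control reusable.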
https://forums.asp.net/t/2152726.aspx?communicate+between+user+control+on+masterpage+and+page+and+then+user+control+on+page+
Sync music and gaming events in Unity

I have tried to describe a design that is convenient for programming a large number of game events and getting them running quickly in an optimal way. It can be applied in almost any game, and perhaps it will be useful to you if you develop them. So, first you need to define an event class:

[Serializable]
public class Game_event {
    public char key;    // which event occurs depends on the key
    public float time;  // time since the start of the sound at which the event fires
    [NonSerialized] public float finish_time; // needed so that the event is not created again after completion
    [NonSerialized] public bool active;       // true while the event is running

    public bool isFinish() { // checks whether the event has completed
        return false;
    }

    public void Create() {
        // creates the objects needed for the event
        // it is important that all object movement depends on (Main.sound_time - time)
    }

    public void Destroy() {
        // deletes them
    }

    public Game_event(float time, char key) {
        this.time = time;
        this.key = key;
    }
}

Next, you need a class inherited from MonoBehaviour containing the main code and, of course, a reference to an audio object. In my case it is the class Main.
public static float sound_time = 0; // Global variable storing the current playback time of the sound
public static List<Game_event> game_event = new List<Game_event>(); // List of events
public static Game_event current_event; // Declared here so the assignment below compiles

void Update () {
    sound_time = sound.time; // sound is an AudioSource object that plays the music
    foreach (Game_event e in game_event) {
        if (sound_time >= e.time && sound_time < e.finish_time && !e.active) {
            e.active = true;
            e.finish_time = float.MaxValue;
            current_event = e;
            e.Create();
        }
        if (e.active)
            if (e.isFinish()) // isFinish can be resource-intensive, which is why activity is checked before it
            {
                e.active = false;
                e.finish_time = sound_time;
                e.Destroy();
            }
    }
}

There are several ways to create a variety of events: by constructing Game_event objects directly in code, by creating additional classes, or by using a scripting language like Lua, which of course is more convenient.

Editor

The most convenient way to edit, in my opinion, is to bind events to keys; level creation then turns into "playing the piano", where your task is just to press the keys in rhythm with the music. That is why a key is stored as a character. To implement this, you need to define the available input keys:

public static char[] keys_s = {
    'Q','W','E','R','T',
    'A','S','D','F','G',
    'Z','X','C','V','B'};

// And add the following code
void Update () {
    Event c_e = Event.current;
    if (c_e.isKey && c_e.type == EventType.KeyDown) {
        if (Array.Exists(Main.keys_s, c => c == c_e.keyCode.ToString()[0])) // Check whether the pressed key is in the array of allowed keys
        {
            game_event.Add(new Game_event(sound_time, c_e.keyCode.ToString()[0]));
        }
    }
}

Now when you press a key, a corresponding event is added to the list and will be played back in sync with the sound. This can be very useful for lining events up against the waveform of the sound.
Get the texture with the image in the following way:

float[] samples = new float[sound.clip.samples * sound.clip.channels];
sound.clip.GetData(samples, 0); // Gets the array of sample data from which to build the texture
int frequency = sound.clip.frequency; // The sample rate of the clip
int scale = 10; // Pixels per 1 second of sample
SoundTex = new Texture2D((int)(sound.clip.length * sound.clip.channels * scale), 200);
int height = (int)(SoundTex.height / 2);
for (int i = 0; i < SoundTex.width; i++) {
    int c_hi = 0;
    int c_low = 0;
    float s_hi = 0;
    float s_low = 0;
    // Compute the average lower (negative) and upper (positive) sample values for this 1px column
    for (int k = 0; k < (int)(frequency / scale); k++) {
        if (samples[k + i * (int)(frequency / scale)] >= 0) {
            c_hi++;
            s_hi += samples[k + i * (int)(frequency / scale)];
        } else {
            c_low++;
            s_low += samples[k + i * (int)(frequency / scale)];
        }
    }
    // Draw a line from the lower average to the upper average
    // Make the inner part lighter and the edges darker, solely for looks
    for (int j = 0; j < (int)(SoundTex.height); j++) {
        if (j < (int)((s_hi / c_hi) * height * 0.6f + height) && j > (int)((s_low / c_low) * height * 0.6f + height))
            SoundTex.SetPixel(i, j, new Color(0.7f, 1, 0.7f));
        else if (j <= (int)((s_hi / c_hi) * height + height) && j >= (int)((s_low / c_low) * height + height))
            SoundTex.SetPixel(i, j, new Color(0, 1, 0));
        else
            SoundTex.SetPixel(i, j, new Color(0, 0, 0, 0));
    }
}
SoundTex.Apply(); // Apply changes to the texture
// The result can be seen in the main picture

You can watch a short video of what this looks like in the editor.
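The per-pixel averaging in the loop above can be sketched outside Unity as well; here is a minimal Python version of the same idea (function and variable names are illustrative, not from the article):

```python
# Downsample a mono sample array into per-pixel (low, hi) bars:
# hi is the mean of the non-negative samples in each chunk and
# low is the mean of the negative ones, mirroring the C# loop above.
def column_averages(samples, samples_per_pixel):
    columns = []
    for i in range(len(samples) // samples_per_pixel):
        chunk = samples[i * samples_per_pixel:(i + 1) * samples_per_pixel]
        pos = [s for s in chunk if s >= 0]
        neg = [s for s in chunk if s < 0]
        hi = sum(pos) / len(pos) if pos else 0.0
        low = sum(neg) / len(neg) if neg else 0.0
        columns.append((low, hi))  # draw a vertical bar from low to hi
    return columns

print(column_averages([0.5, -0.5, 1.0, -1.0], 2))  # [(-0.5, 0.5), (-1.0, 1.0)]
```

One difference from the C# version: this sketch guards against chunks that contain no positive (or no negative) samples, where `s_hi / c_hi` would otherwise divide by zero.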
https://unionassets.com/blog/sync-music-and-gaming-events-at-unity-193
CC-MAIN-2017-13
refinedweb
784
63.19
packaging bitmaps in exe
By rm65453, in AutoIt General Help and Support

Recommended Posts

Similar Content

- By kuhicop
Hello, I need to find an image on screen and return its position: left, top, right, bottom. I'm using the ImageSearch function but it only returns 1 or 0. Any ideas? Thanks!

- By Errious
Hello, maybe I am just tired or I did not understand exactly how to use ImageSearch for different pictures, but I have tried a few things with my example script and only the first part is working. To explain what I am trying to do: at work we have a program running which has three different states. They are visible, but sometimes the process is not running, and to avoid long stretches without progress I looked for a more visible notification than the existing one. Grey means work in progress, orange means not running and red means aborted; when the state changes I would like a little window that simply shows the current status.

#include <ImageSearch2015.au3>
#include <MsgBoxConstants.au3>
#include <AutoItConstants.au3>

$x1=0
$y1=0

While 1
    sleep(100)
    $image1 = _ImageSearch("red.png", 1, $x1, $y1, 0)
    If $image1 = 1 Then
        SplashTextOn("", "RED !", 100, 50, 1800, 220, $DLG_TEXTLEFT, "Arial", 12, 500)
        Sleep(1000)
        SplashOff()
        sleep(100)
    EndIf
    $image2 = _ImageSearch("orange.png", 1, $x1, $y1, 0)
    If $image2 = 1 Then
        SplashTextOn("", "Orange !", 100, 50, 1800, 220, $DLG_TEXTLEFT, "Arial", 12, 500)
        Sleep(1000)
        SplashOff()
        sleep(100)
    EndIf
    $image3 = _ImageSearch("grey.png", 1, $x1, $y1, 0)
    If $image3 = 1 Then
        SplashTextOn("", "Grey !", 100, 50, 1800, 220, $DLG_TEXTLEFT, "Arial", 12, 500)
        Sleep(1000)
        SplashOff()
    EndIf
WEnd

As I said before, part one (the aborted state) is working, so I get the splash window for it, but what did I miss for the other two?
I put the While statement in so it runs continuously for as long as it is needed while the program is in use, but I believe I missed something.

- By VIP
https://www.autoitscript.com/forum/topic/179961-packaging-bitmaps-in-exe/
CC-MAIN-2019-13
refinedweb
344
58.52
Aahz <aahz@pythoncraft.com> writes:

> I'm fine with "local scope" and "object attributes" to disambiguate
> them; I just think it's important that people understand that a name is
> a name is a name, and all names live in *some* namespace.

That isn't really true: a computed attribute lives in no namespace; instead, some function is invoked to determine the attribute value. Furthermore, some attributes live in multiple namespaces. Given obj.name, what namespace is considered to find the name? NOT the namespace of obj alone - Python also considers the namespace of obj's class (if obj is an instance), of the base classes, etc. OTOH, obj.name = None modifies the namespace of obj (unless name is a computed attribute).

Regards,
Martin
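A short sketch of the lookup rules described above — the class is hypothetical, but the behaviour is standard Python:

```python
class Temperature:
    default = 20  # lives in the class namespace

    def __init__(self, celsius):
        self.celsius = celsius  # lives in the instance namespace

    @property
    def fahrenheit(self):  # computed attribute: no namespace entry holds its value
        return self.celsius * 9 / 5 + 32

t = Temperature(10)
print(t.fahrenheit)             # computed on access: 50.0
print('fahrenheit' in vars(t))  # False: not stored in the instance namespace
print(t.default)                # found via the class namespace: 20
t.default = 0                   # assignment modifies the *instance* namespace only
print(Temperature.default)      # class namespace unchanged: 20
```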
https://mail.python.org/pipermail/python-dev/2002-April/022088.html
CC-MAIN-2021-31
refinedweb
124
60.35
Hi, I am going to create a very large array but get this error when running this code:

#include <iostream>
#include <limits.h>
#include <cstddef>
#include <cmath>
using namespace std;

int main()
{
    cout << UINT_MAX << " " << ULLONG_MAX << endl;
    cout << pow(pow(24,2),4) << endl;
    unsigned long long int n = pow(pow(24,2),4);
    cout << n << endl;
    double * p = new double (pow(pow(24,2),4));
    for (unsigned long long int i = 0; i < n; i++)
        p[i] = i;
    delete [] p;
    return 0;
}

The run stops at i = 16895, giving

Debugger name and version: GNU gdb 6.8-debian
Program received signal SIGSEGV, Segmentation fault.

What are the ways to deal with large arrays? I am not sure if the C++ std library's various containers will work, but I guess they might be too slow and inefficient, so is it slightly better to stick to a normal array? Thanks in advance!
https://www.daniweb.com/programming/software-development/threads/181360/how-to-get-and-use-large-size-array
CC-MAIN-2017-30
refinedweb
152
65.76
Handling missing values – Part 1

Hi ML Enthusiasts! Today, we will learn how to handle missing values in Python using pandas and numpy. If you are new to this, we would advise you to first go through our introductory tutorials on both these libraries: Introduction to NumPy, Introduction to Pandas, Basics of NumPy arrays – Part 1 and Basics of NumPy Arrays Part – 2.

Dynamics behind None and NaN

There are two ways to denote missing values in Python: None and NaN (Not a Number). Let's examine them, but, before doing that, let's first import the libraries:

import pandas as pd
import numpy as np

None has object data type, and if it's included in any array, all the elements of that array get converted to object data type too. For example:

np.array([None])
array([None], dtype=object)

np.array([0.5, 5, 9.5, None])
array([0.5, 5, 9.5, None], dtype=object)

np.array([0.5, 5, 9.5]).dtype
dtype('float64')

As you will have noticed from the above examples, numpy arrays get converted to the highest-level data type. With the inclusion of None as one of the array elements, the array got converted to object data type. It is advisable not to use None much, as it takes a lot of time to execute, especially in loops. Numpy arrays based on native data types generally take very little time, but they don't perform that well with object data type. With None included in your data, you won't be able to perform even basic calculations like sum, min, max, etc.
np.array([0.5, 5, 9.5, None]).sum() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-00832b7d6dec> in <module>() ----> 1 np.array([0.5, 5, 9.5, None]).sum() /usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py in _sum(a, axis, dtype, out, keepdims, initial, where) 36 def _sum(a, axis=None, dtype=None, out=None, keepdims=False, 37 initial=_NoValue, where=True): ---> 38 return umr_sum(a, axis, dtype, out, keepdims, initial, where) 39 40 def _prod(a, axis=None, dtype=None, out=None, keepdims=False, TypeError: unsupported operand type(s) for +: 'float' and 'NoneType' np.array([0.5, 5, 9.5, None]).min() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-7-dc6dbd26468d> in <module>() ----> 1 np.array([0.5, 5, 9.5, None]).min() /usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py in _amin(a, axis, out, keepdims, initial, where) 32 def _amin(a, axis=None, out=None, keepdims=False, 33 initial=_NoValue, where=True): ---> 34 return umr_minimum(a, axis, None, out, keepdims, initial, where) 35 36 def _sum(a, axis=None, dtype=None, out=None, keepdims=False, TypeError: '<=' not supported between instances of 'float' and 'NoneType' np.array([0.5, 5, 9.5, None]).max() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-442472594ea8> in <module>() ----> 1 np.array([0.5, 5, 9.5, None]).max() /usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py in _amax(a, axis, out, keepdims, initial, where) 28 def _amax(a, axis=None, out=None, keepdims=False, 29 initial=_NoValue, where=True): ---> 30 return umr_maximum(a, axis, None, out, keepdims, initial, where) 31 32 def _amin(a, axis=None, out=None, keepdims=False, TypeError: '>=' not supported between instances of 'float' and 'NoneType' Handling missing values with 
None included can make life a little messy. But don't worry! We have another way to represent missing values, and that is NaN (Not a Number), which comes to our rescue!

np.array([np.nan])
array([nan])

np.array([np.nan]).dtype
dtype('float64')

np.array([1, 2, 2.5, np.nan])
array([1. , 2. , 2.5, nan])

np.array([1, 2, 2.5, np.nan]).dtype
dtype('float64')

Thus, from the above examples, we can see that the default data type of NaN is float64, which is one of the native data types of numpy, and this makes manipulations and calculations much faster with NaN than with None (which has object data type).

NaN converts everything into NaN in which it's included

6 + 7 + 8 + np.nan
nan

0 - np.nan
nan

np.array([1, 2, 2.5, np.nan]).sum()
nan

np.array([1, 2, 2.5, np.nan]).min()
nan

np.array([1, 2, 2.5, np.nan]).max()
nan

You may be thinking that it's just like coronavirus! It infects everything it comes in contact with! So, what's the way out of this? Well, Python has ways out of every problem!

np.nansum(np.array([1, 2, 2.5, np.nan]))
5.5

np.nanmax(np.array([1, 2, 2.5, np.nan]))
2.5

np.nanmin(np.array([1, 2, 2.5, np.nan]))
1.0

Functions for handling missing values

Now, let's learn how to handle missing or null values. The following functions are used for this purpose:

- isnull(): gives True/False as output depending on the presence of null values. Where missing values are present, True values are returned; else, False values are returned.
- notnull(): It's the opposite of isnull().
- dropna(): This returns all items with missing values excluded.
- fillna(): Function used for the purpose of missing value imputation – i.e. – replacing missing values with the mean, median, mode, etc. of the specific column to which the missing values belong.

Handling missing values – Detection

First, let's learn how to detect missing values.

# Let's pass a numpy array having missing value in pd.DataFrame function and then apply isnull() function on it.
pd.DataFrame(np.array([1, 2, 2.5, np.nan])).isnull() As can be seen, out of four values, the first three return False and last one returns True. Now, let’s apply notnull() and see what its outcome is! # Let's pass a numpy array having missing value in pd.DataFrame function and then apply notnull() function on it. pd.DataFrame(np.array([1, 2, 2.5, np.nan])).notnull() As was expected, the opposite of isnull is returned. Out of four values, first three return True and last one False. Now, let’s fetch the subset of not null values. #Below code returns all values of df which are not null df = pd.DataFrame(np.array([1, 2, 2.5, np.nan])) df[df[0].notnull()] Handling missing values – dropping Now, let’s learn how to drop null values using dropna() function. df.dropna() This drops all the rows/columns having na values. df = pd.DataFrame([[1, 2, np.nan], [7, 5, 3], [np.nan, 1, 34]]) df df.dropna() By default, dropna drops rows having na values. To turn this into columns, we have to pass ‘columns’ to axis parameter in dropna function. By default, ‘rows’ are passed as argument to axis parameter. df.dropna(axis='columns') Now, let’s examine the how parameter of dropna function. But first, let’s introduce one more column to df function having only NaN values. df[3] = np.nan df Now, let’s see how passing different arguments in how parameter can impact the output. By defualt, ‘any’ is passed to how parameter. Let’s examine the documentation of dropna first. df.dropna? Signature: df.dropna(axis=0, how=’any’, thresh=None, subset=None, inplace=False) Docstring: Remove missing values. See the :ref: User Guide <missing_data> for more on which values are considered missing, and how to work with missing data. Parameters axis : {0 or ‘index’, 1 or ‘columns’}, default 0 Determine if rows or columns which contain missing values are removed. * 0, or 'index' : Drop rows which contain missing values. * 1, or 'columns' : Drop columns which contain missing value. .. 
deprecated:: 0.23.0 Pass tuple or list to drop on multiple axes. Only a single axis is allowed.

how : {'any', 'all'}, default 'any'
    Determine if row or column is removed from DataFrame, when we have at least one NA or all NA.
    * 'any' : If any NA values are present, drop that row or column.
    * 'all' : If all values are NA, drop that row or column.
thresh : int, optional
    Require that many non-NA values.
subset : array-like, optional
    Labels along other axis to consider, e.g. if you are dropping rows these would be a list of columns to include.
inplace : bool, default False
    If True, do operation inplace and return None.

Returns
DataFrame
    DataFrame with NA entries dropped from it.

df.dropna(axis='columns')

df.dropna(axis='columns', how='all')

Thus, the column having all NaN values is dropped, and the ones having even one non-null value are not dropped.

The parameter thresh specifies the minimum number of non-null values a row must have to be kept, which is 3 in the case below:

df.dropna(thresh = 3)

So, guys, with this I conclude this tutorial, Handling missing values – Part 1. In part 2 of this tutorial, we will see the ways which can be used to fill missing values, i.e., how to do missing value imputation. Stay tuned!
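As a small preview of part 2, fillna() from the list above imputes missing values in a single call; the DataFrame here is made up for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                   'b': [np.nan, 5.0, 7.0]})

# Replace every NaN with a constant
filled_zero = df.fillna(0)

# Missing value imputation: fill each column's NaNs with that column's mean
filled_mean = df.fillna(df.mean())

print(filled_mean)  # column a becomes [1.0, 2.0, 3.0], column b becomes [6.0, 5.0, 7.0]
```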
https://mlforanalytics.com/2020/03/29/handling-missing-values-part-1/
CC-MAIN-2021-21
refinedweb
1,510
67.96
“I mean, the initial interface — where I focus heavily on the server and economic aspects of the game — is perfectly suited for React. But what about when I start to make the farming /interaction aspects? I love the idea of building an isometric interface around the economic system. I once watched a talk by dead_lugosi, where she described building a medieval game in PHP. Margaret inspired me, and that talk was one of the things that led to me writing a book about JS game development. I became determined to write about my experience. Perhaps others could learn from my mistakes in this case, too. The code for this part can be found at: github.com/assertchris-tutorials/sitepoint-making-games/tree/part-1. I’ve tested it with PHP 7.1 and in a recent version of Google Chrome. Setting Up the Back End The first thing I searched for was guidance on building multiplayer economies. I found an excellent Stack Overflow thread in which folks explained various things to think about. I got about halfway through it before realizing I may have been starting from the wrong place. "First things first: I need a PHP server. I’m going to have a bunch of React clients, so I want something capable of high-concurrency (perhaps even WebSockets). And it needs to be persistent: things must happen even when players aren’t around." I went to work setting up an async PHP server — to handle high concurrency and support WebSockets. I added my recent work with PHP preprocessors to make things cleaner, and made the first couple of endpoints. From config.pre: $host = new Aerys\Host(); $host->expose("*", 8080); $host->use($router = Aerys\router()); $host->use($root = Aerys\root(.."/public")); $web = process .."/routes/web.pre"; $web($router); $api = process .."/routes/api.pre"; $api($router); I decided to use Aerys for the HTTP and WebSocket portions of the application. This code looked very different from the Aerys docs, but that’s because I had a good idea about what I needed. 
The usual process for running an Aerys app was to use a command like this: vendor/bin/aerys -d -c config.php That’s a lot of code to keep repeating, and it didn’t handle the fact that I wanted to use PHP preprocessing. I created a loader file. From loader.php: return Pre\processAndRequire(__DIR__ . "/config.pre"); I then installed my dependencies. This is from composer.json: "require": { "amphp/aerys": "dev-amp_v2", "amphp/parallel": "dev-master", "league/container": "^2.2", "league/plates": "^3.3", "pre/short-closures": "^0.4.0" }, "require-dev": { "phpunit/phpunit": "^6.0" }, I wanted to use amphp/parallel, to move blocking code out of the async server, but it wouldn’t install with a stable tag of amphp/aerys. That’s why I went with the dev-amp_v2 branch. I thought it would be a good idea to include some sort of template engine and service locator. I opted for PHP League versions of each. Finally I added pre/short-closures, both to handle the custom syntax in config.pre and the short closures I planned on using after… Then I set about creating routes files. From routes/web.pre: use Aerys\Router; use App\Action\HomeAction; return (Router $router) => { $router->route( "GET", "/", new HomeAction ); }; And, from routes/api.pre: use Aerys\Router; use App\Action\Api\HomeAction; return (Router $router) => { $router->route( "GET", "/api", new HomeAction ); }; Though simple routes, these helped me to test the code in config.pre. I decided to make these routes files return closures, so I could pass them a typed $router, to which they could add their own routes. Finally, I created two (similar) actions. From app/Actions/HomeAction.pre: namespace App\Action; use Aerys\Request; use Aerys\Response; class HomeAction { public function __invoke(Request $request, Response $response) { $response->end("hello world"); } } One final touch was to add shortcut scripts, to launch dev and prod versions of the Aerys server. 
From composer.json:

"scripts": {
    "dev": "vendor/bin/aerys -d -c loader.php",
    "prod": "vendor/bin/aerys -c loader.php"
},
"config": {
    "process-timeout": 0
},

With all of this done, I could spin up a new server, and visit it, just by typing:

composer dev
https://jsobject.info/2017/09/13/game-development-with-react-and-php-how-compatible-are-they/
CC-MAIN-2018-26
refinedweb
718
66.44
Wrap long words in JTextPane (Java 7) In all versions of Java up to 6, the default behaviour of a JTextPane put inside a JScrollPane was: wrap lines at word boundaries if possible. If not, then wrap them anyway. In JDK 7, the default behaviour seems to be: wrap lines at word boundaries if possible. If not, just expand the width of the JTextPane (never wrap long words). It is easy to reproduce this, here is a SSCCE: public class WrappingTest extends JFrame { public static void main ( String[] args ) { new WrappingTest(); } public WrappingTest () { setSize(200,200); getContentPane().setLayout(new BorderLayout()); JTextPane jtp = new JTextPane(); JScrollPane jsp = new JScrollPane(jtp); jsp.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED); getContentPane().add(jsp,BorderLayout.CENTER); setVisible(true); } } Just run it in JDK 6 and in JDK 7, write some small words, and write a long word, and you will see the difference. My question is simple... the new default behaviour in JDK 7 totally messes a program of mine (they should be more careful at Oracle with changing this kind of defaults... they seem unimportant but when you're using a JTextPane to display data that usually contains very long strings of letters, they're not so unimportant - in fact I'm going to file a bug report, but I'd like to have a workaround while/if they don't resolve it). Any way to go back to the previous behaviour? Note that I have checked the answer to the related question How is word-wrapping implemented in JTextPane, and how do I make it wrap a string without spaces? but it doesn't answer this question - it provides a way of making the JTextPane wrap without any regard at all for whitespace, but for me the desired behaviour is split lines at whitespace if possible, and elsewhere if not possible (as in previous Java versions). 
Answers

For me the fix works (tested under 1.7.0_09):

import javax.swing.*;
import javax.swing.text.*;
import java.awt.*;

public class WrapTestApp extends JFrame {
    public static void main(String[] args) {
        new WrapTestApp();
    }

    public WrapTestApp() {
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setSize(200, 200);
        getContentPane().setLayout(new BorderLayout());
        JTextPane jtp = new JTextPane();
        jtp.setEditorKit(new WrapEditorKit());
        JScrollPane jsp = new JScrollPane(jtp);
        jsp.setVerticalScrollBarPolicy(JScrollPane.VERTICAL_SCROLLBAR_AS_NEEDED);
        getContentPane().add(jsp, BorderLayout.CENTER);
        jtp.setText("ExampleOfTheWrapLongWordWithoutSpaces");
        setVisible(true);
    }

    class WrapEditorKit extends StyledEditorKit {
        ViewFactory defaultFactory = new WrapColumnFactory();
        public ViewFactory getViewFactory() {
            return defaultFactory;
        }
    }

    class WrapColumnFactory implements ViewFactory {
        public View create(Element elem) {
            String kind = elem.getName();
            if (kind != null) {
                if (kind.equals(AbstractDocument.ContentElementName)) {
                    return new WrapLabelView(elem);
                } else if (kind.equals(AbstractDocument.ParagraphElementName)) {
                    return new ParagraphView(elem);
                }
            }
            // default to text display
            return new LabelView(elem);
        }
    }

    class WrapLabelView extends LabelView {
        public WrapLabelView(Element elem) {
            super(elem);
        }

        public float getMinimumSpan(int axis) {
            switch (axis) {
                case View.X_AXIS:
                    return 0;
                case View.Y_AXIS:
                    return super.getMinimumSpan(axis);
                default:
                    throw new IllegalArgumentException("Invalid axis: " + axis);
            }
        }
    }
}

Good catch from @dk89, but alas the given workarounds don't work: JDK 7 apparently still doesn't offer a way to set a custom BreakIterator on a JTextComponent; not even on a GlyphView, where the generation of the BreakIterator is private. And if we insert the string char by char, it still doesn't work: I suppose the consecutive runs of text with identical style (AttributeSet) are collapsed together.
I have spent two days trying to do a custom EditorKit, as advised elsewhere, but it doesn't work well (with JDK 1.7.0_4 at least) as the text. I tried the solution given at How to word wrap text stored in JTextPanes which are cells in a JList and a variant found at But I found out that the breakView is no longer called when the JTextPane is smaller than the longest word in the sentence. So it doesn't work at all when there is only one (long) word. That's our case, as we display user-provided, identifier-like strings, often without spaces, in rather small spaces. I finally found a simple solution, derived from the suggestion in the bug report: indeed, insert the string char by char, but alternate styles! Thus, we have as many segments as we have chars, and the string is wrapped at char bounds. Until the next "bug fix"? Code snippets: private JTextPane tp; private SimpleAttributeSet sas = new SimpleAttributeSet(); tp= new JTextPane(); sas.addAttribute( "A", "C" ); // Arbitrary attribute names and value, not used actually // Set the global attributes (italics, etc.) tp.setParagraphAttributes(styleParagraphAttributes, true); Document doc = tp.getDocument(); try { doc.remove(0, doc.getLength()); // Clear for (int i = 0; i < textToDisplay.length(); i++) { doc.insertString(doc.getLength(), textToDisplay.substring(i, i+1), // Change attribute every other char i % 2 == 0 ? null : sas); } } catch (BadLocationException ble) { log.warn("Cannot happen...", ble); } As stated in the bug, they should have provided an easy way (some property perhaps, or some injectable stuff) to revert to the old behavior. Take a look at this bug: Hi I've had the same problem but found a work-around: just create an extended class of JTextPane e.g. 
MyWrapJTextPane extends JTextPane and override the following method - it works ;-)

public boolean getScrollableTracksViewportWidth() {
    return true;
}
http://www.brokencontrollers.com/faq/11229003.shtml
CC-MAIN-2019-35
refinedweb
880
54.52
I would like to give my feedback on Cinnamon 1.4, since I think that there's a couple of things that would improve an already amazing DE even further. I came to Linux Mint 12 from Windows 7 after trying Ubuntu 11.04. IMO, Mint is the best distro out there and I love the commitment to provide the most polished user experience above a philosophy of pure open source. Some may not like this, but it's one of the main reasons why I like Mint. I also have great respect for the vision of Clement Lefebvre and the great work that was done with MATE and Cinnamon. In Cinnamon there are some small improvements that I feel should be added to the system:

- In the Cinnamon Settings - Themes --> Other Settings there should be a preview of how changing the different themes will change the look of the windows, icons, GTK+, etc., with at least a screenshot of the new look. To see how changing themes impacts the looks of the system, it is really annoying to have to log out every. single. time!

- I'm not sure if I just didn't find this feature or it is non-existent, but is it possible to select more than one image for the wallpaper and have it as a slideshow? If not, then it would be a great addition to the theme.

- The file manager is too simplistic, and at least needs an Up button. It's one thing that I really miss. Not sure if it is the default Gnome 3 file manager, but if so, maybe you guys should consider forking Dolphin from KDE and implementing it in Cinnamon. It's much more versatile and powerful than Nautilus.

- Not sure if new applets can be developed for this, but it would be great if I could pin bookmarks and folders that I open frequently in the file manager applet, or if the Places and Bookmarks applet could be further developed to include this. Maybe work on the Recent Documents applet to allow us to pin frequently visited folders/documents in that list.
- Replace the Cinnamon Window List with an icon list with window previews (with close buttons in the previews) - ideally the icon list would allow me to pin frequently visited files of each of the icons to them. Think something similar to DockbarX (maybe port that to Cinnamon?).

- Include more applets, themes and extensions when installing Cinnamon. It's annoying having to visit the website and install each of these manually. I know that there's already a repository with many of these available, but something more "official" should be created.

- Another thing: although the default Cinnamon theme is great, I would love to see the classic Mint menu make a return (if not as default, at least as a pre-installed applet). If possible, adopt a theme close to the Gnome 2 Mint look, because it was simply beautiful. At least change the default window theme from the horrible Gnome 3 default to the classic Mint Z theme.

- When Panel Edit mode is off it should be impossible to add applets to the panel, and right-clicking on applets should give contextual menus related to said applets to increase their functionality. When Panel Edit mode is on, allow configuration of applets as well.

I know that many of these things will probably take a lot of work, and I would love to help if I were anything more than a mere user with no programming skills whatsoever, so I hope not to offend anyone with this feedback. I just want to help Mint jump to greater heights and this is what I can do to help. When I have time I'll submit some of these ideas to the Community website if I get some feedback in here.
https://forums.linuxmint.com/viewtopic.php?p=568386
CC-MAIN-2020-24
refinedweb
642
66.47
I have a dataframe that looks like:

      field1 field2 field3
time
t1    1      1      1
t2    1      1      0
t3    2      3      1
t4    3      3      0
t5    1      2      0

where the index holds timestamps in yyyy-mm-dd hh:mm:ss format, e.g. t1='2016-07-20 00:00:00', t2='2016-07-20 00:01:00', t3='2016-07-20 00:03:00'. I want to end up with a frame keyed by (field1,field2) that sums, for each pair, how long field3 stayed at each value:

        field3=0   field3=1
(1,1)   2 min      1 min
(2,3)   ...        ...
(3,3)   ...        ...
(1,2)   ...        ...

where each row's field3 state lasts from its timestamp until the next row's (so t1's state lasts t2 - t1 = 1 min, t2's lasts t3 - t2 = 2 min, and the last row's lasts current_time minus its timestamp).

import pandas as pd
from collections import defaultdict, namedtuple

# so I can create a defaultdict(Field3) and save some logic
class Field3(object):
    def __init__(self):
        self.zero = pd.Timedelta('0 days')
        self.one = pd.Timedelta('0 days')

# used to map to field3 in a dictionary
Sensor = namedtuple('Sensor', 'field1 field2')

# the dataframe mentioned above
df = pd.DataFrame(...)

# iterate through each row of the dataframe and map from (field1,field2) to
# field3, adding time based on the value of field3 in the frame and the
# time difference between this row and the next
rows = list(df.iterrows())
sensor_to_field3 = defaultdict(Field3)
for i in xrange(len(rows) - 1):
    sensor = Sensor(field1=rows[i][1][0], field2=rows[i][1][1])
    if rows[i][1][2]:
        sensor_to_field3[sensor].one += rows[i + 1][0] - rows[i][0]
    else:
        sensor_to_field3[sensor].zero += rows[i + 1][0] - rows[i][0]

sensor_to_status = {k: [v] for k, v in sensor_to_field3.iteritems()}
result = pd.DataFrame(sensor_to_status, index=[0])

Managed to get it, in case anyone else runs into something remotely similar. Still not sure if it's optimal, but it feels better than what I was doing. I changed the original dataframe to include the time as a column, and just use integer indices.
import numpy as np
import pandas as pd
from datetime import datetime

def create_time_deltas(dataframe):
    # add a timedelta column
    dataframe['timedelta'] = pd.Timedelta(minutes=0)
    # iterate over each row and set the timedelta to the difference of the next one and this one
    for i in dataframe.index[:-1]:
        dataframe.set_value(i, 'timedelta', dataframe.loc[i + 1, 'time'] - dataframe.loc[i, 'time'])
    # set the last time value, which couldn't be set earlier because index out of bounds
    dataframe.set_value(i + 1, 'timedelta', pd.to_datetime(datetime.now()) - dataframe.loc[i + 1, 'time'])
    return dataframe

def group_by_field3_time(dataframe, start=None, stop=None):
    # optionally set time bounds on what to care about
    stop = stop or pd.to_datetime(datetime.now())
    recent = dataframe.loc[np.logical_and(start < dataframe['time'], dataframe['time'] < stop)]
    # groupby and apply to create a new dataframe with the time_deltas column
    by_td = recent.groupby(['field1', 'field2']).apply(create_time_deltas)
    # sum the timedeltas for each triple, which can be used later
    by_oc = by_td.groupby(['field1', 'field2', 'field3']).sum()
    return by_oc

If anyone can think of a better way to do this I'm all ears, but this does feel a lot better than creating dictionaries all over the place.
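For what it's worth, a vectorized variant of the same computation is possible with shift() instead of the row loop — a sketch, assuming the same time/field1/field2/field3 columns (the sample frame is invented):

```python
from datetime import datetime

import pandas as pd

df = pd.DataFrame({
    'time': pd.to_datetime(['2016-07-20 00:00:00',
                            '2016-07-20 00:01:00',
                            '2016-07-20 00:03:00']),
    'field1': [1, 1, 2],
    'field2': [1, 1, 3],
    'field3': [1, 0, 1],
})

# Each row's state lasts until the next row's timestamp;
# close the last open interval at "now", as in the loop above.
end = df['time'].shift(-1).fillna(pd.Timestamp(datetime.now()))
df['timedelta'] = end - df['time']

# Total time spent in each (field1, field2, field3) state
totals = df.groupby(['field1', 'field2', 'field3'])['timedelta'].sum()
```

shift(-1) pairs every timestamp with its successor in one pass, so no iterrows or per-row set_value is needed.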
https://codedump.io/share/jNKjeD7w3nfX/1/sum-datetime-differences-based-on-column-value
22 September 2009 05:52 [Source: ICIS news] By John Richardson More data specific to polymers and chemicals have emerged, illuminating just how staggering the rebound in demand has been in the world’s most important petrochemicals market. The country's benzene, vinyl chloride monomer (VCM), methanol and propylene imports, meanwhile, soared 100-550% over the same period, the publishing company added. During the last economic recession, imports spiked in the period December 2001-February 2002, said Jean Sudol, president of ITP. This followed the perception that petrochemical prices had bottomed out. “What was different then versus now is that fewer products were involved. The spikes were nothing like the magnitude we are seeing now, and the surge only lasted one to three months. This time, it’s endured for seven to eight months,” Sudol said. Evidence of weaker demand has emerged over the last few weeks. Is this demand decline partly the result of too much inventory rebuilding of chemicals, polymers and of semi-finished and finished goods? All will hopefully become a little clearer after the very long Chinese national holidays from 1-8 October. It is hard to discern to what degree recent sales dips are due to business winding down ahead of the holiday break, overstocking and bleaker economic prospects. On the surface, a lot of the macroeconomic numbers look terrific. But retail sales include government purchases and shipments to shopkeepers before any sales to actual consumers are recorded. “This makes them a very bad proxy for consumption,” writes Michael Pettis on his blog, China Financial Markets. Pettis is a professor at Peking University. The China Economic Quarterly (CEQ), an online research publication, agreed that the retail sales numbers were not much use in tracking genuine consumption. Even government officials failed to attach much credence to them, it added.
Unlike the more pessimistic Pettis, CEQ believes it was well within As to asset bubbles, which could lead to a shift in the government’s expansionary fiscal and monetary policy measures, the “hysteria is premature”, the CEQ report read. But it warned: “Continued growth at 8-9% in subsequent years will depend on whether the government uses the time it has bought through monetary stimulus to push through domestic market reforms. “We are pretty optimistic about financial sector liberalisation; less so about service-sector reform.” Liberalisation and deregulation are crucial for shifting the economy away from exports towards stronger domestic consumption growth. To discuss issues facing the chemical industry go to ICIS connect
http://www.icis.com/Articles/2009/09/22/9249171/all-eyes-on-chinas-economic-recovery-story.html
On Mon, Jun 22, 2009 at 02:16:12PM +0530, Balbir Singh wrote: > * Vivek Goyal <vgoyal redhat com> [2009-06-19 16:37:20]: > > > code is essentially the CFQ code for fair queuing. The primary difference > > is that flat rounding robin algorithm of CFQ has been replaced with BFQ (WF2Q+). > > > > The patch is quite long and to be honest requires a long time to > review, which I don't mind. I suspect my frequently diverted mind is > likely to miss a lot in a big patch like this. Could you consider > splitting this further if possible. I think you'll notice the number > of reviews will also increase. > Hi Balbir, Thanks for the review. Yes, this is a big patch. I will try to break it down further. Fabio has already responded to most of the questions. I will try to cover rest. [..] > > +static inline struct io_queue *elv_close_cooperator(struct request_queue *q, > > + struct io_queue *ioq, int probe); > > +struct io_entity *bfq_lookup_next_entity(struct io_sched_data *sd, > > + int extract); > > + > > +static inline int elv_prio_slice(struct elv_fq_data *efqd, int sync, > > + unsigned short prio) > > Why is the return type int and not unsigned int or unsigned long? Can > the return value ever be negative? Actually this function was a replacement for cfq_prio_slice() hence return type int. But as slice value can never be negative, I can make it unsigned int. [..] > > + * bfq_gt - compare two timestamps. > > + * @a: first ts. > > + * @b: second ts. > > + * > > + * Return @a > @b, dealing with wrapping correctly. > > + */ > > +static inline int bfq_gt(bfq_timestamp_t a, bfq_timestamp_t b) > > +{ > > + return (s64)(a - b) > 0; > > +} > > + > > a and b are of type u64, but cast to s64 to deal with wrapping? > Correct? Yes. > > > +/** > > + * bfq_delta - map service into the virtual time domain. > > + * @service: amount of service. > > + * @weight: scale factor. 
> > + */ > > +static inline bfq_timestamp_t bfq_delta(bfq_service_t service, > > + bfq_weight_t weight) > > +{ > > + bfq_timestamp_t d = (bfq_timestamp_t)service << WFQ_SERVICE_SHIFT; > > + > > Why is the case required? Does the compiler complain? service is > already of the correct type. > > > + do_div(d, weight); > > On a 64 system both d and weight are 64 bit, but on a 32 bit system > weight is 32 bits. do_div expects a 64 bit dividend and 32 bit divisor > - no? > d is of type "bfq_timestamp_t" which is u64 irrespective of 64 or 32 bit system. I think it might make sense to change type of "weight" from unsigned long to unsigned int so that it is 32bit on both 64 and 32bit systems. Will do... > > + return d; > > +} > > + > > +/** > > + * bfq_calc_finish - assign the finish time to an entity. > > + * @entity: the entity to act upon. > > + * @service: the service to be charged to the entity. > > + */ > > +static inline void bfq_calc_finish(struct io_entity *entity, > > + bfq_service_t service) > > +{ > > + BUG_ON(entity->weight == 0); > > + > > + entity->finish = entity->start + bfq_delta(service, entity->weight); > > +} > > Should we BUG_ON (entity->finish == entity->start)? Or is that > expected when the entity has no service time left. > As Fabio said, that with preemption logic, I guess theoritically, it is possible that a io queue is preempted without any service received and requeued back. Hence it might not be a very good idea to BUG_ON(entity->finish == entity->start); [..] > > +/** > > + * bfq_extract - remove an entity from a tree. > > + * @root: the tree root. > > + * @entity: the entity to remove. > > + */ > > +static inline void bfq_extract(struct rb_root *root, struct io_entity *entity) > > +{ > > Extract is not common terminology, why not use bfq_remove()? > *_remove() also sounds good. Will replace *_extract() with *_remove(). 
> > + BUG_ON(entity->tree != root); > > + > > + entity->tree = NULL; > > + rb_erase(&entity->rb_node, root); > > Don't you want to make entity->tree = NULL after rb_erase? As Fabio said that it happens under queue spinlock held. But from readability point of view, it probably looks better to first remove it from rb tree then reset entity fields. Will change the order... > > > +} > > + > > +/** > > + * bfq_idle_extract - extract an entity from the idle tree. > > + * @st: the service tree of the owning @entity. > > + * @entity: the entity being removed. > > + */ > > +static void bfq_idle_extract(struct io_service_tree *st, > > + struct io_entity *entity) > > +{ > > + struct rb_node *next; > > + > > + BUG_ON(entity->tree != &st->idle); > > + > > + if (entity == st->first_idle) { > > + next = rb_next(&entity->rb_node); > > What happens if next is NULL? > > > + st->first_idle = bfq_entity_of(next); > > + } > > + > > + if (entity == st->last_idle) { > > + next = rb_prev(&entity->rb_node); > > What happens if next is NULL? > > > + st->last_idle = bfq_entity_of(next); bfq_entity_of() is capable of handling next == NULL. I can change it to following if you think it is more readable. if (entity == st->first_idle) { next = rb_next(&entity->rb_node); if (next) st->first_idle = bfq_entity_of(next); else st->first_idle = NULL; } [..] > > +static void elv_ioq_update_io_thinktime(struct io_queue *ioq) > > +{ > > + struct elv_fq_data *efqd = ioq->efqd; > > + unsigned long elapsed = jiffies - ioq->last_end_request; > > + unsigned long ttime = min(elapsed, 2UL * efqd->elv_slice_idle); > > + > > + ioq->ttime_samples = (7*ioq->ttime_samples + 256) / 8; > > + ioq->ttime_total = (7*ioq->ttime_total + 256*ttime) / 8; > > + ioq->ttime_mean = (ioq->ttime_total + 128) / ioq->ttime_samples; > > +} > > Not sure I understand the magical 7, 8, 2, 128 and 256. Please help me > understand the algorithm. Taken from CFQ. 
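For what it's worth, the "magical" constants in the think-time update implement a fixed-point exponentially weighted moving average: the counters are kept scaled by 256 so integer division retains precision, each update decays history by 7/8, and the +128 rounds the final division. A rough Python model of what the update computes (illustrative only, not kernel code):

```python
# Fixed-point EWMA as in elv_ioq_update_io_thinktime() / CFQ:
# 'samples' and 'total' are kept scaled by 256; each update keeps 7/8
# of the history and folds in the new sample with weight 1/8.
def update_thinktime(samples, total, ttime):
    samples = (7 * samples + 256) // 8       # sample count, scaled by 256
    total = (7 * total + 256 * ttime) // 8   # total think time, scaled by 256
    mean = (total + 128) // samples          # +128 rounds to nearest
    return samples, total, mean

s = t = 0
for ttime in [10, 10, 10, 10]:
    s, t, mean = update_thinktime(s, t, ttime)
print(mean)  # approaches 10 as more identical samples arrive
```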
> > +int elv_init_ioq(struct elevator_queue *eq, struct io_queue *ioq, > > + void *sched_queue, int ioprio_class, int ioprio, > > + int is_sync) > > +{ > > + struct elv_fq_data *efqd = &eq->efqd; > > + struct io_group *iog = io_lookup_io_group_current(efqd->queue); > > + > > + RB_CLEAR_NODE(&ioq->entity.rb_node); > > + atomic_set(&ioq->ref, 0); > > + ioq->efqd = efqd; > > + elv_ioq_set_ioprio_class(ioq, ioprio_class); > > + elv_ioq_set_ioprio(ioq, ioprio); > > +. [..] > > + * coop tells that io scheduler selected a queue for us and we did not > > coop? coop refers to "cooperating". I guess "coop" is not descriptive. I will change the name to "cooperating" and also put more description for clarity. [..] > > diff --git a/block/elevator-fq.h b/block/elevator-fq.h > > new file mode 100644 > > index 0000000..5b6c1cc > > --- /dev/null > > +++ b/block/elevator-fq.h > > @@ -0,0 +1,473 @@ > > +/* > > + * BFQ: data structures and common functions prototypes. > > + * > > + * Based on ideas and code from CFQ: > > + * Copyright (C) 2003 Jens Axboe <axboe kernel dk> > > + * > > + * Copyright (C) 2008 Fabio Checconi <fabio gandalf sssup it> > > + * Paolo Valente <paolo valente unimore it> > > + * Copyright (C) 2009 Vivek Goyal <vgoyal redhat com> > > + * Nauman Rafique <nauman google com> > > + */ > > + > > +#include <linux/blkdev.h> > > + > > +#ifndef _BFQ_SCHED_H > > +#define _BFQ_SCHED_H > > + > > +#define IO_IOPRIO_CLASSES 3 > > + > > +typedef u64 bfq_timestamp_t; > > +typedef unsigned long bfq_weight_t; > > +typedef unsigned long bfq_service_t; > > Does this abstraction really provide any benefit? Why not directly use > the standard C types, make the code easier to read. I think using standard C type is better now. Will get rid of these abstractions. Fabio also seems to be ok with this change. 
> > > +struct io_entity; > > +struct io_queue; > > + > > +#ifdef CONFIG_ELV_FAIR_QUEUING > > + > > +#define ELV_ATTR(name) \ > > + __ATTR(name, S_IRUGO|S_IWUSR, elv_##name##_show, elv_##name##_store) > > + > > +/** > > + * struct bfq_service_tree - per ioprio_class service tree. > > Comment is old, does not reflect the newer name Yes, this is all over the code. I have not taken care of updating the comments from original bfq code. Will do it. > > > + * @active: tree for active entities (i.e., those backlogged). > > + * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i). > > + * @first_idle: idle entity with minimum F_i. > > + * @last_idle: idle entity with maximum F_i. > > + * @vtime: scheduler virtual time. > > + * @wsum: scheduler weight sum; active and idle entities contribute to it. > > + * > > + * Each service tree represents a B-WF2Q+ scheduler on its own. Each > > + * ioprio_class has its own independent scheduler, and so its own > > + * bfq_service_tree. All the fields are protected by the queue lock > > + * of the containing efqd. > > + */ > > +struct io_service_tree { > > + struct rb_root active; > > + struct rb_root idle; > > + > > + struct io_entity *first_idle; > > + struct io_entity *last_idle; > > + > > + bfq_timestamp_t vtime; > > + bfq_weight_t wsum; > > +}; > > + > > +/** > > + * struct bfq_sched_data - multi-class scheduler. > > Again the naming convention is broken, you need to change several > bfq's to io's :) Yes. Will do. :-) > > +/* > > + * A common structure embedded by every io scheduler into their respective > > + * queue structure. > > + */ > > +struct io_queue { > > + struct io_entity entity; > > So the io_queue has an abstract entity called io_entity that contains > it QoS parameters? Correct? > > > + atomic_t ref; > > + unsigned int flags; > > + > > + /* Pointer to generic elevator data structure */ > > + struct elv_fq_data *efqd; > > + pid_t pid; > > Why do we store the pid? 
pid of the process which caused io queue creation. > > > + > > + /* Number of requests queued on this io queue */ > > + unsigned long nr_queued; > > + > > + /* Requests dispatched from this queue */ > > + int dispatched; > > + > > + /* Keep a track of think time of processes in this queue */ > > + unsigned long last_end_request; > > + unsigned long ttime_total; > > + unsigned long ttime_samples; > > + unsigned long ttime_mean; > > + > > + unsigned long slice_end; > > + > > + /* Pointer to io scheduler's queue */ > > + void *sched_queue; > > +}; > > + > > +struct io_group { > > + struct io_sched_data sched_data; > > + > > + /* async_queue and idle_queue are used only for cfq */ > > + struct io_queue *async_queue[2][IOPRIO_BE_NR]; > > Again the 2 is confusing > Taken from CFQ. CFQ supports 8 prio levels for RT and BE class. We maintain one async queue pointer per prio level for both RT and BE class. Above number 2 is for RT and BE class. > > +? > > > + struct io_group *root_group; > > + > > + struct request_queue *queue; > > + unsigned int busy_queues; > > + > > + /* Number of requests queued */ > > + int rq_queued; > > + > > + /* Pointer to the ioscheduler queue being served */ > > + void *active_queue; > > + > > + int rq_in_driver; > > + int hw_tag; > > + int hw_tag_samples; > > + int rq_in_driver_peak; > > Some comments of _in_driver and _in_driver_peak would be nice. Taken from CFQ. So somebody familiar with CFQ code can quickly relate. But anyway, I will put two lines of comments. > > > + > > + /* > > + * elevator fair queuing layer has the capability to provide idling > > + * for ensuring fairness for processes doing dependent reads. > > + * This might be needed to ensure fairness among two processes doing > > + * synchronous reads in two different cgroups. noop and deadline don't > > + * have any notion of anticipation/idling. As of now, these are the > > + * users of this functionality. 
> > + */ > > + unsigned int elv_slice_idle; > > + struct timer_list idle_slice_timer; > > + struct work_struct unplug_work; > > + > > + unsigned int elv_slice[2]; > > Why [2] makes the code hearder to read Taken from CFQ. it represents base slice length for sync and async queues. With put a line of comment. Thanks Vivek
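A quick aside on the bfq_gt() wrap-around comparison discussed earlier in the thread: the signed-difference trick can be sketched outside the kernel. This is an illustrative Python model (explicit 64-bit masking stands in for C's fixed-width unsigned arithmetic and the (s64) cast):

```python
# Model of bfq_gt(): the difference of two unsigned 64-bit timestamps,
# reinterpreted as signed, orders them correctly even when one of them
# has wrapped past 2**64.
MASK = (1 << 64) - 1

def to_signed64(x):
    # reinterpret an unsigned 64-bit value as signed (what the (s64) cast does)
    x &= MASK
    return x - (1 << 64) if x >> 63 else x

def bfq_gt(a, b):
    return to_signed64(a - b) > 0

near_wrap = MASK - 5   # timestamp just before the counter wraps
after_wrap = 3         # timestamp just after the wrap
assert bfq_gt(after_wrap, near_wrap)       # "later" despite the smaller value
assert not bfq_gt(near_wrap, after_wrap)
```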
http://www.redhat.com/archives/dm-devel/2009-June/msg00256.html
What’s this again? My god, what is Feature Scaling? A new barbaric Anglicism (sorry, I’m French 😉)? A new marketing side effect? Or is there something really relevant behind it? To be honest, feature scaling is a necessary, even essential, step in improving the characteristics of our machine learning model. Why? Quite simply because behind each algorithm hide mathematical formulas. And these mathematical formulas do not appreciate variations in scale of values between features. This is especially true for gradient descent! If you do nothing you will observe slow learning and reduced performance. Let’s take an example. Imagine that you are working on modeling real estate data. You will have features of the type: price, surface area, number of rooms, etc. Of course, the value scales of these data are totally different depending on the feature. However, you will have to process them using the same algorithm. This is where the shoe pinches! Your algorithm will indeed have to mix prices of [0 … 100,000] €, surface areas of [0 … 300] m², numbers of rooms ranging from [1 .. 10] rooms. Scaling therefore consists of bringing this data to the same level. Fortunately Scikit-Learn will once again do the heavy lifting for us, but before using this or that technique we must understand how each one works. Preparation of tests First of all, we will create random data sets as well as some graph functions that will help us better understand the effects of the different techniques used (below).
Here is the Python code:

import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import MaxAbsScaler
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns

def plotDistribGraph(pdf):
    fig, a = plt.subplots(ncols=1, figsize=(16, 5))
    a.set_title("Distributions")
    for col in pdf.columns:
        sns.kdeplot(pdf[col], ax=a)
    plt.show()

def plotGraph(pdf, pscaled_df):
    fig, (a, b) = plt.subplots(ncols=2, figsize=(16, 5))
    a.set_title("Before scaling")
    for col in pdf.columns:
        sns.kdeplot(pdf[col], ax=a)
    b.set_title("After scaling")
    for col in pscaled_df.columns:
        sns.kdeplot(pscaled_df[col], ax=b)
    plt.show()

def plotGraphAll(pdf, pscaled1, pscaled2, pscaled3):
    fig, (a, b, c, d) = plt.subplots(ncols=4, figsize=(16, 5))
    a.set_title("Before scaling")
    for col in pdf.columns:
        sns.kdeplot(pdf[col], ax=a)
    b.set_title("RobustScaler")
    for col in pscaled1.columns:
        sns.kdeplot(pscaled1[col], ax=b)
    c.set_title("MinMaxScaler")
    for col in pscaled2.columns:
        sns.kdeplot(pscaled2[col], ax=c)
    d.set_title("StandardScaler")
    for col in pscaled3.columns:
        sns.kdeplot(pscaled3[col], ax=d)
    plt.show()

np.random.seed(1)
NBROWS = 5000
df = pd.DataFrame({
    'A': np.random.normal(0, 2, NBROWS),
    'B': np.random.normal(5, 3, NBROWS),
    'C': np.random.normal(-5, 5, NBROWS),
    'D': np.random.chisquare(8, NBROWS),
    'E': np.random.beta(8, 2, NBROWS) * 40,
    'F': np.random.normal(5, 3, NBROWS)
})

In this code, apart from the plotting functions, we create 6 datasets in a single DataFrame (Pandas). Let’s take a look at what our datasets look like:

plotDistribGraph(df)

These datasets are based on Gaussian (A, B, C and F), chi-square (D) and beta (E) distributions (thanks to the NumPy np.random functions). This code is reusable on purpose so that you can vary the datasets and test the techniques presented.
The techniques

Basically Scikit-Learn (sklearn.preprocessing) provides several scaling techniques; we’ll go over 4 of them:

- MaxAbsScaler
- MinMaxScaler
- StandardScaler
- RobustScaler

MaxAbsScaler()

This scaling technique is useful when the distribution of values is sparse and you have outliers whose impact you want to keep: the other techniques tend to reduce the influence of outliers, which is sometimes undesirable. It is therefore interesting:

- because it is robust to very small standard deviations
- because it preserves null entries on a sparse data distribution

scaler = MaxAbsScaler()
keepCols = ['A', 'B', 'C']
scaled_df = scaler.fit_transform(df[keepCols])
scaled_df = pd.DataFrame(scaled_df, columns=keepCols)
plotGraph(df[keepCols], scaled_df)

To summarize: this technique simply divides each feature by its maximum absolute value, bringing the values into the range [-1, 1].

MinMaxScaler()

This technique transforms each feature (xi) by rescaling it over a given range (by default [0, 1]). It is possible to change this range via the parameter feature_range=(min, max). To keep it simple, here is the transformation formula for each feature (for the default range; the result is then stretched to feature_range):

x_scaled = (x - min(x)) / (max(x) - min(x))

Let’s see it at work:

scaler = MinMaxScaler()
keepCols = ['A', 'B', 'C']
scaled_df = scaler.fit_transform(df[keepCols])
scaled_df = pd.DataFrame(scaled_df, columns=keepCols)
plotGraph(df[keepCols], scaled_df)

This technique is probably the best known, and it works especially well for cases where the distribution is not Gaussian or when the standard deviation is low. However, MinMaxScaler() is sensitive to outliers. In that case, we quickly switch to another technique: RobustScaler().

RobustScaler()

The RobustScaler() technique uses the same scaling principle as MinMaxScaler().
However, it uses the interquartile range instead of the min-max, which makes it more reliable with respect to outliers. Here is the formula for reworking the features:

x_scaled = (x - median(x)) / (Q3(x) - Q1(x))

Q1(x): 1st quartile / 25%
Q3(x): 3rd quartile / 75%

Let’s see it at work:

scaler = RobustScaler()
keepCols = ['A', 'B', 'E']
scaled_df = scaler.fit_transform(df[keepCols])
scaled_df = pd.DataFrame(scaled_df, columns=keepCols)
plotGraph(df[keepCols], scaled_df)

StandardScaler()

We will finish our little (non-exhaustive) tour of scaling techniques with probably the least risky: StandardScaler(). This technique assumes that data is normally distributed. The function will recalculate each feature (cf. formula below) so that the data is centered around 0 and has a standard deviation of 1:

x_scaled = (x - mean(x)) / stdev(x)

stdev(x): the standard deviation of x

Let’s see it at work:

scaler = StandardScaler()
keepCols = ['A', 'B', 'C']
scaled_df = scaler.fit_transform(df[keepCols])
scaled_df = pd.DataFrame(scaled_df, columns=keepCols)
plotGraph(df[keepCols], scaled_df)

Conclusion

Let’s simply summarize the feature scaling techniques that we have just encountered:

- MaxAbsScaler: to be used when the data is not normally distributed; keeps the impact of outliers.
- MinMaxScaler: calibrates the data over a given range of values.
- StandardScaler: recalibrates the data for normal distributions.
- RobustScaler: same idea as min-max, but uses the interquartile range instead of the min and max values.
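To make the formulas above concrete, here is a minimal pure-Python sketch of three of the transforms. It is illustrative only: for instance, sklearn's RobustScaler computes its quantiles with a slightly different interpolation than statistics.quantiles, so values will not match sklearn exactly.

```python
# Minimal pure-Python versions of the scaling formulas discussed above
# (illustrative sketch, not the sklearn implementation itself).
import statistics

def min_max_scale(xs, lo=0.0, hi=1.0):
    # (x - min) / (max - min), stretched to the [lo, hi] range
    mn, mx = min(xs), max(xs)
    return [lo + (x - mn) * (hi - lo) / (mx - mn) for x in xs]

def standard_scale(xs):
    # (x - mean) / stdev: centered on 0, standard deviation 1
    mu = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def robust_scale(xs):
    # (x - median) / IQR: the outlier barely moves the median or the IQR
    med = statistics.median(xs)
    q1, _, q3 = statistics.quantiles(xs, n=4)
    return [(x - med) / (q3 - q1) for x in xs]

data = [1.0, 2.0, 3.0, 4.0, 100.0]   # 100 is an outlier
print(min_max_scale(data))   # outlier squeezes the other values near 0
print(robust_scale(data))    # outlier has far less influence here
```

Running it on a small list with one outlier shows exactly the sensitivity difference the conclusion describes.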
http://aishelf.org/feature-scaling/
In today’s Programming Praxis exercise, our goal is to determine the median values of a sliding window over a stream of numbers. Let’s get started, shall we?

A quick import:

import Data.List

Since Haskell is lazy by default, we don’t have to do anything special to only load the needed part of the list. We can just use tails and take to generate all the needed windows, after which we sort each window and take either the middle or the average of the middle two values, depending on whether the window contains an even or an odd number of elements.

slidingMedian :: (Ord a, Fractional a) => Int -> Int -> [a] -> [a]
slidingMedian size count =
    map ((\ ~(x:xs) -> if odd size then x else (x + head xs) / 2) .
         drop (div (size - 1) 2) . sort . take size) .
    take count . tails

A test to see if everything is working properly:

main :: IO ()
main = mapM_ print $ slidingMedian 5 12
       [13, 28, 94, 34, 32, 78, 12, 10, 84, 93, 45, 66, 67, 52, 24, 49]

Tags: average, bonsai, code, Haskell, kata, median, praxis, programming, sliding, window
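For readers who want to sanity-check the result in another language, here is an equivalent Python rendering of the same windowed median (a hypothetical translation, not part of the original post):

```python
# Windowed median: take `count` windows of `size` consecutive values and
# report each window's median (mean of the middle two for even sizes),
# mirroring the Haskell slidingMedian above.
def sliding_median(size, count, xs):
    out = []
    for i in range(count):
        window = sorted(xs[i:i + size])
        mid = (size - 1) // 2
        if size % 2:                       # odd window: middle element
            out.append(window[mid])
        else:                              # even window: mean of middle two
            out.append((window[mid] + window[mid + 1]) / 2)
    return out

print(sliding_median(5, 12, [13, 28, 94, 34, 32, 78, 12, 10, 84, 93,
                             45, 66, 67, 52, 24, 49]))
```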
http://bonsaicode.wordpress.com/2012/06/29/programming-praxis-sliding-median/
I tried to store data in the local directory of each node inside the close() function of the mapper. In particular, I want to serialize an object and store it in a file (permanently) on the local disk of each node that currently executes the map phase. I use this code:

FileSystem fs = null;
FSDataOutputStream out;
ObjectOutputStream obj;
Path localOutPath;

in the configure() function of the mapper:

localOutPath = new Path(conf.get("mapred.local.dir"));
fs = localOutPath.getFileSystem(conf);
out = fs.create(localOutPath);
obj = new ObjectOutputStream(out);

and in the close() function of the mapper:

obj.writeObject(someObject);
obj.close();

However, after checking mapred.local.dir, nothing is stored there. Having read that after each successful task this directory is deleted, I think that this might be the reason. Nonetheless, I really want to find a way to make each task able to write local data to the local filesystem rather than to HDFS. Thank you.

On Wed, Jul 1, 2009 at 5:30 PM, bonito perdo <bonito.perdo@googlemail.com>wrote:
> Thank you Jason!
>
> On Wed, Jul 1, 2009 at 5:26 PM, jason hadoop <jason.hadoop@gmail.com>wrote:
>> The directory returned by getWorkOutputPath is a task specific directory, to
>> be used for files that should be part of the final output of the job.
>>
>> If you want to write to the task local directory, use the local file system
>> api, and paths relative to '.'.
>> The parameter mapred.local.dir will contain the name of the local directory.
>>
>> On Wed, Jul 1, 2009 at 9:19 AM, bonito perdo <bonito.perdo@googlemail.com>wrote:
>> > Thank you for your immediate response.
>> > In this case, what is the difference with the path obtained from
>> > FileOutputFormat.getWorkOutputPath(job)? this path refers to hdfs...
>> >
>> > Thank you.
>> > >> > >> > On Wed, Jul 1, 2009 at 5:13 PM, jason hadoop <jason.hadoop@gmail.com> >> > wrote: >> > >> > > The parameter mapred.local.dir controls the directory used by the task >> > > tracker for map/reduce jobs local files. >> > > >> > > the dfs.data.dir paramter is for the datanode. >> > > >> > > On Wed, Jul 1, 2009 at 8:56 AM, bonito <bonito.perdo@gmail.com> >> wrote: >> > > >> > > > >> > > > Hello, >> > > > I am a bit confused about the local directories where each >> map/reduce >> > > task >> > > > can store data. >> > > > According to what I have read, >> > > > dfs.data.dir - is the path on the local file system in which the >> > DataNode >> > > > instance should store its data. That is, since we have a number of >> > > > individual nodes, this is the place where each node can store its >> own >> > > data. >> > > > Right? >> > > > This data may be part of a-let's say- file stored under the hdfs >> > > namespace? >> > > > The value of this property for my configuration is: >> > > > /home/bon/my_hdfiles/temp_0.19.1/dfs/data. >> > > > As far as I can understand this path refers to the local "disk" of >> each >> > > > node. >> > > > >> > > > Moreover, calling FileOutputFormat.getWorkOutputPath(job) we obtain >> the >> > > > Path >> > > > to the task's temporary output directory for the map-reduce job. >> This >> > > path >> > > > is totally different than the previous which confuses me since the >> > > > temporary >> > > > output of each task should be written locally in the node's disk. >> The >> > > path >> > > > I >> > > > retrieve is: >> > > > >> > > > >> > > > >> > > >> > >> hdfs://localhost:9000/user/bon/keys_fil.txt/_temporary/_attempt_200907011515_0009_m_000000_0 >> > > > Does this path refer to the local disk (node)? Or is it possible >> that >> > it >> > > > may >> > > > refer to another node in the cluster? >> > > > >> > > > Any clarification would be of great help. >> > > > >> > > > Thank you. 
>> > > > -- >> > > > View this message in context: >> > > > >> > > > Sent from the Hadoop core-user mailing list archive at Nabble.com. >> > > > >> > > > >> > > >> > > >> > > -- >> > > Pro Hadoop, a book to guide you from beginner to hadoop mastery, >> > > >> > > a community for Hadoop Professionals >> > > >> > >> >> >> >> -- >> Pro Hadoop, a book to guide you from beginner to hadoop mastery, >> >> a community for Hadoop Professionals >> > >
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/200907.mbox/%3Cff6ef8a60907011236j26c25679i476c70c0bccc5439@mail.gmail.com%3E
Hello Everyone, New to the forum; I just wanted to introduce myself and gather your thoughts and interest in my project! So here it is: X3 Terran Conflict is an existing game from a company called Egosoft (X-Universe). They have a community forum in addition to the site, where one can see all the games that they have designed, as well as brought to print over the years. The most current is X3 Terran Conflict. I am looking to put together a team for ongoing projects concerning this game as well as other projects I have on the table. I am wrestling with developing my own game or adding to Egosoft's existing game, based on the concept I have written below; so check it out!!! Please read what I wrote below. Just to let you all know: I have several projects, but this will be the first, as I put together a team to get started on each project. Also, when I originally wrote what you are about to read, I was on the website's game forum and got a lot of insight from the players and fans of X3 Terran Conflict, some of it good, some of it bad. Nevertheless, this will be the starting project that puts my team together, so I can see what each engineer is capable of. Also, the game was designed to be modded; the developers have stated this several times in their forums, throughout the entire X genre. This is the intro to the mods, or the short story line giving the reason behind the creation of these mods. Once again, I was writing this as if the news transmission during in-game play would speak to each player as they would be docked at a space station, so I guess you can say I was in character writing this, so don't laugh (smile!). Enjoy!!! But keep in mind, websites will need to be built as we construct these actual game mods, to be displayed on the sites we build with very nice Ajax and Flash elements; each in full software platform format, which will work in conjunction with the existing game engine during in-game play.
Supposedly the very first tribes of man outside the Garden of Eden:

Sa Tribe
Wa Tribe

Also check out: Nuwaubianism. The Nuwaubu tribes are part of the latter ancient tribes; the offspring of the daughters in these tribes migrated amongst the stars as well.

The Niribu: are considered to be ancient alien beings who were one of many first encounters for the earth constellations, which were said to be entirely occupied, on all nine planets and beyond, extending to the 19 known Earth galaxies, with communications and connections to all planets, and greater. The Niribu race had such an advanced technology for ancient times. Stumbling across our earth solar system in search of resources in a huge ship, their guidance systems began to malfunction once they entered our earth galaxy. Their ship was too large to steer manually or on autopilot; according to recorded reports, they began to bump into planets such as Saturn, Mars, as well as Earth, causing global chaos. Mars was said to be inhabited at the time, and this is the reason for its current condition to this date.

The Nephilim: Gen. 6: The sons of God saw that the daughters of men were beautiful, and took them each as wives, and bore them children. There were giants in those days, and also afterwards. Several hundred thousand years have passed, and many have asked how long this went on. Well, it has not stopped. According to the Books of Enoch, these fallen ones, also known by some as Watchers, became fearful of their actions, and began to mix their DNA with all sorts of life forms on earth, especially human women. This began when the earth was young, and the origins of man had not exceeded more than 7 generations past the earth man, Adam. This would mean some of the sons of Adam had become truly corrupted. Even other humans outside the garden had begun to take part in what elders and some believers would consider, to this day, an abomination.
Additional information: Yazurra (X Universe reference to year 2610 AD). It has been said that the Nephilim races are spread about various universes, including that which is unbeknown to the X Universe. Some are still proud of their unholy heritage, being called by many direct descendants of Shatan; others disagreed even in the days of old, when they reached the age of reason and it was made known to them who and what they were spawned for. These are also the reasons many parts of the heavens are a place of refuge for many of the various offspring of the Nephilim. They are the Nephilim offspring of earth humans and fallen angelic beings, some of whom were sorrowful of their very own birth; and by choice they remain secluded among the stars. Make no mistake about these creatures, this Nephilim race: they are very powerful beings. After all, they are crossbreeds of fallen angels and humans, fallen angels and many different kinds of insects, birds, fish, land animals, plants, micro lifeforms and many others.

Further information: Those that agreed with the path they were bred to follow proceeded to mix their seed as instructed by forces which none seem to believe exist or understand. Others, who detested the atrocities that would and have stemmed from their own existence, built space vessels and left earth long before the Great Flood. They are considered mythical by many in the X Universe, but others swear by their existence, especially some Gonar. As it is with the high councils amongst the Catholic earth order, so it is the same among the high council of the Gonar order: no investigations, and to quiet anyone who speaks the word, Nephilim.

Additional information related to this transmission: There are stories of a man named Hhaus Bramani, once a 5-star general in the Argonian Elite Special Forces.
He was awarded several badges of honor, has now retired, and is the rightful and true owner of Ujama Enterprises, Bramani Motor Corporation, and Nivirita Productions, Inc. Ujama Enterprises is a universal supplier of advanced complex weaponry, software, and war machines. Bramani Motor Corporation builds the universe's most sought-after exotic automobilia, and Nivirita Productions, Inc. is a universal supplier of high-level performance computers and software, especially onboard ship computers for some of the most wanted ships in the 17 known universes, though some say this man's companies supply unknown universes as well. General Hhaus Bramani is a recluse, and said to be very dangerous. It has been said that he is not even true Argonian, and that he left the Universal Earth Territories when the AGI Wars began. It is also said that he has intergalactic photos and documents proving the existence of said Nephilim races.

Ujama Enterprises - 3D Interplanetary Galaxy Map
The Galaxy Map Interface will contain everything in the original game map, plus additional features, such as 3D scenes of sectors when one mouses over each sector. Stopping your mouse over a chosen sector shows a 3D view of that sector, and the same actions allow the viewing of different universes. One will also be able to use the arrow keys or mouse on the 3D scenery of the sector map for better observation of each map view. This will assist in better routing and planning for trade routes, or even plans for attacks, once a person decides to click on that sector, or universe and then sector.
- 3D Universal Ship Trader Interface
The Universal Ship Trader Interface will also be in full 3D, allowing the player to make better choices in ship classes, with the ability to see each ship in X in its entirety (live in real-time 3D), along with full specifications, technical information, and an overview of each ship's abilities, as well as weapons compatibilities and weapons system choices that allow for massive load-outs; one will now be able to pimp their ship of choice, so to speak! When new ships become available, they will automatically be added to this interface as long as one has the program loaded. It will also be fully compatible with game editors and ship editors. The interface itself will be fully customizable to suit one's needs, especially for those who desire and make use of multiple monitors, as well as the latest games in the X3 genre: Superbox, as well as X Rebirth.

- 3D In-Game Communications Interface
The In-Game Communications Interface will also be a full 3D communications interface with a built-in browser for ease of access to the forum, as well as any other online needs, or just to jump on the web while the game is still running to gather info or research, chat, and get updates from friends and colleagues, or show a live space battle with an embedded player. Its main purpose will be better visualization and comms between in-game humans as well as alien contacts, especially for any future online game play! One can communicate with allies with ease, but for enemies as well as unknowns in the sector, it will be optional whether the player wants to open a line of communication or not. This will all be made possible by completely enhancing the comms during in-game play with this interface once it is complete.

- 3D Tactical Combat Interface
The Tactical Combat Interface will be used to plan and strategically coordinate battles, all-out wars, sneak attacks, etc.
With a full 3D view, one can now better coordinate battles against one's enemies. Pirate players will love this. Traders will also see the value in it, because this interface will allow them better defense against pirates. All in all, this interface gives the player complete control over their ship, fleet, etc., in a full real-time 3D in-game view.

Ujama Enterprises - New Ships; Retrieving Information, Please Wait!
* 144 total M0-class Battle Carrier Motherships! M0 Mothership classes are usually 3 to 5 times larger than your already huge capital ships, able to carry very heavy fighter-class ships as well as troops, supplies, etc. These will be true planet and space station destroyers. If a player can manage to form an armada of these intergalactic beasts, they will have the ability to take out multiple sectors in their entirety. Just one of this ship type will be able to disable an entire system or render it useless.
Weapons capability (choice of one or more):
- Full-range Tachyon Beams or Cannons
- Full-range Dark Energy Beams or Cannons
- Full-range Dark Matter Beams or Cannons
Like the Medusa weapons systems, these M0-class Motherships will use a highly advanced weapons system called the Mayflower. The Mayflower weapons system will use omnidirectional firing array patterns. These M0-class Motherships will only be operable with the above interfaces; you will need all of the interfaces to properly control and make full use of this class of ship and its cargo. If one chooses to purchase the Mayflower weapons system along with the above new weapons, one will be able to use all weapons in X3TC, combined, interchanged, or even in multiples.
* 91 total heavy fighter-class ships. Retrieving more information ... please wait!
Heavy fighter information received:
- Smart Missiles
- Lightning Bolt Missiles
- Systems Disruptor Neutron Beams or Cannons
- Disintegration Beams or Cannons

- 3D Character Generation Interface
Information received: offered only through Nivirita Productions, Inc., also owned and operated by Gen. Hhaus Bramani. The Character Generation Interface will allow one to custom-build characters for X; one will be able to choose his/her favorite race and customize the look and dress of each character. One will also be able to spawn an entire race or mixed group, military, or very important beings aboard a space station and/or planet. Currently the game does not have any of the things I am proposing, but they can be done. The Character Generation Interface will help each player keep track of his role-playing character, group, team, organization, corporation, tribe, nation, entire planet, etc.

- 3D Intergalactic Space Station Interface
Information received: offered only through Nivirita Productions, Inc., again owned and operated by Gen. Hhaus Bramani. The Interplanetary or Space Station Interface will allow vessels of all types full ability to land on all planets, as well as to build modular space stations and modular buildings on the planets, depending on size and race relations, as well as authorized clearance for some requirements by planet authorities; the same will hold for certain space stations, as well as certain planets. Nevertheless, one will be able to exit one's ship and move about by motor vehicle or atop a custom-designed huge modular structure. The player will also have the ability to exit their ship and walk, run, move around, build, and interact with other beings. The main purpose of the Interplanetary Space Station Interface will be for players to keep track of and observe their planet and space station activity.
Important note: the
- 3D Interplanetary Galaxy Map
- 3D Universal Ship Trader Interface
- 3D In-Game Communications Interface
- 3D Tactical Combat Interface
- 3D Character Generation Interface
- 3D Interplanetary or Space Station Interface
All six of these modules have to be run together for full in-game functionality; running one at a time will result in limited in-game functionality of the interface in use. Players can accumulate them one by one, and set them up one by one in any order in which they choose to purchase them (in-game purchase, not with real money). But again, they all have to be fully installed together to achieve full functionality of all six mod interfaces working in conjunction with the existing game engine.

I will need a team of:
- Web engineers, to develop the interworkings of the software engineers' work on the interfaces for in-game play, to be displayed in real-time video play-outs online on the website as examples of how each interface works.
- 3D architectural engineers, to build modular buildings, associated parts, structures, related object-oriented additions, etc., to be used for in-game modular assembly; parts, if not whole structures, would be collected by each player to allow for in-game modular assembly of industrial, commercial, and residential buildings, as well as roads, roadways, and ground and air traffic control hovering devices. Each player's role-played being will need very advanced futuristic handheld devices that allow them to build structures of their choosing on the fly. A HUD for each device obtained by a player during in-game play will appear onscreen for full viewing, with an integrated interactive mouse-click-triggered keypad for an added sense of realistic gameplay.
- 3D development engineers, for all the necessary in-game 3D modeling that will be needed on all levels, in accordance with the detailed laid-out plans of this project.
- Audio engineers, to score the audio, working with the video engineers as well as the 3D modeling and programming engineers on quality high-definition sound, effects, etc., for putting together all the 3D movement that will take place in these interface mod additions to the existing game.
- Video engineers, to construct all video playback, working with everyone as needed, for each in-game mod example, showing how effectively each will work for a player during actual in-game play.
- Programming engineers, to work on the completion and finalization of each mod before it is displayed on the continuously maintained website in full-motion graphics video playback, making sure in-game play and the online website display will encourage each player to download and use one or all mods for in-game use.

If anyone is interested, please tell me the proper way for us to contact each other with respect to forum rules. I will also need to see functional preliminaries of the above as soon as possible! In order to familiarize yourself with the game, go to your local Walmart or Best Buy and purchase a copy of the game called X3: Terran Conflict, or you can go directly online and buy it at Amazon, etc., and make sure to have Amazon send you an actual CD in the mail. You can also search YouTube for X3 Terran Conflict mods, X3 Superbox mods, as well as X Rebirth.
Very important note: purchasing streamed copies of the game from companies like Steam, Digital River, etc., or from sites that require you to obtain the game through their online download managers, or worse, require you to download their download manager onto your computer first, is a mistake; it is highly NOT recommended, as you are downloading spyware, adware, and God only knows what else onto your computer. So be sure to get the game via US Mail or FedEx/UPS, or from your local Walmart or Best Buy, as I stated earlier. If you don't like waiting, most sites that still honor the mail system have some sort of free shipping incentive anyway. Otherwise you will have serious problems modding this game, as well as testing the existing mods on the forum under the Scripts and Mods section. Why? Because the CD version of the game gives you full access to the existing mods in the online forum, as well as the in-game script editor and in-game galaxy editor; you will need true access to these game bonuses as we work on this project.

It is also a good idea to purchase a 1.5 to 2 terabyte external drive and load the game by itself on that drive. Why? For several reasons: (1) You will all be using a lot of Oracle software to stabilize and develop mods for this game (almost all Oracle software has a very heavy footprint, especially when extracted and put into use!), along with your preferred software. (2) You will save a lot of space on your main C:\ drive while we work on this project, as well as others in the future. (3) The biggest reason is that it is safer: if something happens to the central area of your system and you are forced to rebuild the operating system that requires the main drive (C:\), you will already have all your work automatically saved and backed up live on the external drive, waiting for you when you return (this is a definite consideration for Windows users).
So it is strongly recommended, every time we start new projects, to make this small investment; in doing so you can move between files and move entire external drive contents around to other external drives as needed, once you own additional external drives.

The backgrounds in the game are dead. The planets, stars, nebulas, space itself, etc., all dead! From what I understand they are just there for show, as are the still asteroids. As we all know, space is constantly moving, with all sorts of random events consistently taking place. Let's wake everything up and make it all LIVE! Live active backgrounds in everything! We will need to be very creative and show realistic atmospheric changes, as well as various weather patterns, when one is flying through such an endless immersive environment as space. We will also need to show realistic atmospheric changes when entering and re-entering planets and space!

Please also note: Nivirita Productions, Inc. is the blanket corporation controlling all assets for:
- Ujama Enterprises
- Bramani Motor Corporation
More details as requested, from Nivirita Productions Inc.
Thanks,

0 Replies - 638 Views - Last Post: 28 April 2011 - 11:04 AM
#1 Software Development Engineers Needed for X3 Terran Conflict - Posted 28 April 2011 - 11:04 AM
http://www.dreamincode.net/forums/topic/229735-software-development-engineers-needed-for-x3-terran-conflict/
Hey guys! First time poster, and a semi-noob here. I'm currently working on a posture trainer, where I aim to have two gyroscopes placed on the user's spine, comparing their values to determine whether or not he/she is slouching. So, I've been trawling around forums and guides in order to get pitch data along the y axis of two gyroscopes working. Finally, I've been able to get it to work somewhat, but it's got issues. First and foremost, after running for some time (10-30 seconds), the console starts printing "not a number" (NaN). Second, it's not very accurate. Small, slow changes don't register at all. I need it to be accurate, since just a degree or two can screw it up for my application of choice. I am using SparkFun LSM6DS3 6DOF boards (Datasheet (PDF download)). This is my test code (credit to racquemis here on the forums for the complementary filter code). Since I am using two gyros, this code also implements SparkFun's example for hooking up multiple units over I2C. I am only measuring one for this test.

    #include "SparkFunLSM6DS3.h"
    #include "Wire.h"
    #include "SPI.h"

    //Create two instances of the driver class
    LSM6DS3 SensorOne( I2C_MODE, 0x6A );
    LSM6DS3 SensorTwo( I2C_MODE, 0x6B );

    long timer = micros();
    int delta_t;
    float pitch = 0;
    float pitchAcc;
    float P_CompCoeff = 0.98;
    float gyro1Rotation;
    float gyro2Rotation;

    void setup() {
      // put your setup code here, to run once:
      Serial.begin(9600);
      delay(5000); //relax...
      Serial.println("Processor came out of reset.\n");

      //Call .begin() to configure the IMUs
      if( SensorOne.begin() != 0 ) {
        Serial.println("Problem starting the sensor at 0x6A.");
      } else {
        Serial.println("Sensor at 0x6A started.");
      }
      if( SensorTwo.begin() != 0 ) {
        Serial.println("Problem starting the sensor at 0x6B.");
      } else {
        Serial.println("Sensor at 0x6B started.");
      }
    }

    void loop() {
      ComplementaryFilter(SensorTwo.readFloatAccelX(),
                          SensorTwo.readFloatAccelY(),
                          SensorTwo.readFloatAccelZ(),
                          SensorTwo.readFloatGyroY());
      timer = micros();
      Serial.println(pitch);
      delay(20);
    }

    void ComplementaryFilter(float ax, float ay, float az, float gy) {
      delta_t = micros() - timer;
      long squaresum = (long)ay*ay + (long)az*az;
      pitch += ((-gy/32.8f) * (delta_t/1000000.0f));
      pitchAcc = atan(ax/sqrt(squaresum)) * RAD_TO_DEG;
      pitch = P_CompCoeff*pitch + (1.0f - P_CompCoeff)*pitchAcc;
    }

The gyroscopes he uses are different, so I guess that the inaccuracy could have something to do with his gyro having a different sensitivity? Would be eternally grateful for any help! If I've missed providing you with some piece of information, feel free to gimme a holler and I'll do my best to fill you in! Cheers!
https://forum.arduino.cc/t/gyroscope-complementary-filter-inaccurate-starts-printing-nan/425901
Agenda
See also: IRC log

<shadi> SAZ: Changes from telecon on 05/21 have been completed and are in latest editor's draft
... JK sent more comments and fixes are in place
... will clean up references and history sections
... work on 1) status of document and 2) are we willing to publish as is?
... any publication show-stoppers?
<shadi> ACTION: SAZ remove "namespace" from table 2 (these are not namespaces) [recorded in]
scribe: note that we do not have 2/3 of group present
... follow-up with CI and CV for any last-minute comments
<shadi> RESOLUTION: go ahead with publishing HTTP Vocabulary in RDF
SAZ: What are we looking for from public feedback and status?
... how these documents relate to POWDER
... feedback from other semantic web groups
CV: no objections to publication
SAZ: in particular, how schema performs and is evaluated by other semantic web groups and technologies
... Mobile Web group also interested in serialization of HTTP
... question for Mobile Web is how our XML representation would "play" with their representation
<shadi> SAZ: very well-written; many comments already incorporated
... abstract too brief definitely
scribe: merge section 1.3 use cases into introduction
Quick question - why is XmlContent not a subclass of TextContent?
JK: separate classes because they have different properties; e.g. XmlContent does not have chars but might be a representation of an in-memory DOM
ok - just curious from a newcomer's perspective
JK: TextContent could be a subclass of Base64Content since text could be represented that way
got it.
CV: too brief and possibly redundant
... Content in RDF is a complementary document to HTTP in RDF so don't want to spend too much time on it
no
SAZ: keep in mind that this is a distinct publication and needs a context
... people could randomly land on the document and the abstract should spell out relationships and context
agree - especially spelling out the relationships to EARL and HTTP in RDF
SAZ: part of a suite
... mimic format of HTTP in RDF
third bullet says, "
SAZ: can we have a provisional relation to publish contingent upon discussion of abstract on mailing list?
third bullet implicitly refers to EARL - might be a good place to actually repeat or reiterate the relationship once abstract/intro is redone under use cases
<shadi> SAZ: WAI web site lists all publications and relationships between them
... who is each doc for, what is it used for, etc.
... is it worth pointing back to this URL
... to provide an overview for first-time readers
given our time-frame, +1 for working it out on list
<shadi> RESOLUTION: go ahead with publishing Representing Content in RDF, pending changes to the abstract and introduction sections
<shadi> ACTION: CV to provide an updated abstract and introduction section for Representing Content in RDF document [recorded in]
SAZ: publication of charter must occur in June so little time left
... timeline needs adjustment; far too aggressive
... on the other hand, need to push as far as possible to get EARL out
<shadi> SAZ: heading is missing because missing in W3C-wide template
MS: test description language part of charter for a potential deliverable?
SAZ: need to recruit members and other development/test organizations for feedback, requirements, and support
... part of TCDL is in BenToWeb project
CV: +1 for committing resources to effort
JK: +1
SAZ: believe CI was also interested so unanimity in effort
... how does it look in the charter?
... proposed an analysis and requirements document as a deliverable
... once completed, we can recharter with more specific deliverables
<JohannesK> RDFa test case descriptions: <>
SAZ: working group note
... in parallel with completion of EARL, consider a reorientation of the group
... test description languages
... how does EARL and supplementary docs fit into other W3C work - RDFa, POWDER, Content Transformation, Mobile Web
... focus deliverable that describes how EARL fits into these other efforts
<EtnaRosso> hi all, hi shadi
<EtnaRosso> Is a meeting in progress here?
<shadi> yes
<CarlosV> Christophe's analysis: (Test Suites' State of the Art and Quality Assurance Methods for W3C Recommendations)
<CarlosV> Date: 28 February 2005 (a little bit old)
<EtnaRosso> see you
SAZ: no meeting Jun 4
http://www.w3.org/2008/05/28-er-minutes.html
I am a beginning self-taught programmer. I was thinking this would be a good exercise to get used to program control statements. This is an attempt to write a program that returns the number of days between two dates. Sadly, it gives incorrect values, and after hours of trying to figure out what is wrong, I was hoping someone would help me. I will accept any form of help and criticism to help me improve. Thank you.

    public class DaysBetweenDates {

        // This method will determine if a year is a leap year or not
        public static boolean isLeapyear(int year) {
            boolean isTrue = false;
            if (year % 4 == 0 && year % 100 != 0)
                isTrue = true;
            else if (year % 400 == 0)
                isTrue = true;
            return isTrue;
        }

        // This method is supposed to compute the days after the first date
        // in the first year
        public static int computeDaysLeftInYear(int year1, int month1, int day1) {
            int daysInYearOne = 0;
            // The purpose of this switch is to get the days remaining in the
            // first month and store them in the variable daysInYearOne
            switch (month1) {
                case 1: daysInYearOne = 31 - day1; break;
                case 2:
                    if (DaysBetweenDates.isLeapyear(year1))
                        daysInYearOne = 29 - day1;
                    else
                        daysInYearOne = 28 - day1;
                    break;
                case 3: daysInYearOne = 31 - day1; break;
                case 4: daysInYearOne = 30 - day1; break;
                case 5: daysInYearOne = 31 - day1; break;
                case 6: daysInYearOne = 30 - day1; break;
                case 7: daysInYearOne = 31 - day1; break;
                case 8: daysInYearOne = 31 - day1; break;
                case 9: daysInYearOne = 30 - day1; break;
                case 10: daysInYearOne = 31 - day1; break;
                case 11: daysInYearOne = 30 - day1; break;
                default: daysInYearOne = 31 - day1;
            }
            // This switch is supposed to get the values of the days in the
            // months for the rest of year one, stored inside the variable
            // daysInYearOne (the cases fall through on purpose)
            switch (month1 + 1) {
                case 2:
                    if (DaysBetweenDates.isLeapyear(year1))
                        daysInYearOne += 29;
                    else
                        daysInYearOne += 28;
                case 3: daysInYearOne += 31;
                case 4: daysInYearOne += 30;
                case 5: daysInYearOne += 31;
                case 6: daysInYearOne += 30;
                case 7: daysInYearOne += 31;
                case 8: daysInYearOne += 31;
                case 9: daysInYearOne += 30;
                case 10: daysInYearOne += 31;
                case 11: daysInYearOne += 30;
                default: daysInYearOne += 31;
            }
            return daysInYearOne;
        }

        // This method is supposed to find the days in the whole years that
        // lie between the partial years
        public static int computeDaysFromYearsInbetween(int year1, int year2) {
            int daysInbetweenYears = 0;
            for (int i = year1 + 1; i <= year2 - 1; i++) {
                if (DaysBetweenDates.isLeapyear(i))
                    daysInbetweenYears += 366;
                else
                    daysInbetweenYears += 365;
            }
            return daysInbetweenYears;
        }

        // This method is supposed to find the days in the last year by using
        // the first method. I was trying to follow the never-repeat-yourself
        // principle
        public static int computeDaysBeforeDate(int year2, int month2, int day2) {
            int daysInYearTwo = 0;
            if (DaysBetweenDates.isLeapyear(year2))
                daysInYearTwo = 366 - DaysBetweenDates.computeDaysLeftInYear(year2, month2, day2);
            else
                daysInYearTwo = 365 - DaysBetweenDates.computeDaysLeftInYear(year2, month2, day2);
            return daysInYearTwo;
        }

        public static void main(String[] args) {
            int v1 = DaysBetweenDates.computeDaysLeftInYear(2012, 1, 1);
            int v2 = DaysBetweenDates.computeDaysFromYearsInbetween(2012, 2013);
            int v3 = DaysBetweenDates.computeDaysBeforeDate(2013, 12, 31);
            System.out.println(v1 + v2 + v3);
        }
    }
http://www.javaprogrammingforums.com/whats-wrong-my-code/36623-code-not-giving-correct-values.html
Abstract base class for sound file encoding.

#include <SoundFileWriter.hpp>

This class allows users to write audio file formats not natively supported by SFML, and thus extend the set of supported writable audio formats. A valid sound file writer must override the open and write functions, as well as provide a static check function; the latter is used by SFML to find a suitable writer for a given filename. To register a new writer, use the sf::SoundFileFactory::registerWriter template function.

Definition at line 41 of file SoundFileWriter.hpp.

Virtual destructor. Definition at line 49 of file SoundFileWriter.hpp.
Open a sound file for writing.
Write audio samples to the open file.
https://www.sfml-dev.org/documentation/2.5.1-fr/classsf_1_1SoundFileWriter.php
These are uncertain times for any business. Maybe you've seen your top-of-funnel demand decreasing. Or your sales cycles are getting longer. While each company feels the impact differently, one thing is certain: businesses are being forced to adapt to change at a pace we haven't seen before.

When the stakes are high, it can be tempting to invest in short-term solutions to help grow your business. But more often than not, the most effective solutions to short-term problems are long-term solutions. A band-aid is of no use to a broken arm. One strategy we've found particularly useful in trying times is doubling down on customer retention.

This isn't a new insight. In the last five years alone, the cost of customer acquisition has increased by over 50%. The challenging unit economics faced by companies like Blue Apron and Homejoy have issued a cautionary tale to businesses: if you want to grow in a scalable and profitable way, you have to look beyond customer acquisition. Businesses have gradually started to switch their focus from "How do we acquire more customers?" to "How do we retain the ones we already have?"

The math to back this up is pretty compelling. After five years, a company with 2.5% churn is 50% (yes, 50%) larger than a company with 5% churn on a revenue basis. If you want to model this for yourself, check out this calculator.

Put another way, it just doesn't make sense to fill a leaky bucket. At Segment, we've helped hundreds of our customers to analyze their user behavior, predict churn, and build profitable, long-lasting businesses. We've also worked incredibly hard to improve retention in our own product over the years. So we thought it would be helpful to share a handful of our most successful strategies, with some recommended tools to help.
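You can sanity-check that compounding claim with a few lines of code. This is a minimal sketch, not the calculator the article links to: it assumes a fixed amount of new monthly revenue and a constant monthly churn rate applied uniformly to all existing revenue.

```python
# Sketch: compare five years of revenue at 2.5% vs 5% monthly churn,
# assuming 100 units of new revenue added every month. The model and
# numbers are illustrative, not the article's actual calculator.

def revenue_after(months, monthly_churn, new_revenue=100.0):
    total = 0.0
    for _ in range(months):
        # Existing revenue churns, then a fixed slug of new revenue lands.
        total = total * (1 - monthly_churn) + new_revenue
    return total

low_churn = revenue_after(60, 0.025)   # 2.5% monthly churn
high_churn = revenue_after(60, 0.05)   # 5% monthly churn

# Under these assumptions the low-churn company ends up well over 50%
# larger, in line with the direction of the article's claim.
print(round(low_churn / high_churn, 2))
```

The exact gap depends on the assumptions (annual vs monthly churn, growing vs flat new revenue), but the direction is the same: halving churn compounds into a dramatically larger business.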
(all of which are available in the Segment Catalog) Thank you to Eleanor Dorfman, Matt Smidebush, Brooks Taylor, Katie Rovelstad, Tyler Goerzen, Alan Harris and Kevin White for their feedback on this article, and for their tireless work in driving adoption of Segment. How to measure customer retention Before getting into tactics, let’s spend a minute thinking about how to measure your retention. When getting started with retention, the obvious first step might be to look at exit surveys or recently churned customers. It might sound counter-intuitive, but this is actually the wrong place to start. Instead, look at your best customers. Why did these customers stay with your product? Why did they expand their usage of your product? What actions did they take in your product? If you can figure out what and why, you can start to reverse-engineer that path for other users. You’ll often hear these referred to as “activation metrics” or “aha moments”. The canonical example is best illustrated by Chamath Palihapitiya and the early Facebook growth team. They understood what actions separate their best customers from those they lost – namely those that added 7 friends in 10 days. To understand this at Segment, we use a logistic regression model, which combines different dimensions (computed over a week of activity) to form a single raw score. This score indicates how “activated” a customer is, and subsequently how likely they are to retain. The five measures we track (using Segment, of course!) are as follows: Start tracking these key retention measures with Segment 👉 A prototypical “good” customer and their workspace connects 1 source, 1 destination, has data flowing, and has sent track and identify calls. This customer has a score of 4.41. This number then increases or decreases based on additional actions. When a workspace adds another teammate, their raw score goes up to 5.8 and the chances of them retaining increases further still. 
Similarly, if a customer stops taking a certain action, their score and likelihood of retaining decreases. If you don't have the resources to run a logistic regression analysis like this, don’t worry. You should be able to uncover these high-value actions by running a cohort analysis, or simply by running a series of customer interviews. Whatever method you use, an analysis such as this will help you understand the behaviors that, when performed, best correlate with users continuing to use your product for an extended period of time. And once you understand these behaviors, you can optimize your product or communication to help even more users take these actions, see value from your product and ultimately become a long-term, happy customer. Now that you’ve got a handle on some of the data behind your retention, it's time to come up with creative ideas for improving it. 6 strategies to help optimize your retention When we think about retention, it helps to conceptualize retention by breaking it down into three stages—early, middle, and late. We learned this way of thinking from Brian Balfour, the former VP of Growth at Hubspot and CEO of Reforge. If you are getting started with retention, we highly recommend you watch this talk. Early-stage retention Encourage new signups to take high-value product actions Most products see a precipitous drop in engagement in the first few days. Those that don’t – Facebook, Twitter, Pinterest – do so by making sure users complete valuable product actions early on. At Segment, the first action that heavily correlates with customer retention is when a user starts sending data into Segment. Here’s a simple in-app message we send through Intercom to nudge users to that action. The important thing to measure here isn’t opens and clicks, but conversions. In this example, the conversion (i.e users who got data flowing to a source after this message) was 54%. Not bad! 
Tools you can use to help:

Middle-stage retention

Track "warning signs"

Once a user has completed your product's high-value actions, watch for decreases in overall product usage. Are they logging in less than they had before? When they are logging in, are they spending less time inside your product?

When doing this, it's important to look at engagement, not just activity. I like how Lincoln Murphy puts it: "What many SaaS providers would believe is an 'active' user…is really a huge, ugly churn threat." A user might log in at the same frequency and even spend the same amount of time in your product, but all the while be using the most valuable features less and less.

We recommend measuring retention based on whether users come back to your app and perform your high-value actions. For example, we use Segment and a data visualization tool like Tableau to look at how many sources and destinations are being connected by customers month over month. If we notice these numbers trending downwards, like in the below screenshot…
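A month-over-month check like that can be sketched in a few lines. The counts and threshold here are invented for illustration; in practice this would run as a query against your warehouse data rather than a hard-coded list.

```python
# Sketch: flag an account whose high-value actions (e.g. destinations
# connected per month) dropped sharply in the latest month. The 25%
# drop threshold is an arbitrary illustrative choice.

def is_at_risk(monthly_counts, drop_threshold=0.25):
    """True if the latest month is down by more than drop_threshold
    relative to the previous month."""
    if len(monthly_counts) < 2 or monthly_counts[-2] == 0:
        return False
    change = (monthly_counts[-1] - monthly_counts[-2]) / monthly_counts[-2]
    return change < -drop_threshold

healthy = [4, 5, 5, 6]    # destinations connected per month, growing
slipping = [6, 5, 4, 2]   # usage falling off a cliff

print(is_at_risk(healthy))   # False
print(is_at_risk(slipping))  # True
```

Accounts that trip a check like this are exactly the cohort to target with the proactive outreach this section describes, while there is still time to act.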
ROI with your product will look different depending on the individual (i.e. for a B2B product, a buyer and a practitioner may have different measures of success). Adjust your messaging accordingly! One of the (many!) positive outcomes a business sees from Segment is saving engineering resources. So we use a personalization/scripting language called Liquid (open source from Shopify), Segment's SQL traits, and Customer.io to dynamically calculate and communicate those positive outcomes to customers. Here's the formula we use if you want to swipe it for your own use. {% capture fun_fact %}{% if status == "habit" %}To top it off, that means you have saved <strong>{{ dest | times: 10 }} hours of engineering work</strong> by using Segment to integrate your <strong>{{ dest }} Destinations</strong>!{% else %}On average, teams that use Segment save <strong>36 hours of engineering work</strong> per year for each enabled Destination. Time to start saving you some time!{% endif %}{% endcapture %} Tools that'll help: Expand usage beyond the champion During the sales process, reps traditionally try to find the "champion" in a company. This is the person willing to fight through obstacles to adopt a new product and implement change at the organization. While it's incredibly powerful to have someone fighting for you in your corner, there is an inherent risk attached – if the champion is the only person advocating for you, what happens when they leave? In engineering, this is known as a single point of failure. If a single part of the system fails, the whole system breaks down. Retention is no different. To combat this, we use a tool called Vitally, a customer success platform that ingests all your customer data – Segment traits and events, conversations, subscriptions, and NPS scores – into 360° profiles to let you proactively monitor the health of an account. What's particularly neat about Vitally is the ability to push data in and out.
This lets you easily build custom workflows that help you stay on top of retention. For example, we have a workflow that pushes data from Salesforce -> Sifdata -> Salesforce -> Segment -> Vitally to alert the assigned customer success and sales reps when we notice that an important person has left the company. Unlock this workflow with Segment 👉 Tools that'll help: Late-stage retention Optimize your cancellation flow Have you ever tried to cancel Amazon Prime? Up to 5 pages long and featuring multiple calls to action, it's scientifically engineered to make churn a last resort. While some of these tactics stray into the dark side of UX, the level of craft shows you why a well-designed cancellation flow is so important. In fact, Headspace found that optimizing their cancellation flow helped reduce churn by up to 10%. A quick win is tackling involuntary churn. Accidental cancellations or missed payments account for a significant percentage of churn. How significant? At Segment, non-payment can account for anywhere between 10-50% of churn on a weekly basis. The answer here is pretty simple: track the data and remind them to pay! You can track payment events in Segment, using an event such as the one below: analytics.track('Payment Failed', { plan: 'Business', cycle: 'Monthly', expires: new Date('2020-04-15T00:00:00.000Z'), revenue: 120 }); And then create an automated email to remind users whose payments are overdue, or whose credit card is expiring. As these are transactional emails, keep personalization simple – the amount due, or the final four digits of the card that's expiring. When optimizing your cancellation flow, please bear in mind that any customers "saved" should still be considered very much at risk of churning. This should be the first step in trying to bring a customer back in from the cold, not the last. Optimize your cancellation flow with Segment 👉 Tools that'll help: Learn from those who leave Churn is a natural byproduct of any business.
Customers come and go, as does the demand for your product. Though it may be painful, make sure you have a well-considered exit ramp. Acknowledge the reasons they’re churning, address them, and make sure they leave endeared towards your company. At Segment, we recently implemented a churn survey to get a deeper understanding of why our customers are churning and the magnitude of these issues. You can use this data to either a) reclaim churned users or b) identify cohorts who are especially prone to churn, so you can get ahead of it. We send our survey data via Segment first to Slack, to help give folks a reminder about what accounts are churning and why. Churn Alert johndoes churned by downgrading to developer :( -- Churn Reason: “Segment was too hard to use” Any additional Feedback? “I wasn’t sure where to start” User : John(john@does.com) Activation level : 2 sources / 3 destinations / 3,000 MTUs Link to Vitally : segment.vitally.com/johndoes And then to our email tool, customer.io, where we send a personalized email to every account that churns. Best case scenario, you’ll open the lines of communication for a winback in the months ahead. But even if that’s off the table, you’ll get valuable insights you can use to help improve your product. Tools that’ll help: A good retention strategy starts with good data If you’ve found deal flow and signed contracts to be slowing, there is one area where your business isn’t lacking – data. You have access to a trove of data you can use to understand what your users need right now. Apply these insights to not only improve your product but also promote trust. Your success is your users’ success, and now is the time to prove to them you’re with them through thick and thin. By the end of all this, you’ll have a less leaky bucket that can weather just about any storm. (P.S. If you want help with anything covered in this post, we've got you covered! Get started with Segment here 👉)
Hi all, I'm studying CUDA Fortran for my thesis, and I have a question. As in C, in Fortran we can define functions (or subroutines) that are both host and device, i.e. functions that are callable from the host or from the device. In C, if such a function has to determine who the caller is, I can use the __CUDA_ARCH__ macro: it is defined when the caller is the GPU, and it isn't defined when the caller is the CPU. This is an example of the use of __CUDA_ARCH__:

__host__ __device__ void function()
{
#ifdef __CUDA_ARCH__
    // __CUDA_ARCH__ defined, GPU is the caller
    #if __CUDA_ARCH__ >= 200
        // Compute capability >= 2.x
    #elif __CUDA_ARCH__ < 200
        // Compute capability < 2.x
    #endif
#else
    // __CUDA_ARCH__ not defined, the host calls the function
#endif
}

So, the question is: is there something like this in Fortran? Thanks to all!
# Naming things

> There are only two hard things in Computer Science: cache invalidation
> and naming things.
>
> — Phil Karlton

We, developers, spend more time reading code than writing it. It is important for the code to be readable and clear about its intent. Below is some advice based on my experience of naming things.

Meaning
=======

A name, be it a variable, a property, a class, or an interface, should reflect why it is being introduced and how it is used.

Use accurate names
------------------

If one cannot get an idea of usage and purpose without extra comments, the name is not good enough. If the immediate idea of usage or purpose suggested by the name is wrong, then the naming is unacceptable. The worst possible naming is when a method name lies to the one who reads it.

Avoid meaningless names
-----------------------

These are names like `$i`, `$j`, `$k` etc. While these are OK to use in loops, in other cases they are wasting readability. A common source of such names is classic science, where most formulas use one-letter variables because they are faster to write. As a consequence, you cannot make sense of these formulas without an introductory paragraph explaining the naming. Often this paragraph is hard to find. Since computer science education includes a significant number of classic science disciplines, students get used to such naming and bring it into programming.

Naming classes, interfaces, properties and methods
--------------------------------------------------

A class name should be one or several nouns; there should be no verbs. Try avoiding "data", "manager", "info", "processor", "handler", "maker", "util" etc. as these are usually an indicator of vague naming. Interfaces are usually either nouns or adjectives. Some teams, including [PHP-FIG](https://www.php-fig.org/), chose to postfix interfaces with `Interface`. Some do it with an `I` prefix and some use no prefix or postfix. I find all of these acceptable as long as you are consistent.
Properties should be named with nouns or adjectives. For booleans, use affirmative phrases prefixed with "Is", "Can", or "Has" where appropriate.

Method names should contain one or more verbs as they are actions. Choose a verb that describes what the method does, not how it does it.

While it is not strictly necessary, it is a good idea to end a derived class's name with the name of the base class:

```
class Exception {}
class InvalidArgumentException extends Exception {}
```

Consistency
===========

Use a single name for a single concept. Be consistent. A good example is using `begin`/`end` everywhere instead of mixing it with `start`/`finish`.

Follow code style conventions
-----------------------------

When developing a project, a team must agree on the code style and naming conventions they use, and follow them. If part of the conventions is not accepted by some team members, it should be reviewed and changed, and a new rule should be set. For PHP the most common convention is currently PSR-2, and most internal project conventions are based on it.

Verbosity
=========

Avoid reusing names
-------------------

Using the same name for many concepts should be avoided if possible, as it brings two problems:

* When reading, you have to keep the context in mind. Usually that means going back to the namespace or package declaration constantly.
* Searching for such names is a pain.

Avoid contractions
------------------

Do not use contractions.
Common examples are:

* `cnt`
* `iter`
* `amnt`
* `impl`

```
function cntBigWrds($data, $length)
{
    $i = 0;
    foreach ($data as $iter) {
        if (mb_strlen($iter) > $length) {
            $i++;
        }
    }
    return $i;
}

$data = ['I', 'am', 'word'];
echo cntBigWrds($data, 3);
```

The code above, when named properly, becomes:

```
function countWordsLongerThan(array $words, int $minimumLength)
{
    $count = 0;
    foreach ($words as $word) {
        if (mb_strlen($word) > $minimumLength) {
            $count++;
        }
    }
    return $count;
}

$words = ['I', 'am', 'word'];
echo countWordsLongerThan($words, 3);
```

Note still that short explanatory names without contractions are better than long explanatory names, so do not take verbosity to the extreme, ending up with names like `processTextReplacingMoreThanASingleSpaceWithASingleSpace()`. If the name is too long, it either means it could be re-worded to make it shorter, or the thing you are naming is doing too much and should be refactored into multiple things.

Avoid acronyms
--------------

Avoid acronyms and abbreviations except commonly known ones such as HTML.

[Elon Musk](https://en.wikipedia.org/wiki/Elon_Musk) sent an email titled "Acronyms Seriously Suck" to all SpaceX employees in May 2010:

> There is a creeping tendency to use made up acronyms at SpaceX. Excessive use of made up acronyms is a significant impediment to communication and keeping communication good as we grow is incredibly important. Individually, a few acronyms here and there may not seem so bad, but if a thousand people are making these up, over time the result will be a huge glossary that we have to issue to new employees. No one can actually remember all these acronyms and people don't want to seem dumb in a meeting, so they just sit there in ignorance. This is particularly tough on new employees.
>
> That needs to stop immediately or I will take drastic action — I have given enough warning over the years. Unless an acronym is approved by me, it should not enter the SpaceX glossary.
If there is an existing acronym that cannot reasonably be justified, it should be eliminated, as I have requested in the past. > > > > For example, there should be not "HTS" [horizontal test stand] or "VTS" [vertical test stand] designations for test stands. Those are particularly dumb, as they contain unnecessary words. A "stand" at our test site is obviously a test stand. VTS-3 is four syllables compared with "Tripod", which is two, so the bloody acronym version actually takes longer to say than the name! > > > > The key test for an acronym is to ask whether it helps or hurts communication. An acronym that most engineers outside of SpaceX already know, such as GUI, is fine to use. It is also ok to make up a few acronyms/contractions every now and again, assuming I have approved them, e.g. MVac and M9 instead of Merlin 1C-Vacuum or Merlin 1C-Sea Level, but those need to be kept to a minimum. I agree with him. Readability =========== Code should be able to be read as easily as prose. Choose words that you would choose writing an article or a book. For example, a property named `TotalAmount` is more readable in English than `AmountTotal`. Hiding implementation details ----------------------------- That is more about object oriented design but it affects readability much if implementation details are exposed. Try not to expose methods named like: * `initialize` * `init` * `create` * `build` Domain language --------------- Code should use the same names as used in the business or domain model automated. For example, if a travel business using "venue" as a general name for cafes, hotels and tourist attractions, it is a bad idea to use "place" in the code because you and your users will speak two different languages making it more complicated than it should. Such a language is often called "The Ubiquitous Language". You can learn more from "[Domain Driven Design Quickly](https://www.infoq.com/minibooks/domain-driven-design-quickly)" mini-book by InfoQ. 
English
-------

The majority of programming languages use English for built-in constructs, and it is good practice to name things in English as well. It is extremely important for a developer to learn English at least at a basic level and, more importantly, to build a good vocabulary to draw on when looking for a good name.

Some useful tools:

* [thesaurus.com](http://www.thesaurus.com/) — for finding synonyms.
* [wordassociations.net](https://wordassociations.net/en) — for finding associations.

References
==========

* [Microsoft Guidelines for Names](https://docs.microsoft.com/en-us/previous-versions/dotnet/netframework-4.0/ms229002(v%3dvs.100))
* [TwoHardThings by Martin Fowler](https://martinfowler.com/bliki/TwoHardThings.html)
* [Domain Driven Design Quickly by InfoQ](https://www.infoq.com/minibooks/domain-driven-design-quickly)
Keras is a Python framework designed to make working with TensorFlow easier. It builds neural networks, which, of course, are used for classification problems. The example problem below is binary classification. You can find the code here. The binary classification problem here is to determine whether a customer will buy something, given 14 different features. You can see the data here. Keras can run on top of:

- TensorFlow
- Theano
- CNTK

Here we use TensorFlow. So install Anaconda and then run these commands to install the rest.

conda install theano
conda install tensorflow
conda install keras

The Code

In the code below, we have a dataframe of shape (673,14), meaning 673 rows and 14 feature columns. We take the column called Buy and use it for the labels. You can use the Keras methods with dataframes, numpy arrays, or Tensors. We declare our model to be Sequential, i.e. a stack of layers. We tell Keras to return the accuracy metric with metrics=['accuracy'].

import tensorflow as tf
from keras.models import Sequential
import pandas as pd
from keras.layers import Dense

url = ' logisticRegressionBestModel/master/KidCreative.csv'
data = pd.read_csv(url, delimiter=',')

labels = data['Buy']
features = data.iloc[:,2:16]

model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=1))
model.add(Dense(units=14, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(labels, features, batch_size=12, epochs=10, verbose=1, validation_data=(labels, features))
model.evaluate(labels, features, verbose=0)
model.summary()

The output looks like this. As you can see, it ran the sgd (stochastic gradient descent) optimizer with the categorical_crossentropy loss 10 times, since we set the epochs to 10. Since we used the same data for training and evaluating, we get a 0.9866 accuracy. In actual use we would split the input data into training and test data, following the standard convention.
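That split convention can be sketched in a few lines of pure Python. This is a hypothetical, stdlib-only sketch (in practice you would typically reach for scikit-learn's `train_test_split`); the 673-row list here simply stands in for the samples in the example:

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle the rows and split them into train/test sets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # seeded so the split is reproducible
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

rows = list(range(673))  # stand-in for the 673 samples in the example
train, test = train_test_split(rows)
print(len(train), len(test))  # 538 135
```

The model is then fit on the training rows only, and the held-out test rows give an accuracy figure that is not inflated by evaluating on data the model has already seen.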
The loss is 362.1225. We could have used mse (mean squared error), but we used categorical_crossentropy. The goal of the optimizer (in this case sgd) is to minimize the loss function, i.e. the difference between the actual and predicted values.

Train on 673 samples, validate on 673 samples
Epoch 1/10
2018-07-26 08:43:32.122722: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
673/673 [==============================] - 0s 494us/step - loss: 1679.5777 - acc: 0.9851 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 2/10
673/673 [==============================] - 0s 233us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 3/10
673/673 [==============================] - 0s 218us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 4/10
673/673 [==============================] - 0s 208us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 5/10
673/673 [==============================] - 0s 213us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 6/10
673/673 [==============================] - 0s 212us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 7/10
673/673 [==============================] - 0s 216us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 8/10
673/673 [==============================] - 0s 218us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 9/10
673/673 [==============================] - 0s 228us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
Epoch 10/10
673/673 [==============================] - 0s 239us/step - loss: 362.1225 - acc: 0.9866 - val_loss: 362.1225 - val_acc: 0.9866
<keras.callbacks.History object at 0x7fa48f3ccac8>
model.evaluate(labels, features, verbose=0)
[362.1224654085746, 0.986627043090639]

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 64)                128
_________________________________________________________________
dense_2 (Dense)              (None, 14)                910
=================================================================
Total params: 1,038
Trainable params: 1,038
Non-trainable params: 0

Now you can use the predict() method to predict whether a person is likely to buy this product or not.
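For a binary problem, the last step is to threshold the predicted probabilities into buy/no-buy labels. Below is a pure-Python sketch of that step; the probability values are made up for illustration (in practice they would come from a call like `model.predict(features)`):

```python
def to_buy_labels(probabilities, threshold=0.5):
    """Turn predicted buy-probabilities into 0/1 buy decisions."""
    return [1 if p >= threshold else 0 for p in probabilities]

# Hypothetical outputs of model.predict(features) for five customers.
predicted = [0.91, 0.12, 0.55, 0.49, 0.73]
print(to_buy_labels(predicted))  # [1, 0, 1, 0, 1]
```

The 0.5 threshold is only a default; raising it trades recall for precision, which matters if acting on a predicted "buy" (e.g. sending a promotion) has a cost.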
Commit f61db18:

@@ -115,6 +115,14 @@ def add_appdata(path, username, projectname, lock=None):
     out = ""
+
+    # We need to have a possibility to disable an appstream builder for some projects
+    # because it doesn't properly scale up for a large ammount of packages
+    parent_dir = os.path.dirname(os.path.normpath(path))
+    assert parent_dir.endswith(os.path.join(username, projectname))
+    if os.path.exists(os.path.join(parent_dir, ".disable-appstream")):
+        return out
     kwargs = {
         "packages_dir": path,
         "username": username,

See #738

Documented in SOP:

+1

os.path.join(username, projectname) ?

Metadata Update from @frostyx:
- Pull-request tagged with: needs-work

rebased onto df8f0cede2044be97e328aa4aee843f0fd3cec8a

I've switched to os.path.join, os.path.exists and os.path.realpath as suggested at the meeting.

Metadata Update from @frostyx:
- Pull-request untagged with: needs-work

Sorry, I probably meant os.path.normpath. Even though we don't use symlinks, realpath() has some potential to cause problems in the future...

rebased onto aed994cf31a6224497aabc80741222c245e1a043

> Sorry, I probably meant os.path.normpath.

Fixed :-)

rebased onto f61db18

Thanks, please merge (for some reason I can not, again ..).

Pull-Request has been merged by praiskup
Update of /cvsroot/plplot/plplot/examples/c++
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv16703/examples/c++

Modified Files:
	Makefile.am
Added Files:
	README.c++demos
Log Message:
Add README noting the problems with some C++ compilers (notably under IRIX) not supporting some newer C++ features.

--- NEW FILE: README.c++demos ---
These C++ examples exactly replicate the C examples but using the C++ plstream class. The examples are written to work on a fairly modern C++ compiler. They are known to work with gcc 2.95 and gcc 3.3.3 for example.

The examples make use of the std namespace feature. This may cause some difficulties on older compilers. You may need to comment out the line "using namespace std;" near the top of the file in this case.

We have had reports of other problems on some versions of the IRIX compilers because they do not include the standard header file cstdlib. In this case you may need to include stdlib.h instead.

If you encounter any other problems please report them.

Andrew Ross <andrewross@...>
March 2004.

Index: Makefile.am
===================================================================
RCS file: /cvsroot/plplot/plplot/examples/c++/Makefile.am,v
retrieving revision 1.16
retrieving revision 1.17
diff -u -d -r1.16 -r1.17
--- Makefile.am	11 Feb 2004 11:39:00 -0000	1.16
+++ Makefile.am	2 Mar 2004 15:19:30 -0000	1.17
@@ -117,5 +117,5 @@
 	$(LN_S) ../c/lena.pgm lena.pgm )
 endif
-EXTRA_DIST = $(sources) Makefile.examples.in
+EXTRA_DIST = $(sources) Makefile.examples.in README.c++demos
 DISTCLEANFILES = Makefile.examples
With the advent of .NET Framework 3.0 (formerly WinFX), terms like Windows Presentation Foundation (formerly Avalon), Windows Communication Foundation (formerly Indigo), Windows Workflow Foundation (formerly WWF) and CardSpace (formerly InfoCard) are everywhere. The .NET Framework 3.0 also includes a powerful general-purpose object initialization language known as XAML (eXtensible Application Markup Language). What Microsoft is really putting stress on is creating compelling user experiences and secure, seamless communication across boundaries. In this series of articles, I will be explaining and going in depth into some of the core components of .NET Framework 3.0. Let's start with Windows Workflow Foundation (WF). I assume you have basic knowledge of the .NET Framework; I will try to cover most of the concepts as they arise.

A state-based workflow waits on external entities to perform some action before moving on to the next step. On the other hand, a sequential workflow is just a continuous flow of operations which might include branching, but steps in a sequential workflow don't wait for an external entity to perform an operation before continuing. Within a workflow, branching is done when a decision is to be made. For each branch in the workflow there must be two alternatives: either true or false. You cannot just stop the workflow to make a decision at any point. A sequential workflow follows the traditional style of thinking, i.e. as long as the process is simple and stays within its boundaries, a sequential workflow will work. On the other hand, state machine workflows deal with the different states of the workflow. A process which involves several different iterations before getting into its final shape is a candidate for being a state machine workflow. One of the biggest reasons for creating a workflow is that you are actually creating a model.
Most business processes have some sort of model associated with them, whether it is use cases, UML diagrams or a simple flow chart. In a process you always have some flow, and thus there will always be some steps involved in its execution. With WF, the model and the workflow are one and the same: you use pieces of a business process to make a workflow, and you piece together the workflow, which makes a model. Before WF, you would make a couple of UML diagrams, draw flow charts, write pseudo code, and explain in bulleted text how an operation should work, and only then would you finally get to writing the code; with WF, when you are modeling the business process you are also building the application! As discussed earlier, WF is part of Microsoft's new programming model, .NET Framework 3.0. It enables business processes to be expressed graphically and linked directly to business logic. With WF, workflows can be expressed either in code or in markup. The WF programming model is made up of a number of exposed APIs that are encapsulated in the Microsoft .NET Framework 3.0 namespace called System.Workflow. Let's have a quick look at the WF architecture. The most convenient way to interact with the namespace is of course through the workflow designer: you just create a workflow, drag and drop activities, and there you have it. A complete working model! Every workflow is made up of a set of activities (steps/tasks). These activities facilitate the business process or are part of the business process. In terms of WF, the activities are the actual work units required to carry out a workflow. WF allows you to create your own custom activities and your own library, which will be done in a coming article. The next component of WF is the runtime engine. The WF runtime engine executes workflows made up of activities created with the VS 2005 Workflow Designer.
The runtime engine includes three basic services. A workflow has no executable environment of its own and requires a host process within which it can execute. Workflows are not applications themselves; rather, they require an application in which they can reside. This host process can be a Windows application or an ASP.NET web application. Assuming you have VS 2005 installed along with the latest .NET Framework, in order to work with WF you simply need to download and install the following components: After installing the components, within VS 2005 you will have new project types and will be able to import and use the namespace.

Fig. 01

The Sequential Workflow Console Application and State Machine Workflow Console Application templates provide the workflow models for the two types of workflow that can be created within WF: sequential and state machine. The first step in creating a workflow application is to select a workflow model; then you need to add activities to that model. There isn't much difference between the project templates. The difference between the console application and the library is just that the console application contains a main method which is used to start the workflow. Let's have a look at the design areas of the various project templates available in WF:

Fig. 02

Before getting into further details it's important to understand the various activities that are available in WF. WF provides you with more than a dozen basic activities, which I will briefly discuss below in order to give a better understanding of WF.

- FaultHandler Activity: works like a catch block in your code. It can contain various activities that are fired when an exception occurs, including a Compensate Activity.
- IfElse Activity: works like If/Else blocks in your application. You can have a branch of activities based on a certain condition.
- Replicator Activity: works like a For Each statement in your application. It creates a number of instances of an activity at run time, all of which must complete before the Replicator Activity itself can complete.
- Throw Activity: works like Throw statements in your application.
You can use this activity to throw an exception from one workflow or activity to another.
- While Activity: works like a While statement in your application. It executes another activity until a condition is met. The condition can be either a code-based or a rule-based condition.

All workflows and activities are classes. If we use a basic OOP definition, a programming language class is nothing but a real-world construct that encapsulates related variables (properties) and methods (functions). As discussed earlier, Workflow is the newly added namespace in Microsoft .NET Framework 3.0, under the System namespace, for building workflow applications. There are actually three assemblies that constitute this namespace, i.e. Activities, ComponentModel and Runtime. You can find them physically located in your GAC.

Fig. 03

Explaining each and every bit of information in these namespaces will not be feasible for me; you can refer to MSDN for that. However, just to give you an idea, we will discuss a few of these briefly. Enough with the background talking; it's time to roll up the sleeves and see some practical code examples. Open up VS 2005 and, under File > New > Project > Workflow, create a new Sequential Workflow Console Application. When you create a new workflow in C#, the file which is used to host the console application is called Program.cs, whereas in VB it is Module1.vb. After creating a new project we have deleted our Workflow1.cs file and instead added a new item for creating our sequential workflow.

Fig. 04

Just to clarify here: WPF XAML is all about UI, whereas WF XAML is all about business processes. So, in order to distinguish the two, a new extension has been introduced: XOML, which is in fact eXtensible Object Markup Language (you may call it workflow markup). And if we really look into XAML, it is just a new format for serializing .NET types into XML. There is no syntactical difference between XAML and XOML; only the semantics differ.
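To give an idea of what workflow markup looks like, here is a minimal XOML fragment describing a one-activity sequential workflow like the one we are about to build. This is a sketch rather than designer output: the x:Class name and the activity wiring mirror the names used in this article, and the namespaces are the standard WF XAML namespaces.

```xml
<SequentialWorkflowActivity x:Class="_1SequentialWF.Workflow1"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/workflow"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <!-- A single Code Activity whose ExecuteCode handler lives in the code-beside file -->
  <CodeActivity x:Name="codeActivity1" ExecuteCode="codeActivity1_ExecuteCode" />
</SequentialWorkflowActivity>
```

The handler named in ExecuteCode is implemented in the partial class in the .xoml.cs code-beside file, which is exactly the split between declarative structure and imperative logic that XOML is designed for.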
After we have the workflow file added to our solution, we can simply open it up with an XML editor or the default Workflow Designer. In the workflow designer, drag and drop the Code Activity onto the design surface of our sequential workflow.

Fig. 05

You can see a red exclamation mark in the above figure. The smart tag will tell you that the property ExecuteCode has not been set, i.e. you need to add a handler for your activity. Go to the properties and, under the Events section, double click the ExecuteCode property, and there you have it: a stub generated for you right in your code-beside file. Here is what your code will now look like:

namespace _1SequentialWF
{
    public partial class Workflow1 : SequentialWorkflowActivity
    {
        private void codeActivity1_ExecuteCode(object sender, EventArgs e)
        {
        }
    }
}

Just add the code that you want to execute here; e.g. we have added a simple display message that will print "Hello Workflow!" in the console output window.

Fig. 06

You can also set a breakpoint on your Code Activity. To get an understanding of what's actually happening behind the scenes, let's have a look at the Program.cs file and see what code the designer has generated for us.

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

namespace _1SequentialWF
{
    class Program
    {
        static void Main(string[] args)
        {
            using (WorkflowRuntime workflowRuntime = new WorkflowRuntime())
            {
                AutoResetEvent waitHandle = new AutoResetEvent(false);
                workflowRuntime.WorkflowCompleted += delegate(object sender, WorkflowCompletedEventArgs e) { waitHandle.Set(); };
                workflowRuntime.WorkflowTerminated += delegate(object sender, WorkflowTerminatedEventArgs e)
                {
                    Console.WriteLine(e.Exception.Message);
                    waitHandle.Set();
                };

                WorkflowInstance instance = workflowRuntime.CreateWorkflow(typeof(_1SequentialWF.Workflow1));
                instance.Start();
                waitHandle.WaitOne();
            }
        }
    }
}

If we write pseudo code for the above, it will look like this:

- Create the workflow runtime.
- Subscribe to the workflow completed and terminated events.
- Create an instance of our workflow.
- Start the instance.
- Wait until the workflow finishes.

Interesting, isn't it? You have just added a simple activity and defined a simple task for that activity, and the rest is all handled for you automatically.
Another way of passing data into workflows is through events. With events, workflow authors add an activity that receives an event and the data associated with it. Let us modify our previous code and introduce properties in it. Here we will host our workflow application in a Windows Forms application by adding a reference to the System.Windows.Forms assembly. Fig. 07

namespace _1SequentialWF
{
    public partial class Workflow1 : SequentialWorkflowActivity
    {
        private void codeActivity1_ExecuteCode(object sender, EventArgs e)
        {
            Console.WriteLine("Hello Workflow! " + FirstName + " " + LastName);
            System.Windows.Forms.MessageBox.Show("Hello Workflow! " + FirstName + " " + LastName);
        }

        private string myFirstName;
        public string FirstName
        {
            get { return myFirstName; }
            set { myFirstName = value; }
        }

        private string myLastName;
        public string LastName
        {
            get { return myLastName; }
            set { myLastName = value; }
        }
    }
}

Now we are ready to launch our workflow from a Windows application. Let's do that by adding a new Windows application to our solution. Fig. 08 Now we will set our Windows application as the StartUp project and add a reference to our workflow application. Note: the output mode of our workflow application must be set to Class Library before adding the reference, and it must then be compiled to build the output DLL. Fig. 09 After adding the reference to our workflow project, the next step is to add references to the WF system assemblies. Let's do that as well! Fig. 10 After this we will design our form by dropping two labels, two text boxes and a button on the form, so that it looks like: Fig.
11 Here is the associated code behind the form:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

namespace WindowsApplicationHost
{
    public partial class Form1 : Form
    {
        private WorkflowRuntime wr;

        public Form1()
        {
            InitializeComponent();
        }

        private void btnBeginWorkflow_Click(object sender, EventArgs e)
        {
            if (wr == null)
            {
                wr = new WorkflowRuntime();
                wr.StartRuntime();
            }

            Dictionary<string, object> parameters = new Dictionary<string, object>();
            parameters.Add("FirstName", txtFirstName.Text);
            parameters.Add("LastName", txtLastName.Text);

            WorkflowInstance instance = wr.CreateWorkflow(
                typeof(_1SequentialWF.Workflow1), parameters);
            instance.Start();
        }

        private void btnCloseWorkflow_Click(object sender, EventArgs e)
        {
            if (wr != null)
            {
                if (wr.IsStarted)
                {
                    wr.StopRuntime();
                    MessageBox.Show("Workflow successfully stopped");
                }
                else
                {
                    MessageBox.Show("Workflow not started yet");
                }
            }
        }
    }
}

We have simply added the references to the WF namespaces. In our main application we have created an instance of WorkflowRuntime and used a simple Dictionary object to hold input from the user, which we have passed to our workflow. Following is the output of our application: Fig. 12 Fig. 13 This way we have successfully hosted our workflow application in a Windows Forms application and also passed values to our workflow. In the coming articles I will describe in detail how we can attach various activities and declare rule-based workflows, and also give a detailed insight into State Machine workflows, the Activity Library, and the rest of the components of Microsoft .NET Framework 3.0. Happy Coding~
http://www.codeproject.com/KB/WF/WFintropart1.aspx
Intel’s Clear Linux has launched a new generation of its containers project, Clear Containers. Debuted as Clear Containers 3.0, the new version brings support for leveraging code used for namespace-based containers and improves integration with the container ecosystem. The most significant change in Clear Containers 3.0 over its previous versions is its Go language build. Intel’s Clear Linux team has finally decided to leave the C language behind and has switched the entire stack to Go. The new Clear Containers version also comes with a new agent based on the libcontainer library that is designed around it from the outset. Users can use the library to apply policies and filters such as SELinux and Seccomp within the Clear Containers guest. The updated Clear Containers brings support for a virtio-blk storage backend. The new version also includes support for Kernel SamePage Merging (KSM) throttling to improve container scaling and density. Likewise, you can easily run Clear Containers on VMware virtual machines and Microsoft’s Hyper-V hypervisor. In addition to the major improvements to Clear Containers, Intel has upgraded cc-runtime, which now includes compatibility with the OCI runtime specification and the Docker engine. You can also run the platform under Kubernetes through CRI-O. This development enables you to run both trusted and untrusted workloads in a Kubernetes cluster on bare metal. The new version of Clear Containers can be downloaded from its GitHub repository.
https://www.opensourceforu.com/2017/10/intel-clear-containers-3-0-go-language/
Middleware that provides the ability to upload objects to a cluster using an HTML form POST. The format of the form is:

<form action="<swift-url>" method="POST"
      enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br/>
  <input type="submit" />
</form>

The swift-url is the URL to the Object Storage destination, such as: https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix The name of each file uploaded is appended to the specified swift-url. So, you can upload directly to the root of a container with a URL like: https://swift-cluster.example.com/v1/AUTH_account/container/ Optionally, you can include an object prefix to better separate different users’ uploads, such as: https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix The redirect attribute is the URL to redirect the browser to after the upload completes. The URL will have status and message query parameters added to it, indicating the HTTP status code for the upload and a possible error message. The max_file_size attribute must be included and indicates the largest single file upload that can be done. The max_file_count attribute must also be included and indicates the maximum number of files that can be uploaded with the form; include additional <input type="file" name="filexx"/> attributes if desired. The expires attribute is the Unix timestamp before which the form must be submitted; after that, it is invalidated. The signature attribute is the HMAC-SHA1 signature of the form. This sample Python code shows how to compute the signature:

import hmac
from hashlib import sha1
from time import time
path = '/v1/account/container/object_prefix'
redirect = ''
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
    max_file_size, max_file_count, expires)
signature = hmac.new(key, hmac_body, sha1).hexdigest()

The key is the value of the X-Account-Meta-Temp-URL-Key header on the account. Be certain to use the full path, from the /v1/ onward. The command-line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. The file attributes must appear after the other attributes to be processed correctly. If attributes come after the file, they are not sent with the sub-request because on the server side, all attributes in the file cannot be parsed unless the whole file is read into memory and the server does not have enough memory to service these requests. So, attributes that follow the file are ignored.
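The sample code above follows Python 2 string conventions; on Python 3, hmac.new requires bytes for both the key and the message. A self-contained Python 3 version of the same computation (the key and field values here are illustrative placeholders, not values from the documentation):

```python
import hmac
from hashlib import sha1
from time import time

# Placeholder form fields -- substitute your real swift path, redirect
# URL, limits, and account key.
path = '/v1/account/container/object_prefix'
redirect = 'https://example.com/uploaded'
max_file_size = 104857600        # 100 MiB
max_file_count = 10
expires = int(time() + 600)      # form valid for the next 10 minutes
key = 'mykey'                    # X-Account-Meta-Temp-URL-Key value

# The HMAC body is the five fields joined by newlines, in this exact order.
hmac_body = '%s\n%s\n%s\n%s\n%s' % (
    path, redirect, max_file_size, max_file_count, expires)

# Python 3: encode both key and body to bytes before hashing.
signature = hmac.new(key.encode('utf-8'),
                     hmac_body.encode('utf-8'),
                     sha1).hexdigest()
print(signature)
```

The resulting 40-character hex digest is what goes into the form's signature field.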
https://docs.openstack.org/icehouse/config-reference/content/object-storage-form-post.html
Top 10 MATLAB code practices that make me cry Posted by Doug Hull. Without further ado, the top ten, and quick ways to not do them: 10.) Not using left-hand zeros Certain things must be learned the hard way. I learned this one bleary-eyed evening as an undergraduate, well into the night working on a MATLAB homework assignment that would “take ten minutes, fifteen if you type slow.” (yes, Dr. P, 13 years later I still remember that one! -smile-) It is really easy to mistake a .5 for a 5. That is why I always use a left-hand zero, like 0.5. 9.) Plotting enormous amounts of data My computer monitor has 2.3 million pixels, total. If I try to plot datasets with huge amounts of data in them, they will very often just look like a blob and slow the machine down in the process. There is very often a better visualization available. Here is an example of changing the visualization to make it clearer and less taxing on memory. 8.) GUIs with garish color schemes In an effort to emphasize certain buttons on their GUI, people will change the colors of them. Very quickly they end up with several different colored buttons, a non-standard background color, extra-big buttons, etc… Sticking with the default colors is a good move. Most professionally produced software sticks with the defaults; it ends up looking better. 7.) Using ans, or any other MATLAB function, as a variable name or function When you do this, MATLAB will call whichever one is higher on the path. Some strange behavior can occur when you redefine a function like that. Unfortunately, MATLAB does not catch you doing this for the most part. I try to avoid using variable and function names that are common terms like mean, filter, etc… If there is any doubt, use the which command to find out if a function of a given name exists. 6.) Not using white space to good effect in code Even though you can put several commands on one line if separated by a semicolon, these lines can often be hard to notice.
Not putting blank lines between sections of code can also make it harder to read. White space is free, use it to make your code look good. 5.) Bad variable names Variable names are often the only commenting that gets added to people’s code. Meaningful variable names are a great opportunity to make the meaning of your code more clear and to some degree, self-documenting. Avoid using variable names like temp, aaa, r247899921. These just do not convey as much information to people that have to read your code as flagPassedInspection, centroidX, fidCurrentFile. 4.) Hard coding data into the MATLAB code file Some people like to put some of their variables directly into the MATLAB code. That makes sense for small variables (I will let you define what small means for you). For instance, I would feel fine putting a 3×3 matrix into my code. I would think twice about a 10×10, and I would start using one of our file readers for a 100 x 100. The worst instance I ever saw of this was some MATLAB code where the .M file was 4 GIG (not a mistake) long. All but a small amount of that was data written out in ASCII. This makes your code hard to read, maintain and understand. 3.) Exceptionally long files Even if not hard coding data into a MATLAB code file, it is easy to just add on “just a few more lines of code” until you have thousands of lines of code in a single script. This makes your code hard to understand. I try to use the rule that I should be able to see an entire script or function in one screen. This is not entirely practical, so I will at least break the code into logical sections that do fit on screen all at once. 2.) Globals I have never seen MATLAB code where globals were the right thing to do. Exception: functions TIC and TOC use them quite nicely. Most of the time I have seen globals being used it was a situation where the code author did not understand scoping of variables. Rather than pass variables from one function to another, they were just being made global. 
Why are people cautioned against using global variables? I will leave that to the consensus on Wikipedia. 1.) Eval EVAL is right up there with globals. MATLAB users will often string together MATLAB commands to get sequential variable names s1, s2, s3… only to then have to use another EVAL statement to work with the sequential variable names! Very often, a cell array indexed with s{1}, s{2}, s{3}… would work much better. I also find that people use EVAL to get at certain fields in a structure (for example data.alpha) when they do not know at the time of writing the code what field they will want. Now the dynamic fieldname notation makes that easier. The other most common place to see people use EVAL when it is not needed is when they are trying to load a file or some other function like that. Very often they are trying to EVAL a string like “load filename.mat”, not realizing that there is a functional form where you can use

fileNameString = 'filename.mat';
load(fileNameString)

Comments (oldest to newest): Given some of your previous writings, I am surprised that you left out premature and/or unnecessary optimization. You know, where a person spends 3 hours to vectorize a code that is intended to only run once or twice at 2 seconds per shot. Number 11? @Matt, There was that and eight others that did not make the list. Premature optimization is often indistinguishable from just hard to read code! I don’t always recognize it as an optimization! :) I was hoping to hear from people to see what theirs were. Top 19 did just not have the same ring to it! I guess if I think of one more, I can have the “top ten that make me cringe (instead of cry)” :) 6. Whitespace – YES! So much code I see seems to be from when a blank line meant a blank (wasted) card, spaces in the line meant your code took more cards to print, and more cards were a burden to carry and keep ordered.
For a younger generation, it seems to be from when you only had 80×25 character screens, and whitespace meant you couldn’t see much code at once. Let go of the old habits because you don’t have those constraints. Let your code breathe, and be amazed at how many errors you can spot now. 5. Bad variable names – I often see bad variable names with copious accompanying comments that are having to explain what each variable’s purpose is. One common thread to many of these things is feature discovery. It’s natural that you start to learn a new language’s basic parts. With that small kernel you can usually cobble together anything you need to do, given enough time and dwelling on doing it rather than taking time to see if there’s another way to do it, knowing it will save you time over and over later rather than now. MATLAB’s active M-Lint in the editor helps with some discovery. Good examples and cross-referencing in the documentation also help. Great topic! My personal peeves: 1) People who ignore mlint messages. Repeat after me: mlint is your friend, mlint is your friend. 2) Untidy code and crummy variable names: a simple Ctrl-A, Ctrl-I will clean up 99% of the code that gets messy as you edit. And for the love of all that is holy, do not give me variable names that start with ‘my’ anything. 3) Hard coding just about any non-trivial data (and some seemingly trivial) in the code will bite you down the line. Using i and j in a loop: for i = 1:10; disp(i); end. Using i in a loop is fine to me. I would say the opposite: always use the notation “1+1i” when writing complex numbers. I disagree with not using globals. Global variables are very practical when you have large data sets that are being modified within a function. The moment you change a variable inside a function, MATLAB makes a copy, which implies using a lot more memory. That’s a big problem when you have 800 MB variables!
In such cases, global is extremely convenient – preallocate and update the same one all the time using logical indexing. Far more memory efficient, wouldn’t you agree? @Jayveer, That is an excellent point. That is one of the good uses of globals. I have heard of people using them that way, but I have never seen it! I think it is one of those situations where 99% of all global variables give the other 1% a bad name! Doug Just an FYI… Rather than use globals, I created a handle class with dynamic properties. This way you can pass the class around instead of big variables.

classdef dynoclss < dynamicprops
end

Then you can do this:

x = dynoclss;
x.addprop('foo')
x.foo = rand(1e7,1);

Pass x to your functions and go crazy. Stephen I needed to store the arrival times of wavefronts and I used a variable (2D array) with the name “times”. Well, in Matlab “times” is the name of a function for multiplication… I accidentally figured out the mistake. @Stephen Nice! @Petros The reason all (except global) are on here is because I made ALL of these mistakes in the past. Experience is a great teacher! Doug @Stephen Very nice! In the pre-classdef days, one trick I used was storing large common data in the root’s appdata.

bigData = rand(1e7,1);
setappdata(0,'CommonData',bigData);

then when you need it

bigData = getappdata(0,'CommonData');

Had to build some framework for managing it, but all in all it worked pretty well. Matt Matt Whitaker is correct. When a code module is complete, the M-Lint should be silent. @Doug, Many of these items are already covered in some of the various MATLAB coding style guides. Maybe you could reference your favorite ones. Nice list. I would add two items which are practices of omission, i.e., not doing the right thing. 11. Not validating function arguments and return values. 12. Inadequate comments, especially if the variable names are not descriptive (#5).
For example (without my comments), it was obvious what the function y4F did when I wrote this 3 years ago and isn’t too hard to figure out now. But why figure it out again? Perhaps nested anonymous function calls should be added to the list. Also, two of eval’s cousins, str2num and feval, can be very useful and safe. I frequently use str2num, which of course calls eval in a try-catch block, to allow a user to enter parameters such as plot scaling factors as expressions. Why make someone pull out a calculator when there’s a great big one (MATLAB) behind that GUI you wrote for them? Ken @ken Eval’s cousins. Yes, they are safer. It is kind of like this: they are a lawnmower used to mow the lawn, where EVAL is a lawnmower you pick up and use to trim your hedges! Doug @Oyster, I actually do not have any favorite MATLAB coding style guides. Do you? Doug @Ken, Nested anonymous functions aren’t a bad coding practice … as long as it’s clear why you’ve nested them and what each contributes. In your case, I would argue the problem occurs not in the nesting of the functions but in the names of the variables in which you store them. Therefore I’d expand item 5 on the list — it’s not just bad _variable names_ that can cause problems, but bad _identifiers_, which includes things like variable names, function names, struct array field names, and even MAT-file (or other data file) names. Will you remember what’s stored in mat123.mat six months from now? How about what’s stored in financialDataQ1Y2010.mat? Hi, I don’t like the use of some relatively new features of Matlab: 1. The use of anonymous functions, see the list of Ken Gerrard for some horribly unreadable examples. 2. The use of nested functions, they are confusing regarding variable scope. 3. The use of functions like bsxfun without very detailed comments describing what happens.
Too bad that the far better understandable for-loops are still soooo slow in Matlab… Markus There is also a use of classes for avoiding global constants: Use constant properties of classes. They are initialized at first time use of the class, so you can make them dependent on a config file, elaborate one time calculations etc. You can group the constants by meaningful class names, use the meta information in the class for validation purposes etc. I like this post, Why don’t you open a wiki page dedicated to MATLAB, I guess the mathworks will find interesting things in there. I agree with most of the things you wrote, These are the things that cause a lot of pain to me: - lack of MACROS, why not include #define operator like in c++ ? - huge overhead time in function calls, especially nested ones. - I would like to pass arrays to function as reference. - globals: why not leaving a dedicated space in the header for globals? wouldn’t that be easier? - can not build an arrays of structure, only structure of arrays. If you know how to do it please tell me. - the rotate command in figures is not arc-balled! it only rotates on the centroid. You can spend most of your time rotating the figure like a dog trying to bite his tail, no way to focus what you want to see. - there is no built-in library for data structure like stl containers. This can be very painful when you need it. - HELP MEMORY ERROR: in my opinion it occurs for too small arrays. If I remember something else I will add it later Luigi Regarding #7, the reuse of MATLAB-defined identifiers, the chief problem with this is that you can’t know what functions are defined by MATLAB. I may be working with bare MATLAB at one computer, while another computer may have Simulink and a whole slew of additional toolboxes. From the bare installation, I have no way to know what other functions are floating around out there without checking the website. 
Fortunately, MATLAB’s name resolution rules work in the programmer’s favor such that mystery functions you’ve never heard of in super-awesome-toolbox usually do not cause problems. I’ve also had the misfortune of needing this behavior to suppress calls to “keyboard” in code I inherited for testing. Add that to the list of things that will make you cry — using keyboard() instead of assert() or error(). That really messes things up when trying to write automated tests. The converse of #7 is that it is difficult to figure out what toolboxes a piece of code actually uses. One of my personal annoyances is to see an m-file script with the following at the top:

close all; clear all

This is especially bad when you send your code to other people, who may not realize that your script will wipe out anything they may currently be working on. - Ben I hate when people hard code inputs that are meant to be changed each time, especially file names! Why not just use the inputdlg or input functions? They’re just as fast to code, especially if you’re not doing any error checking on the hard-coded file name anyway. I even see cases where multiple lines contain the different file names and commenting and uncommenting occurs. Quite often, the person goes through the trouble of using Windows Explorer to find the file name, copy it, and paste it, save the file, then run the script. At that point, just use uigetfile and be done with it! @Luigi My impression is that you try to code Matlab with C++ techniques. - lack of MACROS, why not include #define operator like in c++ ? >> You do not need it in Matlab (also not in Python, not in Ruby, not in Java, not in C#). - huge overhead time in function calls, especially nested ones. >> Agree! That’s the cost of using a dynamically typed interpreted language; do not ever try to compare it to C++. But Matlab and C++ have completely different purposes. And with all the ultra-efficient built-in Matlab functions, your Matlab code, programmed properly, can be made very fast.
- I would like to pass arrays to function as reference. >> Just do it. - globals: why not leaving a dedicated space in the header for globals? wouldn’t that be easier? >> I think this is a very minor point. - can not build an arrays of structure, only structure of arrays. If you know how to do it please tell me. >> What do you want??? - the rotate command in figures is not arc-balled! it only rotates on the centroid. You can spend most of your time rotating the figure like a dog trying to bite his tail, no way to focus what you want to see. >> You are probably right. - there is no built-in library for data structure like stl containers. This can be very painful when you need it. >> Why? You do not need it in Matlab. Newer Matlab Versions have even Hash tables. - HELP MEMORY ERROR: in my opinion it occurs for too small arrays. >> It is also about memory fragmentation. Fun list. My pet peeves are: -Hyper-dense lines of code. My coworker likes to do this, packing fifteen different function calls into one line. I’m impressed, OK, but I can’t read it. -Haphazard indentation. As mentioned above it’s so easy to do this right. -Repeating giant chunks of code, but changing one little thing. It’s called a “for” loop — use it! “One of my personal annoyances is to see an m-file script with the following at the top: This is especially bad when you send your code to other people, who may not realize that your script will wipe out anything they may currently be working on.” Very true if you are not expecting the commands. Unless your script is supposed to be working on the problem that is contained in the (assumed base) workspace, I think that all scripts aimed at answering a particular question should start with at least the “clear” command. So many errors can occur when there is some dependency on workspace contents (whether by design or not) for a script to run correctly. Starting from scratch ensures repeatable results wherever the code is run. 
My pet peeve is try/catch blocks used for normal control flow. For example, testing whether a struct has a certain field by trying to dereference it and catching the “no such field” error instead of first using a non-erroring test like isfield(). Bad in any language with exceptions, but particularly bad for Matlab because “dbstop if all error” is so useful, and try/catch in the normal control flow will hobble it by triggering spurious error breakpoints. Regarding #7: I agree with Arthur, it is difficult to ensure you are not overriding MATLAB functions (I don’t really like typing “which” all the time when coding). That is why I usually put all my code in its own namespace, e.g. work.myfun(..), or for the lazy guys: w.myfun(), so I am at least safe in using the function names I like. The additional typing is not an issue to me, thanks to TAB completion. @Doug, Although it is some 7.5 years old now, I’ve used Richard Johnson’s Style Guidelines to good profit. It is available on the MathWorks File Exchange: I’ve found that style guides written for other languages have some application to MatLab. Your item #6 on white space falls into this category. Wow, you all had a lot to say while I was on vacation! Some of this does come down to “religious issues”. I personally advocate putting

clear
close all
clc

at the top of every script because I want things reproducible. I am never worried about clearing out the workspace since everything I do is in a script or function, so I should be able to get back to that point easily. I agree with Doug on #33! and overall I find it very important that things be reproducible. Another point is the difference between ‘clear all’ and its good cousin ‘clear’, because the first one also clears the breakpoints in ALL the functions and scripts you have open in the M-Editor. But I disagree with ‘clc’, because with it you clear past error messages; there is enough memory for this. @J.R.
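The try/catch complaint at the top of this exchange is not MATLAB-specific; in any language with exceptions, a cheap explicit test beats exception-driven control flow, which also trips "break on any error" debugging. A minimal Python sketch of the same idea (the record and field names here are made up for illustration):

```python
record = {'name': 'probe-1', 'gain': 2.5}   # note: no 'offset' field

# Exception-driven lookup: works, but routes normal control flow
# through the error machinery.
try:
    offset = record['offset']
except KeyError:
    offset = 0.0

# Explicit test: same result, no exception raised along the way.
offset2 = record['offset'] if 'offset' in record else 0.0

# Or the idiomatic one-liner with a default value.
offset3 = record.get('offset', 0.0)
print(offset, offset2, offset3)
```

All three produce the same value; only the first one raises and swallows an exception on the common path.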
It is funny, I use clc specifically because I want to clear the old error messages. I am running the code for a second time specifically to see if my changes fixed the errors. I would not want to confuse myself by thinking that the old error message corresponds to the new code! Doug Re eval: How would you treat conversion w/o eval? For instance:

c = class(a);
…
a = double(a); % to perform some arithmetic on a
do something to a
…
a = eval([c '(a);']);

–P @P I did not know, but Loren did! I mostly agree with Doug’s list, although there are a few exceptions. One that comes to mind is hard-coding the Gauss points and weights into an M-File for integration. I have seen this done and it made sense to do it. Here are some of my pet peeves. - clear all, clc at the start of a function M-File. I do sometimes use HOME instead of CLC. – Use of i or j as a loop index. I always (and only) use double letters as a loop index. For example, ii or jj or kk. That way there is no overwriting of built-ins, and these variables are always known to be loop indices. – Many blank spaces between lines in an M-File. – Massive IF-ELSEIF chains instead of a SWITCH. – Not pre-allocating arrays which then grow in a loop. – Recursion. I know, it has its place, I just can’t bring myself to like it. – No help in a function M-File – Dumping intermediate results to the command window. I see this one often. – Programming as if we have infinite precision. Yikes! @Matt, I like the idea of HOME instead of CLC. I will have to try that for a while. ii, jj, kk. That is a nice little convention. Much like I use vi (Valid Indices) as the output from a find operation. I always know what it means. After many goofed up indices with i in a loop, out of frustration, I made up GAPLI (Generic All Purpose Looping Index). I wish I understood recursion better, it often confuses me! I might have to make a follow-up post to this one: “ten more MATLAB coding practices that make blog commenters cry!” Thanks!
Doug @Matt, @Doug, and @J.R., Regarding ‘close all’ and ‘clear all’ at the top of scripts, it sounds like Matt (comment #38) has the same coding practice as I do. In order to make things reproducible I almost always write functions instead of m-file scripts. This way I always know exactly what I’m passing in and what I’m passing out, and I know that the function is starting from a clean workspace of its own (and I refuse to use ‘global’ variables). While I’m writing/debugging I may put a “keyboard” as the last line of code, so I can see what’s happening before that workspace gets cleared. Once I’m satisfied the function works as desired, I just remove the “keyboard” line. My big reason for *not* wanting “clear all” is that I keep the data I’m working with in structs in the workspace. The data sets tend to be rather large and take a non-negligible amount of time to load from the .mat file I store them in (at least 10-30 seconds to load, if not longer). If the meat of the function only takes 1-2 seconds, it’s a lot of overhead to clear the struct and then reload it again every time. - Ben I am very much in agreement with Ben. I work almost exclusively in functions instead of scripts, and often have scratch data in my workspace or extensive state in my open figures. Running a script that has “close all; clear all” at the beginning (or, even worse, buried inside it) nukes all of this state without warning. -Ross Put me in the “no clear or close all” camp. I usually have lots of figures open and things in the base workspace, and scripts that clear those without warning are no fun. When I start coding something new, I usually test some things at the command line first. If that works out, I put it into a .m function file and continue working. I usually put a debugger breakpoint at the last line so I can inspect the results, and if I want to, I dump relevant variables to the base workspace using assignin('base', …). I reuse the figure numbers each time I run the script.
(I usually turn off most figure creation when things look right, but keep a switch so I can use them for diagnostics.) I created a “number increment function”, next_function, for this purpose, that works in the following way:

next = next_function(10); % generate the increment function next, that will start at 10
figure(next()); clf;      % creates and clears figure(10)
% plot stuff
figure(next()); clf;      % creates and clears figure(11)
% plot more stuff

etc. In that way, I can easily put new figures in between others, and still keep the figure numbers in order following the order they appear in the code. I just wanted to mention that I totally agree with Luigi regarding the problem with the 3D rotate command. Another problem for me is that the elevation angle rotation is constrained to (-90 90]. I don’t see any reason for this – the azimuth angle is not limited in this way, so why limit the elevation? Also, I can not reach elevation = -90; -88 works, -89 works, but when I reach what should be -90, the plot snaps back to +90, which can be pretty frustrating. I’m stuck with 2007a, so maybe these have been addressed recently, I wouldn’t know. Luigi asked: “- can not build an arrays of structure, only structure of arrays. If you know how to do it please tell me.” It’s easy to build either a structure whose fields are arrays:

s = struct('x', [1 2 3]); % one structure, whose field holds an array

or to build an array of structures. Two ways to do so:

sa = struct('x', {1, 2, 3});        % a 1x3 struct array
sb = repmat(struct('x', 0), 1, 3);  % replicate a scalar struct

Note that while I’ve used STRUCT to construct these struct arrays, you can do so using normal subscripting. Thanks. I am a “heavy user” of the eval function. Got some tips to avoid it. Thanks @petro, Show a couple of use cases and we will find alternatives. Doug Hello Doug, Regarding not using global variables. What is your opinion about using global variables to prevent this type of thing: I don’t like REDEFINING the mass and the density and all that because it takes up so much extra space. Is it dangerous to have mass and density and all those as global variables? @Juliette, Are these huge variables?
Are you running out of memory? I doubt they are. Making them global is not a recommended practice; it can cause debugging problems later and takes away one of the reasons for using functions – the scoping of variables.

Doug

I use globals for debugging and plotting switches. Usually, I’ll have one called ‘debugYN’ and one called ‘plotYN’. By changing these booleans at the global level, I can change whether my programs will or will not plot, or deliver debugging info. Other options include changing the level of verbosity, etc.

Doug, I have to agree that using global is horrible, but I have a hard time getting around it most of the time. Currently I’m working on a GUI which requires three script files to run. I’m dealing with 60+ variables that must be passed from one script to the next. From my perspective, it doesn’t seem practical to use a function that requires more than 60 inputs. It drives me crazy to see lines of code that extend off to the right of the editor. So is there a way to pass that many variables without using global? It just seems way more convenient… But I hate using them, so if you have any insight, that’d be great!

Caitlyn

When you say script, I think you mean function, as you do not pass variables to a script. If these are all related variables, would it be better to save them into a structure? Then you can pass around a few structures. This has the added advantage of being able to keep the same function signature yet add more fields to the structure as you need more data sent to those functions.

Doug

Hi Doug,

I am working on a GUI right now that contains a huge number of buttons and panels etc., and all the callback functions are in the .m file. Even if for every callback I write a separate function that is called by the callback, the .m file is still huge and confusing. Is there a way to structure such .m files logically?

@Christine, I sometimes find that putting callbacks (and other subfunctions I may need) in separate m-files makes them more manageable.
I’m not sure what you mean by “the .m file is still huge and confusing”? It should be much shorter if you remove the callbacks! Just to be clear, you don’t need a callback function in the main m-file to call a separate callback function elsewhere; just make sure the separate m-file is referenced properly for the uicontrol, uipanel, etc.

@Matt, Thanks for your response! Can you give me an example of how to reference the separate m-file properly for a uicontrol? Maybe you know some sample program that I could have a look at. By the way, I am using GUIDE to create the GUI.

How would you avoid using eval when you have to rename a variable using the string from a cell array and then save the new variable as a .mat file? Example:

[pathstr name] = fileparts(Cellarray{i});
exp1 = [name, '=varname'];
eval(exp1);
clear varname pathstr
exp2 = ['save ', name, '.mat ', name];
eval(exp2);

where Cellarray is a cell array that contains only the filename.ext (not the path) and varname is a matrix from a loop. Also, the function should not require user interaction other than the initial function call, hence the need for eval. Any suggestions would be much appreciated.

Roger,

The save part is easy:

save(filename, name)

The other bit, about making a variable with a name known only as a string, is more challenging. Often I would think there is just a better thing to do than that. However, you can use the command assignin to do something like this.

Doug

Thanks Doug! After seeing what assignin did, I can see how it can replace what I wrote earlier. The modified lines are now:

[pathstr NAME] = fileparts(Cellarray{i});
assignin('base', [NAME, 'string1'], VAR)
save([NAME, 'string2', '.mat'], NAME2)
clearvars -except Cellarray __ __ __

It just takes the filename.ext stored in Cellarray, and then the fileparts function is used to get the name of the i-th cell. Then assignin is used here to create a new variable NAME that has the same values as variable VAR.
Then that new variable created, called NAME2 (= [NAME, 'string1'] above), is saved as a .mat file. Then all variables are cleared except for Cellarray and the other necessary variables shown as ___. The name of both the new variable and the .mat file can be easily changed based on string operations; I only needed to add an extra string to the new variable name. Thanks again, and I hope others find this useful.

Roger: Use dynamic field names (Doug’s “.parens notation”) in conjunction with SAVE’s -struct flag.

% Create a string for the “variable name”
str = 'myvariable';

% Dynamic field names
mydata.(str) = magic(5);

% SAVE the structure
save mymatfile.mat -struct mydata;

% The previous step could be written using
% the function form of SAVE if the MAT-file
% name was stored in a variable:
%
% matfilename = 'mymatfile.mat';
% save(matfilename, '-struct', 'mydata');

% Check that the MAT-file contains the “variable”
whos -file mymatfile.mat

To avoid “poofing” the variable into the workspace when you LOAD, call LOAD with an output argument and refer to the variable using dynamic field names again.

mydata = load('mymatfile.mat');
str = 'myvariable';
y = mydata.(str);

To me, the worst code practice I can imagine is the use of function handles. I use MATLAB practically every day, and it has been my main tool since 1996. MATLAB was always very consistent in syntax – outputs to the left of =, function name, arguments in brackets. Other operators were also very clear and math-mind oriented. That all changed for me when function handles were introduced. The idea is great, but the syntax implementation is so awful and out of line relative to the rest of MATLAB that to this day I still don’t know how to use them properly. Part of it is me refusing to learn it, because I don’t want to accept that monster of ill-conceived syntax as part of my coding. The @, followed by brackets with a list of input variables, that don’t have to be there either but could, followed by a space!
followed by the function name (that doesn’t have to be there either but could), followed by inline code or not – just horrible, and it feels like someone was trying hard to make it as far from the good old consistent MATLAB logic as possible.

Hi Doug! For the last 6 years I did not really work with ML, but please, don’t cry: although my personal skill is out of date, I am still able to do something with a 6-year-old ML version. Could you please recommend me a person sufficiently experienced to tell me whether the following question is known and what the corresponding recommendations might be? Some 8 years ago I submitted my UPPSALATOR to the FEX. It solves numerically a boundary value integrodifferential problem in partial derivatives and a definite integral. I tried at that time to use the calculation power of ML to the fullest possible extent, so I replaced the whole problem with a system of many ordinary differential equations, which can be solved by different standard ODE solvers. This was achieved by a virtual Simpson integration in the relevant definite integral. The question which has bothered me since then is related to the fact that this integration has a certain error, which probably remains even after the ODE solver has formally reached its converging accuracy. I got this impression from the fact that the high stability of the final answer (error about 1e-6) was achieved only at very small spatial steps, less than 1e-3 in comparison to 1. I should appreciate any reaction, because it could save me remarkable effort in clarifying this question via numerical experiments. My email is vassili.pastushenko@gmail.com. Best regards to Cleve and the whole ML team, Vassili.

You don’t tweet anymore? Regards

Doug, if I have a large set of parameters shared by several functions, what is the best alternative to global? Thanks in advance!

Here’s a tiny tip for globals if you are concerned about naming conflicts: only ever have a *single* global variable for a given project.
That can then be a struct with many fields.

Often people use globals when they really just want a persistent variable. Use “persistent” for that.

@Pepe, I prefer passing them explicitly. One of the big dangers of globals is that variables get modified silently and you don’t know who did it. Overwhelmingly, the times I see globals used are when people do not understand how the scoping of variables works. They will start out each function with

global PARAM PARAM PARAM PARAM   (x 30)

So they just copy and paste the same block of globals everywhere. It gets really ugly. Aslak had some idea with large structures to limit the number of variables passed. Persistent is often good too. In UIs, getappdata and setappdata are good.

@Doug, Thanks Doug!! I’d like to avoid globals, but I have to write callback functions and I don’t have control over what the caller will pass as arguments. Any alternative?

@David, getappdata and setappdata.

Jayveer Thakoor replied on March 9th, 2010 at 19:00 UTC: 9 of 66 … that’s a big problem when you have 800MB sized variables! In such cases, global is extremely convenient – preallocate and update the same one all the time using logical indexing. Far more memory efficient, wouldn’t you agree? …

Jayveer, that’s where you would want to use a pointer.
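Editor's note: the next_function trick mentioned earlier in the thread is simply a closure over a counter, and the same pattern is the usual alternative to a global or persistent counter in most languages. Here is a minimal sketch of the idea in Go (the function and variable names here are my own, not from the thread):

```go
package main

import "fmt"

// nextFunc returns a closure that yields start, start+1, start+2, ...
// on successive calls -- the same idea as the commenter's next_function,
// which hands out consecutive figure numbers without any global state.
func nextFunc(start int) func() int {
	n := start - 1
	return func() int {
		n++
		return n
	}
}

func main() {
	next := nextFunc(10)
	fmt.Println(next()) // 10
	fmt.Println(next()) // 11
	fmt.Println(next()) // 12
}
```

Each call to nextFunc produces an independent counter, so two generators never interfere with each other, which is exactly what a shared global counter cannot guarantee.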
http://blogs.mathworks.com/videos/2010/03/08/top-10-matlab-code-practices-that-make-me-cry/
On Tue, Aug 05, 2008 at 11:32:48PM -0300, Henrique de Moraes Holschuh wrote:
> On Wed, 06 Aug 2008, Matthew Garrett wrote:
> > The 750ms delay is from thinkpad-acpi. I sent a patch to Henrique which
> > makes it go away, but I'm not entirely sure what the ACPI method
> > concerned is supposed to be doing. The opregion code won't currently run
> > until X is started because the drm layer requires X to be the foreground
> > vt before handling IRQs.
>
> Well, for what is it worth, thinkpad-acpi has a knob (brightness_mode) which
> can be used. Set it to CMOS mode (see docs). From what I recall, it should
> do what your patch does.

It doesn't seem to, no. I should have been clearer - the delay is in the
DSDT (not thinkpad-acpi itself), but there's a Thinkpad-specific ACPI call
that seems to be needed in order to delay it. Here's the patch again.

diff --git a/drivers/misc/thinkpad_acpi.c b/drivers/misc/thinkpad_acpi.c
index b596929..bbc45c8 100644
--- a/drivers/misc/thinkpad_acpi.c
+++ b/drivers/misc/thinkpad_acpi.c
@@ -899,6 +899,9 @@ static int __init tpacpi_check_std_acpi_brightness_support(void)
 	if (ACPI_SUCCESS(status) && bcl_levels > 2) {
 		tp_features.bright_acpimode = 1;
+		/* Set ACPI mode */
+		if (!acpi_evalf(hkey_handle, NULL, "PWMS", "vd", 0))
+			printk(TPACPI_INFO "Failed to claim backlight\n");
 		return (bcl_levels - 2);
 	}

--
Matthew Garrett | mjg59@srcf.ucam.org
http://lkml.org/lkml/2008/8/6/38
import "gopkg.in/webhelp.v1/whcache"

Package whcache provides a mechanism for per-request computation caching.

Sometimes you have a helper method that performs a computation or interacts with a datastore or remote resource, and that helper method gets called repeatedly. With simple context values, since the helper method is the most descendent frame in the stack, there isn't a context-specific place (besides maybe the session, which would be a bad choice) to cache context-specific values. With this cache helper, there now is.

The cache must be registered up the handler chain with Register, and then helper methods can use Set/Get/Remove to interact with the cache (if available). If no cache was registered, Set/Get/Remove will still work, but Get will never return a value.

Get returns previously stored key/value pairs from the context-specific cache if one is registered and the value is found, and returns nil otherwise.

Register installs a cache in the handler chain.

Remove removes any values stored with key from the context-specific cache, if possible.

Set stores the key/val pair in the context-specific cache, if possible.

Package whcache imports 5 packages and is imported by 1 package.
https://godoc.org/gopkg.in/webhelp.v1/whcache
Hey Wix team,

I have a table (table1) connected to a dataset (dataset1) inside a lightbox that I'm launching from a repeater. I'm trying to set the table to show only filtered rows based on a parameter passed to the lightbox. The parameter is a string, but in the database it is a referenced value. I tried filtering the dataset based on this parameter (didn't work), and also tried adding a filter to the dataset based on a simple text, which also didn't work. I also tried removing the connection between the table and the dataset and simply running a query on the collection, which also didn't work. All string data points transferred to the lightbox are working wonderfully and I'm able to set those to labels/texts.

Here is what I've done:

import wixData from 'wix-data';
import wixWindow from 'wix-window';

$w.onReady(() => {
    let receivedData = wixWindow.lightbox.getContext(); // to receive the information

    // ALL THESE WORKING WELL:
    $w("#text349").text = receivedData.data.test;
    $w("#text306").text = receivedData.data.courseType;
    $w("#text313").text = receivedData.data.dates;
    $w("#text348").text = receivedData.data.courseLength;
    $w("#text309").text = receivedData.data.days;
    $w("#text312").text = receivedData.data.startTime;
    $w("#button4").link = receivedData.data["link-Courses-title"];
    $w("#button2").link = receivedData.data.productUrl;
    $w("#button3").link = receivedData.data.productUrl;
    $w("#text368").text = receivedData.data.originalPrice;
    $w("#text428").text = receivedData.data.originalPrice;

    var filter = wixData.filter().contains('SessionCode', currentCourse.value);
    $w("#dataset1").setFilter(filter);
    $w("#table1").rows = $w("#dataset1").getItems();
    $w("#table1").refresh(); // Tried with or without refresh
});

You can either connect the dataset to the referenced collection and query it like you try to do today, or use include in order to get the referenced fields and then perform the query on the results.
How can you connect the dataset in the lightbox to the referenced collection? Is there a way to transfer the referenced collection from the main page to the lightbox?

Say you have collectionA, which holds your courses data. It has a reference field called SessionCode that references another collection – collectionB. On your lightbox, add a dataset and connect it to collectionB. When you get the data in the lightbox using getContext, create a filter on this dataset to get all the items that have the SessionCode you received.

That works. Thank you!

Shalom Ohad/Elad/All,

I need help passing data from a repeater populated by a query with include to a lightbox (the include part going to the lightbox). What is a statement instead of wixWindow.openLightbox(lightboxName, dataObj);?

You use openLightbox, and your dataObj would then be the data you want to pass to the lightbox. Then when your lightbox opens, in your lightbox $w.onReady() function you need to have it get the data passed using the getContext() function. Add this code to your lightbox page code.

Hope this helps

Thanks for the reply. Here is a sample of code that doesn't work. The target is to open different lightboxes (with data passing) for any related item in a repeater on a dynamic page. Currently even opening a lightbox with lightboxName as a parameter doesn't work. Maybe it is a scope/context issue? Please help, this is urgent.

Do you have a link to the page that you are having problems with? Several observations on the code posted (which is why looking at the page code will be more helpful):

1. Your $w('#aiRepeater').onItemReady handler shouldn't be declared inside of another function. It is best to declare it in the $w.onReady() function. It will run any time the data array changes OR the dataset is loaded or filtered.

2. You seem to be using wix-data AND wix-dataset together. Make sure you know what you are doing, as it is possible that using both mechanisms to access your data collection can lead to unexpected results.
Especially if you have elements on your page connected to the dataset using data binding. Is $w('#aiRepeater') connected to a dataset? Is it $w("#aDataset"), or do you ONLY populate data in the repeater using the .data property?

3. The $w('#adoptButton').onClick() should be declared in the $w.onReady() function. The event argument delivers a new context value that you use to determine which repeater element is being used for the button press. This is what the $w.at() function is there for. If you are familiar with the DOM, think of the repeater element as one of several attached to a common repeater node, e.g.,

document.repeater.item1.adoptButton
document.repeater.item2.adoptButton
etc.

When you click on a button, the Wix system figures out that you need element information scoped to a specific item. So getting the $w.at() value essentially provides you the repeater item scope you need to get the correct button value. For example, if the adoptButton is clicked on the repeater view for item2, then this code will use the event context to point $repeaterItem to all of the elements connected to the document.repeater.item2 node. If the lightbox name was used as the label for the adoptButton, then the lightbox name would be set from the button label. If you have a text box (this could be visible or hidden) that contained the keeper information (assuming this is a text value), then this would create the lightbox data from that element value. Then the lightbox would be launched using these arguments.

I hope this makes sense. Bottom line: you probably need to re-familiarize yourself with event handlers and how datasets work with repeaters, and you will probably end up with less code to do what you want :-)

THIS! This solved my problem! I suspected it might be tied to the $w.at command, but I wasn't too sure how to properly implement it. Thanks!

The site is not published yet. How can I copy a link for you? {Person ID}

The page is dynamic, with a dataset for authors.
Related artworks (a repeater) for each author are populated from a query. The repeater is not connected to a dataset. I'm grateful for your help.

The best way to help would be as an authorized contributor. If you are comfortable giving me that access, I can look at the site directly in the editor for you. Check my profile for how to contact me.
https://www.wix.com/corvid/forum/community-discussion/dynamic-lightbox-from-repeater