Mersenne primes are prime numbers of the form 2^p − 1. It turns out that if 2^p − 1 is prime, then so is p; the requirement that p is prime is a theorem, not part of the definition. So far 51 Mersenne primes have been discovered [1]. Maybe that's all there are, but it is conjectured that there are infinitely many Mersenne primes. In fact, it has been conjectured that as x increases, the number of primes p ≤ x for which 2^p − 1 is prime is asymptotically e^γ log x / log 2, where γ is the Euler–Mascheroni constant. For a heuristic derivation of this conjecture, see Conjecture 3.20 in Not Always Buried Deep.

How does the actual number of Mersenne primes compare to the number predicted by the conjecture? We'll construct a plot below using Python. Note that the conjecture is asymptotic, and so it could make poor predictions for now and still be true for much larger numbers. But it appears to make fairly good predictions over the range where we have discovered Mersenne primes.

import numpy as np
import matplotlib.pyplot as plt

# p's for which 2^p - 1 is prime.
# See
ps = [2, 3, 5, ... , 82589933]

# x has 200 points from 10^1 to 10^8,
# spaced evenly on a logarithmic scale
x = np.logspace(1, 8, 200)

# number of p's less than x such that 2^p - 1 is prime
actual = [np.searchsorted(ps, t) for t in x]

exp_gamma = np.exp(0.5772156649)
predicted = [exp_gamma*np.log2(t) for t in x]

plt.plot(x, actual)
plt.plot(x, predicted, "--")
plt.xscale("log")
plt.xlabel("p")
plt.ylabel(r"Mersenne primes $\leq 2^p-1$")
plt.legend(["actual", "predicted"])

Related posts

[1] Fifty-one Mersenne primes have been verified. But these may not be the smallest Mersenne primes: it has not yet been verified that there are no Mersenne primes yet to be discovered between the 47th and 51st known ones. The plot in this post assumes the known Mersenne primes are consecutive, and so it is speculative toward the right end.
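As a quick numeric spot check of the conjectured formula (this calculation is not part of the original post; it just evaluates e^γ log x / log 2 at the right-hand end of the plot, where the known count is 51):

```python
import math

# Euler-Mascheroni constant, as used in the plotting code above
gamma = 0.5772156649

# Conjectured count of primes p <= x with 2^p - 1 prime:
# e^gamma * log(x) / log(2) = e^gamma * log2(x)
x = 1e8
predicted = math.exp(gamma) * math.log2(x)
print(round(predicted, 1))  # 47.3, versus 51 known Mersenne primes
```

So at x = 10^8 the conjecture predicts about 47 Mersenne prime exponents, against 51 actually known — close, though as noted the asymptotic nature of the conjecture means this agreement is suggestive rather than conclusive.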
https://www.johndcook.com/blog/2019/09/16/distribution-of-mersenne-primes/
CC-MAIN-2019-51
refinedweb
346
76.32
the download link for the python script wont work is there a alternative link?

Hello, where can I get those jumper wires?

You can get them from ebay. Get a big cheap pack from China.

No they’re better from the maker shed. the delux jumper wire kit. they really work well.

The download worked for me. I have mirrored the script over on Github:

Can’t get the pyserial module to work. any tips?

I need help I keep on getting a Syntax error any advice.

can I use a duemenilove for this? dueminalove*

I can’t get the pyserial-2.6 to work. It says “Could not find path specified” (or something like that) in the cmd prompt. And now the script won’t work because it says it can’t find a serial module. Where is the download for the library?

Will this work for a macbook? Are there any tutorials for this on the Mac OS?

I’m on step 4c. When I press f5 to run NunchukMouse_release, I get the message “Traceback (most recent call last): File “C:\Users\Joey\Downloads\NunchukMouse_release.py”, line 2, in import math, string, time, serial, win32api, win32con ImportError: No module named serial” Can someone please help? I’m so close! Thanks, -Joey

I just get “Please set up the arduino port”

Guys pay attention to the lines of code. Instead of “cd /d c:\pyserial-2.6” type in “cd /d C:\pyserial-2.6”. Also the part after it doesn’t make sense in the command prompt. The author needs to rewrite that. I think the second part may be “C:\Python27\python setup.py install”, just a guess don’t quote me on it.

Figured it out. It’s “setup.py install”. Like I said he really needs to rewrite that.

Great article but needs a MEGA update as most of the software is very outdated and some of the links are pooched

urgent reply please!!!!! i pressed f5 but i am unable to figure out what to do after that. can anyone help????? i am just a beginner in arduino and python lanuages

I’m hoping someone here can help, it seems as though I can’t connect to or read the serial port. I’m getting the following error: Traceback (most recent call last): File “C:\Users\Patrick\Pypart.py”, line 2, in import math, string, time, serial, win32api, win32con ImportError: No module named serial I’m not using IDLE because it gave me some weird issues, so I switched over to Sublime Text 2, which I’ve seen work, so I don’t think that’s the issue.
https://makezine.com/projects/wii-nunchuk-mouse-2/
CC-MAIN-2019-47
refinedweb
440
85.99
On Tue, Mar 25, 2008 at 07:54:10AM +0000, Bastian Blank wrote:
> On Sat, Mar 22, 2008 at 10:02:22PM +0100, Matthias Klose wrote:
> > amd64 and i386 side note: the gcc-4.3 4.3.0-2 upload has a patch
> > reenabling the cld instruction when stringops are used; this patch is
> > neither in the gcc-4_3-branch nor in the trunk.
>
> I discussed with doko a bit and have to propose another solution. This
> solution has a prerequisite: gcc must not generate string ops without
> function calls.
>
> To understand this solution you have to understand async signal safety
> of functions. A signal handler must not call any unsafe function. On a
> Debian system this list is in signal(7). All the functions are in the
> libc and the libaio.
>
> The following proposal uses this and should fix any program which does
> not use unsafe functions in their signal handlers:
> - Add the cld patch to gcc-4.3, the option is disabled by default.
> - Build glibc (and maybe libaio) with -mcld.
>
> Alternative:
> - Patch an explicit cld to the beginning of each of the safe functions
>   in glibc and libaio.

Problem is that memcpy/memmove/memset probably generate rep stos; in the end, I believe memset/memcpy/memmove to be async signal safe, and those are inlined fully in many cases. E.g. it would not be horrible to see SIGALRM handlers that basically do:

void sigalrm_handler(int signo)
{
#if 1
    memset(&some_global_struct, 0, sizeof(some_global_struct));
#else
    /* or alternatively, generating probably the same code, in C99 */
    some_global_struct = (some_type){ .some_member = 0 };
#endif
}

So unless I'm mistaken and I miss something obvious, your proposal doesn't work.

And fwiw glibc will be built with gcc-4.2 on i386/amd64 to avoid any problems, as the toolchain freeze is RSN and we can't afford a broken toolchain right now.

--
·O· Pierre Habouzit
··O madcoder@debian.org
OOO

Attachment: pgpUo6_chg9Cb.pgp
Description: PGP signature
https://lists.debian.org/debian-release/2008/03/msg00291.html
CC-MAIN-2018-13
refinedweb
325
65.01
Hi, I'm looking for a little help. I'm trying to create a script that can take a home page as an argument, print a message saying something about the site, and then validate whether or not this is a valid URL. I'm relatively new to Python and haven't used arguments very much, so I would like some help/advice. I have some basic code which I've completed but I'm unsure of how to expand on it. If someone could help, or point me in the direction of where I would find information about arguments and using them with URLs, that would be great.

import sys

def printWebsite(URL):
    URL = raw_input("Enter website to be checked")
    site = raw_input
    print "Valid URL"

def main():
    print "Website "
    printWebsite(sys.argv)

if __name__ == '__main__':
    main()

Any advice would be greatly appreciated.
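A minimal sketch of the kind of script being asked about, using only the standard library. This targets Python 3 rather than the Python 2 style of the question, and names like check_url are hypothetical; "valid" here means only that the argument parses as an absolute http(s) URL, not that the site exists:

```python
import sys
from urllib.parse import urlparse

def check_url(url):
    """Return True if url looks like an absolute HTTP(S) URL."""
    parts = urlparse(url)
    # A plausible web URL has an http/https scheme and a host part
    return parts.scheme in ("http", "https") and bool(parts.netloc)

def main():
    # sys.argv[0] is the script name; the URL is the first real argument
    if len(sys.argv) < 2:
        print("usage: checker.py <url>")
        return
    url = sys.argv[1]
    print("Checking website:", url)
    print("Valid URL" if check_url(url) else "Not a valid URL")

if __name__ == "__main__":
    main()
```

Note that the original code passes the whole sys.argv list to the function; indexing sys.argv[1] (as above) is what extracts the actual argument. Checking parts.scheme and parts.netloc is a syntactic test only; actually fetching the page (e.g. with urllib.request) would be needed to confirm the site responds.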
https://www.daniweb.com/programming/software-development/threads/468647/using-python-arguments
CC-MAIN-2017-34
refinedweb
144
69.01
As I spend more and more time in the Vue.js ecosystem, I'm developing a much greater appreciation for the time the core team has spent developing tools that make managing the project configuration easier. That said, a big part of managing a Vue.js project still requires a lot of Webpack knowledge… more than I have, unfortunately.

Today I was working on a sort of micro-application that would be embedded on various web pages that I really have no control over. More specifically, it's a web app that enables consent management for the CCPA privacy regulations, and it handles conditionally popping open a small UI for California visitors and then providing the opt-out signals to upstream code like prebid.js. This app needs to load on all sorts of third-party web pages, and ideally the functionality would be enabled by simply including a JavaScript file in the header of the page before the rest of the site's advertising technology stack loads. This means the Vue.js app doesn't really own the page, and it really needed to load as cleanly as possible from a single JavaScript file.

This took a little more wrangling than I'd expected, and much like my deeper dives into doing things like adding Snap.svg to Vue and Nuxt projects, it required a bit of a deeper dive into Webpack and the configuration chaining behavior of the newer vue-cli generated apps.

Injecting your own mount DIV

One of the things your Vue.js project always needs is a place to mount itself on the web page. Generally, your project will have some sort of HTML template that hosts it, and often your whole site may be the actual Vue.js app, so that simple HTML file is all you need to deploy. More often than not, however, I'm loading Vue.js apps onto existing pages, and that involves just adding a <div> element somewhere with the right ID in an existing layout. For this project, I had zero control over the host pages, so I had to dynamically inject a mount point.
To do this, I had to modify the main.js file in my Vue project to do something like this…

import Vue from 'vue'
import App from './App.vue'

let gMyApp = undefined;

document.addEventListener("DOMContentLoaded", () => {
  // Inject a private mount point div to display UI...
  let body = document.getElementsByTagName('body')[0];
  body.innerHTML = body.innerHTML + "<div id=myApp_mount />";

  gMyApp = new Vue({
    render: h => h(App),
  }).$mount('#myApp_mount')
});

Inspecting the generated webpack configuration redux

In my previous article, I talked about inspecting the vue-cli generated Webpack configuration by using 'vue inspect' to eject a copy of the configuration, and then modifying it using webpack-chain in the vue.config.js file. One detail that I missed in this earlier exploration was the need to eject a production configuration. To do this, you use the following command in your shell…

vue inspect --mode production > ejected.js

Without the mode command-line option, what you're seeing is your development configuration, which can run a whole different chain of loaders and perform other Webpack mischief you're not expecting.

vendors.js must go!

But back to my original problem of getting a single bundle as output from the project. By default, the Vue.js CLI will kick out separate bundles for your code, vendor libraries and any extracted CSS. Trying to hand this off to a third party and explaining that they needed to include a dog's breakfast of separate files on all of their site pages was a deal breaker, so figuring out how to merge all of these files in the production build was important. The easy first step was to just stop Webpack from splitting the JavaScript into separate chunks. That's actually Webpack's default behavior, so all we need to do is get rid of some of the configuration passed into the split chunks plugin by vue-cli's defaults.
Using chain-webpack in vue.config.js, that looks something like this…

module.exports = {
  chainWebpack: config => {
    // Disable splitChunks plugin, all the code goes into one bundle.
    config.optimization.splitChunks().clear();
  }
}

Combining CSS and JavaScript in one Webpack bundle

That got the project down to a single emitted JavaScript file and a single CSS bundle. And that's where things got interesting. To get the CSS to reside with the JavaScript code, it needed to be packaged and injected. That meant undoing the extraction of the CSS into its separate file and then separately using style-loader to sneak it into the DOM when the JavaScript was executed on the page. The first step was disabling the existing extraction process handled by the extract-css-loader. Again, inside vue.config.js in the chainWebpack:config function…

config.module.rule('css').oneOf('vue').uses.delete('extract-css-loader');

config.module
  .rule('css')
  .oneOf('vue')
  .use('style-loader')
  .before('css-loader')
  .loader('style-loader')
  .end();

With that in place, the project kicked out a single .js file that had everything in it… vendor and project code, plus the css. And, the css actually got injected when the script ran. Perfect!

Bonus Episode! Renaming the Webpack Output File

Perhaps a little wordy, but it gets the job done.

Putting it all together in one webpack chain configuration

module.exports = {
  chainWebpack: config => {
    // Disable splitChunks plugin, all the code goes into one bundle.
    config.optimization.splitChunks().clear();

    // Disable the CSS extraction into a separate file.
    config.module.rule('css').oneOf('vue').uses.delete('extract-css-loader');

    // Take the CSS from the bundle and inject it in the DOM when
    // the page loads...
    config.module
      .rule('css')
      .oneOf('vue')
      .use('style-loader')
      .before('css-loader')
      .loader('style-loader')
      .end();
  }
}
One thought on “Vue-CLI and Single File Webpack Bundles”

Thanks for the article, it did work for me for vendor and css, but for the other chunks it didn’t work. Although I learned a lot with this article, which helped me to better understand the configuration in Vue. In the end I didn’t need the:

config.optimization.splitChunks().clear();

I just needed to set maxChunks to 1 in vue.config.js:

plugins: [
  new webpack.optimize.LimitChunkCountPlugin({
    maxChunks: 1
  })
]

Thanks again for your article.
https://jamesscheller.com/vue-cli-and-single-file-webpack-bundles/
CC-MAIN-2022-40
refinedweb
1,052
55.34
- 1. Previous Content
- 2. Aims of This Blog #3
- 3. The Seattle University Training
- 4. Digilent Online Training
- 4.1 Unit 1: Microprocessor I/O
- 4.2 Unit 2: Elements of Real-Time Systems
- 4.3 Unit 3: Parallel I/O and Handshaking for LCD Control
- 4.4 Unit 4: Communications, part 1: Serial Protocol & part 2: Asynchronous Serial Protocols
- 4.5 Unit 5: IrDA Communications Protocols
- 4.6 Unit 6: Analog I/O and Process Control
- 4.7 Unit 7: Audio Signal Processing
- 5. Summaries
- 6. What Next ?
- 7. Analog Discovery 2
- 8. Resources
- 9. Buying the Board

1. Previous Content

This blog details experimentation and investigation of the Digilent BASYS MX3 board as part of the BASYS MX3 Trainer Board roadtest, and I will be writing a roadtest report on this product in the following week based on these more detailed blogs. My previous blog posts on this board are:

- Microchip PIC Overview: for BASYS MX3 Roadtest
- Digilent BASYS MX3: #1 Software Setup and Development Cycle
- Digilent BASYS MX3: #2 The LibPack Demo Projects

2. Aims of This Blog #3

For this blog #3 I would like to work through the online training material available for the board.

3. The Seattle University Training

The online tutorial covers five topics, which are:

- Seven Segment Display: demonstrates how to use freeRTOS tasks, queues and timers to monitor a button on the Basys MX3 and update the built-in seven-segment display.
- Convective Heat Transfer Coefficient: demonstrates how the Basys MX3 might be used to collect temperature data in a heat transfer experiment. The activity also demonstrates how to use an SPI device and how to stream data from the Basys MX3 to a PC.
- Natural Frequency of a Cantilevered Beam: demonstrates how the Basys MX3 might be used to collect data in a vibration experiment. The activity also demonstrates how to store and retrieve data on the Basys MX3's built-in 4MB flash memory.
- Position Control of a DC Motor: demonstrates how the Basys MX3 might be used in an experiment to control a servomotor.
The activity also demonstrates how to use PWM and hardware interrupts on the PIC32 and how to use the built-in H-Bridge on the Basys MX3.
- Frequency of an Audio Signal: demonstrates how to use freeRTOS and hardware timers to control the sample rate. The activity also demonstrates how to use the PIC32 DSP Library FFT function.

Each of these five topics is laid out using the same sequence of sub-tabs, as shown below, and these help keep the reader from getting lost and confused. IMO that is a really good approach.

- Overview
- Equipment
- Code
- Instructions
- Lab Project
- Assignment
- Demonstration

3.1 Activity 1: Seven Segment Display (SSD)

My first impressions as I read the Overview tab were very good, as there are some great explanations of the project requirements such as how to prevent multiplexed display flicker and how often the button should be polled to create a responsive project. I found the Equipment tab to also give some really useful information detailing the basics of multiplexed displays and seven-segment display layouts. This tab then starts to show how the LibPack abstracts some operations to a higher level for ease of coding, e.g. LED_Toggle(0) to toggle the state of LED number 0. This level of abstraction will appear very familiar to Arduino users.

The Code tab lists the main function, and it could be tempting to create a new project and paste that code into a newly created main.c. However the user would then need to add all the other linked-in libraries and ensure the paths were correct. Luckily I held off and read the Instructions tab. This latter tab details downloading the complete project as a zipped package. Downloading and unzipping that allowed MPLAB X IDE to seamlessly pick up the new SSD.X project. The instructions were very complete and I was pleased with the quality. The following is a screenshot of the SSD project as loaded into MPLAB X IDE v5.25, and I have expanded out the folders to show you the types of file they contain.
This structure is different to the simple LibPack demonstrations in the previous blog Digilent BASYS MX3: #2 The LibPack Demo Projects, most notably by the inclusion of the RTOS folder and BasysMX3setup.h.

I then clicked 'Clean and Build' and 'Make and Program' to complete the project and download it to the BASYS MX3 board... and it started to run. The SSD shows '0000', the LCD display is blank and the LED numbered LD0 flashes with 1 second on and 1 second off. Pressing the 'down' button results in the SSD incrementing on the push (rather than release) of the button.

The training material then challenges the reader to modify the code. This is where the RTOS fun starts, and without much explanation. That isn't too much of an issue as the code is well written and easy to read through. Straight away the code creates some entities to represent the tasks that are to be undertaken.

I started modifying the ssdUpdate function to release each 7-segment element for longer than the recommended 4 ms (16 ms total), and at 8 ms on each (32 ms total) I noticed the display had started to flicker.

// refresh the individual digits and release the task for 4ms
ssdSetDigit(0, digit1);
vTaskDelay(4);
ssdSetDigit(1, digit10);
vTaskDelay(4);
ssdSetDigit(2, digit100);
vTaskDelay(4);
ssdSetDigit(3, digit1000);
vTaskDelay(4);

Looking at the code, it is the trackButton function that increments the counter based on the way the push button operates. That was quite easy to change as the code has been written very neatly. After changing the code as shown below, the SSD incremented only when the down button was released. What is also interesting is that because this is running on an RTOS, holding the button down does not prevent the LED being blinked at 1 s, something that often happens if the code blocks whilst waiting for the button release.
// Modified 14rhb: if there was a change in state of the button (transitioned from down to up)
// add one to the counter and update the SSD
if (buttonD == UP && buttonD != prevbuttonD)
{
    count++;
    if (count > 9999) count = 0;                       // reset the counter
    xQueueSendToBack(SSDqueue, &count, portMAX_DELAY); // update the SS Display
}

The timer to flash the LED is initially set up to fire every 1000 ms; this value can easily be changed to 300 to make the ticks every 300 ms, although reducing it further to 100 ms is much more noticeable, and the push button increment functionality still works.

xTimer1000 = xTimerCreate((const signed char *) "Blinky",
        100,        // The timer period (originally 1000 ticks = 1 second)
        pdTRUE,     // This is an auto-reload timer, so xAutoReload is set to pdTRUE.
        (void *) 0, // The ID is not used, so can be set to anything.
        Timer1000   // The callback function that inspects the status of all the other tasks.
        );

That was a very nice and simple project using the RTOS, and it simply showed the benefits of such an approach, although the code is more complex and would be larger than a standard non-RTOS design. I don't plan to actually undertake each of the assignments listed as those are learning units, and what I really want is to make my own project as part of this roadtest. However what is good is that the last Demonstration tab shows a video of the completed assignment along with their suggested code... for those trainees who get stuck. From this first training activity I am impressed with this Seattle University course and also with the ease of functionality under the RTOS (although it is at first glance quite complex).

3.2 Activity 2: Convective Heat Transfer Coefficient

Unfortunately this activity requires the use of the Pmod TC1, which I don't currently have. But looking through the Equipment tab I see mention of using the LibPack SPI functions and wonder if I could modify this to at least read from one of my other Pmod devices.
I downloaded the zipped project files anyway for some experimentation. My first task was to build and download, just to see if it actually worked without the Pmod unit (the Pmod I do have is the CMPS2 compass unit). It is definitely a hot day in the office! What it is showing me is that I could likely change this code to read my compass unit values. After some more reading I find my Pmod unit is I2C and this activity 2 example uses an SPI-based Pmod. There would be a lot of work to change this across, and therefore I will be content to just assess the printed material rather than modify the code at this stage. However I am noting this as it might be a good contender to modify for a final project, as a CSV file is also generated and it will be interesting to view it in Excel or another spreadsheet application.

Activity 2 again ends with a Demonstration tab that shows a video of the final experimentation and example code that answers the lab questions. This is another well-presented example which almost works straight away and which shows how the RTOS is utilised for SPI interaction, how LCD displays are written to and how the UART can be used.

3.3 Activity 3: Natural Frequency of a Cantilevered Beam

Unfortunately this activity also relies on a sensor that I do not have (the ADXL335), although I could likely find a substitute for the aluminium beam. However I still looked through the code to see what it achieved, as I will then be able to re-purpose sections if required for my own projects. I downloaded the VibratingBeam.X.xzip project so I could search and explore the code directly in MPLAB X IDE.

The Overview tab explains the basics of this activity, whereby the analog Z-axis signal from an accelerometer IC (attached to a vibrating beam) is fed to the BASYS MX3 analog-to-digital converter. This digital representation of the vibration is then processed using a Fast Fourier Transform (FFT) library function to reveal the underlying fundamental frequency. This is really interesting stuff!
[photo source: ]

In a similar way to the preceding two RTOS activities, this activity has tasks created. These are:

- LCD Update
- calculate FFT
- collect Data
- output Data
- track the Buttons

What I have noticed is that the functions listed in MPLAB X IDE under the RTOS folder are the same for each activity, meaning they are pulled into each project generically rather than needing to be created and added manually... this is good.

This activity will be useful for reading values from any analog sensor at a reasonable rate (200 Hz in this example) and displaying the frequency. For example the microphone could likely be utilised as the source and the underlying main frequency determined. A guitar tuner also springs to mind among the possible uses. The power of the FFT is realised thanks to the main PIC32MX device's larger amount of RAM and the RTOS, compared with the lighter-weight 8-bit devices I more often use. I looked forward to the Demonstration tab on this activity to see the video of the project running as intended, and I wasn't disappointed.

3.4 Activity 4: Position Control of a DC Motor

As I started to look through this activity I realised it was another one that I couldn't complete, as it required a DC motor with a rotary encoder. As in previous activities I decided to download and explore the project code and the Seattle University instructions to see what I could learn.

[photo source: ]

This activity makes use of the BASYS MX3's ADC capability and the inbuilt H-bridge DC motor drive. It allows the user to adjust the final position of the motor and the overshoot in this control system. As previously seen, several instances of xTaskCreate are called to make tasks that run together on the RTOS system. Each of these tasks gets a priority number, with the button pushes and LCD remaining lowest whilst the more critical motor positioning is higher priority.
These can be seen in the MPLAB X IDE screenshot below:

As I explore the code further it does start to get quite complex, in a similar way to how I found many of the ARM-based demo boards which used some form of RTOS. There is a link at the end of this blog on RTOS, and I delved into that literature a few times, which helped my understanding to some extent.

For anyone like me who cannot build this activity, the Seattle University website has some fantastic interactive demos under the Assignment tab. These allow the user to adjust the proportional gain and then command the motor to a position; the higher the gain, the more overshoot can be seen. If you are interested in motor operation in general this page itself is a great read, regardless of any interest in MCUs!

3.5 Activity 5: Frequency of an Audio Signal

I downloaded and unzipped the project code for this activity and loaded the project into MPLAB X IDE. The project contains many of the LibPack headers, and therefore I know the project will make use of these peripherals. The RTOS folder is also populated, and therefore I know this activity will make use of those functions to create a real-time system. However the starting point to investigate is under main.c.

As a real-time system the code sets up two tasks:

- analyseAudio
- ssdUpdate

These are the two main functions that are called as the scheduler determines. The analyseAudio task collects data points from the microphone, determines which frequency bin has the greatest magnitude and passes that to the SSD for display. The ssdUpdate task does exactly that and displays the data on the 7-segment display at a refresh rate that eliminates flicker. There are three sampling schemes to choose from:

- Scheme 1: 1000 Hz
- Scheme 2: 4000 Hz
- Scheme 3: 16000 Hz

The latter uses an ISR for the data processing, and I believe whilst in there it causes the SSD to flicker. I switched back to scheme 1 for my tests.
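The "greatest magnitude frequency bin" idea that analyseAudio implements can be sketched in a few lines. This is a Python/NumPy illustration rather than the activity's PIC32 DSP library code, and the sample rate and test tone are made-up values chosen to match scheme 1:

```python
import numpy as np

# Hypothetical parameters, not the activity's actual settings:
# 1000 Hz sample rate (scheme 1) and a 110 Hz test tone.
fs = 1000          # samples per second
n = 1000           # number of samples collected
t = np.arange(n) / fs
samples = np.sin(2 * np.pi * 110 * t)

# FFT of the real-valued signal; bin k corresponds to k * fs / n Hz
spectrum = np.abs(np.fft.rfft(samples))
peak_bin = np.argmax(spectrum[1:]) + 1   # skip the DC bin
frequency = peak_bin * fs / n
print(frequency)   # 110.0
```

With n samples at fs Hz the bin spacing is fs/n, so this scheme-1-like setup resolves frequencies to 1 Hz; the higher-rate schemes trade finer time resolution for a wider frequency range, which is presumably why the 16 kHz scheme needs an ISR.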
The Javascript on the Seattle University website allows the user to play test tones that can check the board's operation. Below is a short video showing the Digilent BASYS MX3 board running the FFT.x demo:

Note to myself: this FFT function could be used to make my Christmas LED lights flash directly to music without the need to choreograph them in sequencing software, or to make a guitar tuner.

3.6 Resources

If you missed my point in earlier blog posts, this Seattle University website contains some great material relating to installing the MPLAB X IDE software and XC32 compiler.

4. Digilent Online Training

The online Digilent tutorial covers seven work units, although unit number four is split into two parts. The units are:

- Unit 1: Microprocessor I/O
- Unit 2: Elements of Real-Time Systems
- Unit 3: Parallel I/O and Handshaking for LCD Control
- Unit 4: part 1, communications - Serial Protocol
- Unit 4: part 2, communications - Asynchronous Serial Protocols
- Unit 5: IrDA Communications Protocols
- Unit 6: Analog I/O and Process Control
- Unit 7: Audio Signal Processing

4.1 Unit 1: Microprocessor I/O

This first unit contains many hyperlinks in the initial discussions, and I'm divided as to whether that is a good thing or might cause a novice to end up reading too much background material. Perhaps a caveat early on could caution the reader against clicking on every single hyperlink unless they really do need to read up on that topic. This unit is a fairly random collection of articles IMO, but there were a lot of really useful facts included and it is therefore a very good read. It is also very interesting for anyone interested in microcontrollers or software design.
I think it is a pity that Digilent don't take the newcomer along the route to use MPLAB X IDE rather than pass them off to the Microchip website, as upon clicking that link the user is faced with a completely different layout and interaction. At the same time, I can see it would ensure the reader gets instructions on the latest versions of the IDE rather than perhaps outdated documentation.

4.1.1 Appendix A: Starting a New MPLAB X Project

Select Tool: Licensed Debugger MCU Alpha One

Click on "Window" → "PIC Memory Views"/"Target Memory Views" → "Configuration Bits". Set the options as shown in Listing B.1 in the Appendix. This part seems quite complex for a newcomer, although it does show how the configuration bits can be selected via the drop-down tool rather than having to be looked up in the data sheets. Click on "Generate Source Code to Output" at the bottom of the screen.

// PIC32MX370F512L Configuration Bit Settings
// 'C' source line config statements

// DEVCFG3
#pragma config USERID = 0xFFFF      // Enter Hexadecimal value
#pragma config WINDIS = OFF         // Watchdog Timer Window Enable (Watchdog Timer is in Non-Window Mode)
#pragma config FWDTEN = OFF         // Watchdog Timer Enable (WDT Disabled (SWDTEN Bit Controls))
#pragma config FWDTWINSZ = WINSZ_25 // Watchdog Timer Window Size (Window Size is 25%)
#pragma config PWP = OFF            // Program Flash Write Protect (Disable)
#pragma config BWP = OFF            // Boot Flash Write Protect bit (Protection Disabled)
#pragma config CP = OFF             // Code Protect (Protection Disabled)

// #pragma config statements should precede project file includes.
// Use project enums instead of #define for ON and OFF.

#include <xc.h>

Once a config_bits.h file is generated, it can be copied to other projects without needing to complete steps a through j. That's nice to know.

Right-click on Source Files and select "New" → "C Main File..". Name this file "main" and set the extension to "c", so it will be "main.c".
It is under New/Other.

4.1.2 Appendix B
4.1.3 Appendix C
4.1.4 Appendix D
4.1.5 Appendix E

4.1.6 Lab 1a

// Lab 1a for Element14 Community Digilent BASYS MX3 Roadtest
// by 14rhb
int main(int argc, char** argv)
{
    // Initialization code goes here

    // Set the PORT D TRIS register to set the bits for LED8_R, LED8_G,
    // and LED8_B as digital outputs.
    TRISDbits.TRISD3 = 0;   // Set Port D3, blue LED as output
    ANSELDbits.ANSD3 = 0;   // Disable analog input
    TRISDbits.TRISD12 = 0;  // Set Port D12, green LED as output
    //ANSELDbits.ANS12 = 0; // D12 does not have analog selection
    TRISDbits.TRISD2 = 0;   // Set Port D2, red LED as output
    ANSELDbits.ANSD2 = 0;   // Disable analog input

    // Set the PORT F TRIS register to set the bits SW0 through SW2 as
    // digital inputs: red = SW0, green = SW1 and blue = SW2.
    // Port F doesn't seem to have analog signals and therefore no
    // ANSELFbits; source Table 12-11 in the datasheet.
    TRISFbits.TRISF3 = 1;   // Set Port RF3, SW0 = red as input
    //ANSELFbits.ANSF3 = 0;
    TRISFbits.TRISF5 = 1;   // Set Port RF5, SW1 = green as input
    //ANSELFbits.ANSF5 = 0;
    TRISFbits.TRISF4 = 1;   // Set Port RF4, SW2 = blue as input
    //ANSELFbits.ANSF4 = 0;

    LATDbits.LATD2 = 0;
    LATDbits.LATD12 = 0;
    LATDbits.LATD3 = 0;

    while (1)
    {
        // User code in infinite loop
        LATDbits.LATD2 = PORTFbits.RF3;
        LATDbits.LATD12 = PORTFbits.RF5;
        LATDbits.LATD3 = PORTFbits.RF4;
    }

    return (EXIT_FAILURE); // Failure if code executes this line of code.
} // End of main.c

In future projects I need to include the hardware.h or macros to abstract away from the use of ports and bit numbers, as that can become tedious and error prone.
4.1.7 Lab 1b

- Down Add
- Centre Subtract
- Right Multiply
- Up Divide
- Left Clear

/*
 * File: main.c
 * Author: 14rhb
 *
 * Created on 25 October 2019, 16:16
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "config_bits.h"
//#include "hardware.h"
#include "../BASYS_MX3_Lib/Basys-MX3-library-master/LibPack/LibPack.X/ssd.h"
#include "../BASYS_MX3_Lib/Basys-MX3-library-master/LibPack/LibPack.X/lcd.h"
#include "../BASYS_MX3_Lib/Basys-MX3-library-master/LibPack/LibPack.X/utils.h"

int testPushbuttons(int);
void displayAnswer(unsigned int);
void calculateAnswer(int, int, int);
int readSlideSwitchesX(void);
int readSlideSwitchesY(void);

int main(int argc, char** argv)
{
    unsigned int x = 0;
    unsigned int y = 0;
    int operation = 0;
    unsigned int answer = 0;

    // Set the 7-segment display up - luckily this is in section 7.1 of the BASYS MX3 user guide
    SSD_Init();

    // Pushbutton setup

    // Set the registers for slide switches
    TRISFbits.TRISF3 = 1;   // Set Port SW0 as input
    TRISFbits.TRISF5 = 1;   // Set Port SW1 as input
    TRISFbits.TRISF4 = 1;   // Set Port SW2 as input
    TRISDbits.TRISD15 = 1;  // Set SW3 as input
    TRISDbits.TRISD14 = 1;  // Set SW4 as input
    TRISBbits.TRISB11 = 1;  // Set SW5 as input
    ANSELBbits.ANSB11 = 0;
    TRISBbits.TRISB10 = 1;  // Set SW6 as input
    ANSELBbits.ANSB10 = 0;
    TRISBbits.TRISB9 = 1;   // Set SW7 as input
    ANSELBbits.ANSB9 = 0;

    LCD_Init();
    char message[] = "Hello, welcome to the Element14 Community Roadtest of the Digilent BASYS MX3 Board...lab 1b demo ";
    char snippet[16];
    int i = 0;
    for (; i < 90; i++) {
        memset(snippet, '\0', sizeof(snippet));
        strncpy(snippet, message + i, 16);
        LCD_WriteStringAtPos(snippet, 0, 0);
        int delay = 0;
        for (; delay < 250000; delay++) {
            DelayAprox10Us(1); // 100* = 1ms, 100,000 = 1s
        }
    }
    LCD_WriteStringAtPos(" E14 - Lab 1b ", 0, 0);

    // User code in infinite loop
    while (1)
    {
        operation = testPushbuttons(operation);
    }
    return (EXIT_FAILURE); // Failure if code executes this line of code.
} // End of main.c

/* returns value if button pressed: 1=up 2=down 3=right 4=left 5=centre */
int testPushbuttons(int currentValue)
{
    int value = currentValue;
    int x, y;

    if (PORTBbits.RB1 == 1) value = 1;
    if (PORTBbits.RB0 == 1) value = 4;
    if (PORTAbits.RA15 == 1) value = 2;
    if (PORTBbits.RB8 == 1) value = 3;
    if (PORTFbits.RF0 == 1) value = 5;

    // this is effectively where keys are held - wait for key to be released
    while (PORTBbits.RB1 != 0 || PORTBbits.RB0 != 0 || PORTAbits.RA15 != 0 ||
           PORTBbits.RB8 != 0 || PORTFbits.RF0 != 0) {
        x = readSlideSwitchesX();
        y = readSlideSwitchesY();
        calculateAnswer(value, x, y);
    }
    LCD_WriteStringAtPos(" ", 1, 0);
    return value;
}

int readSlideSwitchesX(void)
{
    // value for x read
    int x = 0;
    if (PORTFbits.RF3 == 1) x++;
    if (PORTFbits.RF5 == 1) x += 2;
    if (PORTFbits.RF4 == 1) x += 4;
    if (PORTDbits.RD15 == 1) x += 8;
    return x;
}

int readSlideSwitchesY(void)
{
    // value for y read
    int y = 0;
    if (PORTDbits.RD14 == 1) y++;
    if (PORTBbits.RB11 == 1) y += 2;
    if (PORTBbits.RB10 == 1) y += 4;
    if (PORTBbits.RB9 == 1) y += 8;
    return y;
}

void calculateAnswer(int operation, int x, int y)
{
    int answer;
    if (operation == 1) { // divide
        LCD_WriteStringAtPos("Divide ", 1, 0);
        answer = (unsigned int)((float)x / (float)y);
    }
    else if (operation == 2) { // add
        LCD_WriteStringAtPos("Add ", 1, 0);
        answer = x + y;
    }
    else if (operation == 3) { // multiply
        LCD_WriteStringAtPos("Multiply ", 1, 0);
        answer = x * y;
    }
    else if (operation == 4) { // clear
        LCD_WriteStringAtPos("Clear ", 1, 0);
        answer = 0;
    }
    else if (operation == 5) { // subtract
        LCD_WriteStringAtPos("Subtract ", 1, 0);
        answer = x - y;
    }
    displayAnswer(answer);
}

void displayAnswer(unsigned int valIn)
{
    SSD_WriteDigits(valIn & 0xF, (valIn & 0xF0) >> 4, (valIn & 0xF00) >> 8, (valIn & 0xF000) >> 12, 0, 0, 0, 0);
    return;
}

At this point I'm going to conclude that I don't want to spend hours implementing each example in these units, struggling to get the code working as intended, which is almost akin to undertaking a formal qualification.
Instead I want to read through, see what each unit is offering, and perhaps only undertake the assignments that will help my final project idea. That's the plan, so if the following units seem brief I have adopted that approach. I'm still keen to pay more attention to how the RTOS projects are created.

4.2 Unit 2: Elements of Real-Time Systems

This unit looks at real-time systems rather than the combinatorial circuits in unit 1a. There are two lab units to work through, 2a and 2b, which require the use of a stepper motor. The concepts of real-time multi-threaded systems are explained well: such a system appears to be simultaneously servicing multiple tasks in the code. The following concepts are also discussed:
- Hard real-time systems: require all deadlines to be met
- Firm real-time systems: allow for infrequently missing deadlines
- Soft real-time systems: allow for more deadlines to be missed, to the detriment of system performance
- Cooperative - each task willingly stops to allow others to start up; needs all tasks to work together well
- Preemptive - more like an administrator stopping tasks in order to run up others

"The PIC32MX370 has up to 76 different interrupt sources and 46 interrupt vectors." There are three main types of interrupt:
- Software
- Internal hardware
- External hardware

When the Interrupt Service Routine (ISR) is invoked, the microcontroller needs to save the context of major registers and settings at that instant in time; it then undertakes the ISR routine before reinstating the context of major registers and settings - this is called prologue and epilogue code. This takes time (which contributes to ISR latency), and is effectively deadtime to the system. In the PIC32 microcontrollers there are shadow registers of all the major registers, so this saving/reinstating of context is almost instantaneous. Preemptive systems generally have foreground and background tasks.
There are two types of operating scheme under preemption: nested and non-nested, with nested generally being more responsive. Each can also be used with polling techniques. The PIC32MX range can use polling, nested and non-nested simultaneously. The PIC32MX also uses something called Group Priority (levels 1-7, with one being the lowest) and Sub-Group Priority (levels 1-4). This hopefully will become clearer as I start to use it. The Digilent documentation could, IMO, benefit from some simple diagrams rather than lots of text.

The four code elements required to successfully process interrupts are:
- function declarations for the ISR
- code to initialise the hardware that generates these interrupts
- ISR code that will be executed
- instructions to enable global interrupts

An important point is that the code developer must clear the interrupt flag to prepare the system for the next interrupt; failure to do this results in the system entering the same ISR again and again, locking the system up. ISR functions cannot be called from other code areas in the same way as standard functions, and they also cannot accept variables passed to them or return them. They can, however, call standard functions from their own code. Critical sections of code, where it would be detrimental to run any other ISR, can be ring-fenced using the instructions INTEnableInterrupts() and INTDisableInterrupts().

There is a good section on the use of timers and counters in real-time systems and how hardware implementation is better than software implementation. I actually started to get overwhelmed with the content in this section and will need to re-read several sections. There were code snippets and I was unsure if I should start trying to make them into a project. At that point I started to skim through until I found Listing A.3 as a complete program. I copied that into my main.c and compiled it.
// E14 Community 1s BLINK Test
// 14rhb
int main(void)
{
    /* Timer 1 and port initialisation from Listing A.3 omitted here */
    int val = 0;
    while (1)
    {
        if (INTGetFlag(INT_T1))
        {
            /* Tasks to be executed each millisecond */
            val++;
            INTClearFlag(INT_T1); /* Clear the interrupt flag */
        }
        /* Tasks to be executed at the loop execution rate */
        if (val > 1000) {
            LATAINV = 0x01; /* Invert LED 0 */
            val = 0;
        }
    }
    return (EXIT_FAILURE); /* System has failed if this line is reached */
} /* End of File main.c */

It took me a little while to understand what to do with this, as LED0 appeared to be on constantly. Then I modified the code to that shown above, such that the interrupt is 'ticking' away every 1ms and incrementing val. When val > 1000, i.e. every 1s, LED0 toggles. What is important here is that for most of those 1ms periods we have control of the MCU and can run other functions; only when the interrupt flag is raised does the code have to run the val++.

The unit concludes with two lab assignments: Lab 2a and Lab 2b. I have opted not to undertake either of those at this point, although I note that they are both very detailed in their guidance through the task. Perhaps I will select one of these for my final project.

4.3 Unit 3: Parallel I/O and Handshaking for LCD Control

Although I already jumped straight in to use the LibPack lcd.h in previous work, this unit details how the PIC32MX can be interfaced to other LCD units - this is invaluable when breaking away from the development board onto a bespoke product PCB and a different LCD module. The unit starts off describing the following concepts:
- Parallel Communication
- Serial Communication
- Handshaking
- Bit-banging
- Parallel Master Port (PMP)

The unit concludes with two lab assignments, which are:
Lab 3a "Develop and test a collection."
Lab 3b "Develop and test a collection of.". This is aimed at the PMP interface, although the requirement is stated as being the same as assignment 3a.
4.4 Unit 4: Communications, Part 1: Serial Protocol & Part 2: Asynchronous Serial Protocols

This double unit gives a really good overview of the OSI communications model and explanations of asynchronous and synchronous communications. There is little code or reference to the BASYS MX3 unit in this general module, but that detail is left to the later four lab assignments. Appendix B in this unit is great as it lists the popular serial protocols and provides links to the Wikipedia articles on them. The four lab assignments in this unit are:

Lab 4a ."
Lab 4b ."
Lab 4c "You are to develop a software system that allows the PIC32MX370 to write an arbitrary number of 8-bit bytes to an arbitrary address location in the SPI flash memory device. You must be able to read back this stored data and determine if the data read matches the data written."
Lab 4d "The PIC32MX370 processor will periodically read the physical orientation data from a 3-Axis accelerometer and display the information on a character LCD and send a text stream to a computer terminal."

Each of the above units includes a huge wealth of knowledge and explanation on each of these topics before guiding the reader through the development of their solution with code snippets, flow diagrams, more explanations and circuit diagrams. This really is a comprehensive and well laid out set of training material; my few words do little to convey the detail without copying it to this blog verbatim.

4.5 Unit 5: IrDA Communications Protocols

This unit is a complete learning module on IrDA and includes a basic overview, the physical layer detail (the Digilent BASYS MX3 comes with its own built-in IrDA module) and a full discussion of the protocol. This is further reinforced for anyone just reading through the article, as it includes many useful logic analyser screenshots of the protocol in action. The unit then goes on to explain the basics of how the PIC32MX370 would encode and decode such a signal, building on the previous units.
The unit culminates in a single lab assignment.
Lab 5a "This lab requires the student to analyse the IrDA signal from an arbitrary device remote control unit and generate an application that can exchange IrDA messages with that device."

4.6 Unit 6: Analog I/O and Process Control

This unit discusses electro-mechanical systems in a very detailed and methodical way and covers the architectures, mechanisms and algorithms involved. Below is the control block diagram from this Digilent material:

The unit explains the differences between:
- Open-loop Control
- Closed-loop Control

The next section of the unit explains the basics of signal processing, including:
- Analog Signal Processing
- Digital Signal Processing

There is a very detailed section on motor speed control and feedback (which includes the tachometer in the diagram above). The unit concludes with some really succinct motor control information in the appendices before finishing with two user assignments.

Lab 6a "As illustrated in Fig. 5.1, open-loop control does not use feedback to the control processing that indicates how the output is actually working. The open-loop portion of this lab will use the potentiometer labelled."
Lab 6b "MotorSpeed = [(AnalogControlVoltage * 0.6 / MaximumAnalogControlVoltage) + 0.3] * MaximumMotorSpeed."

This looks to be another great and detailed learning unit; the wealth of information in it cannot be conveyed here and I really suggest anyone interested in driving motors from a microcontroller have a read.

4.7 Unit 7: Audio Signal Processing

I was not disappointed as I read through this final unit. The ability to undertake digital signal processing of audio signals is a very powerful and widely used technique - the fact it can be undertaken on this PIC32MX device shows how much processing power is available. The unit starts with a basic chapter on signal processing and links off to several Wikipedia articles.
There are then two small sections on audio processing:
- Analog Signal Processing
- Digital Signal Processing

There is a nicely detailed introduction to digital filters, Finite Impulse Response (FIR) filters and Infinite Impulse Response (IIR) filters. The Digilent Unit 7 is a great starting point, IMO, for any audio DSP work, whether using the BASYS MX3 or another similar microcontroller board. The unit's appendices include some useful links to the BASYS MX3 schematics as well as code snippets for several FIR and IIR filters. The unit concludes again with two assignments:

Lab 7a "You are to create a digital sine wave generator capable of synthesizing eight constant amplitude signals at frequencies specified in Section 8.1. The frequency of the sine waves will be selected by setting one of the eight slide switches. The audible output will be enabled and the frequency displayed on the 4-digit 7-segment display for five seconds when BTND is pressed. The LCD will display the frequency of the selected sine wave."
Lab 7b "The objective for this lab is to create a graphical spectrum analyzer as modeled in Fig. 8.1. Figure 8.2 shows the LCD and 7-segment display for the result outputs. Figures 8.3 through 8.5 provide an approach to the software organization for this lab. Figure 8.4 shows the tasks to be completed for the system initialisation."

5. Summaries

5.1 Seattle University Material
5.2 Digilent Training Material
5.3 PIC32MX Libraries and plib.h

#include <plib.h> /* Include peripheral library */
#include "hardware.h" /* Includes GetPeripheralClock() */

The plib.h was flagged as not being required, and hardware.h also caused issues.

"Starting with XC32 V1.40, PIC32MX peripheral support libraries are no longer installed as part of the compiler's installation. These libraries are now installed as a second setup, after the installation of the XC32 compiler. Peripheral libraries for PIC32MX products are not needed for Harmony development.
Microchip recommends using MPLAB Harmony for new projects. To download MPLAB Harmony, click here."

I'm using XC32 v2.30, so the peripheral support libraries were not installed at the same time and I need to complete the second step. I have not developed using Harmony at the moment and therefore downloaded the PIC32 Legacy Peripheral Libraries zip file, which extracts to an exe.

5.4 Reflection on My Aims

I believe I've accomplished all of the aims I had set for blog #3.

6. What Next?

I still want to finish off with my own project before writing up my formal roadtest report on this Digilent BASYS MX3. Although this development board is too large for my project, I believe it will form a good basis for developing the code prior to making my own PCB and miniaturising the project. I want to use the Pmod unit I bought as part of my Energy Harvesting Design Challenge to re-work an old project that didn't get very far; my Macrobinocular Mk7150 as used in the Star Wars films by Luke Skywalker.

7. Analog Discovery 2

8. Resources
- Analog Discovery 2 - National Instruments
#include <sys/ddi.h>
#include <sys/sunddi.h>

int ddi_dma_sync(ddi_dma_handle_t handle, off_t offset,
    size_t length, uint_t type);

Solaris DDI specific (Solaris DDI).

handle
The handle filled in by a call to ddi_dma_alloc_handle(9F).

offset
The offset into the object described by the handle.

length
The length, in bytes, of the area to synchronize. When length is zero, the entire range starting from offset to the end of the object has the requested operation applied to it.

type
Indicates the caller's desire about what view of the memory object to synchronize. The possible values are DDI_DMA_SYNC_FORDEV, DDI_DMA_SYNC_FORCPU and DDI_DMA_SYNC_FORKERNEL.

The ddi_dma_sync() function is used to selectively synchronize either a DMA device's or a CPU's view of a memory object that has DMA resources allocated for I/O. This may involve operations such as flushes of CPU or I/O caches, as well as other more complex operations such as stalling until hardware write buffers have drained.

This function need only be called under certain circumstances. When resources are allocated for DMA using ddi_dma_addr_bind_handle() or ddi_dma_buf_bind_handle(), an implicit ddi_dma_sync() is done. When DMA resources are deallocated using ddi_dma_unbind_handle(9F), an implicit ddi_dma_sync() is done. However, at any time between DMA resource allocation and deallocation, if the memory object has been modified by either the DMA device or a CPU and you wish to ensure that the change is noticed by the party that did not do the modifying, a call to ddi_dma_sync() is required. This is true independent of any attributes of the memory object including, but not limited to, whether or not the memory was allocated for consistent mode I/O (see ddi_dma_mem_alloc(9F)) or whether or not DMA resources have been allocated for consistent mode I/O (see ddi_dma_addr_bind_handle(9F) or ddi_dma_buf_bind_handle(9F)).
If a consistent view of the memory object must be ensured between the time DMA resources are allocated for the object and the time they are deallocated, you must call ddi_dma_sync() to ensure that either a CPU or a DMA device has such a consistent view. What to set type to depends on the view you are trying to ensure consistency for. If the memory object is modified by a CPU, and the object is going to be read by the DMA engine of the device, use DDI_DMA_SYNC_FORDEV. This ensures that the device's DMA engine sees any changes that a CPU has made to the memory object. If the DMA engine for the device has written to the memory object, and you are going to read (with a CPU) the object (using an extant virtual address mapping that you have to the memory object), use DDI_DMA_SYNC_FORCPU. This ensures that a CPU's view of the memory object includes any changes made to the object by the device's DMA engine. If you are only interested in the kernel's view (kernel-space part of the CPU's view) you may use DDI_DMA_SYNC_FORKERNEL. This gives a hint to the system—that is, if it is more economical to synchronize the kernel's view only, then do so; otherwise, synchronize for CPU.

The ddi_dma_sync() function returns:

DDI_SUCCESS
Caches are successfully flushed.

DDI_FAILURE
The address range to be flushed is out of the address range established by ddi_dma_addr_bind_handle(9F) or ddi_dma_buf_bind_handle(9F).

The ddi_dma_sync() function can be called from user, interrupt, or kernel context.

ddi_dma_addr_bind_handle(9F), ddi_dma_alloc_handle(9F), ddi_dma_buf_bind_handle(9F), ddi_dma_mem_alloc(9F), ddi_dma_unbind_handle(9F)

Writing Device Drivers for Oracle Solaris 11.2
Best Price Car Parts .com offers FREE Ground SHIPPING on your online Auto Parts orders! FREE UPS GROUND SHIPPING on orders in the USA and Canada over $75! Best Price Car Parts .com is proud to offer you the best quality car and truck parts available for the lowest price anywhere. We have OEM, stock replacement car and truck parts for all import and domestic vehicles. We sell Acura engine parts, Audi brake parts and BMW steering parts. We can UPS you a fuel pump and an air filter for your Mitsubishi and Porsche. New air conditioning parts will help to keep your Saab and Saturn cool in the summer. Heater cores keep your Subaru warm in the winter. Check out our online car parts catalogue when you need an oil filter or alternator belt for your Suzuki, Toyota, Volvo or Volkswagen. We carry only the highest quality OEM, stock, replacement auto and truck parts including Ernst German Made Exhaust Parts, and ATE Brake Pads and rotors. Bosch and NGK electrical components will help your car's ignition system. You can improve your car's handling by installing new KYB, TRW and FEBI Bilstein suspension parts. Clutch and drive line components from EMPI, Gemo, Lemforder Parts and Sachs will give you one smooth ride. Best Price Car Parts .com is committed to bringing our customers the best of the best in OEM quality parts at low prices. Best Price Car Parts .com specializes in providing OEM quality auto parts for Mercedes Benz, BMW, and Porsche. Maybe you are looking for parts for your Chevrolet, Ford or Dodge, manufactured between 1961-2007. We have many car parts for these years of vehicles. Nissan, Subaru and Toyota are some of the vehicles that we sell auto parts online for. We sell stock replacement car and truck parts to Canada and the USA. If you're looking for vintage restoration car parts or performance auto parts, you're at the right web site.
Our knowledgeable and friendly staff are experienced automobile enthusiasts and have restored many Volkswagen, Porsche and Honda vehicles to their former glory. If this is your first restoration project or you are just repairing your daily driver car, you will be happy with our commitment to find the correct auto parts for your car or truck. FREE UPS ground shipping is standard on all auto parts orders on Best Price Car Parts .com over $75 to the USA and Canada. To qualify for free shipping, please order more than $75 of import and domestic auto parts. Use our unique, online auto part search engine to help you to find the correct auto parts for your car. Whether it's a Nissan or Honda, you need the parts that are right for your vehicle. We have spent the time to catalogue each auto part we sell to ensure that you get the proper part for your car. Select USA or Canada, next your car's year, then the car make and click search. Now you are shopping for Auto Parts Online in your specific country, either Canada or the USA. Best Price Car Parts .com is the fast and secure way to order auto parts online. When you order parts online from BestPriceCarParts.com, and you use our country, year, make and model search, you can be sure that you are ordering the correct auto and truck parts for your Subaru, Hyundai or Chevrolet. When you use our specialized online auto parts search engine, the list of car parts will be specifically tailored for your vehicle; whether you drive an Acura or Volvo, every part on the results page is for your automobile! Not sure what you've got? Please phone our BestPriceCarParts.com toll free phone number 1-800-207-1367 Monday through Friday from 9 am to 5 pm Eastern Standard Time. Also, during the online check out process, please include your vehicle details and our staff will verify your parts order before shipment. We sell foreign and domestic car parts including Audi parts, as well as BMW parts, quality Honda parts and Infiniti parts.
For domestic parts, we have everything from Chevy parts and Ford parts to Dodge parts. Our online auto parts catalogue includes parts ranging from Isuzu parts and Jaguar parts to Kia parts and Land Rover parts. Get Lexus parts and Mazda parts here, along with Mercedes parts and Mitsubishi parts, as well as Saab parts and quality Saturn parts. Whether you're here for Subaru parts or Suzuki parts, quality Toyota parts or Volvo parts, or even for VW parts, you've discovered the best place to buy auto parts online, competitively priced. We are our customers' #1 choice when it comes to buying the best auto parts Canada & the USA offer, and they're all right here at Best Price Car Parts .com. Please phone toll free 1-800-207-1367 for pricing on any parts that you cannot find after searching Best Price Car Parts .com. Please visit and for Vintage Volkswagen Parts! You will find parts for your Bug, Beetle, Super Beetle, Karmann Ghia, Van, Type 3 and Thing on both of these websites. To learn more about vehicle appraisals please visit
Problem Statement

In this problem, we will be given a linked list and an integer X. We need to partition the linked list around this integer X such that all elements that are less than X occur before all elements having a value greater than or equal to X.

Problem Statement Understanding

To understand this problem statement, let us take an example. If the given linked list is: and X = 5
- Here, X = 5, so nodes with data less than X are: 1, 2, 3.
- Nodes with data greater than or equal to X are: 5, 8, 12.
- We need to keep 1, 2, 3 before 5, 8, 12.
- Remember that the order of elements does not matter to us as long as the criteria are satisfied, i.e., elements having a value less than X should be before the elements with a value greater than or equal to X.
- So, the final output will be 1→2→3→5→8→12→NULL

Let us take another example: if the linked list is 9→1→10→27→42→2→NULL and X = 10
- As explained in the above example, the output linked list after partitioning around X = 10 will be: 1→9→2→10→27→42→NULL

Note: The order of the elements does not matter, i.e., in the final linked list, the elements smaller than X should be before the elements greater than or equal to X. The order of occurrence of the elements in the final linked list need not be the same as the order of occurrence in the initial given linked list. We can change the order of occurrence of elements as long as the main condition is satisfied. Also, multiple correct outputs are possible for this problem statement.

Now let's have a look at some helpful observations.

Helpful Observations
- We need to separate the nodes with values less than X from those with values greater than or equal to X.
- The order of occurrence of nodes does not matter to us.
- Multiple correct solutions exist for this problem.
Approach
- Here, we will keep track of two pointers, i.e., the head and tail pointers of the list.
- When we encounter an element that is less than X, we will insert it before the head and update the head pointer.
- When we encounter an element greater than or equal to X, we will insert it after the tail node and update the tail node.

Algorithm
- Initialize two pointers named current and tail with the starting node of the original linked list.
- Loop in the linked list using the pointer current, starting from the first node to the last, and store the next pointer of the current node in another variable before inserting the current node before head or after tail.
- If the current node has a value less than X, insert it before head and update the head.
- If the current node has a value greater than or equal to X, insert it after the tail and update the tail pointer.
- Update the current node with the next pointer stored in step 2.
- After the loop ends, make the next pointer of the tail node point to NULL to avoid a cycle in the newly created list.
Dry Run

Code Implementation

#include <cstdio>
using namespace std;

class Node {
public:
    int data;
    Node* next;
    Node(int x) {
        data = x;
        next = NULL;
    }
};

Node *partition(Node *head, int x)
{
    /* initialize current and tail nodes of new list as discussed in step 1 */
    Node *tail = head;
    Node *curr = head;
    while (curr != NULL)
    {
        Node *next = curr->next;
        if (curr->data < x) // left partition
        {
            /* Insert node before head if current node data is less than given value of X */
            curr->next = head;
            head = curr; // update the head node
        }
        else // right partition
        {
            /* Insert node after tail node */
            tail->next = curr;
            tail = curr; // update the tail node
        }
        curr = next;
    }
    tail->next = NULL; // make next of tail node NULL to
                       // avoid cycles in newly created list

    // return changed head
    return head;
}

void printList(Node *head)
{
    Node *temp = head;
    while (temp != NULL) {
        printf("%d ", temp->data);
        temp = temp->next;
    }
}

int main(void)
{
    Node* head = NULL;
    head = new Node(3);
    head->next = new Node(12);
    head->next->next = new Node(1);
    head->next->next->next = new Node(5);
    head->next->next->next->next = new Node(8);
    head->next->next->next->next->next = new Node(2);
    Node *tmp = partition(head, 5);
    printList(tmp);
    return 0;
}

#include <stdio.h>
#include <stdlib.h>

struct Node {
    int data;
    struct Node* next;
};

// A utility function to create a new node
Node *newNode(int data)
{
    struct Node* new_node = new Node;
    new_node->data = data;
    new_node->next = NULL;
    return new_node;
}

// Function to make a new list (using the existing nodes) and return head of new list.
struct Node *partition(struct Node *head, int x)
{
    /* Let us initialize start and tail nodes of new list */
    struct Node *tail = head;

    // Now iterate original list and connect nodes
    Node *curr = head;
    while (curr != NULL)
    {
        struct Node *next = curr->next;
        if (curr->data < x)
        {
            // Insert node at head.
            curr->next = head;
            head = curr;
        }
        else
        {
            // Insert node at tail.
            tail->next = curr;
            tail = curr;
        }
        curr = next;
    }
    tail->next = NULL;
    return head;
}

/* Function to print linked list */
void printList(struct Node *head)
{
    struct Node *temp = head;
    while (temp != NULL) {
        printf("%d ", temp->data);
        temp = temp->next;
    }
}

// Driver program to run the case
int main()
{
    /* Start with the empty list */
    struct Node *head = newNode(3);
    head->next = newNode(12);
    head->next->next = newNode(1);
    head->next->next->next = newNode(5);
    head->next->next->next->next = newNode(8);
    head->next->next->next->next->next = newNode(2);
    head = partition(head, 5);
    printList(head);
    return 0;
}

class Partition {
    static class Node {
        int data;
        Node next;
    }

    static Node newNode(int data) {
        Node new_node = new Node();
        new_node.data = data;
        new_node.next = null;
        return new_node;
    }

    // Function to make a new list
    // (using the existing nodes) and
    // return head of new list.
    static Node partition(Node head, int x) {
        /* Let us initialize start and tail nodes of new list */
        Node tail = head;

        // Now iterate original list and connect nodes
        Node curr = head;
        while (curr != null) {
            Node next = curr.next;
            if (curr.data < x) {
                // Insert node at head.
                curr.next = head;
                head = curr;
            } else {
                // Insert node at tail.
                tail.next = curr;
                tail = curr;
            }
            curr = next;
        }
        tail.next = null;
        return head;
    }

    static void printList(Node head) {
        Node temp = head;
        while (temp != null) {
            System.out.print(temp.data + " ");
            temp = temp.next;
        }
    }

    // Driver code
    public static void main(String[] args) {
        /* Start with the empty list */
        Node head = newNode(3);
        head.next = newNode(12);
        head.next.next = newNode(1);
        head.next.next.next = newNode(5);
        head.next.next.next.next = newNode(8);
        head.next.next.next.next.next = newNode(2);
        head = partition(head, 5);
        printList(head);
    }
}

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def newNode(data):
    new_node = Node(data)
    new_node.data = data
    new_node.next = None
    return new_node

# Function to make a new list
# (using the existing nodes)
# and return head of new list.
def partition(head, x):
    # Let us initialize start and
    # tail nodes of new list
    tail = head

    # Now iterate original list
    # and connect nodes
    curr = head
    while (curr != None):
        next = curr.next
        if (curr.data < x):
            # Insert node at head.
            curr.next = head
            head = curr
        else:
            # Append to the list of greater values
            # Insert node at tail.
            tail.next = curr
            tail = curr
        curr = next
    tail.next = None

    # The head has changed, so we need
    # to return it to the user.
    return head

# Function to print linked list
def printList(head):
    temp = head
    while (temp != None):
        print(temp.data, end = " ")
        temp = temp.next

# Driver Code
if __name__=='__main__':
    # Start with the empty list
    head = newNode(3)
    head.next = newNode(12)
    head.next.next = newNode(1)
    head.next.next.next = newNode(5)
    head.next.next.next.next = newNode(8)
    head.next.next.next.next.next = newNode(2)
    x = 5
    head = partition(head, x)
    printList(head)

Output

2 1 3 12 5 8

Time Complexity: O(n), where n is the number of nodes in the list.

So, in this blog, we have tried to explain how you can partition a linked list around a given value, without caring about maintaining stability among the elements of the list, in the most optimal way. If you want to solve more questions on Linked List, curated by our expert mentors at PrepBytes, you can follow this link Linked List.
https://www.prepbytes.com/blog/linked-list/partitioning-a-linked-list-around-a-given-value-and-if-we-dont-care-about-making-the-elements-of-the-list-stable/
A current trend among developers writing web applications using traditional server-side languages, including Java, is to move the user interface entirely to the browser, and to limit the server-side code to just providing business logic via an API. One of the most popular ways to implement the front end at the moment is as a Single Page Application (SPA) using the Angular 2 framework (soon to be renamed simply, Angular, and released as version 4). Here at Chariot, we've been using the Spring Framework to write web applications for quite a long time. While we're also heavily involved with other technologies, such as Scala, Clojure, and on occasion, Ruby, we're not about to give up on Spring any time soon. We're particularly fond of the latest incarnation of Spring – Spring Boot, as it makes it easier than ever to get a Spring application up and running. In the past, we would typically have used server-side templating (JSP, FreeMarker, etc.), with some Javascript mixed in, to present the user interface. More recently, we might have used Angular 1 to write a Javascript-based user interface, and served that from the static resources directory of our web app. However, preferences in application architecture tend to change over time, and we now generally like to have our Javascript UI code stand alone. The npm-based tools that we use for Angular development make it convenient to run our Angular front-end app on one port (typically 3000), while our Spring Boot backend runs on another (typically 8080). Having the two parts of the application (the UI and the API) served from different ports presents a problem, though – by default, the web browser prevents the UI application from accessing the API on a port different from the one on which the UI was served (this is known as the Same-Origin Policy, or SOP). At first glance, you might think that this is a problem that we're unnecessarily inflicting on ourselves, just for the sake of ease of Angular development.
However, we're likely to run into the SOP problem in production as well, for instance, if we want to serve our application's UI from and its API from, or if we want to provide our API for other people's applications to consume in addition to our own. A popular solution to this problem is the use of Cross-Origin Resource Sharing (CORS). CORS is a W3C Recommendation, supported by all modern browsers, that involves a set of procedures and HTTP headers that together allow a browser to access data (notably Ajax requests) from a site other than the one from which the current page was served. Future posts will show how to add authentication and authorization via Spring Security and JSON Web Tokens (JWT).

The Application Front-End – Tour of Heroes

Let's start with the Tour of Heroes example from the Angular web site. You may already be familiar with it, but if not, now is a good time to go check it out at and work through the tutorial. Go ahead, I'll wait here for you until you get back. (If you're the impatient type, and want to just download the finished code for the Tour of Heroes demo, you can clone my git repo at.)

Ok, welcome back. Now that we have a working stand-alone demo front-end (with faked API access), let's create a real API for it.

Back-End – Custom Spring Boot App

The Tour of Heroes front-end expects an API for listing, searching and doing CRUD operations on Hero objects. If you look at the Angular app's hero.ts source file, you'll see that a Hero has two properties – a numeric id and a name:

export class Hero {
    id: number;
    name: string;
}

Let's use Spring Boot to create this API. The easiest way to start is to use the SPRING INITIALIZR app at. Go there, select Gradle Project, name the group heroes and name the artifact back-end, add dependencies for JPA, H2 and Rest Repositories, and then hit the Generate Project button. You'll get a zip file containing a starter project. Unzip it, and open it in your favorite IDE / editor.
You should now have a build.gradle file in the root folder of the project that looks like this:

buildscript {
    ext {
        springBootVersion = '1.4.3.RELEASE'
    }
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
    }
}

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'org.springframework.boot'

jar {
    baseName = 'back-end'
    version = '0.0.1-SNAPSHOT'
}
sourceCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile('org.springframework.boot:spring-boot-starter-data-jpa')
    compile('org.springframework.boot:spring-boot-starter-data-rest')
    compile("com.h2database:h2")
    testCompile('org.springframework.boot:spring-boot-starter-test')
}

build.gradle

This gives us a build that includes the required libraries for an H2 embedded database, JPA (Java Persistence API), and Spring Rest in addition to Spring Boot's default embedded Tomcat server. Now, let's add our Hero domain object. Create a Hero.java file in the src/main/java/heroes folder, and add the following code:

package heroes;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Hero {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    private String name;

    public long getId() {
        return id;
    }

    public void setId(long id) {
        this.id = id;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

Hero.java

This is a standard POJO (plain old java object), a.k.a. a Bean, with JPA annotations to mark this as a JPA Entity and to tell JPA how to generate ids. The one thing that is slightly different here from a normal Spring Data Entity is the inclusion of the getId and setId methods. These would typically be omitted because JPA doesn't need them, and a true REST API would employ HATEOAS, and not expose the id as part of an object's JSON representation.
However, like many REST-ish applications, the Tour of Heroes front-end does not make use of API-provided hypertext links, and instead uses object ids and well-known endpoint URI patterns to form the URLs used to interact with the API. Since HATEOAS is not the subject of this article, we'll just have the API expose object ids. This isn't quite enough to start exposing Entity ids, though. By default, Spring will still omit the id when serializing objects to Json. To change this, we need to add a configuration, and in keeping with Spring Boot convention, we'll do so with a configuration class. Add the following RepositoryConfig.java source file to the src/main/java/heroes folder:

package heroes;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.rest.core.config.RepositoryRestConfiguration;
import org.springframework.data.rest.webmvc.config.RepositoryRestConfigurerAdapter;

@Configuration
public class RepositoryConfig extends RepositoryRestConfigurerAdapter {

    @Override
    public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
        config.exposeIdsFor(Hero.class);
    }
}

RepositoryConfig.java

This will configure the Repository (which we haven't created yet) to expose the ids of Hero objects when serializing to Json. Note that if you add more entities, you'll have to add them here as well. Next, we'll add the Repository, which will be responsible for database operations involving our Heroes. With Spring Boot, we can simply provide an interface for our Repository, and Spring Data will implement it via a proxy at runtime.
Add the following HeroRepository.java file to src/main/java/heroes:

package heroes;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(collectionResourceRel = "heroes", path = "heroes")
public interface HeroRepository extends CrudRepository<Hero, Long> {

    @Query("from Hero h where lower(h.name) like CONCAT('%', lower(:name), '%')")
    public Iterable<Hero> findByName(@Param("name") String name);
}

HeroRepository.java

This simple interface, by extending Spring Data's CrudRepository, gives us Create (POST), Update (PUT), Delete (DELETE), and List functionality. The @RepositoryRestResource annotation causes Spring Data to wrap our Repository at runtime with a Controller that handles the HTTP REST requests. We have added only one method to the interface, beyond what CrudRepository provides by default, to support the Angular app's search function (note the import of Spring Data JPA's @Query annotation, which that method requires). This is all of the code that we need to write to get our basic Heroes REST api up and running. There's one more thing to do. As the application stands right now, when it starts up, there will be no Heroes in the database. We could just add some, but since we're using an in-memory database (H2), we'd lose our Heroes every time we shut down the application. Fortunately, Spring Data provides an easy facility for loading data into the database at start-up.
If it is creating a new database (which it will do every time we start the app), and it finds a file named data.sql in the root of the classpath, it will execute the contents as sql statements. Add a data.sql file to the src/main/resources directory with the following contents (or make up your own!):

insert into hero(name) values('Black Widow');
insert into hero(name) values('Superman');
insert into hero(name) values('Rogue');
insert into hero(name) values('Batman');

data.sql

Then, to cause the data.sql file to be copied to the classpath at build-time, add the following to the build.gradle file, just after the apply plugin lines:

task copySqlImport(type: Copy) {
    from 'src/main/resources'
    into 'build/classes'
}
build.dependsOn(copySqlImport)

Ok, go ahead and start up the Spring Boot application using the Gradle task bootRun (either select it from your IDE if it supports Gradle builds, or run ./gradlew bootRun from the command line, or .\gradlew.bat bootRun if you're on Windows). You can try out the API at. If you point your web browser at that endpoint, you should get a list of the heroes that you specified in your data.sql file. Fire up a REST client (I recommend Postman), and you should also be able to add, update, and delete Heroes. Note that Spring Data is still including HATEOAS links (in addition to Hero ids), and there's no easy way to turn them off. We're just going to ignore them.

Integrating the front-end and back-end

To integrate the Angular app with our new Spring Boot-based API, we'll need to make some changes to both the Angular front-end application and the Spring Boot API. First, we need to disable the Angular app's in-memory database. To do this, edit app.module.ts, and comment out or delete the imports for InMemoryWebApiModule and InMemoryDataService at the top of the file, and inside the @NgModule declaration.
Then, in hero.service.ts, change the URLs to point to instead of /app/heroes. These two changes will cause the app to use the api provided by the Spring Boot application instead of the internally mocked api. Next, we need to deal with the fact that the Spring Boot api responds with data in the HAL format (see). This format is somewhat different from what the original Tour of Heroes app expected, so we need to modify the Angular app's Hero Service slightly to look for data in the right places. (You could modify the Spring Boot app to provide the format expected by the Angular app instead, but that would be quite a bit more work.) Modify hero.service.ts, changing the getHeroes() function from

getHeroes(): Promise<Hero[]> {
    return this.http.get(this.heroesUrl)
        .toPromise()
        .then(response => response.json().data as Hero[])
        .catch(this.handleError);
}

to

getHeroes(): Promise<Hero[]> {
    return this.http.get(this.heroesUrl)
        .toPromise()
        .then(response => response.json()._embedded.heroes as Hero[])
        .catch(this.handleError);
}

(change response.json().data to response.json()._embedded.heroes) and changing

create(name: string): Promise<Hero> {
    return this.http
        .post(this.heroesUrl, JSON.stringify({name: name}), {headers: this.headers})
        .toPromise()
        .then(res => res.json().data)
        .catch(this.handleError)
}

to

create(name: string): Promise<Hero> {
    return this.http
        .post(this.heroesUrl, JSON.stringify({name: name}), {headers: this.headers})
        .toPromise()
        .then(res => res.json())
        .catch(this.handleError)
}

(change .then(res => res.json().data) to .then(res => res.json())) Modify hero-search.service.ts, changing the search function from

search(term: string): Observable<Hero[]> {
    return this.http
        .get(`app/heroes/?name=${term}`)
        .map((r: Response) => r.json().data as Hero[]);
}

to

search(term: string): Observable<Hero[]> {
    return this.http
        .get(`{term}`)
        .map((r: Response) => r.json()._embedded.heroes as Hero[]);
}

(change app/heroes/?name=${term} to {term}) Fire up the apps, and you can
see (by using your browser's developer tools) that the Angular app does indeed make a GET request to, and that the expected Json is returned with an HTTP 200 response code. But no heroes show up in the dashboard! What happened? Look at the web browser's console, and you'll see that there's an error that reads something like:

XMLHttpRequest cannot load. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin '' is therefore not allowed access.

The problem here is that the browser has a Same-Origin Policy for XMLHttpRequests (or at least most modern browsers do) [see]. This means that without further intervention, XMLHttpRequests can only be made to the same domain that served the initial page. If the protocol, host, or port do not match the original page request, the response won't be received by the Angular code, and you'll see this error.

CORS

There is a way around this, and it's known as the Cross Origin Resource Sharing protocol, or CORS. For simple cases like this GET, when your Angular code makes an XMLHttpRequest that the browser determines is cross-origin, the browser looks for an HTTP header named Access-Control-Allow-Origin in the response. If the response header exists, and the value matches the origin domain, then the browser passes the response back to the calling javascript. If the response header does not exist, or its value does not match the origin domain, then the browser does not pass the response back to the calling code, and you get the error that we just saw. For more complex cases, like PUTs, DELETEs, or any request involving credentials (which will eventually be all of our requests), the process is slightly more involved. The browser will first send an OPTIONS request (a "preflight") to find out what methods are allowed. If the requested method is allowed, then the browser will make the actual request, again passing or blocking the response depending on the Access-Control-Allow-Origin header in the response.
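The simple-case gatekeeping described above amounts to a single header comparison. Here is a rough, purely illustrative sketch of the browser's decision (browser_allows is a made-up name, not a real browser or Spring API):

```python
# Illustrative sketch only: mirrors the rule that the response is
# handed to the calling script when Access-Control-Allow-Origin
# is "*" or exactly matches the requesting page's origin.
def browser_allows(response_headers, origin):
    allowed = response_headers.get("Access-Control-Allow-Origin")
    return allowed is not None and allowed in ("*", origin)

print(browser_allows({"Access-Control-Allow-Origin": "*"}, "http://localhost:3000"))  # → True
print(browser_allows({}, "http://localhost:3000"))  # → False
```

Real browsers implement the full CORS algorithm (including extra restrictions on "*" for credentialed requests), so treat this only as a mental model.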
Note that for security reasons, the browser is in complete control of the CORS protocol – it cannot be overridden by the calling code. The problem right now, as the error message indicates, is that the API is not returning an Access-Control-Allow-Origin header. Let's correct this. In the Spring Boot app, add the following configuration class to add a Filter that adds the required header.

package heroes;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
import org.springframework.web.filter.CorsFilter;

@Configuration
public class RestConfig {

    @Bean
    public CorsFilter corsFilter() {
        CorsConfiguration config = new CorsConfiguration();
        config.addAllowedOrigin("*");
        config.addAllowedHeader("*");
        config.addAllowedMethod("*");
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        source.registerCorsConfiguration("/**", config);
        return new CorsFilter(source);
    }
}

RestConfig.java

In this configuration, we have addAllowedOrigin("*") – this is a wildcard that will simply copy the value of the request's Origin header into the value of the Response's Access-Control-Allow-Origin header, effectively allowing all origins. You can add specific origins instead if you wish to limit them. Go back and reload the Angular app, and you should see that the Heroes are now listed, and that the rest of the application works as expected. In this Part 1 post, you have learned how to implement a simple Spring Boot REST API for use by an Angular 2 front-end, and how to allow them to be served from different ports and/or domains via CORS. In a future post, I'll show you how to add Authentication and Authorization via Spring Security and JWT.
https://chariotsolutions.com/blog/post/angular-2-spring-boot-jwt-cors_part1/
Created on 2017-11-14 17:29 by joern, last changed 2017-11-16 08:01 by berker.peksag. This issue is now closed.

The allow_abbrev option (default True) is currently documented like this ():

> Normally, when you pass an argument list to the parse_args() method of an ArgumentParser, it recognizes abbreviations of long options.

However, it also controls combinations of short options and especially the combination of flags (store_const) like `-a -b` as `-ab`. Example snippet for testing:

import argparse
import sys

parser = argparse.ArgumentParser(
    allow_abbrev=False
)
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
parser.add_argument('x', nargs='*')

parser.parse_args('-a -b foo bar'.split())
parser.parse_args('-ab foo bar'.split())

As you can see, the 2nd parse will fail if allow_abbrev=False. This issue is either a doc issue only or an unintended combination of long option shortening and (the way more common) flag combinations. When I deactivated this in my code, I wanted to disable the (nice to have) long option shortening, but I unintentionally also deactivated (MUST have) short flag combinations.

Thank you for the report and for the PR. I think this is a duplicate of issue 26967.

> This issue is either a doc issue only or an unintended combination of
> long option shortening and (the way more common) flag combinations.

This is indeed a bug so it would be better to fix it.
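For contrast, the combined short-flag form works as expected when allow_abbrev is left at its default of True. A quick check, reusing the same parser shape as the issue's snippet:

```python
import argparse

# Same options as the snippet above, but with the default
# allow_abbrev=True: the combined form -ab behaves like -a -b.
parser = argparse.ArgumentParser()
parser.add_argument('-a', action='store_true')
parser.add_argument('-b', action='store_true')
parser.add_argument('x', nargs='*')

args = parser.parse_args('-ab foo bar'.split())
print(args.a, args.b, args.x)  # → True True ['foo', 'bar']
```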
https://bugs.python.org/issue32027
C++ Find Minimum Element in a Rotated Sorted Vector Program

Hello Everyone! In this tutorial, we will demonstrate the logic of Finding the Minimum Element in a Rotated Sorted Vector, in the C++ programming language.

What is a Rotated Sorted Vector?

A Rotated Sorted Vector is a sorted vector rotated at some pivot element unknown to you beforehand.

Example: [4,5,6,7,0,1,2] is one of the rotated sorted vectors for the sorted vector [0,1,2,4,5,6,7].

For a better understanding of its implementation, refer to the well-commented CPP code given below.

Code:

#include <iostream>
#include <bits/stdc++.h>
using namespace std;

int findMin(vector<int> &m)
{
    int i;
    int n = m.size();
    for (i = 0; i < n; i++) {
        if (i == 0) {
            if (m[i] < m[n - 1] && m[i] < m[1])
                break;
        }
        else {
            if (m[i] < m[i - 1] && m[i] < m[(i + 1) % n])
                break;
        }
    }
    return m[i % n];
}

int main()
{
    cout << "\n\nWelcome to Studytonight :-)\n\n\n";
    cout << " ===== Program to find the Minimum element in a rotated Sorted Vector, in CPP ===== \n\n\n";
    cout << " ===== Logic: The minimum element will have larger number on both right and left of it. ===== \n\n\n";

    // initializing the vector with a rotation of a sorted sequence
    vector<int> v = {4, 5, 6, 7, 1, 2, 3};

    int n = v.size();
    int mini = 0;

    cout << "The elements of the given vector is : ";
    for (int i = 0; i < n; i++) {
        cout << v[i] << " ";
    }

    mini = findMin(v);

    cout << "\n\nThe Minimum element in the given vector is: " << mini;

    cout << "\n\n\n";

    return 0;
}

Output:

The elements of the given vector is : 4 5 6 7 1 2 3

The Minimum element in the given vector is: 1

We hope that this post helped you develop a better understanding of the concept of finding a minimum element in the rotated sorted vector and its implementation in CPP. For any query, feel free to reach out to us via the comments section down below. Keep Learning : )
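The findMin above is a linear O(n) scan. On input that really is a rotation of a sorted sequence, the minimum can also be located in O(log n) with binary search. A sketch of that idea, written here in Python for brevity (find_min is our own illustrative name, not code from the article):

```python
# Binary-search variant: assumes the input is a valid rotation of a
# sorted array (no duplicates straddling the pivot).
def find_min(v):
    lo, hi = 0, len(v) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if v[mid] > v[hi]:
            # The drop (and hence the minimum) lies to the right of mid.
            lo = mid + 1
        else:
            # The minimum is at mid or to its left.
            hi = mid
    return v[lo]

print(find_min([4, 5, 6, 7, 0, 1, 2]))  # → 0
```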
https://studytonight.com/cpp-programs/cpp-find-minimum-element-in-a-rotated-sorted-vector-program
(basic) button not working?

Hi, I'm using the following view to display a toolbar with a button in it. But nothing happens when I press the button... I don't even see the button image change from button-up to button-down. Anyone any idea why?

Code:
app.views.test = Ext.extend(Ext.Panel, {
    fullscreen: true,
    dockedItems: [
        {
            dock : 'top',
            xtype: 'toolbar',
            title: 'Standard Titlebar'
        },
        {
            dock : 'top',
            xtype: 'toolbar',
            ui   : 'light',
            items: [
                {
                    text: 'Test Button',
                    handler: function() {
                        alert('aaa'); // Not working...
                    }
                }
            ]
        }
    ],
    html: 'Testing'
});

Nevermind, I figured it out. It's because I used 'app' in the namespace. Just read that I had to change it. Works perfect!

I have no problems. I touch the buttons and the message box with aaaa appears. How did you test it?

namespaces
just insert Ext.ns('app.views');

It worked for me once, then I tried to do the alert in a controller instead of the view itself. That didn't seem to work. So I put the alert back in the handler of the view and now I have the same problem again... My button won't react when I press it... Anyone any idea what's going on...???
BTW. what is "Ext.ns('app.views');" and where do I put it..?

Ok, I just found the REAL problem. When my phone is connected to the USB cable, then for some reason the handlers won't react to button presses. When I disconnect my phone from the USB cable everything works fine.

Test environment
Okay I understand. I only tested in browser.

Namespace
I always put it at the beginning of the js file.

If you use Ext.Application, it will set up the following namespaces for you:
app.controllers
app.models
app.stores
app.views
Of course you can change the name of.
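For readers wondering what Ext.ns actually does: it just makes sure the dotted namespace chain of objects exists before you assign to it. A plain-JavaScript stand-in (not Sencha's real implementation) might look like:

```javascript
// Rough stand-in for Ext.ns: walk a dotted path, creating each
// missing level as an empty object, starting from a root object.
function ns(path, root) {
  return path.split('.').reduce(function (obj, key) {
    obj[key] = obj[key] || {};
    return obj[key];
  }, root);
}

var app = {};
ns('views', app);            // now app.views exists
app.views.test = 'Panel';    // safe to assign without a TypeError
console.log(app.views.test); // → Panel
```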
http://www.sencha.com/forum/showthread.php?142675-(basic)-button-not-working&p=633351&viewfull=1
The Q3IconDragItem class encapsulates a drag item. More... #include <Q3IconDragItem> This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information. The Q3IconDragItem class encapsulates a drag item. The Q3IconDrag class uses a list of Q3IconDragItems to support drag and drop operations. In practice a Q3IconDragItem object (or an object of a class derived from Q3IconDragItem) is created for each icon view item which is dragged. Each of these Q3IconDragItems is stored in a Q3IconDrag object. See Q3IconView::dragObject() for more information. Constructs a Q3IconDragItem with no data. Destructor. Returns the data contained in the Q3IconDragItem. See also setData(). Sets the data for the Q3IconDragItem to the data stored in the QByteArray d. See also data().
http://doc.trolltech.com/4.5-snapshot/q3icondragitem.html
Yes, it is true. Therefore I'am using radiant v0.7.1 and rails2.1.2 for this site :) I have no time to do it better now ;( ________________ Regards, Mamed Mamedov Sent from Baku, Azerbaijan On Mon, Jul 27, 2009 at 11:51 PM, Jim Gay <j...@saturnflyer.com> wrote: > > On Jul 27, 2009, at 2:42 PM, Mamed Mamedov wrote: > > No, I didn't disabled cache, but I'm using 'translator' extension which >> changes behaviour of radiant's caching subsystem. >> Here is my some changes to *translate_response_cache.rb*: >> > > This won't work in 0.8.0. Rails 2.3 uses Rack::Cache and Radiant 0.8.0 > packages Rails 2.3.2 > ResponseCache functions were moved to Radiant::Cache, a subclass of > Rack::Cache > > > ResponseCache.class_eval { >> # in here, we're just adding a two-letter language suffix to cached >> pages to make sure that the wrong >> # language doesn't get served up because it has been cached >> inappropriately. we could change this to >> # cache in a separate directory (i.e. en/), but for now, we're just >> adding the extension >> private >> def translator_path(path) >> #path =~ /\.css|\.js$/ ? path : >> kk_request.suffixize(kk_request.language) >> path =~ /\.css|\.js$/ ? path : [ path, >> kk_request.session["gsession_color"], >> kk_request.session["gsession_design"], >> kk_request.suffixize(kk_request.language) ].join( "_" ) >> end >> As a result I have this files in my cache folder: >> selene# ll >> total 28 >> drwxrwxrwx 2 root www 512 Jul 27 23:22 _css >> -rw-rw-rw- 1 root www 22972 Jul 27 23:22 index_blue_full__az-AZ.data >> -rw-rw-rw- 1 root www 185 Jul 27 23:22 index_blue_full__az-AZ.yml >> >> For example, if user requests 'red' & 'full' version of site, then >> *'index_red_full__az-AZ' >> *will be requested. If there is no cached file, then page will be >> generated >> by radiant from db and then cached in this folder. >> >> That is solution, I think. I'am using radiant v0.7.1. My site supports >> many >> colors (design types) languages and light/full-version. 
>> ________________ >> Regards, >> Mamed Mamedov >> >> Sent from Baku, Azerbaijan >> >> >> On Mon, Jul 27, 2009 at 11:27 PM, Jim Gay <j...@saturnflyer.com> wrote: >> >> >>> On Jul 27, 2009, at 2:24 PM, Mamed Mamedov wrote: >>> >>> Hi everybody! >>> >>>> >>>> Here how I have resolved my problem with changing page's layout >>>> on-the-fly:) >>>> I have created my own extension folder with my namespace-tags for >>>> unusual >>>> tasks. >>>> >>>> desc %{ >>>> Works like design switcher. >>>> } >>>> tag 'gsession:design' do |tag| >>>> design_type = request.session["gsession_design"] >>>> if design_type == nil >>>> design_type = GenieSessionExtension.defaults[:design] >>>> request.session["gsession_design"] = design_type >>>> end >>>> design_type.strip! >>>> if layout = Layout.find_by_name(tag.attr["#{design_type}"]) >>>> tag.globals.page.layout = layout >>>> tag.globals.page.render >>>> end >>>> end >>>> >>>> Usage: >>>> I have created layout named "master-index" with only one line in it: >>>> <r:gsession:design >>>> >>>> With 2 arguments, which describes layout names for 'full' and 'light' >>>> version of my site. You can define 'gsession_*design*' session variable >>>> at >>>> any time by hitting, for example:* /genie/set/design/light* >>>> All my pages have selected 'master-index' layout. >>>> That is all, while page is rendered my tag switches current layout to >>>> 'full-index' or 'light-index' named layouts. >>>> >>>> Waiting for comments ... Thank you:) >>>> >>>> >>> Have you disabled caching to do this? >>> >>> >>> >>> ________________ >>>> Regards, >>>> Mamed Mamedov >>>> >>>> >>>> >>>> >>>> On Thu, Jul 16, 2009 at 6:16 PM, Sean Cribbs <seancri...@gmail.com> >>>> wrote: >>>> >>>> If all of your pages have the same layout at any time, make sure all >>>> >>>>> descendant pages have their layout set to <inherit> and then your >>>>> extension >>>>> could change the layout on the root page. >>>>> However, it would not be trivial to do this on a per-user basis. 
Have >>>>> you >>>>> considered something like a combination of Javascript and CSS that lets >>>>> your >>>>> users switch layouts? >>>>> >>>>> Sean >>>>> >>>>> Mamed Mamedov wrote: >>>>> >>>>> Hi everybody! >>>>> >>>>>> >>>>>> I have a little question: how can I change page's layout from within >>>>>> my >>>>>> extension? >>>>>> Problem is, that I have 2 different page layouts for my site: [ >>>>>> full-version >>>>>> and light-version ]. >>>>>> I want to write a mini-extension to switch between designs of my site >>>>>> throw >>>>>> hitting: /design/set/full and /design/set/light or /design/reset >>>>>> And I'am saving current design variable in current user's session. >>>>>> >>>>>> And now, just need to change current page's layout on the fly >>>>>> accordingly >>>>>> to >>>>>> session value. >>>>>> ________________ >>>>>> Regards, >>>>>> Mamed Mamedov >>>>>> >>>>>> _______________________________________________ >>>>> >>>> Radiant mailing list >>> Post: Radiant@radiantcms.org >>> Search: >>> Site: >>> >>> _______________________________________________ >> Radiant mailing list >> Post: Radiant@radiantcms.org >> Search: >> Site: >> > > _______________________________________________ > Radiant mailing list > Post: Radiant@radiantcms.org > Search: > Site: > _______________________________________________ Radiant mailing list Post: Radiant@radiantcms.org Search: Site:
https://www.mail-archive.com/radiant@radiantcms.org/msg05210.html
What problems does leaving debug=true cause? There are three main differences between debug=true and debug=false:

- ASP.NET Timeouts
- Batch compilation
- Code optimization

Recently my colleague Doug wrote a nice post on Nine tips for a healthy "in production" ASP.NET application….

if you go back and change debug=true to debug=false won't you get a compilation error? do you need to restart IIS after doing this?

Another slight app startup performance hit you'll have if you set debug to false is that (and I'm guessing here) when the ASP.Net app creates a dll for each aspx page, it's not finding a unique base address in memory for each dll. So when the AppDomain loads, it will have to spend time rebasing each and every dll into an empty spot in memory. If you only have 3-4 dlls, it's much less of a hit.

Gregor, You should not get an error message just by changing from debug=true to debug=false, but to avoid having some dlls batch compiled and some not, I would recommend deleting the temporary ASP.NET files when you next IISreset.
BTW… [KissUpText] Tess, your posts have been very helpful to our development team, and we really appreciate all the information you have given away. [/KissUpText]

Hi Robbie, Thanks for the nice comment :) I am assuming that you are getting "CS1595: 'UserControls.WebUserControl2' is defined in multiple places; using definition from 'c:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files\usercontrols\293a1a4b\dbb2d387\cisxatg3.dll'" or similar.

The problem basically occurs if you are using src rather than CodeBehind and your cs or vb files contain a definition for exactly the same class in exactly the same namespace. The error is really the same as what you would get if you tried to compile a dll with another class defined twice in the same namespace. The reason I am saying it happens when you use src is because if you would use CodeBehind you would have gotten an error at compile time. If the usercontrols are really the same I would avoid creating a copy, and instead use the one from the other folder. If they are different I would either give them different names if possible, and if not, make sure that the source classes are in different namespaces, such as ProjectName.FolderName.MyUserControl.

The reason you are seeing it now and not before is because you are now batch-compiling everything into one dll. Hope this helps.
Thanks Tess. Sorry, I should have included the exception message:

CS1595: '_ASP.BrokerInformation_ascx' is defined in multiple places; using definition from 'c:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files\root\f4fb459b\4deb68eb\1huxv2vg.dll'

And actually, we are using the CodeBehind attribute:

UserControl #1: <%@ Control Language="c#" AutoEventWireup="false" Codebehind="BrokerInformation.ascx.cs" Inherits="WAB.Websites.GA.UserAdmin.UserControls.BrokerInformation" TargetSchema="" %>

CodeBehind #1: namespace WAB.Websites.GA.Admin.UserControls { /// <summary> /// Summary description for BrokerInformation. /// </summary> public class BrokerInformation : UserControlBase …

UserControl #2: <%@ Control Language="c#" AutoEventWireup="false" Codebehind="BrokerInformation.ascx.cs" Inherits="WAB.Websites.GA.Admin.UserControls.BrokerInformation" TargetSchema="" %>

CodeBehind #2: namespace WAB.Websites.GA.UserAdmin.UserControls { /// <summary> /// Summary description for BrokerInformation. /// </summary> public class BrokerInformation : UserControlBase …

(This is how it looked when the error was happening.)

Also, yes, they are different controls, and we have since renamed the #2 control AdminBrokerInformation.ascx. So we are no longer receiving the error, but it looks to me like within an ASP.NET (1.x) application, all aspx and ascx files must be named uniquely (and not in the FQCN meaning of unique), because they all end up under the same "_ASP.*" namespace. Please let me know if: Assert.IsTrue(MyCommentAbove.IsCorrect);

Tess explains in detail why you should never run a production site with debug enabled: ASP.NET Memory:…

I always knew that debug=true in the web.config of an ASP.NET project was bad karma. However, this…

Every ASP.NET developer knows about this setting, or at least should. But what happens when you forget…

Why should you set debug=false?
The following links to .NET resources have been collated over time with the assistance of colleagues…

Debug info. Here's a question: if your application throws an exception which is handled in e.g. Global.asax and written to a file, will the same amount of debug information be available with and without debug symbols loaded? (In my experience it won't. With debug symbols loaded we can, for example, tell exactly which line in which file caused the exception.) Very helpful for tracking errors on the production server.

Hi Blue-fish, without the debug symbols you will not get the line and file info, but you seriously have to weigh this against perf and memory issues with running debug=true, or leaving optimization turned off (which is necessary for the symbols to match).

Right, I know it's a bit controversial to leave your production server running with debug symbols intentionally. However, considering that the sites I usually build are cached (public sites with most page requests being cache hits), I think the extra error info justifies the overhead (which I assume is 0 for a cache hit).

One of the things you want to avoid when deploying an ASP.NET application into production is to accidentally…

I disabled debug on production, but suddenly the home page stopped responding to my requests (there is a login button; when I try to log in, the page just posts back without logging into the system). What happened?!

The only thing I can think of is that disabling debugging generated some type of exception, perhaps because of duplicate class names or similar. I would attach a debugger and try to log in to see if something like that is happening.

We set debug=false in our production environment, and this caused the site to crash due to CS1595 [user control] is defined in multiple places. IISReset + wiping the Temp ASP.NET files clears up this error temporarily, but it reoccurs later if a change is made to the web.config.
I've seen all the threads about multiple DLLs, src vs. CodeBehind, compiler flags, but this is not the cause in our case. We never had this issue until setting debug=false.

Hi Jeremy, this can also happen if you have multiple virtual directories pointing to the same physical path, in which case there can be a problem like this during re-compilation of the page, because the old dlls don't get removed. It only happens if batch compilation is set to true, which is why you only see it when debug=false. You can use this to turn batch compilation off while debug is still set to false, but then you get one dll per page:

<compilation defaultLanguage="c#" debug="false" batch="false" />

HTH

Question regarding debug settings in VS 2005: there's a new setting for debug info, pdb-only. How does this compare to full debug info?

Just to avoid confusion (in case there was any :)), setting the app to debug mode in Visual Studio is different from debug=true. With PdbOnly, the dll is still compiled in release mode, but a symbol file is also generated. With debug, the dll is not optimized as much as in release mode, to allow for injection points for the debugger. However, when you start an application under a managed debugger, the debugger will set attributes on the dlls allowing tracking and less optimization even if the dll is built in release mode, so you will still be able to debug. In an unmanaged debugger this does not happen unless you do it explicitly with an ini file, but in 98% of the cases that is not important, since you don't usually do source stepping anyway with an unmanaged debugger. I would definitely recommend no debug or pdbonly in production. Having said this, for ASP.NET 2.0 this is only relevant for components used by your ASP.NET pages (not the ASP.NET pages themselves or the codebehind pages).
The compilation model for ASP.NET changed radically from 1.1 to 2.0. In 1.1 the code-behind classes were compiled at design time in Visual Studio into a dll (stored in the bin directory, usually), and in 2.0 it moved over to a different model with several types of compilation strategies and partial classes. I believe (but don't quote me on this) that if you build from Visual Studio (Visual Web Developer) you don't really compile into executable dlls anymore; it is merely a pre-check for compilation errors, which means that effectively the release/debug option in Visual Studio has no effect on the web application's dlls. The actual compilation occurs at runtime. If you have the Visual Studio help installed you can check out this topic for more info about the new compilation/deployment model: ms-help://MS.VSCC.v80/MS.MSDN.v80/MS.VisualStudio.v80.en/dv_vwdcon/html/3ef36871-2cb9-452a-8c96-2068fccead18.htm

Continuing from my previous post on common causes for memory leaks, remote debugging is another ASP.NET…

('DivSearch');") And here are the generated HTML statements: <TD class="SDToolBarButton"><asp:imagebutton</asp:imagebutton><asp:imagebutton</asp:imagebutton></TD> <TD class="SDToolBarButton"><asp:imagebutton</asp:imagebutton><asp:imagebutton</asp:imagebutton></TD> 🙂

Yes, but what about the AjaxToolkit dlls that have debug set to true?

Hello Tess, sorry to resurrect this old zombie issue, but… I notice in your article you make no mention of .asmx web services. Does this setting have as big an effect on these too? I'm trying to investigate a performance issue on a middle-tier web server that exclusively hosts .asmx web services which are implemented as C# dlls. I think what I'm not sure about 😀 is whether this setting affects the JIT/runtime compilation of the web service's MSIL objects (i.e.
the dlls) and therefore presumably will affect the performance/memory usage of these web services too, or is it just the assembly of those web-app pages and 'code-behind' *.aspx.vb/cs … er … things. Hopefully you can make a sensible question out of this 🙂 Thanks, Tim

Do you have any general benchmarks on how much smaller the memory footprint can be if debug mode = false? Are we talking 50% more memory, twice as much memory, or more? I understand that the answer is "it depends on your app", but do you have any rules of thumb? We've got a pretty complex app with hundreds of thousands of lines of code. We would prefer to keep the debug info in there because we're concerned about the effort to retest the app and the side effects it would have on our support team, who are used to using the debug info to resolve customer cases. On the other hand, our app has a huge memory footprint. If compiling in release mode is going to reduce it by 50% or more, then we may need to look into it. Thanks in advance.

Hi Krister, there is really no good way to say. Picture this… if you have 20 pages = 20 assemblies, and your app then allocates 1 GB worth of memory for various datasets etc., then the overhead of the debug data for the 20 assemblies would be negligible. If, on the other hand, you have 1000 assemblies/pages and your app is otherwise frugal with memory usage, then the overhead is not so negligible, so it is really impossible to say. Even for a specific page/assembly you can't give a percentage, and it also depends on whether the page is changed during the app lifetime. The best thing is probably to test with debug=true and debug=false and see. But more importantly, with debug=true you will not take advantage of timeouts or other code optimization, and that is really a bigger issue than the memory usage. What type of debug info does your support team use that would not be available if you have debug=false?
Debug.Write output, for example, would still be present…

Tim, yes, it applies to web services as well. Web services (asmx) are basically a subset of ASP.NET.

We've run into this issue in our ASP.NET 3.5 code where the machine.config has deployment retail=true and our code managed to get updated in prod with debug=true in the web.config. However, since the machine.config is set, it should have been fine, yet it seems this is not working. Do you know if this is something that works in 3.5? A look at my memory dumps during an issue led me to the setting, and then a mass debate occurred between systems, architects/devs and our team over how this could not happen… the machine.config settings are there. Any experience with this? ASP.NET 3.5.20729.1 (.NET 3.5 SP1) on Windows 2003 64-bit using the 64-bit framework. Thanks!

First I thought that debug=true would override this, but after looking at this in more detail, both in dumps and in documentation, I found that retail=true disables the debug=true setting, so if retail=true then debug=true should not be in effect. Thanks, Tess

Thanks Tess for the response… we have an issue where the machine.config has retail=true, yet when debug=true is set on the web applications our performance goes out the window. If we set it to false (and why wouldn't we in production anyway?) it goes back to normal. Our issue has really only been that it slips through in deployments sometimes, and the last time it got us we put in a critical support ticket with MS, who turned around and said you have debugging enabled in production and we see in your dumps that it is eating a lot of your resources. Which started the debate on retail=true and debug=true. It seems to me there are a few gotchas still out there, and possibly some further digging to do into what really changes when both these settings are true versus retail=true and debug=false.

If debug=true you will NOT get time-outs, no matter what the setting is for retail in machine.config.
That is because the ASP.NET deadlock detection mechanism literally looks in web.config, and if debug=true it theoretically runs for an infinite amount of time so that you have more than enough time to attach a debugger.

Hi Tess, thank you for your posts. I would not have solved my application's memory issues without this particular post and all the debugging tips throughout your blog. I've really learned a lot!

How do I run a nifty command in sos.dll?

Hi Tess, our application uses one dll called ISDBLayer.dll. This is one of the class library projects in a DLL solution. When I compile the solution in debug mode and deploy it to production, its size is about 48 KB and the app works fine. Whereas when I compile it in release mode (where the size becomes 44 KB) and then deploy it to prod, the application gives an error. This is baffling to me. Any ideas why such a thing would happen just because of a change in the debug/release option? Thanks, Syed.

Hi Tess, this article is a great read. We have a web application project and use custom HTTP handlers. It has a file import feature to import data from various sources. My question is: does this affect custom HTTP handlers? Are the custom HTTP handlers compiled at runtime?

I have a crazy problem. One of our applications (still running under .NET 1.1) runs well in our local DEV environment but fails on our PT server. ASP.NET debug=false in PT. Any help?

Compilation Error. Description: An error occurred during the compilation of a resource required to service this request. Please review the following specific error details and modify your source code appropriately.
Compiler Error Message: CS1595: 'MoreThan.WebInterface.Main.Global' is defined in multiple places; using definition from 'c:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files\root\1627369f\d5eca90e\assembly\dl2\3f4ed17e\0b566d5_c21ad001\MoreThan.WebInterface.Main.Global.asax.cs.DLL'

Source Error:

Line 25:
Line 26:
Line 27: public class Global_asax : MoreThan.WebInterface.Main.Global {
Line 28:
Line 29: private static bool __initialized = false;

Source File: c:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\Temporary ASP.NET Files\root\1627369f\d5eca90e\y5v2dlol.0.cs Line: 27

Version Information: Microsoft .NET Framework Version: 1.1.4322.2511; ASP.NET Version: 1.1.4322.2515
https://blogs.msdn.microsoft.com/tess/2006/04/12/asp-net-memory-if-your-application-is-in-production-then-why-is-debugtrue/
Trees

Trees are structurally a lot like linked lists. Actually, a linked list is a simplistic kind of tree.

Tree definition

Recall that a linked list "node" had a value and a "next" pointer. The tree structure also has (one or more) values plus (one or more) pointers:

class Tree {
public:
    double value;
    Tree *left;
    Tree *right;
};

The only difference from a linked list is that the tree has two (or more) "next pointers," or as we label them in the class, a "left" and a "right" next pointer. We use the NULL (0) pointer to indicate that either the left or right pointer points to nothing. A tree can have any number of next pointers; if it always has two (each of which may or may not equal NULL), like we will do here, it's a binary tree. You can surely imagine a tree that has three pointers, or even an array of pointers, rather than just two. Thus, a tree's "next pointers" are more like "children": each node in a binary tree has zero, one, or two children. Those children are also trees.

Additionally, a tree has only one "parent." That is to say, for each node in the tree, either that node is the top (the "root") or only one node links to it (it is the child of only one node). If we did not restrict trees in this way, and allowed different nodes to link to the same children, then our algorithms for processing such structures would be more complicated. Actually, we wouldn't even call them trees anymore; the correct terminology is "graph." A graph is a structure where every node can have links to any other node, even back to itself (so trees are a kind of graph). See the graphs notes for more details.

Building a tree

Consider a kind of binary tree that represents arithmetic expressions, like 2 + sin(5) and so on. Each node in the tree is either a number or an operator (+, sin, etc.). Some operators are unary (sin), some are binary (+). Unary operators have a left subtree but no right subtree. Binary operators have both subtrees.
Here is our new tree structure:

class Tree {
public:
    Tree *left;
    Tree *right;
    std::string op;
    double val;
};

(We use std::string instead of string in case using namespace std; is not present.)

Let's now build a tree. Suppose we want to represent the expression 3.4-(2.6+5.0). This is what the tree should look like:

Here is how we build it in code. Note that we have to make a variable for each node, set each variable's values, then link the variables together.

Tree m;
m.op = "-";
Tree p;
p.op = "+";
Tree v1;
v1.val = 3.4;
Tree v2;
v2.val = 2.6;
Tree v3;
v3.val = 5.0;
m.left = &v1;
m.right = &p;
p.left = &v2;
p.right = &v3;
v1.left = v1.right = v2.left = v2.right = v3.left = v3.right = NULL;

This code makes the variable m (for "minus") the top node, i.e. the "root" of the tree.

Processing the tree: printing its values

Now, suppose you have a tree; actually, suppose you have a variable root that's a pointer to a tree. For example,

Tree *root = &m;

Starting with this "root" pointer, how do we get to all the nodes in the tree and print their values? Specifically, how do we print the values in the tree in such a way as to arrive at an equivalent arithmetic expression (perhaps with extra parentheses)? Our task is to take the tree we've defined and print the contents of the tree in this form:

(3.4-(2.6+5.0))

You'll notice that the printed form of the tree follows a simple pattern: for every operation, print a (, then the contents of the left subtree, then the operation (e.g. +), then the contents of the right subtree, and finally a ).

Unlike linked lists, trees are non-linear, so we cannot process them in a linear manner. For some node in the tree, we cannot know how big a subtree there is on the left side or the right side. So we cannot simply set up a loop to process the left and right subtrees; instead, we must resort to a recursive procedure.
Notice that the description above of how to print the contents of the tree was a recursive description: "print a (, then the contents of the left subtree, then …" Suppose we call this process print_tree. Then we can rewrite the description like this:

The print_tree procedure works as follows: print a (. Then, follow the print_tree procedure (this very same procedure) on the left subtree. Then, print the operation (e.g. +). Then, follow the print_tree procedure on the right subtree. Finally, print a ).

There are a few minor problems with that description. First, if the node is just a value (that is, op == ""), then we just print the value and don't bother with ( or ) or left and right subtrees (values don't have left or right subtrees; those pointers are NULL pointers). Also, if there is no tree (the node we're looking at is just a NULL pointer), then we don't need to do anything.

Now we can code this procedure in C++, as a function:

void print_tree(Tree *root) {
    if(root == NULL) return;
    if(root->op == "") {
        cout << root->val;
    } else {
        cout << "(";
        print_tree(root->left);
        cout << root->op;
        print_tree(root->right);
        cout << ")";
    }
}

Using the function like this:

print_tree(root)

results in the printout (3.4-(2.6+5)), which is just what we want.

A slight variation: handling other kinds of operators

As described above, these trees may have both binary and unary operators. If we use the same print_tree function with a tree that has a unary operator in it, we get output that doesn't look right. For example, the tree

          /
        /   \
     sin     cos
      /       /
   4.00    4.00

…prints as ((4sin)/(4cos)) (do you see why?). We can fix this by adding another conditional in our code. Unary operators have only a left subtree, and no right subtree (while operators +, -, etc. always have subtrees on both sides). So, we simply check if we have an operator and there is no right subtree; if so, we print the operator first, then (, then the left subtree, then ).
void print_tree(Tree *root) {
    if(root == NULL) return;
    if(root->op == "") {
        cout << root->val;
    } else if(root->right == NULL) {
        cout << root->op;
        cout << "(";
        print_tree(root->left);
        cout << ")";
    } else {
        cout << "(";
        print_tree(root->left);
        cout << root->op;
        print_tree(root->right);
        cout << ")";
    }
}

Now, that same tree prints as (sin(4)/cos(4)), which is what we want.

Self-practice exercises

- The print_tree method above is a bit more complicated than a simple pre-order, in-order, or post-order traversal. Write each of these other tree printing functions.
- Write a tree destructor, ~Tree.
- Write a tree search function, bool search_tree(double search_val), that looks through the whole tree for search_val.
- Write a function, void delete_subtree(double search_val), that deletes the whole subtree starting at the first occurrence of search_val (if it is found). If search_val is found multiple times in the tree, only the first occurrence (and its subtree) is deleted.
- Write a double multiply_collapse() method that multiplies every value in the tree together and returns the result.
http://csci221.artifice.cc/lecture/trees.html
How to Build a Virtual Live Event Site With Laravel, Vue.js and CometChat

The COVID-19 pandemic has affected not only the physical gathering of software engineers for events such as local meet-ups and global annual conferences, but also other live events where the gathering of one or more persons is inevitable. To prevent exposure to the virus, participants of would-be live events prefer to have them hosted online and to join from the comfort of their homes. A couple of options such as YouTube and Zoom come to mind for such events. But imagine if you needed to build an application with functionality and features robust enough to host a live event; that would be a lot of work.

In this tutorial, I will show you a simpler way to build a virtual event site, either as extra functionality for your existing application or as a stand-alone system. For this, we will use Laravel to build the appropriate backend logic and then use Vue.js to handle all the interface and frontend logic. The core and most interesting section of this application is the chat feature, which will allow participants of an event to chat with each other. To achieve this, we will leverage the CometChat infrastructure.

While I will do my best to explain any complex concepts and terms in this tutorial, the following prerequisites will help you get the best out of this post.

The application that will be built in this tutorial is a live event platform that will allow users to view multiple events and be able to select, join and participate in a chat during the live session, as shown here:

Users of this application will be able to:

Bear in mind that in an ideal production environment, it is advisable to allow only a user with the required privilege, such as an admin, to create and moderate the live events.
But to reduce complexity in this post, we will allow all registered users access to creating events.

This application will allow users to register by providing details such as a name, email and an optional avatar_url. Once the user is created successfully, we will utilize the CometChat RESTful API to create such a user on CometChat. To uniquely identify and easily authenticate each user later on CometChat, an authentication token will be generated and saved against the user's details in the database. Also, for the login process, a user with the appropriate credentials will provide his or her details, and once successfully authenticated, such a user will also be logged in on CometChat by making an API call using the REST API.

Lastly, one of the most important features of this application is the live event. We will set up different endpoints to manage events in a bit. Every event will be regarded as a group on CometChat, so once created within our application, we will send an API request to CometChat to easily create the corresponding group with a particular guid as well. Only authenticated users who belong to a particular group (event) will be able to join and participate in any chat.

In this section we will begin by using Composer to install Laravel. Issue the following command from the terminal:

composer create-project laravel/laravel lara-virtual-event

The preceding command will create a fresh Laravel installation in a directory named lara-virtual-event and place it in your development folder or wherever you ran the command from. Then, navigate to the new project and run it using the following commands:

View the Laravel welcome page in your browser.

First we will modify the existing User model and its corresponding migration file that comes installed with Laravel by default, and after that we will create a migration file for events.
Start by opening the User table migration file and replacing its content with the following:

In addition to the default fields, we modified the file to create fields for token and avatar_url in the database. Open the User model file in app/Models/User.php and modify the $fillable property to reflect this:

To create and manage events, we will need to create an Eloquent model, a database migration file and a controller. Run the command below to do all three:

php artisan make:model Event -mc

With the preceding command, we took advantage of the flexibility offered by the Laravel Artisan command to automatically generate a model, a migration file and a controller using a single command. The m and c options stand for a migration and a controller, respectively.

With that in place, let's edit the default contents of the migration file and model. Start by opening the migration file in the database/migrations folder and replace its content with the following:

Next, update the $fillable property by using the following content for the Event model located in the app/Models/Event.php file:

With all the appropriate models and migration files created, proceed to create a database for your application and also update the .env file with its details:

Swap YOUR_DB_NAME, YOUR_DB_USERNAME and YOUR_DB_PASSWORD with the appropriate credentials. Now, issue the following command to run the migrations and create the appropriate fields in the database:

php artisan migrate

After running the previous command to set up the database tables, we will proceed to set up and implement the authentication logic and other backend business logic for the application.

To begin the authentication aspect of this application, we will utilize the laravel/ui scaffolding package. Run the following command to install it using Composer:

composer require laravel/ui

This package provides the Bootstrap and Vue scaffolding for a new Laravel project.
Once the package has been installed, we will use the ui Artisan command to install the complete frontend scaffolding. Issue the following command to achieve that:

php artisan ui vue --auth

This will install Vue.js and also create authentication controllers as well as the corresponding views. After the installation is done, run the command below to install all the dependencies listed in the package.json file for the UI scaffold:

npm install

Next, run these commands to install additional dependencies:

npm install [email protected]^15.9.5 --save-dev --legacy-peer-deps
npm install moment

In this section, we will start working on the business logic for the backend API by modifying the authentication logic within Laravel to fit our use case and also creating the appropriate logic to manage events.

Start with the RegisterController found in the app/Http/Controllers/Auth folder and replace its content with:

In the file above, we specified the required fields that should be validated when registering a user.

In this section, we will update the EventController.php file with the appropriate code to create and view events. Bear in mind that we want Laravel to return the required views to render the form for creating events, but use Vue.js components to handle the logic instead of the Blade template engine. Open the app/Http/EventController.php file and replace its content with the following code:

First, we created an index() method to render a view that will display the list of events as retrieved from the database. Immediately after that, we created the following methods:

With the logic out of the way, navigate to the routes/web.php file and update its content with the following:

We included the endpoints to reference all the methods created within our controllers. These endpoints will be called from within the frontend of the application. Here, instead of handling all the logic with the Blade template engine, Vue.js components will be used.
Also, we will install and leverage the UI kit created by CometChat to easily configure the group chat view. Since we have already installed Vue.js and its dependencies earlier, we will start by initializing CometChat. Initializing CometChat within an application is part of the core concepts specified for communicating with the CometChat APIs. To do that within our application, open the resources/js/app.js file and use the following code for it:

Just before we start creating the necessary Vue.js components for this application, we need to set up a few configurations. To begin, run the following command from the terminal to install @babel/plugin-proposal-class-properties:

npm install @babel/plugin-proposal-class-properties

Once the installation is done, create a new file named .babelrc within the root of your application and use the following content for it:

Next, navigate to the webpack.mix.js file and update its content as shown here:

Here, we specified that any file with an extension of .wav be handled by Webpack using file-loader.

In this section, we will create register, login and event views for users. To register, a user must provide details such as a name, email, password and an optional avatar_url. To create an event, a user must provide the event title, description and YouTube live stream URL. Populate the createEvent.vue file with the following code to receive those inputs:

Next, add the following to the <script></script> section. In the file above, we defined a method named createEvent() to create an event as a group on CometChat. Once successful, we also created the details of such an event in our database.

Navigate to the Events.vue file and paste the following code to display the list of created events:

The content above will render a list of live events and also display the thumbnail for each event. Next, paste the following code in the <script></script> section of the file:

Once the component was ready, we sent a GET HTTP request to our API to retrieve the list of events from the database.
We then proceeded to create two other methods:

Lastly, for styling purposes, add the following styles to the <style></style> section of the Events.vue file:

The single event component will render a video container and a group chat view. To set this up, use the following code for the <template></template> section of the Event.vue file:

Aside from the video container section, we introduced and used a comet-chat-messages component to render messages for the particular group (event) selected from the event listing. This component was imported from the CometChat UI kit for Vue.js and it takes the type and the group object as props. The props are very important, as this component will throw an error without them. Once this component is mounted, we call CometChat.getGroup() to retrieve the group object and assign it to the groupEvent variable referenced within the view.

To make this page layout more appealing and properly structured, use the following style for the <style></style> section of the file:

So far, we have built the user interface from smaller Vue.js components. Before we can make use of them within the appropriate Blade templates, register them as Vue.js components by adding them to resources/js/app.js as shown here:

Here, we will make use of all the components created so far within each Blade file from Laravel. Start with the login.blade.php file in the resources/views/auth folder and replace its content with the following:

For resources/views/auth/register.blade.php, use the following content:

The list of events will be rendered within the resources/views/home.blade.php file, so open it and replace its content with the following:

Change the content rendered on the homepage by replacing the contents of the resources/views/welcome.blade.php file with the following:

Open resources/sass/app.scss and update its content with:

This is the CDN file for the Font Awesome icons.
Create a new folder named events within the resources/views folder and then create two new files named create-event.blade.php and event.blade.php inside the newly created folder. Next, open event.blade.php and use the following content for it: And for the create-event.blade.php file, update its content with the following: To update the layout file, replace the content of the resources/views/layouts/app.blade.php file with the following: Now that we have included the appropriate logic for the frontend of our application, we can serve it by issuing the commands below. The first command will run the Laravel application:
php artisan serve
While the command below will compile all the assets, including Vue:
npm run watch
If you come across this error, it is because we have two images compiled by webpack with the same name and casing from different locations within the project. One of the best temporary fixes is to rename one of the files and also update its reference. Navigate to the resources/js/cometchat-pro-vue-ui-kit/src/components/Messages/CometChatMessageActions/resources folder and rename edit.png to editIcon.png. Also rename add-reaction.svg to add-reaction-icon.svg. Next, open the resources/js/cometchat-pro-vue-ui-kit/src/components/Messages/CometChatMessageActions/CometChatMessageActions.vue file and update the imports of the files renamed above as shown below:
import reactIcon from "./resources/add-reaction-icon.svg";
import editIcon from "./resources/editIcon.png";
Next, ensure that the application is opened in two different terminals and then proceed to run the following commands again:
php artisan serve
and, from the second terminal:
npm run watch
Navigate to the application from your browser. Next, go ahead and register, or log in if you have already created an account.
You can create a new event or view the list of available events and select one to join the stream, as shown here: Conclusion In this post, we built a virtual live event system where authenticated users can create multiple events, add a YouTube live stream link, and most importantly chat with one another. The chat functionality was implemented easily by taking advantage of the existing robust APIs and components created by CometChat. This saved us a lot of time. The complete source code for the project built in this tutorial can be found here on GitHub.
https://www.cometchat.com/tutorials/how-to-build-a-virtual-event-site-for-laravel-php-and-vue
Hi Michael,

What we have learned from creating the Zope Toolkit (formerly Zope 3) is that __init__.py files in namespace packages should be empty, and imports should be absolute. [1]

That said, there are ways to avoid import cycles. One is to very carefully craft your modules so they do not have to import from each other. Another is to not have imports at the module level, but move them into the functions where they are required. Third, large libraries like the Zope Toolkit usually have mechanisms to defer imports to some point after initial loading. You may want to explore this direction as well. [2]

(Not trying to plug the ZTK here, it just happens to be a large, namespace-using library I know.)

Hope this helps,
Stefan

[1]
[2]

--
Stefan H. Holek
stefan at epy.co.at
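The second approach above, moving the import into the function that needs it, looks like this in practice. This is a sketch with hypothetical module names spam and eggs, simulated in a single file (via sys.modules) so it can run standalone:

```python
import sys
import types

# Hypothetical modules 'spam' and 'eggs' that would import each other.
# A module-level "import eggs" in spam.py (and vice versa) can fail while
# the modules are still half-initialized; deferring the import into the
# function postpones the lookup until both modules are fully loaded.
spam = types.ModuleType("spam")
eggs = types.ModuleType("eggs")
sys.modules["spam"] = spam
sys.modules["eggs"] = eggs

def make_sandwich():
    import eggs  # deferred: resolved from sys.modules at call time
    return eggs.scramble() + " on toast"

def scramble():
    return "scrambled eggs"

spam.make_sandwich = make_sandwich
eggs.scramble = scramble

print(spam.make_sandwich())  # -> scrambled eggs on toast
```

By the time make_sandwich() is actually called, both modules exist and are fully populated, so the deferred import succeeds even though a module-level mutual import would not.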
https://mail.python.org/pipermail/python-list/2012-November/634384.html
From: Ion Gaztañaga (igaztanaga_at_[hidden])
Date: 2006-09-24 13:52:29
Hi Christopher,
> You can reverse the inclusion order to make it work.
But this is very annoying since headers from every boost library using date-time would also have those dependencies.
> Defining WIN32_LEAN_AND_MEAN will also prevent windows.h from including winsock.h. The asio headers already include the following construct to #define WIN32_LEAN_AND_MEAN:
>
> # if !defined(BOOST_ASIO_NO_WIN32_LEAN_AND_MEAN)
> # if !defined(WIN32_LEAN_AND_MEAN)
> # define WIN32_LEAN_AND_MEAN
> # endif // !defined(WIN32_LEAN_AND_MEAN)
> # endif // !defined(BOOST_ASIO_NO_WIN32_LEAN_AND_MEAN)
>
> Perhaps something similar should be added to date_time, since it does not need winsock.h?
Seems the most reasonable answer, if the lean-and-mean configuration has all the necessary headers Boost.DateTime needs.
Regards,
Ion
Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
http://lists.boost.org/Archives/boost/2006/09/110668.php
Many components are fairly simple. They might consist of a single class with a simple object model or no object model. Other components are more complex, and might need to contain and manage a large number of subordinate objects. Nested classes are one way for complex components to contain and manage the objects they need. A nested class is a class that is fully enclosed within another class declaration. For example, a nested class and an enclosing class might look like the following example:

' Visual Basic
' This is the enclosing class, whose class declaration contains the nested
' class.
Public Class EnclosingClass
    ' This is the nested class. Its class declaration is fully contained
    ' within the enclosing class.
    Public Class NestedClass
        ' Insert code to implement NestedClass.
    End Class
    ' Insert code to implement EnclosingClass.
End Class

// C#
// This is the enclosing class, whose class declaration contains the
// nested class.
public class EnclosingClass
{
    // This is the nested class. Its class declaration is fully contained
    // within the enclosing class.
    public class NestedClass
    {
        // Insert code to implement NestedClass.
    }
    // Insert code to implement EnclosingClass.
}

In this example, the class declaration for NestedClass is fully contained by the class declaration of EnclosingClass. As a result of being contained within the enclosing class, the nested class gains a certain level of protection. Unless you use the Imports (using in C#) statement, all references to the nested class must be qualified with the name of the containing class. For example, to instantiate the nested class in the previous example, you would have to use the following syntax:

' Visual Basic
Dim aClass as New EnclosingClass.NestedClass()

// C#
EnclosingClass.NestedClass aClass = new EnclosingClass.NestedClass();

The access level of the nested class is implicitly limited to at most the access level of the enclosing class.
Even if the nested class is Public, if the enclosing class is Friend (internal in C#), then only members of the assembly will be able to access the nested class, and if the enclosing class is Private, then the nested class will be unavailable to all callers except the enclosing class. Assuming a Public enclosing class, the access level of the nested classes pretty much follows the same rules as for access of unnested classes. Friend classes are available to members of the assembly, but not external clients. Private classes are available to the enclosing class, other nested classes within the enclosing class, and any classes nested within other nested classes. Nested classes are useful when an object will logically contain subordinate objects, but no other objects would have use for those objects. An example might be a Wheel class. This could be a class that clients could create and use wherever a wheel might be needed in their application. But except in the most primitive implementations, a wheel is not just a single object, but is composed of several subordinate objects, each of which helps make up the wheel. A wheel might have a Rim object, Tire object, Spoke objects and other objects, without which the wheel could not function. But the average user would have no need to create a spoke, or a rim, or a bearing — all he's interested in is the wheel. In a case like this, it makes sense for the Wheel class to contain the implementation for all of its subordinate classes. This way, the wheel can create and manage any contained objects it may need while conveniently hiding the details of this implementation from the client. Those subordinate objects the client might reasonably need to have contact with now and again (for example, a Tire object) can be exposed as part of the public object model, and those that a client should never see (for example, a Bearings collection) could be declared private and hidden.
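A C# sketch of the Wheel example might look like this (hypothetical class and member names, not from the original article; only Tire is exposed, while Bearings stays private):

```csharp
public class Wheel
{
    // Subordinate object a client might reasonably need now and again,
    // so it is exposed as part of the public object model.
    public class Tire
    {
        public int PressurePsi { get; set; }
    }

    // Implementation detail the client should never see: available only
    // to Wheel itself and to classes nested within it.
    private class Bearings
    {
        public bool Lubricated = true;
    }

    private readonly Bearings bearings = new Bearings();

    public Tire MountedTire { get; } = new Tire { PressurePsi = 32 };
}

// Callers qualify the nested type with the enclosing class name:
// Wheel.Tire spare = new Wheel.Tire { PressurePsi = 30 };
```

Because Bearings is private, only Wheel can create or manage it, while Wheel.Tire remains reachable from outside through the qualified name.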
http://msdn.microsoft.com/en-us/library/cbwxw0ye(VS.71).aspx
Low-level concrete targets are specific to the Boost.Jam build engine. A concrete target is essentially a string—most often the name of a file. In most cases, you will only have to deal with concrete targets and the process that creates concrete targets from metatargets. Extending the metatarget level is rarely required. The jam targets are typically only used inside the command line patterns. A metatarget is an object that records information specified in a Jamfile, such as the metatarget kind, name, sources and properties, and can be called with specific properties to generate concrete targets. At the code level it is represented by an instance of a class derived from abstract-target. A metatarget can be found by target-id using the targets.resolve-reference function, and the targets.generate-from-reference function can both look up and generate a metatarget. The abstract-target class has three immediate derived classes:
project-target corresponds to a project and is not intended for further subclassing. The generate method of this class builds all targets in the project that are not marked as explicit.
main-target corresponds to a target in a project and contains one or more target alternatives. This class also should not be subclassed. The generate method of this class selects an alternative to build, and calls the generate method of that alternative.
basic-target corresponds to a specific target alternative. This is a base class, with a number of derived classes. The generate method processes the target requirements and requested build properties to determine the final properties for the target, builds all sources, and finally calls the abstract construct method with the list of source virtual targets and the final properties.
The instances of the project-target and main-target classes are created implicitly—when loading new Jamfiles, or when a new target alternative with an as-yet unknown name is created.
The instances of the classes derived from basic-target are typically created when a Jamfile calls a metatarget rule, such as exe. It is permissible to create a custom class derived from basic-target and a new metatarget rule that creates an instance of such a target. However, in the majority of cases, a specific subclass of basic-target—typed-target—is used. That class is associated with a type and relays to generators to construct concrete targets of that type. This process will be explained below. When a new type is declared, a new metatarget rule is automatically defined. That rule creates a new instance of typed-target, associated with that type. Concrete targets are represented by instances of classes derived from virtual-target. The most commonly used subclass is file-target. A file target is associated with an action that creates it—an instance of the action class. The action, in turn, holds a list of source targets. It also holds the property-set instance with the build properties that should be used for the action. Here's an example of creating a target from another target, source:

local a = [ new action $(source) : common.copy : $(property-set) ] ;
local t = [ new file-target $(name) : CPP : $(project) : $(a) ] ;

The first line creates an instance of the action class. The first parameter is the list of sources. The second parameter is the name of a jam-level action. The third parameter is the property-set applying to this action.
The second line creates a target. Every virtual target should be registered with the virtual-target.register function; besides allowing Boost.Build to track which virtual targets got created for each metatarget, this will also replace targets with previously created identical ones, as necessary.[17] Here are a couple of examples:

return [ virtual-target.register $(t) ] ;
return [ sequence.transform virtual-target.register : $(targets) ] ;

In theory, every kind of metatarget in Boost.Build (like exe, lib or obj) could be implemented by writing a new metatarget class that, independently of the other code, figures out what files to produce and what commands to use. However, that would be rather inflexible. For example, adding support for a new compiler would require editing several metatargets. In practice, most files have specific types, and most tools consume and produce files of a specific type. To take advantage of this fact, Boost.Build defines the concept of target types and generators, and has the special metatarget class typed-target. A target type is merely an identifier. It is associated with a set of file extensions that correspond to that type. A generator is an abstraction of a tool. It advertises the types it produces and, if called with a set of input targets, tries to construct output targets of the advertised types. Finally, typed-target is associated with a specific target type, and relays to the generator (or generators) for that type. A generator is an instance of a class derived from generator. The generator class itself is suitable for common cases. You can define derived classes for custom scenarios. Say you're writing an application that generates C++ code. If you ever did this, you know that it's not nice. Embedding large portions of C++ code in string literals is very awkward. A much better solution is available, and it's quite easy to achieve. You write special verbatim files that are just C++, except that the very first line of the file contains the name of a variable that should be generated.
A simple tool is created that takes a verbatim file and creates a cpp file with a single char* variable whose name is taken from the first line of the verbatim file and whose value is the file's properly quoted content. Let's see what Boost.Build can do. First off, Boost.Build has no idea about "verbatim files". So, you must register a new target type. The following code does it:

import type ;
type.register VERBATIM : verbatim ;

The first parameter to type.register gives the name of the declared type. By convention, it's uppercase. The second parameter is the suffix for files of this type. So, if Boost.Build sees code.verbatim in a list of sources, it knows that it's of type VERBATIM. Next, you tell Boost.Build that verbatim files can be transformed into C++ files in one build step. A generator is a template for a build step that transforms targets of one type (or set of types) into another. Our generator will be called verbatim.inline-file; it transforms VERBATIM files into CPP files:

import generators ;
generators.register-standard verbatim.inline-file : VERBATIM : CPP ;

Lastly, you have to inform Boost.Build about the shell commands used to make that transformation. That's done with an actions declaration.

actions inline-file
{
    "./inline-file.py" $(<) $(>)
}

Now, we're ready to tie it all together. Put all the code above in a file verbatim.jam, add import verbatim ; to Jamroot.jam, and it's possible to write the following in your Jamfile:

exe codegen : codegen.cpp class_template.verbatim usage.verbatim ;

The listed verbatim files will be automatically converted into C++ source files, compiled and then linked to the codegen executable. In subsequent sections, we will extend this example, and review all the mechanisms in detail. The complete code is available in the example/customization directory. The first thing we did in the introduction was to declare a new target type. This section will describe how Boost.Build can be extended to support new tools.
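For reference, the inline-file.py script invoked by the action above might look something like the following Python sketch. This is an assumption-based reconstruction of the behaviour described earlier (the first line of the verbatim file names the variable, the rest becomes a quoted char* literal); the real script ships in the example/customization directory.

```python
#!/usr/bin/env python
# Hypothetical reconstruction of inline-file.py: converts a "verbatim"
# file into a .cpp file holding its contents as a char* variable.
import sys

def convert(verbatim_text):
    lines = verbatim_text.splitlines()
    variable = lines[0].strip()   # first line names the C++ variable
    body = "\n".join(lines[1:])   # the rest is the payload
    # Escape backslashes, quotes and newlines so the payload survives
    # as a single C string literal.
    quoted = (body.replace("\\", "\\\\")
                  .replace('"', '\\"')
                  .replace("\n", "\\n"))
    return 'extern const char* %s;\nconst char* %s = "%s";\n' % (
        variable, variable, quoted)

if __name__ == "__main__" and len(sys.argv) == 3:
    source, target = sys.argv[1], sys.argv[2]
    with open(source) as f:
        text = f.read()
    with open(target, "w") as f:
        f.write(convert(text))
```

Registered as shown above, the build engine would run this script once per verbatim source file, passing the source and target paths on the command line.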
For each additional tool, a Boost.Build object called a generator must be created. That object has specific types of targets that it accepts and produces. Using that information, Boost.Build is able to automatically invoke the generator. For example, if you declare a generator that takes a target of the type D and produces a target of the type OBJ, placing a file with the extension .d in a list of sources will cause Boost.Build to invoke your generator, and then to link the resulting object file into an application. (Of course, this requires that you specify that the .d extension corresponds to the D type.) Each generator should be an instance of a class derived from the generator class. In the simplest case, you don't need to create a derived class, but simply create an instance of the generator class. Let's review the example we've seen in the introduction.

import generators ;
generators.register-standard verbatim.inline-file : VERBATIM : CPP ;

actions inline-file
{
    "./inline-file.py" $(<) $(>)
}

We declare a standard generator, specifying its id, the source type and the target type. When invoked, the generator will create a target of type CPP with a source target of type VERBATIM as the only source. But what command will be used to actually generate the file? In bjam, actions are specified using named "actions" blocks, and the name of the action block should be specified when creating targets. By convention, generators use the same name for the action block as their own id. So, in the above example, the "inline-file" actions block will be used to convert the source into the target. There are two primary kinds of generators: standard and composing, which are registered with the generators.register-standard and the generators.register-composing rules, respectively.
For example:

generators.register-standard verbatim.inline-file : VERBATIM : CPP ;
generators.register-composing mex.mex : CPP LIB : MEX ;

The first (standard) generator takes a single source of type VERBATIM and produces a result. The second (composing) generator takes any number of sources, which can have either the CPP or the LIB type. Composing generators are typically used for generating top-level target types. For example, the first generator invoked when building an exe target is a composing generator corresponding to the proper linker. You should also know about two specific functions for registering generators: generators.register-c-compiler and generators.register-linker. The first sets up header dependency scanning for C files, and the second handles various complexities like searched libraries. For that reason, you should always use those functions when adding support for compilers and linkers. (Need a note about UNIX) The standard generators allow you to specify source and target types, an action, and a set of flags. If you need anything more complex, you need to create a new generator class with your own logic. Then, you have to create an instance of that class and register it. Here's an example of how you can create your own generator class:

class custom-generator : generator
{
    rule __init__ ( * : * )
    {
        generator.__init__ $(1) : $(2) : $(3) : $(4) : $(5) : $(6) : $(7) : $(8) : $(9) ;
    }
}

generators.register [ new custom-generator verbatim.inline-file : VERBATIM : CPP ] ;

This generator will work exactly like the verbatim.inline-file generator we've defined above, but it's possible to customize the behaviour by overriding methods of the generator class. There are two methods of interest. The run method is responsible for the overall process - it takes a number of source targets, converts them to the right types, and creates the result. The generated-targets method is called when all sources are converted to the right types to actually create the result.
The generated-targets method can be overridden when you want to add additional properties to the generated targets or use additional sources. For a real-life example, suppose you have a program analysis tool that should be given the name of an executable and the list of all sources. Naturally, you don't want to list all source files manually. Here's how the generated-targets method can find the list of sources automatically:

class itrace-generator : generator
{
    ....
    rule generated-targets ( sources + : property-set : project name ? )
    {
        local leaves ;
        local temp = [ virtual-target.traverse $(sources[1]) : : include-sources ] ;
        for local t in $(temp)
        {
            if ! [ $(t).action ]
            {
                leaves += $(t) ;
            }
        }
        return [ generator.generated-targets $(sources) $(leaves)
            : $(property-set) : $(project) $(name) ] ;
    }
}
generators.register [ new itrace-generator nm.itrace : EXE : ITRACE ] ;

The generated-targets method will be called with a single source target of type EXE. The call to virtual-target.traverse will return all targets the executable depends on, and we further find files that are not produced from anything. The found targets are added to the sources. The run method can be overridden to completely customize the way the generator works. In particular, the conversion of sources to the desired types can be completely customized. Here's another real example. Tests for the Boost Python library usually consist of two parts: a Python program and a C++ file. The C++ file is compiled to a Python extension that is loaded by the Python program. But in the likely case that both files have the same name, the created Python extension must be renamed. Otherwise, the Python program will import itself, not the extension. Here's how it can be done:

rule run ( project name ?
: property-set : sources * )
{
    local python ;
    for local s in $(sources)
    {
        if [ $(s).type ] = PY
        {
            python = $(s) ;
        }
    }
    local libs ;
    for local s in $(sources)
    {
        if [ type.is-derived [ $(s).type ] LIB ]
        {
            libs += $(s) ;
        }
    }
    local new-sources ;
    for local s in $(sources)
    {
        if [ type.is-derived [ $(s).type ] CPP ]
        {
            local name = [ $(s).name ] ; # get the target's basename
            if $(name) = [ $(python).name ]
            {
                name = $(name)_ext ; # rename the target
            }
            new-sources += [ generators.construct $(project) $(name)
                : PYTHON_EXTENSION : $(property-set) : $(s) $(libs) ] ;
        }
    }
    result = [ construct-result $(python) $(new-sources)
        : $(project) $(name) : $(property-set) ] ;
}

First, we separate all sources into python files, libraries and C++ sources. For each C++ source we create a separate Python extension by calling generators.construct and passing the C++ source and the libraries. At this point, we also change the extension's name, if necessary. Often, we need to control the options passed to the invoked tools. This is done with features. Consider an example:

# Declare a new free feature
import feature : feature ;
feature verbatim-options : : free ;

# Cause the value of the 'verbatim-options' feature to be
# available as 'OPTIONS' variable inside verbatim.inline-file
import toolset : flags ;
flags verbatim.inline-file OPTIONS <verbatim-options> ;

# Use the "OPTIONS" variable
actions inline-file
{
    "./inline-file.py" $(OPTIONS) $(<) $(>)
}

We first define a new feature. Then, the flags invocation says that whenever the verbatim.inline-file action is run, the value of the verbatim-options feature will be added to the OPTIONS variable, and can be used inside the action body. You'd need to consult the online help (--help) to find all the features of the toolset.flags rule. Although you can define any set of features and interpret their values in any way, Boost.Build suggests the following coding standard for designing features.
Most features should have a fixed set of values that is portable (tool neutral) across the class of tools they are designed to work with. The user does not have to adjust the values for an exact tool. For example, <optimization>speed has the same meaning for all C++ compilers and the user does not have to worry about the exact options passed to the compiler's command line. Besides such portable features there are special 'raw' features that allow the user to pass any value to the command line parameters for a particular tool, if so desired. For example, the <cxxflags> feature allows you to pass any command line options to a C++ compiler. The <include> feature allows you to pass any string preceded by -I, and the interpretation is tool-specific. Here's another example. Let's see how we can make a feature that refers to a target. For example, when linking dynamic libraries on Windows, one sometimes needs to specify a "DEF file", telling what functions should be exported. It would be nice to use this file like this:

lib a : a.cpp : <def-file>a.def ;

Actually, this feature is already supported, but anyway... Since the feature refers to a target, it must be "dependency".

feature def-file : : free dependency ;

One of the toolsets that cares about DEF files is msvc. The following line should be added to it.

flags msvc.link DEF_FILE <def-file> ;

Since the DEF_FILE variable is not used by the msvc.link action, we need to modify it to be:

actions link bind DEF_FILE
{
    $(.LD) .... /DEF:$(DEF_FILE) ....
}

Note the bind DEF_FILE. Sometimes you want to create a shortcut for some set of features. For example, release is a value of <variant> and is a shortcut for a set of features. It is possible to define your own build variants. For example:

variant crazy : <optimization>speed <inlining>off <debug-symbols>on <profiling>on ;

will define a new variant with the specified set of properties.
You can also extend an existing variant:

variant super_release : release : <define>USE_ASM ;

In this case, super_release will expand to all properties specified by release, and the additional one you've specified. You are not restricted to using the variant feature only. Here's an example that defines a brand new feature:

feature parallelism : mpi fake none : composite link-incompatible ;
feature.compose <parallelism>mpi : <library>/mpi//mpi/<parallelism>none ;
feature.compose <parallelism>fake : <library>/mpi//fake/<parallelism>none ;

This will allow you to specify the value of the feature parallelism, which will expand to link to the necessary library. A main target rule (e.g. "exe" or "lib") creates a top-level target. It's quite likely that you'll want to declare your own, and there are two ways to do that. The first way applies when your target rule should just produce a target of a specific type. In that case, a rule is already defined for you! When you define a new type, Boost.Build automatically defines a corresponding rule. The name of the rule is obtained from the name of the type, by downcasing all letters and replacing underscores with dashes. For example, if you create a module obfuscate.jam containing:

import type ;
type.register OBFUSCATED_CPP : ocpp ;

import generators ;
generators.register-standard obfuscate.file : CPP : OBFUSCATED_CPP ;

and import that module, you'll be able to use the rule "obfuscated-cpp" in Jamfiles, which will convert sources to the OBFUSCATED_CPP type. The second way is to write a wrapper rule that calls any of the existing rules. For example, suppose you have only one library per directory and want all cpp files in the directory to be compiled into that library.
You can achieve this effect using: lib codegen : [ glob *.cpp ] ; If you want to make it even simpler, you could add the following definition to the Jamroot.jam file: rule glib ( name : extra-sources * : requirements * ) { lib $(name) : [ glob *.cpp ] $(extra-sources) : $(requirements) ; } allowing you to reduce the Jamfile to just glib codegen ; Note that because you can associate a custom generator with a target type, the logic of building can be rather complicated. For example, the boostbook module declares a target type BOOSTBOOK_MAIN and a custom generator for that type. You can use that as example if your main target rule is non-trivial. If your extensions will be used only on one project, they can be placed in a separate .jam file and imported by your Jamroot.jam. If the extensions will be used on many projects, users will thank you for a finishing touch. The using rule provides a standard mechanism for loading and configuring extensions. To make it work, your module should provide an init rule. The rule will be called with the same parameters that were passed to the using rule. The set of allowed parameters is determined by you. For example, you can allow the user to specify paths, tool versions, and other options. Here are some guidelines that help to make Boost.Build more consistent: The init rule should never fail. Even if the user provided an incorrect path, you should emit a warning and go on. Configuration may be shared between different machines, and wrong values on one machine can be OK on another. Prefer specifying the command to be executed to specifying the tool's installation path. First of all, this gives more control: it's possible to specify /usr/bin/g++-snapshot time g++ as the command. Second, while some tools have a logical "installation root", it's better if the user doesn't have to remember whether a specific tool requires a full command or a path. Check for multiple initialization. A user can try to initialize the module several times. 
You need to check for this and decide what to do. Typically, unless you support several versions of a tool, duplicate initialization is a user error. If the tool's version can be specified during initialization, make sure the version is either always specified, or never specified (in which case the tool is initialized only once). For example, if you allow:

using yfc ;
using yfc : 3.3 ;
using yfc : 3.4 ;

then it's not clear if the first initialization corresponds to version 3.3 of the tool, version 3.4 of the tool, or some other version. This can lead to building twice with the same version. If possible, init must be callable with no parameters, in which case it should try to autodetect all the necessary information, for example, by looking for a tool in PATH or in common installation locations. Often this is possible and allows the user to simply write:

using yfc ;

Consider using facilities in the tools/common module. You can take a look at how tools/gcc.jam uses that module in the init rule.
https://www.boost.org/doc/libs/1_55_0/doc/html/bbv2/extender.html
Since you moved the RAID set, I assume you only have to re-announce the shares to the new server and edit local users/groups rights, but I am not 100% sure.... I do not know if this is still possible in your setup, but the most seamless way I know about is DFS replicated folders; there will be no downtime at all. This means you have to change the mount points first, however, to something like: \\mydomain.com\DFSfolder\s
The way to do this would involve the following steps:
1. Set up the new server with the new storage.
2. Domain join it.
3. Install the File Server and DFS roles.
4. Recreate every share needed and install all software.
5. Test it.
Now, create DFS namespaces and replication folders:
1. In DFS, create a new namespace and folders for all the shares needed.
2. Then add folder targets to the DFS folders, both servers with their own shares. E.g., if you have a share 'security' on both servers, then add a DFS folder named 'security' and set both servers' 'security' shares as targets.
3. The DFS wizard will prompt you to set up replicated folders. Do as it says. Just make sure you choose your old server as the master to be replicated from. Be careful here! If you choose the wrong one, your old share will be emptied (though not permanently).
4. Wait until the replication is done; how long depends on the amount of data to be replicated.
The switch:
1. If everything is replicated, change your GPOs to point to the DFS folder instead of directly to the old file server.
2. Wait until everything is propagated and set the rest manually.
3. If everything works, disable the old server's folder target in DFS.
4. If nobody complains after a few days, switch off the old server, remove its DFS targets and disable the replication groups.
HTH
Here is what I would do:
Have the new server out of the domain, regardless of the name.
Write down all the shares of the old server, permissions, etc.
Remove the actual server (2003) from the domain.
Install the SAS on the new 2012 and make sure it is working and you can access it on the server.
Join the 2012 to the domain using the old name.
Recreate all shares, permissions, etc.
Test access from a client.
Also, you can try just taking the SAS to the new server, recreating shares and everything, and adding a GPO rule to all the machines to kill the previous share name and use the new one. This will work fine if you already have a GPO to create a "Mapped Drive" at login. With this option, though, all users will have to log out and log in again. With the previous one, if done correctly, they will have some downtime, but as soon as the share is up they will have access to it without having to log in again. Good Luck!
https://www.experts-exchange.com/questions/28223615/Moving-an-Array-to-a-New-Server-2012.html
CC-MAIN-2018-13
refinedweb
546
82.24
Cool idea

Yellowbyte Studios - Recent community posts

Tough game, very sensitive controls. The camera isn't always centred on the player. Putting the player back to the very start is pretty harsh; it could be a separate game mode. Nice work :)

It's gig night, and Pauly Big-Head is about to perform his new hit song. But the crowd is crazy tonight, and they want a good show. Let's hope Pauly can remember the notes! Super Key Boy is a short rhythm game in which you must use your keyboard to play the correct chords as they appear on screen. The audience will be watching, and will be unhappy if you get it wrong. Good luck. This game was made in 24 hours by a two-man team:
Robert O'Connor - Coder/Artist/Designer - Twitter: @ThatBobbyGuy
Dónall O'Donoghue - Music/Concept - Twitter: @donall
Do us a favour and leave a comment below. We hope you enjoy playing our game! P.S. Any secret messages you may find are completely coincidental. Completely.

Theme suggestions: Inside-out; Big things that are small and small things that are big; Artificial intelligence; Spooky robots; Opposite day

App Link - What is the study about? The aim of the study is to see how the use of a smartphone application can impact user behaviour such as physical activity and social interactions. Who are we looking for? We are seeking volunteers who are over 18 years of age and who currently own an Android smartphone device. What will I have to do? Your involvement in the study will be over the course of a week. A short initial survey will be used to assess your current levels of physical and social activity. You will then be asked to install the application on your Android device and play through the game in your vicinity. This is a crafting adventure game where you will travel around the world, collecting resources to build and wield rare items. The game uses real-world locations to generate the game content, so you will be required to walk around to play the game.
The testing will finish with a quick online feedback survey. What are the benefits? The information from this study may help researchers gain a better understanding of the positive effects of these types of games and provide innovative ideas for further research into treating symptoms of depression and anxiety.

Hi guys, the winners have been announced over on the website! If you are a winner, please email us at noexcusesgamejam@gmail.com as soon as possible so we can get everyone their prizes! Thanks for participating! :)

Only one day left to get your submissions in. Time to put on the finishing touches, add some polish, and get those games in! Good luck! :)

Yes, you can include copyrighted music for this jam, as long as credit is given to the original artist. :) But if you plan on developing the game further in the future, I would highly recommend getting permission or changing the music. :)

Hi, thanks for the question. :) For a beginner, I would suggest Unity. It's completely free to start developing, and the coding can be kept to a minimum. There are also a bunch of tutorials available on YouTube and other sites for both 2D and 3D development. As for game artwork, it is okay to use other creators' art, as long as you give them credit. If you are looking for an artist, you can look on sites like this, or even create a new topic in this forum. :)

If there's one thing that years of Newgrounds games taught me, it's that stick-man games can be just as fun as any other. I would focus on making the game fun to play; looks come later. ;)

Hi. If you win and don't want the prize, you could gift it to a friend. Of course, if you really, really don't want it, we can give it to the next runner-up. The main thing is that you made a game by the end, and that should be its own reward, don't you think? ;)

Hello. There are no restrictions, provided the game relates to the theme in some way. It must also be a "game", i.e. it must require some sort of interaction from the user.

Hi!
Thanks for the question. You can enter as a team of any size, but you'll have to find your own members. :)

Thanks for all the feedback! :) I'm currently working on the Android release, which will feature updated graphics and a level editor. :)

Ran kinda slow on my laptop, but looks amazing from the videos I've seen on your dev log. I'd like to learn more about 3D development using LibGDX, so thanks for making something that inspires hope for that type of development. Also, nice GUI design and map textures. Good work! :)

Thanks for the feedback @David @CiderPunk and @teamkingmonkey. I'm glad you guys enjoyed it! I'm looking forward to playing your submissions! :)

Haha, I like the music and sprites; the gameplay is pretty tricky though. Nice submission! :)

Hey People! So, I've been quite busy recently adding features to my game entry.

Added GUI
I added an overlay display which shows the player's health and jetpack fuel. This was done by creating a new camera for the screen and simply passing in the player object, to print out the data on the screen in the GUI's render method.

Doors
Doors have been added which allow the player to pass from one tile map to the next. This was done by creating an object layer in Tiled and creating an object at the specified point. This object contained a string value (to indicate the filename of the next map) and an X and Y value (to indicate the player's entry point on the new map). This data was then stored in a Java object with a body. When the player comes in contact with this body, the object data is used to load the new map and place the player in the given location.

Pick Ups
With the addition of jetpack fuel, I decided to create a basic pickup to give the player more fuel. This was achieved much the same way as the door object, using a Tiled object layer and listening for player contact. In this case, on contact with the player the pickup is removed from the map and the player's fuel is increased by 25.
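The door object described above boils down to a small data holder: the target map's filename plus the player's entry point, read from the Tiled object layer. A minimal sketch (class and field names are my own for illustration, not the game's actual code) might look like:

```java
// Hypothetical holder for door data read from a Tiled object layer,
// as described above: the next map's filename plus the entry point.
public class DoorData {
    final String targetMap;       // filename of the next map to load
    final float entryX, entryY;   // player's spawn point on the new map

    DoorData(String targetMap, float entryX, float entryY) {
        this.targetMap = targetMap;
        this.entryX = entryX;
        this.entryY = entryY;
    }

    public static void main(String[] args) {
        // In the real game this would be attached to a Box2D body's user data.
        DoorData door = new DoorData("level2.tmx", 3.5f, 7.0f);
        System.out.println(door.targetMap);
    }
}
```

When the contact listener sees the player touch the door's body, the attached object's fields tell the game which map to load and where to place the player.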
Controller Support
The game now has full support for Xbox 360 controllers. This wasn't too difficult since LibGDX offers libraries which help with this. I set up a listener in the MainGame class which will decide whether the player is using keyboard or controller.

The source is available on GitHub here:

Here's a video of what the game looks like so far:

That's all for now. My next step is to get some enemies walking around these maps. Why have a gun if there's nothing to shoot, right? Best of luck! Yellowbyte :)

Hello Everyone! So today I had pretty much the whole day to put some work into my game. I managed to add some BULLETS for the player's gun. This took some time to get right, but I eventually got it. In my GameScreen, I set up a bullet array to manage all the bullets that need to be rendered. At the moment there are no enemies, but I did have to work on bullet collisions with the walls. For this, I used another array in the ContactListener called "bodiesToRemove". This is then iterated through in the GameScreen update() method, which removes all of the bullets that have collided with the walls.

if (fa.getUserData() != null && fa.getUserData().equals("bullet")) {
    bodiesToRemove.add(fa.getBody());
}
if (fb.getUserData() != null && fb.getUserData().equals("bullet")) {
    bodiesToRemove.add(fb.getBody());
}

In the above code, I make the assumption that the bullet has collided with a wall fixture (since there's nothing else to collide with yet). Then in the GameScreen, I remove the Bullet from the bullet array and destroy the body:

if (contactListener.getBodies().size > 0) {
    for (Body b : contactListener.getBodies()) {
        bullets.removeValue((Bullet) b.getUserData(), true);
        world.destroyBody(b);
    }
}

I added a Spike block too, which will reset the level if the player comes in contact with it. This was simply a matter of listening for this in the ContactListener. That's pretty much it. I also made some changes to textures and animations.
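Box2D forbids destroying bodies while the world is mid-step, which is why the removal above is deferred: the contact listener only marks bodies, and the update() method sweeps them afterwards. Stripped of the LibGDX types, the pattern is just the following (a plain-Java sketch with assumed names, not the game's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

public class DeferredRemoval {
    // Marked during contact callbacks; nothing may be destroyed mid-step.
    private final List<String> bodiesToRemove = new ArrayList<>();
    // The "live" bullets currently in the world.
    private final List<String> bullets = new ArrayList<>();

    void spawn(String bullet) { bullets.add(bullet); }

    // Called from the contact listener when a bullet hits a wall: mark only.
    void onBulletHitWall(String bullet) { bodiesToRemove.add(bullet); }

    // Called once per frame in update(), after world.step(): actually remove.
    void sweep() {
        for (String b : bodiesToRemove) {
            bullets.remove(b); // world.destroyBody(b) would also go here
        }
        bodiesToRemove.clear();
    }

    int liveBullets() { return bullets.size(); }

    public static void main(String[] args) {
        DeferredRemoval d = new DeferredRemoval();
        d.spawn("bullet-1");
        d.spawn("bullet-2");
        d.onBulletHitWall("bullet-1"); // during the physics step
        d.sweep();                     // after the physics step
        System.out.println(d.liveBullets());
    }
}
```

The key design point is the two-phase split: collision callbacks only record intent, and mutation happens once per frame outside the physics step.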
The source is available on GitHub here:

Here is a short video showing my progress so far:

I hope to continue work on doors (for linking maps), enemies, a nice GUI (showing health, ammo, gas, etc.), and much more! :) Stay tuned, and if you're still working on your game, good luck! - Yellowbyte :)

Yo! Took a bit of a holiday so I'm only getting back to this now. I've been doing some work on making player animations in Spriter. (All my own assets.) I've been able to import these animations into the game and will post a longer update shortly on this. I'm gonna have a long sleep now. Good luck everyone. Yellowbyte :)

Hi people, progress is going quite well today. I have created a Box2D contact listener to listen for the number of contacts that the player has. This involved creating a new fixture for the player (foot), which will tell the contact listener when the player is on the ground. It will be important to know this for displaying the correct player image.

public class Box2DContactListeners implements ContactListener {

    private int numFootContacts;

    public Box2DContactListeners() {
        super();
    }

    // Count contacts that involve the player's "foot" fixture.
    public void beginContact(Contact c) {
        if ("foot".equals(c.getFixtureA().getUserData())
                || "foot".equals(c.getFixtureB().getUserData())) {
            numFootContacts++;
        }
    }

    public void endContact(Contact c) {
        if ("foot".equals(c.getFixtureA().getUserData())
                || "foot".equals(c.getFixtureB().getUserData())) {
            numFootContacts--;
        }
    }

    boolean playerCanJump() {
        return numFootContacts > 0;
    }

    public void preSolve(Contact c, Manifold m) { }

    public void postSolve(Contact c, ContactImpulse ci) { }
}

As you can see, by implementing the Box2D ContactListener interface, a custom listener can be created which listens for the 'foot' fixture to collide with another fixture. For a normal jumping game, this would be used for judging when the user could make the player jump; however, since I'm using a jet-pack behavior in this game, this will only be used for sprite assignment, for now.

Next I worked on making the camera follow the player. This didn't involve much, since the camera position will always be based off of the player position.
The following method in the GameScreen makes sure that the cameras are updated with the player position:

private void updateCameras() {
    playerPos = player.getBody().getPosition();
    float targetX = playerPos.x * PPM + MainGame.WIDTH / 50;
    float targetY = playerPos.y * PPM + MainGame.HEIGHT / 50;
    cam.setPosition(targetX, targetY);
    b2dCam.setPosition(playerPos.x + MainGame.WIDTH / 50 / PPM, playerPos.y + MainGame.HEIGHT / 50 / PPM);
    b2dCam.update();
    cam.update();
}

It is important that both cameras are updated together so that our sprite view and our Box2D world are always rendered correctly. That's pretty much it for today. I'm also doing some artwork in Spriter to make my spaceman a bit cooler-looking with more animations, so hopefully that will be in the next post. Source here:

Thanks for reading (sorry for no pictures :( ), Yellowbyte x

Box2D Time :D

Now that the tile map can be read into the code, the walls need to be set up in Box2D for each tile. This will involve cycling through the map and creating static boxes for each wall tile in the map.
So for this I set up a TileManager class to be in charge of this task:

public class TileManager {

    public void createWalls(World world, TiledMap tileMap) {
        TiledMapTileLayer layer = (TiledMapTileLayer) tileMap.getLayers().get(0);
        float tileSize = layer.getTileWidth();
        float PPM = 100;
        Vector2 bot_L = new Vector2((-tileSize / 2) / PPM, (-tileSize / 2) / PPM);
        Vector2 top_L = new Vector2((-tileSize / 2) / PPM, ( tileSize / 2) / PPM);
        Vector2 top_R = new Vector2(( tileSize / 2) / PPM, ( tileSize / 2) / PPM);
        Vector2 bot_R = new Vector2(( tileSize / 2) / PPM, (-tileSize / 2) / PPM);
        BodyDef bdef = new BodyDef();
        FixtureDef fdef = new FixtureDef();
        for (int row = 0; row < layer.getHeight(); row++) {
            for (int col = 0; col < layer.getWidth(); col++) {
                TiledMapTileLayer.Cell cell = layer.getCell(col, row);
                if (cell == null) continue;
                if (cell.getTile() == null) continue;
                bdef.type = BodyDef.BodyType.StaticBody;
                bdef.position.set((col + 0.5f), (row + 0.5f));
                ChainShape chainShape = new ChainShape();
                Vector2[] v = new Vector2[4];
                v[0] = bot_L;
                v[1] = top_L;
                v[2] = top_R;
                v[3] = bot_R;
                chainShape.createChain(v);
                fdef.density = 1f;
                fdef.shape = chainShape;
                world.createBody(bdef).createFixture(fdef).setUserData("ground");
                chainShape.dispose();
            }
        }
    }
}

As you can see, the createWalls() method cycles through every tile in the layer, creates a chain shape for each wall tile, and adds it to the Box2D world object. Next I created a player object with a dynamic body to test the new Box2D map. This object takes a Box2D body as a parameter since it will need it for drawing its image in the right position. For creating the player body, I set up a method in the GameScreen.
private void setupPlayer() {
    BodyDef bdef = new BodyDef();
    bdef.type = BodyDef.BodyType.DynamicBody;
    bdef.fixedRotation = true;
    bdef.linearVelocity.set(0f, 0f);
    bdef.position.set(2, 5);

    // create body from bodydef
    Body body = world.createBody(bdef);

    // create box shape for player collision box
    PolygonShape shape = new PolygonShape();
    shape.setAsBox(40 / PPM, 60 / PPM);

    // create fixturedef for player collision box
    FixtureDef fdef = new FixtureDef();
    fdef.shape = shape;
    fdef.filter.categoryBits = Box2DVars.BIT_PLAYER;
    fdef.filter.maskBits = Box2DVars.BIT_WALL;
    body.createFixture(fdef).setUserData("player");
    shape.dispose();

    player = new Player(body);
}

The above code creates a simple Box2D box with the dimensions of the player image. The category and mask bits are set so that the box will collide with the walls, instead of falling straight through them. (These were also set up back in the TileManager class for this same reason.) In the Player object, I added a Sprite with the player texture. The player's render method then takes the position of the Box2D body and draws the texture in that position. This gives us a player which falls to the floor. Next I needed to implement player controls. For this, most of the work is done within the Player object. I listened for certain key presses and then applied forces to the player based on these presses. The result was as follows:

The full source can be viewed on GitHub here:

That's all for now. I'll be working on camera movement next so that we can follow the player around the whole map. I'll also be setting up a contact listener so we can check when the player comes into contact with specific surfaces and objects. Goodbye for now, Yellowbyte x

Hello again,

Day 1
I decided that I would begin with screen navigation for menus. I did this by creating a basic state system which will be in charge of displaying whichever screen is active. This consists of:

1.
A 'Screen' interface containing methods such as onCreate(), onUpdate() and onRender(), etc. This is important since each screen will use these methods.

2. A ScreenManager which holds the current screen being displayed. This has setters and getters for changing the currentScreen.

public class ScreenManager {

    private static Screen currentScreen;

    public static void setScreen(Screen screen) {
        if (currentScreen != null) {
            currentScreen.dispose();
        }
        currentScreen = screen;
        currentScreen.create();
    }

    public static Screen getCurrentScreen() {
        return currentScreen;
    }
}

3. Our screens, which will implement the Screen interface! For now I have only set up one screen, which is called GameScreen. This is the screen which will display the main game shown in the previous post. Now, all we need to do in our MainGame class is to set our current screen in our ScreenManager to the GameScreen, and the manager will take care of calling its render method! Here is what the MainGame class looks like now:

public class MainGame extends ApplicationAdapter {

    public static final int WIDTH = 1920;
    public static final int HEIGHT = 1080;
    public static SpriteBatch sb;
    public static final float STEP = 1 / 60f;
    private float accum;

    @Override
    public void create() {
        sb = new SpriteBatch();
        ScreenManager.setScreen(new GameScreen());
    }

    @Override
    public void render() {
        if (ScreenManager.getCurrentScreen() != null) {
            accum += Gdx.graphics.getDeltaTime();
            while (accum >= STEP) {
                accum -= STEP;
                ScreenManager.getCurrentScreen().update(STEP);
                ScreenManager.getCurrentScreen().render(sb);
            }
        }
    }
}

As you can see, we set the screen to GameScreen in the onCreate() method and then we call the update and render methods of the current screen using the manager. This way, we can change the current screen anywhere in the code (e.g. a game-over screen when the player dies), and the manager will automatically call the update and render methods for that screen.
You may notice that I have accum and STEP variables set up in this class. These are for making sure that the game doesn't slow down but instead skips frames when things get heavy; hopefully we won't have to worry about this. ;) I also set up my WIDTH and HEIGHT variables as 1920 and 1080 (full HD). This is something I always do, since it's good to have your graphics and images exported with a specific resolution in mind. These will later be accessed in other areas of our code, e.g. cameras and such.

Now I'm going to talk about the GameScreen. Firstly, I opened up 'Tiled' and created an extremely basic tile map, exported it as 'test.tmx' and added it to my game's assets. (Tile size 100x100; I may change this later.) Next I needed to read this map into my code and display it in my GameScreen. The most common way to do this is by using the OrthogonalTiledMapRenderer class included with LibGDX.

public class GameScreen implements Screen {

    private BoundedCamera cam, b2dCam;
    private World world;
    private TiledMap tileMap;
    private Box2DDebugRenderer b2dr;
    private int PPM = 100;
    private OrthogonalTiledMapRenderer tmr;

    @Override
    public void create() {
        // Setup camera.
        cam = new BoundedCamera();
        cam.setToOrtho(false, MainGame.WIDTH, MainGame.HEIGHT);
        world = new World(new Vector2(0, 0f), true);
        b2dr = new Box2DDebugRenderer();
        b2dCam = new BoundedCamera();
        b2dCam.setToOrtho(false, MainGame.WIDTH / PPM, MainGame.HEIGHT / PPM);
        // Set tile map using Tiled map path.
        tileMap = new TmxMapLoader().load("test.tmx");
        // Setup map renderer.
        tmr = new OrthogonalTiledMapRenderer(tileMap);
    }

    @Override
    public void update(float step) {
        b2dCam.update();
        cam.update();
    }

    @Override
    public void render(SpriteBatch sb) {
        tmr.setView(cam);
        tmr.render();
        b2dr.render(world, b2dCam.combined);
    }
}

This is what the GameScreen looks like now. You can see that I also took the liberty of setting up a Box2D world and renderer for myself, since I know I will be using these soon for my player's physics (next post).
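The accum/STEP pair in the MainGame class above is the classic fixed-timestep accumulator: render whenever the hardware allows, but always update in fixed 1/60 s slices, running extra updates (i.e. skipping render frames) when a frame took too long. Isolated from LibGDX, its behaviour can be sketched like this (illustrative names, not the game's code):

```java
// Minimal fixed-timestep accumulator, mirroring MainGame's accum/STEP logic.
public class FixedStep {
    static final float STEP = 1f / 60f;
    private float accum;
    private int updates;

    // Feed in the real elapsed time each render call
    // (Gdx.graphics.getDeltaTime() in the real code).
    void frame(float delta) {
        accum += delta;
        while (accum >= STEP) {
            accum -= STEP;
            updates++; // the real code calls update(STEP) here
        }
    }

    int updateCount() { return updates; }

    public static void main(String[] args) {
        FixedStep f = new FixedStep();
        f.frame(1f / 60f); // exactly one full step elapsed -> one update
        System.out.println(f.updateCount()); // 1
    }
}
```

A slow frame (say, 3/60 s) triggers three updates in a row, while a fast frame just accumulates time until a full step is banked, so game logic always advances in consistent slices.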
I am also using a BoundedCamera, which is a custom camera:

public class BoundedCamera extends OrthographicCamera {

    private float xmin;
    private float xmax;
    private float ymin;
    private float ymax;

    public BoundedCamera() {
        this(0, 0, 0, 0);
    }

    public BoundedCamera(float xmin, float xmax, float ymin, float ymax) {
        super();
        setBounds(xmin, xmax, ymin, ymax);
    }

    public Vector2 unprojectCoordinates(float x, float y) {
        Vector3 rawtouch = new Vector3(x, y, 0);
        unproject(rawtouch);
        return new Vector2(rawtouch.x, rawtouch.y);
    }

    public void setBounds(float xmin, float xmax, float ymin, float ymax) {
        this.xmin = xmin;
        this.xmax = xmax;
        this.ymin = ymin;
        this.ymax = ymax;
    }

    public void setPosition(float x, float y) {
        setPosition(x, y, 0);
    }

    public void setPosition(float x, float y, float z) {
        position.set(x, y, z);
        fixBounds();
    }

    private void fixBounds() {
        if (position.x < (xmin + viewportWidth / 2)) {
            position.x = (xmin + viewportWidth / 2);
        }
        if (position.x > (xmax - viewportWidth / 2)) {
            position.x = (xmax - viewportWidth / 2);
        }
        if (position.y < (ymin + viewportHeight / 2)) {
            position.y = (ymin + viewportHeight / 2);
        }
        if (position.y > (ymax - viewportHeight / 2)) {
            position.y = (ymax - viewportHeight / 2);
        }
    }
}

The idea of this camera is to make sure that it stays within the boundaries of the tile map. For example, if the camera is following our player and we reach an edge of the map, the camera should stop moving in that direction so we don't see past the map. The game now looks like this:

Incredible, I know. :P This is the bottom-left edge of the tile map I created in Tiled being rendered on the screen. In the next post I'll probably talk a lot about Box2D, walls, gravity, player physics and moving the camera. Thanks for reading, Yellowbyte

Hi Everyone, Merry Christmas! I hope everyone had a good holiday season and enjoyed spending time with family. So, I've decided that I'm going to go with a 2D-platformer-styled game for this jam.
The game will let the player play as a space doctor, 'Dr. Spaceman', as he travels to different planets, each featuring different landscapes, gravity, temperature, etc., which will affect how you traverse each stage. Dr. Spaceman's main goal is to bring medicine to the inhabitants of each planet, all while fighting off not-so-welcoming beings which endanger the planets' other dwellers. Dr. Spaceman will have a jet-pack, and a basic gun tool to start with, which can be upgraded as the game progresses. Hopefully that's enough back-story for now. :P

Now on to implementation. I've decided to start with getting the very basics set up. This includes:
1. Rendering a basic tile map.
2. Creating a player object and having him navigate this map.
3. Creating an exit for the player to finish the stage.

These are the three main goals I have set for myself to have finished by the end of today. I will post again soon detailing how (in code) I am going to achieve this. Thanks, Yellowbyte x

Greetings! Hi everybody, I'm Yellowbyte, 21 y/o, an Irish programmer very interested in game development. I recently finished four years of college studying 'Multimedia and Computer Games Development'. I decided to use LibGDX to develop my Final Year Project last year and I've been learning more about it ever since. I'll be flying solo for this jam. The tools I plan to use are as follows:
Art - Flash, Spriter
Audio - Audacity
Map Building - Tiled

I'm very excited to be a part of this jam and I look forward to seeing what everyone comes up with. Good luck! :) - Yellowbyte xoxo
https://itch.io/profile/yellowbyte-studios
CC-MAIN-2018-51
refinedweb
3,690
65.52
Writing good test automation code is software development. As a result, the test automation code should be treated exactly the same as production-level code. So we need a Definition of Done for test automation so that we can call a test complete. In this article, I outline the DoD (the ultimate test automation checklist) that I've used at some organizations to drastically transform their test automation efforts.

Definition of Done for an Automated Functional Test
- The automated functional test can be successfully executed in all application environments.
- The automated test has greater than 95% accuracy.
- Your test uses the page object pattern exactly as prescribed here.
- Your test does not contain any reference to interactions with HTML (no button clicks, no element locators and so on...).
- You left the code cleaner than you found it.

The automated functional test can be successfully executed in all application environments

I hope that this doesn't sound silly. However, I have worked on, and worked with, projects where the test automation is capable of running in only one environment, like staging. That's a really poor use of test automation. One of the benefits of good test automation is that it can provide extremely rapid feedback regarding the quality of the software being developed. We can then use that feedback to decide if we want to move the code from one environment to the next. If our automation cannot run in all environments, then it can't be used everywhere. As a result, it's less useful in helping us to release software faster. The other benefit of good test automation is that you can run a single script in multiple environments. As the environments get more complex and integrated with other technologies, we can run our tests to make sure that we didn't introduce any weirdness. Being able to run in only a single environment drastically decreases the return on our automation efforts.
The automated test has greater than 95% accuracy

This means that our automated test is only allowed 5 false results out of 100. A false result is when the test fails for any reason other than an actual bug or a requirements change. These are the failures that we normally refer to as "flakes". The test can also pass incorrectly, but this one is much harder to catch. If we run our test once per day, then that means within 100 days we're only allowed 5 such results. Whenever we notice that a test keeps failing too frequently, just quarantine it until we are ready to drastically improve its stability. If we keep following this pattern, we will be left with a suite of functional tests that are extremely stable and provide real value to our employer. TeamCity used to make this process easy because it would identify flaky tests for us. Afterward, we can analyze the historical trends to see the pass rate of the test and determine if it should be quarantined. Other than TeamCity, I don't know of other tools that make our job easy here. It's usually a manual effort otherwise, but it is well worth the time. This is likely the most important item of the checklist. I've spoken about this in depth before and believe that fixing this problem in our code is paramount to any test automation success.

Your test uses the page object pattern exactly as prescribed here

Please, let's start following the page object pattern as it was originally prescribed. There are so many resources out there that add to the confusion with different page object patterns, and even other models such as the Screenplay Pattern. We just need to put on the blinders and use the simplest and easiest approach to our test automation. I have gone through the struggles myself, creating different versions of the pattern. They were all a sub-optimal use of my time. Imagine if we were trying to make a delicious bowl of spaghetti.
Rather than focusing on the ingredients and cooking them well, we spent our time trying to reinvent the tools used for the cooking process. We decided that the spoon and the pot were not sufficient and that we would create our own to cook this bowl of spaghetti. And what if we didn't do the best job in reinventing these tools? As a result, our spaghetti came out worse than ever and we wasted a ton of time on nonsensical tasks. Hence, rather than focusing on reinventing ideas like the page object pattern, or creating our own test runners to read spreadsheets, or creating our own BDD syntax, let's use our time to write the fastest, most stable, most valuable test automation possible.

Your test does not contain any reference to interactions with HTML

We never, ever want to reference anything related to the HTML in our test code. No ifs, ands, or buts about it. The reason for this is that when the UI of our software changes, we want a single place to go and update our test automation code. Here's a code example of a test that exposes both locators and interactions with the HTML (slightly modified from real code): The issue here is that when (yes, when, not if) the UI changes, we will need to go and update every single line of code that referenced the locators that changed. If an entire page is redesigned, this whole test is done. Even worse is that this is a single test. There could be hundreds or thousands of others that reference the same exact locators. If all HTML interactions lived in a page object, and our tests used those page objects, then we would only need to go to one place to update all of that information. This idea basically enforces the Single Responsibility Principle (please read about it).
Rather, we want our tests to only convey user behavior, as such: [Test] public void ShouldBeAbleToLoginWithValidUser() { _loginPage.Open(); var productsPage = _loginPage.Login("standard_user", "secret_sauce"); productsPage.IsLoaded.Should().BeTrue("we successfully logged in and the home page should load."); } Please note how we don’t have any reference to anything related to the HTML. All of that logic lives here: public class SauceDemoLoginPage : BasePage { public SauceDemoLoginPage(IWebDriver driver) : base(driver) { } private readonly By _loginButtonLocator = By.ClassName("btn_action"); public bool IsLoaded => new Wait(_driver, _loginButtonLocator).IsVisible(); public IWebElement PasswordField => _driver.FindElement(By.Id("password")); public IWebElement LoginButton => _driver.FindElement(_loginButtonLocator); private readonly By _usernameLocator = By.Id("user-name"); public IWebElement UsernameField => _driver.FindElement(_usernameLocator); public SauceDemoLoginPage Open() { _driver.Navigate().GoToUrl(BaseUrl); return this; } internal Dictionary<string, object> GetPerformance() { var metrics = new Dictionary<string, object> { ["type"] = "sauce:performance" }; return (Dictionary<string, object>)((IJavaScriptExecutor)_driver).ExecuteScript("sauce:log", metrics); } public ProductsPage Login(string username, string password) { SauceJsExecutor.LogMessage( $"Start login with user=>{username} and pass=>{password}"); var usernameField = Wait.UntilIsVisible(_usernameLocator); usernameField.SendKeys(username); PasswordField.SendKeys(password); LoginButton.Click(); SauceJsExecutor.LogMessage($"{MethodBase.GetCurrentMethod().Name} success"); return new ProductsPage(_driver); } } If anything related to this web page ever changes, we only need to go to a single place to fix it. Remember, implementation details will always change. We will always update the UI to new technologies. The business case and flow are unlikely to change. 
If a user needs to log in to your app today, then this is likely to remain the case for the business, regardless of whether you are using Angular or COBOL. That's why we place any HTML-related stuff into a single place, the page object, as opposed to a dozen tests.

You left the code cleaner than you found it

Code rot is what happens over time as we work on software. Through this process, we continue to leave small messes everywhere. Over time, these messes become so unbearable that they begin to impact our work and progress; that's code rot. At some point, we need to do a large refactor because the code is so rotten. We have all been there. Can you imagine if our automation didn't need a large refactor? What if it actually improved and became easier to use over time? That would be pretty fantastic, right? Hence, every time we touch the code, we should also clean something else up. Rename a variable, rename a method, split up a method; that's all it takes. Here's an example of a test before being cleaned up. Are you able to understand what this test is actually checking?

[Test, TestCaseSource(nameof(DataSource))]
public void RespondToAllItems(string accNum, string dataSubject)
{
    #region Parameters
    //if (accessionNumber.Contains(TestContext.DataRow["AccessionNumber"].ToString()))
    //{
    string subject = dataSubject;
    string loginId = DataLookup.GetLoginIdByAccessionNumber(accNum);
}

Here's that test after a little bit of love ❤. Are you capable of understanding what is being tested now?

public void RespondToAllItems(string itemNumber, string dataSubject)
{
    ...
}

That's the difference between code that is rotting and code that is flourishing 🌼. If we can avoid code rot, it will save us a lot of headaches. We will be able to write automation quickly and efficiently. Our code will be a pleasure to read. Our code will not rot and might actually live for decades.

Conclusions

As with the software that is being developed, the test automation code requires a definition of done.
This way you can ensure that everyone on your team is following top-quality standards to create the highest-quality test automation. Use this checklist to keep everyone on your team on the same page.
https://blog.testproject.io/2019/11/20/definition-of-done-for-test-automation-the-ultimate-test-automation-checklist/
CC-MAIN-2022-05
refinedweb
1,587
55.84
In this tutorial we'll show you how to build a web server that serves HTML and CSS files stored on the ESP32 filesystem. Instead of having to write the HTML and CSS text into the Arduino sketch, we'll create separate HTML and CSS files. For demonstration purposes, the web server we'll build controls an ESP32 output, but it can be easily adapted for other purposes, like displaying sensor readings.

Recommended reading: ESP8266 Web Server using SPIFFS

ESP32 Filesystem Uploader Plugin

To follow this tutorial you should have the ESP32 Filesystem Uploader plugin installed in your Arduino IDE. If you haven't, follow the next tutorial to install it first:

Note: make sure you have the latest Arduino IDE installed, as well as the ESP32 add-on for the Arduino IDE. If you don't, follow one of the next tutorials to install it:
- Windows instructions – Installing the ESP32 Board in Arduino IDE
- Mac and Linux instructions – Installing the ESP32 Board in Arduino IDE

Project Overview

Before going straight to the project, it's important to outline what our web server will do, so that it is easier to understand.
- The web server you'll build controls an LED connected to the ESP32 GPIO 2. This is the ESP32 on-board LED. You can control any other GPIO;
- The web server page shows two buttons: ON and OFF – to turn GPIO 2 on and off;
- The web server page also shows the current GPIO state.

The following figure shows a simplified diagram to demonstrate how everything works.
- The ESP32 runs a web server code based on the ESPAsyncWebServer library;
- The HTML and CSS files are stored on the ESP32 SPIFFS (Serial Peripheral Interface Flash File System);
- When you make a request on a specific URL using your browser, the ESP32 responds with the requested files;
- When you click the ON button, you are redirected to the root URL followed by /on and the LED is turned on;
- When you click the OFF button, you are redirected to the root URL followed by /off and the LED is turned off;
- On the web page, there is a placeholder for the GPIO state. The placeholder for the GPIO state is written directly in the HTML file between % signs, for example %STATE%.

Installing Libraries

In most of our projects we've created the HTML and CSS files for the web server as a String directly in the Arduino sketch. With SPIFFS, you can write the HTML and CSS in separate files and save them on the ESP32 filesystem. One of the easiest ways to build a web server using files from the filesystem is by using the ESPAsyncWebServer library. The ESPAsyncWebServer library is well documented on its GitHub page. To install it: download the library, unzip the downloaded folder, move it to your Arduino IDE installation libraries folder, and finally re-open your Arduino IDE.

Organizing your Files

To build the web server you need three different files: the Arduino sketch, the HTML file, and the CSS file. The HTML and CSS files should be saved inside a folder called data inside the Arduino sketch folder, as shown below.

Creating the HTML File

The HTML for this project is very simple. We just need to create a heading for the web page, a paragraph to display the GPIO state, and two buttons.
Create an index.html file with the following content or download all the project files here:

<!DOCTYPE html>
<html>
<head>
  <title>ESP32 Web Server</title>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="icon" href="data:,">
  <link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
  <h1>ESP32 Web Server</h1>
  <p>GPIO state: <strong>%STATE%</strong></p>
  <p><a href="/on"><button class="button">ON</button></a></p>
  <p><a href="/off"><button class="button button2">OFF</button></a></p>
</body>
</html>

Because we're using CSS and HTML in different files, we need to reference the CSS file in the HTML text. The following line should be added between the <head> </head> tags:

<link rel="stylesheet" type="text/css" href="style.css">

The <link> tag tells the HTML file that you're using an external style sheet to format how the page looks. The rel attribute specifies the nature of the external file, in this case that it is a stylesheet (the CSS file) that will be used to alter the appearance of the page. The type attribute is set to "text/css" to indicate that you're using a CSS file for the styles. The href attribute indicates the file location; since both the CSS and HTML files will be in the same folder, you just need to reference the filename: style.css.

In the following line, we write the first heading of our web page. In this case we have "ESP32 Web Server". You can change the heading to any text you want:

<h1>ESP32 Web Server</h1>

Then, we add a paragraph with the text "GPIO state: " followed by the GPIO state. Because the GPIO state changes according to the state of the GPIO, we can add a placeholder that will then be replaced by whatever value we set in the Arduino sketch. To add a placeholder, we use % signs. To create a placeholder for the state, we can use %STATE%, for example.

<p>GPIO state: <strong>%STATE%</strong></p>

Attributing a value to the STATE placeholder is done in the Arduino sketch.
Then, we create an ON and an OFF button. When you click the ON button, you are redirected to the root URL followed by /on. When you click the OFF button, you are redirected to the root URL followed by /off.

<p><a href="/on"><button class="button">ON</button></a></p>
<p><a href="/off"><button class="button button2">OFF</button></a></p>

Creating the CSS file

Create the style.css file with the following content or download all the project files here:

html {
  font-family: Helvetica;
  display: inline-block;
  margin: 0px auto;
  text-align: center;
}
h1 {
  color: #0F3376;
  padding: 2vh;
}
p {
  font-size: 1.5rem;
}
.button {
  display: inline-block;
  background-color: #008CBA;
  border: none;
  border-radius: 4px;
  color: white;
  padding: 16px 40px;
  text-decoration: none;
  font-size: 30px;
  margin: 2px;
  cursor: pointer;
}
.button2 {
  background-color: #f44336;
}

This is just a basic CSS file to set the font size, style, and color of the buttons, and to align the page. We won't explain how CSS works; a good place to learn about CSS is the W3Schools website.

Arduino Sketch

Copy the following code to the Arduino IDE or download all the project files here. Then, you need to type your network credentials (SSID and password) to make it work.
/*********
  Rui Santos
  Complete project details at
*********/

// Import required libraries
#include "WiFi.h"
#include "ESPAsyncWebServer.h"
#include "SPIFFS.h"

// Replace with your network credentials
const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";

// Set LED GPIO
const int ledPin = 2;
// Stores LED state
String ledState;

// Create AsyncWebServer object on port 80
AsyncWebServer server(80);

// Replaces placeholder with LED state value
String processor(const String& var){
  Serial.println(var);
  if(var == "STATE"){
    if(digitalRead(ledPin)){
      ledState = "ON";
    }
    else{
      ledState = "OFF";
    }
    Serial.print(ledState);
    return ledState;
  }
  return String();
}

void setup(){
  // Serial port for debugging purposes
  Serial.begin(115200);
  pinMode(ledPin, OUTPUT);

  // Initialize SPIFFS
  if(!SPIFFS.begin(true)){
    Serial.println("An Error has occurred while mounting SPIFFS");
    return;
  }

  // Connect to Wi-Fi
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(1000);
    Serial.println("Connecting to WiFi..");
  }

  // Print ESP32 local IP address
  Serial.println(WiFi.localIP());

  // Route for root / web page
  server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(SPIFFS, "/index.html", String(), false, processor);
  });

  // Route to load style.css file
  server.on("/style.css", HTTP_GET, [](AsyncWebServerRequest *request){
    request->send(SPIFFS, "/style.css", "text/css");
  });

  // Route to set GPIO to HIGH
  server.on("/on", HTTP_GET, [](AsyncWebServerRequest *request){
    digitalWrite(ledPin, HIGH);
    request->send(SPIFFS, "/index.html", String(), false, processor);
  });

  // Route to set GPIO to LOW
  server.on("/off", HTTP_GET, [](AsyncWebServerRequest *request){
    digitalWrite(ledPin, LOW);
    request->send(SPIFFS, "/index.html", String(), false, processor);
  });

  // Start server
  server.begin();
}

void loop(){
}

How the Code Works

First, include the necessary libraries:

#include "WiFi.h"
#include "ESPAsyncWebServer.h"
#include "SPIFFS.h"

You need to type your network credentials in the following variables:

const char* ssid = "REPLACE_WITH_YOUR_SSID";
const char* password = "REPLACE_WITH_YOUR_PASSWORD";

Next, create a variable that refers to GPIO 2 called ledPin, and a String variable to hold the LED state: ledState.

const int ledPin = 2;
String ledState;

Create an AsyncWebServer object called server that is listening on port 80.

AsyncWebServer server(80);

processor()

The processor() function is what will attribute a value to the placeholder we've created in the HTML file. It accepts the placeholder as an argument and should return a String that will replace the placeholder.
The processor() function should have the following structure:

String processor(const String& var){
  Serial.println(var);
  if(var == "STATE"){
    if(digitalRead(ledPin)){
      ledState = "ON";
    }
    else{
      ledState = "OFF";
    }
    Serial.print(ledState);
    return ledState;
  }
  return String();
}

This function first checks if the placeholder is the STATE we've created in the HTML file.

if(var == "STATE"){

If it is, then, according to the LED state, we set the ledState variable to either ON or OFF.

if(digitalRead(ledPin)){
  ledState = "ON";
}
else{
  ledState = "OFF";
}

Finally, we return the ledState variable. This replaces the placeholder with the ledState string value.

return ledState;

setup()

In the setup(), start by initializing the Serial Monitor and setting the GPIO as an output.

Serial.begin(115200);
pinMode(ledPin, OUTPUT);

Initialize SPIFFS:

if(!SPIFFS.begin(true)){
  Serial.println("An Error has occurred while mounting SPIFFS");
  return;
}

Wi-Fi connection

Connect to Wi-Fi and print the ESP32 IP address:

WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
  delay(1000);
  Serial.println("Connecting to WiFi..");
}
Serial.println(WiFi.localIP());

Async Web Server

The ESPAsyncWebServer library allows us to configure the routes where the server will be listening for incoming HTTP requests and to execute functions when a request is received on a route. For that, use the on() method on the server object as follows:

server.on("/", HTTP_GET, [](AsyncWebServerRequest *request){
  request->send(SPIFFS, "/index.html", String(), false, processor);
});

When the server receives a request on the root "/" URL, it will send the index.html file to the client. The last argument of the send() function is the processor, so that we are able to replace the placeholder with the value we want; in this case, the ledState. Because we've referenced the CSS file in the HTML file, the client will make a request for the CSS file.
When that happens, the CSS file is sent to the client:

server.on("/style.css", HTTP_GET, [](AsyncWebServerRequest *request){
  request->send(SPIFFS, "/style.css", "text/css");
});

Finally, you need to define what happens on the /on and /off routes. When a request is made on those routes, the LED is either turned on or off, and the ESP32 serves the HTML file:

server.on("/on", HTTP_GET, [](AsyncWebServerRequest *request){
  digitalWrite(ledPin, HIGH);
  request->send(SPIFFS, "/index.html", String(), false, processor);
});

server.on("/off", HTTP_GET, [](AsyncWebServerRequest *request){
  digitalWrite(ledPin, LOW);
  request->send(SPIFFS, "/index.html", String(), false, processor);
});

In the end, we use the begin() method on the server object so that the server starts listening for incoming clients.

server.begin();

Because this is an asynchronous web server, you can define all the requests in the setup(). Then, you can add other code to the loop() while the server is listening for incoming clients.

Uploading Code and Files

Save the code as Async_ESP32_Web_Server or download all the project files here. Go to Sketch > Show Sketch Folder, and create a folder called data. Inside that folder you should save the HTML and CSS files. Then, upload the code to your ESP32 board. Make sure you have the right board and COM port selected. Also, make sure you've added your network credentials to the code. After uploading the code, open the Serial Monitor at a baud rate of 115200; the ESP32 IP address should be printed. Enter that IP address in your browser to access the web server.

Wrapping Up

Using the SPI Flash File System (SPIFFS) is especially useful to store HTML and CSS files to serve to a client, instead of having to write all the code inside the Arduino sketch. The ESPAsyncWebServer library allows you to build a web server by running a specific function in response to a specific request. You can also add placeholders to the HTML file that can be replaced with variables, like sensor readings or GPIO states, for example. If you liked this project, you may also like:

- ESP32 Web Server with BME280 – Mini Weather Station
- ESP32 Web Server – Arduino IDE
- Getting Started with MicroPython on ESP32 and ESP8266

This is an excerpt from our course: Learn ESP32 with Arduino IDE. If you like ESP32 and you want to learn more, we recommend enrolling in the Learn ESP32 with Arduino IDE course.
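The page only refreshes the GPIO state when the browser reloads it. As a purely illustrative client-side extension (not part of the tutorial's code), the state could be kept fresh by polling the server with a little JavaScript. This is a sketch under assumptions: the /state route and the formatState helper are inventions of mine, and the ESP32 sketch would need a matching server.on("/state", ...) handler that returns "ON" or "OFF" as plain text.

```javascript
// Pure helper: turn the raw response body into the text shown on the page.
// Kept free of browser APIs so it can run (and be tested) anywhere.
function formatState(raw) {
  const state = raw.trim().toUpperCase();
  return state === "ON" || state === "OFF" ? state : "UNKNOWN";
}

// Browser wiring (commented out so the snippet stays self-contained):
// assumes a hypothetical /state endpoint added to the ESP32 sketch.
// setInterval(async () => {
//   const res = await fetch("/state");
//   document.querySelector("strong").textContent = formatState(await res.text());
// }, 2000);
```

The polling interval and selector are arbitrary choices; the point is only that the %STATE% placeholder technique shown above renders once per request, so live updates need a client-side mechanism.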
66 thoughts on "ESP32 Web Server using SPIFFS (SPI Flash File System)"

Can you configure the device to be in Station mode where it creates its own hotspot so there is no internet / router needed?

Hi Evan. I think you mean setting the ESP32 as an access point, where there is no need for a router. Yes, you can set the ESP32 as an access point. Take a look at the following tutorial: How to Set an ESP32 Access Point (AP) for Web Server. I hope this helps. Regards, Sara 🙂

Thanks for an amazing tutorial, Rui! Can the HTML file contain JavaScript code?

Yes.

Can I upload a JavaScript file?

Yes.

Hi, incredibly good… thanks! Can you store the CSS and HTML files on an SD card, so that you can update them whenever you want without having to update your sketch?

You don't need to re-compile whenever you want to upload the HTML files. They are stored in the ESP32 file system: SPIFFS. Totally separate memory from code – that was the point of this article: not to store the HTML in the code, but in the dedicated local filing system. It saves you needing any SD card; it's built into the ESP32's own memory map.

Hi Adam. That's right! Thank you for answering the question. Regards, Sara 🙂

Hi Paul. Yes, alternatively you can store your files on an SD card. You need to use the SD.h library to manipulate the files on the SD card. Regards, Sara 🙂

Hey team, this looks awesome! Will have a go when I get back home. I'll try the LED tutorial first, then implement some crazier stuff 😉 (probably a LED strip/matrix). Thanks!

Hi Rho. That's great! Then, tell us how it went 🙂 Regards, Sara 🙂

Excellent, like all your tutorials.

Thanks, Leo. Regards, Sara 🙂

Well, I suppose SD cards can hold up to 32 GB of data, SPIFFS maybe 1 MB. So SPIFFS is convenient with infrequent changes (suppose the HTML page contains data updated once a second, e.g. a temperature of ca. 10 chars with a timestamp in a human-friendly format: it needs ca. one day to fill SPIFFS… and some decades to fill an SD card).
I hope this calculation is not absurd (and I am aware I missed the number of repeated writes on SD cards, and maybe on ESP flash).

Hi Denis, you are right. As stated in this tutorial, using SPIFFS is useful to:
- Create configuration files with settings;
- Save data permanently;
- Create files to save small amounts of data instead of using a microSD card;
- Save HTML and CSS files to build a web server.
It shouldn't be used for datalogging projects that require large amounts of data or multiple updates per second. Thank you for your comment. Regards, Sara 🙂

Excellent! Very good tutorial. Thanks.

How do I send a text file on an SD card to a web server on an ESP-WROOM-32?

Hi. We don't have a specific tutorial about that subject. But we have a project in which we load the HTML file from an SD card to a web server using the ESP32. You can take a look at the code in the following project. I hope it helps. Regards, Sara 🙂

Hi guys, I have the web server running with SPIFFS. I have tried adding a slider to get a value back into the ESP without much success. Without using Ajax or Java, is there an easy way of getting a slider value from the webpage into the ESP32?

Hi Alan. I don't have a specific example using just a slider. We have an example that uses a slider to control a servo motor. You can take a look at the code and try to figure out how it works. Regards, Sara

Good job! Is it possible to add an image in the data folder for the HTML file to reference? Is it also possible to use Bootstrap with the help of SPIFFS? Thanks a lot.

Hi. Yes, you can use Bootstrap. I think you can also add images in the data folder, but you need to be careful with the space they occupy (I haven't tried images yet, but I think it might work).

Hi Sara! I am having trouble uploading files in the data folder, with the following error message: SPIFFS not supported on esp32

Hi again. Please update your ESP32 repository.
Try taking a look at the following issue and see if that solves your problem: github.com/me-no-dev/arduino-esp32fs-plugin/issues/5. Have you installed the ESP32 filesystem uploader? Regards, Sara 🙂

Thanks for an amazing tutorial. How do I make the LED state update on all devices? And how many devices can connect to this server? I connected 4 devices and all worked very well; only the LED state doesn't update on all devices. Thanks.

Hi Rashad. To update the LED state you need to refresh the device's browser. If you want it to update automatically, you can use Ajax in your web server code, for example. You can connect as many clients as you want, as long as the connection closes after the response is sent. Regards, Sara 🙂

I tried for several hours to install the ESP32 file loader as described. It does not show up under my Tools drop-down. Any suggestions? I am using Arduino 1.8.8 and Win10.

Hi Rick. Are you using just one version of the Arduino IDE? Are you sure you're placing the .jar file in the right directory? Have you tried our exact instructions? You can also try to follow GitHub's instructions. If you could provide more details, it may be easier to find out what's wrong. Regards, Sara 🙂

There's no option to upload ESP32 sketch data on Arduino 1.8.8. Am I missing something here? I'm on Linux Solus MATE.

Hi Daniel. You need to install the ESP32 Filesystem Uploader in the Arduino IDE. Follow the next tutorial to do that. Regards, Sara 🙂

Thanks Sara 🙂 … I just did. Now the issue is that I can't upload the ESP32 sketch data.
I get the following error message:

[SPIFFS] data : /home/object-undefined/Arduino/ESP32_Async_Web_Server/data
[SPIFFS] start : 2691072
[SPIFFS] size : 1468
[SPIFFS] page : 256
[SPIFFS] block : 4096
/style.css
/index.html
[SPIFFS] upload : /tmp/arduino_build_905232/ESP32_Async_Web_Server.spiffs.bin
[SPIFFS] address: 2691072
[SPIFFS] port : /dev/ttyUSB0
[SPIFFS] speed : 921600
[SPIFFS] mode : dio
[SPIFFS] freq : 80m
Traceback (most recent call last):
  File "/home/object-undefined/Arduino/hardware/espressif/esp32/tools/esptool.py", line 35, in
    import serial.tools.lists_ports as list_ports
ImportError: No module named lists_ports
SPIFFS Upload failed!

I have done everything in your guide, but the Serial Monitor shows:

Backtrace: 0x400874f8:0x3ffc64c0 0x400875f7:0x3ffc64e0 0x400d4793:0x3ffc6500 0x400eaf04:0x3ffc6530 0x400eb272:0x3ffc6550 0x400eb5a1:0x3ffc65a0 0x400ead18:0x3ffc6600 0x400eab5a:0x3ffc6650 0x400eabf3:0x3ffc6670 0x400eac3e:0x3ffc6690 0x400e9f44:0x3ffc66b0 0x400e9ef3:0x3ffc66d0 0x400d5b12:0x3ffc6700

How can I fix it?

Hi Nash. I've searched for a while and I found some people with the same problem but using other sketches. Unfortunately, I haven't found a clear solution for that problem:
github.com/espressif/arduino-esp32/issues/884
github.com/espressif/arduino-esp32/issues/2159
github.com/espressif/arduino-esp32/issues/1967
In this discussion, they suggest erasing the flash memory before uploading the new sketch: bbs.esp32.com/viewtopic.php?t=8888
I hope this helps in some way. Regards, Sara

Hi, do you have code for a Wi-Fi-controlled robot using the ESP32, where we can control the direction of the robot via a web server? Thanks.
Looking forward to your reply.

Hello Sara. Talking about the ESP32 in AP mode: from the factory it is programmed to accept a maximum of 4 client connections. It is known that this can be extended to 8 or more. Can you help me increase the maximum connections to at least 8? Regards

Hello Sara, I was able to upload the sketch in my Arduino IDE. The Serial Monitor shows the text just like above, except the connecting line and IP address: they're not showing at all. Why is that?

Hi. That usually happens when people forget to insert their network credentials. Please make sure that you've entered your network credentials in the code and that they are correct. Also, make sure that your ESP32 is relatively close to your router. Regards, Sara

Hello Sara, I think I found out the problem. It seems that when I upload the data with Tools > ESP32 Sketch Data Upload, it uploads successfully, but when I run the Serial Monitor it kind of doesn't work. It only reads the previously uploaded sketch, the one uploaded in the normal upload mode (Ctrl-U).

Sara! I've already implemented a project using the WebServer.h library instead of the ESPAsyncWebServer library on an ESP32. Is there a way to handle SPIFFS with WebServer.h, or do I need to convert my project to the ESPAsyncWebServer library?

Hi Fred. I think it would be easier to convert your project to the ESPAsyncWebServer library. It is not as straightforward using the WebServer library (in my opinion). Regards, Sara

Hello, the tutorial is very informative and explained in detail. I'm curious to know whether we can upload the HTML file to SPIFFS using PlatformIO. Is there any way to upload files in PlatformIO?

Hi. You have to create a folder named data/ at the same level as src/. Then in the terminal write "pio run -t uploadfs".

Thanks for sharing that 😀

This error is caused when the Arduino IDE "Tools" options for ESP32 flashing MISBEHAVE. Not kidding! And it is possibly a result of prior code that tweaked the nvs flash and SPIFFS partitions of the ESP32.
Solution:
STEP 1: Restart the Arduino IDE and check if you are getting the standard options on the "Tools" tab when flashing to ESP32, i.e., Flash Freq, Size, Mode, Partition Scheme, etc.
STEP 2: If these are present, go ahead and flash your code to the ESP32. Make sure all options have been selected properly. SUCCESS! EXIT!
STEP 3: If these options (Flash Freq, Size, Mode, Partition Scheme, etc.) do not appear under the "Tools" tab, GO TO STEP 1, until STEP 2 becomes true.

Hi Ved. Thanks for sharing. I had a few readers reporting a similar issue and I didn't know how to solve it. I hope this helps other people. Thank you 😀

Hi, thanks for the detailed tutorial. I want to try SPIFFS with SoftAP. Is there anything that you can help me with?

Hi. You can take a look at this tutorial that shows how to set up an ESP32 soft access point. I hope this helps. Regards, Sara

Could someone enlighten me on what would be involved in porting this sketch/files to a secure server (HTTPS)? I can create the HTTPS server on my ESP32 and connect to it, but am at a loss in sending the HTML response from SPIFFS or in replacing the status, as there doesn't seem to be an equivalent processor function.
https://randomnerdtutorials.com/esp32-web-server-spiffs-spi-flash-file-system/?replytocom=460820
webpack 4 comes with appropriate presets. However, you have to understand a fair number of concepts to reap their performance benefits. Furthermore, the possibilities to tweak webpack's configuration are endless, and you need extensive know-how to do it the right way for your project. You can follow along with the examples in my demo project. Just switch to the mentioned branch so you can see for yourself the effects of the adjustments. The master branch represents the demo app, along with its initial webpack configuration. Just execute npm run dev to take a look at the project (the app opens on localhost:3000).

// package.json
"scripts": {
  "dev": "webpack-dev-server --env.env=dev",
  "build": "webpack --env.env=prod"
}

Use production mode for built-in optimization

webpack 4 has introduced development and production modes. You should always ship a production build to your users. I want to emphasize this because you get built-in optimizations, such as minification, out of the box. Open dist/static/app.js and you can see for yourself that webpack has uglified our bundle. If you check out the branch use-production-mode, you can see how the app bundle size increases when you set the mode to "development" because some optimizations are not performed.

// webpack.prod.js
const config = {
+ mode: "development",
- mode: "production",
  // ...
}

The result looks like this:

Use webpack-bundle-analyzer regularly

You should use the awesome webpack-bundle-analyzer plugin regularly to understand what your bundles consist of. In so doing, you realize what components really reside inside your bundles and find out which components and dependencies make up the bulk of their size, and you may discover modules that got there by mistake. The demo project's npm scripts run a webpack build (development or production) and open up the bundle report; the production variant is invoked with npm run build:analyze. If you take a closer look, you will see that there is still some code that does not belong in a production build (e.g., react-hot-loader).
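The article does not show how the analyzer is wired into the demo project's config, but in a generic setup it looks roughly like this. This is a sketch, not the demo repo's actual file: the filename and option values are my assumptions; only the BundleAnalyzerPlugin import and options are the library's real API.

```javascript
// webpack.analyze.js -- hypothetical config fragment, not taken from the demo repo.
// Prerequisite: npm install --save-dev webpack-bundle-analyzer
const { BundleAnalyzerPlugin } = require("webpack-bundle-analyzer");

module.exports = {
  mode: "production",
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: "static",        // write a report.html instead of starting a server
      openAnalyzer: true,            // open the report in the browser after the build
      reportFilename: "report.html", // relative to the output directory
    }),
  ],
};
```

A common variant is to toggle the plugin via an environment flag so the analyzer only runs when you explicitly ask for it, which is presumably what the build:analyze script in the demo project does.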
This is a good example of why frequent analysis of our generated bundles is an important part of finding opportunities for further optimization. In this case, the webpack/React setup needs improvement to exclude react-hot-loader. If you have an "insufficient production config" and set mode: "development", then you end up with a much larger React chunk (switch to the branch use-production-mode and execute npm run build:analyze).

Add multiple entry points for bundle splitting

Instead of sending all our code in one large bundle to our users, our goal as frontend developers should be to serve as little code as possible. This is where code splitting comes into play. Imagine that our user navigates directly to the profile page via the route /profile. We should serve only the code for this profile component. With multiple entry points, though, you have to make sure the bundles are included in one or more HTML pages, depending on whether the project is a single-page application (SPA) or a multi-page application (MPA). This demo project constitutes an SPA. The good thing is that you can automate this composing step with the help of HtmlWebpackPlugin.

// webpack.prod.js
plugins: [
  // ...
  new HtmlWebpackPlugin({
    template: `public/index.html`,
    favicon: `public/favicon.ico`,
  }),
],

Run the build, and the generated index.html consists of the correct link and script tags in the right order:

<!DOCTYPE html>
<html>
  <head>
    ...
    <link href="styles/profile.css" rel="stylesheet" />
  </head>
  <body>
    <div id="root"></div>
    <script src="static/app.js"></script>
    <script src="static/profile.js"></script>
  </body>
</html>

To demonstrate how to generate multiple HTML files (for an MPA use case), check out the branch entry-point-splitting-multiple-html. The following setup generates an index.html and a profile.html.
// webpack.prod.js
plugins: [
- new HtmlWebpackPlugin({
-   template: `public/index.html`,
-   favicon: `public/favicon.ico`,
- }),
+ new HtmlWebpackPlugin({
+   template: `public/index.html`,
+   favicon: `public/favicon.ico`,
+   chunks: ["profile"],
+   filename: `${commonPaths.outputPath}/profile.html`,
+ }),
+ new HtmlWebpackPlugin({
+   template: `public/index.html`,
+   favicon: `public/favicon.ico`,
+   chunks: ["app"],
+ }),
],

There are even more configuration options. For example, you can provide a custom chunk-sorting function with chunksSortMode, as demonstrated here. webpack only includes the script for the profile bundle in the generated dist/profile.html file, along with a link to the corresponding profile.css:

<!DOCTYPE html>
<html>
  <head>
    ...
    <link href="styles/profile.css" rel="stylesheet" />
  </head>
  <body>
    <div id="root"></div>
    <script src="static/profile.js"></script>
  </body>
</html>

index.html looks similar and only includes the app bundle and app.css.

Separate application code and third-party libs

To extract the vendor code first, we use webpack's SplitChunksPlugin by adding optimization.splitChunks to our config.

// webpack.prod.js
const config = {
  // ...
+ optimization: {
+   splitChunks: {
+     chunks: "all",
+   },
+ },
}

This time, we run the build with the bundle analyzer option (npm run build:analyze). The result of the build looks like this: the names of the vendor bundles indicate from which application chunk the dependencies were pulled. Since React DOM is only used in the index.js file of the app.js bundle, it makes sense that the dependency is only inside of vendors~app.js. But why is Lodash in the vendors~app~profile.js bundle if it's only used by the profile.js bundle? Once again, this is webpack magic: conventions/default values, plus some clever things under the hood. I really recommend reading this article by webpack core member Tobias Koppers on the topic.
The important takeaways are:

- Code splitting is based on heuristics that find candidates for splitting based on module duplication count and module category
- The default behavior is that only dependencies of ≥30KB are picked as candidates for the vendor bundle
- Sometimes webpack intentionally duplicates code in bundles to minimize requests for additional bundles

But the default behavior can be changed by the splitChunks options. We can set the minSize option to a value of 600KB and thereby tell webpack to only create a new vendor bundle if the dependencies it pulled out exceed this value (check out the corresponding branch). webpack couldn't create another chunk candidate for another vendor bundle because of our new configuration. As you can see, Lodash is duplicated in the profile.js bundle, too (remember, webpack intentionally duplicates code). If you adjust the value of minSize to, e.g., 800KB, webpack cannot come up with a single vendor bundle for our demo project. I leave it up to you to try this out.

Of course, you could have more control if you wanted. Let's assign a custom name: node_vendors. We define a cacheGroups property for our vendors, which we pull out of the node_modules folder with the test property (check out the vendor-splitting-cache-groups branch). In the previous example, the default values of splitChunks contained a cacheGroups configuration out of the box.

// webpack.prod.js
const config = {
  // ...
- optimization: {
-   splitChunks: {
-     chunks: "all",
-     minSize: 1000 * 600
-   },
- },
+ optimization: {
+   splitChunks: {
+     cacheGroups: {
+       vendor: {
+         name: "node_vendors", // part of the bundle name and
+         // can be used in chunks array of HtmlWebpackPlugin
+         test: /[\\/]node_modules[\\/]/,
+         chunks: "all",
+       },
+     },
+   },
+ },
  // ...
}

This time, webpack presents us with the desired bundle name. You can also extract code that is shared between your own entry points (check out the branch common-splitting). We import and use the add function of util.js in the Blog and Profile components, which are part of two different entry points.
We add a new object with the key common to the cacheGroups object, with a regex to target only modules in our components folder. Since the components of this demo project are very tiny, we need to override webpack’s default value of minSize. We’ll set it to 0KB just in case. // webpack.prod.js const config = { // ... optimization: { splitChunks: { cacheGroups: { vendor: { name: "node_vendors", // part of the bundle name and // can be used in chunks array of HtmlWebpackPlugin test: /[\\/]node_modules[\\/]/, chunks: "all", }, + common: { + test: /[\\/]src[\\/]components[\\/]/, + chunks: "all", + minSize: 0, + }, }, }, }, already seen how we can use react-router in combination with entry code splitting in the entry-point-splitting branch. We can push code splitting one step further by enabling our users to load different components on demand while they are navigating through our application. This use case is possible with dynamic imports and code splitting by route. In the context of our demo project, this is evidenced by the fact that the profile.js bundle is only loaded when the user navigates to the corresponding route (switch to branch code-splitting-routes). We utilize the @loadable/components library, which does the hard work of code splitting and dynamic loading behind the scenes. // package.json { // ... "dependencies": { "@loadable/component": "^5.12.0", // ... } // ... } In order to use the ES6 dynamic import syntax, we have to use a babel plugin. // .babelrc { // ... "plugins": [ "@babel/plugin-syntax-dynamic-import", // ... ] } The implementation is pretty straightforward. First, we need a tiny wrapper around our Profile component that we want to lazy-load. // ProfileLazy.js import loadable from "@loadable/component"; export default loadable(() => import(/* webpackChunkName: "profile" */ "./Profile") ); We use the dynamic import inside the loadable callback and add a comment (to give our code-split chunk a custom name) that is understood by webpack. 
In our Blog component, we just have to adjust the import of our Profile component.

// Blog.js
import React from "react";
import { BrowserRouter as Router, Switch, Route, Link } from "react-router-dom";
- import Profile from "../profile/Profile";
+ import Profile from "../profile/ProfileLazy";
import "./Blog.css";
import Headline from "../components/Headline";

export default function Blog() {
  return (
    <Router>
      <div>
        <ul>
          <li>
            <Link to="/">Blog</Link>
          </li>
          <li>
            <Link to="/profile">Profile</Link>
          </li>
        </ul>
        <hr />
        <Switch>
          <Route exact path="/">
            <Articles />
          </Route>
          <Route path="/profile">
            <Profile />
          </Route>
        </Switch>
      </div>
    </Router>
  );
}
// ...

The last step is to delete the profile entry point that we used for entry point code splitting from both the production and development webpack configs.

// webpack.prod.js and webpack.dev.js
const config = {
  // ...
  entry: {
    app: [`${commonPaths.appEntry}/index.js`],
-   profile: [`${commonPaths.appEntry}/profile/Profile.js`],
  },
  // ...
}

Run a production build and you'll see in the output that webpack created a profile.js bundle because of the dynamic import. The generated HTML document does not contain a script pointing to the profile.js bundle:

// dist/index.html
...
<script src="static/app.js"></script>
</body>
</html>

Wonderful — HtmlWebpackPlugin did the right job. It's easier to test this with the development build (npm run dev). DevTools shows that the profile.js bundle is not loaded initially. When you navigate to the /profile route, though, the bundle gets loaded lazily.

Lazy loading on the component level

Code splitting is also possible on a more fine-grained level. Check out the branch code-splitting-component-level and you'll see that the technique of route-based lazy loading can be used to load smaller components based on events. In the demo project, we just add a button on the profile page to load and render a simple React component, Paragraph, on click.
// Profile.js
- import React from "react";
+ import React, { useState } from "react";
// some more boring imports
+ import LazyParagraph from "./LazyParagraph";
const Profile = () => {
+ const [showOnDemand, setShowOnDemand] = useState(false);
  const array = [1, 2, 3];
  _.fill(array, "a");
  console.log(array);
  console.log("add", add(2, 2));
  return (
    <div className="profile">
      <Headline>Profile</Headline>
      <p>Lorem Ipsum</p>
+     {showOnDemand && <LazyParagraph />}
+     <button onClick={() => setShowOnDemand(true)}>click me</button>
    </div>
  );
};
export default Profile;

LazyParagraph.js looks just like ProfileLazy.js from before; this time we want to call the new bundle paragraph.js.

// LazyParagraph.js
import loadable from "@loadable/component";
export default loadable(() =>
  import(/* webpackChunkName: "paragraph" */ "./Paragraph")
);

I’ll skip Paragraph because it is just a simple React component to render something on the screen. The output of a production build looks like this. The development build shows that loading and rendering the dummy component bundled into paragraph.js happens only on clicking the button.

Extracting webpack’s manifest into a separate bundle

What does this even mean? Every application or site built with webpack includes a runtime and manifest. It’s the boilerplate code that does the magic. The manifest wires together our code and the vendor code. Check out the branch manifest-splitting. The basis is our production configuration with two entry points and, now, vendor/common code splitting in place. In order to extract the manifest, we have to add the runtimeChunk property.

// webpack.prod.js
const config = {
  mode: "production",
  entry: {
    app: [`${commonPaths.appEntry}/index.js`],
    profile: [`${commonPaths.appEntry}/profile/Profile.js`],
  },
  output: {
    filename: "static/[name].js",
  },
+ optimization: {
+   runtimeChunk: {
+     name: "manifest",
+   },
+ },
  // ...
}

A production build shows a new manifest.js bundle. The two entry point bundles are a little bit smaller.
In contrast, here you can see the bundle sizes without the runtimeChunk optimization. The two HTML documents were generated with an extra script tag for the manifest. As an example, here you can see the profile.html file (I skip index.html).

<!-- dist/profile.html -->
  ...
  <script src="static/manifest.js"></script>
  <script src="static/profile.js"></script>
</body>
</html>

Exclude dependencies from bundles

With webpack’s externals config option, it is possible to exclude dependencies from bundling. This is useful if the environment provides the dependencies elsewhere. As an example, if you work with different teams on a React-based MPA, you can provide React in a dedicated bundle that is included in the markup in front of each team’s bundles. This allows every team to reuse a single copy of React instead of shipping its own. The size of app.js has shrunk from 223KB to 96.4KB because react-dom was left out. The file size of profile.js has decreased from 78.9KB to 71.7KB. Now we have to provide React and react-dom in the context. In our example, we provide React over CDN by adding script tags to our HTML template files.

<!-- public/index.html -->
<body>
  <div id="root"></div>
  <script crossorigin src="https://unpkg.com/react@16/umd/react.production.min.js"></script>
  <script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.production.min.js"></script>
</body>

This is how the generated index.html looks (I’ll skip the profile.html file).

<!-- dist/index.html -->
  ...
  <script crossorigin src="https://unpkg.com/react@16/umd/react.production.min.js"></script>
  <script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.production.min.js"></script>
  <script src="static/manifest.js"></script>
  <script src="static/app.js"></script>
</body>
</html>

Remove dead code from bundles

Tree shaking is the process of removing unused code from your bundles. The good thing with webpack 4 is that you most likely don’t have to do anything to perform tree shaking. Unless you are working on a legacy project with the CommonJS module system, you get it automatically in webpack when you use production mode (you have to use ES6 module syntax, though). If you use webpack without specifying a mode, or set it to "development", you have to meet a set of additional requirements, which is explained in great detail here.
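Coming back to the externals option from the section above: the text never shows the configuration itself. Below is a minimal sketch of what it could look like. The mapping to the globals React and ReactDOM matches the UMD builds loaded via the CDN script tags; the exact shape used in the demo project is an assumption.

```javascript
// webpack.prod.js -- hedged sketch, not necessarily the demo project's exact config.
// Imports of "react" and "react-dom" are left out of the bundles and resolved
// at runtime to the globals React and ReactDOM provided by the CDN scripts.
const config = {
  // ...
  externals: {
    react: "React",
    "react-dom": "ReactDOM",
  },
};
```

With this in place, webpack emits a small runtime lookup instead of bundling the library code.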
Also, as described in webpack’s tree shaking documentation, you have to make sure no compiler transforms the ES6 module syntax into the CommonJS version. Since we use @babel/preset-env in our demo project, we had better set modules: false to be on the safe side to disable transformation of ES6 module syntax to another module type.

// .babelrc
{
  "presets": [
    ["@babel/preset-env", { "modules": false }],
    "@babel/preset-react"
  ],
  "plugins": [
    // ...
  ]
}

This is because ES6 modules can be statically analyzed by webpack, which isn’t possible with other variants, such as require by CommonJS. With CommonJS, it is possible to dynamically change exports and do all kinds of monkey patching. You can also see an example in this project where require statements are created dynamically within a loop:

// webpack.config.js
// ...
const addons = (/* string | string[] */ addonsArg) => {
  // ...
  return addons.map((addonName) =>
    require(`./build-utils/addons/webpack.${addonName}.js`)
  );
};
// ...

If you want to see how it looks without tree shaking, you can disable the UglifyjsWebpackPlugin, which is used by default in production mode. Before we disable tree shaking, I first want to demonstrate that the production build indeed removes unused code. Switch to the master branch and take a look at the following file:

// util.js
export function add(a, b) {
  // imported and used in Profile.js
  console.log("add");
  return a + b;
}
export function subtract(a, b) {
  // nowhere used
  console.log("subtract");
  return a - b;
}

Only the first function is used within Profile.js. Execute npm run build again and open up dist/static/app.js. You can search and find two occurrences of the string console.log("add") but no occurrence of console.log("subtract"). This is the proof that it works. Now switch to branch disable-uglify, where minimize: false is added to the configuration.
// webpack.prod.js
const config = {
  mode: "production",
+ optimization: {
+   minimize: false, // disable uglify + tree shaking
+ },
  entry: {
    app: [`${commonPaths.appEntry}/index.js`],
  },
  // ...
}

If you execute another production build, you get a warning that your bundle size exceeds the recommended threshold. The uglify plugin does the actual removal of dead code in the second step. In the first step, a live code inclusion takes place to mark the source code in a way that the uglify plugin is able to “shake the syntax tree.”

Define a performance budget

It’s crucial to monitor the bundle sizes of your project. With minification disabled, our app bundle exceeds webpack’s default threshold of 244KB, weighing in at 747KB. In production mode, the default behavior is to warn you about threshold violations. We can define our own performance budget for output assets. We don’t care for development, but we want to keep an eye on it for production (check out branch performance-budget).

// webpack.prod.js
const config = {
  mode: "production",
  entry: {
    app: [`${commonPaths.appEntry}/index.js`],
  },
  output: {
    filename: "static/[name].js",
  },
+ performance: {
+   hints: "error",
+   maxAssetSize: 100 * 1024, // 100 KiB
+   maxEntrypointSize: 100 * 1024, // 100 KiB
+ },
  // ...
}

We defined 100KiB as the limit for both assets and entry points. The build now fails because of lodash. That’s a pretty big chunk not only in absolute terms, but also in relation to the other components. This is how we use it.

// Profile.js
import _ from "lodash";
// ...
const Profile = () => {
  const array = [1, 2, 3];
  _.fill(array, "a");
  console.log(array);
  // ...
}

The problem is that we pull in the whole library even though we only use a single function, and tree shaking cannot drop the unused modules unless webpack knows they don’t have side effects. From webpack’s perspective, all libraries have side effects unless they tell it otherwise. Library owners have to mark their project, or parts of it, as side effect-free. Lodash also does this with the sideEffects: false property within its package.json.

Ship the correct variant of source maps to production

For production, devtool: "source-map" is a good choice because only a comment is generated into the JS bundle.
If your users do not open the browser’s devtools, they do not load the corresponding source map file generated by webpack. For adding source maps, we just have to add one property to the production configuration (check out branch sourcemaps).

// webpack.prod.js
const config = {
  mode: "production",
+ devtool: "source-map",
  // ...
}

The following screenshot shows how the result would look for the production build.

Leverage long-term caching with hashed bundle names

With the [hash] placeholder we have used so far, every asset gets a new name whenever anything in the build changes, which defeats long-term browser caching. To fix that, we have to use a different filename substitution placeholder. In addition to [name], we have three placeholder strings when it comes to caching:

[hash] – if at least one chunk changes, a new hash value for the whole build is generated
[chunkhash] – for every changing chunk, a new hash value is generated
[contenthash] – for every changed asset, a new hash based on the asset’s content is generated

At first, it’s not quite clear what the difference is between [chunkhash] and [contenthash]. Let’s try out [chunkhash] (check out branch caching-chunkhash).

// webpack.prod.js
const config = {
  // ...
  output: {
-   filename: "static/[name].[hash].js",
+   filename: "static/[name].[chunkhash].js",
  },
  // ...
  plugins: [
    new MiniCssExtractPlugin({
-     filename: "styles/[name].[hash].css",
+     filename: "styles/[name].[chunkhash].css",
    }),
    // ...
  ],
}

Give it a shot and run the production build again! With [chunkhash], the CSS and JS assets of a chunk still share the same hash, so a change to one renames the other as well. Maybe [contenthash] helps? Let’s find out (check out branch caching-contenthash).

// webpack.prod.js
const config = {
  // ...
  output: {
-   filename: "static/[name].[chunkhash].js",
+   filename: "static/[name].[contenthash].js",
  },
  // ...
  plugins: [
    new MiniCssExtractPlugin({
-     filename: "styles/[name].[chunkhash].css",
+     filename: "styles/[name].[contenthash].css",
    }),
    // ...
  ],
}

Each asset has been given a new but individual hash value. Did we make it? Let’s change something in Profile.js again and find out.

// Profile.js
const Profile = () => {
+ console.log("Let's go");
  // ...
};

CSS asset names have not changed, so we have decoupled them from the associated JS assets. The vendor bundle still has the same name. Since the Profile React component is imported into the Blog React component, it’s OK that the filenames of both JS bundles have changed. A change in Blog.js, though, should only lead to a change of the app bundle name and not of the profile bundle name because the latter imports nothing from the former.

// Blog.js
export default function Blog() {
+ console.log("What's up?");
  // ...
}

This time, the profile bundle name is unchanged — great! In addition, the official webpack docs recommend extracting the runtime and using hashed module IDs; check out branch caching-moduleids for the final version of our configuration.

// webpack.prod.js
const config = {
  // ...
  optimization: {
+   moduleIds: "hashed",
+   runtimeChunk: {
+     name: "manifest",
+   },
    splitChunks: {
      // ...
    }
  }
  // ...
}

Now all the JS bundles have changed because the manifest was extracted out of them into a separate bundle. As a result, their sizes have shrunk a tiny bit. I’m not sure when the manifest changes — maybe after a webpack version update? But we can verify the last step. If we update one of the vendor dependencies, the node_vendors bundle should get a new hash name, but the rest of the assets should remain unchanged. Let’s change a random dependency in package.json and run npm i && npm run build.

This article focused mainly on performance optimization techniques regarding JavaScript. Some HTML was also included with the HtmlWebpackPlugin. There are more areas that could be covered, such as optimizing styles (e.g., CSS-in-JS, critical CSS) or assets (e.g., brotli compression), but that would be the subject of another article.

An in-depth guide to performance optimization with webpack appeared first on LogRocket Blog.
https://dev.to/bnevilleoneill/an-in-depth-guide-to-performance-optimization-with-webpack-4o2e
Custom MVC4 bundling for timezoneJS tz data

I am using timezoneJS to provide time zone support for JavaScript running in an MVC4 website that is hosted in Azure. The JavaScript library is bundled with 38 zone files under a tz directory. My understanding is the library reads these zone files in order to correctly determine time zone offsets for combinations of geographical locations and dates. The library downloads zone files from the web server as it requires.

The big thing to note about the zone files is that they are really heavily commented, using a # character as the comment marker. The 38 zone files in the package amount to 673kb of data, with the file “northamerica” being the largest at 137kb. This drops down to 232kb and 36kb respectively if comments and blank lines are stripped. That’s a lot of unnecessary bandwidth being consumed. MVC4 does not understand these files, so none of the OOTB bundlers strip the comments. The bundling support in MVC4 (via the Microsoft.Web.Optimization package) will however allow us to strip this down to the bare data with a custom IBundleBuilder (my third custom bundler – see here and here for the others).

Current Implementation

For background, this is my current implementation. The web project structure looks like this. The project was already bundling the zone files using the following logic.

private static void BundleTimeZoneData(BundleCollection bundles, HttpServerUtility server)
{
    var directory = server.MapPath("~/Scripts/tz");

    if (Directory.Exists(directory) == false)
    {
        var message = string.Format(
            CultureInfo.InvariantCulture,
            "The directory '{0}' does not exist.",
            directory);

        throw new DirectoryNotFoundException(message);
    }

    var files = Directory.GetFiles(directory);

    foreach (var file in files)
    {
        var fileName = Path.GetFileName(file);

        bundles.Add(new Bundle("~/script/tz/" + fileName).Include("~/Scripts/tz/" + fileName));
    }
}

The timezoneJS package is configured so that it correctly references the bundle paths.
timezoneJS.timezone.zoneFileBasePath = "/script/tz";
timezoneJS.timezone.defaultZoneFile = [];
timezoneJS.timezone.init({ async: false });

Custom Bundle Builder

Now comes the part where we strip out the unnecessary comments from the zone files. The TimeZoneBundleBuilder class simply strips out blank lines, lines that start with comments, and the parts of lines that end in comments.

public class TimeZoneBundleBuilder : IBundleBuilder
{
    private readonly IBundleBuilder _builder;

    public TimeZoneBundleBuilder() : this(new DefaultBundleBuilder())
    {
    }

    public TimeZoneBundleBuilder(IBundleBuilder builder)
    {
        Guard.That(() => builder).IsNotNull();

        _builder = builder;
    }

    public string BuildBundleContent(Bundle bundle, BundleContext context, IEnumerable<BundleFile> files)
    {
        var contents = _builder.BuildBundleContent(bundle, context, files);

        // The compression of the data files is down to ~30% of the original size
        var builder = new StringBuilder(contents.Length / 3);
        var lines = contents.Split(
            new[] { Environment.NewLine },
            StringSplitOptions.RemoveEmptyEntries);

        for (var index = 0; index < lines.Length; index++)
        {
            var line = lines[index];

            if (string.IsNullOrWhiteSpace(line))
            {
                continue;
            }

            if (line.Trim().StartsWith("#", StringComparison.OrdinalIgnoreCase))
            {
                continue;
            }

            var hashIndex = line.IndexOf("#", StringComparison.OrdinalIgnoreCase);

            if (hashIndex == -1)
            {
                builder.AppendLine(line);
            }
            else
            {
                var partialLine = line.Substring(0, hashIndex);

                builder.AppendLine(partialLine);
            }
        }

        return builder.ToString();
    }
}

This is then hooked up in the bundle configuration for the tz files.

bundles.Add(
    new Bundle("~/script/tz/" + fileName)
    {
        Builder = new TimeZoneBundleBuilder()
    }.Include("~/Scripts/tz/" + fileName));

Fiddler confirms that the bundler is stripping the comments. The good news here is that gzip compression also comes into play. Now the gzip compressed “northamerica” file is down from 58kb to 9kb over the wire.
One of the key points to take away from this is that you need to know what your application is serving. That includes the output you have written, but also the external packages you have included in your system.
https://www.neovolve.com/2013/05/09/custom-mvc4-bundling-for-timezonejs-tz-data/
Parent Directory | Revision Log

Cleaned up tracing. Improved support for file tracing from CGI scripts.

=head3 GetFile

sub GetFile {
    ...
    # Return the file's contents in the desired format.
    if (wantarray) {
        return @retVal;
    } else {
        return join "\n", @retVal;
    }
}

=head3 PutFile

C<< Tracer::PutFile($fileName, \@lines); >>

sub PutFile {
    # Get the parameters.
    my ($fileName, $lines) = @_;
    # Open the output file.
    my $handle = Open(undef, ">$fileName");
    if (ref $lines ne 'ARRAY') {
        # Here we have a scalar, so we write it raw.
        print $handle $lines;
    } else {
        # Write the lines one at a time.
        for my $line (@{$lines}) {
            print $handle "$line\n";
        }
    }
    # Close the output file.
    close $handle;
}

=head3 AddToListMap

C<< Tracer::AddToListMap(\%hash, $key, $value1, $value2, ... valueN); >>

=over 4

=item value1, value2, ... valueN

List of values to add to the key's value list.

=back

=cut

sub AddToListMap {
    # Get the parameters.
    my ($hash, $key, @values) = @_;
    # Process according to whether or not the key already has a value.
    if (! exists $hash->{$key}) {
        $hash->{$key} = [@values];
    } else {
        push @{$hash->{$key}}, @values;
    }
}

sub ScriptSetup {
    ...
    my $ttype = ($query->param('TF') ? ">$FIG_Config::temp/Trace$$.log" : "QUEUE");
    TSetup($query->param('Trace') . " FIG Tracer", $ttype);
    # Trace the parameter and environment data.
    TraceParms($query);
    } else {
        # Here tracing is to be turned off. All we allow is errors traced into the
        # error log.
        TSetup("0", "WARN");
    }
    # Create the variable hash.
    my $varHash = { DebugData => '' };
    # Return the query object and variable hash.
    return ($query, $varHash);
}

=head3 TraceParms

C<< Tracer::TraceParms($query); >>

Trace the CGI parameters at trace level CGI => 3 and the environment variables at level CGI => 4.

=over 4

=item query

CGI query object containing the parameters to trace.

=back

=cut

sub TraceParms {
    # Get the parameters.
    my ($query) = @_;
    if (T(CGI => 3)) {
        # Here we want to trace the parameter data.
        my @names = $query->param;
        for my $parmName (sort @names) {
            # Note we skip the Trace parameters, which are for our use only.
            if ($parmName ne 'Trace' && $parmName ne 'TF') {
                my @values = $query->param($parmName);
                Trace("CGI: $parmName = " . join(", ", @values));
            }
        }
    }
}

    ...
    } else {
        # Here we have one of the special destinations.
        $traceHtml = "<P>Tracing output type is $Destination.</p>\n";
    }
    substr $outputString, $pos, 0, $traceHtml;

=head3 SendSMS

=head3 CommaFormat

=head3 SetPermissions

sub SetPermissions {
    ...
    # Check for an error.
    if ($@) {
        Confess("SetPermissions error: $@");
    }
}

=head3 CompareLists

=head3 GetLine

sub GetLine {
    # Get the parameters.
    my ($handle) = @_;
    # Declare the return variable.
    my @retVal = ();
    # Read from the file.
    my $line = <$handle>;
    # Only proceed if we found something.
    if (defined $line) {
        # Remove the new-line.
        chomp $line;
        # If the line is empty, return a single empty string; otherwise, parse
        # it into fields.
        if ($line eq "") {
            push @retVal, "";
        } else {
            push @retVal, split /\t/, $line;
        }
    }
    # Return the result.
    return @retVal;
}

=head3 PutLine

C<< Tracer::PutLine($handle, \@fields); >>

Write a line of data to a tab-delimited file. The specified field values will be output in tab-separated form, with a trailing new-line.

=over 4

=item handle

Output file handle.

=item fields

List of field values.

=back

=cut

sub PutLine {
    # Get the parameters.
    my ($handle, $fields) = @_;
    # Write the data.
    print $handle join("\t", @{$fields}) . "\n";
}

=head3 GenerateURL

1;
http://biocvs.mcs.anl.gov/viewcvs.cgi/FigKernelPackages/Tracer.pm?revision=1.68&view=markup&pathrev=rast_rel_2010_1206
How Much Money Do You (or Your Parents) Need for Retirement?

Natalia A. Humphreys. Based on the article by James W. Daniel, UT Austin.

$100,000 per year? A lump sum of a million dollars? As much as possible?

Let us refine our question… Suppose: To receive yearly payments of $I, the retiree would need I times as much as needed for a stream of $1 yearly payments. But… So…

(1+r)/(1+g) = 1+i, where i = (r-g)/(1+g)

i – the real yearly rate of return (after expected taxes and inflation)

Assuming a real yearly rate of return i, how much does a retiree need to have invested in order to provide $1 at the start of each year for life, starting at the moment of retirement? This will illustrate the kinds of modeling that actuaries perform. It depends. Beware!

Observe the number of whole future years lived by each of 50,000 typical 65-year-old female retirees. Then average these future lifetimes over the 50,000 retirees. The average will fall between 20.83 and 20.99 future years. Let’s use 20.9 as the value for the average female (the corresponding average for males is 17.9 years).

Consider a simpler retirement plan that provides a single $1,000,000 payment to any retiree who survives to age 87. Since K=20.9, an average retiree would die between ages 85 and 86, so would not live to qualify for the payment at age 87. Thus, $0 is needed for an average retiree with the average future lifetime! Clearly, investing just enough (namely, zero) to pay the benefits for a retiree who survives the average number of years cannot be the right approach… Why not? If you start with 50,000 retirees, some of them will outlive the average and collect their $1,000,000 at age 87. Had a fund started with $P for such a person, it would have grown with interest to $P(1+i)^22 by that time:

P(1+i)^22 = 1,000,000, so P = 1,000,000 v^22

With i=4%, P = $421,955. About 25,866 of the 50,000 are expected to survive to age 87.

Thus, on average, we need to invest 25,866 × 1,000,000 × v^22 / 50,000. With i=4%, this is about $218,285 per original retiree. This is different from $0!

$L_{65+k} v^k – the Present Value (PV) of money needed at age 65+k.

L_{65+0} v^0 + L_{65+1} v^1 + L_{65+2} v^2 + …

APV = L_{65+0}/L_{65} v^0 + L_{65+1}/L_{65} v^1 + L_{65+2}/L_{65} v^2 + …

$(L_{65} + L_{66} v^1 + L_{67} v^2 + L_{68} v^3)/L_{65} = $3.72

Let us look at the problem from the individual retiree’s point of view. Lurking in the background of the preceding analysis are both Probability and Statistics, two fundamental tools for actuaries.

F(x) = Pr[X <= x] – the probability that the newborn dies by age x
s(x) = 1 - F(x) = Pr[X > x] – the probability that the newborn survives beyond age x
L_{x} = s(x) L_{0}

Both L_{x} and F(x) describe the distribution: F(x) = 1 - s(x) = 1 - L_{x}/L_{0}

Actuaries regularly collect statistics on large numbers of human lives in various categories in order to build models of survival functions s(x): age, sex, smoker, non-smoker, geographical region, special occupations, retired or pre-retired, widows and widowers, people with or without certain diseases or disabilities, urban vs. rural populations.

p_k = s(65+k)/s(65) = L_{65+k}/L_{65}

APV = p_0 v^0 + p_1 v^1 + p_2 v^2 + …

TPV = 1 + v + v^2 + … + v^K = (1 - v^{K+1})/(1 - v)

Retirees could pool their risks.

P_N = 14.25 + [10.34/N^{0.5}], where 14.25 is the APV – the average amount needed.

P_N = 14.25 + [10.34/N^{0.5}]
N=100, then P_{100} = 15.29
N=1000, then P_{1000} = 14.35

Ideas of actuarial science can be used to analyze a wide range of similar problems. It depends, but now we know more about what it depends on and how.
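The deferred-payment arithmetic above is easy to check programmatically. A short sketch using the numbers from the slides (50,000 retirees, 25,866 expected survivors to age 87, a real rate i = 4%):

```python
# Check the slide's deferred-payment example:
# a fund P invested at real rate i grows to P * (1+i)^22 by age 87,
# so the present value of $1,000,000 paid at 87 is 1,000,000 * v^22.
i = 0.04
v = 1 / (1 + i)  # one-year discount factor

# Amount needed at 65 to fund $1,000,000 at age 87 *if* the retiree survives
P_survivor = 1_000_000 * v**22  # about $421,955, as on the slides

# Averaged over all 50,000 retirees, only the expected 25,866 survivors collect
P_average = 25_866 / 50_000 * P_survivor  # about $218,285 per original retiree

print(f"P_survivor = {P_survivor:,.0f}")
print(f"P_average  = {P_average:,.0f}")
```

The pooled average is roughly half the survivor amount, which is exactly the point of the slides: averaging over who survives is what makes the plan affordable.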
https://www.slideserve.com/allayna/how-much-money-do-you-or-your-parents-need-for-retirement
to create a patcher window with script

hi, I am not sure about what you mean by "not as an abstraction". Do you mean opening a patcher from another patcher? If so, you can send ‘load your/path/to/patch/to/load’ to a [pcontrol]. Is this what you are after? hth. Julien.

Hi Sebastien,

From the max java api documentation:

snip
|MaxPatcher| can be used in conjunction with |MaxBox| to dynamically modify and create patchers on the fly. The interface exposed is very similar to the functionality exposed by the js javascript external, and thus much of the documentation for that external is applicable as well.

import com.cycling74.max.*;

public class maxpatchertest extends MaxObject
{
    public void makepatcher()
    {
        MaxPatcher p = new MaxPatcher(50, 50, 200, 200);
        MaxBox b11 = p.newDefault(20, 20, "toggle", null);
        MaxBox b21 = p.newDefault(50, 20, "toggle", null);
        MaxBox b31 = p.newDefault(80, 20, "toggle", null);
        p.getWindow().setVisible(true);
        p.getWindow().setTitle("TEST PATCH");
    }
}
/snip

… and from the javascript in max pdf

snip
To get a Patcher object, use the Constructor, access the patcher property of a jsthis (accessed as this.patcher), or use the subpatcher() method of a Maxobj object.

Patcher Constructor

var p = new Patcher(left, top, bottom, right);
left, top, bottom, right: global screen coordinates of the Patcher window

var p = new Patcher();
Uses 100,100,400,400 as default window coordinates
/snip

Good luck, and keep up the great work.
-jim
http://cycling74.com/forums/topic/to-create-a-patcher-window-with-sccript/
mbsrtowcs()

Convert a multibyte-character string into a wide-character string (restartable)

Synopsis:

#include <wchar.h>

size_t mbsrtowcs( wchar_t * dst,
                  const char ** src,
                  size_t n,
                  mbstate_t * ps );

Since: BlackBerry 10.0.0

Arguments:

- dst - A pointer to a buffer where the function can store the wide-character string.
- src - The string of multibyte characters that you want to convert.
- n - The maximum number of wide characters to store in the buffer that dst points to.
- ps - An internal pointer that lets mbsrtowcs() be a restartable version of mbstowcs(); if ps is NULL, mbsrtowcs() uses its own internal variable. You can call mbsinit() to determine the status of this variable.

Library: libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The mbsrtowcs() function converts a string of multibyte characters pointed to by src into the corresponding wide characters pointed to by dst, to a maximum of n wide characters, including the terminating NULL character. The function converts each character as if by a call to mbtowc() and stops early if:

- a sequence of bytes doesn't conform to a valid character

Or:

- converting the next character would exceed the limit of n wide characters

This function is affected by LC_CTYPE.

Returns:

- (size_t)-1 - Failure; invalid wide-character code.
- x - Success; the total number of characters successfully converted, not including the terminating NULL character.

Errors:

- EILSEQ - Invalid character sequence.
- EINVAL - The ps argument points to an invalid object.

Classification:

Last modified: 2014-06-24
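A minimal usage sketch may help here. It assumes plain ASCII input in the default "C" locale, where each byte maps to one wide character; the helper name to_wide is ours, not part of any API.

```c
#include <stddef.h>
#include <string.h>
#include <wchar.h>

/* Wrap mbsrtowcs() with an explicitly zeroed conversion state.
 * Returns the number of wide characters stored (not counting the
 * terminating null), or (size_t)-1 on an invalid byte sequence. */
size_t to_wide(const char *src, wchar_t *dst, size_t n)
{
    mbstate_t ps;
    memset(&ps, 0, sizeof ps);   /* initial (zero) conversion state */

    /* mbsrtowcs() advances this pointer; it is set to NULL when the
     * whole source string, including the terminating null, converts. */
    const char *p = src;
    return mbsrtowcs(dst, &p, n, &ps);
}
```

A common related idiom is passing a NULL dst to ask only for the required length, without storing anything.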
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/m/mbsrtowcs.html
hi i m new to vb.net i need to connect my .net code to tally odbc for retrieving data. My code snippet:

con = ...
con.Open()

I'm getting error wavy lines for OdbcConnection. What import statement am I missing? Any ideas on how to resolve this?

sil

I think you need to create a dsn for the data source and try accessing the data source via the dsn. The connection string to connect to the dsn follows.

Dim oODBCConnection As Odbc.OdbcConnection
Dim sConnString As String = "Dsn=myDsn;" & _
                            "Uid=myUsername;" & _
                            "Pwd=myPassword"
oODBCConnection = New Odbc.OdbcConnection(sConnString)
oODBCConnection.Open()

Reference:

You need to use the System.Data.Odbc namespace, but it is supported only from .NET 1.1 on.

I saw your questions.. could u post me or mail me how to connect vb.net to tally
http://www.nullskull.com/q/10045107/vbnet-connection-to-odbc-for-retrieving-data-from-tally.aspx
How to Include BootstrapVue in a Laravel Project

I am working on a web app built on Laravel and Laravel Spark right now, and I have decided to use Bootstrap for styling, as that is what Laravel Spark is set up for, and I have been wanting to learn Bootstrap for a while now. After setting up the project, I decided that it would also work great to use BootstrapVue, the Bootstrap Vue component library for web apps. This will make the development a lot faster, and eliminate a lot of the html I will need to write. I had to search up the solution for this, as it was a little tricky to include BootstrapVue in my project, and I wanted to save you the hassle.

Install Bootstrap and Vue

If you are using Laravel, first install the Laravel composer package laravel/ui that is already built for you. To do that, just open a terminal from your project root directory, and run: composer require laravel/ui.

Next run php artisan ui vue --auth. With that command, artisan will add all the authentication and ui components needed to offer authentication. Bootstrap and Vue are included in this. You will also need to run npm install, and then npm run dev in order to view the bootstrap styling on your site.

npm install
npm run dev

Then make sure to run your database migrations with: php artisan migrate.

Great work! You successfully added bootstrap, vue, and a great authentication service to your app! Next, you will install BootstrapVue. This is how you will be able to use BootstrapVue components in your app. In order to install BootstrapVue, run: npm install bootstrap-vue in your terminal from your root directory as you did above. With that command, you will have downloaded all of the bootstrap-vue components into your node_modules directory. Your package.json will also be updated with the bootstrap-vue dependency. Next, in order to start using the BootstrapVue components in your project, you will need to include the css and javascript within your app.scss and app.js.
Find your /resources/sass/app.scss file. After the @import 'node_modules/bootstrap/scss/bootstrap'; declaration, add the following:

@import 'node_modules/bootstrap-vue/src/index.scss';

This will add some specific css properties that BootstrapVue requires. Next, go to your /resources/js/app.js file and add the lines below:

import BootstrapVue from 'bootstrap-vue';
Vue.use(BootstrapVue);

Finally, run npm run dev in your terminal to recompile your css and js.

npm run dev

At that point, you should be able to use BootstrapVue in your project. Hope this helps! Also, let me know what you are building in the comments! I would love to know what you are doing.
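To verify the setup works, you can drop a BootstrapVue component into any markup rendered inside your Vue root element. A quick smoke test (the alert text and view file are just placeholders):

```html
<!-- e.g. inside a Blade view, within the element Vue mounts on -->
<div id="app">
    <b-alert show variant="success">BootstrapVue is working!</b-alert>
    <b-button variant="primary">A Bootstrap button</b-button>
</div>
```

If the alert renders with Bootstrap's green success styling, both the css and the component registration are wired up correctly.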
https://www.solmediaco.com/blog/how-to-include-bootstrapvue-in-a-laravel-project
Developing .NET Smart Clients for Microsoft Office XPThis content is no longer actively maintained. It is provided as is, for anyone who may still be using these technologies, with no warranties or claims of accuracy with regard to the most recent product version or service release. Simon Guest and Siew-Moi Khor Microsoft Corporation December 2002 Applies to: Microsoft® Office XP Summary: Use this step-by-step guide to learn about creating and extending smart client solutions for Microsoft Office XP using Microsoft .NET. We start by looking at today's development opportunities with Office XP, the process of creating an Office add-in using Microsoft Visual Studio .NET, and then follow some design guidelines to help build a stable and reusable application. (40 printed pages) Contents Introduction What is a Smart Client? What Makes Developing with Microsoft Office So Attractive? How Do Office and .NET Interoperate? How to Build an Add-in Debugging the Add-in Calling an XML Web Service Through the Add-in Thinking About the Design of the Add-in Adding Outlook References The Application Object Developing the User Experience Building a Menu Structure in Office Code Patterns Creating and Working with Office Objects Performance Considerations When Working with Office Objects Displaying and Using the Office Assistant Displaying Windows Forms in Office Handling Exceptions Conclusion Introduction Through a layer called COM interop, you can access the Microsoft® Office XP programming model from the managed Microsoft .NET world. This article outlines some of the building blocks to developing .NET smart client applications for Office XP—from creating your first add-in to developing a fully fledged enterprise application. The examples used throughout this article are based on Microsoft Outlook® 2002, but these ideas and techniques can be applied to other applications of the Office XP suite. The best way to use this paper is as a tutorial from start to finish. 
We start by building a simple Component Object Model (COM) add-in, and then expand upon the functionality in each section. It should be noted that the installation and deployment of managed COM add-ins need special security considerations and handling. To ensure a successful deployment of digitally signed managed COM add-ins you will need to incorporate into your managed COM add-in project a small unmanaged proxy called a shim. There are already articles extensively written on this matter and it is strongly recommended that you thoroughly acquaint yourself with the issues:

- Randy Byrne's Using the COM Add-in Shim to Trust Outlook Add-ins Built with Visual Studio .NET article is also a must read on how to use a shim in Outlook 2002.

This article is written with the assumption that the reader is comfortable using Office XP and developing client applications using Microsoft Visual Studio® .NET, preferably in Microsoft Visual C#™ so that the samples can be followed. Knowledge of programming Office using Microsoft Visual Basic® Scripting Edition (VBScript) or Microsoft Office XP Visual Basic for Applications (VBA) is useful, but not required for this paper.

What is a Smart Client?

This is a common question for developers of smart clients in .NET. To summarize, a smart client should:

- Leverage local processing power
- Consume XML Web services
- Support online and offline scenarios

Microsoft .NET promotes the development of smart client applications by offering a compelling solution to develop clients that offer the services described above.

What Makes Developing with Microsoft Office So Attractive?

One of the most compelling reasons for developing using Office is that the learning curve for the end user is greatly reduced. Given its popularity, it is fair to say that Office is a great starting point for developing very popular and widely distributable smart client applications.
Here are a couple of examples:

- Imagine a stock trading application. Would it be quicker for the user to learn a new interface to buy and sell stocks, or simply to enter values into a Microsoft Excel sheet and hit a "commit" button to make the trade?
- Consider a customer relationship management (CRM) application. Is the best user experience one where the user must press ALT+TAB between their existing copy of Outlook and a separate contact management system, or one where they simply add contacts in Outlook and the addition to the CRM system is taken care of in the background?

How Do Office and .NET Interoperate?

Today, there is no "magic" interface to expose and call the Office application programming interfaces (APIs) from .NET. It would be nice if we could connect the two using this:

Figure 1. The ideal connectivity for .NET and Office

To achieve this connectivity, we need to look at technology that is available today. Office exposes a number of interfaces and objects through COM. To access these from .NET, we have to create a runtime callable wrapper (RCW), which allows the COM objects to be called from a .NET assembly. The end result looks something like the following:

Figure 2. Today's connectivity for .NET and Office

For the developer, there is very little overhead in creating the RCW, as the process is automatic when using Visual Studio .NET. Fortunately for the end user, a .NET assembly integrated with Office using this method can be made to look and feel like part of the Office application. The purpose of this document is not to dig into how the interop layer works, but rather the practicalities of creating a smart client application. For further details on the RCW technology, the Microsoft Office and .NET Interoperability article is a good starting point.
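As a concrete illustration of the RCW in action, the sketch below automates Outlook from a standalone console application. It assumes a project reference to the Microsoft Outlook 10.0 Object Library (the PIA) and that Outlook is installed, so it will not run on a machine without Office; the Outlook.Application class here is the managed wrapper over the underlying COM object.

```csharp
// Minimal sketch of calling Office through the RCW from a console
// application. Assumes a reference to the Microsoft Outlook 10.0
// Object Library (the PIA), and that Outlook is installed.
using System;
using Outlook = Microsoft.Office.Interop.Outlook;

class RcwDemo
{
    static void Main()
    {
        // The RCW marshals this managed call to the underlying COM object.
        Outlook.Application app = new Outlook.Application();

        // Read a property through the wrapper.
        Console.WriteLine("Outlook version: " + app.Version);
    }
}
```

Every call in this fragment crosses the interop boundary shown in Figure 2; from the C# side, however, it looks like ordinary managed code.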
How to Build an Add-in

Prerequisites

To follow the examples and code in this article, you will need to install the Microsoft Office XP Primary Interop Assemblies (PIAs). The Readme file included in the download has the instructions on how to install the PIAs.

Creating a New Project

We will start with a walkthrough on how to create a Visual Studio .NET Shared Add-In project:

- Click Start, point to Programs, point to Microsoft Visual Studio .NET, and click Microsoft Visual Studio .NET.
- On the File menu, point to New, and click Project. The New Project dialog box appears.
- In the Project Types pane, double-click the Other Projects folder, and click Extensibility Projects.
- In the Templates pane, click Shared Add-In.
- In the Name box, type a name for the Shared Add-In project. For this sample, name it MyAddIn as shown in Figure 3.

Note If you plan to follow the demo code throughout this article, it is recommended that you keep the project name as MyAddIn (note the case). Otherwise, you will have to change the namespace declaration if you copy and paste the code from this article.

Figure 3. New Project selection dialog box (click picture for larger image)

- In the Location box, save the project at "c:\temp" as shown in Figure 3. Click OK. The Extensibility Wizard appears as shown in Figure 4.

Figure 4. The Extensibility Wizard (click picture for larger image)

- Click Next. The Select a Programming Language page appears. Select the language that you wish to use to create the add-in. Currently supported languages for creating shared add-ins are Visual C#, Visual Basic .NET, and C++/ATL. The samples in this paper use Visual C# throughout.
- Select Create an Add-in using Visual C# as shown in Figure 5, then click Next. The Select An Application Host page appears.

Figure 5. Selecting the programming language for the add-in (click picture for larger image)

- Once the language has been selected, a list of supported applications is displayed as shown in Figure 6.
This list shows the host applications for which an add-in can be created. For this sample, select Microsoft Outlook.

Figure 6. Selecting the host application for the add-in (click picture for larger image)

- Click Next. The Enter a Name and Description page appears.
- In the What is the name of your Add-In box, type the name of the add-in as ExampleAddIn as shown in Figure 7.
- In the What is the description of your Add-In box, type the description for the add-in.

Figure 7. Naming the add-in (click picture for larger image)

- Click Next. The Choose Add-In Options page appears. In the next step, you need to specify whether the add-in gets loaded when the application is loaded, and whether the add-in is available for all users. Loading the add-in on startup is usually a good idea: even if the user does not immediately activate it, everything is loaded in memory for when they are ready.
- Check the I would like my Add-in to load when the host application loads and/or My Add-in should be available to all users of the computer it was installed on, not just the person who installs it boxes as appropriate, as shown in Figure 8.

Figure 8. Choosing add-in options (click picture for larger image)

- Click Next. The Summary page appears, as shown in Figure 9.
- Click Finish, and wait while the Shared Add-In project is created. Next, we will investigate what has been created.

Figure 9. Completing the add-in (click picture for larger image)

- Closing the wizard generates two projects under one solution file; if everything went well, it should look as shown in Figure 10.

Figure 10. The add-in as displayed in the Solution Explorer

The MyAddIn project contains one generated class (Connect.cs), which we will look at shortly. The second project, MyAddInSetup, is also automatically generated for you. This is a Microsoft Installer (MSI) project used to create a setup application for the add-in.
Unless explicitly included, this project does not get built when you press F5 to compile. Some scenarios where an MSI is useful:

- Something gets corrupted and you need to reinstall the add-in.
- You wish to deploy the add-in to another user or machine.

Examining the Generated Code

We will now examine the Connect.cs class file. The class itself implements the Extensibility.IDTExtensibility2 interface, which is core to making the add-in work with Office. By default, you will get a number of methods created:

- OnConnection. This method is called whenever the add-in is loaded by an Office application.
- OnDisconnection. This method is called whenever the add-in is unloaded by an Office application.
- OnAddInsUpdate. This method is called whenever the set of add-ins changes (for example, if we were to download or load a new version of an assembly while the add-in was still running).
- OnStartupComplete. This method is called when the host application has finished starting up.
- OnBeginShutdown. This method is called whenever the application is shutting down. The difference between OnBeginShutdown and OnDisconnection is that the OnBeginShutdown method still has access to some of the Explorer windows and controls. For example, you may want to remove a window or change a control just as the user shuts down the application.

There are also two generic objects defined in the class, applicationObject and addInInstance. For the purpose of the add-in, applicationObject is the most valuable, as it gives us access to the programming model and environment that we need.

Try compiling the solution, but don't run it yet. To compile, right-click on Solution 'MyAddIn' (2 projects) and click Build Solution. You should see a successful compile, with the Output window reporting one project built and one skipped. The project that is skipped is the Setup project that we described earlier.
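To make the method list above concrete, here is a trimmed-down sketch of the kind of Connect class the wizard generates. The method bodies, the GUID/ProgId attributes, and the COM registration plumbing produced by the wizard are omitted, so treat this as illustrative rather than a drop-in replacement; it also will not run outside an Office host, since Office itself invokes these methods through COM.

```csharp
using System;
using Extensibility;

namespace MyAddIn
{
    // Illustrative skeleton only; the real wizard-generated class also
    // carries GuidAttribute/ProgIdAttribute and registration code.
    public class Connect : IDTExtensibility2
    {
        private object applicationObject;  // the host application (e.g. Outlook)
        private object addInInstance;      // this add-in, as seen by Office

        // Called when Office loads the add-in.
        public void OnConnection(object application, ext_ConnectMode connectMode,
            object addInInst, ref Array custom)
        {
            applicationObject = application;
            addInInstance = addInInst;
        }

        // Called when Office unloads the add-in.
        public void OnDisconnection(ext_DisconnectMode disconnectMode,
            ref Array custom) { }

        // Called when the set of add-ins changes.
        public void OnAddInsUpdate(ref Array custom) { }

        // Called once the host application has finished starting up.
        public void OnStartupComplete(ref Array custom) { }

        // Called as the host application begins shutting down.
        public void OnBeginShutdown(ref Array custom) { }
    }
}
```

The OnStartupComplete method is where we will place our startup code in the next section.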
Creating the Hello World Application

An example wouldn't be an example if we did not have the mandatory Hello World test, and this walkthrough is certainly no exception to the rule. To prove that the add-in is being loaded successfully by Outlook, we are going to display a "Hello World" message on startup. To do this, follow these steps.

- In the Solution Explorer, first click the MyAddIn project.
- In the Project menu, click Add Reference. This displays the Add Reference dialog box.
- Select the System.Windows.Forms.dll reference as shown in Figure 11.
- Click Select, then OK. This is required to display the message dialogs and Windows Forms within the application.

Figure 11. Adding a reference (click picture for larger image)

- Next, insert the following using statement at the top of the Connect.cs class:

using System.Windows.Forms;

- Locate the OnStartupComplete method within the code that has been generated. Within this method, insert the following line:

MessageBox.Show("Hello World");

- All the code is now complete. Compile the project, but don't run it yet. To compile, right-click on Solution 'MyAddIn' (2 projects) and click Rebuild Solution.

Running the Hello World Sample

Before you can run the add-in, you need to set the project properties to launch and debug Outlook from within Visual Studio .NET:

- In Solution Explorer, right-click on the MyAddIn project and click Properties.

Figure 12. Setting the project properties (click picture for larger image)

- In the properties window, select Configuration Properties and click Debugging as shown in Figure 12.
- Set the Start Application property to the location of the Outlook executable (OUTLOOK.EXE) on your machine.
- In the same window, set the Working Directory to the directory in which the OUTLOOK.EXE application resides.
- Click OK, and you should be in a good position to test the sample add-in.
- Press F5 to compile and run the project.
If successful, Outlook 2002 should launch, and the test message is displayed when the add-in starts up, as shown in Figure 13.

Note If you currently have Outlook 2002 running, close it before hitting F5.

Figure 13. Running the add-in

Debugging the Add-in

Before we move forward with developing this example (into something a little more useful!), we will first take a look at the environment in which we are working. You may have noticed that Visual Studio .NET was responsible for launching the Outlook 2002 application as a process. Because of this, we are in a good position to pause, terminate, or debug the application when required.

To test this, close Outlook (wait until Outlook terminates), go back to the code, and insert a breakpoint at the MessageBox line as shown in Figure 14. (To insert a breakpoint, right-click on that line and click Insert Breakpoint.)

Figure 14. Setting a breakpoint in the add-in

Press F5 to restart the application. What you should observe is Outlook 2002 starting up as usual, but when the add-in is loaded and the breakpoint is reached, control is returned to Visual Studio .NET.

Figure 15. Trapping a breakpoint in the add-in (click picture for larger image)

This is very powerful because it gives us a very granular mechanism through which to debug. At any point during the lifetime of the add-in, you can effectively "pause" Outlook and investigate what is going on underneath the covers — investigating both .NET and Office objects.

Caution Although pausing and restarting Outlook can help in the debugging process, try to avoid terminating the Outlook process, that is, stopping the project from running halfway through. When you quit Outlook, wait for the Outlook process to finish cleanly and for control to be returned to Visual Studio .NET before continuing. This may take a few seconds depending on the complexity of the add-in that you are writing, but terminating the process halfway through sometimes causes more problems than it is worth.
Outlook has a feature where it can disable add-ins if it thinks that they are corrupted or damaged, which can in some cases mean that you need to reinstall the add-in using the setup program generated in the first step.

Another option for debugging is to attach to the Outlook process when it is already running. This is especially useful if you have deployed an add-in and don't have the source code from which to launch the project and debugger. To test this, do the following:

- Close both Outlook 2002 and Visual Studio .NET. Remember to save your project as we will be using it for later examples.
- Reopen Outlook 2002 by selecting it from the menu or icon from which you normally launch it. If the add-in is still working (and it should be!), you should see the Hello World message reappear in the Outlook window on startup. Don't click on the OK button for now.
- Launch Visual Studio .NET, but do not open a project at this time. Instead, on the Tools menu, click Debug Processes.
- Scroll down to select the Outlook process (Outlook.exe) and click Attach as shown in Figure 16.

Figure 16. Debugging processes (click picture for larger image)

- In the Attach to Process dialog box, select the Common Language Runtime and Native options to debug. Click OK.
- In the Processes dialog box, click Break to intercept the process — you should be positioned at the message box point where we were in the previous example.

Note Watch the instances. In this example, we launched the Outlook process separately and then ran the debugger to attach to it. While this works, the Outlook process places a lock on the add-in DLL that we initially created—which means that we cannot recompile the project without first closing the Outlook application. If we attempt to do so, we get an error saying Cannot copy temporary files to the debug directory.

Removing the Add-in

You may have been developing the add-in on the main machine from which you read your e-mail messages.
If so, you may want to move to your development machine. First, however, you need to remove the add-in from Outlook (as it keeps loading even when you are not launching the process from Visual Studio .NET).

To remove the add-in, return to Visual Studio .NET, then compile and run the add-in Setup project that was generated as part of the initial solution. Do this by right-clicking on MyAddInSetup and clicking Rebuild. Then, run the installer application generated by the project by right-clicking on MyAddInSetup and clicking Install. This reinstalls the add-in, but more importantly, places an icon in your Add/Remove Programs window. From there, you can remove the add-in as you would any other program, and it will cleanly remove the files and registry settings from your machine. If you want to reinstall the add-in to continue with the rest of this walkthrough, simply rerun the add-in Setup installer.

Calling an XML Web Service Through the Add-in

At this point, the Hello World message is getting a little repetitive. For a change, it would be fun to consume an XML Web service (for example, a "Quote of the Day" XML Web service) and display that message instead. This example is not so much about the actual quote XML Web service itself; rather, it shows how easy it is, once the add-in is loaded, for developers to create applications that consume XML Web services and present them through Outlook. This also makes for an alternative to using the Office XP Web Services Toolkit and calling the XML Web service from VBScript.

To add the XML Web service to our example, make sure that Outlook is closed, then reopen Visual Studio .NET and the add-in project that you saved earlier by clicking on MyAddIn.sln located in C:\temp\MyAddIn.

- In the Solution Explorer, right-click on the MyAddIn project and click Add Web Reference. This displays the Add Web Reference UDDI window as shown in Figure 17.

Figure 17.
Adding a Web Reference (click picture for larger image)

- If you don't have access to the Web directly from your machine, you may want to create a test XML Web service that generates quotes. For the purpose of this example, it is assumed that you have access to the Web, and we will pick a predefined service from the UDDI registry. Click on the UDDI Directory hyperlink in the window as shown above. A search window will be displayed as shown in Figure 18.

Figure 18. Searching UDDI (click picture for larger image)

- In the Service name field box, type QUOTE and click Search. A list of matching services will be displayed:

Figure 19. Search results from UDDI (click picture for larger image)

- Expand the Quote of the Day service under the GOTDOTNET header by clicking on the + symbol.

Figure 20. Selecting the Quote of the Day Service from UDDI (click picture for larger image)

- Scroll down—here we are shown a short description of the Quote XML Web service itself as shown in Figure 20. Click on the hyperlink underneath the Interface Definitions: header.
- The contract for this XML Web service should then be loaded, and the Add Reference button should become enabled as shown in Figure 21.

Figure 21. The WSDL contract for the Quote of the Day XML Web service (click picture for larger image)

- Click Add Reference to add the reference to this XML Web service and return to the Visual Studio .NET integrated development environment (IDE).
- Under the project Web References, you should now see the following XML Web service:

Figure 22. The Quote of the Day XML Web service reference

We can now start calling this XML Web service from our add-in. To do this, locate the OnStartupComplete method in the Connect.cs class.

- Replace the code we wrote earlier:

MessageBox.Show("Hello World");

with the following code (the proxy class name is generated from the Web reference, so adjust it to match the name shown in your project):

// The proxy class name depends on the Web reference in your project
com.gotdotnet.www.Quote q = new com.gotdotnet.www.Quote();
// Replace the &quot; entities with quotation marks
String qotd = q.GetQuote().Trim().Replace("&quot;","\"");
MessageBox.Show(qotd);

This code makes a new instance of the quote XML Web service proxy (q) and calls the GetQuote method of the service to retrieve a quote. The quote is then displayed in a message box within the add-in itself.
- Once this is written, recompile and run the add-in. If everything has worked, you should see a quote obtained from the XML Web service appear as Outlook starts, as shown in Figure 23.

Figure 23. Running the new version of the add-in (click picture for larger image)

We're not sure how applicable that quote is, but hopefully this has demonstrated how easy it is to consume an XML Web service within .NET and display its output using Outlook as a client.

Preparing to Further Develop the Add-in

We are going to pick up the pace somewhat and start exploring why we are really using Outlook as the client to host these smart client add-ins—as mentioned in the introduction, to take advantage of the user interface that already exists for us. In this section, we will show how to create a menu, how to work with objects specific to Office, and how to integrate custom Windows Forms into the Office application.

Thinking About the Design of the Add-in

Before we dive into the code to build the user interface, we will spend some time thinking about how the add-in is going to run, and what processes we need to build in order to make the solution manageable and robust.

We want the add-in to initialize when it is first started. Initialization may require a number of actions to be taken by the add-in: creating menus, creating other objects within the Office application, registering events, and so on. Once these actions are complete, we can expect control to be handed back to the user. The add-in will only "come back to life" if a user triggers an event that is being listened for (for example, clicking on a menu item, or creating a new Office object against which we have registered events).

The core classes required to develop an add-in into a fully fledged application are the Connect, Init, OfficeUI, Events, and Actions classes.

Figure 24.
Recommended core classes (click picture for larger image)

The Connect class gets called when Outlook is loaded. It is passed a context (the Application object mentioned previously, which will be covered in the next section) and is responsible for maintaining this context for the life of the add-in. Imagine a process where we are creating a menu structure within Outlook and want to display our Quote message when that menu option is clicked.

Figure 25. Calling the Init class (click picture for larger image)

Using the above sequence diagram, we can see that the Connect object should call an Init object and instruct it to start the initialization routines. Part of this initialization routine is to create the relevant Office user interface, that is, the menu. The menu is created by the OfficeUI class—but that does not automatically mean that the menu is usable. We have to register some events with the menu. To do this, we call an Events class. After the events are registered, initialization is complete.

Figure 26. Complete sequence diagram (click picture for larger image)

At this point, the only thing missing from our process diagram is the action that is taken when the event is triggered. For the menu, we are going to call an action in the Actions class named MenuItemClicked.

Figure 27. Complete sequence diagram with Action (click picture for larger image)

Turning the Diagram into Code

We now have a good concept of our objects and how they should be called. To make this real in our example application, we are going to create four blank classes:

- Init.cs
- OfficeUI.cs
- Events.cs
- Actions.cs

Now, create these classes by right-clicking the MyAddIn project within Solution Explorer in Visual Studio .NET.
(Make sure you stop debugging first before proceeding. To do this, on the Debug menu, click Stop Debugging.) Point to Add and click Add New Item. This displays the Add New Item dialog box. In the Template list, select Class. Enter the appropriate class name, for example Init.cs, and then click Open. Repeat the steps above for:

- OfficeUI.cs
- Events.cs
- Actions.cs

After these are created, your solution should look as follows:

Figure 28. View of the MyAddIn project from Solution Explorer

To continue the example, copy and paste the following two code snippets for the Actions and Init classes. The Actions class displays our quote, while the Init class is (at the moment) just a placeholder for creating the menu structure.

Actions Class

using System;
using System.Windows.Forms;
using System.Diagnostics;

namespace MyAddIn
{
    public class Actions
    {
        public static void MenuItemClicked()
        {
            try
            {
                // The proxy class name is generated from the Web reference;
                // adjust it to match the name shown in your project.
                com.gotdotnet.www.Quote q = new com.gotdotnet.www.Quote();

                // Replace the &quot; entities with quotation marks
                String qotd = q.GetQuote().Trim().Replace("&quot;","\"");
                MessageBox.Show(qotd);
            }
            catch (Exception e)
            {
                Debug.Write(e.ToString());
            }
        }
    }
}

Init Class

This skeleton Init class will allow us to start programming menus and other .NET controls.

using System;

namespace MyAddIn
{
    public class Init
    {
        public static void Start()
        {
            // Menu creation and event registration will be added here
        }
    }
}

Modify the Connect Class

Also, in the OnStartupComplete method of the Connect class, remove the lines of code that display the quote, and instead insert a line to call the new Init class:

Init.Start();

Try to compile, but do not run the add-in at this time. To compile, right-click on Solution 'MyAddIn' (2 projects) and click Rebuild.

Note Make sure the namespace in the code that you are copying or pasting matches the namespace of the project that you created. Otherwise, you may get errors when calling between classes in the project.

Adding Outlook References

Importing the Primary Interop Assemblies

Up until now, we have not really used any functions offered by Outlook through the add-in.
This is going to change over the next few sections and to facilitate this we need to add a reference to the required libraries. When we create the add-in through the wizard, we have the Office assembly loaded (which provides us with access to Office features that are common across all the applications), but nothing specific to the Outlook application. To do this, right-click on the MyAddIn project and click Add Reference. Then, click on the COM tab and scroll down to locate the Microsoft Outlook 10.0 Object Library as shown in Figure 29. Click Select, then click OK to return to the project. Figure 29. Adding a reference to the interop library (click picture for larger image) In the Solution Explorer, you should see a couple of new references added as shown in Figure 30. Figure 30. All of the references in the add-in Note To check that you are correctly referencing the Microsoft.Office.Interop.Outlook primary interop assembly (PIA) installed earlier (see the "Prerequisites" portion of the How to Build an Add-in section) in the global assembly cache (GAC), do the following. In the Solution Explorer, select the Outlook reference to display its properties. The Copy Local property should be False and the Path property should point to the Microsoft.Office.Interop.Outlook PIA located in the GAC. On a Microsoft Windows NT machine, the path should look something like C:\WINNT\assembly\GAC\Microsoft.Office.Interop.Outlook\10.0.4504.0__31bf3856ad364e35\Microsoft.Office.Interop.Outlook.dll. Outlook is the reference to the Outlook primary interop library that we just selected. Microsoft.Office.Core is a reference to a dependent library that is required by the Outlook library. Once the Microsoft.Office.Core reference is added, remove the Office reference as this is an older library and will only cause duplicate class errors if left in there. To do this, right-click on Office and click Remove. 
When these references are linked as part of the project, you will see that the file names of the generated assemblies are prefixed with "Interop" (for example, Interop.Outlook.dll). This indicates that they are an interop layer to the COM world.

Note After adding and removing these references, it is a good idea to restart Visual Studio .NET to make sure all of the assemblies are being loaded correctly. Remember to save your project first.

The Application Object

As previously shown, when we first create an add-in, we get an empty project and a Connect class. This class contains the methods required for intercepting the startup and shutdown of the Office application. The class also contains an application object:

private object applicationObject;

In its raw form, this isn't that useful to us (as it is not typed). To change this, we are going to change the type to an Outlook-specific Application object. To do this, add the following using line at the top of the Connect.cs class:

using Outlook = Microsoft.Office.Interop.Outlook;

This using statement allows us to reference the Outlook interop assembly that we imported in the previous section. In our example code, we alias the library with the Outlook namespace to avoid ambiguity between methods and properties. Towards the bottom of the Connect.cs class, change the application object declaration from

private object applicationObject;

to the following:

public static Outlook.Application applicationObject;

You will note that we change this object from private to public. This is so that it can be used by other classes within the add-in. We have also made the object static so that it resides as a single reference for the duration of the add-in.

In addition, as part of the code that gets automatically created by the Add-in wizard, the OnConnection method contains a line that sets the global application object to the one that is passed through to the add-in:

applicationObject = application;

As we have changed the type of the application object, we now need to cast this to the correct type.
To do this, replace that line with the following:

applicationObject = (Outlook.Application) application;

The add-in should now compile and run successfully (although very little will happen, as we are not making any calls!). You are now ready to start expanding the add-in and developing a full rich client.

Developing the User Experience

About Explorers

Before we look at the code for building a menu structure, it is a good idea to understand how a concept called explorers works within Office. We will take Outlook 2002 as an example. When you click the Outlook icon, what typically happens is that the Outlook.exe process gets started on your machine. During this startup, the windows and views that you are used to seeing get rendered and displayed. We call this view an explorer. If we were to look at this pictorially, we could imagine something like this:

Figure 31. A single process with a single explorer

As expected, we have the single Outlook.exe process and then an explorer which handles the folder list, views, and so forth. But what happens if we click on the Outlook icon again to launch a second copy of Outlook? You may think that we would see this:

Figure 32. Do we get two processes per Outlook application? No (click picture for larger image)

But actually, we don't. Instead, a new explorer is opened within the same Outlook.exe process.

Figure 33. Single process, multiple explorers (click picture for larger image)

This has some interesting consequences for creating and maintaining objects within the explorer. Menus, for instance, are objects that get created at the explorer level.

Figure 34. Menu and explorer relationships (click picture for larger image)

This has three impacts for us:

- When we create new menus (and other objects within the explorer), we have to be aware of the current explorer and the context in which we are running.

Note The current explorer object can be found by calling the ActiveExplorer method from the Application object.
This will return a type of Outlook.Explorer, which can be used when creating objects.

- More importantly, we have to think very carefully about the user experience if the user opens a new explorer (that is, they double-click on the Outlook icon again when an instance of Outlook is still running).
- We only get one instance of the add-in per Outlook.exe process, so when the user opens a new explorer, a new instance of the add-in does not get launched (as it is still running for us).

Fortunately, we can trap the creation of a new explorer by the user—this is something that we will design into the code that you see, and something that we will cover in a later section.

Building a Menu Structure in Office

At this point, having a pop-up message once Outlook starts is great for testing, but it does not really lend itself to an intuitive user experience. Ideally, when the user wants a quote for the day, they will select the option themselves from a menu. We will look into the steps required for building such a menu and then calling our Quote service.

Here is the code for the OfficeUI class to create and delete a test menu. Copy and paste this code over the OfficeUI.cs class (you can paste over any code that was automatically generated by creating the class—but again, remember to change the namespace if you did not use MyAddIn). Note that the body of CreateMenu below is a reconstruction of the pattern the surrounding text describes: delete any existing menu, add a CommandBarPopup to the fixed Menu Bar, and then add a CommandBarButton underneath it.

using System;
using System.Diagnostics;
using System.Reflection;
using Microsoft.Office.Core;
using Outlook = Microsoft.Office.Interop.Outlook;

namespace MyAddIn
{
    public class OfficeUI
    {
        public static CommandBarPopup Menu1;
        public static CommandBarButton MenuItem1;
        public static String _menu1Text = "Test Menu";
        public static String _menuItem1Text = "Get Quote of the Day";

        public static void CreateMenu(Outlook.Explorer currentExplorer)
        {
            try
            {
                // Remove any existing copy of the menu first
                DeleteMenu(currentExplorer);

                // Create the root of the menu on the fixed Menu Bar
                Menu1 = (CommandBarPopup)currentExplorer.
                    CommandBars["Menu Bar"].Controls.Add(
                    MsoControlType.msoControlPopup, Missing.Value,
                    Missing.Value, Missing.Value, Missing.Value);
                Menu1.Caption = _menu1Text;
                Menu1.Visible = true;

                // Create the menu item underneath the new menu
                MenuItem1 = (CommandBarButton)Menu1.Controls.Add(
                    MsoControlType.msoControlButton, Missing.Value,
                    Missing.Value, Missing.Value, Missing.Value);
                MenuItem1.Caption = _menuItem1Text;
                MenuItem1.Visible = true;
            }
            catch (Exception e)
            {
                Debug.Write(e.ToString());
            }
        }

        public static void DeleteMenu(Outlook.Explorer currentExplorer)
        {
            try
            {
                currentExplorer.CommandBars["Menu Bar"].
                    Controls[_menu1Text].Delete(Missing.Value);
            }
            catch (Exception)
            {
                // Menu did not exist
                Debug.WriteLine("Could not delete. 
" + "Assuming Menu does not exist");
            }
        }
    }
}

We will go through the code step-by-step to see what is happening. The required references for this piece of code are the using statements at the top of the class. System is inserted by default. System.Diagnostics is used to write to the debug window in Visual Studio .NET if an exception occurs. System.Reflection is used for values called across the COM interop layer. Microsoft.Office.Core is the main Office object model.

We also have a number of global variables defined that are worth taking a look at. Two items, Menu1 and MenuItem1, are of types derived from the Office library (CommandBarPopup and CommandBarButton). The other two global declarations are for the menu text that will be used (making them global makes it easy to change the menu items at a later date without altering the structure).

We then have two methods in this class, CreateMenu and DeleteMenu. CreateMenu actually calls DeleteMenu before it runs any other code. This is to avoid multiple menus being displayed if the menu does not get deleted automatically on shutdown. To create the root of the menu, we make a call to the CommandBars["Menu Bar"].Controls.Add(...) method. Menu Bar is a fixed menu in Office, which can be verified in any Office application by customizing the toolbars.

You may have noticed that we are using the Missing.Value parameter on a number of calls to the Office object model. This is a very useful value from the System.Reflection namespace, and it allows us to call methods that have optional parameters in the COM world. (COM does not support overloading through the interop layer, so we have to supply a Missing.Value parameter for each optional argument that we are not going to be sending across.)

The DeleteMenu method simply tries to delete the menu. If it cannot (that is, the menu does not exist; this could be the first time that the add-in has been run), then a message is written to debug and the error is otherwise ignored.
Now that we have that code written, all we have to do is change the Init class to call our new CreateMenu method. Make this change, then compile and run the add-in to see whether our menu is being created. You can do this by pressing F5 in Visual Studio .NET. If everything is working, this should compile correctly and the following should be displayed:

Figure 35. The new Test Menu in Outlook (click picture for larger image)

Everything looks good, but of course you will notice that if you click on the menu nothing happens. This is because we have no events registered with the menu yet.

Registering Menu Click Events

This is a small piece of code to register the events for the menu (that is, what happens when a user clicks on the menu item). Copy and paste this code into the Events.cs class file.

using System;
using System.Diagnostics;
using Microsoft.Office.Core;
using Outlook = Microsoft.Office.Interop.Outlook;

namespace MyAddIn
{
    public class Events
    {
        public static void RegisterMenuEvents(Outlook.Explorer currentExplorer)
        {
            try
            {
                // Find the current menu
                OfficeUI.Menu1 = (CommandBarPopup)currentExplorer.
                    CommandBars["Menu Bar"].
                    Controls[OfficeUI._menu1Text];

                // Find the first menu item
                OfficeUI.MenuItem1 = (CommandBarButton)OfficeUI.Menu1.
                    Controls[OfficeUI._menuItem1Text];

                OfficeUI.MenuItem1.Click +=
                    new _CommandBarButtonEvents_ClickEventHandler
                    (Actions.MenuItemClicked);
            }
            catch (Exception e)
            {
                Debug.Write(e.ToString());
            }
        }
    }
}

Again, we are using System.Diagnostics for debugging and the Microsoft.Office.Core library to access the Office menu controls in code. The RegisterMenuEvents method does three tasks. It first locates the Test Menu by indexing the CommandBars["Menu Bar"] object. Once we have a reference to the Test Menu, we find the control that matches the text of the menu item we are trying to locate. This gives us the exact CommandBarButton reference for the menu item that we need to work with.
Once we have this, we register the ClickEventHandler and set the target to the MenuItemClicked method of the Actions class, which we created earlier. We now need to add the call in the Init class to register the events. Add the Events.RegisterMenuEvents line as shown in the following code:

using System;

namespace MyAddIn
{
    public class Init
    {
        public static void Start()
        {
            // Create the menu structure
            OfficeUI.CreateMenu
                (Connect.applicationObject.ActiveExplorer());

            // Register the menu events
            Events.RegisterMenuEvents
                (Connect.applicationObject.ActiveExplorer());

            // Complete
        }
    }
}

And finally, make a single change to the Actions class by specifying the right parameters. The target method for the ClickEventHandler must accept a reference to the button that was clicked and a bool value. The reference to the object is useful as it allows us to make changes (for example, grey out the button if this is a one-time operation). The bool value allows us to send a continue or don't-continue message back to Outlook. For a custom menu operation this has no impact, but for other Outlook objects where Outlook may make its own changes after our method, this is a good way to cancel the operation on behalf of the user.

using System;
using System.Windows.Forms;
using System.Diagnostics;
using Microsoft.Office.Core;

namespace MyAddIn
{
    public class Actions
    {
        public static void MenuItemClicked
            (CommandBarButton cmb, ref bool action)
        {
            try
            {
                // The proxy class name comes from the Web reference
                // added earlier; QuoteService is a placeholder here.
                com.gotdotnet.QuoteService q =
                    new com.gotdotnet.QuoteService();

                // Replace the &quot; entities with " marks
                String qotd = q.GetQuote().Trim().Replace("&quot;", "\"");
                MessageBox.Show(qotd);
            }
            catch (Exception e)
            {
                Debug.Write(e.ToString());
            }
        }
    }
}

Copy and paste the above code and then compile. If everything works correctly, the call to the XML Web service should be made when the Get Quote of the Day item is selected from the Test Menu.

Note   There may be a slight delay between clicking on the menu item and retrieving the value from the XML Web service. 
This is normal, as obtaining the value from the external XML Web service usually takes some time.

Code Patterns

At this point you may be looking at the code and thinking a couple of things:

- Why have we split the creation of the menu from the registration of the menu events? Surely we could register the menu events at the same time we create the menu?
- Why are we using static declarations everywhere?

These are both great points, and ones that we will address now.

Why have we split the creation of the menu from the registration of the menu events? This really comes down to what was described in a previous section about new explorers being launched. Do you remember this diagram?

Figure 36. Multiple menu to explorer mapping (click picture for larger image)

At this point, imagine we have only the one explorer open. Our add-in has run successfully, we have created a menu, and we have registered the events. We can show it pictorially like this:

Figure 37. Single menu registered

This works, and everyone is happy. The user then double-clicks on the Outlook icon again. This launches a new explorer in the same Outlook.exe process. Take a look at what happens to our menu.

Figure 38. Menu registrations with second open explorer (click picture for larger image)

The menu already exists (as it was created by the add-in) and it will appear in the explorer (the menu only gets deleted when we delete it in code, or when the user resets the menu bar in Office). The problem is that although the menu is there, there are no events registered against it. The end result for the users is that they see a menu, but when they click on any of the options, nothing happens.

Give it a go with the example we built earlier. Open up a new instance of Outlook while the add-in is running (you can do this by clicking Start, then Run, and typing Outlook.exe). You should see that the Test Menu appears, but nothing happens when you click on the button.
As mentioned earlier, it is possible to trap the creation of a new explorer by handling the NewExplorer event on the application's Explorers collection. In our Connect.cs class we can create an event handler that traps a new instance of the explorer being created and calls a method of ours.

First, create a new declaration (after the application object declaration public static Outlook.Application applicationObject;):

public static Outlook.Explorers explorers;

Then, in the OnConnection method, add the following two lines of code at the end of the method:

explorers = applicationObject.Explorers;
explorers.NewExplorer +=
    new Outlook.ExplorersEvents_NewExplorerEventHandler(NewExplorer);

At this point, given the current state when the new explorer is launched, what do we need to do in the NewExplorer method?

Figure 39. Deciding what to do with the second explorer (click picture for larger image)

We don't need to create a new menu as one already exists, but we do need to reregister the events for the new explorer. Because we have split out the creation of the menu from the registration of the events, this becomes very easy for us. We can simply call the method that registers the events from the method that gets called when the new explorer is launched. Add the following new method in the Connect.cs class:

public void NewExplorer(Outlook.Explorer explorer)
{
    explorer.Activate();
    Events.RegisterMenuEvents(explorer);
}

(The Activate method ensures that the new window is fully loaded and visible before we start adding menu events.) You will also notice that we need to pass the instance of the new explorer to the RegisterMenuEvents method. This is the primary reason that I chose to have an Outlook.Explorer parameter on the RegisterMenuEvents method (and other methods in the example). After this code is run, the new instance of the explorer should look like the previous one.

Figure 40. Correctly registered menu in second explorer (click picture for larger image)

Compile and run this code, then try opening a new instance of the explorer (and more if you wish). You should now see that when the new explorer opens you are still able to launch the GetQuote function.

Why are we using static declarations everywhere? This really is a two-part answer.
The first reason is a personal preference. The process and execution of the add-in and the interaction with the user is typically a single-task operation. For example, the user clicks on a menu item, we follow a course of action, and at the end of it the user is returned to Outlook. There are very few cases where you can create a multitasked operation in Outlook.

Secondly, when building the add-in, it is always a very good idea to declare any object that is going to be around for the lifetime of the add-in as static. A great example of this (and one that stumped me for some time) is menu items. If menu items and events are declared non-static (that is, local to the methods in which they were created), they will still be created and will initially work. However, when garbage collection (the freeing up of objects that are no longer in use, but have not been correctly disposed) runs, it has a tendency to collect these objects—some of which we probably did not really want collected.

Confused? We will now take a look at what the user experiences. The add-in creates a menu and registers an event. The menu object is a non-static object and local to the method in which it was created. The user clicks on the menu item and a function runs. The main job of this function is to call an XML Web service, do some calculations, and return to Outlook. To do this, the add-in is going to require some memory. At this point, the garbage collector collects our menu and menu event (it does not have the ability to physically remove the menu from the explorer, but it does have the ability to unregister the event). The function executes and the user is returned to Outlook. The user tries to click on the menu item again to rerun the function. Nothing happens. There is no event registered to this menu any more, and there will not be one until the add-in restarts.
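In code terms, the difference is simply where the reference lives. The fragment below is an illustrative sketch rather than code from the project (AddButton stands in for the menu-building calls shown earlier): the first form leaves the button eligible for garbage collection once the method returns, while the second keeps it alive for the lifetime of the add-in.

```csharp
// Risky: nothing holds a reference once this method returns, so the
// garbage collector may later collect the button and unhook its event.
public static void CreateMenuLocally(Outlook.Explorer explorer)
{
    CommandBarButton menuItem = AddButton(explorer);
    menuItem.Click +=
        new _CommandBarButtonEvents_ClickEventHandler(Actions.MenuItemClicked);
}

// Safe: the static field keeps the button (and its event registration)
// alive for as long as the add-in is loaded.
public static CommandBarButton MenuItem1;

public static void CreateMenuStatically(Outlook.Explorer explorer)
{
    MenuItem1 = AddButton(explorer);
    MenuItem1.Click +=
        new _CommandBarButtonEvents_ClickEventHandler(Actions.MenuItemClicked);
}
```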
Creating and Working with Office Objects

Put the Quote of the Day in Today's Calendar

One of the benefits of working with the Office and (in this case) Outlook object model is that we have very fine-grained access to the objects presented within Office. As the developer of this example add-in, say you decided that once you get a quote back from the XML Web service, you would like to store it in the Outlook Calendar. To do this, you need to add a new class file called CalendarItems.cs. When you have done that, click Open, and then copy and paste the following code:

using System;
using System.Reflection;
using System.Diagnostics;
using Outlook = Microsoft.Office.Interop.Outlook;

namespace MyAddIn
{
    public class CalendarItems
    {
        public static void CreateAllDayEvent
            (String Subject, String Body)
        {
            try
            {
                // Set the namespace from
                // the existing Application object
                Outlook.NameSpace ns =
                    Connect.applicationObject.GetNamespace("MAPI");

                // Get the default calendar folder
                Outlook.MAPIFolder calendarFolder = ns.GetDefaultFolder
                    (Outlook.OlDefaultFolders.olFolderCalendar);

                // Create a new calendar item
                Outlook.AppointmentItem newItem =
                    (Outlook.AppointmentItem)calendarFolder.Items.Add
                    (Outlook.OlItemType.olAppointmentItem);

                // Set the properties of the item
                newItem.Start = System.DateTime.Now;
                newItem.AllDayEvent = true;
                newItem.Subject = Subject;
                newItem.Body = Body;

                // Save the item
                newItem.Save();
            }
            catch (Exception e)
            {
                Debug.WriteLine(e.ToString());
            }
        }
    }
}

As can be deduced, this code first locates the user's default Calendar folder. Within this folder, it creates a new appointment item, sets the properties of the item (including the Subject and Body fields, which are passed in as parameters), and saves the item. To be able to call this, we need to modify the Actions.cs class to call the calendar method when the quote is returned.
Insert the following line just after the point where the message box is being displayed:

CalendarItems.CreateAllDayEvent(qotd, "");

When the add-in is run, and a quote is obtained, the following all-day calendar event is created in the default calendar folder of the user:

Figure 41. Automatically generated calendar event (click picture for larger image)

This small piece of code shows a relatively simple example of creating a new Outlook item and setting some simple properties. The complete Office object model offers very granular access to many object types and items. It was never the intention of this paper to cover the entire object model; there are good programming books on Outlook available that you can go to for more information. As with most of the Outlook programming books available today, many of the examples are in VBA or VBScript—these will require converting to the appropriate calls in C#.

Performance Considerations When Working with Office Objects

Although programming against the Outlook and Office model is fairly transparent to the developer, there is still a COM interop layer that all calls need to go through from .NET to Office. The overhead that these calls add is minimal if only one or two calls are made, but can add up to significant performance overhead if used incorrectly. The recommendation, therefore, is to minimize the number of calls across the COM interop layer where possible. If some serious processing on an object is required, convert the Office objects into .NET data types, do the processing, and write any changes back. Dealing with objects of .NET data types within the add-in will be much quicker than stepping across the COM layer to read and write individual properties for each object.

Displaying and Using the Office Assistant

The following example shows how to make calls to the Office Assistant. Add a new class file called Assistant.cs. When you have done that, click Open.
Then copy and paste the following code:

using System;
using System.Reflection;
using Microsoft.Office.Core;

namespace MyAddIn
{
    public class Assistant
    {
        public static void DisplayMessage
            (String caption, String message)
        {
            // Turns on the assistant
            Connect.applicationObject.Assistant.Visible = true;
            Connect.applicationObject.Assistant.On = true;
            Connect.applicationObject.Assistant.AssistWithAlerts = true;
            Connect.applicationObject.Assistant.AssistWithHelp = true;

            // Displays the message
            Connect.applicationObject.Assistant.DoAlert
                (caption, message, MsoAlertButtonType.msoAlertButtonOK,
                MsoAlertIconType.msoAlertIconInfo,
                MsoAlertDefaultType.msoAlertDefaultFirst,
                MsoAlertCancelType.msoAlertCancelFirst, false);
        }
    }
}

We want to call the Office Assistant when we choose the GetQuote option from the Test Menu. To do this, in Actions.cs, change the following line:

MessageBox.Show(qotd);

to:

Assistant.DisplayMessage("Quote of the Day", qotd);

Compile and run the add-in.

Figure 42. Our old friend, Clippy

The Assistant also allows prompting for questions (yes or no responses). It is also possible to detect whether the user has the Assistant activated, which could be more appropriate than just turning on the Assistant as we have in our code.

Supporting Online and Offline Modes (for Office Applications that Support Online and Offline Functionality)

One of the valuable features of the add-in (and a prerequisite for building a smart client) is the ability to detect whether Outlook is running in online or offline mode. (Online means that Outlook is currently connected to the Microsoft Exchange Server; offline means that we are viewing mail from an .OST file.) The main Application object has a Session property. This particular object holds information about the current session—one of the items being whether Outlook is running in online or offline mode. To call this from code in our example, we can make a call to:

Connect.applicationObject.Session.Offline

This property returns a Boolean value (True if we are offline, False otherwise).
To integrate this functionality into our Quote of the Day example, simply copy and paste the following code over the existing Actions.cs class:

using System;
using System.Windows.Forms;
using System.Diagnostics;
using Microsoft.Office.Core;

namespace MyAddIn
{
    public class Actions
    {
        public static void MenuItemClicked
            (CommandBarButton cmb, ref bool action)
        {
            try
            {
                // First detect whether Outlook is running
                // Online or Offline
                if (Connect.applicationObject.Session.Offline)
                {
                    Assistant.DisplayMessage
                        ("Quote of the Day",
                        "I cannot retrieve the quote as " +
                        "you are currently running in Offline mode.");
                }
                else
                {
                    // The proxy class name comes from the Web reference
                    // added earlier; QuoteService is a placeholder here.
                    com.gotdotnet.QuoteService q =
                        new com.gotdotnet.QuoteService();

                    // Replace the &quot; entities with " marks
                    String qotd =
                        q.GetQuote().Trim().Replace("&quot;", "\"");
                    Assistant.DisplayMessage
                        ("Quote of the Day", qotd);
                    CalendarItems.CreateAllDayEvent(qotd, "");
                }
            }
            catch (Exception e)
            {
                Debug.Write(e.ToString());
            }
        }
    }
}

This changes the message to a "You are Offline" message if it detects that Outlook is running in offline mode. A further extension of the above code could be to put a message in the catch block displaying a default message saying that the XML Web service could not be reached.

Test this functionality by rebuilding and launching the add-in as normal, selecting the Work Offline option from the File menu, restarting Outlook in offline mode, and retrieving a quote.

Figure 43. Displaying offline status in Outlook

On a serious note, this can be very useful for developing smart client applications. This functionality immediately gives the developer a way of finding out whether Outlook is connected to the Exchange Server (and presumably the rest of the world). From there, the add-in can decide which functionality to support or not support based on this mode.
Displaying Windows Forms in Office

Displaying Windows Forms in Office is probably one of the most powerful features of writing an add-in, as it lets you create fully functional applications that could be developed stand-alone and then imported and used as Office add-ins. Here is an example with a few recommendations.

First, create a Dialog folder in the project structure as shown in Figure 44. This is purely a personal preference, but I find that it works well to keep the dialogs separate from the main code of the add-in. Do this by right-clicking the MyAddIn project within Solution Explorer in Visual Studio .NET. (You will have to exit debugging first before proceeding. In the Debug menu, click Stop Debugging.) Point to Add and click New Folder.

Figure 44. Recommended Dialog folder for Windows Forms

In the Dialog folder, create a new Windows Form. This Windows Form is going to ask users whether they want to save this quote in today's calendar, and will also let them modify certain elements of the calendar control as well. The Windows Form shown in Figure 45 was created using Visual Studio .NET. The code for this Windows Form will not be shown in this article, as it would greatly increase the article's length. Instead, we will leave it up to you as a developer to create a similar simple form.

Figure 45. Sample Windows Form

The form works by displaying the quote when it loads (the quote label component is made public, and the value is set just before the form is displayed). The user has the option of appending some additional text, and then either e-mailing it to another user, saving the quote as today's all-day event in their calendar, or just ignoring the whole thing and closing the form.

When dealing with forms in the add-in, it is recommended that you create a thin wrapper between all of your forms and the add-in code itself. This wrapper is called the DialogManager in this example.
Here is the code for a sample DialogManager using the above form (the form class and label names below, QuoteForm and quoteLabel, are whatever you called them when building the form):

using System;

namespace MyAddIn
{
    public class DialogManager
    {
        public static void ShowQuoteDialog(String quote)
        {
            Dialog.QuoteForm quoteForm = new Dialog.QuoteForm();
            quoteForm.quoteLabel.Text = quote;
            quoteForm.ShowDialog();
        }
    }
}

As you can see, all the DialogManager does is abstract the creation and display of each form. The main advantage of this is that the class can be used as an interface for calls outside of .NET. For example, you could have an add-in for Outlook that searches a series of contacts. This add-in may have its own set of Windows Forms to do this. It is possible that the add-in may not be launched from a menu, but instead from a button on a regular Outlook form (for example, from VBScript on a "New Contact" form). If this is the case, you are going to have to build a native-to-managed interface (to go from VBScript to .NET through COM interop). If all of your dialogs are abstracted through one class, this becomes an easy, one-time process instead of an interface consideration each time you create a new form.

To get this form displayed, in the Actions.cs class file, simply comment out the reference to the Office Assistant and instead call the form through the dialog manager:

// Assistant.DisplayMessage("Quote of the Day", qotd);
DialogManager.ShowQuoteDialog(qotd);

When the add-in is compiled and run, the form is displayed (instead of the Office Assistant).

Handling Exceptions

When coding in .NET, getting unhandled exceptions is generally undesirable; getting unhandled exceptions in add-ins is even more undesirable. There is one major problem that we want to draw your attention to when developing add-ins using the concepts described in this paper. Any unhandled exceptions that occur in your add-in at runtime will likely a) not get reported in Outlook and b) generally cause the add-in to be unloaded, rendering it useless. This is undesirable, as it means that even the most innocent NullReferenceException that does not get trapped by your code could result in your user being stranded with an add-in that is no longer working (until they next restart Outlook).
For the user, this will make menu items generally unresponsive, and any functionality or interfaces that the add-in provides will not work. One way to overcome this (and something we have tried to demonstrate in all of the code samples written for this document) is to use try…catch blocks around every piece of code. On some one-line methods this may sound like overkill, but it is a good habit to adopt from the beginning to avoid these problems.

Conclusion

Going back to our initial three points that described a smart client—can these tasks now be accomplished in Office?

- Leverage local processing power: We are certainly doing that. By running all of the code on the client (within Office), we have the ability to leverage the power of the machine.
- Consume XML Web services: Our first example showed how easy it was to consume an XML Web service and display the results within Office.
- Support online and offline scenarios: Outlook has great support for running and supporting both online and offline operations. This includes the ability to detect the state and, of course, store objects (cache data) locally if running offline.

As pointed out in the introduction section, to securely install and deploy managed COM add-ins, you will need to incorporate into your managed COM add-in project a small unmanaged proxy called a shim. For details, see the following articles:

- Using the COM Add-in Shim to Trust Outlook Add-ins Built with Visual Studio .NET, a must-read on how to use a shim in Outlook 2002.
- Building Outlook 2002 Add-ins with Visual Basic .NET
- The Knowledge Base article PRB: Visual Studio .NET Shared Add-in Is Not Displayed in Office COM Add-ins Dialog Box.

To conclude, I hope that this article has introduced you to some of the options for creating .NET smart client applications in Office XP.
With both the flexibility of calling any .NET class, project, or assembly, and access to an extensive Office object model, this can be a very powerful development environment to create compelling applications using .NET.
I am here to learn and share some new and cool stuff. Since I am a rookie, I will start with a problem I have.

When the program starts, you will be asked to input an amount of numbers to use for your calculations. This can be any number of numbers. No matter what number you put in, it should ask you for that amount of numbers. For instance, if you put in 100, you will be expected to put in 100 numbers. Once you put in the total numbers for use, your program should ask you for each number, while displaying how many numbers you have put in and how many you have left to go. At the end, the program should be able to show you the Sum, Difference, Product, and the average of all your numbers.

This is my code so far (it doesn't give me the proper result):

package javaapplication13;

import java.util.Scanner;

public class JavaApplication13 {

    /** Main method */
    public static void main(String[] args) {
        // Create a Scanner
        Scanner input = new Scanner(System.in);

        // Read an initial data
        System.out.print(
            "Enter a number of numbers to use in your calculation ");
        int data = input.nextInt();

        // Keep reading data until the input equals number of numbers
        int sum = 0;
        while (data != 0) {
            sum += data;

            // Read the next data
            System.out.print(
                "You have entered 0 numbers you have 5 numbers left to enter ");
            data = input.nextInt();

            System.out.println("All numbers added together equal " + sum);
            System.out.println("All numbers subtracted from each other equal " + sum);
            System.out.println("All numbers multiplied together equal " + sum);
            System.out.println("All numbers added together equal " + sum);
        }
    }
}
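Welcome! Two things trip up the posted version: the while (data != 0) loop stops when a zero is entered instead of stopping after the requested count, and all four result lines print the same sum. Here is one way the assignment could be structured. The names are my own, and "difference" is read as the first number minus all the rest, since the assignment leaves that open.

```java
import java.util.Scanner;

public class NumberStats {

    static long sum(int[] xs) {
        long s = 0;
        for (int x : xs) s += x;
        return s;
    }

    // First number minus all the rest
    static long difference(int[] xs) {
        if (xs.length == 0) return 0;
        long d = xs[0];
        for (int i = 1; i < xs.length; i++) d -= xs[i];
        return d;
    }

    static long product(int[] xs) {
        long p = 1;
        for (int x : xs) p *= x;
        return p;
    }

    static double average(int[] xs) {
        return xs.length == 0 ? 0.0 : (double) sum(xs) / xs.length;
    }

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);
        System.out.print("Enter how many numbers to use in your calculation: ");
        int count = input.nextInt();

        // Ask for exactly `count` numbers, showing the running progress
        int[] xs = new int[count];
        for (int i = 0; i < count; i++) {
            System.out.printf("You have entered %d numbers, %d left to go: ",
                              i, count - i);
            xs[i] = input.nextInt();
        }

        System.out.println("Sum: " + sum(xs));
        System.out.println("Difference: " + difference(xs));
        System.out.println("Product: " + product(xs));
        System.out.println("Average: " + average(xs));
    }
}
```

Keeping the four computations in their own methods also makes it easy to test each one separately.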
RFID-EPC (GIFF/RFID-EPC-0.002 - 08 Jul 2004 00:32:08 GMT)
The parsing and tag creation in this module are based on the specifications in EPCGlobal's *EPC Tag Data Standards Version 1.1 Rev.1.24*, from April 1st 2004 (although it doesn't appear to be a joke...). See <> for more in...

RFID-Matrics (GIFF/RFID-Matrics-0.002 - 08 Jul 2004 00:32:41 GMT)
Constants - Tag Type Constants: The constants "MATRICS_TAGTYPE_EPC", "MATRICS_TAGTYPE_MATRICS", and "MATRICS_TAGTYPE_OLDMATRICS" are recognized tag types. They can be imported into your namespace with the ":tagtypes" tag. Constructor - new: Creates a new *...
- RFID::Matrics::Reader - Abstract base class for a Matrics RFID reader

RFID-Alien (GIFF/RFID-Alien-0.003 - 26 Apr 2006 17:19:06 GMT)
This abstract base class implements the commands for communicating with an Alien reader. It is written according to the specifications in the *Alien Technology Reader Interface Guide v02.00.00*. It was tested with the original tag reader and also the...
First things first: download the installer. On Linux it is as easy as downloading the file, opening the terminal, and typing

~$ bash Anaconda3-<release>

from the folder you downloaded the installer into.

Welcome to Anaconda3 2019.07

In order to continue the installation process, please review the license agreement. Please, press ENTER to continue
>>>

then tap the Enter key. A wall of text will show up; you can read it (or not) and skip straight to accepting the license terms by typing yes.

Anaconda3 will now be installed into this location: /home/davide/anaconda3
- Press ENTER to confirm the location
- Press CTRL-C to abort the installation
- Or specify a different location below
[/home/davide/anaconda3] >>>

Once the installer has finished, it will ask you if you want to initialise Anaconda; type yes.

installation finished. Do you wish the installer to initialize Anaconda3 by running conda init? [yes|no] [no] >>>

All done! But wait... if you try to start the Python REPL by typing python in your terminal, you'll still get the default Python shipped with your system, or an error if no Python was shipped with your OS. We need to reload the shell configuration that conda init just edited, so that Anaconda is on our PATH:

~$ source .bashrc
~$ python
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

The Anaconda distribution is now running on your system, well done! Now let's check that a few more things are working correctly. Type the following in your terminal:

~$ jupyter notebook

You should see Jupyter Notebook starting up in a new tab in your default browser. Create a new Jupyter Notebook document and type the following:

from sklearn.ensemble import RandomForestRegressor
?RandomForestRegressor

The first line imports a regressor from scikit-learn, and the second line displays that class's signature and documentation thanks to the ? (use ?? to see its source code). Finally, we want to install Tensorflow.
Tensorflow is Google's open source framework for building and evaluating neural network systems; it has libraries for Python and JS, and a Lite version for mobile and IoT. To install it, type

pip install tensorflow

and in your newly created notebook add the following:

import tensorflow as tf
print(tf.__version__)

You should see the TensorFlow version printed out, 1.14.0 at the time of writing. We are ready to go! The Anaconda distribution installed the conda package manager, Python 3, Jupyter Notebook, and a bunch of libraries we'll need to build our models. Well done so far, and let's start building our first model!
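The manual import checks above can also be scripted. Here is a small sketch (standard library only, so it runs in any Python) that reports which of the stack's key packages are importable without actually loading those heavyweight modules:

```python
import importlib.util

# find_spec() locates a package without importing it, so this check is
# fast even for large packages like tensorflow.
packages = ["numpy", "sklearn", "tensorflow"]
available = {p: importlib.util.find_spec(p) is not None for p in packages}

for name, ok in available.items():
    print(f"{name}: {'found' if ok else 'MISSING'}")
```

If anything prints MISSING, the corresponding install step above needs another look.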
On Wednesday 25 June 2008 09:02:48 pm Zhao Yakui wrote:
> On Wed, 2008-06-25 at 09:08 -0600, Bjorn Helgaas wrote:
> > On Tuesday 24 June 2008 07:37:37 pm Zhao Yakui wrote:
> > > On Tue, 2008-06-24 at 13:52 +0200,.
> > > In fact this issue is related with the following factors:
> > > a. when acpi is disabled, OS won't initialize the ACPI mutex, which
> > > is accessed by many ACPI interface functions. For example:
> > > acpi_walk_namespace, acpi_install_fixed_event_handler.
> > > b. When acpi is disabled, some drivers will call the ACPI interface
> > > functions. For example: The acpi_walk_namespace is called in
> > > dock_init/bay_init.
> >
> > I think most current uses of acpi_walk_namespace() are indications
> > that the ACPI or PNP core is missing something.
>
> I don't think so. The acpi_walk_namespace is used to enumerate the ACPI
> tree and execute some specific operations. For example: Add the device
> notification function for some type of device; call the INI method for
> all the device.

There are exceptions, and obviously acpi_walk_namespace() will be
needed some places.

One example where I think acpi_walk_namespace() should not be used
is to register notification functions for device addition/removal.
I think the ACPI core should be handling those notify events and
turning them into add()/remove() calls to the driver.

> > In dock_init() and bay_init(), it's used to bind a driver to a
> > device. I think it would be better if we could figure out how to
> > use the usual acpi_bus_register_driver() interface. Actually, it
> > looks like this is already 90% done: acpi_dock_match() does the
> > same thing as is_dock(), so it looks like dock_init() could easily
> > be converted to register as a driver for ACPI_DOCK_HID.
>
> Maybe what you said is reasonable if the dock/bay device exists and is
> added to Linux ACPI device tree. But if the status of bay/dock device
> doesn't exist, it won't be added into the Linux ACPI device tree. In
> such case the dock/bay driver won't be loaded for it.
> So it will be reasonable to enumerate the acpi tree to install the
> notification function for the dock device so that OS can receive the
> notification event when the dock device is hotplugged.

If the bay/dock device doesn't exist, we shouldn't need a driver
for it. The normal scenario for non-ACPI drivers is that we load
a driver when a device appears. That doesn't work very well in
this case because the ACPI core is missing the "TBD: Handle device
insertion/removal" stuff I mentioned earlier.

I know it's not very useful for me to talk about this without
providing any patches, so I'll shut up now.

Bjorn
Hello again. Yesterday I wrote a function that performs the old adler32 hash on given data. My function works perfectly well and so far has always produced the correct hash. The problem is big files. I made a file of exactly 1 MB and hashed it with various commercial tools. They took no more than 5-10 secs. Mine takes 1 hr and 5 mins....

Since this is a fairly simple hash algo, I was hoping that someone here would recognize any optimizations that could be made to my code. I wrote it from scratch and hopefully right off the bat I am not too inefficient!

Code:
unsigned int adler32(unsigned char *data, unsigned int len)
{
    unsigned int A = 1, B = 0;
    unsigned int i, j;

    for (i = 0; i < len; i++)
    {
        A += data[i];

        for (j = i + 1; j > 0; j--)
            B += data[j-1];
        B++;

        A %= 65521;
        B %= 65521;
    }

    return (B * 65536) + A;
}
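Here is where the time goes: the inner for (j...) loop re-sums the whole prefix data[0..i] for every byte, making the function O(n^2) — roughly 5 * 10^11 additions for a 1 MB file, which matches the hour-long runtime. But B is just the running sum of successive values of A, so it can be maintained incrementally in O(n). A standard one-pass formulation:

```c
#include <stddef.h>

/* One-pass Adler-32: A is the running byte sum (starting at 1), and B
 * accumulates A after every byte. Reducing mod 65521 on each step keeps
 * both values safely inside an unsigned int. */
unsigned int adler32_fast(const unsigned char *data, size_t len)
{
    unsigned int A = 1, B = 0;
    size_t i;

    for (i = 0; i < len; i++) {
        A = (A + data[i]) % 65521;
        B = (B + A) % 65521;
    }

    return (B << 16) | A;
}
```

A further common speedup is deferring the % 65521: zlib processes 5552 bytes between reductions, the largest block count that cannot overflow 32 bits. But the version above is already linear and will hash a 1 MB file in milliseconds.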
http://cboard.cprogramming.com/c-programming/105012-adler32-optimization.html
A quick way to render your form and get some user input is to incorporate React Hook Form in your Next app. Forms have always seemed to have many moving parts to me, but using React Hook Form simplified the process.

To get started, enter this into the command line:

npm install react-hook-form

Once you've done that, write this line at the top of your Form.js file:

import { useForm } from 'react-hook-form'

This imports the function useForm(). Next, we will focus on three variables that useForm() returns: register, handleSubmit, and errors.

Now we'll set up a basic form before adding in the variables from useForm(). The register variable will handle tracking changes on the input fields of your form. Pass in {register} as the value for the ref property of the input, like so. The form needs an onSubmit property so we can send the data from the form. The value of onSubmit will be handleSubmit, which will take a callback function as its argument.

For demo purposes, we will console log our form data to ensure we are getting it when we click submit. In your browser, open up the console, fill out the form, and click submit. You should see an object with the form data in the console. At this point, temporarily remove errors as one of the variables retrieved from useForm(), otherwise it will error out.

Validations

React Hook Form makes it quick and simple to implement validation in your forms. You can include errors again as one of the variables retrieved from calling useForm(). In your register value, pass in an object containing key/value pairs with the proper validations. We want to ensure a user types in a password, and that it is of sufficient length. For the user to know the requirements for the password, we need to notify them using errors. Your form will display an error message if a password has not been entered or if it was too short.

There's much more to React Hook Form and I encourage using the resources below!
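The walkthrough above can be sketched in code like this (a sketch following the v6-era React Hook Form API the post describes, where register is passed as a ref; the field names, validation rules, and error message are illustrative, not from the original post):

```jsx
import React from 'react';
import { useForm } from 'react-hook-form';

export default function Form() {
  const { register, handleSubmit, errors } = useForm();

  // For demo purposes, just log the collected form data.
  const onSubmit = (data) => console.log(data);

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input name="email" placeholder="Email" ref={register} />
      <input
        name="password"
        type="password"
        placeholder="Password"
        ref={register({ required: true, minLength: 8 })}
      />
      {errors.password && (
        <p>A password of at least 8 characters is required.</p>
      )}
      <button type="submit">Submit</button>
    </form>
  );
}
```

With errors destructured from useForm(), submitting an empty or too-short password renders the message instead of calling onSubmit.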
Resources

Here are the resources I used to learn about React Hook Form.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/sfrasica/intro-to-react-hook-form-g9g
This is the base class for vector data providers. More...

#include <qgsvectordataprovider.h>

This is the base class for vector data providers. Data providers abstract the retrieval and writing (where supported) of feature and attribute information from a spatial datasource.
Definition at line 49 of file qgsvectordataprovider.h.

Enumeration with capabilities that providers might implement.
Definition at line 61 of file qgsvectordataprovider.h.

Constructor of the vector provider.
Definition at line 30 of file qgsvectordataprovider.cpp.

Destructor.
Definition at line 40 of file qgsvectordataprovider.cpp.

Adds new attributes.
Definition at line 66 of file qgsvectordataprovider.cpp.

Adds a list of features.
Definition at line 54 of file qgsvectordataprovider.cpp.

Return list of indexes to fetch all attributes in nextFeature().
Definition at line 241 of file qgsvectordataprovider.cpp.

Returns a list of available encodings.
Definition at line 467 of file qgsvectordataprovider.cpp.

Returns a bitmask containing the supported capabilities. Note, some capabilities may change depending on whether a spatial filter is active on this provider, so it may be prudent to check this value per intended operation.
Definition at line 107 of file qgsvectordataprovider.cpp.

Returns the above in friendly format.
Definition at line 142 of file qgsvectordataprovider.cpp.

Changes attribute values of existing features.
Definition at line 78 of file qgsvectordataprovider.cpp.

Changes geometries of existing features.
Definition at line 90 of file qgsvectordataprovider.cpp.

Clear recorded errors.
Definition at line 530 of file qgsvectordataprovider.cpp.

Definition at line 374 of file qgsvectordataprovider.cpp.

Definition at line 452 of file qgsvectordataprovider.cpp.

Create an attribute index on the datasource.
Definition at line 101 of file qgsvectordataprovider.cpp.

Creates a spatial index on the datasource (if supported by the provider type).
Definition at line 96 of file qgsvectordataprovider.cpp.

Return a short comment for the data that this provider is providing access to (e.g. the comment for postgres table).
Definition at line 49 of file qgsvectordataprovider.cpp.

Returns the default value for field specified by fieldId.
Definition at line 84 of file qgsvectordataprovider.cpp.

Deletes existing attributes.
Definition at line 72 of file qgsvectordataprovider.cpp.

Deletes one or more features.
Definition at line 60 of file qgsvectordataprovider.cpp.

Returns true if the provider is strict about the type of inserted features (e.g. no multipolygon in a polygon layer).
Definition at line 338 of file qgsvectordataprovider.h.

Get encoding which is used for accessing data.
Definition at line 132 of file qgsvectordataprovider.cpp.

Returns the possible enum values of an attribute. Returns an empty stringlist if a provider does not support enum types or if the given attribute is not an enum type.
Definition at line 204 of file qgsvectordataprovider.h.

Get recorded errors.
Definition at line 540 of file qgsvectordataprovider.cpp.

Number of features in the layer.
Definition at line 131 of file qgsvectordataprovider.h.

Returns the index of a field name or -1 if the field does not exist.
Definition at line 214 of file qgsvectordataprovider.cpp.

Return a map where the key is the name of the field and the value is its index.
Definition at line 228 of file qgsvectordataprovider.cpp.

Definition at line 379 of file qgsvectordataprovider.cpp.

Get feature type.

Query the provider for features specified in request.

Provider has errors to report.
Definition at line 535 of file qgsvectordataprovider.cpp.

It returns false by default. Must be implemented by providers that support saving and loading styles to db returning true.
Definition at line 363 of file qgsvectordataprovider.h.

Returns the maximum value of an attribute. Default implementation walks all numeric attributes and caches minimal and maximal values. If provider has facilities to retrieve maximal value directly, override this function.
Definition at line 335 of file qgsvectordataprovider.cpp.

Returns the minimum value of an attribute. Default implementation walks all numeric attributes and caches minimal and maximal values. If provider has facilities to retrieve minimal value directly, override this function.
Definition at line 319 of file qgsvectordataprovider.cpp.

Returns the names of the supported types.
Definition at line 246 of file qgsvectordataprovider.cpp.

Return list of indexes to names for QgsPalLabeling fix.
Definition at line 308 of file qgsvectordataprovider.h.

Return list of indexes of fields that make up the primary key.
Definition at line 303 of file qgsvectordataprovider.h.

Definition at line 545 of file qgsvectordataprovider.cpp.

Set encoding used for accessing data from layer.
Definition at line 113 of file qgsvectordataprovider.cpp.

Returns the permanent storage type for this layer as a friendly name.
Definition at line 44 of file qgsvectordataprovider.cpp.

Check if provider supports type of field.
Definition at line 251 of file qgsvectordataprovider.cpp.

Returns the transaction this data provider is included in, if any.
Definition at line 370 of file qgsvectordataprovider.h.

Return unique values of an attribute. Default implementation simply iterates the features.
Definition at line 351 of file qgsvectordataprovider.cpp.

Definition at line 53 of file qgsvectordataprovider.h.

Bitmask of all provider's editing capabilities.
Definition at line 101 of file qgsvectordataprovider.h.

List of attribute indices to fetch with nextFeature calls.
Definition at line 383 of file qgsvectordataprovider.h.

Old-style mapping of index to name for QgsPalLabeling fix.
Definition at line 391 of file qgsvectordataprovider.h.

Definition at line 376 of file qgsvectordataprovider.h.

Encoding.
Definition at line 380 of file qgsvectordataprovider.h.

The names of the providers native types.
Definition at line 386 of file qgsvectordataprovider.h.
https://api.qgis.org/api/2.8/classQgsVectorDataProvider.html
JSON processing with Java

In this article I'll introduce you to JSON parsing solutions in Java. There are not as many built-in options to choose from as with XML, and that particular solution is cumbersome too, so I will take a look at Gson and Jackson besides the Java API for JSON Processing. I won't write about what JSON is; I assume you know this much and just want to know how to deal with this kind of data in your Java project.

The example file

For this article I'll stick with a simple example, a list of books with some properties:

{
    "books": [
        {
            "id": "_001",
            "title": "Beginning XML, 4th Edition",
            "author": "David Hunter",
            "copyright": 2007,
            "publisher": "Wrox",
            "isbn": "0470114878"
        },
        {
            "id": "_002",
            "title": "XML in a Nutshell, Third Edition",
            "author": "O'Reilly Media, Inc",
            "copyright": 2004,
            "publisher": "O'Reilly Media, Inc",
            "isbn": "0596007647"
        },
        {
            "id": "_003",
            "title": "Learning XML, Second Edition",
            "author": "Erik Ray",
            "copyright": 2003,
            "publisher": "O'Reilly Media, Inc.",
            "isbn": "0596004206"
        },
        {
            "id": "_004",
            "title": "XML processing and website scraping in Java",
            "author": "Gabor Laszlo Hajba",
            "copyright": 2016,
            "publisher": "LeanPub"
        }
    ]
}

These values will be read into instances of the following class:

/**
 * This is the sample class we will fill with data from the books.json file for the examples.
 *
 * @author GHajba
 */
public class Book {
    private String id;
    private String title;
    private String author;
    private int copyright;
    private String publisher;
    private String isbn;

    // getters and setters omitted

    @Override
    public String toString() {
        final StringBuilder stringRepresentation = new StringBuilder();
        final String newLine = System.getProperty("line.separator");
        stringRepresentation.append(this.getClass().getName() + " {" + newLine);
        stringRepresentation.append("    ID: " + this.id + newLine);
        stringRepresentation.append("    Title: " + this.title + newLine);
        stringRepresentation.append("    Copyright: " + this.copyright + newLine);
        stringRepresentation.append("    Publisher: " + this.publisher + newLine);
        stringRepresentation.append("    ISBN: " + this.isbn + newLine);
        stringRepresentation.append("}");
        return stringRepresentation.toString();
    }
}

And naturally there is a Publications wrapper class which contains all the Books:

/**
 * Sample parent class to extract Book objects from JSON.
 *
 * @author GHajba
 */
public class Publications {
    List<Book> books;

    // getters and setters omitted

    @Override
    public String toString() {
        return "Publications: [\n" + this.books.stream().map(Objects::toString).collect(Collectors.joining("\n")) + "\n]";
    }
}

Note that this class is only needed for the Gson and Jackson examples. For the Java API the Book class fits our needs.

Java API for JSON Processing

Let's start right away with the built-in solution. Actually it is not as built-in as for XML parsing—here you need either a default implementation residing in the Glassfish project (so you need an extra library) or you have to write your own solution. Neither is ideal, but for the sake of simplicity let's stick with the default implementation. I won't go into detail about how to add a library to your project—and you will need to add the other two libraries later on too. Let's see how it works with the built-in JSON parsing of Java.
The core concept is to read some input (it can be a file, an input stream or an already available String object) which is presented in JSON format and to convert the contents into Java objects. To achieve this with the standard API we need the following:

- a javax.json.JsonReader obtained from javax.json.Json.createReader() (name it reader)
- the whole javax.json.JsonObject obtained from reader.readObject() (name it jsonObject)

Now that we have all our contents in a nice JsonObject structure, it is time to extract them into usable objects like Publications and Books. For this we have to write our own parsing system where we are aware of the content structure of the data we got. In the example case we know we have an array of elements and this array is called "books". So we instruct the API to fetch this array:

JsonArray jsonArray = jsonObject.getJsonArray("books");

Now we have to iterate over this array and convert its contents to Book objects:

for (JsonObject bookObject : jsonArray.getValuesAs(JsonObject.class)) {
    Book b = new Book();
    b.setId(bookObject.getString("id"));
    // other setters omitted
    System.out.println(b);
}

As you can see in the example above, we have to get the elements of the array as JsonObjects, so we have to use the specific method JsonArray.getValuesAs(Class<T>). In the body of the loop you can see that we create a new instance of the Book class and set every parameter through a setter method. An alternative would be to have a constructor which takes all the parameters extracted, but it wouldn't be as readable, so I stick with the setter version.

What about isbn?

As you may have noticed, the property isbn is not present in the extraction loop. This is because it is not present in the fourth book, and this leads to a little problem with the standard API. If you have a JsonObject which does not contain a given property you try to access, you will get a NullPointerException. This is bad, but we can easily fix this problem:

if (bookObject.containsKey("isbn")) {
    b.setIsbn(bookObject.getString("isbn"));
}

Well, this solution is quite good for now, but what about other cases when more fields are optional in the JSON content? In that case you have to verify the existence of each (optional) property, or you can write some generic methods which do the null-check prior to requesting the property.

Gson

Gson is the JSON parser for Java developed by Google. The main goal of Gson is to provide simple ways to convert objects to JSON and vice versa. Because of this it is really simple to parse JSON input in Java. To achieve this you only need the following:

- a new com.google.gson.GsonBuilder() (name it builder)
- a com.google.gson.Gson object obtained with builder.create() (name it gson)

To read the contents of a file into an object, just create a java.io.Reader (naturally you can utilize a java.lang.String too and do not need a file in every case) and call com.google.gson.Gson.fromJson(Reader, Class<Publications>). In my example case the whole process would look like this:

/**
 * This is an example class to load JSON data with Gson.
 *
 * @author GHajba
 */
public class GsonExample {
    public static void main(String... args) {
        final GsonBuilder builder = new GsonBuilder();
        final Gson gson = builder.create();
        final Publications publications = gson.fromJson(new FileReader("books.json"), Publications.class);
        System.out.println(publications);
    }
}

The Publications class is the wrapper object which holds the java.util.List of Book objects which are read from the JSON file. Naturally there is some error handling which you need to take care of, but I omit it because it wouldn't make the code clearer.

The solution is simple and clear. However, sometimes you need custom mappings to tell Gson how an object should behave when written to a file or established from JSON data, but this is out of the scope of the current article.
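For reference, the standard-API fragments above can be assembled into a single class along these lines (a sketch: it assumes the Book class from earlier, plus the javax.json API and its Glassfish implementation on the classpath; the JavaApiExample class name is mine):

```java
import java.io.FileReader;
import java.io.Reader;

import javax.json.Json;
import javax.json.JsonArray;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class JavaApiExample {
    public static void main(String... args) throws Exception {
        try (Reader source = new FileReader("books.json");
             JsonReader reader = Json.createReader(source)) {
            JsonObject jsonObject = reader.readObject();
            JsonArray jsonArray = jsonObject.getJsonArray("books");
            for (JsonObject bookObject : jsonArray.getValuesAs(JsonObject.class)) {
                Book b = new Book();
                b.setId(bookObject.getString("id"));
                b.setTitle(bookObject.getString("title"));
                b.setAuthor(bookObject.getString("author"));
                b.setCopyright(bookObject.getInt("copyright"));
                b.setPublisher(bookObject.getString("publisher"));
                // isbn is optional in the sample data, so guard against a missing key
                if (bookObject.containsKey("isbn")) {
                    b.setIsbn(bookObject.getString("isbn"));
                }
                System.out.println(b);
            }
        }
    }
}
```

Wrapping the JsonReader in try-with-resources closes the underlying file, and the containsKey guard covers the optional isbn discussed above.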
Jackson

Just like Gson, Jackson aims to be the easiest and most usable JSON library for Java developers. To map JSON data into your class structure you will need the following:

- a new com.fasterxml.jackson.databind.ObjectMapper() (name it mapper)

And that's it. Now you can call the readValue method on mapper with the same parameters as previously (a Reader and the target class to map your data to). In my example this would look like this (again, I omit error handling to keep the code readable):

/**
 * Example class to load JSON data with Jackson.
 *
 * @author GHajba
 */
public class JacksonExample {
    public static void main(String... args) {
        final ObjectMapper mapper = new ObjectMapper();
        final Publications publications = mapper.readValue(new FileReader("books.json"), Publications.class);
        System.out.println(publications);
    }
}

As you can see, using Jackson is as simple as it is with Gson. One difference is that for the mapping to work you need at least getters for each property in your class which you want Jackson to fill in. This means that if you have a class where the id property only exists for internal verification / just to display it in the string representation of your object, you will need a public getter too. If you do not provide a getter you may encounter an exception like this one:

com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "id" (class hu.japy.dev.json.Book), not marked as ignorable (5 known properties: "title", "author", "publisher", "isbn", "copyright"])

Above there was no publicly available getter for the id attribute.

Another difference is that you have more options for the input source: you can use Strings, Files, Readers, and InputStreams. But this is out of the scope of this article and you can look into the various ways in the JavaDoc.

Conclusion

As you can see, Java provides a default solution for parsing JSON data into your project, but it is rather bothersome and you need to implement a great deal to get things rolling. However, there are some slick third-party libraries you can utilize which reduce the amount of required code and provide you the necessary objects. So I suggest that you use either Gson or Jackson for JSON parsing until Java provides a leaner way to handle this problem.
http://www.discoversdk.com/blog/json-processing-with-java
Unity 2020.3 is an LTS release, containing features released in 2020.1 and 2020.2, and is supported for two years. See the LTS release page for more information and other available LTS installers.

To find out more about the new features, changes, and improvements to Unity 2020 releases, see:

2020.3 Release Notes
2020.2 Release Notes
2020.1 Release Notes

If you are upgrading existing projects from an earlier version of Unity, read the Upgrade Guides for information about how your project might be affected. Here are the LTS specific upgrade guides: see what's changed in Unity 2020 LTS since 2019 LTS and view the documentation for the affected areas.

For extra flexibility, you can now enter Prefab Mode without leaving the context of your scene and edit Prefabs with the background grayed out. You can still choose to edit Prefabs in isolation.

The Package Manager has several design updates, including new user interface (UI) iconography, improved layout, and better distinctions between information for currently installed packages and for available updates.

A new axis conversion setting lets you fix axis import issues without having to reopen meshes in 3D modeling software. You can now import custom properties for objects originating from SketchUp. When importing PNG files, you have the option to ignore gamma correction (which can help with color consistency across different platforms). The Asset Import Pipeline v2 is now the default asset pipeline.
Focused Inspector windows make it easier to see the Inspector details of a selected GameObject. To open a floating Inspector window for the selected GameObject or asset, right-click and select Properties. You can open multiple Focused Inspector windows at the same time, so you can assess or reference multiple objects while making changes to the Scene. You can also choose to focus on a specific component of a GameObject, which requires less screen space.

Developers can now expose long-running, asynchronous operations with the new Progress API and the Background Tasks window. These tools work together to display task progress in its own Editor window. You can monitor subtasks, filter tasks by status, view global progress, and more.

The local administrator dashboard for the Unity Accelerator lets you configure the tool for your team's needs, assess its health, and access logs and metrics.

The Addressable Asset System provides an easy way to load assets by "address." It handles asset management overhead by simplifying content pack creation and deployment. We've added several new features to the package, including significant user experience updates in the Unity Editor to improve the development workflows, such as sub-object support and runtime catalog updating.

Unity Hub version 2.4.2 includes improved workflows for managing projects, downloads, Unity Editor versions, and modules.

QuickSearch 2.0 is now available, with even more search tokens and the ability to provide contextual completion when typing queries. You can now also search through all the Scenes and Prefabs of your project at once rather than being limited to just the open Scene.
With Editor Coroutines, now out of Preview, you can start the execution of iterator methods within the Editor, similar to how Coroutines inside MonoBehaviour scripts are handled during runtime.

You can now add custom charts to the Profiler window to get more performance insights and context for either existing or user-generated Profiler statistics.

Arrays and lists in the Inspector are now reorderable; use the NonReorderable attribute to disable this function if you prefer.

You can now use Camera Stacking to layer the output of multiple Cameras and create a single combined output. This lets you create effects such as a 3D model in a 2D user interface (UI), or the cockpit of a vehicle.

Lighting Settings Assets let users change settings that are used by multiple Scenes simultaneously. This means that modifications to multiple properties can quickly propagate through your projects, which is ideal for lighting artists who might need to make global changes across several Scenes. It's now much quicker to swap between lighting settings, for example, when moving between preview and production-quality bakes.

Setting up models for lightmapping is now much simpler. To simplify the process of finding the required size for the pack margin at import, Unity now offers the Calculate Margin Method in the model importer. Here you can specify the minimum lightmap resolution at which the model is used, and the minimum scale. From this input, Unity's unwrapper calculates the required pack margin so no lightmaps overlap.
We have implemented a better decorrelation method for the CPU and GPU Lightmappers. These decorrelation improvements are active by default and do not need any user input. The result is lightmaps that converge upon the noiseless result in less time and display fewer artifacts.

A new ray-culling method considers how meaningful a path is to Global Illumination in the Scene. Each time a ray bounces on a dark surface, the chances of that path ending early increase. Culling rays in this way reduces the overall bake time with generally little effect on lighting quality.

Previously cookies were limited to real-time Lights only. Unity now supports cookies in the CPU and GPU Lightmappers. This means that baked and mixed-mode Lights also consider the influence of the cookie in attenuating both direct and indirect lighting.

The Contributors and Receivers Scene View shows which objects influence Global Illumination (GI) within the Scene. This also makes it easier to see whether GI is received from lightmaps or Light Probes.

The Universal Render Pipeline (URP) has new features that bring it closer to parity with the Built-in Render Pipeline. Screen Space Ambient Occlusion (SSAO) improves the visual quality of ambient lighting in your scenes. You can lower your build data size and improve loading times with the new Complex Lit Shader.
You can use Clear Coat maps to simulate and mimic materials such as car paint.

The High Definition Render Pipeline (HDRP) now includes better tools to help you debug lighting and improvements to the decal system. Path tracing supports fog absorption and subsurface scattering for organic materials, there is a new depth of field mode for producing path-traced images with high-quality defocus blur, and more.

A new HDRP sample scene is available that is a great starting point for projects aiming at high-end graphics. This template includes multiple setups of physically based light intensities, and more, to help you start creating realistic scenes with HDRP. Download it from the Unity Hub.

To improve the performance of animated Sprite deformation at runtime, install the Burst Compiler and Collections packages via the Package Manager. This allows the 2D Animation package to use Burst compilation and low-level array utilities to speed up Unity's processing of Sprite mesh deformation.

A new "Stretched" option is available for Corners to connect adjacent edges without custom Corner Sprites. This option builds geometry to connect adjacent edges without the need to specify custom Corner Sprites in the Sprite Shape Profile. Scripting API support for the new corner mode will be added in a later release.

Sprite Shape mesh baking lets you store mesh data while editing so it can be reloaded at runtime, avoiding unnecessary runtime mesh generation.
This release features many updates to 2D Physics, including improvements to the Rigidbody2D XY Position Constraint, which makes a Rigidbody completely solid under any force and has almost zero runtime cost. This feature resulted from changes to Box2D physics. The 2D Physics Examples project has been updated with many Scenes to demonstrate all 2D physics features.

Cinemachine is a suite of tools for dynamic, smart, codeless cameras that let the best shots emerge based on scene composition and interaction. This lets you tune, iterate, experiment and create camera behaviors in real time. With 2020.1, version 2.5 of Cinemachine is now a verified package and recommended for productions of any scale.

Shader Graph includes several new features that improve the workflow for technical artists, such as better Graph Editor performance. See the Shader Graph Upgrade guide for further guidance.

VFX Graph updates include Output Events, allowing users to synchronize lights, sound, physical reactions, or gameplay based on spawn events via a delegate interface in C#.

The Animation Rigging package is now verified. It enables procedural control of animated skeletons at runtime and authoring of new animation clips in the Unity Editor.
For Global Illumination, both the GPU Lightmapper and the CPU Lightmapper now have a higher bounce limit. In addition, they now use Blue Noise Sampling for improved lightmap quality and have several other improvements.

The Input System package is now verified for production and offers a stable solution for most input needs.

AR Foundation, our multi-platform framework for AR development, now includes support for meshing. AR experiences blend much more seamlessly into the real world because virtual content can be occluded with real-world objects and realistically interact with the physical environment.

Samsung Adaptive Performance 2.0 comes with new Sample Projects to showcase different features, including Variable Refresh Rate, Scalers, and the Adaptive Performance Simulator extension, to emulate Adaptive Performance on any device.

You can now also target Mac hardware's next evolution with native support for Apple Silicon for the standalone player.

We've reduced the capture memory overhead and capture times of the Memory Profiler Preview package. You can access GPU profile data through the Recorder API. Use the Sampler API to collect the data and visualize it in your own runtime performance stats overlay.

You can now launch the Profiler as a standalone app. This moves the tool to a separate process outside of Unity, reducing the performance overhead when profiling the Editor and creating cleaner profile data.

The Visual Studio integration is now a package and we will not develop the built-in support further. The package also includes new features and improvements, like a faster startup of Visual Studio.

The new C# debugging workflow makes the Editor run with C# code optimization in Release Mode by default, improving performance when running your project in Play Mode.
To debug your project, you must enable Debug Mode before entering Play Mode. To switch between code optimization modes without restarting the Editor, select the Debug Button at the bottom right of the Unity Editor Status Bar.

We've improved support for serializing fields of generic types. Previously, if you had a generic type (such as class MyClass<T>), and you wanted to make a field using that type, you had to define a non-generic subclass of it (like class MyClassInt : MyClass<int>). We've removed this limitation, so you no longer need to declare the generic subclass, and you can use the generic type directly.

We are evolving the Burst Compiler as a development tool, adding native debugging capabilities. Using a native debugger attached to Unity, you can now set breakpoints, skip over and step into code. You can also inspect and navigate call stacks, variables, autos and threads.

Unity now offers a -deterministic compilation option when compiling C# scripts. This option lets you avoid unnecessary recompiling of assembly definition (.asmdef) references if the public metadata for the assembly does not change when compiling scripts for the Editor. This is particularly useful for reducing iteration time when you're making changes to assemblies that have many direct and/or indirect references. Watch the 'Improve compilation times with deterministic C# compilation by default in Unity 2020.2' video to find out more.

Unity now supports the newest C# 8 features and enhancements, excluding default interface methods. This includes nullable reference types, enabling the compiler to show a warning when you attempt to assign null to a reference type variable. Switch expressions with pattern matching let you write conditional code in a more streamlined way.
Namespaces in C# provide an efficient way to organize your code and avoid class naming collisions with other packages and libraries. Root Namespace is now available as a new field in the asmdef inspector and is used to automatically add a namespace when creating a new script in Unity, Visual Studio, and Rider. Remember to update the Visual Studio and Rider packages to the latest version if you plan to use this functionality. We've improved the build compilation time. If you make changes that don't involve code, for example, to materials, shaders, or prefabs, the IL2CPP conversion from .NET assemblies to C++ is now skipped entirely when building a player. We've fixed the inconsistent Time.deltaTime values that led to stuttering object movements during gameplay. We have also refactored the TimeManager interface to make frame time calculations more stable and provide smoother object movement when the game is running at a stable frame rate. These time stability improvements are supported on various platforms. Unity 2020.2 features several significant optimizations, including to Nested Prefabs, resulting in dramatically faster sorting and faster lookups. Searches in the Editor's scripted importers registration function have been measured to be up to 800 times faster. We've also refactored Camera.main, reducing the time it takes to query it by hundreds of milliseconds in some projects.
Configurable Enter Play Mode is no longer an experimental feature. This lets you disable either, or both, of the "Domain Reload" and "Scene Reload" actions to speed up entering Play Mode. With Editor Coroutines, now out of Preview, you can start the execution of iterator methods within the Editor, similar to how Coroutines inside MonoBehaviour scripts are handled during runtime. Unity Linker performs static analysis to strip managed code. It also recognizes a number of attributes and lets you annotate dependencies where it can't identify them. The tool receives API updates to match Mono IL Linker. Unity Linker can detect some simple reflection patterns, reducing the need to use link.xml files. The compilation pipeline now supports Roslyn analyzers. This lets you run C# code analyzers asynchronously in the background inside the Unity Editor without interrupting your iteration workflow. You can also run them synchronously from the command line. Unity Safe Mode improves how Unity behaves when opening a project that has script compilation errors. If the Editor detects compilation errors at startup, you will now be prompted to enter Safe Mode. This presents you with an environment designed for resolving them, so that you can quickly return your project to a functional state, without waiting for unnecessary imports of your project's assets. This feature will simplify and speed up the process of upgrading a project to a new Unity version, and it will help teams working on large projects by reducing the number of cases in which the Library folder contains incorrect import artifacts.
https://docs.unity3d.com/Manual/WhatsNew2020LTS.html
Provided by: manpages-dev_5.05-1_all

NAME
       telldir - return current location in directory stream

SYNOPSIS
       #include <dirent.h>

       long telldir(DIR *dirp);

   Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

       telldir():
           _XOPEN_SOURCE
               || /* Glibc since 2.19: */ _DEFAULT_SOURCE
               || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE

ATTRIBUTES
       ┌──────────┬───────────────┬─────────┐
       │Interface │ Attribute     │ Value   │
       ├──────────┼───────────────┼─────────┤
       │telldir() │ Thread safety │ MT-Safe │
       └──────────┴───────────────┴─────────┘

CONFORMING TO
       POSIX.1-2001, POSIX.1-2008, 4.3BSD.

NOTES
       In glibc up to version 2.1.1, the return type of telldir() was off_t.
       POSIX.1-2001 specifies long, and this is the type used since glibc
       2.1.2.

       In early filesystems, the value returned by telldir() was a simple
       file offset within a directory. Modern filesystems use tree or hash
       structures, rather than flat tables, to represent directories. On
       such filesystems, the value returned by telldir() (and used
       internally by readdir(3)) is a "cookie" that is used by the
       implementation to derive a position within a directory. Application
       programs should treat this strictly as an opaque value, making no
       assumptions about its contents.

SEE ALSO
       closedir(3), opendir(3), readdir(3), rewinddir(3), scandir(3),
       seekdir(3)

COLOPHON
       This page is part of release 5.05 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and
       the latest version of this page, can be found at.

2017-09-15                                                       TELLDIR(3)
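The page has no EXAMPLES section, so here is a minimal sketch of the intended usage pattern: pairing telldir() with seekdir(3) to save a position in a directory stream and later jump back to it. The helper name reread_after_first is mine, not part of any API, and error handling is abbreviated.

```c
#include <dirent.h>
#include <stddef.h>

/* Consume the first entry of dirp, save the stream position with
   telldir(), read one more entry, then seekdir() back to the saved
   position and re-read. Returns the name of the entry found after
   restoring the position, or NULL on error. The value from telldir()
   is treated strictly as an opaque cookie, as the NOTES advise. */
const char *reread_after_first(DIR *dirp) {
    if (readdir(dirp) == NULL)        /* consume the first entry */
        return NULL;
    long pos = telldir(dirp);         /* remember where we are */
    struct dirent *second = readdir(dirp);
    if (second == NULL)
        return NULL;
    seekdir(dirp, pos);               /* jump back to the saved cookie */
    struct dirent *again = readdir(dirp);
    return again ? again->d_name : NULL;
}
```

Any directory works as input, since every directory contains at least the "." and ".." entries; the name returned is valid until the next readdir(3) or closedir(3) on the same stream.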
http://manpages.ubuntu.com/manpages/focal/man3/telldir.3.html
Recently I wrote a lot of Jasmine tests for a small personal project. It took me some time until I finally got the feeling of getting the tests right. After this, I always have a hard time when I am switching back to JUnit tests. For some reason JUnit tests no longer felt that good and I wondered if it would be possible to write JUnit tests in a way similar to Jasmine tests. Jasmine is a popular Behavior Driven Development testing framework for JavaScript that is inspired by RSpec (a Ruby BDD testing framework). A simple Jasmine test looks like this: describe('AudioPlayer tests', function() { var player; beforeEach(function() { player = new AudioPlayer(); }); it('should not play any track after initialization', function() { expect(player.isPlaying()).toBeFalsy(); }); ... }); The describe() function call in the first line creates a new test suite using the description AudioPlayer tests. Inside a test suite we can use it() to create tests (called specs in Jasmine). Here, we check if the isPlaying() method of AudioPlayer returns false after creating a new AudioPlayer instance. The same test written in JUnit would look like this: public class AudioPlayerTest { private AudioPlayer audioPlayer; @Before public void before() { audioPlayer = new AudioPlayer(); } @Test public void notPlayingAfterInitialization() { assertFalse(audioPlayer.isPlaying()); } ... } Personally I find the Jasmine test much more readable compared to the JUnit version. In Jasmine the only noise that does not contribute anything to the test are the braces and the function keyword. Everything else contains some useful information. When reading the JUnit test we can ignore keywords like void, access modifiers (private, public, ..), annotations and irrelevant method names (like the name of the method annotated with @Before). In addition to that, test descriptions encoded in camel case method names are not that great to read. 
Besides increased readability I really like Jasmine's ability to nest test suites. Let's look at an example that is a bit longer: describe('AudioPlayer tests', function() { var player; beforeEach(function() { player = new AudioPlayer(); }); describe('when a track is played', function() { var track; beforeEach(function() { track = new Track('foo/bar.mp3') player.play(track); }); it('is playing a track', function() { expect(player.isPlaying()).toBeTruthy(); }); it('returns the track that is currently played', function() { expect(player.getCurrentTrack()).toEqual(track); }); }); ... }); Here we create a sub test suite that is responsible for testing the behavior when a Track is played by AudioPlayer. The inner beforeEach() call is used to set up a common precondition for all tests inside the sub test suite. In contrast, sharing common preconditions for multiple (but not all) tests in JUnit can sometimes become cumbersome. Of course duplicating the setup code in tests is bad, so we create extra methods for this. To share data between setup and test methods (like the track variable in the example above) we then have to use member variables (with a much larger scope). Additionally we should make sure to group tests with similar preconditions together to avoid the need of reading the whole test class to find all relevant tests for a certain situation. Or we can split things up into multiple smaller classes. But then we might have to share setup code between these classes... If we look at Jasmine tests we see that the structure is defined by calling global functions (like describe(), it(), ...) and passing descriptive strings and anonymous functions. With Java 8 we got Lambdas, so we can do the same, right? 
Yes, we can write something like this in Java 8: public class AudioPlayerTest { private AudioPlayer player; public AudioPlayerTest() { describe("AudioPlayer tests", () -> { beforeEach(() -> { player = new AudioPlayer(); }); it("should not play any track after initialization", () -> { expect(player.isPlaying()).toBeFalsy(); }); }); } } If we assume for a moment that describe(), beforeEach(), it() and expect() are statically imported methods that take appropriate parameters, this would at least compile. But how should we run this kind of test? For interest I tried to integrate this with JUnit and it turned out that this is actually very easy (I will write about this in the future). The result so far is a small library called Oleaster. A test written with Oleaster looks like this: import static com.mscharhag.oleaster.runner.StaticRunnerSupport.*; ... @RunWith(OleasterRunner.class) public class AudioPlayerTest { private AudioPlayer player; { describe("AudioPlayer tests", () -> { beforeEach(() -> { player = new AudioPlayer(); }); it("should not play any track after initialization", () -> { assertFalse(player.isPlaying()); }); }); } } Only a few things changed compared to the previous example. Here, the test class is annotated with the JUnit @RunWith annotation. This tells JUnit to use Oleaster when running this test class. The static import of StaticRunnerSupport.* gives direct access to static Oleaster methods like describe() or it(). Also note that the constructor was replaced by an instance initializer and the Jasmine-like matcher is replaced by a standard JUnit assertion. There is actually one thing that is not so great compared to the original Jasmine tests. It is the fact that in Java a variable needs to be effectively final to use it inside a lambda expression. This means that the following piece of code does not compile: describe("AudioPlayer tests", () -> { AudioPlayer player; beforeEach(() -> { player = new AudioPlayer(); }); ... 
}); The assignment to player inside the beforeEach() lambda expression will not compile (because player is not effectively final). In Java we have to use instance fields in situations like this (as shown in the example above). In case you worry about reporting: Oleaster is only responsible for collecting test cases and running them. The whole reporting is still done by JUnit. So Oleaster should cause no problems with tools and libraries that make use of JUnit reports. For example the following screenshot shows the result of a failed Oleaster test in IntelliJ IDEA. If you wonder how Oleaster tests look in practice you can have a look at the tests for Oleaster (which are written in Oleaster itself). You can find the GitHub test directory here. Feel free to add any kind of feedback by commenting on this post or by creating a GitHub issue.
https://www.mscharhag.com/java/oleaster-jasmine-junit-tests
10: Modules

Write code that is easy to delete, not easy to extend.

The ideal program has a crystal-clear structure. The way it works is easy to explain, and each part plays a well-defined role. A typical real program grows organically. New pieces of functionality are added as new needs come up. Structuring—and preserving structure—is additional work. It’s work that will pay off only in the future, the next time someone works on the program. So it is tempting to neglect it and allow the parts of the program to become deeply entangled. This causes two practical issues. First, understanding such a system is hard. If everything can touch everything else, it is difficult to look at any given piece in isolation. You are forced to build up a holistic understanding of the entire thing. Second, if you want to use any of the functionality from such a program in another situation, rewriting it may be easier than trying to disentangle it from its context. The phrase “big ball of mud” is often used for such large, structureless programs. Everything sticks together, and when you try to pick out a piece, the whole thing comes apart, and your hands get dirty.

Modules

Modules are an attempt to avoid these problems. A module is a piece of program that specifies which other pieces it relies on and which functionality it provides for other modules to use (its interface). Module interfaces have a lot in common with object interfaces, as we saw them in Chapter 6. They make part of the module available to the outside world and keep the rest private. By restricting the ways in which modules interact with each other, the system becomes more like LEGO, where pieces interact through well-defined connectors, and less like mud, where everything mixes with everything. The relations between modules are called dependencies. When a module needs a piece from another module, it is said to depend on that module. 
When this fact is clearly specified in the module itself, it can be used to figure out which other modules need to be present to be able to use a given module and to automatically load dependencies. To separate modules in that way, each needs its own private scope. Just putting your JavaScript code into different files does not satisfy these requirements. The files still share the same global namespace. They can, intentionally or accidentally, interfere with each other’s bindings. And the dependency structure remains unclear. We can do better, as we’ll see later in the chapter. Designing a fitting module structure for a program can be difficult. In the phase where you are still exploring the problem, trying different things to see what works, you might want to not worry about it too much since it can be a big distraction. Once you have something that feels solid, that’s a good time to take a step back and organize it.

Packages

One of the advantages of building a program out of separate pieces, and being actually able to run those pieces on their own, is that you might be able to apply the same piece in different programs. But how do you set this up? Say I want to use the parseINI function from Chapter 9 in another program. If it is clear what the function depends on (in this case, nothing), I can just copy all the necessary code into my new project and use it. But then, if I find a mistake in that code, I’ll probably fix it in whichever program I’m working with at the time and forget to also fix it in the other program. Once you start duplicating code, you’ll quickly find yourself wasting time and energy moving copies around and keeping them up-to-date. That’s where packages come in. A package is a chunk of code that can be distributed (copied and installed). It may contain one or more modules and has information about which other packages it depends on. 
A package also usually comes with documentation explaining what it does so that people who didn’t write it might still be able to use it. When a problem is found in a package or a new feature is added, the package is updated. Now the programs that depend on it (which may also be packages) can upgrade to the new version. Working in this way requires infrastructure. We need a place to store and find packages and a convenient way to install and upgrade them. In the JavaScript world, this infrastructure is provided by NPM. NPM is two things: an online service where one can download (and upload) packages and a program (bundled with Node.js) that helps you install and manage them. At the time of writing, there are more than half a million different packages available on NPM. A large portion of those are rubbish, I should mention, but almost every useful, publicly available package can be found on there. For example, an INI file parser, similar to the one we built in Chapter 9, is available under the package name ini. Chapter 20 will show how to install such packages locally using the npm command line program. Having quality packages available for download is extremely valuable. It means that we can often avoid reinventing a program that 100 people have written before and get a solid, well-tested implementation at the press of a few keys. Software is cheap to copy, so once someone has written it, distributing it to other people is an efficient process. But writing it in the first place is work, and responding to people who have found problems in the code, or who want to propose new features, is even more work. By default, you own the copyright to the code you write, and other people may use it only with your permission. But because some people are just nice and because publishing good software can help make you a little bit famous among programmers, many packages are published under a license that explicitly allows other people to use it. 
Most code on NPM is licensed this way. Some licenses require you to also publish code that you build on top of the package under the same license. Others are less demanding, just requiring that you keep the license with the code as you distribute it. The JavaScript community mostly uses the latter type of license. When using other people’s packages, make sure you are aware of their license.

Improvised modules

Until 2015, the JavaScript language had no built-in module system. Yet people had been building large systems in JavaScript for more than a decade, and they needed modules. So they designed their own module systems on top of the language. You can use JavaScript functions to create local scopes and objects to represent module interfaces. This is a module for going between day names and numbers (as returned by Date’s getDay method). Its interface consists of weekDay.name and weekDay.number, and it hides its local binding names inside the scope of a function expression that is immediately invoked.

const weekDay = function() {
  const names = ["Sunday", "Monday", "Tuesday", "Wednesday",
                 "Thursday", "Friday", "Saturday"];
  return {
    name(number) { return names[number]; },
    number(name) { return names.indexOf(name); }
  };
}();

console.log(weekDay.name(weekDay.number("Sunday")));
// → Sunday

This style of modules provides isolation, to a certain degree, but it does not declare dependencies. Instead, it just puts its interface into the global scope and expects its dependencies, if any, to do the same. For a long time this was the main approach used in web programming, but it is mostly obsolete now. If we want to make dependency relations part of the code, we’ll have to take control of loading dependencies. Doing that requires being able to execute strings as code. JavaScript can do this.

Evaluating data as code

There are several ways to take data (a string of code) and run it as part of the current program. 
The most obvious way is the special operator eval, which will execute a string in the current scope. This is usually a bad idea because it breaks some of the properties that scopes normally have, such as it being easily predictable which binding a given name refers to.

const x = 1;
function evalAndReturnX(code) {
  eval(code);
  return x;
}

console.log(evalAndReturnX("var x = 2"));
// → 2
console.log(x);
// → 1

A less scary way of interpreting data as code is to use the Function constructor. It takes two arguments: a string containing a comma-separated list of argument names and a string containing the function body. It wraps the code in a function value so that it gets its own scope and won’t do odd things with other scopes.

let plusOne = Function("n", "return n + 1;");
console.log(plusOne(4));
// → 5

This is precisely what we need for a module system. We can wrap the module’s code in a function and use that function’s scope as module scope.

CommonJS

The most widely used approach to bolted-on JavaScript modules is called CommonJS modules. Node.js uses it, and it is the system used by most packages on NPM. The main concept in CommonJS modules is a function called require. When you call this with the module name of a dependency, it makes sure the module is loaded and returns its interface. Because the loader wraps the module code in a function, modules automatically get their own local scope. All they have to do is call require to access their dependencies and put their interface in the object bound to exports. This example module provides a date-formatting function. It uses two packages from NPM—ordinal to convert numbers to strings like "1st" and "2nd", and date-names to get the English names for weekdays and months. It exports a single function, formatDate, which takes a Date object and a template string. The template string may contain codes that direct the format, such as YYYY for the full year and Do for the ordinal day of the month. 
You could give it a string like "MMMM Do YYYY" to get output like “November 22nd 2017”.

const ordinal = require("ordinal");
const {days, months} = require("date-names");

exports.formatDate = function(date, format) {
  return format.replace(/YYYY|M(MMM)?|Do?|dddd/g, tag => {
    if (tag == "YYYY") return date.getFullYear();
    if (tag == "M") return date.getMonth();
    if (tag == "MMMM") return months[date.getMonth()];
    if (tag == "D") return date.getDate();
    if (tag == "Do") return ordinal(date.getDate());
    if (tag == "dddd") return days[date.getDay()];
  });
};

The interface of ordinal is a single function, whereas date-names exports an object containing multiple things—days and months are arrays of names. Destructuring is very convenient when creating bindings for imported interfaces. The module adds its interface function to exports so that modules that depend on it get access to it. We could use the module like this:

const {formatDate} = require("./format-date");
console.log(formatDate(new Date(2017, 9, 13), "dddd the Do"));
// → Friday the 13th

We can define require, in its most minimal form, like this:

require.cache = Object.create(null);
function require(name) {
  if (!(name in require.cache)) {
    let code = readFile(name);
    let module = {exports: {}};
    require.cache[name] = module;
    let wrapper = Function("require, exports, module", code);
    wrapper(require, module.exports, module);
  }
  return require.cache[name].exports;
}

In this code, readFile is a made-up function that reads a file and returns its contents as a string. Standard JavaScript provides no such functionality—but different JavaScript environments, such as the browser and Node.js, provide their own ways of accessing files. The example just pretends that readFile exists. To avoid loading the same module multiple times, require keeps a store (cache) of already loaded modules. When called, it first checks if the requested module has been loaded and, if not, loads it. 
This involves reading the module’s code, wrapping it in a function, and calling it. The interface of the ordinal package we saw before is not an object but a function. A quirk of the CommonJS modules is that, though the module system will create an empty interface object for you (bound to exports), you can replace that with any value by overwriting module.exports. This is done by many modules to export a single value instead of an interface object. By defining require, exports, and module as parameters for the generated wrapper function (and passing the appropriate values when calling it), the loader makes sure that these bindings are available in the module’s scope. The way the string given to require is translated to an actual filename or web address differs in different systems. When it starts with "./" or "../", it is generally interpreted as relative to the current module’s filename. So "./format-date" would be the file named format-date.js in the same directory. When the name isn’t relative, Node.js will look for an installed package by that name. In the example code in this chapter, we’ll interpret such names as referring to NPM packages. We’ll go into more detail on how to install and use NPM modules in Chapter 20. Now, instead of writing our own INI file parser, we can use one from NPM.

const {parse} = require("ini");
console.log(parse("x = 10\ny = 20"));
// → {x: "10", y: "20"}

ECMAScript modules

CommonJS modules work quite well and, in combination with NPM, have allowed the JavaScript community to start sharing code on a large scale. But they remain a bit of a duct-tape hack. The notation is slightly awkward—the things you add to exports are not available in the local scope, for example. And because require is a normal function call taking any kind of argument, not just a string literal, it can be hard to determine the dependencies of a module without running its code. This is why the JavaScript standard from 2015 introduces its own, different module system. 
It is usually called ES modules, where ES stands for ECMAScript. The main concepts of dependencies and interfaces remain the same, but the details differ. For one thing, the notation is now integrated into the language. Instead of calling a function to access a dependency, you use a special import keyword.

import ordinal from "ordinal";
import {days, months} from "date-names";

export function formatDate(date, format) { /* ... */ }

Similarly, the export keyword is used to export things. It may appear in front of a function, class, or binding definition (let, const, or var). An ES module’s interface is not a single value but a set of named bindings. The preceding module binds formatDate to a function. When you import from another module, you import the binding, not the value, which means an exporting module may change the value of the binding at any time, and the modules that import it will see its new value. When there is a binding named default, it is treated as the module’s main exported value. If you import a module like ordinal in the example, without braces around the binding name, you get its default binding. Such modules can still export other bindings under different names alongside their default export. To create a default export, you write export default before an expression, a function declaration, or a class declaration.

export default ["Winter", "Spring", "Summer", "Autumn"];

It is possible to rename imported bindings using the word as.

import {days as dayNames} from "date-names";
console.log(dayNames.length);
// → 7

Another important difference is that ES module imports happen before a module’s script starts running. That means import declarations may not appear inside functions or blocks, and the names of dependencies must be quoted strings, not arbitrary expressions. At the time of writing, the JavaScript community is in the process of adopting this module style. But it has been a slow process. 
It took a few years, after the format was specified, for browsers and Node.js to start supporting it. And though they mostly support it now, this support still has issues, and the discussion on how such modules should be distributed through NPM is still ongoing. Many projects are written using ES modules and then automatically converted to some other format when published. We are in a transitional period in which two different module systems are used side by side, and it is useful to be able to read and write code in either of them.

Building and bundling

In fact, many JavaScript projects aren’t even, technically, written in JavaScript. There are extensions, such as the type checking dialect mentioned in Chapter 8, that are widely used. People also often start using planned extensions to the language long before they have been added to the platforms that actually run JavaScript. To make this possible, they compile their code, translating it from their chosen JavaScript dialect to plain old JavaScript—or even to a past version of JavaScript—so that old browsers can run it. Including a modular program that consists of 200 different files in a web page produces its own problems. If fetching a single file over the network takes 50 milliseconds, loading the whole program takes 10 seconds, or maybe half that if you can load several files simultaneously. That’s a lot of wasted time. Because fetching a single big file tends to be faster than fetching a lot of tiny ones, web programmers have started using tools that roll their programs (which they painstakingly split into modules) back into a single big file before they publish it to the Web. Such tools are called bundlers. And we can go further. Apart from the number of files, the size of the files also determines how fast they can be transferred over the network. Thus, the JavaScript community has invented minifiers. 
These are tools that take a JavaScript program and make it smaller by automatically removing comments and whitespace, renaming bindings, and replacing pieces of code with equivalent code that take up less space. As a result, the code you run is often not the code as it was written.

Module design

Structuring programs is one of the subtler aspects of programming. Any nontrivial piece of functionality can be modeled in various ways. Good program design is subjective—there are trade-offs involved and matters of taste. The best way to learn the value of well-structured design is to read or work on a lot of programs and notice what works and what doesn’t. Don’t assume that a painful mess is “just the way it is”. You can improve the structure of almost everything by putting more thought into it. One aspect of module design is ease of use. If you are designing something that is intended to be used by multiple people—or even by yourself, in three months when you no longer remember the specifics of what you did—it is helpful if your interface is simple and predictable. That may mean following existing conventions. A good example is the ini package. This module imitates the standard JSON object by providing parse and stringify (to write an INI file) functions, and, like JSON, converts between strings and plain objects. So the interface is small and familiar, and after you’ve worked with it once, you’re likely to remember how to use it. Even if there’s no standard function or widely used package to imitate, you can keep your modules predictable by using simple data structures and doing a single, focused thing. Many of the INI-file parsing modules on NPM provide a function that directly reads such a file from the hard disk and parses it, for example. This makes it impossible to use such modules in the browser, where we don’t have direct file system access, and adds complexity that would have been better addressed by composing the module with some file-reading function. 
This points to another helpful aspect of module design—the ease with which something can be composed with other code. Focused modules that compute values are applicable in a wider range of programs than bigger modules that perform complicated actions with side effects. An INI file reader that insists on reading the file from disk is useless in a scenario where the file’s content comes from some other source. Relatedly, stateful objects are sometimes useful or even necessary, but if something can be done with a function, use a function. Several of the INI file readers on NPM provide an interface style that requires you to first create an object, then load the file into your object, and finally use specialized methods to get at the results. This type of thing is common in the object-oriented tradition, and it’s terrible. Instead of making a single function call and moving on, you have to perform the ritual of moving your object through various states. And because the data is now wrapped in a specialized object type, all code that interacts with it has to know about that type, creating unnecessary interdependencies. Often defining new data structures can’t be avoided—only a few basic ones are provided by the language standard, and many types of data have to be more complex than an array or a map. But when an array suffices, use an array. An example of a slightly more complex data structure is the graph from Chapter 7. There is no single obvious way to represent a graph in JavaScript. In that chapter, we used an object whose properties hold arrays of strings—the other nodes reachable from that node. There are several different pathfinding packages on NPM, but none of them uses this graph format. They usually allow the graph’s edges to have a weight, which is the cost or distance associated with it. That isn’t possible in our representation. For example, there’s the dijkstrajs package. 
A well-known approach to pathfinding, quite similar to our findRoute function, is called Dijkstra’s algorithm, after Edsger Dijkstra, who first wrote it down. The js suffix is often added to package names to indicate the fact that they are written in JavaScript.

This dijkstrajs package uses a graph format similar to ours, but instead of arrays, it uses objects whose property values are numbers—the weights of the edges. So if we wanted to use that package, we’d have to make sure that our graph was stored in the format it expects. All edges get the same weight since our simplified model treats each road as having the same cost (one turn).

const {find_path} = require("dijkstrajs");

let graph = {};
for (let node of Object.keys(roadGraph)) {
  let edges = graph[node] = {};
  for (let dest of roadGraph[node]) {
    edges[dest] = 1;
  }
}

console.log(find_path(graph, "Post Office", "Cabin"));
// → ["Post Office", "Alice's House", "Cabin"]

This can be a barrier to composition—when various packages are using different data structures to describe similar things, combining them is difficult. Therefore, if you want to design for composability, find out what data structures other people are using and, when possible, follow their example.

Summary

Modules provide structure to bigger programs by separating the code into pieces with clear interfaces and dependencies. The interface is the part of the module that’s visible from other modules, and the dependencies are the other modules that it makes use of.

Because JavaScript historically did not provide a module system, the CommonJS system was built on top of it. Then at some point it did get a built-in system, which now coexists uneasily with the CommonJS system.

A package is a chunk of code that can be distributed on its own. NPM is a repository of JavaScript packages. You can download all kinds of useful (and useless) packages from it.
Exercises

A modular robot

These are the bindings that the project from Chapter 7 creates:

roads
buildGraph
roadGraph
VillageState
runRobot
randomPick
randomRobot
mailRoute
routeRobot
findRoute
goalOrientedRobot

If you were to write that project as a modular program, what modules would you create? Which module would depend on which other module, and what would their interfaces look like? Which pieces are likely to be available prewritten on NPM? Would you prefer to use an NPM package or write them yourself?

Here’s what I would have done (but again, there is no single right way to design a given module):

The code used to build the road graph lives in the graph module. Because I’d rather use dijkstrajs from NPM than our own pathfinding code, we’ll make this build the kind of graph data that dijkstrajs expects. This module exports a single function, buildGraph. I’d have buildGraph accept an array of two-element arrays, rather than strings containing hyphens, to make the module less dependent on the input format.

The roads module contains the raw road data (the roads array) and the roadGraph binding. This module depends on ./graph and exports the road graph.

The VillageState class lives in the state module. It depends on the ./roads module because it needs to be able to verify that a given road exists. It also needs randomPick. Since that is a three-line function, we could just put it into the state module as an internal helper function. But randomRobot needs it too. So we’d have to either duplicate it or put it into its own module. Since this function happens to exist on NPM in the random-item package, a good solution is to just make both modules depend on that. We can add the runRobot function to this module as well, since it’s small and closely related to state management. The module exports both the VillageState class and the runRobot function.
Finally, the robots, along with the values they depend on such as mailRoute, could go into an example-robots module, which depends on ./roads and exports the robot functions. To make it possible for goalOrientedRobot to do route-finding, this module also depends on dijkstrajs.

By offloading some work to NPM modules, the code became a little smaller. Each individual module does something rather simple and can be read on its own. Dividing code into modules also often suggests further improvements to the program’s design. In this case, it seems a little odd that the VillageState and the robots depend on a specific road graph. It might be a better idea to make the graph an argument to the state’s constructor and make the robots read it from the state object—this reduces dependencies (which is always good) and makes it possible to run simulations on different maps (which is even better).

Is it a good idea to use NPM modules for things that we could have written ourselves? In principle, yes—for nontrivial things like the pathfinding function you are likely to make mistakes and waste time writing them yourself. For tiny functions like random-item, writing them yourself is easy enough. But adding them wherever you need them does tend to clutter your modules.

However, you should also not underestimate the work involved in finding an appropriate NPM package. And even if you find one, it might not work well or may be missing some feature you need. On top of that, depending on NPM packages means you have to make sure they are installed, you have to distribute them with your program, and you might have to periodically upgrade them. So again, this is a trade-off, and you can decide either way depending on how much the packages help you.

Roads module

Write a CommonJS module, based on the example from Chapter 7, that contains the array of roads and exports the graph data structure representing them as roadGraph.
It should depend on a module ./graph, which exports a function buildGraph that is used to build the graph. This function expects an array of two-element arrays (the start and end points of the roads).

// Add dependencies and exports

const roads = [
  "Alice's House-Bob's House",   "Alice's House-Cabin",
  "Alice's House-Post Office",   "Bob's House-Town Hall",
  "Daria's House-Ernie's House", "Daria's House-Town Hall",
  "Ernie's House-Grete's House", "Grete's House-Farm",
  "Grete's House-Shop",          "Marketplace-Farm",
  "Marketplace-Post Office",     "Marketplace-Shop",
  "Marketplace-Town Hall",       "Shop-Town Hall"
];

Since this is a CommonJS module, you have to use require to import the graph module. That was described as exporting a buildGraph function, which you can pick out of its interface object with a destructuring const declaration.

To export roadGraph, you add a property to the exports object. Because buildGraph takes a data structure that doesn’t precisely match roads, the splitting of the road strings must happen in your module.

Circular dependencies

A circular dependency is a situation where module A depends on B, and B also, directly or indirectly, depends on A. Many module systems simply forbid this because whichever order you choose for loading such modules, you cannot make sure that each module’s dependencies have been loaded before it runs.

CommonJS modules allow a limited form of cyclic dependencies. As long as the modules do not replace their default exports object and don’t access each other’s interface until after they finish loading, cyclic dependencies are okay.

The require function given earlier in this chapter supports this type of dependency cycle. Can you see how it handles cycles? What would go wrong when a module in a cycle does replace its default exports object?

The trick is that require adds modules to its cache before it starts loading the module. That way, if any require call made while it is running tries to load it, it is already known, and the current interface will be returned, rather than starting to load the module once more (which would eventually overflow the stack).
If a module overwrites its module.exports value, any other module that has received its interface value before it finished loading will have gotten hold of the default interface object (which is likely empty), rather than the intended interface value.
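The mechanism can be seen in a self-contained sketch. This is not the chapter’s exact require function—module code is stored in an in-memory map of strings here so the example runs on its own—but the caching logic is the same: the (empty) exports object is registered in the cache before the module’s code runs, so a cyclic require call finds it instead of recursing forever.

```javascript
// Two modules that depend on each other, stored as source strings.
const files = {
  "a": `exports.name = "a"; exports.b = require("b");`,
  "b": `exports.name = "b"; exports.a = require("a");`
};

const cache = Object.create(null);
function require(name) {
  if (!(name in cache)) {
    let module = {exports: {}};
    cache[name] = module; // registered *before* the code runs
    // Evaluate the module's code with its own require/exports/module.
    let wrapper = Function("require, exports, module", files[name]);
    wrapper(require, module.exports, module);
  }
  return cache[name].exports;
}

let a = require("a");
console.log(a.b.name);    // → "b"
console.log(a.b.a === a); // → true
```

While b is loading, require("a") returns a’s half-finished exports object (it has name but not yet b). That is exactly why modules in a cycle must not use each other’s interface until loading has finished—and why replacing module.exports breaks the scheme, since the cached object would no longer be the one the module ends up exporting.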
$ gcc -o hello hello.m -lobjc
hello.m:1:2: warning: #import is obsolete, use an #ifndef wrapper in the header file

With the GNU compiler you should use "-Wno-import" in order to not get the warning... If you want one good book to learn Obj-C, buy "Programming in Objective-C" by Stephen Kochan.

Messages are sent to objects in the Smalltalk style:

  [receiver message];

Classes are added as well. These are declared as follows:

  @interface SomeClass : ItsSuperClass

@interface is a preprocessor directive that tells the preprocessor, "a class is being declared. what follows are its instance variables and methods."

@interface seems not to be a preprocessor directive. cpp does nothing on it. [Objective-C was originally implemented as a preprocessor that converted the code to straight C. This is different from the traditional C preprocessor which understands things like #define. Modern Objective-C compilers are true compilers, and you probably couldn't implement the language with a preprocessor anymore.]

  {
    // begin declaring instance variables
    int fooInt;
    id anObject; // id is 'a pointer to an object.' it's a generic type -- any object can be typed as id.
  }
  // end declaring instance variables

  +(SomeClass *)objectWithFooInt:(int) theInt;

Plus '+' signifies a class method. Colon ':' signifies an argument. (int) types the argument, and theInt is what the argument is named. A 'method' is a set of instructions, something an object (in this case the class object) knows how to do. This one returns an initialized, allocated, and autoreleased instance of SomeClass with a fooInt value of theInt. In order for it to stick around for long, the coder must tell it to do so by sending it a retain message:

  [theObject retain];

(retain is a method of NSObject that is involved with memory management and paired with a release method.)
  -(id) initWithFooInt:(int) theInt object:(id) theObject;

This is an instance method which is roughly analogous to a constructor; the message is sent to an allocated instance, and the init method sets instance variables fooInt and anObject to theInt and theObject, respectively. It returns a fully initialized instance of SomeClass.

  @end // end of interface

  @implementation SomeClass

In here, the instructions are laid out, and the methods are coded like C functions. The exception is, they can call the superclass' implementation of the method. Polymorphism is fully supported in Objective-C.

  @end

-- JoeOsborn [but with modifications by TomStambaugh]

Attempts to add many missing Smalltalk features, such as blocks, are underway. See BlocksInObjectiveCee.

  result = [objectInstance methodName:param];

and

  result = [objectInstance anotherMethod:param1 with:param2 andWithAnotherParameter:param3];

Another hold-over from its lineage, though this one on the CeeLanguage side, was the use of header files (for better or worse). However, at least one thing was fixed: header files are included via the "#import" directive, which automatically ensures that that file is included exactly once, obviating the conventional kludge in C/C++ to achieve that effect. In practice, both of these are minor compared to the advantages of the language.

The core concept that differentiates ObjectiveCee from most other compiled languages is its runtime. (The advent of CsharpLanguage has started to bring the power of this concept more to the fore.) To put it simply, in ObjectiveCee reflection is the way everything's done. There's no such thing as compile-time binding. In other words, ObjectiveCee is like an interpreted language in terms of flexibility and power (string-to-method-invocation is a no-brainer) while having the speed of a fully compiled language (rather than being compiled only to byte code).
(Yes, other O-O languages have some of the data portion of a runtime: RTTI (RunTimeTypeInformation) for C++, and reflection accesses this information in Java, but no other compiled, C-based language uses the runtime for all method dispatch.)

A more significant limitation of Objective-C is its lack of namespaces and the issue of name collision. As a result, various 2- and 3-letter prefixes on class names are common.

This power is achieved by encoding class information into the object files (in the sense of .o files, aka .dll or .so, also known as "libraries") as strings. Upon application execution this information is loaded into the "runtime": a collection of C data structs and functions that are linked into every ObjectiveCee executable. When a method is invoked, the instance's isa pointer is de-referenced to access the related runtime structs. The method is looked up BY NAME, first in the class and then in each superclass in order. Once a successful lookup has occurred (or failed to occur) the associated function pointer (or pointer to the error function) is cached so that future invocations are fast (~2-3x a simple function call).

ObjectiveC is a hybrid language. Within its O-O side it implements all the features of a dynamic O-O language. It supports single-parent inheritance. The base class of all objects is Object. (Actually the runtime supports multiple base classes and in NextStep, the base class was NSObject,... details.) Having a runtime means that "new" (and generally computationally expensive) features of other languages (like dynamic proxy classes in Java) are built in.

From 1988-1995, the base class for the NextStep AppKit was Object. It had no reference counting; only +alloc, -init, and -free were implemented. NSObject was introduced with the Foundation Kit in NextStep 3.3, but only as part of the optional Enterprise Objects Framework. Thus, -retain and -release were introduced because of EOF's need for reference counting.
Eventually, the OpenStep specification was written with NSObject as the base class, and the AppKit was rewritten massively to build on the new base class. [BrianWilloughby]

An interesting (and powerful) feature is that each Class is also (a special type of) Object in the runtime. Class methods ain't "static" -- they are just as dynamic as any instance method.

One of the slicker features of Objective-C was the notion of a "category". A category (analogous to an "extension" in Envy/IBM Smalltalk) is any collection of methods (class or instance) that are grouped together and given a name. In addition to being a useful way of grouping class functionality so that multiple developers could conveniently work on different parts of a class, it allows for the addition of new functionality at any point in the class hierarchy by object consumers (rather than just producers).

Say you need every instance of any subclass of the SuperGizmoWidget class that you purchased/found in some code library to be able to perform a "twirlAboutAxis:" method. Well, just add such a method into a category on the SuperGizmoWidget class and voila! In other words it's the exact opposite of the notion of a "finalize"d class.

The limitation is that since instance VARIABLES, unlike instance and class METHODS, affect the amount of space that's allocated for an instance, you can't add variables in a category -- or new instances would become incompatible with existing compiled ones.

There are a few tricks to categories. Since the runtime must load the base class definition first, categories get loaded second. Conversely, method lookups happen in the reverse order: whatever is loaded last is found first. So... you can effectively override and replace any method on a class by writing a category method of the same name.
This, of course, is generally not advised: though it does provide a wickedly powerful tool for those terrible times when there are bugs in some otherwise very useful base class and you just wish you could fix the one broken method... with ObjectiveCee, you can. [As the IBM Smalltalk community discovered, this can lead to pernicious bugs when multiple extensions/categories attempt to define the same method.]

Another feature of the language was its DynamicTyping. There has been much discussion over the years as to the desirability of this trait. One of the strongest arguments against dynamic typing was that it decreased the ability to do compile-time checks. Compile-time typing was later added to the language -- but it doesn't affect the runtime. In a way, the type specifications in ObjectiveCee amount to code annotations that are parsed by the compiler and produce warnings (or errors) when there's a type mismatch or when trying to invoke a method that the compiler hasn't been informed of.

More information is at:

The original ObjectiveCee compiler was a CeeLanguage pre-processor. That was back in the 80s. The current ObjectiveCee compiler has come a ways since then. It is now (thanks to previous negotiations with NeXT) part of the GnuCompilerCollection. The runtime has been enhanced to support multi-threading.

Various notable applications have been written in ObjectiveC:
This article contains the third chapter from my forthcoming book, Launch Your Android App. After I complete the book (target release date: 03-11-2016) I will release the entire book as a Kindle book on Amazon for only $2.99. I will also release it as a print book within days after that (pricing will vary due to print costs but from estimates will be in the range of $8.99 - 9.99). To read the introduction & first chapter, please check out my earlier article here on CodeProject. To read the second chapter, see my previous article.

If you scroll down quickly through this article, you will see that it seems quite long. That's because it includes 37 images.

We now continue right where we left off in Chapter 2. If you haven't read that chapter, please do so now.

At the top of Android Studio in the main menu, we want to click and expand the Run menu. When it expands, we want to choose the Run app choice. When you click that menu item a dialog box will appear which looks like the following:

The bottom portion of that dialog is the part that is important to us. You can see that Android Studio has auto-selected the Launch emulator radio button for us, because it knows that there is no emulator running yet. You can also see that the Android virtual device (AVD) that it is suggesting we run is named Nexus4. Go ahead and click the [OK] button now.

##############################################################################

Note: When I attempted to start the Virtual Device nothing happened and I went through a number of steps to attempt to figure it out. I'll add all of the things I ended up trying as a sidebar in case you have issues. I've written a detailed description of the trouble in a question on StackOverflow. Otherwise, this chapter will proceed as if you had no problems starting the emulator.
Eventually*, you should see something like the following:

*Depending upon your hardware it may take a long time for the Android Virtual Device (emulator) to finally start up.

At this point the app looks fairly close to what we saw in the preview within Android Studio. However, for some reason the preview doesn't show the word, Test, in the title bar as the emulator version does.

Clicking Emulator to Touch Screen

Also, you can click on the interface using your mouse and the app will react as it would if you ran it on a real device and touched the screen.

The Freebies

You get a few items for free in the application, simply because we chose the layout template that was provided by Android Studio. First of all, click the vertical ellipsis (menu at top right of app) and you'll see that it displays one menu item: Settings. However, clicking the Settings menu which appears will not do anything, because we haven't written any code for it yet.

Now, let's go ahead and click the round pink button with the envelope on it. Clicking it will not do much, but it will activate the "snackbar". Try it now. The UI will adjust and a small message will appear.

Before we begin to delve into the code, let's make sure we know how to use our tools fairly well. In the long run it will pay off. We'll look at the following list of items to finish out this chapter, and then next chapter we will start writing some code and actually doing some things to alter the app. Of course, if you are confident with skipping these items because you already understand them, then feel free to do so.
We'll look at:

Android Monitor with logcat
Adding code in our app to write lines to logcat
Run (output) window
Messages window: a closer look at output when the app fails to build properly
ADB (Android Debug Bridge) from the command line

After we work through these items you will be quite familiar with Android Studio and it will be much easier to move around Studio and easier to understand the code we are working on.

The first thing we want to do is close the app that we just ran on the emulator. There is an easy way to do that. Go back to the emulator and click on the "back" button. It is the curved arrow button shown on the left in the following image. We will learn more about this later, but when you touch that button in an Android app, it actually suspends and closes the app. Go ahead and do that now on your emulator. The app should disappear and you'll probably see the Android "desktop" again.

Now, switch back over to Android Studio and make sure you choose the Android Monitor button at the bottom of the window. That will cause the Android Monitor to display so we can look at it. If you slowly move over the top border of that window your cursor will change to a double up-down arrow so you can resize the window to see more if you like. Notice that you can tell that this window has focus because its title bar says "Android Monitor". Also notice that the tab that is highlighted on the left says logcat.

Right now, this window is showing us the Android log, which is constantly being written while Android runs. At times the amount of information that is written to this window can be overwhelming because every event which occurs in the system is being written here, but we will learn how to filter this down so we can see only events we are interested in and which are helpful to us when debugging our app.

First of all, let's clear the Android Monitor. To do so, simply right-click where you see the text in the monitor and a context menu will appear.
Choose the Clear All menu item and all of the text in the window will disappear. This will help us see the events which are written when we start our app.

Let's start our application again, but this time, we are going to view the output which occurs in this window. To do that, click the green triangle arrow near the top of Android Studio, below the main menu. It's the green arrow pointing to the right in the following image:

Once you click that, you'll see the Device Chooser window. That window will appear every time, even though your emulator is still running. However, you can check the "Use same device for future launches" choice so it will always use your running device and won't bother you with this window any more. Don't worry, there is still a way to switch it using another menu option later if something happens.

Click the [OK] button to allow the app to start. When you do, keep your eye on the Android Monitor logcat window, because a lot of messages are going to be written there as the application starts. I copied out the text that was written and it is more than 4,600 lines. At 50 lines per page that would be over 92 pages if you printed it out.

Here are a few of the interesting lines from the output with notes (marked with Note) and bold is my emphasis:

02-08 16:01:48.361 37-37/? D/dalvikvm: GC_EXPLICIT freed 13K, 1% free 12554K/12611K, paused 2ms+4ms

Note: dalvikvm is the virtual machine that all Android programs ran in under Android versions 4.4 (KitKat) and before. This is the Java runtime which runs the applications on all Android devices (version 4.4 and before), not just within the emulator.

02-08 16:01:48.361 37-37/? W/Zygote: Preloaded drawable resource #0x1080475 (res/drawable-xhdpi/quickcontact_badge_overlay_normal_light.9.png) that varies with configuration!!
02-08 16:01:49.070 86-100/? I/SystemServer: Entropy Service
02-08 16:01:49.130 86-100/? I/SystemServer: Power Manager
02-08 16:01:49.141 86-100/? I/SystemServer: Activity Manager
02-08 16:01:49.170 86-101/? I/ActivityManager: Memory class: 64
02-08 16:01:49.291 86-101/? A/BatteryStatsImpl: problem reading network stats
java.lang.IllegalStateException: problem parsing idx 1
at com.android.internal.net.NetworkStatsFactory.readNetworkStatsDetail(NetworkStatsFactory.java:300)
at com.android.internal.net.NetworkStatsFactory.readNetworkStatsDetail(NetworkStatsFactory.java:250)

Note: You can see that exceptions (errors) occur within the system that are unrelated to our application.

02-08 16:08:29.517 86-93/? I/dalvikvm: Jit: resizing JitTable from 4096 to 8192
02-08 16:08:29.737 86-104/? I/PackageManager: Removing non-system package:us.raddev.test
02-08 16:08:29.737 86-101/? I/ActivityManager: Force stopping package us.raddev.test uid=10040

Note: These lines indicate that the previous version of our app is being removed from the device and stopped from running.

02-08 16:08:37.737 86-100/? D/BackupManagerService: Received broadcast Intent { act=android.intent.action.PACKAGE_ADDED dat=package:us.raddev.test flg=0x10000010 (has extras) }

Note: Here the newly built version of our app is being deployed to the device.

02-08 16:08:42.007 622-622/? D/dalvikvm: Not late-enabling CheckJNI (already on)
02-08 16:08:42.037 86-233/? I/ActivityManager: Start proc us.raddev.test for activity us.raddev.test/.MainActivity: pid=622 uid=10040 gids={}

Note: Here the app is being started.

02-08 16:08:44.647 86-114/? I/ActivityManager: Displayed us.raddev.test/.MainActivity: +2s716ms

Note: Finally, the MainActivity is being displayed on the screen.

Hopefully, that provides you with a bit of insight into the logging and shows that you can actually get some information about what your app is doing even when it hasn't yet been drawn on the screen. However, that is way too much information to dig through. That's why you can add some code to the application and turn on a filter so only the information you want to see is shown in the logcat window.
Let's go ahead and make some changes to our Java code to add our logging functionality. We'll make the application log when the user clicks the Settings menu item. Go back to Android Studio and open up the MainActivity.java class in the editor. Double-click the file on the left side in the project tree view if necessary and the file should open up for you on the right in an editor window.

Scroll down to the bottom of the MainActivity.java file and you'll see a function named onOptionsItemSelected. It looks like:

We are going to type some code in the if statement shown, right after the opening curly brace. That code currently looks like:

if (id == R.id.action_settings) {
    return true;
}

Now, let's change it (add the bolded line shown in the following code snippet).

if (id == R.id.action_settings) {
    Log.d("test", "User clicked the Settings menu item.");
    return true;
}

When you get as far as the d in that line, Android Studio is going to offer some help. It's just trying to let you know that there is a function it knows about named Log.d and there are a couple of function overloads (versions of the function which take varying numbers and types of parameters). We are going to use the first one shown, but you can just type an opening parenthesis (. When you do, Android Studio will automatically type the closing parenthesis and will offer you more help:

It is telling you that the function you are looking for exists in a specific package (library - android.util.Log) which you haven't included a reference to yet. To add the package, simply press the Alt and Enter key combination. When you do that, Android Studio adds the following line at the top of MainActivity.java:

import android.util.Log;

That causes the Java compiler to include the package when it builds the code. That makes the Log.d function, which was written by the original Android devs, available to you for calling. Now, however, we still need to add the two strings to the Log.d function or the code will not compile.
Go ahead and make sure your line looks complete now.

Log.d("test", "User clicked the Settings menu item.");

When it is correct, it will look like the following:

Notice that the Android Studio code editor colorizes the strings green for some contrast. This code allows us to write to the Android log, and the Android log will display this entry when we have our logging set to the debug level. That's why the function is named d(). If we had wanted to output a warning we would've called the function named w() with two strings. We can also call the e() function to indicate an error.

If you want to investigate more of the functions, you can allow Android Studio to help you by opening up another line where we typed the first line of code and typing: Log. (that's Log with a period following). When you do that, the built-in Android Studio help will offer suggestions of function and property names of the Log class. You can see there are so many suggestions that there is even a scrollbar provided so you can see them all.

In our function call, we create a filter named "test" and we are writing a log entry line which will say, "User clicked the Settings menu item." For now, the filter name isn't used. We'll use more of them later because they are very helpful so we can filter out other messages.

Let's go ahead and build and run to see the output. Make sure you delete the unfinished line if necessary and go ahead and run the application (it will automatically build and deploy to your running emulator). Keep an eye on the Android Monitor logcat window. When you do, and the app finally starts, you will probably notice that there is still tons of output. We need to make a small change to filter the output.

There is a droplist that is currently set to Verbose in the Android Monitor window. We want to change that value to Debug, since we are writing a debug line by calling Log.d. Make that selection now.
Then, make sure you grab the scrollbar on the far right side of the Android Monitor window and scroll all the way to the bottom of the window so that when new output is written you are sure to see it.

Switch over to your emulator, which should be displaying your test app. Click the vertical ellipsis menu in the upper right corner. Next, click the Settings menu item which appears and keep an eye on the Android Monitor logcat window. When you click that Settings menu item, you will see a line in your logcat window which looks like the following (I highlighted it to emphasize the one to look at):

That's the text that we put in our Log.d function call. You can also see the filter name (test) just before the message text. Of course, we will use logging all through the book and you will use it all through your Android development, so we will continue to see much more of this as we go.

Now, let's move on to the other smaller items I promised to cover.

When you click the Run button at the bottom of Android Studio you can see a bit more information about your running program. The first line gives you the target device. This can be important if you have more than one device attached. At times you may have an emulator running and a physical device attached, so it helps to know where Studio deployed the app.

The second line is extremely interesting, because it is the APK file which Android uses to deploy your app. The full path to the file which was created when Android Studio built the app is provided:

Installing APK: C:\Users\roger.deutsch\AndroidStudioProjects\Test\app\build\outputs\apk\app-debug.apk

Also notice that it names the file generically as app-debug.apk. We'll talk more about that later since it will become important when deploying our app to real users.

The third line shows where the APK file gets renamed and moved for deployment to the emulator.
Uploading file to: /data/local/tmp/us.raddev.test

You can see it renames the file and places our app in a directory named /data/local/tmp. Next, Studio runs a command to install the APK onto the emulator.

DEVICE SHELL COMMAND: pm install -r "/data/local/tmp/us.raddev.test"

Finally, you can see where the app is launched and the command that Studio fires to do that:

Launching application: us.raddev.test/us.raddev.test.MainActivity.
DEVICE SHELL COMMAND: am start -n "us.raddev.test/us.raddev.test.MainActivity" -a android.intent.action.MAIN -c android.intent.category.LAUNCHER

What is additionally interesting is that you can run those commands yourself from a command line to manually do this work. We will see how this works later, but it is good to know what Android Studio is doing on your behalf. Knowing these things is what will make you excel as an Android developer.

Let's wrap this chapter up so that (next chapter) we can start writing our first app. However, I still need to cover the Messages window since it is important when something goes wrong. First of all, go ahead and click the Messages button at the bottom of Android Studio. You can see that the top is now labeled: Messages: Gradle build.

Now, go back to MainActivity.java and type a single letter inside our if statement that we previously worked on. I'm trying to cause an error when we build and run. It'll look something like:

See the red letter k? Android Studio already knows it is a problem and is trying to warn me. However, I am going to run anyway. I am a stubborn programmer. Click the Run button again to start the build. When you do that, Studio automatically collapses the Messages window, so make sure you click the Messages button again so you can see what gets output there.

When the build finishes (fails) you will see something like the following:

It is warning us that we have an error in MainActivity.java. Right now, it just thinks we are missing a line-ending semicolon.
The last error line indicates that we can "see compiler error output for details." To do that you need to open the main menu, File...Settings… When you choose that, a large window will open. You can move up to the Build, Execution, Deployment item and expand it by clicking the down arrow. Next, choose the Compiler item and you can add a couple of string values which will be provided to the build system (Gradle) when the app builds. The two strings are:

--stacktrace --info

Those options tell the build system to provide more output information. You could also add --debug, but that creates a vast amount of output and makes the app build very slowly. Click the [OK] button to save your settings. Go ahead and build again and you'll see far more output in the Messages window.

Slow Build? Keep in mind that if at any time you perceive that your builds are slow, you will want to alter the Compiler command-line options and remove those two strings we added. They generate a lot of output.

Making the change still doesn't provide a lot of help about our error. That's why developers have to look at the messages we do receive very closely and also be very familiar with valid syntax in our code.

Remove the Bad Code Make sure you remove the problem character and get a good build again, before moving on.

Before closing out the chapter let's take a look at one more tool, the ADB (Android Debug Bridge). You can enable it within Android Studio on the Tools menu. Once you do that you can run a Debug version of your code. Go to the Run menu and select the Debug app option. When you do that a special debug version of your app will be built and deployed to your emulator. When the app is started a new window will appear at the bottom of Android Studio. It is letting you know it is connected to your emulator and is ready. Move to your emulator and you should see your app running normally. Click the two different action items available in your app.
You will not see any difference at this point.

Set a Breakpoint Move back to Android Studio and click to the left of the if statement we've been examining. If you do that in the little tray next to the editor then a red dot will appear. That is a breakpoint. Now, when you run the code that hits that line, the execution will break at this location and you will be able to control the execution. Go to the Run menu and choose Debug app. The app will start in debug mode. Go back to your app running in your emulator and once again, click the ellipsis menu and then the Settings menu item. When you do that Android Studio will jump to the top window again and the code will stop and highlight the line it stopped on in blue. You can see a small check on top of the breakpoint now. The execution has stopped on that line. Press your F8 button and the code will advance one line, into the first statement within the if statement.

You can also float over variables with your cursor to find out what value they are currently set to. Try this with the id value, even though it isn't meaningful to us yet -- we'll learn about it in later chapters. You can see that the id is equal to 2131492991. You can also see values of variables at the bottom of Android Studio. Again, you can see the value of the id variable. You can inspect objects in the window also. For example, our Activity object which we named MainActivity is the first item showing in the list. Click the down arrow next to it to expand it and you can see all of its member variables and more. At this point we don't know what all of that means, but it is important to know how you can inspect items at run time. We often need to know the value of a variable to debug our code and this is how we can do that.

Go ahead and stop the debugging so we can end the chapter. Go to the Run menu again and click the Stop menu item. Once you do that the debug connection will stop. However, your app will still be running in the emulator.
Very often switching between debug and running a normal copy of the app crashes the system running in the emulator. If this happens the app will become unresponsive and then you'll probably see the boot up screen again in the emulator. You'll just have to wait for the OS (Operating System) to start again.

This chapter has brought you a long way toward building a solid foundation for you as a professional Android app developer. You built an application that is based upon a template. That may not feel like much, but you are much further along because you've conquered one of the most difficult barriers to Android development: getting the emulator running. We've not only got your app running and deployed to the emulator, but we've also successfully altered a small bit of code and learned various ways to know what is going on via logging and debugging. These points of knowledge will serve you well over your Android development career as they grow more solid. But none of this information matters until we build a real application. In the next chapter we will:

- build a complete app which will allow you to write and save notes on your device
- run it on the emulator
- show you how to sideload the app to a real device -- sideloading allows you to deploy to a device without deploying it to the Google Play store
- design a User Interface (UI) using XML and learn about layouts
- write Java code to save files, display note lists and more
- learn a bit about the app manifest (AndroidManifest.xml) and app permissions
- learn more about how apps are structured

Now, let's go write some real code! I've provided a zip of the Android Studio project named Test. You should be able to drop it in a folder, unzip it and open it using Android Studio 1.5.1 with no problems. First release of article: 2016-02.
https://www.codeproject.com:443/Articles/1078103/Launch-Your-Android-App-Run-Device-Emulator-Debug?display=Print&PageFlow=Fluid
Re: Precompiled headers on C
From: Bonj (benjtaylor)
Date: Sun, 17 Oct 2004 13:49:51 +0100

That's in a situation where C++ actually *would* be faster. I've got sections of my program that are written in C++ because there's no way they could be written in C, namely, they involve ATL. What with ATL being faster and smaller than MFC I'm happy with that, but don't get me wrong, I can't see how using C++ for the part of my program that manages the resource data could actually be faster. It's pointless pointing me to *one example* of how C++ can be faster in some particular tests, because someone else could point me to another example of how a C version would be faster than a C++ version in their own particular tests. It just proves that that particular piece of code was a candidate for object oriented programming, so was faster in C++, but it doesn't prove anything with respect to my own scenario.

In my scenario, the data is in a collection of nodes, but they are explicitly arranged into a 'flat' format by the program that generates them, in order that they can be arranged in one contiguous block in the resource file. To write a C++ class or even struct and to try to rearrange them into a non-flat structure would be both time consuming for me and for the program - as it would have all this work to do at start-up time. Not counting the fact that it would use bags more heap memory, 400KB more to be precise. And what's more, far from being faster to access, I can't even be sure that it would be *as quick* to access. If this is left in global namespace and accessed with just one pointer that traverses back and forth through the structure as and when it needs to, there is going to be minimal overhead.
No offence, but I really wish people would answer the question that has been asked, rather than trying to take a step back, look at the bigger picture and pick apart the questioner's motives before trying to answer the question they think should have been asked. Example: the question that's been asked was "How do I use precompiled headers in a project with .c files?". The question you answered was 'Mmmm... I've got this project that's got some .c files in it.... I get a precompiled header error when I try to compile it now. I need it to be fast but I'll go for anything that seems to be the flavour of the month/year/decade, how should I proceed?'

I also don't really like people that point to a web resource not authored by them that isn't at all relevant to the question being asked. But I'm sure you're good at what you do and you mean well, and you are a fellow programmer and did take the bother of answering at all, so I thank you for that and hope you don't take offence.

"Bo Persson" <bop@gmb.dk> wrote in message news:OGoNsyDtEHA.1452@TK2MSFTNGP11.phx.gbl...
>
> "Bonj" <benjtaylor at hotpop d0t com> skrev i meddelandet
> news:%23AKW7E9sEHA.3572@tk2msftngp13.phx.gbl...
>>
>> I personally very much doubt that there is any way of using the C++-only
>> features or the way C++ compiles that C doesn't, in order to increase the
>> speed of the program. But if this is what you meant by "using C++ as a
>> better C", then please tell me what they are - because I don't know them!
>> What I mean is, when I compile C, it is an old language that has been
>> around since the seventies or something. But the compiler I'm using
>> hasn't been around since the seventies, so the only advantages in using
>> C++ would be in what functionality I could use, not performance.
>
> Have you seen this paper by Bjarne Stroustrup, where he shows an example
> of just how much faster C++ can be ?
>
> :-)
>
> Bo Persson
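For the record, the literal question ("How do I use precompiled headers in a project with .c files?") does have a direct answer: MSVC will not share one precompiled header between C and C++ translation units (that mismatch is what error C1853 reports), so the .c files either need the PCH options disabled or a PCH of their own. A hedged command-line sketch - the /Yc and /Yu flags are real cl.exe options, but the file names here are made up for illustration:

```shell
REM pch_c.c contains only:  #include "pch_c.h"
REM Create a C-only precompiled header from it.
cl /c /Ycpch_c.h pch_c.c

REM Compile the remaining .c files against that PCH.
cl /c /Yupch_c.h resources.c nodes.c

REM The .cpp files keep their own, separately created C++ PCH.
```

In the Visual Studio IDE the same thing is done per-file under C/C++ > Precompiled Headers in the project properties.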
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vc/2004-10/0577.html
Go! (Score:5, Informative). Re:Go! (Score:5, Funny) Even more funny is the fact that they're hosting their language on code.google.com Perhaps we shouldn't worry that much about them harvesting our data after all? Re:Go! (Score:5, Funny)

There once was a language named "Go"
By Google it's made to help the Pro
But there's a claim
the name it sounds quite the same
as another fellow's lingo

This other lingo named "Go!"
"It was earlier" its inventor says so.
"Why didn't you look
on a webpage or in my book,
it's even google search result two!"

"So Google, rename your thing!
Or in front of a judge you i bring!
Lots of users agree
it was disgraceful by thee
just be sorry and give me a ring!"

So the question arise
although google might despise
"what new name shall we be giving
to the lingo that's not yet living
and has not yet seen this world with its own eyes?"

One fella proposed the name "Goo"
Which is similar to pythons clone "Boo"
But also this name is taken
and not yet forsaken
and honestly sounds close to "Poo".

Another said "Lango" is cool,
He would take such thing as a tool.
But a lingo named "Lango"
Only rhymes "Jango" or "Tango"
This is real, not Star Wars, you fool!

Lots of other names were called
some were boring, some others were bold
The question still remain
Will google act or refrain
from renaming its lingo as told?

The remainder of my little piece
Is the ironic issue of this
Why did you, google miss
to google "go" before release
You would have known it's not your name, but his'!

Re: (Score:3, Insightful) I don't know if there's a Poet Laureate position for Slashdot, but either way I nominate this guy. Brilliant! Re:Go! (Score:4, Funny) Re: (Score:3, Interesting) Re:Go! (Score:5, Funny) Re: (Score:3, Informative) But it does kind of fly in the face of the "Don't be evil" slogan. Not really. There was no malice here anywhere. Nobody tried to be evil, nobody is trying to be evil this moment and nobody is trying to be evil in the future.
Some dude had an idea a couple years back that was so utterly obscure that no Wikipedia page existed for it. Let that sink in: There's a page on Wikipedia for every actor that was ever seen in the background of any Star Trek episode; yet this supposed "Go language" was so unknown that nobody ever bothered to make a page for it (until yesterday). Re:Go! (Score:5, Informative) He published in "Annals of Mathematics and Artificial Intelligence" and it's cited [acm.org] in the ACM portal. Who cares what Wiki has or doesn't have. This wasn't some geocities page with talk about a language that was never developed. Re:Go! (Score:4, Funny) Plus every source file would be a .gog [merriam-webster.com]! Re: (Score:3, Funny) Re:Go! (Score:4, Interesting) Re: (Score:3, Insightful) Re: (Score:3, Insightful) Re:Go! (Score:4, Funny) Wouldn't Go! be pronounced Go(bang)? Maybe we should use "Gang!" as the name, then. Re: (Score:3, Funny) So what? (Score:3, Insightful) "From what I've read, Go! was pretty much unknown to anyone outside a very small group 2 years ago." From what I've read, Go was pretty much unknown outside of Google until about a week ago. I said it yesterday, but... (Score:5, Funny) Two "Go"'s considered harmful. Re: (Score:2) Do not pass Go! Do not collect £200 Re: (Score:2) Better stated as: GOGO considered harmful Re: (Score:2) Not a chance, as its common knowledge that goto's cause the apocalypse. Re: (Score:2) (Why did they name it Go? According to the FAQ, they thought "Go Ogle" would be a great name for the debugger. "Goo Ogle" would be just as gooooood.) Re: (Score:3, Insightful) Is Go! alive? (Score:2) Re: (Score:2) Re: (Score:2) Re: (Score:2, Informative) Re: (Score:2) And as far as I can tell, the wiki entry was created yesterday. (I'm wiki challenged, so I may be wrong) Re: (Score:2) You are correct, it was added by somebody after reading about the go vs go!
thing; before then there wasn't even a reference on the "go" disambiguation page. Re: (Score:3, Interesting) Excellent find, I'm sure the author is relishing in the Streisand Effect right now. How far down the page was Go! two days ago if you googled the name? Someone is getting fired... (Score:3, Funny) I bet someone at Google will get fired soon... Either 1 of 2 things may have happened: 1) They used Microsoft Bing to search for potential trademark violations 2) They were too lazy and didn't check at all. Normal for this crew (Score:2) If Ken Thompson and Rob Pike were designing it, they probably didn't care about getting fired / marketing implications / public backlash etc. They have a history of choosing provocative names, just look at the plan9 stuff. Re: (Score:2) Fired? Isn't that exaggerating things a little? ;) So? (Score:2, Insightful) Re: (Score:2, Informative) The way I see it, TM or copyright are really useful so you don't have to demonstrate that you were using that name before... he doesn't have it, so he has to show that he had a book, that the language was published in 2003 with that name, etc. Re:So? (Score:5, Insightful) Some things are ethically questionable even when there is no legal problem involved. A concept often forgotten in the corporate world. Re: (Score:3, Insightful) "Like reusing the name of an obscure project that seemingly died years ago and nobody here has even heard of?" Right. If Slashdotters haven't heard of it, there's no ethical issue. Goo (Score:3, Funny) Re:Goo (Score:4, Informative) Re: (Score:3, Funny) Good idea! No namespace crash there. Everyone can just call it GPL for short, and... ...d'oh! Re: (Score:3) "Goo" is a dialect of Lisp, so "Gooo" it is! Personally, I think Google should rename it "Giggity". Re: (Score:2) "Under fire"? (Score:2) Tag this one !news. Since when is a gazillion-dollar company considered "under fire" because one dude with no legal status is annoyed at them?
By that logic, "McDonald's has come under fire this week for serving goodmanj a batch of stale fries last time he went there." Google should rename Go to Issue 9 (Score:5, Interesting) Re: (Score:2) Issue 9 is kind of a mouthful to pronounce, plus it might be weird in some other languages (like in French where issue means exit) That said I agree that another name than go could be good, if only to make it easier to google. Re:Google should rename Go to Issue 9 (Score:5, Funny) Issue 9 is kind of a mouthful to pronounce, plus it might be weird in some other languages (like in French where issue means exit) Meh, in conversation just shorten it to I9 and you're good to... *cough*. Yeah. Re:Google should rename Go to Issue 9 (Score:4, Funny) I think they should name it Issue Express 9 or IE9 for short. Preemptive naming. Re: (Score:2) Especially with some guys behind this, also behind Plan 9. Re: (Score:3, Interesting) Why don't they just call it "g". Then later, others can invent g++ and g# languages. This won't be gonfusing at all. Easily avoidable (Score:2) Couldn't they have googled the name first? You'd kind of expect at least that from them. Not like Go is such a great name anyway. They should run a poll to decide the name. With enough luck it'll get called Marblecake or Colbert++. They should plan better (Score:2, Insightful) Re:They should plan better (Score:5, Insightful) As someone stated before, this is not a legal issue. It's just about basic politeness. Google simply does not care. (Score:2, Insightful) People! Punctuation is IMPORTANT! (Score:2, Interesting) It originates from the paper by Dijkstra [arizona.edu] where he argued GoTo statements should be banned. That resulted in many structured programming languages entering mainstream computer science. Re:People! Punctuation is IMPORTANT! (Score:5, Informative) Google's language is called Go! (with an exclamation mark.)
The preexisting language whose existence has been suddenly and rudely revealed is called Go without the exclamation mark. Other way around. Google's language is "Go". McCabe's language is "Go!". Re:People! Punctuation is IMPORTANT! (Score:5, Informative) Dont get me started on the Japanese chess game Go. I don't know if your post was supposed to be either sarcastic or funny, but Go [wikipedia.org] is neither Japanese nor chess. It's Chinese, and it's older than chess. The game commonly referred to as "Japanese chess" is Shogi [wikipedia.org]. Re: (Score:3, Informative) Re: (Score:3, Informative) Actually, "Go" is the Japanese name for the game. That's a Romanization, obviously, but is considered phonetically close to the Japanese pronunciation. Not to sound cranky, but how hard would it be to check the relevant section [wikipedia.org] of the Wikipedia article? Quoting: Re: (Score:2) Google's is go, the other guys is go!, those who cannot tell the difference between the 2 should not be programming. Goop? (Score:2) I think they should call it Goop. So much code produced by humans has looked like a blob from a bad sci-fi movie that it seems fitting. Re: (Score:2) Re: (Score:2) I still vote for Goop. Have you ever heard the cliche, "you are what you eat"? I think a corollary might emerge: "you are what you code (in)". Some genius will use Goop to code the first artificially intelligent self-replicating nanobots, and they'll decide we're no more significant than any other raw material and turn us all into.... Re: (Score:2) Biggest scandal since Lindows! (Score:2) To be honest, i can see the confusion... (Score:2, Funny) How would Google even know that a language called "Go" exists? They would have to have some mechanism for searching the internet to do that. Wikipedia proposes deletion of Go! page (Score:4, Interesting) This template was added 2009-11-12 14:22 Re: . They should change it... 
(Score:3, Insightful) Slashdot needs a voting mechanism for this (Score:2) A poll would be interesting. Personally, I think that "Go" and "Go!" are two different names, so there is no problem. not an issue (Score:2, Interesting) One has a bang (!) at the end, while the other doesn't. Everybody knows the difference between C and C# The claim has no basis. Rename it (Score:2) UUIDs (Score:5, Funny) Re: (Score:3, Funny) Bastard! A little research through a few obscure, un-archived computing journals published in the now defunct USSR would have shown you that I wrote the programming language Ed68c886-6390-4255-813f-48e61f6b0b05 over 25 years ago! The cheek of some people!: (Score:3, Funny) to call a stop. Or a stop! while $STOP; HAMMERTIME; end Re: (Score:2) while $STOP { // Credit where due [xkcd.com] collaborate(); listen(); } Re: (Score:2) if (exists(town{"Der Kommissar"})) { exit } else { @ARRAY = reverse @ARRAY; } Re:Perfect example (Score:4, Insightful) There's no IP. There is copyright, patents and trademarks. This sounds like a trademark thing, so no need to confuse the issue. Re: (Score:2) I think the issue here is more akin to trademarks rather than IP. I think your post is more an issue of words than of text. Intellectual Property is an umbrella term combining trademarks, copyright, and patents. Even without a registered trademark, I think they'd have a good case that Google is trying to pass off their new language as the original Go. Re: (Score:3, Informative) Even without a registered trademark, I think they'd have a good case that Google is trying to pass off their new language as the original Go. Actually, unregistered trademarks are valid, too. In North America, the trademark system is a "first to use" system, not a "first to file". However, the original Go is not a commercial product, so there is no trademark issue.
Google will likely consider changing the name just because it's stupid to create a new programming language and give it the same name as an existing one, but trademark won't enter into the discussion. Re: (Score:2) I always thought one had to register a trademark for it to be valid. I thought the (tm) mark was for pending trademarks. It looks like I was wrong. [ehow.com] I think the whole fighting over the "go" name is stupid. Seriously, what kind of idiot would think no one used such a commonly used word, especially since most people would equate a programming language with an action. (Yeah, and someone actually used the word "Action!" as the name of their programming language [wikipedia.org].) Re: (Score:3, Insightful) Re: (Score:2) where are you googling? the only thing on the first page is the "Go!" wikipedia article which was created yesterday, AFTER Google launched their language Re:Non-issue (Score:5, Funny) Hey it's not their fault. If only they had access to some sort of computer system that allowed one to quickly examine the internet, a "search engine" if you will, then they might have been able to catch this in time. Re: (Score:3, Interesting) Believe me, if there's at least one lawyer working for Google, they knew. Even most start-ups research a product name before announcing it. They probably just figured they could pay the guy off. Re: (Score:2) Re: (Score:2) Maybe it's time for tougher IP laws where such things would be possible! At least I would if I were into politics... Re:'GO' != 'GO!' (Score:4, Informative) A+ != A# != A# C != C# (in fairness they are related) There are several languages referred to as D F != F# L != L# M != M4 If you can't tell the difference between two similarly named programming languages perhaps programming isn't for you! But C# = Db F = E# and moreover B# = C Re: (Score:3, Insightful) Because Googling for "go" gets you 2,950,000,000 hits. Yes, that's billions.
And yet they didn't see that choosing such a common word for a language name was a bad idea. Ah, how the mighty goof up.
http://developers.slashdot.org/story/09/11/12/1256234/Google-Under-Fire-For-Calling-Their-Language-Go
Hi guys can you please help me with this. All I need to do is input a number and display the corresponding alphabet. For example 2, then the output should be A B. I am working on a code but my problem is the output is displaying together with a special character. Can you help me do it pls. Thanks in advance!

#include<stdio.h>
#include<stdlib.h>
#include<conio.h>

int main()
{
    int num;
    char *buffer;
    printf("How long is the string:");
    scanf("%s",&num);
    buffer=(char *) malloc(num);
    if (buffer==NULL) exit(1);
    for(int i=0;i<num;i++)
    {
        for (int j=0;j<i;j++)
            buffer[j]=(i%26)+'A';
        printf("The strings are: %s \n",buffer);
        getch();
        free(buffer);
    }
    return 0;
}

Edited by gelaisg18

You have a number of problems in the code. Please be patient for a moment while I list them out.

#include<conio.h>

Use of this library should be avoided except in cases where it really does offer the best solution (which is extremely rarely).

printf("How long is the string:");

This prompt is not guaranteed to be displayed before scanf() blocks for input, so the user might be greeted by nothing more than a blinking cursor and no hint of what to do. The reason for that is C only guarantees three times when the output buffer gets flushed to the destination device (the screen in this case): 1) when the buffer is full, 2) when a newline is written (assuming line buffering), and 3) when you explicitly flush the stream. In the case of #1 you have no control over it because the size of the buffer and its current contents are unknown. #2 isn't ideal for prompts because then you'd have a prompt on one line and the blinkie cursor on the next line; it's kind of ugly. So the only option for prompts is using fflush:

printf("How long is the string: ");
fflush(stdout);

I'm flushing stdout because that's the stream object that printf writes to.

scanf("%s",&num);

num is not a string, you need to use the %d specifier. Also, it's critically important to check user input for failure. scanf() returns the number of successful conversions, which in general if that number corresponds to the number of specifiers you used denotes complete success.
In this case any return value other than 1 means scanf() failed. So you'd do something like this:

printf("How long is the string: ");
fflush(stdout);

if (scanf("%d", &num) == 1)
{
    /* success, work with num */
}

buffer=(char *) malloc(num);
if (buffer==NULL) exit(1);

This is actually fine syntactically and semantically, kudos. :) My only nits would be that 1 isn't a portable argument for exit(), and return instead of exit() is conventional for terminating the program from main(). The three portable options are 0, EXIT_SUCCESS, and EXIT_FAILURE. EXIT_SUCCESS is presumably there as a counterpart to EXIT_FAILURE, and in general I suggest that for consistency you use EXIT_SUCCESS as your return value instead of 0 when you're also using EXIT_FAILURE somewhere. So something like this:

int main(void)
{
    ...
    buffer = malloc(num);
    if (buffer == NULL)
        return EXIT_FAILURE;
    ...
    return EXIT_SUCCESS;
}

Eventually someone will also tell you that the cast on malloc() isn't required and can also hide the very legitimate error of forgetting to include stdlib.h. However, it's not uncommon to write C that must also compile as C++ and it's a well known issue, so I don't consider casting malloc() to be a significant risk anymore. Just be aware of it. :)

I'll also mention that when allocating memory for a string, failure to add 1 extra character for the terminating '\0' raises red flags for me. Sometimes that extra character is included in the total, but most of the time it's not and represents an error. In this case you need to add 1 to your total.

buffer[j]=(i%26)+'A';

In theory this code is not portable, but in practice it's likely to not be an issue. The problem is that some character sets (EBCDIC in particular) don't have the latin alphabet in contiguous spots. Other characters are mixed in, so this line would certainly produce funky output. However, you're likely to be using either some variant of ASCII or Unicode, both of which do make the guarantee that the latin alphabet is contiguous by value.
printf("The strings are: %s \n",buffer);

You neglected to terminate your string with '\0'. So while you avoided the problem of not having enough memory, printf() will now overrun the buffer and print anything it sees until it finds a null character somewhere out in the nether regions of memory. That's probably the issue you're having at the moment.

getch();

Try to avoid getch(). Most of the time the standard getchar() is a suitable replacement.

free(buffer);

Awesome, you remembered to release your memory. :) But it's in the wrong place. The next iteration of the outer loop will use the freed pointer. This line should be moved to just before you return.

Now for the logic part of this reply. I don't really like your solution because it generates the alphabet instead of simply printing a segment of an existing alphabet. Also, what happens if num is greater than the length of the alphabet? I think a better solution would be to define an explicit alphabet, thus avoiding the portability problems of the character set, and then use a wrapping index so that for large values of num the loop will just start over from the beginning of the alphabet. Something like this:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    size_t len = strlen(alphabet);
    int num;

    printf("How long is the string: ");
    fflush(stdout);

    if (scanf("%d", &num) == 1)
    {
        for (int i = 0; i < num; ++i)
            printf("%-2c", alphabet[i % len]);
        puts("");
    }

    return 0;
}

Another added benefit is that you no longer need dynamic memory, with its performance overhead and sometimes tricky semantics. ...
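Pulling the thread's individual fixes together, here is a sketch of the original dynamic-allocation approach with all of them applied. The function name is mine, and the (i % 26) + 'A' trick still assumes a character set with a contiguous A-Z, such as ASCII:

```c
#include <stdlib.h>

/* Build a num-character wrapped alphabet string on the heap.
   The caller frees the result once, after its last use. */
char *make_alphabet_string(int num)
{
    char *buffer = malloc(num + 1);   /* +1 for the terminating '\0' */
    if (buffer == NULL)
        return NULL;

    for (int i = 0; i < num; i++)
        buffer[i] = (i % 26) + 'A';   /* assumes contiguous A-Z (ASCII/Unicode) */

    buffer[num] = '\0';               /* terminate before any printf("%s") */
    return buffer;
}
```

With this in place, main() reduces to reading num with scanf("%d", &num), printing the result with printf("%s\n", s), and calling free(s) exactly once at the end.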
https://www.daniweb.com/programming/software-development/threads/441367/input-an-integer-and-display-the-corresponding-alphabet-value
Hi Benjamin,

The way Atlas is architected is that there is one graph for types and a separate graph for the instances. I see a lot of benefits with this design. Having edges between the types allows us to navigate and check type consistency with gremlin queries. It means we can implement search using gremlin queries looking at the instance graph. I think for Atlas this allows us to manage the metadata types and instances with some degree of isolation in the graph store. I think the way the type system is designed at the graph level is a great strength of Atlas. We are also thinking that this will allow us, in the future, to have namespaces - for example one for test, one for development and one for production; each of which will have a separate instance graph but all using the same type graph. Of course there are many ways to store data in graphs. Mixing types and instances together in one big property graph is the approach RDF takes.

all the best, David.

From: BONNET Benjamin <benjamin.bonnet@soprasteria.com>
To: "dev@atlas.apache.org" <dev@atlas.apache.org>
Date: 25/08/2017 11:03
Subject: About the way traits are stored in TitanDB

Hi all,

Working on the Atlas repository in Titan, I am surprised by the way traits are stored: trait types are Vertices, which seems ok, but trait instances are Vertices too. So, when you attach a trait to an entity, Atlas will create a new Vertex (containing the attributes that are set) and draw an Edge between the entity instance Vertex and that trait instance Vertex. There is no edge between the trait instance Vertex and the trait type Vertex: there is just a __typeName attribute in the trait instance that contains the trait's type (please, tell me if I missed something...). Actually, I would rather have expected the trait instance to be stored as an Edge between the entity instance Vertex and the trait type Vertex. That edge would contain the attribute values.
The advantages of modeling trait instances as Edges are:

- The link between a trait and its type is enforced by the database itself and does not rely on a __typeName attribute.
- Fewer Vertices in the database, without growing the Edge count.
- The data stored in Titan will look more like a graph: today, my data consists of lots of Vertices that are isolated (all types) and there are few edges. So I think we cannot really take advantage of the DB's graph orientation.

What do you think about that?

Regards, Benjamin

Unless stated otherwise above: IBM United Kingdom Limited - Registered in England and Wales with number 741598. Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
http://mail-archives.apache.org/mod_mbox/atlas-dev/201708.mbox/%3COF0B4CAD2D.232FFEB6-ON00258187.003C2117-80258187.003D026A@notes.na.collabserv.com%3E
Hello Gentlemen,

I am a newbie in programming and apologise in advance if my question is too silly. My C++ project is compiled as a library .xll (a DLL for Excel); the framework code (the program entry point) is coded correctly and works stably. Custom functions are separate modules.

// header.h
typedef struct _TMDYDate {
    long month;
    long day;
    long year;
} TMonthDayYear;

The file funcs.c has a function:

// funcs.c
#include "header.h"
__declspec(dllexport) long GetDate()
{
    TMonthDayYear myDate;
    myDate.day = 1;
    myDate.month = 1;
    myDate.year = 2000;
    if (DateToMDY(2004, &myDate) != 1) {
        return 0;
    }
    return myDate.year;
}

where the function DateToMDY is defined in a separate file "dates.c":

// dates.c
int DateToMDY(long tmpyear, TMonthDayYear *mdy)
{
    mdy->year = tmpyear; // <- Error is here
    return 1;
}

I debug the function GetDate() and get an error when it tries to assign through the pointer (mdy->year = tmpyear;) the value 2004. The error is: "Unhandled exception at 0x0e342b84 (alcDates.xll) in EXCEL.EXE: 0xC0000005: Access violation writing location 0x40e3db28". The funny thing is that when I move the definition of DateToMDY into funcs.c - the same file where DateToMDY is called - there is no error. I assume it is due to incorrect memory usage, but for me it is critical to isolate functionality in different modules (e.g. dates.c, array.c, sorting.c, ...). I don't know where to look; maybe I have wrong project compilation settings.

Thank you!
Nicholas
http://forums.codeguru.com/showthread.php?509251-Problem-pointer-(ref-to-structure)-as-an-argument-for-a-function
MBTOWC(3) - Linux Programmer's Manual (updated 2016-10-08)

NAME
mbtowc - convert a multibyte sequence to a wide character

SYNOPSIS
#include <stdlib.h>
int mbtowc(wchar_t *pwc, const char *s, size_t n);

DESCRIPTION
The main case for this function is when s is not NULL and pwc is not NULL. In this case, the mbtowc() function inspects at most n bytes of the multibyte string starting at s, extracts the next complete multibyte character, converts it to a wide character and stores it at *pwc. It updates an internal shift state known only to the mbtowc() function. If s does not point to a null byte ('\0'), it returns the number of bytes that were consumed from s, otherwise it returns 0. If the n bytes starting at s do not contain a complete multibyte character, or if they contain an invalid multibyte sequence, mbtowc() returns -1.

A different case is when s is not NULL but pwc is NULL. In this case, the mbtowc() function behaves as above, except that it does not store the converted wide character in memory.

A third case is when s is NULL. In this case, pwc and n are ignored. The mbtowc() function resets the shift state, only known to this function, to the initial state, and returns nonzero if the encoding has nontrivial shift state, or zero if the encoding is stateless.

RETURN VALUE
If s is not NULL, the mbtowc() function returns the number of consumed bytes starting at s, or 0 if s points to a null byte, or -1 upon failure.

If s is NULL, the mbtowc() function returns nonzero if the encoding has nontrivial shift state, or zero if the encoding is stateless.

NOTES
The behavior of mbtowc() depends on the LC_CTYPE category of the current locale. This function is not multithread safe. The function mbrtowc(3) provides a better interface to the same functionality.

SEE ALSO
MB_CUR_MAX(3), mblen(3), mbrtowc(3), mbstowcs(3), wcstombs(3), wctomb(3)
https://www.linux.com/manpage/man3/mbtowc.3.html
Details
- Type: Bug
- Status: Reported
- Priority: P2: Important
- Resolution: Unresolved
- Affects Version/s: 5.12.4
- Fix Version/s: None
- Labels: None
- Platform/s:

Description

I have the following Main.qml

import QtQuick 2.0
import QtQuick.Window 2.10

Window {
    id: app
    visible: true
    width: 640
    height: 480

    Rectangle {
        id: btn
        anchors.fill: parent
        color: "green"

        SequentialAnimation {
            running: true
            loops: Animation.Infinite
            NumberAnimation { target: btn; property: "width"; to: 60; duration: 1000 }
            NumberAnimation { target: btn; property: "width"; to: 70; duration: 1000 }
        }

        Timer {
            interval: 10000
            running: true
            repeat: true
            onTriggered: console.log("*** timer ")
        }
    }
}

When running this on iOS (on a recent iPhone XR), I get the following logs (I installed my own message handler to add timestamps):

13:50:41.250 Debug: *** timer
13:50:52.749 Debug: *** timer
13:51:04.486 Debug: *** timer
13:51:16.378 Debug: *** timer

As you can see, the 10 s timer fires only after about 11.5 s. When changing `running` to false, or `Window` to `Item`, the timer fires accurately again. This problem is not seen on Mac, Linux or Android.
https://bugreports.qt.io/browse/QTBUG-79017
A/B testing

In the Wikimedia platform, the impact of many new features is tested using controlled experiments. Many more should be. This page collects any thoughts, questions, and links that are relevant to the topic.

Infrastructure
- Phab tasks
  - T208089: Infrastructure for interventions impacting editing metrics
  - T135762: A/B Testing solid framework [declined]
  - T76919: Implement reusable framework for A/B testing product features
  - T76917: Investigate using Optimizely for UI A/B testing
  - T213315: [Better Use of Data] Output 3.2: Controlled experiment (A/B test) capabilities
- The "bucket" field is the standard for recording buckets in EventLogging data. Maybe this could be made a default part of every schema, and general tools provided so that a bucketed user would have their bucket set along with all of their events for the duration of the test.

Bucketing
- The most common way to bucket editors for A/B tests has been on their user ID. If the test spans multiple wikis, it would be an improvement to do it on their user name, because that's consistent across wikis, and would ensure they're placed in the same bucket across multiple wikis. This would mean we need to hash the user name to ensure a consistent distribution.
- It would also be possible to use the global user ID, which would remove the need for hashing, although as of April 2019, that's not available in Javascript.

Sample size / power analysis

General advice

A/B testing on wiki

This is from an email from Aaron Halfaker. This generally applies for any study that might affect our users: an experiment, a survey, a large-scale interview study, etc.
- Create a description of the study on Meta in the Research namespace
  - Make sure to clearly describe the goals of the study and any disruption it might cause.
- Post on a community forum where the active users are likely to take note, e.g. the Village Pump on English Wikipedia.
  - Engage in the discussion there.
  - Make sure to link to your Meta page.
- It's common for no one to respond. You should wait at least a few days. Consider making a follow-up post reminding people that you'd like to start the study soon (bonus points for a target deployment date. - Sometimes there will be a negative response. Try your best to address concerns and make modifications to the study design. If the negative response persists, consider rescheduling or fundamentally redesigning the study. - Assuming the discussion went well, do the study. Update the meta page with results and discussion. - Post the results of the study in the same community forum and consider bringing it to the Wikimedia Research Showcase.
https://wikitech.wikimedia.org/wiki/A/B_testing
CallAction

Since: BlackBerry 10.3.0

#include <bb/system/phone/CallAction>

To link against this class, add the following line to your .pro file:

LIBS += -lbbsystem

Values describing the actions that can be performed on a phone call. You must also specify the access_phone permission in your bar-descriptor.xml file.

Public Types
- HoldCall = 1 - The hold call action. Since: BlackBerry 10.3.0
- ResumeCall = 2 - The resume call action. Since: BlackBerry 10.3.0
- SplitCall = 3 - The split call action. Since: BlackBerry 10.3.0
http://developer.blackberry.com/native/reference/cascades/bb__system__phone__callaction.html
Can't get focus back in a TextField

I have a simple QML form that contains one TextField, some buttons, labels and a clickable link (mouse area). The TextField works fine until I open a child form (via QQuickView inside a QDialog using PyQt5) by clicking on a MouseArea. After the second form is closed, the TextField can no longer get the input focus. I used the following to check the focus in the TextField:

    text: activeFocus ? "I have active focus!" : "I do not have active focus"

I have tried setting "focus" to true from code when the child form is closed, but that doesn't help. Is there something fundamental that I am missing? How can I return the focus to the TextField on the first form when the child form is closed?

I am not sure about the issue, but you probably want to try calling forceActiveFocus() and not just setting the "focus" property, as they mean different things.

Using forceActiveFocus() doesn't help either. I will make a minimal example. By the way, where is the detailed documentation for forceActiveFocus()? The documentation for 5.1 is missing.

Here is a working (or not, as is the case here) example. When run from Qt Creator (qmlscene) it works as expected. When run from Python using QQuickView, the TextField in parent.qml does not get the focus back after the child window has been opened and closed.

// parent.qml
import QtQuick 2.1
import QtQuick.Controls 1.0

Rectangle {
    width: 640
    height: 480

    signal closed

    Button {
        text: qsTr("Open child window")
        anchors.horizontalCenter: parent.horizontalCenter
        anchors.verticalCenter: parent.verticalCenter
        onClicked: {
            child.visible = true
        }
    }

    TextField {
        id: text_field1
        x: 93
        y: 299
        width: 471
        height: 22
        placeholderText: "Text Field"
        text: activeFocus ? "I have active focus!"
                          : "I do not have active focus"
    }

    Child {
        id: child
        visible: false
        modality: Qt.WindowModal
    }

    Button {
        id: button1
        x: 488
        y: 397
        text: "Close"
        onClicked: {
            closed()
        }
    }
}

// Child.qml
import QtQuick 2.1
import QtQuick.Controls 1.0
import QtQuick.Window 2.0

Window {
    width: 500
    height: 300

    TextField {
        id: text_field1
        x: 73
        y: 189
        width: 227
        height: 22
        placeholderText: "Text Field"
    }

    Button {
        id: button1
        x: 373
        y: 187
        text: "OK"
        onClicked: {
            close()
        }
    }
}

# example.py
import sys
from PyQt5 import QtWidgets, QtCore, QtQuick
from PyQt5.QtWidgets import QDialog, QWidget

def qt5_qml_dir():
    import os
    import subprocess
    qmldir = subprocess.check_output(["qmake", "-query", "QT_INSTALL_QML"]).strip()
    if len(qmldir) == 0:
        raise RuntimeError('Cannot find QT_INSTALL_QML directory, "qmake -query ' +
                           'QT_INSTALL_QML" returned nothing')
    if not os.path.exists(qmldir):
        raise RuntimeError("Directory QT_INSTALL_QML: %s doesn't exist" % qmldir)
    # On Windows 'qmake -query' uses / as the path separator
    # so change it to \\.
    if sys.platform.startswith('win'):
        import string
        qmldir = string.replace(qmldir, '/', '\\')
    return qmldir

class FocusTest(QDialog):
    def __init__(self, parent=None):
        super(FocusTest, self).__init__(parent)
        self._parent = parent
        view = QtQuick.QQuickView()
        self._view = view
        view.setResizeMode(QtQuick.QQuickView.SizeViewToRootObject)
        # add the Qt Quick library path
        view.engine().addImportPath(qt5_qml_dir())
        view.setSource(QtCore.QUrl('parent.qml'))
        w = QWidget.createWindowContainer(view, self)
        w.setMinimumSize(view.size())
        w.setMaximumSize(view.size())
        r = view.rootObject()
        self._root = r
        r.closed.connect(self.onClosed)

    def onClosed(self):
        self._view.close()
        self.accept()

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    s = "Test placeholder"
    label = QtWidgets.QLabel(s, None)
    label.setAlignment(QtCore.Qt.AlignHCenter | QtCore.Qt.AlignVCenter)
    label.setWindowTitle(s)
    label.resize(800, 400)
    label.show()
    s = FocusTest(label)
    s.accepted.connect(label.close)
    s.open()
    app.exec_()

I have reported this as a bug.

Can't you just do it like this:

Child {
    id: child
    visible: false
    modality: Qt.WindowModal
    onVisibleChanged: {
        if (visible === false) {
            text_field1.forceActiveFocus();
        }
    }
}

That should set the focus back to the textfield if the Child is closed (and not visible any more). This is more or less what Jens was suggesting, I think.

Thanks, I tried doing that from code but it didn't work. The above doesn't work either. I have temporarily worked around this problem by hiding the parent form when the child form is shown and then showing it again when the child form is closed.
https://forum.qt.io/topic/33175/can-t-get-focus-back-in-a-textfield
The <see> tag can be used to link to another class. Its cref attribute should contain the name of the class that is to be referenced. Visual Studio will provide IntelliSense when writing this tag, and such references will be processed when renaming the referenced class, too.

/// <summary>
/// You might also want to check out <see cref="SomeOtherClass"/>.
/// </summary>
public class SomeClass
{
}

In Visual Studio IntelliSense popups, such references will also be displayed colored in the text.

To reference a generic class, use something similar to the following:

/// <summary>
/// An enhanced version of <see cref="List{T}"/>.
/// </summary>
public class SomeGenericClass<T>
{
}
https://www.notion.so/70288bee85714c508d934c77cde0d23a
In this article, we'll deploy a very simple application, and I have attached the application code with this article. So far, we have learned the essentials of Angular. Every application needs to be deployed, and that's the topic of this article. Here, we will cover how to prepare, build, and deploy an Angular application. And if you've not read my previous articles of the Angular series, you can start your journey with Angular from the following links.

Let's get started.

Preparing the App for Deployment

Download the source file and run it. In this application, we have two pages: the home page and a list of GitHub followers. So this application includes routing and consuming HTTP services. Here we have only the front-end; the back-end, in this case the GitHub API, is provided by a third party. In your own application, you may build the back-end yourself, so in this article we'll also touch on how to deploy both the front-end and the back-end.

In terms of deployment, we have a couple of options. The simplest option is to copy this entire folder with all its files onto the non-development machine. So, copy this folder, paste it onto the production machine, and there run the following command:

ng serve

But there are a couple of problems with this approach. Of course, we can exclude the node_modules folder and then run the following command on the target machine:

npm install

It installs all the npm packages referenced in this project. But we still have a problem with this approach. Here, in the terminal, look at the size of the vendor bundle: it is 2.63 MB, even though this is a very simple application. Moreover, we're not using any 3rd party libraries; we're just using Angular. We can do much better. So, let's see a few optimization techniques we can apply here. These techniques have been around for more than a decade and they are not specific to Angular.
You might be familiar with these techniques as well.

Minification - removing comments and whitespace:

Before:
// This is my comment
public class HomeComponent {
    onClick() {
    }
}

After:
public class HomeComponent { onClick() { } }

Uglification - shortening long, descriptive identifiers:

Before:
public class HomeComponent { onClick() { } }

After:
public class HC { oC() { } }

Good News

We can apply all these optimization techniques using a single command in Angular CLI:

ng build --prod

So when we build our application using Angular CLI with this command, Angular will produce highly optimized bundles, and then we can simply deploy these files to a non-development machine.

JIT vs AOT Compilation

Every application goes through a compilation step. So far you've not seen compilation because it happens behind the scenes. In the Angular framework, we have a compiler. The job of this compiler is a little different from other compilers you might be familiar with. For example, a C++ compiler takes C++ code and converts it to a different language, like machine code. The Angular compiler, in contrast, takes our component templates and produces JavaScript code. It might be confusing at the beginning, but honestly speaking it is very easy to understand. Let's discuss it using an example.

Look at the template HTML code of our home.component.html: here we have an interpolation string for rendering the username field. If we load this HTML file in a browser, are we going to see the value of the current username? Of course not; we're going to see this as static text, exactly as it is here. This curly braces syntax is only meaningful to Angular. The same is true for the property and event binding expressions: browsers don't understand them. So when our application starts, the compiler kicks in; it walks down the tree of components, and for each component it parses its template.
So in this case, it is going to look at the template of the home component. At the top we have a div; inside the div we have a p tag; and inside this p tag we have some static text plus some dynamic text in an interpolation string. Based on this template, it is going to produce some JavaScript code to create this structure in the DOM. As a very simplified example, when the Angular compiler parses the template for our home component, it may produce JavaScript code that creates these elements and appends them according to the structure of the component template HTML. After appending, there will be some code to take the value of the createUser.name field from our component and display it in the DOM. And there is also more code for detecting changes to the value of this field and refreshing the DOM as necessary.

So the Angular compiler produces this JavaScript code at runtime, the code is then executed, and as a result we see the view in the browser. This is Just-In-Time (JIT) compilation: in other words, compilation that happens at runtime. JIT compilation is perfectly fine while you're developing the application on your local machine, but it is very inefficient for a production environment, because this compilation step is going to happen for every user of our application. Every time a user lands on our application, the Angular compiler walks down the tree of our components and compiles all these components and their templates. Now, you can imagine that as the number of components increases, or as our templates get more complex, this compilation step takes longer. For this reason, we also need to ship the Angular compiler as part of the vendor bundle. That is why our vendor bundle is 2.6 MB even for such a simple application: almost half of the bundle is dedicated to the Angular compiler.

Now, what's the solution?
We can perform this compilation step Ahead-Of-Time, before deploying the application; then this compilation step doesn't have to happen for every user. Our users will download the final, precompiled application. That means our application will start faster, and the bundles we ship will be smaller because they no longer need to include the Angular compiler. So these are the benefits of AOT compilation.

Angular Compiler in Action

Now let's see the Angular compiler in action. If you open up package.json, under dev dependencies we have the @angular/compiler-cli dependency. This is the Angular compiler package that takes up a significant part of our vendor bundle. Now let's see how to run the Angular compiler:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> node_modules/.bin/ngc

bin is the directory where binary files are located, and ngc runs the Angular compiler. With this command, the Angular compiler compiles the components and their templates. Now, back in VS Code, we can see we have 43 new files. Here is an example: in the src/app folder, we have a new file with the suffix (.shim.ngstyle.ts), generated for the component's CSS. Let's have a look. Currently we have no CSS for the app component template, so the styles array of type any is empty; but if we did have any styles, the Angular compiler would export these styles using a const here.

Here we have another file, app.module.ngfactory.ts. For every component referenced in our application, the Angular compiler generates an ngfactory file. Look at this code: it is the combination of the component and its template. We can see a little bit of the markup here, like router-outlet and app-root. So this is the code that the Angular compiler generates at runtime. And of course this is TypeScript, so it can be transpiled into JavaScript and then executed, and as a result we get the rendered app component in the DOM. As you can see, this code is cryptic; it is not designed for humans to read. I just wanted to show what happens under the hood when you load the application. So this is how compilation works.
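The kind of imperative DOM-building code described above can be pictured with a toy sketch. This is purely illustrative: real ngfactory output is far more involved, the `currentUser` field is a made-up example, and the stub `document` object just makes the sketch runnable outside a browser.

```javascript
// A stub stand-in for the browser DOM, so the sketch runs anywhere.
function makeStubDocument() {
  return {
    createElement: (tag) => ({ tag, children: [], textContent: "" }),
    createTextNode: (text) => ({ tag: "#text", textContent: text }),
  };
}

const document = makeStubDocument();
const component = { currentUser: { name: "Jane" } }; // illustrative component state

// Roughly what "compiling" <div><p>Welcome {{ currentUser.name }}</p></div>
// amounts to: imperative element creation plus a binding update.
const div = document.createElement("div");
const p = document.createElement("p");
const text = document.createTextNode("");
p.children.push(text);
div.children.push(p);

// Change-detection-style refresh: copy the field value into the DOM node.
function refresh() {
  text.textContent = "Welcome " + component.currentUser.name;
}
refresh();

console.log(text.textContent); // -> Welcome Jane
```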
An ngfactory file is generated for each component referenced in our application. In the real world, we don't need to run ngc ourselves; that was just for demonstration. We use Angular CLI to build our application for production, and Angular CLI will internally run the Angular compiler.

Building Applications with Angular CLI

Now we'll see how to use Angular CLI to build the application and get deployable packages for production. With this, you'll get all the optimization techniques that we discussed earlier in this article. But first of all, we need to delete all the files that we generated when we ran the compiler, and discard those changes. Now, back in the terminal, to build our application for production:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng build --prod

This will create a deployable package that we can simply copy and paste to a different machine, or deploy to the production server via FTP or any other mechanism. But before running this command, let's run ng build (without the --prod flag). Don't worry about the yellow warnings. Look: the vendor bundle is 2.55 MB, which is still big. So we don't have AOT compilation here; in other words, the Angular compiler is part of the vendor bundle.

Now, back in VS Code, we have a folder called dist, which stands for distributable. Here we have the favicon for our application, the glyphicon files (which are part of Bootstrap), index.html and the bundles. First, take a look at index.html. It is slightly different from the index.html in the src folder: here, in the body element after app-root, we have a few script references, the first being inline.bundle.js, followed by references to polyfills and the other bundles in our application. In contrast, the index.html in the src folder doesn't have any bundle scripts, because during development these bundles are injected into the body element at runtime.
Back in the dist folder, every bundle has two files. The first is the actual bundle file and the second is the map, or source map, file. The purpose of this map file is to map a piece of JavaScript code in the bundle back to its original source file. So when you're debugging the application in Chrome, this map file allows the Chrome debugger to show you the actual code in the source file, not in the bundle.

Let's take a look at one of these bundles. We can see we don't have any of the optimization techniques we discussed earlier: there are a lot of comments, a lot of whitespace, and long descriptive names. So we have neither minification nor uglification. We also have dead code: if we have created components and services that we do not use in our application, they will end up in this bundle. Let's see an example:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng g c courses

And I'm not going to use it anywhere in the application. Now build the application one more time:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng build

Now our bundles are ready again. Let's take a look at main.bundle.js and search for "courses". Here we'll see 27 results, so all the code of our courses component is in the bundle; we have dead code here. Also, we don't have AOT compilation, because the Angular compiler is included in the vendor bundle.

Now let's see what happens when we build the application with the prod flag. Now you can see our vendor bundle size is just 1.51 MB, which is almost half the size. And by the way, this is the initial size before minification and uglification are reported, so if you look at the actual file size, it will be even smaller. Back in the dist folder, we have the favicon and glyphicons like before. But when you open index.html, you won't see any whitespace there. Then open the other bundles in the dist folder one by one: all our HTML markup is represented as one long string, and we no longer have any descriptive identifiers.
All the function and variable names are cryptic. Look: we have dynamically generated numbers beside the bundle file names; we call each of these a hash. This hash is generated based on the content of the bundle, and it is a technique to prevent stale caching. Every time you modify your code and regenerate a bundle, its hash changes, and this prevents the client browser from caching a file with the exact same name, because the name of the file changes on each build. We can simply copy and paste the dist folder onto a non-development machine, we can use FTP, or, in a more complex scenario, we can set up a continuous deployment workflow.

Environment

In typical applications, we have the concept of an environment. Quite often, in most organizations, we have three environments: development, testing and production. Earlier in our Angular series, we had a quick look at the environments folder; this is exactly where the concept of environments is implemented in our application. So now let's take a closer look at what we have here and how we can define additional custom environments.

In this folder, we have two files: environment.prod.ts for production and environment.ts used for development. In environment.ts, we're simply exporting an object with one property, production. Here we might have additional properties. For example, you may want to change the color of the navigation bar depending on the environment; this way testers know that they are looking at the actual testing website, not the production website, so they don't accidentally modify data in production. Or you may want to change the name of the application in the navigation bar, perhaps adding the word "testing", or you may want to use a different API endpoint in your testing environment. We can add all these properties to this object.

So here we have added the property in the development environment (that is, environment.ts); now we need to add this property in the production environment as well.
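A sketch of what the development environment.ts might look like with the properties mentioned above. The property names follow the article (navbarBackgroundColor); the other fields and all the values are illustrative assumptions, not taken from the actual project.

```typescript
// environment.ts -- development environment object (illustrative values)
export const environment = {
  production: false,
  appName: "Deploy Demo (testing)",            // e.g. show "testing" in the navbar
  navbarBackgroundColor: "#4a90d9",            // a distinct color per environment
  apiEndpoint: "http://localhost:3000/api",    // hypothetical endpoint
};
```

environment.prod.ts would export an object with exactly the same properties, with production set to true and production-appropriate values.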
So let's go to environment.prod.ts; here we have changed the value of navbarBackgroundColor for production. Now, back in the environment.ts file, read the comment at the top. It tells us that when you build your application using Angular CLI and supply the environment flag, Angular CLI will pick one of these environment files and put it into the bundle. So we don't have to write any code to select the right environment object. Let me show you what I mean.

Let's go to navbar.component.ts: here we define a field called backgroundColor and set it to the navbarBackgroundColor of our environment object. Now let's go back to navbar.component.html and apply a style binding on the nav element. To see this in action, you don't necessarily need to build your application; you can still serve it with ng serve, but additionally we need to specify the target environment, which by default is development:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng serve

And if you want to run your application with the production environment, we can add --prod:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng serve --prod

This is the short form of:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng serve --environment=prod

Now run the application in the production environment and open the browser: our navbar is white here. Now, run it again in the development scenario. But before running it in development mode, we need to kill the port, because port 4200 is already in use. Here is the link to kill the 4200 port. And now it is serving in the development environment.

Adding Custom Environments

So we have seen how to work with the development and production environment objects. Now, what if you want to add an additional environment, like a testing environment or a staging environment? These are common setups in a lot of organizations. So we copy and paste one of these files and rename it in the environments folder.
Look at the changes we have applied in the project: we set the production property to false because this is the testing environment. Now that we have the new environment file, we need to register it with Angular CLI. Back in the root of the project we have .angular-cli.json; it is the configuration file for Angular CLI, and it has a property called environments, in which we can register all our environments.

Now, back in the terminal, we can run the application with the test environment flag:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng serve --environment=test

And we can run it with ng build as well:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng build --environment=test

Now let's see the testing environment in action. Back in the browser, our navigation bar is purple, and that indicates the testing environment. So we can be sure that we are now working in the testing environment.

There is something you need to be aware of when you're working in a non-development environment: there is no hot module replacement. If you make any changes in your code or in your environment object, the changes are not visible immediately. For example, let's change something and save: back in the browser, it is still showing a purple navigation bar, which means our changes are not reflected, because we don't have hot module replacement. It's only available when you're running your application in the development environment. So we need to go back to the terminal, stop the web server, and run ng serve again. Back in the browser, refresh manually this time; webpack will not re-execute on its own.

So, if you want to add custom environments, add the new file in the environments folder and make sure the environment object you're exporting has exactly the same properties in all the versions (dev, test, prod).
And also don't forget to register the new environment in .angular-cli.json.

Linting with Angular CLI

A good practice to make sure that your code has a high degree of readability and maintainability is using a linter. It is basically a program that can be configured with various rules and then performs a static analysis of your code to see if you have violated any of them. This way we can ensure that our code is clean and consistent. This is especially important when you're working in a team, because people often have different styles of writing code; for example, some developers use single quotes for strings and some use double quotes, and mixing single and double quotes in the same file looks very ugly. Some people terminate their JavaScript statements with a semicolon and some don't care about them.

In the TypeScript world, a popular tool we use for linting is tslint; if you want to study it further, you can explore its documentation. Now the good news is that a project we create with Angular CLI automatically has linting support. If you go to package.json, under dev-dependencies we have a reference to tslint; it is installed by default. And in the root of the project we have the tslint.json configuration file as well. This is where we define all the rules. Here is an example: we have a rule called quotemark which is set to "single", which means everyone should use single quotes for strings. If you don't like single quotes and prefer double quotes, you can change this to "double", and if you want, you can disable the rule entirely.

Now let's see linting in action. One simple way to run tslint is via Angular CLI:

PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng lint

This goes through all the files in our project, finds all the errors and reports them in one go. Obviously, it is not very interactive.
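For reference, the tslint.json rules discussed above might look like this fragment (the exact rule values are illustrative):

```json
{
  "rules": {
    "quotemark": [true, "single"],
    "semicolon": [true, "always"],
    "no-trailing-whitespace": true
  }
}
```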
It is really a little bit hard to go through these errors, but it is a good option before committing your code to the repository: we can quickly run ng lint to see all the linting issues in the code. Now let's take a look at a few examples.

On line 11, we have trailing whitespace. So let's go to github.service.ts, and there on line 11 we'll see a minor trailing whitespace. These tslint rules are configured by default. If this rule annoys you, simply go to tslint.json and search for trailing-whitespace. This rule is set to true, and we can set it to false.

  "no-trailing-whitespace": true

This is how we read the errors one by one. We can fix them one by one, or you can use the Angular CLI to fix them automatically for you. So,

  PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng lint --fix

And still, some errors are showing there. So tslint will fix the errors that are easier to fix and leave the others for us. Currently it fixed most of the errors; there are only 2 errors left that we need to fix manually. For example, it is saying the selector of the component GitHubFollowersComponent should have the prefix app. This is based on the same principle we saw when we generate a component using the Angular CLI: your component selector is prefixed with app. I personally don't like that; you may like it, you may love it. So we can configure tslint to apply that rule or not. So,

  PS C:\Users\Ami Jan\deploy-demo\deploy-demo> ng lint (to find all the linting issues in the code)

And if we supply the fix flag, we can have tslint fix most of the easy issues for us. So this is a good technique to find and fix most of the linting issues in your code before committing it to the repository.

Linting in VS Code

Now let's see how to lint the code in VS Code directly. Install the extension and then press Reload to activate it in the VS Code environment. Then open github-followers.service.ts and place one of the import statements in double quotes instead of single quotes.
Look, it is showing a red squiggly line. And if you click on the yellow light bulb, you'll see the options for how to fix this problem. It also helps to remove trailing whitespace. So this extension is very useful when you're coding: you can find the linting issues and fix them as you code. It is good practice to run ng lint before committing your code to the repository.

Other Deployment Options

So we have seen how to build a deployable package for your app using the Angular CLI. We can simply copy and paste the content of the dist folder onto the web server, or you may use FTP. But let's look at a few other deployment options.
https://www.c-sharpcorner.com/article/preparing-the-angular-apps-for-deployment/
This class allows convenient ways of accessing a motion command. More...

  #include <MMAccessor.h>

This class allows convenient ways of accessing a motion command.

Since MotionCommands must be checked out of the motion manager and then checked back in when they are done, this is a common source of errors, leading to deadlock. This class will check the motion out when it's created, and check it back in when it goes out of scope. It supports recursive checkin/checkouts. Uses global motman.

So, instead of doing things like this:

  YourMotionCommand* ymc = dynamic_cast<YourMotionCommand*>(motman->checkoutMotion(your_mc_id));
  //do 'stuff' with ymc, e.g.:
  ymc->rollOver();
  motman->checkinMotion(your_mc_id);

...which can be error prone in many regards - if 'stuff' returns without checking in, or you forget to check in, or you get lazy and leave it checked out longer than you should, which can cause general performance issues (or worse, deadlock).

Using MMAccessor makes it much easier to solve these problems, and is easier to code:

  MMAccessor<YourMotionCommand> mma(your_mc_id);
  //do 'stuff' with mma, e.g.:
  mma->rollOver();

We can call a return at any point and the motion command will automatically be checked in, and since C++ guarantees that the destructor of mma will be called, we don't have to worry about forgetting about it. We can limit the scope by placing {}'s around the segment in question:

  //pre-stuff
  {
    MMAccessor<YourMotionCommand> mma(your_mc_id);
    //do 'stuff' with mma, e.g.:
    mma->rollOver();
  }
  //post-stuff - has no knowledge of mma, out of its scope

And, for those astute enough to notice that the theoretical rollOver() function is called on MMAccessor when it's actually a member of YourMotionCommand: this is because MMAccessor behaves as a 'smart pointer', which overloads operator->() so it is fairly transparent to use.

In some cases, you may wish to access a prunable motion, but may be unsure of whether the motion is still alive.
If it has been pruned, the MC_ID is no longer valid, and will not provide access to the motion. Worse, however, is the possibility that enough new motions have been created that the ID has been recycled and now refers to another, different, motion.

The solution to this requires two steps. First, you must retain the SharedObject you used to initially create the motion. This is required because if the MotionManager prunes the motion, it will dereference the memory region, and if there are no other references to the region, it will be deallocated, destroying the data. Second, you pass this SharedObject to the MMAccessor constructor as shown:

  SharedObject<YourMC> yourmc;
  // ... stuff ... if yourmc was added to MotionManager, it may or may not still be active
  MMAccessor<YourMC> your_acc(*yourmc); // doesn't matter!
  // your_acc now provides no-op access if not in MotionManager, checks it out if it is

This guarantees safe access regardless of the current status of the motion. (Note that you can also just listen for the (EventBase::motmanEGID, MC_ID, EventBase::deactivateETID) event to be notified when a motion is pruned... however, unless you still have a reference to the SharedObject, you won't be able to access/reuse the motion after it was pruned.)

MMAccessor is a small class; you may consider passing it around instead of a MotionManager::MC_ID if appropriate. (It would be appropriate to avoid multiple checkin/outs in a row from different functions, but not as appropriate for storage and reuse of the same MMAccessor.)

Definition at line 68 of file MMAccessor.h.

The true constructor; checks out by default. Definition at line 74 of file MMAccessor.h.

Constructor which allows objects to provide uniform access to MotionCommands, regardless of whether they are currently in the MotionManager. If ckout is true (the default parameter), it will attempt to check out the motion if the motion reports it has a valid ID. Definition at line 83 of file MMAccessor.h.
Copy constructor - will reference the same mc_id; checkin/checkouts are independent between this and a; however, if a is checked out, this will check itself out as well. If the original was checked out, this will check out as well (so checkOutCnt will be 1). Definition at line 90 of file MMAccessor.h.

Destructor, checks in if needed. Definition at line 96 of file MMAccessor.h.

Checks in the motion, passing through the value it is passed. Useful in situations like this:

  MMAccessor<myMC> mine(myMC_id);
  if(mine.mc()->foo())
    //do something with motman here

But we want to check in mine ASAP - if we don't reference it anywhere in the if statement, we're leaving the MC locked longer than we need to. How about instead doing this:

  bool cond;
  {
    MMAccessor<myMC> mine(myMC_id);
    cond=mine.mc()->foo();
  }
  if(cond)
    //do something with motman here

But that uses an extra variable... ewwww... so use this function as a pass-through to check in the MC:

  MMAccessor<myMC> mine(myMC_id);
  if(mine.checkin(mine.mc()->foo()))
    //do something with motman here

Definition at line 162 of file MMAccessor.h.

Checks in the motion. Don't forget, you can also just limit the scope using extra { }'s. Definition at line 127 of file MMAccessor.h. Referenced by MMAccessor< MC_t >::checkin(), and MMAccessor< MC_t >::~MMAccessor().

So you can check out if not done by default (or you checked in already). Definition at line 114 of file MMAccessor.h. Referenced by MMAccessor< MC_t >::MMAccessor(), and MMAccessor< MC_t >::operator=().

Returns the motion command's address so you can call functions. Definition at line 123 of file MMAccessor.h. Referenced by ControlBase::deactivate(), ControlBase::doReadStdIn(), ControlBase::doSelect(), MMAccessor< MC_t >::operator*(), MMAccessor< MC_t >::operator->(), MMAccessor< MC_t >::operator[](), and ControlBase::refresh().

Smart pointer to the underlying MotionCommand. Definition at line 170 of file MMAccessor.h.

Definition at line 169 of file MMAccessor.h.
Definition at line 168 of file MMAccessor.h.

Definition at line 167 of file MMAccessor.h.

Allows assignment of MMAccessors, similar to the copy constructor - the two MMAccessors will control the same MotionCommand. Definition at line 103 of file MMAccessor.h.

Definition at line 172 of file MMAccessor.h.

Definition at line 171 of file MMAccessor.h.

[protected] counter so we know how many times checkout was called. Definition at line 176 of file MMAccessor.h. Referenced by MMAccessor< MC_t >::checkin(), MMAccessor< MC_t >::checkout(), MMAccessor< MC_t >::MMAccessor(), MMAccessor< MC_t >::operator=(), and MMAccessor< MC_t >::~MMAccessor().

The MC_ID that this Accessor was constructed with. Definition at line 175 of file MMAccessor.h. Referenced by MMAccessor< MC_t >::checkin(), MMAccessor< MC_t >::checkout(), MMAccessor< MC_t >::MMAccessor(), and MMAccessor< MC_t >::operator=().

A pointer to the motion command; should always be valid, even when not checked out, so you can access member fields (which is reasonably safe). Definition at line 177 of file MMAccessor.h. Referenced by MMAccessor< MC_t >::checkin(), MMAccessor< MC_t >::checkout(), MMAccessor< MC_t >::mc(), MMAccessor< MC_t >::MMAccessor(), and MMAccessor< MC_t >::operator=().
http://tekkotsu.org/dox/classMMAccessor.html
symbol filter redux

A while ago I provided this local symbol server proxy you could use to get just the symbols you want. I was watching it work a couple of weeks ago and I noticed that most of the time what it ends up doing is proxying a 302 redirect. Which was kind of cool because that meant that it didn't actually have to do much heavy lifting at all and the symbols were coming directly from the original source.

That's when it hit me that I had been doing it wrong the whole time. So I deleted all of the proxy code. What it does now is that it always serves either a 404 if the request isn't on the white-list or it serves a 302 if it is on the white list. It just redirects according to the path provided. The original article is here.

To use it, instead of doing this:

  set _NT_SYMBOL_PATH=srv*

Do this:

  set _NT_SYMBOL_PATH=srv*

If the pattern matches a line in symfilter.txt it will serve a 302 redirect to [the usual pdb request path]. Note the syntax is slightly different than the original version.

Now that it's only serving redirects or failures it probably could use the http listener directly because the reasons for doing sockets myself are all gone. But I didn't make that change.

The code is here:

  using System;
  using System.Collections.Generic;
  using System.Text;
  using System.IO;
  using System.Net;
  using System.Net.Sockets;

  namespace SymbolFilter
  {
      class FilterProxy
      {
          // we'll accept connections using this listener
          static TcpListener listener;

          // this holds the white list of DLLs we do not ignore
          static List<string> dllFilterList = new List<string>();

          // default port is 8080, config would be nice...
          const int port = 8080;

          static void Main(string[] args)
          {
              // load up the dlls
              InitializeDllFilters();

              // open the socket
              StartHttpListener();

              // all real work happens in the background, if you ever press enter we just exit
              Console.WriteLine("Listening on port {0}. Press enter to exit", port);
              Console.ReadLine();
          }

          static void InitializeDllFilters()
          {
              try
              {
                  // we're just going to throw if it fails...
                  StreamReader sr = new StreamReader("symfilter.txt");

                  // read lines from the file
                  string line;
                  while ((line = sr.ReadLine()) != null)
                  {
                      dllFilterList.Add(line.Trim().ToLowerInvariant());
                  }
                  sr.Close();
              }
              catch (Exception e)
              {
                  // anything goes wrong and we're done here, there's no recovery from this
                  Console.WriteLine(e.Message);
                  Environment.Exit(1);
              }
          }

          // Here we will just listen for connections on the loopback adapter, port 8080
          // this could really use some configuration options as well. In fact running more than one of these
          // listening to different ports with different white lists would be very useful.
          public static void StartHttpListener()
          {
              try
              {
                  listener = new TcpListener(IPAddress.Loopback, port);
                  listener.Start();
                  listener.BeginAcceptTcpClient(new AsyncCallback(DoAcceptTcpClientCallback), listener);
              }
              catch (Exception e)
              {
                  // anything goes wrong and we're done here, there's no recovery from this
                  Console.WriteLine(e.Message);
                  Environment.Exit(1);
              }
          }

          // when we accept a new listener this callback will do the job
          public static void DoAcceptTcpClientCallback(IAsyncResult ar)
          {
              // Get the listener that handles the client request.
              TcpListener listener = (TcpListener)ar.AsyncState;

              // I should probably assert that the listener is the listener that I think it is here
              // End the operation and display the received data on the console.
              try
              {
                  // end the async activity and open the client
                  // I don't support keepalive so "using" is fine
                  using (TcpClient client = listener.EndAcceptTcpClient(ar))
                  {
                      // Process the http request
                      // Note that by doing it here we are effectively serializing the listens
                      // that seems to be ok because mostly we only redirect
                      ProcessHttpRequest(client);
                  }
              }
              catch (Exception e)
              {
                  // if anything goes wrong we'll just move on to the next connection
                  Console.WriteLine(e.Message);
              }

              // queue up another listen
              listener.BeginAcceptTcpClient(new AsyncCallback(DoAcceptTcpClientCallback), listener);
          }

          // we have an incoming request, let's handle it
          static void ProcessHttpRequest(TcpClient client)
          {
              // we're going to process the request as text
              NetworkStream stream = client.GetStream();
              StreamReader sr = new StreamReader(stream);

              // read until the first blank line or the end, whichever comes first
              var lines = new List<string>();
              for (;;)
              {
                  var line = sr.ReadLine();
                  if (line == null || line == "")
                      break;
                  lines.Add(line);
              }

              // e.g. "GET /foo.pdb/DE1EBC3EE7E542EA96B066229D3A40081/foo.pdb HTTP/1.1"
              var req = lines[0];

              // avoid case sensitivity issues for matching the pattern
              var reqLower = Uri.UnescapeDataString(req).ToLowerInvariant();

              // loop over available patterns, if any matches early out
              int i;
              for (i = 0; i < dllFilterList.Count; i++)
              {
                  if (reqLower.Contains(dllFilterList[i]))
                      break;
              }

              // if we didn't match, or it isn't a GET or it isn't HTTP/1.1 then serve up a 404
              if (i == dllFilterList.Count || !req.StartsWith("GET /") || !req.EndsWith(" HTTP/1.1"))
              {
                  // you don't match, fast exit, this is basically the whole point of this thing
                  Return404(client);
              }
              else
              {
                  // this is the real work
                  Console.WriteLine("Matched pattern: {0}", dllFilterList[i]);
                  RedirectRequest(client, req);
              }
          }

          // cons up a minimal 404 error and return it
          static void Return404(TcpClient client)
          {
              // it doesn't get any simpler than this
              var sw = new StreamWriter(client.GetStream());
              sw.WriteLine("HTTP/1.1 404 Not Found");
              sw.WriteLine();
              sw.Flush();
          }

          // cons up a minimal 302 redirect and return it
          static void Return302(TcpClient client, string server, string url)
          {
              string line = String.Format("Location: https://{0}{1}", server, url);
              Console.WriteLine("302 Redirect {0}", line);

              // emit the redirect
              var sw = new StreamWriter(client.GetStream());
              sw.WriteLine("HTTP/1.1 302 Redirect");
              sw.WriteLine(line);
              sw.WriteLine();
              sw.Flush();
          }

          static void RedirectRequest(TcpClient client, string req)
          {
              // we know this is safe to do because we already verified that the request starts with "GET /"
              string request = req.Substring(5);                  // strip off the "GET /"
              request = request.Substring(0, request.Length - 9); // strip off " HTTP/1.1"

              // we're looking for the server that we are supposed to use for this request in our path
              // the point of this is to make it so that you can set your symbol path to include
              //
              // and your-sym-server will be used by the proxy
              int delim = request.IndexOf('/', 1);

              // if there is no server specified then we're done
              if (delim < 0)
              {
                  // serve up an error
                  Return404(client);
                  return;
              }

              // the target server is everything up to the / but not including it
              string targetServer = request.Substring(0, delim);

              // the new request URL starts with the first character after the |
              request = request.Substring(delim + 1);

              // if there isn't already a leading slash, then add one
              if (!request.StartsWith("/"))
                  request = "/" + request;

              Return302(client, targetServer, request);
          }
      }
  }
https://docs.microsoft.com/en-us/archive/blogs/ricom/symbol-filter-redux
NOTE: this article was published long ago and is probably way out of date with respect to how Skype plugins are created now. You may find it interesting for historical purposes only!

Introduction

A while ago, I became interested in looking at the Skype API, and found that it has a reasonably solid plugin architecture. Support for a Java-based API is coming along, although it seems to lag the C++ and C# implementations slightly. One of the reasons for this is the (somewhat bizarre, in my opinion) API implementation chosen by Skype themselves – text-based API “commands” (such as ALTER CALL, GET PROFILE, etc) are translated into Win32 window messages, which are dispatched and picked up by the Skype window’s own event loop – a fairly ugly and dated form of IPC, which is made a bit more palatable by their COM wrapper implementation (also used by C#). The Linux version uses a different API – again, not ideal. However, the API does support quite comprehensive automation and callback facilities.

Writing a Plugin

In order to be able to get started writing a Skype plugin, you will need at a minimum:

- A C++ compiler – I used Visual C++ Express 2005;
- The Skype4COM libraries.

I also downloaded the excellent Boost library for some C++ extensions (especially relating to date and time handling). I further installed WTL for some useful COM and ATL helpers.

Getting Started

The first thing you will need to do is register the COM server. After you have extracted the files from the Skype4COM distribution, go to the directory where the files have been extracted and run regsvr32 skype4com.dll. A message box should appear confirming successful registration. I installed the COM DLL in $WINDIR/system32 for convenience, as it is then always available on the default system path for dynamic loading.
We are writing an application that makes use of COM, and my favourite way to write COM apps has always been to create a standard Win32 project and then let ATL/WTL take care of the COM boilerplate. I originally read about this technique in Beginning ATL 3 COM Programming years ago, of which I can see a used copy currently on Amazon for sale for 83 cents!!! If you install the WTL AppWizard as per the installation instructions, you can begin a project using the project template below:

You will then need to add any necessary includes and import the Skype COM type library using the import directive in Visual C++. Here is a snippet from my stdafx.h header file:

  #include <atlbase.h>
  #include <atlapp.h>

  extern CAppModule _Module;

  #include <atlwin.h>
  #include <atlcom.h>
  #include <atlhost.h>
  #include <atlctrls.h>
  #include <atlctrlw.h>
  #include <atlframe.h>
  #include <atldlgs.h>

  #import "c:\\windows\\system32\\skype4com.dll" named_guids
  using namespace SKYPE4COMLib;

This will allow Visual C++ to generate smart pointer wrappers for the COM interfaces. The main source .cpp file (generated by the WTL AppWizard) contains the program’s main message handler loop:

  #include "stdafx.h"
  #include "resource.h"
  #include "aboutdlg.h"
  #include "MainDlg.h"

  CAppModule _Module;

  int Run(LPTSTR /*lpstrCmdLine*/ = NULL, int nCmdShow = SW_SHOWDEFAULT)
  {
      CMessageLoop theLoop;
      _Module.AddMessageLoop(&theLoop);

      CMainDlg dlgMain;
      if(dlgMain.Create(NULL) == NULL)
      {
          ATLTRACE(_T("Main dialog creation failed!\n"));
          return 0;
      }
      dlgMain.ShowWindow(nCmdShow);

      int nRet = theLoop.Run();
      _Module.RemoveMessageLoop();
      return nRet;
  }

  int WINAPI _tWinMain(HINSTANCE hInstance, HINSTANCE /*hPrevInstance*/, LPTSTR lpstrCmdLine, int nCmdShow)
  {
      HRESULT hRes = ::CoInitialize(NULL);
      // If you are running on NT 4.0 or higher you can use the following call instead to
      // make the EXE free threaded. This means that calls come in on a random RPC thread.
      // HRESULT hRes = ::CoInitializeEx(NULL, COINIT_MULTITHREADED);
      ATLASSERT(SUCCEEDED(hRes));

      // this resolves ATL window thunking problem when Microsoft Layer for Unicode (MSLU) is used
      ::DefWindowProc(NULL, 0, 0, 0L);

      AtlInitCommonControls(ICC_BAR_CLASSES); // add flags to support other controls

      hRes = _Module.Init(NULL, hInstance);
      ATLASSERT(SUCCEEDED(hRes));

      int nRet = Run(lpstrCmdLine, nCmdShow);

      _Module.Term();
      ::CoUninitialize();
      return nRet;
  }

Using the Skype Interfaces

Our main class is CMainDlg. We can now declare an instance of the Skype COM class in our class, using one of the ATL smart pointers that Visual C++ has generated for us (see the ATL documentation for more details on what it generates):

  class CMainDlg : public CDialogImpl<CMainDlg>,
                   public CUpdateUI<CMainDlg>,
                   ...
  {
  private:
      ...
  public:
      enum { IDD = IDD_MAINDLG };
      ISkypePtr ptr;

We can now use this interface pointer to connect to the running Skype instance and send it commands:

  HRESULT hr;
  hr = ptr.CreateInstance(__uuidof(Skype));
  if (FAILED(hr))
  {
      MessageBox("Failed to create Skype instance!\n", "Error");
      exit(-1);
  }
  ...
  hr = ptr->Attach(SKYPE_PROTOCOL_VERSION, true);
  if (SUCCEEDED(hr))
  {
      OutputDebugString("Connected to Skype\n");
  }

Event Callbacks

If we have connected OK, we can also register for event callbacks. The most fundamental set of callbacks are the callbacks related to change in call status. Unfortunately, COM event handling can be tricky to get right, although ATL/WTL can alleviate some of the pain.
Basically, the main class CMainDlg needs to extend the IDispEventImpl class, passing a reference to the dispatch interface that declares the events we are interested in:

  public IDispEventImpl<IDD_MAINDLG,CMainDlg,&DIID__ISkypeEvents,&LIBID_SKYPE4COMLib,1,0>

We then create a COM event sink map for the event(s):

  BEGIN_SINK_MAP(CMainDlg)
      SINK_ENTRY_EX(IDD_MAINDLG, DIID__ISkypeEvents, 0x8, &OnCallStatusChange)
  END_SINK_MAP()

Finally, we need to actually tell Skype we are interested in the events:

  // Hook up for event notifications
  hr = DispEventAdvise(ptr);
  if (FAILED(hr))
  {
      ...
  }

The actual details of COM event sinks and connection points are beyond the scope of this article, and can be a bit involved (for instance you may need to use the OLE/COM Type Library Viewer utility to view the dispatch interface details), but both MSDN and the COM book referenced above have good examples. Our actual event callback looks like the following:

  void __stdcall OnCallStatusChange(ICall* pCall, enum TCallStatus status)
  {
      std::ostringstream str;
      USES_CONVERSION;
      if (status == clsInProgress)
      {
          // A call has been initiated.
      }
      else if (status == clsRouting) { }
      else if (status == clsRinging) { }
      else if (status == clsFailed) { }
      else if (status == clsFinished) { }
  }

Within the callback, we can retrieve a handle to the call or counterparty and use the information contained within these objects. For instance, the code below creates an instance of a CallInfo object and populates it with data retrieved from the Call and User objects (the ugly BSTR() and LPCTSTR casting is not ideal, but the only way I could effectively cast the COM BSTR to an ANSI string – if you know a better way, please let me know). The CallInfo struct below has fields for the call counterparty, id, duration, and user country.
  if (status == clsInProgress)
  {
      str << "Call in progress with " << (LPCTSTR)OLE2T(pCall->PartnerHandle.GetBSTR());
      IUserPtr user = ptr->GetUser(pCall->PartnerHandle);
      CallInfo* callInfo = new CallInfo();
      callInfo->callId = pCall->Id;
      callInfo->duration = 0L;
      std::string* country = new std::string(OLE2T(user->Country.GetBSTR()));
      callInfo->callerCountry = country;
      ...
  }

When the call has finished, we can retrieve the call duration:

  else if (status == clsFinished)
  {
      str << "Call # " << pCall->Id << " finished, duration: " << pCall->Duration << " seconds.";

      // Retrieve the call from the active calls list
      CallList::const_iterator it = calls.find(pCall->Id);
      if (it == calls.end())
      {
          OutputDebugString("Call Id not found!");
          return;
      }
      CallInfo* callInfo = it->second;
      callInfo->duration = pCall->Duration;

The sample program available for download below contains a very simple example, which basically displays some call-related information as and when Skype reports it. I originally had planned to make it a bit more ambitious, and attempt to deduce the potential cost savings of making a Skype call vs. a fixed-line call on a standard BT tariff, but this proved unreliable for a few different reasons:

- The callee’s profile may not have a country field;
- If present, the country field may be incorrect;
- It is unclear whether it is more appropriate to compare mobile or fixed tariffs.

However, I have left the code in there.

Running the Plugin

If Skype is running, compile and run the plugin. You should see a dialog appear like the following:

Once you have connected, the plugin dialog should appear, and you can see event callbacks as they happen (just make and receive calls as normal):

Wrapping Up

In short, Skype offers fairly decent plugin functionality. However, the actual plugin architecture is clunky and dated, and inconsistent across platforms.
I suspect that Skype will spend some time in the future on developing a richer and more uniform API across all platforms, one that is not tightly coupled to the Skype GUI. Just for completeness, here are my VC++ compiler settings:

  /Od /D "WIN32" /D "_WINDOWS" /D "STRICT" /D "_DEBUG" /D "_ATL_STATIC_REGISTRY" /D "_MBCS" /Gm /EHsc /RTC1 /MTd /Yu"stdafx.h" /Fp"Debug\SkypePlugin.pch" /Fo"Debug\\" /Fd"Debug\vc80.pdb" /W3 /nologo /c /ZI /TP /errorReport:prompt

Downloads

I have packed the source files into a ZIP, which you can download from here: SkypePlugin.zip

The source tree also contains an installer and the NSIS install script. The installer can be downloaded separately from here: Skype Plugin Installer

Finally, the source tree also contains some documentation generated by Doxygen.
http://www.theresearchkitchen.com/articles/writing-a-skype-plugin-in-c
termkey_interpret_position man page

termkey_interpret_position — interpret opaque cursor position event data

Synopsis

  #include <termkey.h>

  TermKeyResult termkey_interpret_position(TermKey *tk, const TermKeyKey *key, int *line, int *col);

Link with -ltermkey.

Description

termkey_interpret_position() fills in variables in the passed pointers according to the cursor position report event found in key. It should be called if termkey_getkey(3) or similar have returned a key event with the type of TERMKEY_TYPE_POSITION. Any pointer may instead be given as NULL to not return that value.

The line and col variables will be filled in with the cursor position, indexed from 1. Note that due to the limited number of bytes in the TermKeyKey structure, the line and column numbers are limited to 2047 and 4095 respectively.

Return Value

If passed a key event of the type TERMKEY_TYPE_POSITION, this function returns TERMKEY_RES_KEY; if passed any other type of event it returns TERMKEY_RES_NONE.
https://www.mankier.com/3/termkey_interpret_position
Opened 6 years ago
Closed 6 years ago

#8484 closed defect (fixed)

Syntax error

Description

Plugin doesn't seem to work on a regular RHEL 5.4 system, using Python 2.4.3 and running Trac 0.12. It generates this error:

  Feb 8 15:51:13 *hostname* Trac[loader] ERROR: Skipping "tracjsgantt = tracjsgantt":
  Traceback (most recent call last):
    File "/usr/lib/python2.4/site-packages/Trac-0.12-py2.4.egg/trac/loader.py", line 70, in _load_eggs
      entry.load(require=True)
    File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1830, in load
      entry = __import__(self.module_name, globals(), globals(), ['__name__'])
    File "build/bdist.linux-x86_64/egg/tracjsgantt/__init__.py", line 1, in ?
      #
    File "/usr/lib/python2.4/site-packages/Trac_jsGantt-0.3-py2.4.egg/tracjsgantt/tracjsgantt.py", line 133
      return (options[name] if options.get(name) else self.options[name])
                            ^
  SyntaxError: invalid syntax (tracjsgantt.py, line 133)

I'm using the current SVN code:

  $ svn info tracjsgantt/tracjsgantt.py | grep -i rev
  Revision: 9844
  Last Changed Rev: 9782

Attachments (2)

Change History (12)

comment:1 Changed 6 years ago by dannysauer

- Cc dannysauer added; danny@… removed

comment:2 Changed 6 years ago by ChrisNelson

- Resolution set to wontfix
- Status changed from new to closed

comment:3 follow-up: ↓ 4 Changed 6 years ago by osimons

I just spotted this by accident, but can't help but comment... Why don't you just use the common Python idiom of:

  return options.get(name) or self.options[name]

This is the same case that presumes that options.get(name) is defined AND that it evaluates to True by having non-empty content. If you want it to return possibly empty values, the correct way would be to always return it when defined and only resort to some other default value if non-existing:

  return options.get(name, self.options[name])
  # alternatively
  return name in options and options[name] or self.options[name]

Seeing Trac 0.11 even supports Python 2.3, using 2.5 syntax is not recommended.
The patterns above are something you'll find in all parts of Trac when needed. The pattern works by the fact that Python will always return the value of the last and, so that if several facts evaluate to True the last one is output for further use - and if any evaluates to False the remaining checks are skipped with no danger of things like lookup errors, instead moving on to evaluate the or argument:

  >>> a = 1
  >>> a and 'yes' or 'no'
  'yes'
  >>> a = 0
  >>> a and 'yes' or 'no'
  'no'

comment:4 in reply to: ↑ 3 Changed 6 years ago by ChrisNelson

comment:5 Changed 6 years ago by anonymous

I was going to make the same "and-or" suggestion, but since it's already been made... I would be happy to test any modified code if you don't have an available test environment.

Changed 6 years ago by dannysauer

Udiff patch for tracjsgantt.py against SVN version 9849

comment:6 Changed 6 years ago by dannysauer

Ok, that patch didn't do it. I'm still working. :)

Changed 6 years ago by dannysauer

Patch to make plugin work with Python 2.4

comment:7 Changed 6 years ago by dannysauer

- Resolution wontfix deleted
- Status changed from closed to reopened

Ok, I created a working patch: attachment:oldpython.patch. I incorporated the earlier partial patch, added some code to handle date time parsing in a way that works with 2.4, and had to modify the _work_time logic to skip milestones (I was getting an invalid key error since milestones don't have estimated hours). I also tweaked it a little to allow tickets with just predecessors, not successors (in the event someone sets up custom fields instead of using the module which keeps the paired fields in sync), and replaced three tabs with whitespace for indentation consistency. ;) I'm using this code now, and it works well. I haven't tested start and end dates with it yet, though, and I don't use parent tickets. Hopefully that stuff will just work.
/crosses fingers

comment:8 Changed 6 years ago by ChrisNelson

comment:9 Changed 6 years ago by ChrisNelson

comment:10 Changed 6 years ago by ChrisNelson

- Resolution set to fixed
- Status changed from reopened to closed

I'm sorry to say I'm running Trac 0.11 and Python 2.5.2 and have no good way to try to reproduce your problem. While I have done some work since the version you reference, the line: is still there and works fine for me. notes that the ternary if was added to Python in 2.5. I believe you need a newer Python to use this plugin. You could try replacing with (Untested.) but I believe I use the ternary if elsewhere, too.
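To make the difference between the idioms discussed in this thread concrete, here is a small self-contained sketch; the options/defaults dicts and the function names are hypothetical stand-ins for the plugin's configuration lookup:

```python
# Three ways to fall back to a default, as debated in the ticket.
options = {'fmt': '', 'color': 'red'}          # '' simulates an empty setting
defaults = {'fmt': '%Y-%m-%d', 'color': 'blue'}

def get_ternary(name):
    # The Python 2.5+ ternary that broke the plugin on Python 2.4.
    return options[name] if options.get(name) else defaults[name]

def get_and_or(name):
    # 2.3/2.4-compatible "or" idiom; same truthiness caveat as the ternary.
    return options.get(name) or defaults[name]

def get_safe(name):
    # dict.get with a default: keeps falsy values when the key exists.
    return options.get(name, defaults[name])

# Both the ternary and the or-idiom treat the empty string as "missing"
# and fall back to the default, while dict.get with a default does not:
# get_and_or('fmt')  -> '%Y-%m-%d'
# get_safe('fmt')    -> ''
# get_safe('color')  -> 'red'
```

This is why comment:3 distinguishes the simple `or` rewrite from `options.get(name, self.options[name])`: they only behave identically when empty values should count as absent.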
https://trac-hacks.org/ticket/8484
Using LESS with Django

Lately, I've been working on creating a simplified workflow for my front-end work here at Caktus. There are all sorts of new and helpful tools to optimize the creative process, allowing for faster iterations and greater overall enjoyment. As with any new tool, there are a couple of options to choose from: LESS and SASS. Having read lots of reviews and worked through the documentation, I've decided LESS is more for me. LESS provides lots of useful tidbits that you've always wanted in your stylesheets but never imagined possible. For example, variables immediately make your stylesheets less painful to muddle through:

@darkred: #CD0000;

h1 {
  color: @darkred;
}

I love that you can use plain text to map to commonly reused values, making it much easier to remember which is which! This is just the tip of the iceberg when it comes to what you can do with LESS. Since this post is about workflow, I'll leave it to the authors to explain how to utilize it over here:

One thing that we struggled with is what our workflow would look like when compiling LESS. Being primarily a Django shop, we decided to use django-compressor to create a seamless way to create, edit, version and deploy our LESS files.
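As one more taste of LESS beyond variables, mixins let you parameterize and reuse whole rule sets. A small sketch (the class names here are made up for illustration, not from our actual stylesheets):

```less
// A mixin bundles reusable declarations; parameters can have defaults.
.rounded(@radius: 5px) {
  -webkit-border-radius: @radius;
  -moz-border-radius: @radius;
  border-radius: @radius;
}

.callout {
  color: @darkred;   // the variable defined above
  .rounded(10px);    // expands into the three border-radius declarations
}
```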
Here is an example of how we use it in our templates:

{% load compress %}
{% if debug %}
  {# This is the client-side way to compile LESS and an OK choice for local dev #}
  <link rel="stylesheet/less" type="text/css" media="all" href="{{ STATIC_URL }}less/style.less" />
  <script src="{{ STATIC_URL }}js/less-1.3.0.min.js"></script>
{% else %}
  {% compress css %}
    {# This is the nifty django-compressor way to compile your LESS files into CSS #}
    <link rel="stylesheet" type="text/less" media="all" href="{{ STATIC_URL }}less/style.less" />
  {% endcompress %}
{% endif %}

Because the {% if %} statement is only true if DEBUG = True in your settings.py file, which should be your local development default anyway, django-compressor will only get to work once your site is deployed to a live or production environment. You can easily keep on saving and editing your LESS files without having to think about it. This set-up also helps you get started on your project without having to delve into installing django-compressor or npm locally, which is the next step!

The LESS documentation recommends you use node.js in order to get everything running. You'll need to install npm and then use it to install LESS. Then, install django-compressor and place the following in your settings.py file:

COMPRESS_PRECOMPILERS = (
    ('text/less', 'lessc {infile} {outfile}'),
)

INTERNAL_IPS = ('127.0.0.1',)

And there you have it! Your LESS files are all ready to compile automatically on your server. Read more about django-compressor
http://www.caktusgroup.com/blog/2012/03/05/using-less-django/
RSS Wins, Signals Atom's Death Toll? 249 S. Housley writes " RSS appears to have conquered the last hurdle. " Atom's Death Toll (Score:5, Funny) Now if only RSS could sound Atom's death knell... (In case the editors have seen fit to correct it, the original title was "Developers: RSS' Win, Signals Atom's Death Toll".) Re:Atom's Death Toll (Score:4, Funny) "RSS' Win, Signals Atom's Death Toll" could really be an article about Orcs on the rampage after receiving the fiery signal of RSS' victory on the glorious battlefield. Re:Atom's Death Toll (Score:5, Informative) Re:Atom's Death Toll (Score:3) Re:Atom's Death Toll (Score:4, Funny) "no fewer than" Greenrd's Law (Score:3, Interesting) "Evey post disparaging someone else's spelling or grammar, or lauding one's own spelling or grammar, will inevitably contain a spelling or grammatical error." Re:Atom's Death Toll (Score:3, Interesting) It's not so bad that this story was approved as an ad, but rather it's so poorly written and poorly understood by the author. After announcing support for RSS, MS's Longhorn team bent over backwards [msdn.com] to explain that they were supporting Atom too. The res Re:Atom's Death Toll (Score:2) Re:Atom's Death Toll (Score:3, Funny) Re:Atom's Death Toll (Score:2) It's almost as bad as people finding their "nitch" (AHHH!! It's "niche"!) Re:Atom's Death Toll (Score:5, Informative) We're not talking about individual words here, for one, we're talking about phrases. "Death toll" is the total number of people who die as a result of a disaster or other adverse event. "Death knell" is a bell rung to announce death, or an omen of death or destruction. So to say "death toll" in this context is completely and utterly wrong, and the fact that "toll", on its own, also can mean to ring a bell is actually completely unrelated and incidental. 
But even if we do, for a moment, accept your assertion that "death toll" is an acceptable use here, the use of "signals" in conjunction with it is also meaningless. Let's face it: the author meant to say "sounds the death knell" or "rings the death knell" or something to that effect, and just got it horribly, horribly wrong in his mind, likely using the same logic you did ("Hmm, I've heard about a bell tolling before, so "death toll" must be what I'm looking for."). Re:Atom's Death Toll (Score:2) In fact, both "toll" and "knell" can be defined as something like, "the ringing of bells, especially when marking the time of someone's death." They're pretty synonymous. It's a small mistake, in that they said "death toll" instead of "funeral toll". (At least, I Re:Atom's Death Toll (Score:3, Insightful) For what it's worth (ie nothing), I've never heard the phrase "funeral toll" Re:Atom's Death Toll (Score:2) Re:Atom's Death Toll (Score:5, Insightful)

When a bell tolls a death knell
Each knell's for one body
The death toll is the sum of knells
But only one's for thee.

Re:Atom's Death Toll (Score:2) Of course if this goes on we're going to be rephrased as simply trolling. Article from a biased company (Score:5, Insightful) "Google's recent new service that allows web surfers to monitor Google News using either RSS or Atom feeds, appears to be an acknowledgment that perhaps in purchasing Blogger, they chose the wrong specification."
It doesn't seem like a big win to me either, but neither does becoming an IETF standard seem lik Re:Article from a biased company (Score:4, Funny) Re:Article from a biased company (Score:3, Insightful) Re:Article from a biased company (Score:3, Informative) From Wikipedia: [wikipedia.org] Re:Article from a biased company (Score:2, Informative) Re:Article from a biased company (Score:4, Insightful) No kidding, given the rest of the facts: Microsoft already stated that they would be using xml namespaces to add to RSS. Which is exactly what Dave Winer who published RSS 2.0 [wikipedia.org] intended. Microsoft actually consulted Dave before getting very far too. Quote: [reallysimp...cation.com] "Anyway, there's a lot more to what they're doing, but I wanted to say in advance that I think what they're doing is cool. " Additionally, Microsoft has stated support for Atom as well. [msdn.com] Heh. Re:Article from a biased company (Score:2, Informative) Re:Article from a biased company (Score:2) Re:Article from a biased company (Score:2) Yes, and he obviously spoke truthfully, just as truthfully as GWB when he explained that Iraq had WMDs and that the war there would be done within 6 month... Re:Article from a biased company (Score:2) This shouldn't really be too surprising, however, since Atom came from some one who knows a lot about markup, and RSS came from a group of people who hadn't a clue. MSRSS (Score:5, Interesting) RSS extensions (Score:2) Don't you mean embraced&extended RSS (Score:5, Informative) Re:Don't you mean embraced&extended RSS (Score:2, Funny) Why, but thats impossible, that has never happened before and could never happen ! Re:Don't you mean embraced&extended RSS (Score:3, Interesting) Err... This just seem to be a rebranding like Firefox and "Live Bookmarks". 
Numerous hints at it in the article too: Because of this, its renaming of RSS is not a sign the company is trying to remake the technology for its own purposes but rather a way to make a distinction between RSS and a feature of IE. Microsoft is adding RSS functionality to the next version of Windows, Wind Re:Don't you mean embraced&extended RSS (Score:4, Interesting) How To Publish a Podcast on the iTunes Music Store [apple.com] Re:Don't you mean embraced&extended RSS (Score:3, Informative) Conventional RSS tags in Podcasts without these namespace tags work fine, just don't give the extra useful information. The namespace allows delineation of info voluntarily added for the user's benefit. It hasn't altered the RSS 2.0 spec at all. Does netcraft confirm it? (Score:5, Funny) Re:Does netcraft confirm it? (Score:2) Which RSS did Microsoft embrace? (Score:4, Insightful) Re:Which RSS did Microsoft embrace? (Score:2, Informative) FUD, FUD, and more FUD (Score:5, Interesting) About the Author: Sharon Housley manages marketing for FeedForAll [feedforall.com] software for creating, editing, publishing RSS feeds and podcasts. Wow. It's a marketing plant trumpeting that RSS is now the standard, made by a company that specialises in RSS feeds. Re:FUD, FUD, and more FUD (Score:3, Insightful) This was pure spam, published to sway public opinion in the Atom vs RSS debate, and despite the fact that they've been called out in the comments, their plan is going to work unless slashdot removes the story or substantially edits it to point out the fraud. It will appear in countless syndicated news feeds (in RSS or Atom, ha), in blogs referencing the post (by people who didn't read the comments and were therefore fooled). Google searches about At Re:FUD, FUD, and more FUD (Score:2) Re:FUD, FUD, and more FUD (Score:2) Wow, slashdot sucks. Good for the PR firm that got this posted - it should improve their site ranking. 
microsoft is going to support ATOM too (Score:5, Informative) [msdn.com] " Beta 1 of Windows Vista and IE 7 for XP currently supports the web feed formats RSS " What's with the bias? (Score:5, Funny) Re:What's with the bias? (Score:2, Interesting) Regards, Steve Re:What's with the bias? (Score:2) WTF???? Crack monkey (Score:5, Insightful) IMO, atom is a far better protocol. The creators obviously tried to integrate the protocol with existing XML standards, v. RSS which basically gets as far as tag>. Its far more clear about its payload and is way better suited towards XML delivery. But, decide for yourself [tbray.org]. I see no problem with the current duality. I do wish Atom were available more places, but I can still live with RSS where I need to. Myren Much ado about nothing (Score:3, Insightful) Re:Much ado about nothing (Score:2) Re:Crack monkey (Score:2) RSS. Just say it to yourself over and over again. It rolls off the tongue. Next to a well designed acronym such as that, Atom just seems really simple. Irony... (Score:2) I also like the GP's suggestion that an acronym like RSS sounds complex: really simple syndication. Complex and simple wouldn't happen to be antonyms, would they? And I love the way the GGP called the submitter a "crack rock smoking monkey" and got modded +4: insightful. Not that it was a bad post. It seems the nazi mods missed it, is all. Re:Crack monkey (Score:2) I'd say that's also true with RSS 1.0 with its RDF base. Formats don't die (Score:5, Insightful) Saying one format or another has won is always premature. The only time it's safe to say that a format is dead is when they have to build new equipment to read it because the hardware is missing. And even then you never know. This article is obviously biased. It's like when Netscape said "the desktop is dead" when the Java plugin was first released. Formats don't die ... they get upgraded (Score:2) Is that so? (Score:5, Informative) Re:Is that so? 
(Score:2) AFAIK RSS/ATOM is already a religious war being fought right now. Articles, with extra, commas (Score:5, Funny) Re:Articles, with extra, commas (Score:4, Funny) RSS vs. ATOM (Score:4, Interesting) I've implemented RSS before, never bothered with ATOM, since RSS seems to be better supported client side. What are the advantages/disadvantages of each standard? Re:RSS vs. ATOM (Score:5, Informative). Atom is more complicated than RSS 1.0, which is more complicated than RSS 2.0. Re:RSS vs. ATOM (Score:2) People include HTML in RSS 2.0 feeds all the time. Escaped markup may be gross, but people use it. Likewise, most RSS 2.0 extensions seem to use namespaces. Re:RSS vs. ATOM (Score:2) The backbone on RSS1.0 extensibility [resource.org] is namespaces _and_ RDF, in that it can be merged with any other RDF vocabularies. RSS2.0 is extensible via namespaces [harvard.edu]. For example, Microsoft's Simple List Extension [microsoft.com] to RSS 2.0. I don't know how you've come to the conclusion that Atom is more complicated Re:RSS vs. ATOM (Score:2) Really, it doesn't. It's like picking what color wire you want. That said: ATOM specifies a bunch of stuff about how to publish entries and stuff. It's working it's way through the IETF, if I understand right. Basically, serious net work is going into Atom. I strongly suspect I'll be using it in the near future. But again, it hardly matters at all. There are tons of tools that accept and publish everything. Re:RSS vs. ATOM (Score:2) The Atom Syndication Format - the feed format - got signed off as a Proposed Standard [eweek.com] last week. That means the RFC number is on the way. Its here. The publishing protocol still has some way to go yet. Re:RSS vs. ATOM (Score:2) well... (Score:2, Funny) Who Cares? (Score:5, Informative) To be honest, the RSS vs. Atom thing is a lot like DVD+R and DVD-R - at this point they might as well be interchangeable. Just about every feed parser handles both Atom and RSS feeds. 
Using a tool like Magpie RSS [sourceforge.net] (PHP) or the Universal Feed Parser [feedparser.org] (Python) the format of any given feed is entirely transparent to application developers. RSS 1.0? RSS 2.0? Atom 0.3? It all gets processed by the parser in a nearly identical way. Already tools like Movable Type/Typepad [sixapart.com] or WordPress [wordpress.org] generate both RSS and Atom feeds by default. The vast majority of users don't know and don't care which feed format they're reading so long as it works. Both the toolkits and the applications use both formats and there's really little reason why they can't continue to support both. There doesn't have to be a single "winner" in the syndication feed wars. Atom and RSS can exist together for some time, and arguing that this is a zero-sum game in which one and only one feed format can exist is ridiculous. As long as the difference is transparent to end users, and relatively transparent to developers, neither format will totally conquer the other. Re:Who Cares? (Score:4, Insightful) Re:Who Cares? (Score:2) Re:Who Cares? (Score:2) Explain. Re:Who Cares? (Score:2) I sure am glad you told us or I might never have noticed it! BFD (Score:3, Interesting) Or does Atom have something to do with the way the data is stored internally? And I think Google did pretty well with Blogger-- it's like saying, "Google chose wrong when they bought Blogger, because Blogger used a different stylesheet on their home page than Google does." Isn't this cute ... but it's wrong!!! (Score:5, Informative) However, it is just wrong to say that the format war is over and RSS has won. Atom is a coherent standard now being finished under the umbrella of the IETF [ietf.org], and it is just now just starting to catch. And it will, because many of us have had enough RSS bullshit. 
We already had a discussion [slashdot.org] with the guy behind RSS 3.0 which convinced me that with guys like him writing the RSS specs (just for the love of writing), RSS is REALLY DOOMED. Information on the Author/Submitter. (Score:5, Informative) About the Author: Sharon Housley manages marketing for FeedForAll [feedforall.com] software for creating, editing, publishing RSS feeds and podcasts. In addition Sharon manages marketing for FeedForDev [feedfordev.com] an RSS component for developers. In addition Sharon manages marketing for NotePage [notepage.net] a wireless text messaging software company. Needless to say, submitting your own obviously biased, commercially inspired, and untrue article is a tad transparent, but what do I know? Re:Information on the Author/Submitter. (Score:2, Insightful) smartass (Score:2) Yup, I have nothing more to add besides: smartass. Ok, just one more thing: thanks to such smartasses, MS managed to get where it is by acting as it has for the last two decades. Like "MS does it so it is the good thing, everything else sucks". Zealotry school. RSS man (Score:5, Funny) RSS man hates Atom man, They have a fight, RSS wins. RSS man. Captain Obvious (Score:3, Insightful) Oh, and by the way, we happen to produce software to manage your RSS needs! "Now that Atom's attempt at replacing RSS has fallen flat, the syndication arena will likely see significant innovation and progress." Yes, that's what competition does, it stifles innovation. Seriously, though, uniform standards can be great, saving dev time for loads of people and companies. But I'd say that, at the very least, this promotional material (that's what it is) is putting the cart before the horse, and is also poorly written. I'd like to read a detailed analysis by an industry expert (not a marketing department), who is qualified to project market share for the standards.
Also: Google's recent new service that allows web surfers to monitor Google News using either RSS or Atom feeds, appears to be an acknowledgment that perhaps in purchasing Blogger, they chose the wrong specification. Actually, this appears to be an acknowledgement that (1) Google would like as many consumers as possible to use Google News and (2) Google is choosing not to use their market share to lock out competitors in related products. Tim Bray: RSS 2.0 and Atom 1.0 Compared (Score:5, Informative) Thus, I wouldn't be so quick to claim RSS' victory. Tim Bray is a big supporter of Atom, and here is recent report titled RSS 2.0 and Atom 1.0 Compared [tbray.org]. Over at Simpy [simpy.com] (feel free to use demo/demo [simpy.com] account if you don't have an account yet), I am happily supporting RSS and Atom (as well as RDF). I believe Atom also has the "push" component, and not just "pull" that RSS has. That is, I believe Atom spec contains specification of Atom as a way for making requests to web services, while RSS, I think, only lets you publish the data passively, and have clients actively pull it. I can't find good references to this now, but maybe somebody else can find them and reply to this thread. Big win for RSS (Score:3, Interesting) Microsoft view of "innovation"? (Score:3, Informative) I suppose that's the usual Microsoft view, which means that we can only have innovation once Microsoft has moved and picked a standard that's substantially inferior to the state of the art. I mean, the differences between RSS and Atom aren't that big (they are both XML), but within those constraints, RSS still manages to get a bunch of things wrong relative to Atom (see here [tbray.org] for a discussion). Here's why RSS won (Score:4, Informative) Actually, everything I said there is basically common sense, but said in a particularly fancy way. 
RSS wins because it was the first to become widely used, and for the huge majority of uses (millions of random users with their feed-readers), switching to Atom would just break compatibility and offer no technical merits. Why is it any wonder that RSS won? And by technical merits, I mean those observable to normal users. If J. Random Blogger can't see how switching to Atom makes things better, then why would he do it? Maybe the underlying architecture of Atom is much better. (I don't know; I haven't actually read an explanation of its improvements, aside from being standardized.) But if the RSS feeds of the present work just fine, which they do, then nobody's going to switch. I mean, if the Internet community made their protocol/format choices solely on technical merit, then not only would JSON-RPC [json-rpc.org] have superseded XML-RPC, but I should also think that we'd be using a variant of Aaron Swartz's RSS 3.0 [aaronsw.com] instead of the XML-based formats by now. It would save bandwidth, make it easier for humans to read and write feeds, and make it easier to parse and generate. (Yes, to parse it you'll have to write a few custom regexes or something, but you won't need to include a 3MB XML-parsing library.) And we wouldn't need to worry about internationalisation issues like encoding, because RSS 3.0 feeds are UTF-8 by definition. Unfortunately, this is not about technical merits, just like capitalistic competition is never entirely about offering higher-quality goods or services. It's all about marketing, really -- marketing just enough for your product to get a foothold. Google didn't choose the "wrong" specification. They chose a doomed one, maybe, but that doesn't make it bad. Re:Here's why RSS won (Score:3, Insightful) Let's assume RSS "won" something. (Which in itself is baloney - Atom is still very much around and well-supported.) Which RSS "won"? RSS 2.0? RSS 1.0? RSS 0.91? Any of the 9 different incompatible versions of RSS?
There's a reason why non-XML formats like JSON-RPC and RSS3.0 never caught on - it's because they're not based on XML. XML, for all its shortcomings, is supported by damn near everything under the sun. You can query it with XPath, transform it back into XHTML with XSLT, slice it, dice it, and tu Re:Here's why RSS won (Score:2) Which you usually don't do (unless you're writing a web-based feedreader), but the ability to create both the web page (XHTML) and feed (RSS/Atom) from the XML templating simply by applying a different XSL is, on the other hand, very pleasing. Ask Slashdot: Easy RSS? (Score:3, Interesting) I know RSS has forked, and I don't use it much myself but I know others have asked for an RSS feed...is there a simple guide to outputting my content in an RSS kind of way? Also, if I wanted to mirror my content on an LJ, would it be easier to automate the LJ postings and get an RSS feed off of that, or vice versa, or are they completely independent tasks? Bias (Score:5, Insightful) Yeah, because there's absolutely no possibility that someone will write a program for Longhorn(Vista) that will support Atom. Longhorn's coming release appears to be the final nail in the coffin of the Atom specification I guess because Microsoft declares something, that's it. Everyone else should just pack up and go home. (Someone should be sure to tell those Firefox people that Firefox isn't going to be on the Vista install CD!) I don't have a dog in this fight, but this story seems to have a bias. Re:Bias (Score:2) (Picking up where the first set of sarcasm left off...) None whatsoever. Not even Microsoft will touch it. Oh, wait! [msdn.com] For those who'd rather not read the article, it's from the Longhorn RSS team blog, and it's titled "Longhorn (hearts) Atom, too." Move along... nothing to see here. (Score:2) What's this RSS' thing? (Score:2) (Actually, this is clearly the regular possessive of RSS, which is, I suppose, plural) Poor dying Google...
(Score:5, Funny) ...and we all know that Google's poor, beleaguered programmers will be incapable of altering the source of the application they own to transmit two. different. formats! of syndication data. That'd be like expecting them to support multiple locales or offer some kind of an aggregated news service. Why, oh why, must we constantly demand the impossible of our heroes? Or they could just let an intern hack something up one weekend. Either way. Atom is more than a feed format (Score:5, Insightful) What a troll (Score:3, Insightful) Web Feeds vs. RSS (Score:2) So I guess this means... (Score:2) Advertisements to the right, please (Score:2, Informative) Not only is this article factually incorrect, but it smacks of paid placement. If the Slashdot folks didn't get paid for this post, perhaps they should evaluate why they just gave away a bit of their brand value to pump one side of a religious war. Implications of Vista Integration (Score:2) Does that mean that the final nail in the coffin of Python is Vista's support for AtomAPI (Score:4, Insightful) Re:Who cares, they both suck. (Score:4, Insightful) You've never heard of ISO 8601? XML is UTF-8 by default unless another encoding is explicitly given in the first line. HTTP is compressed by default. XML, being so redundant with all of its angle brackets, quotes, and equals signs, compresses very well indeed.
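The "Who Cares?" comment above argues that parsers such as the Universal Feed Parser make the feed format entirely transparent to applications. That transparency is easy to see even without a dedicated library; a minimal sketch using only the Python standard library (xml.etree, not the Universal Feed Parser's actual API) normalizes both formats to the same records:

```python
# Normalize RSS 2.0 and Atom 1.0 feeds to the same (title, link) tuples,
# illustrating why application code rarely cares which format it received.
import xml.etree.ElementTree as ET

ATOM_NS = '{http://www.w3.org/2005/Atom}'

def entries(feed_xml):
    root = ET.fromstring(feed_xml)
    if root.tag == 'rss':                        # RSS 2.0: <channel><item>...
        for item in root.iter('item'):
            yield item.findtext('title'), item.findtext('link')
    elif root.tag == ATOM_NS + 'feed':           # Atom 1.0: <feed><entry>...
        for entry in root.iter(ATOM_NS + 'entry'):
            link = entry.find(ATOM_NS + 'link')
            yield (entry.findtext(ATOM_NS + 'title'),
                   link.get('href') if link is not None else None)

rss = """<rss version="2.0"><channel>
  <item><title>Hello</title><link>http://example.com/1</link></item>
</channel></rss>"""

atom = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Hello</title><link href="http://example.com/1"/></entry>
</feed>"""

# The same records come out regardless of the input format.
assert list(entries(rss)) == list(entries(atom))
```

A real aggregator adds dates, authors, and the many legacy RSS variants, which is exactly the dirty work libraries like Magpie RSS and the Universal Feed Parser exist to hide.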
https://developers.slashdot.org/story/05/08/22/1416230/rss-wins-signals-atoms-death-toll
We can use a HashMap to store a list of messages to pass on, allowing more than one message to be sent to any one user. The HashMap will be indexed by nickname, and each entry will point to an ArrayList that contains messages for that user. Create a file called TellBot.java:

import org.jibble.pircbot.*;
import java.util.*;

public class TellBot extends PircBot {

    // A map of String (nickname) to ArrayList (message Strings).
    private HashMap messages = new HashMap();

    public TellBot(String name) {
        setName(name);
    }

    public void onMessage(String channel, String sender, String login,
            String hostname, String message) {
        String[] tokens = message.split("\\s+");
        // Check for the "tell" command.
        if (tokens.length > 2 && tokens[0].equalsIgnoreCase("tell")) {
            String nick = tokens[1];
            message = message.substring(message.indexOf(nick) + nick.length() + 1);
            // Convert the nickname to lowercase for use in the HashMap.
            String key = nick.toLowerCase();
            ArrayList list = (ArrayList) messages.get(key);
            if (list == null) {
                // Create a new ArrayList if the HashMap entry is empty.
                list = new ArrayList();
                messages.put(key, list);
            }
            // Add the message to the list for the target nickname.
            list.add(sender + " asked me to tell you " + message);
            sendMessage(channel, "Okay, " + sender);
        }
    }

    public void onJoin(String channel, String sender, String login,
            String hostname) {
        // Convert the nickname to lowercase to get the HashMap key.
        String key = sender.toLowerCase();
        ArrayList list = (ArrayList) messages.get(key);
        if (list != null) {
            // Send all messages to the user.
            for (int i = 0; i < list.size(); i++) {
                String message = (String) list.get(i);
                sendMessage(channel, sender + ", " + message);
            }
            // Now erase all messages for this user.
            messages.put(key, null);
        }
    }
}

Notice that the HashMap keys must be converted to lowercase. This effectively makes the nicknames case insensitive, so a message that is left for "Paul" can also be received by "paul."
The onMessage method is invoked whenever someone sends a message to the channel. This method checks to see if a user has entered the "tell" command—if so, it adds the message to the HashMap. When a user joins the channel, the onJoin method is invoked. If there are any messages for this user, they are sent to the channel and then removed from the HashMap.

To instantiate the bot, you will need a main method. Create this in TellBotMain.java:

public class TellBotMain {
    public static void main(String[] args) throws Exception {
        TellBot bot = new TellBot("TellBot");
        bot.setVerbose(true);
        bot.connect("irc.freenode.net");
        bot.joinChannel("#irchacks");
    }
}

You can also tell the bot to join more than one channel—simply modify the joinChannel method call so it contains a comma-separated list, for example:

bot.joinChannel("#irchacks,#jibble,#pircbot");

Messages will be accepted from all channels and delivered to the first one the recipient joins.

© 2007 O'Reilly Media, Inc.
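The tutorial's code predates Java generics, hence the ArrayList casts. On Java 5 and later the same message-store logic can be written with typed collections and remove() instead of putting null. A small standalone sketch (the MessageStore class name is ours, not part of PircBot):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A typed version of TellBot's message map. Nicknames are lowercased
// so lookups are case-insensitive, exactly as in the tutorial code.
public class MessageStore {
    private final Map<String, List<String>> messages =
            new HashMap<String, List<String>>();

    public void add(String nick, String message) {
        String key = nick.toLowerCase();
        List<String> list = messages.get(key);
        if (list == null) {
            list = new ArrayList<String>();
            messages.put(key, list);
        }
        list.add(message);
    }

    // Returns all pending messages for a nickname and clears them,
    // mirroring what onJoin does in one step.
    public List<String> drain(String nick) {
        List<String> list = messages.remove(nick.toLowerCase());
        return list == null ? new ArrayList<String>() : list;
    }
}
```

A bot's onMessage handler would call add(), and onJoin would loop over drain(); no casts are needed, and the map never accumulates null entries.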
http://oreilly.com/pub/h/1982#code
Can someone show me the best way to pass variables from my "alpha" model to my "exec" model? I was taking the approach of declaring the variable with a default value in the main class's __init__:

self.spread = None

then, in the Update function of my alpha model, which has a reference to the calling file as "algorithm", setting this variable as such:

def Update(self, algorithm, data):
    for security in algorithm.Securities:
        if self.DataEventOccured(data, security.Key):
            insights = []
            return insights
    algorithm.spread = self.pair[1].Price - self.pair[0].Price

When I use the debugger to step through the program, it recognizes "algorithm.spread" as the correct value of the difference of these two pairs, but when the alpha model has completed and I try to reference the variable using "self.spread", I get back the initially assigned default value of None. What should I be doing differently? Thanks
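The pattern being attempted reduces to plain Python attribute semantics, which a minimal sketch can show (the class names below are hypothetical stand-ins, not the actual QuantConnect QCAlgorithm/AlphaModel classes): writing algorithm.spread stores the value on the algorithm instance, so it must later be read from that same instance; self.spread evaluated on the alpha model itself looks at a different object and will still show whatever default that object holds.

```python
# Minimal stand-ins showing where the attribute actually lives when an
# alpha model writes through the `algorithm` reference it is handed.

class Algorithm:                  # plays the role of the main algorithm class
    def __init__(self):
        self.spread = None        # default declared in the algorithm

class AlphaModel:                 # plays the role of the framework alpha model
    def Update(self, algorithm, data):
        algorithm.spread = data["b"] - data["a"]   # stored on `algorithm`
        return []                 # no insights in this sketch

algo = Algorithm()
alpha = AlphaModel()
alpha.Update(algo, {"a": 100.0, "b": 103.5})

print(algo.spread)                 # 3.5 -- readable as self.spread *inside* Algorithm
print(hasattr(alpha, "spread"))    # False -- the model object has no such attribute
```

So the value survives the Update call; the key is that self.spread only refers to it inside methods of the algorithm class, not inside the alpha model.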
https://www.quantconnect.com/forum/discussion/7325/passing-variables-through-algorithm-framework/
Given below is the sample code:

1  public class A {
2    static void test() throws Error {
3      if (true) throw new AssertionError();
4      System.out.print("test ");
5    }
6    public static void main(String[] args) {
7      try { test(); }
8      catch (Exception ex) { System.out.print("exception "); }
9      System.out.print("end ");
11 }}

How can we correct the above code?

1. No need of correction
2. By changing "Exception" to "Error" at line 8 in the catch clause
3. By changing "throws Error" to "throws Exception" at line 2
4. By throwing an exception in place of an error at line 2

Answer: (2). As written, the code terminates with an uncaught AssertionError. AssertionError is a subclass of Error, not of Exception, so the catch (Exception ex) clause at line 8 never matches it; it should be handled by a catch clause for Error (or Throwable).
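To see why option 2 is the right fix, here is a runnable variant (the CatchDemo class name is ours) that records which handler runs instead of printing. Because AssertionError extends Error, only a catch of Error (or Throwable) matches it:

```java
public class CatchDemo {
    static void test() throws Error {
        if (true) throw new AssertionError();
    }

    // Returns which handlers ran, so the behavior is easy to check.
    static String run() {
        StringBuilder out = new StringBuilder();
        try {
            test();
        } catch (Exception ex) {
            out.append("exception ");   // never reached: AssertionError is not an Exception
        } catch (Error e) {
            out.append("error ");       // this is the handler option 2 adds
        }
        out.append("end ");
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(run());        // prints "error end "
    }
}
```

With only the catch (Exception ex) clause, the AssertionError propagates out of main and the JVM terminates before "end " is printed, which is exactly the failure the quiz describes.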
http://www.roseindia.net/tutorial/java/scjp/part6/question17.html
On Friday 06 April 2012, Chris Metcalf wrote:
> This change adds support for the tilegx network driver based on the
> GXIO IORPC support in the tilegx software stack, using the on-chip
> mPIPE packet processing engine.
>
> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
> ---
>  drivers/net/ethernet/tile/Kconfig  |    1 +
>  drivers/net/ethernet/tile/Makefile |    4 +-
>  drivers/net/ethernet/tile/tilegx.c | 2045 ++++++++++++++++++++++++++++++++++++
>  3 files changed, 2048 insertions(+), 2 deletions(-)
>  create mode 100644 drivers/net/ethernet/tile/tilegx.c

I think the directory name should be the company, not the architecture here, so make it drivers/net/ethernet/tilera/tilegx.c instead.

> +
> +MODULE_AUTHOR("Tilera");
> +MODULE_LICENSE("GPL");
> +

MODULE_AUTHOR is normally a real person with an email address.

> +/* Statistics counters for a specific cpu and device. */
> +struct tile_net_stats_t {
> +	u32 rx_packets;
> +	u32 rx_bytes;
> +	u32 tx_packets;
> +	u32 tx_bytes;
> +};

I think you need to drop the _t postfix here, which presumably comes from converting it from a typedef.

> +
> +/* The actual devices. */
> +static struct net_device *tile_net_devs[TILE_NET_DEVS];
> +
> +/* The device for a given channel. HACK: We use "32", not
> + * TILE_NET_CHANNELS, because it is fairly subtle that the 5 bit
> + * "idesc.channel" field never exceeds TILE_NET_CHANNELS.
> + */
> +static struct net_device *tile_net_devs_for_channel[32];

When you need to keep a list or array of device structures in a driver, you're usually doing something very wrong. The convention is to just pass the pointer around to where you need it.

> +
> +/* Convert a "buffer ptr" into a "buffer cpa". */
> +static inline void *buf_to_cpa(void *buf)
> +{
> +	return (void *)__pa(buf);
> +}
> +
> +
> +/* Convert a "buffer cpa" into a "buffer ptr". */
> +static inline void *cpa_to_buf(void *cpa)
> +{
> +	return (void *)__va(cpa);
> +}

This is almost certainly wrong: the type returned by __pa is a phys_addr_t, which cannot be dereferenced like a pointer.
On normal drivers, you would
use dma_map_single()/dma_unmap_single() to get a token that can get
passed into a dma engine. From what I can tell, this device is directly mapped,
while your PCI uses an IOMMU, so that would require two different
implementations of dma mapping operations.

> +/* Allocate and push a buffer. */
> +static bool tile_net_provide_buffer(bool small)
> +{
> +	int stack = small ? small_buffer_stack : large_buffer_stack;
> +
> +	/* Buffers must be aligned. */
> +	const unsigned long align = 128;
> +
> +	/* Note that "dev_alloc_skb()" adds NET_SKB_PAD more bytes,
> +	 * and also "reserves" that many bytes.
> +	 */
> +	int len = sizeof(struct sk_buff **) + align + (small ? 128 : 1664);
> +
> +	/* Allocate (or fail). */
> +	struct sk_buff *skb = dev_alloc_skb(len);
> +	if (skb == NULL)
> +		return false;
> +
> +	/* Make room for a back-pointer to 'skb'. */
> +	skb_reserve(skb, sizeof(struct sk_buff **));
> +
> +	/* Make sure we are aligned. */
> +	skb_reserve(skb, -(long)skb->data & (align - 1));
> +
> +	/* Save a back-pointer to 'skb'. */
> +	*(struct sk_buff **)(skb->data - sizeof(struct sk_buff **)) = skb;

This looks very wrong: why would you put the pointer to the skb into the
skb itself?

> +	/* Make sure "skb" and the back-pointer have been flushed. */
> +	__insn_mf();

Try to use architecture independent names for flush operations like this
to make it more readable. I assume this should be smp_wmb()?

> +
> +	/* Compute the "ip checksum". */
> +	jsum = isum_hack + htons(s_len - eh_len) + htons(id);
> +	jsum = __insn_v2sadu(jsum, 0);
> +	jsum = __insn_v2sadu(jsum, 0);
> +	jsum = (0xFFFF ^ jsum);
> +	jh->check = jsum;
> +
> +	/* Update the tcp "seq". */
> +	uh->seq = htonl(seq);
> +
> +	/* Update some flags. */
> +	if (!final)
> +		uh->fin = uh->psh = 0;
> +
> +	/* Compute the tcp pseudo-header checksum. */
> +	usum = tsum_hack + htons(s_len);
> +	usum = __insn_v2sadu(usum, 0);
> +	usum = __insn_v2sadu(usum, 0);
> +	uh->check = usum;

Why do you open-code the ip checksum functions here?
Normally the stack takes
care of this by calling the functions you already provide in
arch/tile/lib/checksum.c

	Arnd
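The generic helpers Arnd refers to all compute the standard one's-complement Internet checksum. As a point of reference, here is a minimal userspace sketch of that algorithm (RFC 1071); this is an illustration of the generic technique, not the tile-specific kernel code, and the function name ip_checksum is ours:

```c
#include <stdint.h>
#include <stddef.h>

/* One's-complement Internet checksum (RFC 1071) over a byte buffer.
 * Data is summed as big-endian 16-bit words; carries are folded back
 * into the low 16 bits, and the result is complemented. */
uint16_t ip_checksum(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;

    while (len > 1) {
        sum += (uint32_t)data[0] << 8 | data[1];
        data += 2;
        len -= 2;
    }
    if (len)                       /* odd trailing byte, padded with zero */
        sum += (uint32_t)data[0] << 8;

    while (sum >> 16)              /* fold carries into the low 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);

    return (uint16_t)~sum;
}
```

Using the worked example from RFC 1071 (bytes 00 01 f2 03 f4 f5 f6 f7), the folded sum is 0xddf2 and the transmitted checksum is its complement, 0x220d.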
https://lkml.org/lkml/2012/4/9/132
Eg:

import java.io.*;^M
^M
/**^M
 *^M
 * This class is used to manage the o/p stream of the application.^M
 *^M
 * @author Niketan Pansare^M
 * @version 1.0^M
 *^M
 */^M
public class OutputManager^M
{^M
    OutputStream outputStream;^M
^M
    /**^M
     * This constructor uses standard i/o as default stream.^M
     */^M

and so on ...

You can replace all the extra ^M in the vi editor by:

1. Going into command mode (press Esc; if you already are in command mode, you will hear a beep).
2. Then typing :%s/^M$//g

Don't copy and paste the above lines. To enter the ^M, press (Ctrl+V) then (Ctrl+M), i.e. ^V ^M.

What does the above command mean? For substitution you use the following command:

:[range]s/[pattern]/[string]/[options]

's' here means substitute a pattern with a string. The range can be:

{number}   an absolute line number
.          the current line
$          the last line in the file
%          equal to 1,$ (the entire file)

You can also use # instead of / as the separator.

Technically, a pattern is defined as follows (quoting the vim help on patterns):

1. A pattern is one or more branches, separated by "\|". It matches anything that matches one of the branches.

2. A branch is one or more pieces, concatenated. It matches a match for the first, followed by a match for the second, etc. Example: "foo[0-9]beep", first match "foo", then a digit and then "beep".

3. A piece is an atom, possibly followed by:

   magic   nomagic
   *       \*       matches 0 or more of the preceding atom
   \+      \+       matches 1 or more of the preceding atom   {not in Vi}
   \=      \=       matches 0 or 1 of the preceding atom      {not in Vi}

   Examples:
   .*      .\*      matches anything, also the empty string
   ^.\+$   ^.\+$    matches any non-empty line
   foo\=   foo\=    matches "fo" and "foo"

4. An atom can be one of:

   magic   nomagic
   ^       ^        at beginning of pattern, matches start of line
   $       $        at end of pattern or in front of "\|", matches end of line
   .       \.       matches any single character
   \<      \<       matches the beginning of a word
   \>      \>       matches the end of a word
   \i      \i       matches any identifier character (see 'isident')

   Examples:
   ^beep(                  Probably the start of the C function "beep".
   [a-zA-Z]$               Any alphabetic character at the end of a line.
   \<\I\i*  or
   \(^\|[^a-zA-Z0-9_]\)[a-zA-Z_]\+[a-zA-Z0-9_]*
                           A C identifier (will stop in front of it).
   \(\.$\|\. \)            A period followed by end-of-line or a space.
                           Note that "\(\. \|\.$\)" does not do the same,
                           because '$' is not end-of-line in front of '\)'.
                           This was done to remain Vi-compatible.
   [.!?][])"']*\($\|[ ]\)  A search pattern that finds the end of a sentence,
                           with almost the same definition as the ")" command.

Seems complex? Let us stick to our example for the time being:

$ means end of line. Therefore, ^M$ means any ^M character at the end of a line. It is to be replaced by nothing (since in our example the string is empty). The g at the end means "global".
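Outside of vi, the same cleanup can be scripted. A minimal sketch using tr to delete every carriage return (the file name OutputManager.java is just a placeholder for this example; the first line only creates a sample CRLF file to operate on):

```shell
# Create a sample file with DOS line endings (\r\n).
printf 'class A {\r\n}\r\n' > OutputManager.java

# Strip every carriage return (\r, i.e. ^M), then replace the original.
tr -d '\r' < OutputManager.java > OutputManager.unix.java
mv OutputManager.unix.java OutputManager.java
```

Unlike the vi command, tr -d '\r' also removes carriage returns in the middle of lines, not only at line ends; for a strict equivalent of :%s/^M$//g you could use sed 's/\r$//' on systems whose sed understands \r.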
http://niketanblog.blogspot.com/2007/09/mixing-dos-and-unix.html
Configuration logic

Starting from 1.1, certain logic constructs are available. The following statements are currently supported:

- for .. endfor
- if-dir / if-not-dir
- if-env / if-not-env
- if-exists / if-not-exists
- if-file / if-not-file
- if-opt / if-not-opt
- if-reload / if-not-reload (undocumented)

Each of these statements exports a context value you can access with the special placeholder %(_). For example, the "for" statement sets %(_) to the current iterated value.

Warning: Recursive logic is not supported and will cause uWSGI to promptly exit.

for

"for" iterates over space-separated strings. The following three code blocks are equivalent:

  [uwsgi]
  master = true
  ; iterate over a list of ports
  for = 3031 3032 3033 3034 3035
  socket = 127.0.0.1:%(_)
  endfor =
  module = helloworld

  <uwsgi>
    <master/>
    <for>3031 3032 3033 3034 3035</for>
    <socket>127.0.0.1:%(_)</socket>
    <endfor/>
    <module>helloworld</module>
  </uwsgi>

  uwsgi --for="3031 3032 3033 3034 3035" --socket="127.0.0.1:%(_)" --endfor --module helloworld

Note that the for-loop is applied to each line inside the block separately, not to the block as a whole. For example, this:

  [uwsgi]
  for = a b c
  socket = /var/run/%(_).socket
  http-socket = /var/run/%(_)-http.socket
  endfor =

is expanded to:

  [uwsgi]
  socket = /var/run/a.socket
  socket = /var/run/b.socket
  socket = /var/run/c.socket
  http-socket = /var/run/a-http.socket
  http-socket = /var/run/b-http.socket
  http-socket = /var/run/c-http.socket

if-env

Check if an environment variable is defined, putting its value in the context placeholder:

  [uwsgi]
  if-env = PATH
  print = Your path is %(_)
  check-static = /var/www
  endif =
  socket = :3031

if-exists

Check for the existence of a file or directory. The context placeholder is set to the filename found:

  [uwsgi]
  http = :9090
  ; redirect all requests if a file exists
  if-exists = /tmp/maintainance.txt
  route = .* redirect:/offline
  endif =

Note: The above example uses uWSGI internal routing.

if-file

Check if the given path exists and is a regular file. The context placeholder is set to the filename found:

  <uwsgi>
    <plugins>python</plugins>
    <http-socket>:8080</http-socket>
    <if-file>settings.py</if-file>
    <module>django.core.handlers.wsgi:WSGIHandler()</module>
    <endif/>
  </uwsgi>

if-dir

Check if the given path exists and is a directory. The context placeholder is set to the filename found:

  uwsgi:
    socket: 4040
    processes: 2
    if-dir: config.ru
    rack: %(_)
    endif:

if-opt

Check if the given option is set, or has a given value. The context placeholder is set to the value of the referenced option.

To check if an option was set, pass just the option name to if-opt:

  uwsgi:
    cheaper: 3
    if-opt: cheaper
    print: Running in cheaper mode, with initially %(_) processes
    endif:

To check if an option was set to a specific value, pass option-name=value to if-opt:

  uwsgi:
    # Set busyness parameters if it was chosen
    if-opt: cheaper-algo=busyness
    cheaper-busyness-max: 25
    cheaper-busyness-min: 10
    endif:

Due to the way uWSGI parses its configs, you can only refer to options that uWSGI has previously seen. In particular, this means:

- Only options that are set above the if-opt option are taken into account. This includes any options set by previous include options (or type-specific includes like ini), but does not include options set by previous inherit options.
- if-opt is processed after expanding magic variables, but before expanding placeholders and other variables. So if you use if-opt to compare the value of an option, check against the value as stated in the config file, with only the magic variables filled in. If you use the context placeholder %(_) inside the if-opt block, you should be OK: any placeholders will later be expanded.
- If an option is specified multiple times, only the value of the first one will be seen by if-opt.
- Only explicitly set values will be seen, not implicit defaults.

See also: How uWSGI parses config files
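The line-by-line for-expansion described above can be modeled with a few lines of Python. This is only a sketch of the documented behavior, not uWSGI's actual parser, and the function name expand_for is ours:

```python
def expand_for(lines):
    """Model of uWSGI's for..endfor: every line inside the block is
    emitted once per space-separated value, with %(_) substituted,
    line by line rather than block by block."""
    out, block, values = [], None, []
    for line in lines:
        key, _, val = (part.strip() for part in line.partition("="))
        if key == "for":
            values, block = val.split(), []       # start collecting the block
        elif key == "endfor":
            for inner in block:                    # expand each line separately
                for v in values:
                    out.append(inner.replace("%(_)", v))
            block = None
        elif block is not None:
            block.append(line)                     # inside for..endfor
        else:
            out.append(line)                       # ordinary option line
    return out
```

Running it on the socket/http-socket example from the text reproduces the expansion shown there: three socket lines followed by three http-socket lines.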
http://uwsgi-docs.readthedocs.org/en/latest/ConfigLogic.html
04 July 2008 05:43 [Source: ICIS news]

By Jeremiah Chan

SINGAPORE (ICIS news)--Price hikes of around $80-100/tonne nominated by Asian bisphenol A (BPA) sellers have failed to take effect in the key Chinese market due to lacklustre domestic demand, traders and importers said on Friday.

Offers from BPA majors were heard at $1,980-2,100/tonne CFR (cost and freight).

"We have no choice but to move up our offers," said a marketing official from one of the producers. "Phenol is up nearly $200/tonne, and we have to pass on the costs to our customers," he said, although he added that buying sentiment appeared to have waned this week.

However, Chinese end-users in the downstream epoxy resins sector saw little incentive to accept the higher numbers due to a seasonal lull in demand. Bids from most buyers were largely heard capped in the low $1,900s/tonne CFR.

Difficulties in shipping co-feedstock epichlorohydrin (ECH) due to the upcoming Beijing Olympics also kept epoxy resin production rates low, with some traders estimating current Chinese epoxy resin operating rates at a dismal 50-60%.

"The overall financial situation in China is now very poor, and most downstream products are not doing well," the procurement manager of Chinese epoxy resin major Jiangsu Sanmu Group said, adding that they could not buy cargoes above $1,900/tonne CFR without eroding their profit margins.

Statistics released by Chinese customs last week showed a huge drop in May BPA import volumes to 26,216 tonnes, a 32% fall month on month. However, this dip in imports did little to boost domestic prices, as traders maintained that current Chinese inventories were still sufficient for domestic consumption.

Chinese domestic offers were heard at yuan (CNY) 16,600-16,700/tonne ($2,423-2,434/tonne) ex-warehouse on Friday, with fixtures reported at CNY16,400-16,500/tonne ex-warehouse, representing a CNY200/tonne dip from the previous week.

($1 = CNY6.86)
http://www.icis.com/Articles/2008/07/04/9137754/asia-bpa-hikes-ineffective-on-weak-demand.html
An input control for integer numbers. More...

#include <Wt/WSpinBox>

An input control for integer numbers. The spin box provides a control for entering an integer number. It consists of a line edit and buttons that allow you to increase or decrease the value. If you need input of a fractional number instead, use WDoubleSpinBox. WSpinBox is an inline widget.

Creates a spin-box. The default range is 0 to 99, the step size is 1, and the initial value is 0.

Member documentation:

- Returns the maximum value.
- Returns the minimum value.
- Sets the maximum value. The default value is 99.
- Sets the minimum value. The default value is 0.
- Sets the range.
- Sets the step value. The default value is 1.
- Sets whether this spinbox wraps around to stay in the valid range.
- Returns the value.
- A signal that indicates when the value has changed. This signal is also emitted when setValue() is called.
- Returns whether the spinbox wraps around.
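The documented range/step/wrap-around semantics can be illustrated with a small self-contained model. This is not Wt code (it requires no Wt headers), just a sketch of the behavior described above; the struct name SpinModel and its members are ours:

```cpp
// A minimal model of the spin-box semantics documented above: a value
// constrained to [min, max], stepped up or down, optionally wrapping
// around to stay inside the valid range.
struct SpinModel {
    int min = 0, max = 99, step = 1, value = 0;  // defaults as documented
    bool wrap = false;

    void stepUp() {
        int v = value + step;
        if (v > max) v = wrap ? min : max;  // wrap to min, or clamp at max
        value = v;
    }

    void stepDown() {
        int v = value - step;
        if (v < min) v = wrap ? max : min;  // wrap to max, or clamp at min
        value = v;
    }
};
```

With wrap disabled, stepping up at the maximum leaves the value pinned there; with wrap enabled, it rolls over to the minimum (and stepping down from the minimum rolls to the maximum).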
https://www.webtoolkit.eu/wt/wt-3.3.8/doc/reference/html/classWt_1_1WSpinBox.html
Well, things seem to be shaping up. The photo album is fixed (but still running under PHP), RSS feeds now use the proper base URL, and there are a whole bunch of them now (one per namespace, i.e., blog, applications (now under the apps tree), HOWTO, etc.). I will be adding them to the sidebar soon, but today I'm pretty much taking the day to read and catch up on things, since the weather isn't very inviting outside.

Still, there are plenty of things to fix, and not just with Yaki. For instance, one of the things I've been doing behind the scenes is fiddling with lighttpd, which is what we use as a reverse proxy. Right now, I've been shutting out nuisance bots, like MSRBOT (which seems to have a lot of trouble with relative URLs and, despite what it says on that page, hasn't picked up on robots.txt). Yes, I know I've done this before: isn't progress wonderful?

The magic for banning these nuisances under lighttpd (lest I forget sometime in the future) is this incantation:

$HTTP["useragent"] =~ "MSRBOT|msnbot/0.9" {
    url.access-deny = ( "" )
}

You can, of course, add anything you like to the regular expression. Popular terms are "sucker", "grabber", etc.; it should take you no time at all to go over your server logs and find quite a few more. Have fun stomping them out.
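Before dropping a pattern into the lighttpd config, it's worth sanity-checking it against real user-agent strings from your logs. A quick Python sketch (the function name is_blocked is ours, and "sucker" and "grabber" are added per the suggestion above):

```python
import re

# The same kind of pattern used in the lighttpd $HTTP["useragent"]
# conditional above; a match anywhere in the UA string means "deny".
BLOCKED = re.compile(r"MSRBOT|msnbot/0.9|sucker|grabber")

def is_blocked(user_agent):
    """Return True if this user agent would be denied."""
    return bool(BLOCKED.search(user_agent))
```

Note that in a regular expression the dot in "0.9" matches any character; for a strictly literal match you would write "0\.9", though in practice it rarely matters for log filtering.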
http://taoofmac.com/space/blog/2007/05/20
Getting started with Dash

Overview

This article will show you how to publish a Python App with Dash in Domino. In this tutorial you will:

- configure a Domino environment with the necessary dependencies to publish a Dash App
- create a project and set it up for App publishing
- publish an App to the Domino launchpad
- observe how other users in Domino can use the App

You'll be working with the second example from Basic Dash Callbacks, part of the Dash User Guide. In this example, the application serves an interactive scatter plot of countries by GDP per capita and life expectancy. It will take approximately 15 minutes to get this example running in Domino.

Set up environment

The first step is to create a Domino compute environment capable of running your App.

1. From the Lab, click Environments.
2. From the Environments Overview, click Create Environment.
3. Give your environment a descriptive name and description, and select Domino Analytics Distribution Py3.6 R3.4 from the Environment dropdown under Base Image. This selection means that the setup instructions we provide for this environment will be applied on top of a base image with Python 3.6 and some analytics modules already installed. Read Domino standard environments to learn more about the contents of this base image. Click Create Environment when finished.
4. You will be directed to the Overview tab for your new environment. Click Edit Dockerfile.
5. In the Dockerfile Instructions field, paste in the following instructions:

   # Install the libraries we want to use in our app
   RUN pip install dash==0.22.0 && \
       pip install dash-renderer==0.13.0 && \
       pip install dash-html-components==0.11.0 && \
       pip install dash-core-components==0.26.0 && \
       pip install plotly --upgrade

6. Click Build when finished. You will be directed to the Revisions tab for your environment. Here you'll be able to monitor the build process for your new version of the environment.
If the build succeeds, you're ready to use this environment for App publishing.

Set up project

The next step is creating a project with the settings and content you need to publish your App.

1. From the Lab, click Projects.
2. Click New Project. Give your project an informative name, then click Create Project.
3. Click Settings in the project sidebar, then set the Compute environment to the one you created in the previous step.
4. Click Files in the project sidebar, then click Add File. Name the file app.py in the title field above the editor.
5. In the body of the file, paste the following example App code:

import dash
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import plotly.graph_objs as go

df = pd.read_csv(
    ''
    'datasets/master/gapminderDataFiveYear.csv')

app = dash.Dash()

app.config.update({
    #### as the proxy server may remove the prefix
    'routes_pathname_prefix': '',
    #### the front-end will prefix this string to the requests
    #### that are made to the proxy server
    'requests_pathname_prefix': ''
})

app.layout = html.Div(style={'paddingLeft': '40px', 'paddingRight': '40px'}, children=[
    dcc.Graph(id='graph-with-slider'),
    dcc.Slider(
        id='year-slider',
        min=df['year'].min(),
        max=df['year'].max(),
        value=df['year'].min(),
        step=None,
        marks={str(year): str(year) for year in df['year'].unique()}
    )
])

@app.callback(
    dash.dependencies.Output('graph-with-slider', 'figure'),
    [dash.dependencies.Input('year-slider', 'value')])
def update_figure(selected_year):
    filtered_df = df[df.year == selected_year]
    traces = []
    for i in filtered_df.continent.unique():
        df_by_continent = filtered_df[filtered_df['continent'] == i]
        traces.append(go.Scatter(
            x=df_by_continent['gdpPercap'],
            y=df_by_continent['lifeExp'],
            text=df_by_continent['country'],
            mode='markers',
            opacity=0.7,
            marker={
                'size': 15,
                'line': {'width': 0.5, 'color': 'white'}
            },
            name=i
        ))

    return {
        'data': traces,
        'layout': go.Layout(
            xaxis={'type': 'log', 'title': 'GDP Per Capita'},
            yaxis={'title': 'Life Expectancy', 'range': [20, 90]},
            margin={'l': 40, 'b': 40, 't': 10, 'r': 10},
            legend={'x': 0, 'y': 1},
            hovermode='closest'
        )
    }

if __name__ == '__main__':
    app.run_server(port=8888, host='0.0.0.0', debug=True)

Make special note of the app.config.update(...) call near the top. When serving a Dash application from Domino, you must configure Dash to serve from a relative path, instead of the default root path. Include these lines immediately after initializing your app:

app.config.update({
    'routes_pathname_prefix': '',
    'requests_pathname_prefix': ''
})

Make note of two important variables in the final line of the file. Domino-hosted applications must run on a host of 0.0.0.0 and listen on port 8888. These are the settings Domino will expect when directing users to your App. Click Save when finished.

6. Domino launches Apps from an app.sh file containing the command to run, which for this project is python app.py. Create this file the same way you did for app.py.
7. Once the App is published, users will see the scatterplot with a Domino toolbar above it showing the project it's published from, plus buttons to email the App owner and open the description panel.
https://docs.dominodatalab.com/en/4.1/reference/publish/apps/Getting_started_with_Dash.html
Xbox 360/Xbox Development on the Mac

This article describes the process to configure an iBook to compile C code written with OpenXDK and make Xbox native executables. At the time of writing, the author had successfully compiled a Windows executable but had not been able to convert it to an Xbox executable.

Overview

All of these steps are fairly straightforward, although there are some tricks that the novice needs to know to bypass hours of "googling". The process taken is similar to installing OpenXDK on Linux, but differs because in this example we're using a PowerPC processor. This means that we first have to build a cross compiler. The assumption is that you know how to access the shell.

DarwinPorts

DarwinPorts is a way to track and install open source programs on your system easily. It is designed specifically for Darwin (as the name suggests), and using this you can download MinGW. The current version of DarwinPorts is 1.2, and it comes in a handy installer package. Once it's installed, run the following command to update the internal list of available ports:

$ sudo port -d selfupdate

MinGW

The MinGW compilers and Windows API libraries let you write and compile programs for execution on the i386 system. Although you can download a copy of everything you need from the website, using DarwinPorts to download the right files for you is a better option. To download the tools needed, run the command:

$ sudo port install i386-mingw32-gcc

This will download, configure and install the gcc cross-compiler and tools to your system. Note: this step will take a while.

Compile Errors

As of this writing, there is a compile error when the installer executes make for the binutils package. The exact error is listed below:

.../work/binutils-2.15.91-20040904-1/gas/config/tc-i386.h:457: error: array type has incomplete element type

To get around this problem, change the compiler to gcc-3.3 when preparing the build.
$ sudo gcc_select 3.3

Then, once the binutils package has compiled successfully, halt the install with Ctrl+C (the install will fail anyway) and change the compiler back to gcc-4.0. Then the install should complete successfully.

Testing

This is an optional step. I just tested that the setup worked by writing a Hello World! program and running it on my Windows box. The contents of hello.c are shown below:

#include <stdio.h>

int main(int argc, char **argv)
{
    printf("Hello\n");
    return 0;
}

Then to compile, it's just like normal gcc/g++:

$ i386-mingw32-gcc -o hello.exe hello.c

This produces a Windows executable hello.exe that can be run natively in Windows. Note that this only tests the cross-compiler setup, and not OpenXDK.

OpenXDK

OpenXDK is a free kit for developing applications that will run on the Xbox. It contains a complete libc replacement, uses SDL for multimedia (sound and networking are not currently supported) and compiles with gcc. You can download the latest version from SourceForge. Make sure to download the binary file OpenXDK_#_bin.zip (where # is the version). Note: although it says it's for the i386 platform, that's our target system, and OpenXDK will NOT compile (even with i386-mingw32-gcc) on the Mac.

Cxbe

Cxbe comes with OpenXDK, and is located in the bin directory. It is responsible for converting the Windows executable into an Xbox executable. You won't be able to use the one provided, as it is for Windows. Unfortunately, at the time of writing the Cxbe source in the OpenXDK_src package is from an older version than the compiled version in the OpenXDK_bin package. Until a workaround surfaces, we will have to send the half-completed executable to a Windows box and run Cxbe there.

Compiling

To compile C code written for the Xbox, you need a lengthy flag list for the compiler. The following was adapted from the OpenXDK website. This assumes that you have a working version of cxbe on your system.
# installed OpenXDK path
XDK = /usr/local/openXDK

# cross-compiler
CC = i386-mingw32-gcc

# cxbe path (for post-processing)
CXBE = $(XDK)/bin/cxbe

SDLFLAGS = -DENABLE_XBOX -DDISABLE_CDROM
CC_FLAGSA = -g -std=gnu99 -ffreestanding -nostdlib -fno-builtin
CC_FLAGSB = -fno-exceptions -mno-cygwin -march=i386 $(SDLFLAGS)
INCLUDE = -I$(XDK)/i386-pc-xbox/include -I$(XDK)/include -I$(XDK)/include/SDL
CLINK = -nostdlib
ALIGN = -Wl,--file-alignment,0x20 -Wl,--section-alignment,0x20
SHARED = -shared
ENTRYPOINT = -Wl,--entry,_WinMainCRTStartup
STRIP = -Wl,--strip-all
LD_FLAGS = $(CLINK) $(ALIGN) $(SHARED) $(ENTRYPOINT) $(STRIP)
LD_DIRS = -L$(XDK)/i386-pc-xbox/lib -L$(XDK)/lib
LD_LIBS = $(LD_DIRS) -lSDL -lopenXDK -lhal -lc -lhal -lc -lusb -lxboxkrnl

all: default.exe

*.o: *.c
	$(CC) $(CC_FLAGSA) $(CC_FLAGSB) $(INCLUDE) $<

default.exe: *.o
	$(CC) -c $< -o $@ $(LD_LIBS) $(LD_FLAGS)
	$(CXBE) -TITLE:"$@" -DUMPINFO:"default.cxbe" -OUT:"default.xbe" $@ >/dev/null

clean:
	rm -f *.o *.xbe *.cxbe *.exe

Conclusion

While there is limited support for Xbox development on the Mac, because of the use of Linux to develop legal Xbox executables and the fact that Mac OS X is built on a flavor of Unix, it is possible to piggy-back on Linux's support sources and (with slight manipulation) produce a viable solution for Mac Xbox development.
http://en.wikibooks.org/wiki/Xbox_360/Xbox_Development_on_the_Mac
The strstr() function searches for the given string in the specified main string and returns a pointer to the first occurrence of the given string.

C strstr() function declaration

char *strstr(const char *str, const char *searchString)

str – The string to be searched.
searchString – The string that we need to search for in string str.

Return value of strstr()

This function returns a pointer to the first occurrence of the given string, which means that if we print the return value of this function, it displays the part of the main string starting from the given string and running to the end of the main string. If the given string does not occur, strstr() returns NULL.

Example: strstr() function in C

#include <stdio.h>
#include <string.h>

int main()
{
    const char str[20] = "Hello, how are you?";
    const char searchString[10] = "you";
    char *result;

    /* This function returns the pointer of the first occurrence
     * of the given string (i.e. searchString)
     */
    result = strstr(str, searchString);
    printf("The substring starting from the given string: %s", result);
    return 0;
}

Output:

The substring starting from the given string: you?

As you can see, we are searching for the string "you" in the string "Hello, how are you?" using the function strstr(). Since the function returned the pointer to the first occurrence of "you", the substring of str starting from "you" has been printed as output.
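Because strstr() returns NULL when the string is not found, real code should always check the result before using it. A small sketch of that pattern (the wrapper name index_of is ours, not part of the C library):

```c
#include <string.h>

/* Returns the 0-based index of needle inside haystack, or -1 if it
 * does not occur; a thin wrapper showing the NULL check that code
 * around strstr() should always perform. */
int index_of(const char *haystack, const char *needle)
{
    const char *p = strstr(haystack, needle);
    return p ? (int)(p - haystack) : -1;
}
```

For the example above, index_of("Hello, how are you?", "you") is 15, while searching for a string that is absent yields -1 instead of a crash from dereferencing NULL.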
https://beginnersbook.com/2017/11/c-strstr-function/
from __future__ import print_function
import numpy as np
import mdtraj as md

To benchmark the speed of the RMSD calculation, it's not really necessary to use 'real' coordinates, so let's just generate some random numbers from a normal distribution for the cartesian coordinates.

t = md.Trajectory(xyz=np.random.randn(1000, 100, 3), topology=None)
print(t)

<mdtraj.Trajectory with 1000 frames, 100 atoms, 0 residues, without unitcells>

The Theobald QCP method requires centering the individual conformations at the origin. That's done on the fly when we call md.rmsd().

import time

start = time.time()
for i in range(100):
    md.rmsd(t, t, 0)
print('md.rmsd(): %.2f rmsds / s' % ((t.n_frames * 100) / (time.time() - start)))

md.rmsd(): 327392.25 rmsds / s

But for some applications like clustering, we want to run many rmsd() calculations per trajectory frame. Under these circumstances, the centering of the trajectories is going to be done many times, which leads to a slight slowdown. If we manually center the trajectory and then inform the rmsd() function that the trajectory has been precentered, we can achieve a ~2x speedup, depending on your machine and the number of atoms.

t.center_coordinates()

start = time.time()
for i in range(100):
    md.rmsd(t, t, 0, precentered=True)
print('md.rmsd(precentered=True): %.2f rmsds / s' % ((t.n_frames * 100) / (time.time() - start)))

md.rmsd(precentered=True): 2567584.91 rmsds / s

Just for fun, let's compare this code to the straightforward numpy implementation of the same algorithm, which mdtraj has (mostly for testing) in the mdtraj.geometry.alignment subpackage.

from mdtraj.geometry.alignment import rmsd_qcp

start = time.time()
for k in range(t.n_frames):
    rmsd_qcp(t.xyz[0], t.xyz[k])
print('pure numpy rmsd_qcp(): %.2f rmsds / s' % (t.n_frames / (time.time() - start)))

pure numpy rmsd_qcp(): 2141.21 rmsds / s

The md.rmsd() code is a lot faster. If you go look at the rmsd_qcp source code, you'll see that it's not because that code is particularly slow or unoptimized. It's about as good as you can do with numpy. The reason for the speed difference is that an inordinate amount of time was put into hand-optimizing an SSE3 implementation in C for the md.rmsd() code.
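For readers who want to see what such a pure-numpy implementation looks like, here is a minimal sketch of minimum RMSD via the Kabsch (SVD) algorithm. It computes the same quantity as a QCP-style RMSD (optimal superposition RMSD), though not with mdtraj's actual code or the QCP formulation; the function name rmsd_kabsch is ours:

```python
import numpy as np

def rmsd_kabsch(A, B):
    """Minimum RMSD between two (n_atoms, 3) conformations after optimal
    translation and rotation, via the Kabsch algorithm (SVD of the
    covariance matrix)."""
    A = A - A.mean(axis=0)                      # remove translation
    B = B - B.mean(axis=0)
    H = A.T @ B                                 # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                          # optimal rotation, R @ a ~ b
    diff = A @ R.T - B
    return np.sqrt((diff ** 2).sum() / len(A))
```

Applying it to a conformation and a rotated-plus-translated copy of itself gives an RMSD of essentially zero, which is a handy correctness check for any superposition code.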
http://mdtraj.org/latest/examples/rmsd-benchmark.html
Red Hat Bugzilla – Bug 1255409
[RFE] Add ovirt-shell support for Mac OS
Last modified: 2016-05-05 15:11:24 EDT

This request was made by Fabrice Bacchella in the users mailing list. Quoting him:

I'm trying to install ovirt-shell on my mac, as explained in , but it fails:

$ virtualenv-2.7 ovirt
New python executable in ovirt/bin/python
Installing setuptools, pip, wheel...done.
$ ./ovirt/bin/easy_install ovirt-shell
Searching for ovirt-shell
Reading
Best match: ovirt-shell 3.5.0.6
...
Finished processing dependencies for ovirt-shell
$ ./ovirt/bin/ovirt-shell
Traceback (most recent call last):
  File "./ovirt/bin/ovirt-shell", line 9, in <module>
    load_entry_point('ovirt-shell==3.5.0.6', 'console_scripts', 'ovirt-shell')()
  File "/private/tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py", line 552, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/private/tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2672, in load_entry_point
    return ep.load()
  File "/private/tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2345, in load
    return self.resolve()
  File "/private/tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2351, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "build/bdist.macosx-10.10-x86_64/egg/ovirtcli/main.py", line 19, in <module>
  File "build/bdist.macosx-10.10-x86_64/egg/ovirtcli/infrastructure/options.py", line 21, in <module>
  File "build/bdist.macosx-10.10-x86_64/egg/ovirtcli/infrastructure/settings.py", line 18, in <module>
  File "build/bdist.macosx-10.10-x86_64/egg/cli/__init__.py", line 3, in <module>
  File "build/bdist.macosx-10.10-x86_64/egg/cli/context.py", line 34, in <module>
ImportError: cannot import name Terminal

This is related to the fact that in several places we check the content of the Python "sys.platform" variable, and then we take into account the values "linux2" and "win32", but not "darwin" (I
believe this is the value in Mac OS). We may need to change the following files in order to do the same for "linux2" and "darwin":

src/cli/platform/__init__.py
src/ovirtcli/platform/__init__.py
src/ovirtcli/platform/windows/__init__.py

Not sure if this will have a positive effect, as I don't have a Mac OS environment to test it.

Yes, after the recommended modification, I was able to use ovirt-shell and connect to the ovirt server. But I got warnings:

WARNING: Couldn't write lextab module 'cli.parser_lex'. [Errno 20] Not a directory: '/tmp/ovirt/lib/python2.7/site-packages/ovirt_shell-3.5.0.6-py2.7.egg/cli/parser_lex.py'
WARNING: Couldn't create 'cli.parser_tab'. [Errno 20] Not a directory: '/tmp/ovirt/lib/python2.7/site-packages/ovirt_shell-3.5.0.6-py2.7.egg/cli/parser_tab.py'

and indeed:

$ ls -l /tmp/ovirt/lib/python2.7/site-packages/ovirt_shell-3.5.0.6-py2.7.egg
-rw-r----- 1 fa4 wheel 279861 Aug 20 16:28 /tmp/ovirt/lib/python2.7/site-packages/ovirt_shell-3.5.0.6-py2.7.egg

The egg is a zip file. My python version:

$ ./ovirt/bin/python -V
Python 2.7.10

I installed ovirt-shell using virtualenv. Both '/tmp/ovirt/bin/easy_install ovirt-shell' and '/tmp/ovirt/bin/python setup.py install' generated an egg as a zip file.

OK, then we need to find a way to generate the parse tables during the build process, and include them in the source tarball, so that the parser won't try to generate them during runtime.

I think that the two attached patches should improve the situation, as the CLI will do the same for "darwin" as for "linux2" and the parsing tables should be generated during build time. Can you test it? Should be something like this:

# git clone git://gerrit.ovirt.org/ovirt-engine-cli
# cd ovirt-engine-cli
# git fetch git://gerrit.ovirt.org/ovirt-engine-cli refs/changes/60/45160/1 && git checkout FETCH_HEAD
# python setup.py install

generating parsing tables ....
ImportError: No module named kitchen.text.converters

After installing kitchen, and ply too, I got:

$ /tmp/ovirt/bin/python setup.py install
...
generating parsing tables
WARNING: Couldn't write lextab module 'parser_lex'. [Errno 2] No such file or directory: 'build/lib/cli/parser_lex.py'
error: source code not available

build/lib/cli/ exists, but there is no parser_lex.py in it.

I think this happens because you are using a newer version of ply, probably 3.6. I'm using 3.4, and it has a limitation regarding the location of the source files: it puts them always in the current working directory, unless explicitly told to put them in a specific directory. We used to work around this by changing the current working directory explicitly, but that doesn't work with 3.6. I have changed the code so that the output directory is passed explicitly; this should work correctly with 3.4 and 3.6. Please try again:

# git fetch git://gerrit.ovirt.org/ovirt-engine-cli refs/changes/60/45160/1 && git checkout FETCH_HEAD

You may want to create a gerrit.ovirt.org account and comment on the patches.

The fetch command in the previous comment is wrong, it should be like this:

# git fetch git://gerrit.ovirt.org/ovirt-engine-cli refs/changes/60/45160/4 && git checkout FETCH_HEAD

Now I'm getting:

...
generating parsing tables
creating build/bdist.macosx-10.10-x86_64
...

So this step is good, but later I got:

Processing dependencies for ovirt-shell==3.6.0.0rc4
Searching for ovirt-engine-sdk-python>=3.6.0.0preview0
Reading
No local packages or download links found for ovirt-engine-sdk-python>=3.6.0.0preview0
error: Could not find suitable distribution for Requirement.parse('ovirt-engine-sdk-python>=3.6.0.0preview0')

And this project is not publicly available in gerrit:

git clone git://gerrit.ovirt.org/ovirt-engine-sdk-python
Cloning into 'ovirt-engine-sdk-python'...
fatal: Could not read from remote repository.
So I can't check any further., And display is strange : [oVirt shell ([1;32mconnected[1;m)] But I'm connected. For information, the step to build my virtualenv are virtualenv-2.7 ovirt ./ovirt/bin/easy_install ply ./ovirt/bin/easy_install kitchen PYCURL_SSL_LIBRARY=openssl ./ovirt/bin/easy_install pycurl git clone git://gerrit.ovirt.org/ovirt-engine-sdk (cd ovirt-engine-sdk && ../ovirt/bin/python setup.py install) git clone git://gerrit.ovirt.org/ovirt-engine-cli (cd ovirt-engine-cli && git fetch git://gerrit.ovirt.org/ovirt-engine-cli refs/changes/60/45160/4 && git checkout FETCH_HEAD && ../ovirt/bin/python setup.py install) (In reply to Fabrice Bacchella from comment #9) >, > When do you get this error? When building or when running? When running. We do call the "parse_version" function, but I think that we don't iterate the result. It is hard to tell where this is happening. Can you modify your /tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py file (temporarely) and add these two lines right before that warning is generated? # Add these two lines: import traceback traceback.print_stack() # Right before this: # Warn for use of this function warnings.warn( "You have iterated .... ) Then repeat the test, it will hopefully give us some information of what is causing this issue. Regarding the strange display that happens because your terminal isn't honouring the ANSI color escape codes. That is strange, because most terminals support that these days. What kind of terminal are you using? What is the value of the TERM environment variable? As a workaround you can modify the ovirtcli/utils/colorhelper.py file, and replace the "if color" with "if False": def colorize(...): """...""" if False: # Instead of "if color" ... 
return text my term is xterm-256color The stack: File "ovirt/bin/ovirt-shell", line 9, in <module> load_entry_point('ovirt-shell==3.6.0.0rc4', 'console_scripts', 'ovirt-shell')() File "build/bdist.macosx-10.10-x86_64/egg/ovirtcli/main.py", line 38, in main shell.onecmd_loop() File "build/bdist.macosx-10.10-x86_64/egg/ovirtcli/shell/engineshell.py", line 339, in onecmd_loop self.do_connect('') File "build/bdist.macosx-10.10-x86_64/egg/ovirtcli/shell/connectcmdshell.py", line 42, in do_connect self.context.execute_string(ConnectCmdShell.NAME + ' ' + args + '\n') File "build/lib/cli/context.py", line 204, in execute_string self._execute_command(command) File "build/lib/cli/context.py", line 424, in _execute_command command.run(self) File "build/bdist.macosx-10.10-x86_64/egg/cli/command/command.py", line 101, in run self.execute() File "build/bdist.macosx-10.10-x86_64/egg/ovirtcli/command/connect.py", line 144, in execute if self.context.sdk_version < MIN_FORCE_CREDENTIALS_CHECK_VERSION: File "/private/tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py", line 112, in __lt__ return tuple(self) < other File "/private/tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py", line 188, in __iter__ traceback.print_stack() /private/tmp/ovirt/lib/python2.7/site-packages/pkg_resources/__init__.py:201:, OK, apparently the warning is caused by a version comparison that we do with a tuple. Fortunately we don't need to do that comparison, so I removed it: cli: Don't check for old version of the SDK The ovirt.org has been re-organized. The page that describes the CLI: The fixes for this bug have been merged to the master branch, but there won't be a release from the master branch for version 4.0 of oVirt. Moving to new, as we don't plan to release a new cli. (In reply to Oved Ourfali from comment #18) > Moving to new, as we don't plan to release a new cli. So CLOSED-WONTFIX?
https://bugzilla.redhat.com/show_bug.cgi?id=1255409
Hi, cout in the code below stops outputting anything after I copy the buffer from an empty stringstream to it. I don't quite understand why this happens. I'd appreciate it if you could help explain it. Is there a better way to check a stringstream before using rdbuf? I was going to try if(ss.tellp() != ios::beg) but it caused a compile-time error.

Code:

#include <iostream>
#include <string>
#include <sstream>
using namespace std;

int main()
{
    stringstream ss;
    cout << ss.rdbuf();
    string s = "abcdefg";
    cout << s;     // nothing comes out on screen.
    cout << "123"; // nothing.
}

Regards, Sam

Do you understand what the rdbuf member does? Looks to me like it interprets it as setting the buffer associated with it.. and when you use cout, you're writing to some random place in memory.. not the console.
chem

My understanding is that rdbuf() allows access to the underlying streambuf from one iostream to another, allowing cout to read from the streambuf of ss. But I don't understand why it has a problem in the attached code when the streambuf in ss is empty.

1) I am not sure what the correct behavior is.
2) My guess is that when the stringstream is empty, the "cout << ss.rdbuf();" FAILS (not just outputs nothing). Once the stream's state becomes bad, further use of the stream fails.
3) If this is the case, clear cout's internal flags.

Code:

cout << ss.rdbuf();
cout.clear();

hey this is my code.. the classes are returning correct values.. but look at the output!!!!! i cant fix it.. ive tried everything.. it works perfectly in visual studio but it messes up this way in unix. and since i have to submit this in unix i need help.
part of the code: cout << setw(25)<<"Name" << setw(20)<< "Phone Number" << setw(15) << "Date of Birth" << endl << endl; int i; for (i = 0;(i<y);i++) { p = b[i]; cout << setw(25) << p.get_person().getLastName() + ", " + p.get_person().getFirstName() + " " + p.get_person().getMiddleName(); cout << setw(20) << p.getphone() ; cout << p.get_date().getDayofweek() << ", "; if (p.get_date().getMonth()==1) cout << "January"; else if (p.get_date().getMonth() == 2) cout << "February"; else if (p.get_date().getMonth()==3) cout << "March"; else if (p.get_date().getMonth()==4) cout << "April"; else if (p.get_date().getMonth()==5) cout << "May"; else if (p.get_date().getMonth()==6) cout << "June"; else if (p.get_date().getMonth()==7) cout << "July"; else if (p.get_date().getMonth()==8) cout << "August"; else if (p.get_date().getMonth()==9) cout << "September"; else if (p.get_date().getMonth()==10) cout << "October"; else if (p.get_date().getMonth()==11) cout << "November"; else if (p.get_date().getMonth()==12) cout << "December"; cout << " " << p.get_date().getDay() << ", " << p.get_date().getYear() << endl; } output: Name Phone Number Date of Birth Tuesday, October 12, 197055-5555 _________________________________________________________________ how is there no name?? and no comma??? and why the hell is the date before the number??? @mystmanshez don't hijack threads! Post your questions in a new thread or you will not get answers! Forum Rules
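The failbit diagnosis in the rdbuf thread above can be verified directly. Below is a minimal sketch, using std::ostringstream in place of std::cout so the result can be checked programmatically; the function names are just for illustration:

```cpp
#include <cassert>
#include <iostream>
#include <sstream>
#include <string>

// Inserting an empty stream buffer inserts zero characters,
// which sets failbit on the destination stream.
bool empty_rdbuf_sets_failbit() {
    std::ostringstream out;   // stand-in for std::cout
    std::stringstream empty;
    out << empty.rdbuf();     // no characters inserted -> failbit
    return out.fail();
}

// Once the flags are cleared, the stream works again.
std::string recover_with_clear() {
    std::ostringstream out;
    std::stringstream empty;
    out << empty.rdbuf();     // out is now in a failed state
    out << "lost";            // discarded: a failed stream ignores output
    out.clear();              // reset the error flags
    out << "abcdefg";         // works again
    return out.str();
}
```

The clear() call at the end is exactly the workaround suggested in the thread for cout.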
http://forums.codeguru.com/showthread.php?467000-Formatting-Error&goto=nextnewest
verbose_regexp 1.0.0+1

RegExp verbose mode in Dart

Overview

This package contains a single function verbose(String) → String that can be used to simulate the verbose mode known from other RegExp implementations like Python's. This function will simply remove all unescaped whitespace and line comments and return a purged Dart regular expression that you can pass to the RegExp constructor.

Usage

import 'package:verbose_regexp/verbose_regexp.dart';

var a = new RegExp(verbose(r'''
    \d +  # the integral part
    \.    # the decimal point
    \d *  # some fractional digits'''));
var b = new RegExp(r'\d+\.\d*');

void main() {
  assert(a == b);
}

Changelog

1.0.0+1
- Add SDK constraint so the package can be used in Dart 2.0.0.
- Run dartfmt --fix.

1.0.0
- Bump version to 1.0.0. Nothing changed really, but since I don't expect any design/API changes, I should have used this version number a year ago.
- Cosmetic changes to appease pana: Add analysis_options.yaml and CHANGELOG.md.
- Fix strong_mode and linter warnings.
- Update repository name in pubspec.yaml.

0.1.0
- Initial version.

Use this package as a library

1. Depend on it. Add this to your package's pubspec.yaml file:

dependencies:
  verbose_regexp: ^1.0.0

2. Import it:

import 'package:verbose_regexp/verbose_regexp.dart';
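For reference, the "verbose mode known from other RegExp implementations" is, for example, Python's re.VERBOSE flag, which ignores unescaped whitespace and treats # as starting a comment in exactly the same spirit:

```python
import re

# Same pattern as the Dart usage example above, written with re.VERBOSE:
# whitespace is ignored and '#' starts a line comment.
verbose_pattern = re.compile(r"""
    \d +   # the integral part
    \.     # the decimal point
    \d *   # some fractional digits
""", re.VERBOSE)

# The equivalent compact pattern.
compact_pattern = re.compile(r"\d+\.\d*")
```

Both patterns accept the same strings.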
https://pub.dev/packages/verbose_regexp
How to integrate AutoMapper in ASP.NET Core Web API

This article was originally published on my blog. In this article, we are going to see how to integrate AutoMapper in ASP.NET Core Web API. First, we will see what the heck AutoMapper is and which problem it solves. After that, we will integrate it with Web API and then we will take a look at commonly used features of AutoMapper. So let's grab a cup of coffee and start learning.

TL;DR

What is AutoMapper?

AutoMapper is a component that helps to copy data from one type of object to another type of object. It is more like an object-object mapper, which is also how the AutoMapper docs describe it.

Why do we need to use AutoMapper?

When we have to map two different objects, we can use AutoMapper. One common problem: while building an application we have some entities that deal with the DB (for example, DTOs) and some entities that we expose to the client, so we have to map those DTOs to the entities dealing with clients. For example, we have two classes, Student and StudentDTO, as below:

Student:

public class Student
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string City { get; set; }
}

StudentDTO:

public class StudentDTO
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string City { get; set; }
}

Now, to map those objects, what we do is create a method that takes a StudentDTO object as input, maps the values, and returns a Student object:

public Student MapObjects(StudentDTO dto)
{
    return new Student()
    {
        Name = dto.Name,
        Age = dto.Age,
        City = dto.City
    };
}

It will solve the problem, but what happens when a class has many properties? In that case it is very tedious to write a mapping for each and every property. And this is for one class; if you have 100 such classes, you would have to write the same kind of mapping for all 100 of them. There is a simpler solution to this problem: using AutoMapper.

Let's see how we can integrate it with ASP.NET Core Web API.

How does AutoMapper work?

AutoMapper internally uses a great programming concept called Reflection. Reflection in C# is used to retrieve metadata on types at runtime. With the help of Reflection, we can dynamically get the type of an existing object and invoke its methods or access its fields and properties. You can read more about Reflection here.

Integrating AutoMapper in an ASP.NET Core Web API project

Prerequisites
- Visual Studio 2019 (if you are using .NET Core 3.1)
- .NET Core SDK installed

The first thing is to create a new ASP.NET Core Web API project, or you can open any existing Web API project. For those who are new to ASP.NET Core, I have listed the steps to create a new Web API project. Open Visual Studio and click on File -> New -> Project. Then select ASP.NET Core Web Application and click on the Next button. Give the project a name and click on the Create button. After that select API and click on the Create button. Now your ASP.NET Core Web API project is set up.

Install the AutoMapper NuGet package

Right-click on the project, click on Manage NuGet Packages, and search for the package below, or open the Package Manager Console and run this command:

Install-Package AutoMapper.Extensions.Microsoft.DependencyInjection

The above package will automatically install the AutoMapper package for us, since it references the same package.

Configure AutoMapper

Now we have installed the AutoMapper package, so the next step is to configure it for our ASP.NET Core Web API. Open the Startup.cs class and add the code below into the ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddAutoMapper(AppDomain.CurrentDomain.GetAssemblies());
}

The AddAutoMapper method, provided by the AutoMapper package, traverses the assemblies and checks for classes that inherit from the Profile class of AutoMapper. This method takes a single assembly or a list of assemblies as input.
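The Reflection idea described above is not specific to C#. As an illustration only — a toy sketch, not AutoMapper's actual implementation — here is the same "copy matching properties by name" trick using Python's reflection facilities:

```python
def auto_map(source, target_cls):
    """Toy object-object mapper: copy every public attribute of `source`
    onto a new instance of `target_cls`, matching attributes by name."""
    target = target_cls()
    for name, value in vars(source).items():       # reflection over attributes
        if not name.startswith("_") and hasattr(target, name):
            setattr(target, name, value)
    return target

# Hypothetical classes mirroring the article's Student/StudentDTO example.
class StudentDTO:
    def __init__(self):
        self.name = "Alice"
        self.age = 20
        self.city = "Pune"

class Student:
    def __init__(self):
        self.name = ""
        self.age = 0
        self.city = ""
```

One generic function replaces a hand-written MapObjects per class pair, which is the whole point of the reflection-based approach.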
We have used AppDomain.CurrentDomain.GetAssemblies(), which gives an array of the assemblies in this application domain.

Creating classes

To make things simple, create two classes called Student.cs and StudentDTO.cs.

Creating Profiles

Create a new class called AutoMapperProfile.cs which inherits from the Profile class of AutoMapper. Use CreateMap<source, destination>() to create a mapping between classes. Here we have mapped the StudentDTO class to the Student class. When the application starts, it will initialize AutoMapper; AutoMapper then scans all assemblies, looks for classes that inherit from the Profile class, and loads their mapping configurations. It's really simple.

Using IMapper

The IMapper interface is used to map two objects. Create a new controller called StudentController.cs. Resolve the IMapper dependency in the controller constructor. We have created a new object of the StudentDTO class and assigned values to it. Now, to map it to the Student class, we use the _mapper.Map() method.

Where do you think the IMapper is injected from? We haven't registered it in the ServiceCollection. The AddAutoMapper() method in your ConfigureServices takes care of all of this for you. Now start the API and browse /api/student to see the result. That's it. You have successfully configured AutoMapper in your Web API project.

Commonly used features of AutoMapper

In this section, we will be looking at some of the commonly used features of AutoMapper.

Projection

Now you must be thinking about what happens when the source class has a different property name than the destination class. Let's do it and see the output.

For demo purposes, I have changed the StudentDTO City property to CurrentCity as below:

namespace automapper_sample
{
    public class StudentDTO
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public string CurrentCity { get; set; }
    }
}

And I haven't changed the Student class, which looks as below:

namespace automapper_sample
{
    public class Student
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public string City { get; set; }
    }
}

Now, after running the application, you will see that City is not mapped to the CurrentCity property. To solve this problem AutoMapper has a concept called Projection: we have to define a mapping for every property that differs between the two classes. Open the AutoMapperProfile.cs file and add the code below. Now run the application and see the output.

Nested Mappings

In the previous feature, we saw how to map two differently named properties, but what happens when both classes contain a nested class? For example, suppose StudentDTO has a property of type AddressDTO, and the Student class has a nested class property called Address.

AddressDTO.cs

namespace automapper_sample
{
    public class AddressDTO
    {
        public string State { get; set; }
        public string Country { get; set; }
    }
}

StudentDTO.cs

namespace automapper_sample
{
    public class StudentDTO
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public string CurrentCity { get; set; }
        public AddressDTO Address { get; set; }
    }
}

Address.cs

namespace automapper_sample
{
    public class Address
    {
        public string State { get; set; }
        public string Country { get; set; }
    }
}

Student.cs

namespace automapper_sample
{
    public class Student
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public string City { get; set; }
        public Address Address { get; set; }
    }
}

To map these nested classes, we have to write mappings for the nested classes as well in our AutoMapperProfile.cs class. The output looks like:

Conditional Mappings

Most of the time we have to map properties based on some condition, and for this AutoMapper has a concept called Conditional Mappings. While writing mappings we can specify conditions for specific properties. In the above example, you can see that the IsAdult property is calculated based on the condition Age > 18.

AutoMapper provides many other features to simplify complex mappings. You can check the docs here.

Conclusion

In this article, I have explained AutoMapper and how to integrate it with ASP.NET Core Web API. I also demonstrated some of the commonly used features of AutoMapper. I really hope that you enjoyed this article, and please do not hesitate to send me your thoughts or comments about what I could have done better. You can follow me on Twitter @sumitkharche01.

Happy Coding!
https://sumitkharche.hashnode.dev/how-to-integrate-automapper-in-aspnet-core-web-api-ck5bbqgwl00ygqks1wevivf05?guid=none&deviceId=8fa138c9-0b0d-4d6c-9c74-bbc450219313
using System.IO.Ports;

string[] ports = SerialPort.GetPortNames();
for (int i = 0; i < ports.Length; i++)
{
    Console.WriteLine(ports[i]);
}

using System.IO.Ports;

SerialPort port = new SerialPort();
SerialPort port = new SerialPort("COM1");
SerialPort port = new SerialPort("COM1", 9600);

NOTE: Those are just three of the seven overloads of the constructor for the SerialPort type. (Port names are written without a space, e.g. "COM1".)

The simplest way is to use the SerialPort.Read and SerialPort.Write methods. However you can also retrieve a System.IO.Stream object which you can use to stream data over the SerialPort. To do this, use SerialPort.BaseStream.

Reading

int length = port.BytesToRead;
// Note that you can swap out a byte-array for a char-array if you prefer.
byte[] buffer = new byte[length];
port.Read(buffer, 0, length);

You can also read all data available:

string curData = port.ReadExisting();

Or simply read to the first newline encountered in the incoming data:

string line = port.ReadLine();

Writing

The easiest way to write data over the SerialPort is:

port.Write("here is some text to be sent over the serial port.");

However you can also send data over like this when needed:

// Note that you can swap out the byte-array with a char-array if you so choose.
byte[] data = new byte[1] { 255 };
port.Write(data, 0, data.Length);
https://riptutorial.com/dot-net/example/18729/serial-ports-using-system-io-serialports
I'm new to C++. Please help me with learning C++ from the basics. What should I do?

Do you mean you have basic knowledge of C++? Or that you're a basic programmer? Regardless, I'd recommend grabbing a good book and/or reading online tutorials. Try out cplusplus.com and go from there. If you need an IDE, try code::blocks.

I would recommend taking a look at the antiRTFM tutorials. These tutorials are really great and easy to understand.

Do you want to know the basic idea of C++? termin8tor has recommended a good site. You can also get many free books online.

First you should memorize the structure of a program in the C++ language. This simple program will help you:

#include <iostream> // this part we call the header of your program.
using namespace std; // this part we call the standard library.

int main() // this part we call the main function.
{ // start of creating your program.
    cout << "Welcome to My World";
    return 0;
} // closing of your program.

// Note: when you save your work, the file extension is .cpp
https://www.daniweb.com/programming/software-development/threads/378708/need-help-to-using-c-for-beginner
Jim O'Neil Technology Evangelist Cups is a recent addition to the Windows Store, the effort of Julio Colon of ALOMSoftware. I asked Julio to give me a few words I could sprinkle into a blog post about his experiences in building the game, and he provided such a great write-up that I'm including it here in its entirety. It's written in the first person with Julio as the author.

Cups started as a very simple app whose only purpose was to get me in the door for Windows Phone 7 development. I wanted to use my current Silverlight and C# skills to create something fun to show my kids. The app today has more than 11,000 downloads in the Windows Marketplace. Once Microsoft announced Windows 8 and its support for apps, it became possible for me to write something that could potentially reach millions of users. I worked through the training material offered by the Microsoft Virtual Labs (ed. note: check out the Windows 8 Hands-on Labs Online as well) and visited the Microsoft offices in Waltham, MA for a few local conferences. The training assured me the code could be converted without considerable effort, and I decided to start the journey to convert Cups to Windows 8.

First, I started to move the engine to the new RT runtime. No surprises there (thanks to the training); I just had to rename some of the usual namespaces to the new Windows namespaces and deal with a few changes to the event and naming conventions, but no major rewrite. It all got converted in under an hour!

Second, I continued to change the Silverlight code to XAML. The previous code worked with the exception (again...) of some namespaces. A quick rename and that was it. I also had to change some of my grids to adjust better to the newly supported resolutions, but most of it was no big deal.

Third, and this is where the new Windows 8 UI features had a slight learning curve: the Settings option in the Charms consumed too much time for my taste, but thanks to the CharmsFlyoutLibrary it was really easy. This library takes care of the "unexciting" code, and it's a matter of wiring it to the XAML code.

Once I finished porting the basic Cups from WP7 to Windows 8, I realized it needed something new to make it different. I talked with the kids and some of them asked me to add an "Extreme Mode" to the game. I said: "Why not? It has been pretty easy to convert the code so far." Most of the ideas and cup distractions for the new mode came from them. The new "Extreme Mode" made the game a lot more interesting, while the "Classic Mode" is still available for those who want to play it.

Overall my experience was very good, and now that Cups is finished, I feel like my other apps (e.g. Shells, Days Till, Tape Calculator) should be converted in no time. I really want to thank Jim O'Neil from Microsoft for the help he provided with the Ad SDK and the Azure Mobile Services information he wrote about on his blog.

Cups is now in the Windows Store, and the next release will have scores and mobile notifications to keep users engaged and challenging each other. Find the game in the marketplace here, play, and enjoy it.
http://blogs.msdn.com/b/jimoneil/default.aspx?PageIndex=692&PostSortBy=MostViewed
I tried researching this area over the net, but couldn't find a whole lot. So, here's my blog post about it, and since this is my very first blog post, I'd like to say "Hello World!"

What does it mean to use ChannelFactory?

When you share a common service contract DLL between the client and the server, you'll be using the ChannelFactory class. The idea is to package the service contract interface and your entities in a library that would be implemented by the service and used by the client. So if you have a contract such as this in Contract.dll:

public interface IHelloWorld
{
    string SayHello(int numTimes);
}

You'd use the ChannelFactory class as follows in the client:

using Contract;

ChannelFactory<IHelloWorld> factory = new ChannelFactory<IHelloWorld>("tcp"); // Binding name

When to use a proxy?

We use a proxy in WCF to be able to share the service contract and/or entities with the client. If the client of your service is external to the system, such as an API consumer, it makes sense to use a proxy, because it makes sharing the contract easier by giving out a code file rather than a DLL.

When to use a ChannelFactory, i.e. a shared DLL?

A DLL is helpful if the client code is under your control and you'd like to share more than just the service contract with the client -- such as some utility methods associated with entities -- and make the client and the service code more tightly bound. If you know that your entities will not change much and the client code is small, then a DLL would work better than a proxy. Proxies have several restrictions like:

What all to package in the DLL?

You'll need to package the following:

So if you are designing a connected system and using WCF for that, then you can use a shared DLL instead of a proxy, to avoid code repetition and be more effective in the use of the service entities.
http://blogs.msdn.com/b/juveriak/archive/2008/02/03/using-channels-vs-proxies-in-wcf.aspx
What is Unity?

I started this in the worst way that someone can start a document: everyone knows Unity 3D is a game development platform. The latest version, …3.10, was released in April 2020, as Wikipedia says. Anyone can download Unity by visiting the Unity home page and get the newest version of it.

After the fundamentals, to see how scripting for 2D player movement is done, we have to start a new 2D project.

After starting a new project (Unity will take some time to load), we have to import our 2D model into the Unity scene. This time I am using a default Sprite provided by Unity. We can do this by right-clicking in the Hierarchy window, selecting a new 2D object, and then selecting a sprite. It will then be in our Scene layout, and I made some adjustments like changing the size and color, changing the sprite shape to a cube, and so on.

Then we have to add 2 main components to make sure everything works fine afterwards. Those two are Rigidbody 2D and Box Collider 2D. In the Inspector window/panel there is a button called Add Component. Click on that and we can add those two components to our main character (the square).

What is Rigidbody 2D?

Adding a Rigidbody 2D allows a sprite to move in a physically convincing way by applying forces from the scripting API. When the appropriate collider component is also attached to the sprite Game Object, it is affected by collisions with other moving Game Objects. Using physics simplifies many common gameplay mechanics and allows for realistic behavior with minimal coding. (See the Rigidbody 2D page at docs.unity3d.com.)

What is Box Collider 2D?

The Box Collider is an invisible shape that is used to handle physical collisions for an object. A collider doesn't need to be exactly the same shape as the object's mesh: a rough approximation is often more efficient and indistinguishable in gameplay. There are a lot of properties, inherited members, and public and static methods inside this.

What to do next?

First of all, set the gravity scale to 0 on the Rigidbody 2D. We don't want our cube/square to float off like it's in space :) If you want, you can freeze the rotation and keep the cube steady, without rotating when it collides with something. Just tick Freeze Rotation under Constraints on the Rigidbody 2D.

Then we go scripting. We can add a new script by clicking on the Add Component button and searching for a script. I am naming this one Player Controls. After creating the script we have to code the movements.

We do not need any Start method for this; we just need the Update method.

Vector2: a representation of 2D vectors and points, used to represent 2D positions; only two axes, x and y.
Vector3: a representation of 3D vectors and points, used to represent 3D positions, considering the x, y and z axes.

Since the project is 2D, we just have to create variables for the X and Y axes. First I created the speed variable as a Vector2 (you know why) and assigned 50 as its value. It is a public variable, so X/Y will be displayed in the Inspector and can be customized.

What happened here is that I created 2 variables and assigned them to the two axes (X/Y). Words like "Horizontal" and "Vertical" must be spelled exactly as above because they are pre-defined in Unity (Edit -> Project Settings -> Input). Then we can simply use the arrow or W/A/S/D keys to perform actions with our cube. We have to multiply them by the speed variable.

I created a Vector3 variable called movement and it includes the multiplication of the axes. You can see a 0 here, because there is no Z axis for our little cube. :)

Time.deltaTime?

After that, the movement variable is multiplied by Time.deltaTime and assigned again. (See the Time.deltaTime example at docs.unity3d.com.)

Then we have to pass the movement variable into the transform.Translate() function, which moves the transform in the direction and distance of the translation. If relativeTo is left out or set to Space.Self, the movement is applied relative to the transform's local axes (the x, y and z axes shown when selecting the object inside the Scene View). If relativeTo is Space.World, the movement is applied relative to the world coordinate system. (See Transform.Translate at docs.unity3d.com.)

So, that is it. Yeah, we have a moving cube now. I uploaded the code to GitHub (save-snowden/Movement-Medium-Artical) and anyone interested can grab it.

Peace!
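What multiplying by Time.deltaTime buys us — frame-rate-independent movement — can be sketched outside Unity. This is illustrative Python, not the Unity API: scaling each step by the frame duration makes the distance covered per second the same at any frame rate.

```python
def distance_covered(speed, frames, frame_dt):
    """Move along one axis at `speed` units/second, advancing by
    speed * frame_dt each frame (frame_dt plays the role of Time.deltaTime)."""
    position = 0.0
    for _ in range(frames):
        position += speed * frame_dt  # movement * Time.deltaTime
    return position

# One simulated second at 60 FPS and at 30 FPS:
dist_60 = distance_covered(50.0, frames=60, frame_dt=1 / 60)
dist_30 = distance_covered(50.0, frames=30, frame_dt=1 / 30)
```

Without the frame_dt factor, the 60 FPS run would travel twice as far in the same wall-clock second as the 30 FPS run.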
https://911weknow.com/movement-of-a-2d-player-in-unity
If you are completely new to Python and/or PyQt, then I would recommend working through some online tutorials - there are a TON of them out there. This tutorial is about working with PyQt as it relates to the Blur systems, not a basic introduction to PyQt in general. The tutorials on this page are for generic development - everything in these docs can and does apply to code for 3dsMax and Softimage - as the system we developed is application agnostic. I'm assuming that you've read through the BlurIDE documentation and are set up with the BlurOffline code project.

Everyone's favorite - Hello World. For this example, we'll go into a couple of differences between the basics of PyQt and the system that we have in place. First up, fire open your IDE Editor (can be standalone, or in an application of choice) and make sure we are in the BlurOffline project. Click Ctrl+N or File >> New and type:

from blurdev.gui import Dialog

class HelloWorldDialog(Dialog):
    def __init__(self, parent=None):
        Dialog.__init__(self, parent)
        self.setWindowTitle('Hello, World')

import blurdev
blurdev.launch(HelloWorldDialog)

Save this file to c:/blur/dev/offline/code/python/scripts/test.py

The first thing to note is our first line:

from blurdev.gui import Dialog

We're importing our own Dialog class, not a standard PyQt4.QtGui.QDialog class. In fact, the Dialog class inherits from the standard QDialog class, but it also provides some important wrapper code around it. The wrapper code works with the blurdev/core logic to determine the proper parenting based on the application it's running in. When standalone, no parent is needed, but if running within 3dsMax for instance, a Dialog MUST be a child of a QWinWidget class. Rather than making the developer check for the proper parenting, we just wrote our own base class to support it.
We did this for the 3 base classes that we'll use when developing tools: A couple of notes about the differences between the 3: This is obviously a dumbed-down version of the differences between the 3, but is most often what drives our decision when to use each. The next thing to note from this example is the way that we run the tool. The last line of code for this example calls:

blurdev.launch(HelloWorldDialog)

What the blurdev.launch code is doing for us is managing the QApplication system. In a standard PyQt application, a developer would have to maintain their application instance; however, there can only ever be 1 instance of a QApplication running per thread. In the case of 3dsMax and Softimage, we'll have 1 QApplication instance running for ALL tools that are created. When we're running standalone though, we're going to have 1 QApplication PER tool. Again, to abstract our code to be able to work in multiple applications, it was easiest to build the system into the core to maintain the application instancing. By calling blurdev.launch on your dialog, this will allow you to develop a tool that can run both IN and OUT of another application. Note: You only need to do this for your root dialogs. If you have, say, an About dialog that is a sub-dialog of your main Tool, you can just show it in a standard Qt way. So, what does this all mean? Let's give it a test. This will launch your dialog, and you'll notice that it is parented to the IDE Editor (if running standalone) or your application window (if running in 3dsMax or Softimage). This is because it's running within the current thread, which already has a QApplication instance running, and so parents to the root window. Now try: This will run your script as its own application - so it actually creates a full standalone application, creating its own QApplication and runs within its own thread. This now runs outside of the IDE editor and/or application. 
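To make the application-instancing idea concrete, here is a hedged, framework-free sketch of what a launch()-style helper has to do: reuse an existing application instance when one is already running in the process (as inside 3dsMax or Softimage), and create one otherwise. The Application class below is a stand-in, not the real Qt or blurdev API.

```python
class Application:
    """Stand-in for a QApplication-like singleton (one per process/thread)."""
    _instance = None

    @classmethod
    def instance(cls):
        return cls._instance

    def __init__(self):
        if Application._instance is not None:
            raise RuntimeError("only one Application per process")
        Application._instance = self


def launch(dialog_factory):
    """Create the dialog, owning the application only if none exists yet."""
    app = Application.instance()
    created = app is None
    if created:
        app = Application()  # standalone: we own the event loop
    dialog = dialog_factory()
    return dialog, created
```

The first call in a process creates the application (standalone mode); later calls just reuse it (embedded mode), which is the behaviour the real blurdev.launch abstracts away.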
Note: if you are building tools that need to access information inside of an application like 3dsMax or Softimage, they CANNOT access the Py3dsMax or PySoftimage libraries, so you have to program accordingly. Note 2: if you choose Run (Debug), it is the same as running standalone, except that you'll get the command prompt window as well. Before we get into developing tools, you should read up on how a tool is structured. Once you have a grasp of what a Tool is, let's create a more advanced script than our Hello, World example from above - this time, let's create a new tool. This will create a new folder at: c:/blur/dev/offline/code/python/tools/General_Use_Tools/_Examples That's all you have to do to create a new category - create a new folder within the tools folder. Note: For readability, it's recommended to start categories with an underscore. You could create a folder the way you just did, and then create a __meta__.xml file within it with the proper settings...but that's just not fun... Instead of forcing a developer to remember everything that goes into a Tool, it was much easier to build a Wizard that we could reuse. With that in mind, we built a robust and extensible wizard system into the IDE Editor. We have wizards for creating more Wizards, creating tool Environments, gui components, and for Tools. This time, right click on your _Examples folder and choose New From Wizard. This will bring up the Wizard chooser dialog. Since right now we're creating a new tool, let's choose: User Interface and then Tool. The Wizard you'll see will give you a couple of options. For now, let's set: We'll leave the other settings as they are by default (if you want to choose an icon, go ahead and do so). The next page you'll see will outline the files and folders that will be created for your wizard. You'll see this as the last page for all IDE Wizards, so you can always preview your files before you create them. 
Only checked files will be created, so if for whatever reason you don't need or want certain files or folders, you can disable them through this view. If you want to see the difference between what gets created, you can click back and toggle the simple script and/or create ui file fields, and this will alter the template for the tool that gets created. For now, we'll just create with the standard settings, so click Done. You'll then be prompted if you want to change projects - when a tool gets created, it actually makes a new IDE Project specifically for that tool. For now, click No - we'll just stay in the current project. If you expand the _Examples folder, you'll now see your SampleTool tool folder, and all the files from the Wizard are set up for you. If you right click on the main.pyw file and choose any of the three: You can see your tool running as a child of the IDE Editor, and/or as a standalone application. All you'll see right now is a blank Dialog with the title SampleTool. Pretty simple to get started, but also boring. Let's add a few GUI components. This should open the Qt Designer with your UI file. Note: If you're not using the standard Blur-installed version of Designer, you can register your version to the IDE by setting the 'BDEV_QT_DESIGNER' value in 'PYTHON_PATH/lib/site-packages/blurdev/resource/settings.ini' to the path of your Designer. Otherwise, you can drag & drop the file from the IDE Editor into an open instance of the Designer. We aren't going to go into Qt Designer tutorials - there are plenty of those online. Instead, I'll just say add 3 components: a tree named "uiBrowserTREE", a button called "uiLoadBTN", and a button called "uiSaveBTN". It doesn't matter how it's laid out for the sake of this tutorial, as long as the components are created and named properly. Save the UI, and rerun the code. You'll now see your tool with your components - without having to change any code! 
So now that you have your tool - you'll want to be able to register it to treegrunt. If you run Treegrunt, you'll notice that your Tool doesn't pop up. Whenever you modify the tools structure (add a tool, move a tool, rename a tool), you'll actually have to rebuild your XML index file. Right-click on the window and choose Environments >> Rebuild Index. Now you'll find General_Use_Tools >> Examples >> SampleTool and you can double-click on it to run it. That's all you have to do to register the tool to the system; the wizard took care of the rest.
http://code.google.com/p/blur-dev/wiki/PyQt
crawl-003
refinedweb
1,482
61.46
#include "cache.h"
#include "levenshtein.h"

/*
 * This function implements the Damerau-Levenshtein algorithm to
 * calculate a distance between strings.
 *
 * Basically, it says how many letters need to be swapped, substituted,
 * deleted from, or added to string1, at least, to get string2.
 *
 * The idea is to build a distance matrix for the substrings of both
 * strings. To avoid a large space complexity, only the last three rows
 * are kept in memory (if swaps had the same or higher cost as one deletion
 * plus one insertion, only two rows would be needed).
 *
 * At any stage, "i + 1" denotes the length of the current substring of
 * string1 that the distance is calculated for.
 *
 * row2 holds the current row, row1 the previous row (i.e. for the substring
 * of string1 of length "i"), and row0 the row before that.
 *
 * In other words, at the start of the big loop, row2[j + 1] contains the
 * Damerau-Levenshtein distance between the substring of string1 of length
 * "i" and the substring of string2 of length "j + 1".
 *
 * All the big loop does is determine the partial minimum-cost paths.
 *
 * It does so by calculating the costs of the path ending in characters
 * i (in string1) and j (in string2), respectively, given that the last
 * operation is a substitution, a swap, a deletion, or an insertion.
 *
 * This implementation allows the costs to be weighted:
 *
 * - w (as in "sWap")
 * - s (as in "Substitution")
 * - a (for insertion, AKA "Add")
 * - d (as in "Deletion")
 *
 * Note that this algorithm calculates a distance _iff_ d == a.
 */
int levenshtein(const char *string1, const char *string2,
		int w, int s, int a, int d)
{
	int len1 = strlen(string1), len2 = strlen(string2);
	int *row0, *row1, *row2;
	int i, j;

	ALLOC_ARRAY(row0, len2 + 1);
	ALLOC_ARRAY(row1, len2 + 1);
	ALLOC_ARRAY(row2, len2 + 1);

	for (j = 0; j <= len2; j++)
		row1[j] = j * a;
	for (i = 0; i < len1; i++) {
		int *dummy;

		row2[0] = (i + 1) * d;
		for (j = 0; j < len2; j++) {
			/* substitution */
			row2[j + 1] = row1[j] + s * (string1[i] != string2[j]);
			/* swap */
			if (i > 0 && j > 0 &&
			    string1[i - 1] == string2[j] &&
			    string1[i] == string2[j - 1] &&
			    row2[j + 1] > row0[j - 1] + w)
				row2[j + 1] = row0[j - 1] + w;
			/* deletion */
			if (row2[j + 1] > row1[j + 1] + d)
				row2[j + 1] = row1[j + 1] + d;
			/* insertion */
			if (row2[j + 1] > row2[j] + a)
				row2[j + 1] = row2[j] + a;
		}

		dummy = row0;
		row0 = row1;
		row1 = row2;
		row2 = dummy;
	}

	i = row1[len2];

	free(row0);
	free(row1);
	free(row2);

	return i;
}
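For experimenting with the cost weights, the C above ports almost line-for-line to Python. This is a hedged sketch, not git's code; the function and argument names simply mirror the C.

```python
def levenshtein(s1, s2, w=1, s=1, a=1, d=1):
    """Weighted Damerau-Levenshtein distance.

    Costs: w = swap, s = substitution, a = insertion ("add"), d = deletion.
    Mirrors the three-row rolling-buffer structure of the C version.
    """
    len1, len2 = len(s1), len(s2)
    row0 = [0] * (len2 + 1)
    row1 = [j * a for j in range(len2 + 1)]  # distance from the empty prefix
    row2 = [0] * (len2 + 1)

    for i in range(len1):
        row2[0] = (i + 1) * d
        for j in range(len2):
            # substitution (free if the characters match)
            row2[j + 1] = row1[j] + s * (s1[i] != s2[j])
            # swap of two adjacent characters
            if (i > 0 and j > 0
                    and s1[i - 1] == s2[j] and s1[i] == s2[j - 1]
                    and row2[j + 1] > row0[j - 1] + w):
                row2[j + 1] = row0[j - 1] + w
            # deletion
            if row2[j + 1] > row1[j + 1] + d:
                row2[j + 1] = row1[j + 1] + d
            # insertion
            if row2[j + 1] > row2[j] + a:
                row2[j + 1] = row2[j] + a
        # rotate the three rows, as the C does with pointer swaps
        row0, row1, row2 = row1, row2, row0

    return row1[len2]
```

With all costs equal to 1 this is the classic Damerau-Levenshtein distance, e.g. levenshtein("kitten", "sitting") returns 3; raising w above s + s makes swaps fall back to two substitutions.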
https://sources.debian.org/src/git/1:2.20.1-2+deb10u3/levenshtein.c/
CC-MAIN-2020-50
refinedweb
426
64.14
(One of my summaries of a talk at the 2015 django under the hood conference).

Jacob Kaplan-Moss talks about one of the oldest parts of Django: http. So: actually talking to the internet :-) HTTP in Django means Django's request/response cycle. Let's start with a very simple view:

def my_view(request):
    return HttpResponse("it works!")

What happens behind the scenes? Actually a lot. Django is a framework. The difference between a framework and a library is that a framework calls your code, instead of your program calling a library. So Django is in control. Django works with WSGI. So you have some WSGI server that wants to run an application. The application is in your my_app.wsgi.py file, which calls get_wsgi_application() in django.core.wsgi, which does django.setup() and an __init__() of a wsgi_object in the WSGIHandler/BaseHandler. In the end a wsgi_object goes back towards the WSGI server. It is important that django.setup() is called very early. This sets up the ORM, for instance. So most of the setup has happened before your own code is called. Jacob wants to focus a bit of time on an extension point that's often missed: you can write your own WSGI app, including adding WSGI middleware. WSGI middleware wraps the entire site in some behaviour. Much django middleware code could better be done as generic WSGI middleware. Examples: wsgi-oauth2. And whitenoise for serving static files like javascript. It performs better than similar things in django as it doesn't have so much overhead. Ok, back to Django. After the __init__(), Django starts up the middleware. Warning: each of the middleware classes is instantiated only once. So watch out when storing things on self as it can lead to memory leaks. Django middleware is often overused (just like regular WSGI middleware is often underused). A good example is django-secure. Some comments by Jacob on middleware: Back to our http request! 
This is the point at which the URL config is loaded. After the URLs are known, a resolve() figures out which view to use and which args and kwargs to pass on. Django has a feature that's almost never used: a request can have its own custom URL resolver. You could do things like multi-tenancy and internationalization, but Jacob couldn't find any good examples in open source code. Now we've got a view. Here it starts to get simple: Django just calls the view, which returns a response :-) There's only an exception for exceptions: those are used for error handling, 404s and 500s and so on. Now that we've got a response, first the middleware gets to have a go at modifying the response. Note that you can have "lazy responses" that are only rendered very late in the process. You could use this for complex composed views. Basically you don't return rendered content, but instead a renderer and a context. You can have lazy template responses, for instance. This allows you to modify the context in middleware instead of having to work on the rendered content. Gotcha on the "response finished" signal: it isn't reliably sent. It depends on your WSGI runner whether a close() method is called... So watch out with it. Django's request/response cycle is pretty elegant and you can extend it in lots of places. But.... big changes might come to it. Django channels might turn everything a bit on its head. Conceptually, you'll still have "the internet" and "your view" and "django in the middle". It'll all be wired together differently, though. It will also support things like websockets. (More explanation about django channels will be in a talk tomorrow). Image: Cycling on the former German "Ahrtalbahn" railway this summer.
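Returning to the generic-WSGI-middleware point from earlier: a WSGI middleware is just a callable that wraps another WSGI app, as defined by PEP 3333. A minimal hedged sketch (no Django and no real middleware package involved; the app and header names are illustrative):

```python
def simple_app(environ, start_response):
    """A trivial WSGI application, standing in for Django's WSGIHandler."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"it works!"]


class HeaderMiddleware:
    """Wraps any WSGI app and injects one extra response header."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def wrapped_start(status, headers, exc_info=None):
            # middleware gets a go at the response before the server sees it
            return start_response(status, headers + [("X-Wrapped", "yes")], exc_info)

        return self.app(environ, wrapped_start)


application = HeaderMiddleware(simple_app)
```

Because the middleware only speaks WSGI, it works under any WSGI server and in front of any framework, which is why site-wide concerns like static files (whitenoise) or auth can live here instead of in Django middleware.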
https://reinout.vanrees.org/weblog/2015/11/05/http-in-django.html
CC-MAIN-2019-04
refinedweb
644
68.97
This page shows how to migrate data stored in a ThirdPartyResource (TPR) to a CustomResourceDefinition (CRD). Kubernetes does not automatically migrate existing TPRs. This is due to API changes introduced as part of graduating to beta under a new name and API group. Instead, both TPR and CRD are available and operate independently in Kubernetes 1.7. Users must migrate each TPR one by one to preserve their data before upgrading to Kubernetes 1.8. The simplest way to migrate is to stop all clients that use a given TPR, then delete the TPR and start from scratch with a CRD. This page describes an optional process that eases the transition by migrating existing TPR data for you on a best-effort basis.

Rewrite the TPR definition

Clients that access the REST API for your custom resource should not need any changes. However, you will need to rewrite your TPR definition as a CRD. Make sure you specify values for the CRD fields that match what the server used to fill in for you with TPR. For example, if your ThirdPartyResource looks like this:

```yaml
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cron-tab.stable.example.com
description: "A specification of a Pod to run on a cron style schedule"
versions:
```

A matching CustomResourceDefinition could look like this:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  scope: Namespaced
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  names:
    kind: CronTab
    plural: crontabs
    singular: crontab
```

Install the CustomResourceDefinition

While the source TPR is still active, install the matching CRD with kubectl create. Existing TPR data remains accessible because TPRs take precedence over CRDs when both try to serve the same resource. After you create the CRD, make sure the Established condition goes to True. 
You can check it with a command like this:

kubectl get crd -o 'custom-columns=NAME:{.metadata.name},ESTABLISHED:{.status.conditions[?(@.type=="Established")].status}'

The output should look like this:

NAME                          ESTABLISHED
crontabs.stable.example.com   True

Stop all clients that use the TPR

The API server attempts to prevent TPR data for the resource from changing while it copies objects to the CRD, but it can't guarantee consistency in all cases, such as with multiple masters. Stopping clients, such as TPR-based custom controllers, helps to avoid inconsistencies in the copied data. In addition, clients that watch TPR data do not receive any more events once the migration begins. You must restart them after the migration completes so they start watching CRD data instead.

Back up TPR data

In case the data migration fails, save a copy of existing data for the resource:

kubectl get crontabs --all-namespaces -o yaml > crontabs.yaml

You should also save a copy of the TPR definition if you don't have one already:

kubectl get thirdpartyresource cron-tab.stable.example.com -o yaml --export > tpr.yaml

Delete the TPR definition

Normally, when you delete a TPR definition, the API server tries to clean up any objects stored in that resource. Because a matching CRD exists, the server copies objects to the CRD instead of deleting them.

kubectl delete thirdpartyresource cron-tab.stable.example.com

Verify the new CRD data

It can take up to 10 seconds for the TPR controller to notice when you delete the TPR definition and to initiate the migration. The TPR data remains accessible during this time. Once the migration completes, the resource begins serving through the CRD. 
Check that all your objects were correctly copied:

kubectl get crontabs --all-namespaces -o yaml

If the copy failed, you can quickly revert to the set of objects that existed just before the migration by recreating the TPR definition:

kubectl create -f tpr.yaml

Restart clients

After verifying the CRD data, restart any clients you stopped before the migration, such as custom controllers and other watchers. These clients now access CRD data when they make requests on the same API endpoints that the TPR previously served.
https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/migrate-third-party-resource/
CC-MAIN-2018-30
refinedweb
666
53.51
Count the Number of Lines in a Text File in Java

Hello Learners, today we are going to learn how to count the number of lines in a text file using Java. You just need a basic idea of Java file handling, and there you go.

Counting the Lines of a File: Basic idea

- FileReader Class: This class is used to read character data from a file.
- BufferedReader: This class is used to read text from a character input stream.
- readLine() method: This method belongs to the BufferedReader class. It is used to read a single line from a file, and that's exactly what we need here.

We are going to use the above two classes in our code, along with the readLine method that makes it easy for us. So, here we have a sample file containing three lines. See the file below:

sample.txt

Greeting from CodeSpeedy!
Today you will learn how to count the number of lines in a text file.
you are reading this Article by Avantika Singh.

import java.io.*;

public class linesInFile {
    public static void main(String[] arg) throws IOException {
        File f = new File("C:\\Users\\lenovo\\Documents\\sample.txt");
        int lines = 0;
        FileReader fr = new FileReader(f);
        BufferedReader br = new BufferedReader(fr);
        while (br.readLine() != null) {
            lines++;
        }
        br.close();
        System.out.println("no. of lines in the file: " + lines);
    }
}

OUTPUT:

no. of lines in the file: 3

Explanation: The code is very simple to understand. Follow the below steps:

- At first, you create a File object providing the path of the file you want to read.
- In the next step, you create an object of the BufferedReader class to read the input from the file.
- Declare and initialize a variable as zero for holding the number of lines.
- Call the readLine method on the BufferedReader object in a loop.
- Loop until there isn't a single line remaining in the file, incrementing the lines variable each time.
- After the loop ends, do not forget to close the reader (which also closes the underlying file) and print the number of lines.

That's it, done. Try to do it on your own; it's a simple code. 
Click on the link to learn more about Java File Handling. So, that’s all for now about how to count the number of lines in a text file using Java.
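By the way, if you are on Java 8 or newer, the same count can be done with java.nio in one line via Files.lines(). This is a hedged alternative sketch, not part of the tutorial above; it writes its own temporary file only so the example is self-contained.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.stream.Stream;

public class LineCounter {
    public static long countLines(Path file) throws IOException {
        // Files.lines streams the file's lines lazily;
        // try-with-resources closes the underlying file for us.
        try (Stream<String> lines = Files.lines(file)) {
            return lines.count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("sample", ".txt");
        Files.write(tmp, Arrays.asList("line one", "line two", "line three"));
        System.out.println("no. of lines in the file: " + countLines(tmp));
        Files.delete(tmp);
    }
}
```

The stream approach avoids the manual loop and counter variable entirely, at the cost of requiring Java 8+.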
https://www.codespeedy.com/count-the-number-of-lines-in-a-text-file-in-java/
CC-MAIN-2020-45
refinedweb
395
82.95
On 5/1/07, Rex Dieter <rdieter math unl edu> wrote: That doesn't fly with me. Compare that with the repotag-less situation of each repo providing packages with *identical* EVRs. Which is worse?

Once you have multiple vendors in the mix, each providing packages which may install over a package from another vendor, there's no one-size-fits-all version comparison scheme which will hold. So to answer the question of which is worse: they are both bad enough that you're screwed either way. The only way to really solve the problem is to hold vendor as a separate metric, tested in a different way than version comparison. Vendor-ness simply doesn't translate to ordering in a programmatic way. Instead, you hold vendor-ness as a parallel collection of namespaces in which you do version comparisons internally. So if I have package foo installed from the vendor called babyeater, and updates for package foo are available from both the babyeater and lazybastard vendors, regardless of how those versions compare, the management tool can be told to stay inside the babyeater namespace for package updates for foo (or be told to ignore vendor-ness completely and gobble the highest-versioned package based on version comparison rules). If we had a way to authoritatively identify vendors (hint: gpg sigs), then you could programmatically instruct a depsolver to always prefer updates from the vendor of the package for which you already have packages installed. If such functionality was possible, I bet it would be implemented in a yum plugin called protectvendor. -jef"I'm not a glass is half-empty or half-full sort of person. My glass is filled to the brim.. one part tequila, one part nerve gas"spaleta
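The vendor-namespace idea above sketches out in a few lines of Python. This is hypothetical illustration, not yum's code: the vendor names are Jef's examples, and version tuples stand in for real EVR comparison.

```python
def pick_update(installed, candidates, ignore_vendor=False):
    """Pick an update for an installed package.

    installed:  (vendor, version) of what's on the system.
    candidates: list of (vendor, version) available from all repos.
    By default, stay inside the installed package's vendor namespace;
    with ignore_vendor=True, just gobble the highest version anywhere.
    """
    vendor, version = installed
    pool = candidates if ignore_vendor else [
        c for c in candidates if c[0] == vendor
    ]
    newer = [c for c in pool if c[1] > version]
    return max(newer, key=lambda c: c[1]) if newer else None
```

Note that vendor-ness never enters the version comparison itself; it only restricts which candidates get compared, which is the whole point of treating it as a parallel namespace rather than an ordering.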
https://www.redhat.com/archives/fedora-devel-list/2007-May/msg00094.html
CC-MAIN-2015-18
refinedweb
298
60.85
Overview

The purpose of this article is to provide a step-by-step guide for installing and configuring a simple two-node GPFS cluster on AIX. The following diagram provides a visual representation of the cluster configuration.

Figure 1. Visual representation of the cluster configuration

GPFS

GPFS provides a true "shared file system" capability, with excellent performance and scalability. GPFS allows concurrent access for a group of computers to a common set of file data over a common storage area network (SAN) infrastructure, a network, or a mix of connection types. GPFS provides storage management, information lifecycle management tools, and centralized administration and allows for shared access to file systems from remote GPFS clusters providing a global namespace. GPFS offers data tiering, replication, and many other advanced features. The configuration can be as simple or complex as you want.

Preparing the AIX environment for GPFS

We'll assume that you have already purchased the necessary licenses and software for GPFS. With a copy of the GPFS software available, copy the GPFS file sets to each of the AIX nodes on which you need to run GPFS. In this article, each partition was built with AIX version 7.1, Technology Level 2, Service Pack 1:

# oslevel -s
7100-02-01-1245

Each AIX system is configured with seven SAN disks. One disk is used for the AIX operating system (rootvg) and the remaining six disks are used by GPFS.

# lspv
hdisk0          00c334b6af00e77b          rootvg          active
hdisk1          none                      none
hdisk2          none                      none
hdisk3          none                      none
hdisk4          none                      none
hdisk5          none                      none
hdisk6          none                      none

The SAN disks (to be used with GPFS) are assigned to both nodes (that is, they are shared between both partitions). Both AIX partitions are configured with virtual Fibre Channel adapters and access their shared storage through the SAN, as shown in the following figure.

Figure 2. 
Deployment diagram

The following attributes, shown in the table below, were changed for each hdisk, using the chdev command.

Table 1. Disk attribute changes

Attribute        Value
reserve_policy   no_reserve
algorithm        round_robin
queue_depth      32

The lsattr command can be used to verify that each attribute is set to the correct value:

# lsattr -El hdisk6 -a queue_depth -a algorithm -a reserve_policy
algorithm       round_robin   Algorithm        True
queue_depth     32            Queue DEPTH      True
reserve_policy  no_reserve    Reserve Policy   True

The next step is to configure Secure Shell (SSH) so that both nodes can communicate with each other. When building a GPFS cluster, you must ensure that the nodes in the cluster have SSH configured correctly so that they do not require password authentication. This requires the configuration of Rivest-Shamir-Adleman algorithm (RSA) key pairs for the root user's SSH configuration. This configuration needs to be set up in both directions, to all nodes in the GPFS cluster. The mm commands in GPFS require authentication in order for them to work. If the keys are not configured correctly, the commands will prompt for the root password each time and the GPFS cluster might fail. A good way to test this is to ensure that the ssh command can work unhindered by a request for the root password. You can refer to the step-by-step guide for configuring SSH keys on AIX:

You can confirm that the nodes can communicate with each other (unhindered) using SSH with the following commands on each node:

aixlpar1# ssh aixlpar1a date
aixlpar1# ssh aixlpar2a date
aixlpar2# ssh aixlpar2a date
aixlpar2# ssh aixlpar1a date

With SSH working, configure the WCOLL (Working Collective) environment variable for the root user. For example, create a text file that lists each of the nodes, one per line:

# vi /usr/local/etc/gfps-nodes.list
aixlpar1a
aixlpar2a

Copy the node file to all nodes in the cluster. Add the following entry to the root user's .kshrc file. This will allow the root user to execute commands on all nodes in the GPFS cluster using the dsh or mmdsh commands. 
export WCOLL=/usr/local/etc/gfps-nodes.list

The root user's PATH should be modified to ensure that all GPFS mm commands are available to the system administrator. Add the following entry to the root user's .kshrc file:

export PATH=$PATH:/usr/sbin/acct:/usr/lpp/mmfs/bin

The /etc/hosts file should be consistent across all nodes in the GPFS cluster. Each IP address for each node must be added to /etc/hosts on each cluster node. This is recommended, even when Domain Name System (DNS) is configured on each node. For example:

# GPFS_CLUSTER1 Cluster - Test
#
# GPFS Admin network - en0
10.1.5.110      aixlpar1a       aixlpar1
10.1.5.120      aixlpar2a       aixlpar2
#
# GPFS Daemon - Private Network - en1
10.1.7.110      aixlpar1p
10.1.7.120      aixlpar2p

Installing GPFS on AIX

Now that the AIX environment is configured, the next step is to install the GPFS software on each node. This is a very straightforward process. We will install GPFS version 3.5 (base-level file sets) and then apply the latest updates to bring the level up to 3.5.0.10. There are only three file sets to install. You can use System Management Interface Tool (SMIT) or the installp command to install the software.

aixlpar1 : /tmp/cg/GPFS/gpfs35_aix # inutoc .
aixlpar1 : /tmp/cg/GPFS/gpfs35_aix # ls -ltr
total 123024
-rw-r--r-- 1 root system   175104 Jun  7 2012 gpfs.msg.en_US
-rw-r--r-- 1 root system   868352 Jun  7 2012 gpfs.docs.data
-rw-r--r-- 1 root system 61939712 Jun  7 2012 gpfs.base
-rw-r--r-- 1 root system     3549 Apr 26 16:37 .toc
aixlpar1 : /tmp/cg/GPFS/gpfs35_aix # installp -Y -d . ALL

Repeat this operation on the second node. You can verify that the base-level GPFS file sets are installed by using the lslpp command:

# lslpp -l | grep -i gpfs
  gpfs.base      3.5.0.0  COMMITTED  GPFS File Manager
  gpfs.msg.en_US 3.5.0.0  COMMITTED  GPFS Server Messages - U.S.
  gpfs.base      3.5.0.0  COMMITTED  GPFS File Manager
  gpfs.docs.data 3.5.0.0  COMMITTED  GPFS Server Manpages and

The latest GPFS updates are installed next. 
Again, you can use SMIT (or installp) to update the file sets to the latest level. The lslpp command can be used to verify that the GPFS file sets have been updated.

aixlpar1 : /tmp/cg/gpfs_fixes_3510 # inutoc .
aixlpar1 : /tmp/cg/gpfs_fixes_3510 # ls -ltr
total 580864
-rw-r--r-- 1 30007 bin      910336 Feb  9 00:10 U858102.gpfs.docs.data.bff
-rw-r--r-- 1 30007 bin    47887360 May  8 08:48 U859646.gpfs.base.bff
-rw-r--r-- 1 30007 bin    99655680 May  8 08:48 U859647.gpfs.gnr.bff
-rw-r--r-- 1 30007 bin      193536 May  8 08:48 U859648.gpfs.msg.en_US.bff
-rw-r--r-- 1 root  system     4591 May 10 05:15 changelog
-rw-r--r-- 1 root  system     3640 May 10 05:42 README
-rw-r----- 1 root  system    55931 May 15 10:23 GPFS-3.5.0.10-power-AIX.readme.html
-rw-r----- 1 root  system 148664320 May 15 10:28 GPFS-3.5.0.10-power-AIX.tar
-rw-r--r-- 1 root  system     8946 May 15 14:48 .toc
aixlpar1 : /tmp/cg/gpfs_fixes_3510 # smitty update_all

                         COMMAND STATUS

Command: OK            stdout: yes           stderr: no

Before command completion, additional instructions may appear below.

[MORE...59]

Finished processing all filesets.  (Total time:  18 secs).

+-----------------------------------------------------------------------------+
                        Pre-commit Verification...
+-----------------------------------------------------------------------------+
Verifying requisites...done
Results...

SUCCESSES
---------
  Filesets listed in this section passed pre-commit verification
  and will be committed.

  Selected Filesets
  -----------------
  gpfs.base 3.5.0.10                          # GPFS File Manager
  gpfs.msg.en_US 3.5.0.9                      # GPFS Server Messages - U.S.
  ...

  << End of Success Section >>

+-----------------------------------------------------------------------------+
                         Committing Software...
+-----------------------------------------------------------------------------+

installp: COMMITTING software for:
        gpfs.base 3.5.0.10

Filesets processed:  1 of 2  (Total time:  18 secs).

installp: COMMITTING software for:
        gpfs.msg.en_US 3.5.0.9

Finished processing all filesets.  (Total time:  18 secs). 
+-----------------------------------------------------------------------------+
                                Summaries:
+-----------------------------------------------------------------------------+

Installation Summary
--------------------
Name                        Level           Part        Event       Result
-------------------------------------------------------------------------------
gpfs.msg.en_US              3.5.0.9         USR         APPLY       SUCCESS
gpfs.base                   3.5.0.10        USR         APPLY       SUCCESS
gpfs.base                   3.5.0.10        ROOT        APPLY       SUCCESS
gpfs.base                   3.5.0.10        USR         COMMIT      SUCCESS
gpfs.base                   3.5.0.10        ROOT        COMMIT      SUCCESS
gpfs.msg.en_US              3.5.0.9         USR         COMMIT      SUCCESS

aixlpar1 : /tmp/cg/gpfs_fixes_3510 # lslpp -l gpfs\*
  Fileset                      Level  State      Description
  ----------------------------------------------------------------------------
Path: /usr/lib/objrepos
  gpfs.base                 3.5.0.10  COMMITTED  GPFS File Manager
  gpfs.msg.en_US             3.5.0.9  COMMITTED  GPFS Server Messages - U.S.
                                                 English

Path: /etc/objrepos
  gpfs.base                 3.5.0.10  COMMITTED  GPFS File Manager

Path: /usr/share/lib/objrepos
  gpfs.docs.data             3.5.0.3  COMMITTED  GPFS Server Manpages and
                                                 Documentation

Repeat the update on the second node.

Configuring the GPFS cluster

Now that GPFS is installed, we can create a cluster across both AIX systems. First, we create a text file that contains a list of each of the nodes and their GPFS description and purpose. We have chosen to configure each node as a GPFS quorum manager. Each node is a GPFS server. If you are unsure of how many quorum managers and GPFS servers are required in your environment, refer to the GPFS Concepts, Planning, and Installation document for guidance.

aixlpar1 : /tmp/cg # cat gpfs-nodes.txt
aixlpar2p:quorum-manager:
aixlpar1p:quorum-manager:

The cluster is created using the mmcrcluster command.* The GPFS cluster name is GPFS_CLUSTER1. The primary node (or NSD server; discussed in the next section) is aixlpar1p and the secondary node is aixlpar2p. We have specified that ssh and scp will be used for cluster communication and administration. 
aixlpar1 : /tmp/cg # mmcrcluster -C GPFS_CLUSTER1 -N /tmp/cg/gpfs-nodes.txt -p aixlpar1p -s aixlpar2p -r /usr/bin/ssh -R /usr/bin/scp
Mon Apr 29 12:01:21 EET 2013: mmcrcluster: Processing node aixlpar2
Mon Apr 29 12:01:24 EET 2013: mmcrcluster: Processing node aixlpar1.

*Note: To ensure that GPFS daemon communication occurs over the private GPFS network, during cluster creation, we specified the GPFS daemon node names (that is, host names ending with p). There are two types of communication to consider in a GPFS cluster: administrative commands and daemon communication. Administrative commands use remote shell (ssh, rsh, or other) and socket-based communications. It is considered a best practice to ensure that all GPFS daemon communication is performed over a private network. Refer to the GPFS developerWorks wiki for further information and discussion on GPFS network configuration considerations and practices. To use a separate network for administration command communication, you can change the "Admin node name" using the mmchnode command. In this example, the separate network address is designated by "a" (for Administration) at the end of the node name, aixlpar1a for example.

# mmchnode --admin-interface=aixlpar1p -N aixlpar1a
# mmchnode --admin-interface=aixlpar2p -N aixlpar2a

The mmcrcluster command warned us that not all nodes have the appropriate GPFS license designation. We use the mmchlicense command to assign a GPFS server license to both the nodes in the cluster.

aixlpar1 : / # mmchlicense server --accept -N aixlpar1a,aixlpar2a

The following nodes will be designated as possessing GPFS server licenses:
        aixlpar2a
        aixlpar1a
mmchlicense: Command successfully completed
mmchlicense: Propagating the cluster configuration data to all
  affected nodes. This is an asynchronous process.

The cluster is now configured. The mmlscluster command can be used to display cluster information. 
# mmlscluster GPFS cluster information ======================== GPFS cluster name: GPFS_CLUSTER1.aixlpar1p GPFS cluster id: 8831612751005471855 GPFS UID domain: GPFS_CLUSTER.aixlpar1p Remote shell command: /usr/bin/ssh Remote file copy command: /usr/bin/scp GPFS cluster configuration servers: ----------------------------------- Primary server: aixlpar1p Secondary server: aixlpar2p Node Daemon node name IP address Admin node name Designation ---------------------------------------------------------------------- 1 aixlpar2p 10.1.7.120 aixlpar2a quorum-manager 2 aixlpar1p 10.1.7.110 aixlpar1a quorum-manager At this point, you can use the mmdsh command to verify that the SSH communication is working as expected on all GPFS nodes. This runs a command on all the nodes in the cluster. If there is an SSH configuration problem, this command highlights the issues. aixlpar1 : / # mmdsh date aixlpar1: Mon Apr 29 12:05:47 EET 2013 aixlpar2: Mon Apr 29 12:05:47 EET 2013 aixlpar2 : / # mmdsh date aixlpar1: Mon Apr 29 12:06:41 EET 2013 aixlpar2: Mon Apr 29 12:06:41 EET 2013 Configuring Network Shared Disks GPFS provides a block-level interface over TCP/IP networks called the Network Shared Disk (NSD) protocol. Whether using the NSD protocol or a direct attachment to the SAN, the mounted file system looks the same to the users and application (GPFS transparently handles I/O requests). A shared disk cluster is the most basic environment. In this configuration, the storage is directly attached to all the systems in the cluster. The direct connection means that each shared block device is available concurrently to all of the nodes in the GPFS cluster. Direct access means that the storage is accessible using a Small Computer System Interface (SCSI) or other block-level protocol using a SAN. The following figure illustrates a GPFS cluster where all nodes are connected to a common Fibre Channel SAN and storage device. 
The nodes are connected to the storage using the SAN and to each other using a local area network (LAN). Data used by applications running on the GPFS nodes flows over the SAN, and GPFS control information flows among the GPFS instances in the cluster over the LAN. This configuration is optimal when all nodes in the cluster need the highest performance access to the data. Figure 3. Overview diagram of the GPFS cluster The mmcrnsd command is used to create NSD devices for GPFS. First, we create a text file that contains a list of each of the hdisk names, their GPFS designation (data, metadata, both*), and the NSD name. hdisk1:::dataAndMetadata::nsd01:: hdisk2:::dataAndMetadata::nsd02:: hdisk3:::dataAndMetadata::nsd03:: hdisk4:::dataAndMetadata::nsd04:: hdisk5:::dataAndMetadata::nsd05:: hdisk6:::dataAndMetadata::nsd06:: *Note: Refer to the GPFS Concepts, Planning, and Installation document for guidance on selecting NSD device usage types. Then, run the mmcrnsd command to create the NSD devices. # mmcrnsd -F /tmp/cg/gpfs-disks.txt mmcrnsd: Processing disk hdisk1 mmcrnsd: Processing disk hdisk2 mmcrnsd: Processing disk hdisk3 mmcrnsd: Processing disk hdisk4 mmcrnsd: Processing disk hdisk5 mmcrnsd: Processing disk hdisk6 mmcrnsd: Propagating the cluster configuration data to all affected nodes. This is an asynchronous process. The lspv command now shows the NSD name associated with each AIX hdisk. # lspv hdisk0 00c334b6af00e77b rootvg active hdisk1 none nsd01 hdisk2 none nsd02 hdisk3 none nsd03 hdisk4 none nsd04 hdisk5 none nsd05 hdisk6 none nsd06 The mmlsnsd command displays information for each NSD, in particular which GPFS file system is associated with each device. At this point, we have not created a GPFS file system. So each disk is currently free. You'll notice that under NSD servers each device is shown as directly attached. This is expected for SAN-attached disks. 
# mmlsnsd

File system   Disk name    NSD servers
---------------------------------------------------------------------------
(free disk)   nsd01        (directly attached)
(free disk)   nsd02        (directly attached)
(free disk)   nsd03        (directly attached)
(free disk)   nsd04        (directly attached)
(free disk)   nsd05        (directly attached)
(free disk)   nsd06        (directly attached)

GPFS file system configuration

Next, the GPFS file systems can be configured. The mmcrfs command is used to create the file systems. We have chosen to create two file systems: /gpfs and /gpfs1. The /gpfs (gpfs0) file system will be configured with a GPFS block size of 256K (the default) and /gpfs1 (gpfs1) with a block size of 1M*. Both file systems are configured for replication (-M 2 -R 2). The /tmp/cg/gpfs-disk.txt file is specified for /gpfs and /tmp/cg/gpfs1-disk.txt for /gpfs1. These files specify which NSD devices are used for each file system during creation.

*Note: Choose your block size carefully. It is not possible to change this value after the GPFS device has been created.

# cat /tmp/cg/gpfs-disk.txt
nsd01:::dataAndMetadata:-1::system
nsd02:::dataAndMetadata:-1::system
nsd03:::dataAndMetadata:-1::system

# cat /tmp/cg/gpfs1-disk.txt
nsd04:::dataAndMetadata:-1::system
nsd05:::dataAndMetadata:-1::system
nsd06:::dataAndMetadata:-1::system

# mmcrfs /gpfs gpfs0 -F /tmp/cg/gpfs-disk.txt -M 2 -R 2
# mmcrfs /gpfs1 gpfs1 -F /tmp/cg/gpfs1-disk.txt -M 2 -R 2 -B 1M

The mmlsnsd command displays the NSD configuration per file system. NSD devices 1 to 3 are assigned to the gpfs0 device and devices 4 to 6 are assigned to gpfs1.

# mmlsnsd

File system   Disk name    NSD servers
---------------------------------------------------------------------------
gpfs0         nsd01        (directly attached)
gpfs0         nsd02        (directly attached)
gpfs0         nsd03        (directly attached)
gpfs1         nsd04        (directly attached)
gpfs1         nsd05        (directly attached)
gpfs1         nsd06        (directly attached)

Both GPFS file systems are now available on both nodes.
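The NSD descriptor files shown above are easy to mistype when a cluster has many disks. As a sketch, the same two files can be generated with a small loop; the NSD names and file names below are the ones used in this article:

```shell
# Generate the per-file-system NSD descriptor files used above
for i in 1 2 3; do
  echo "nsd0${i}:::dataAndMetadata:-1::system"
done > gpfs-disk.txt

for i in 4 5 6; do
  echo "nsd0${i}:::dataAndMetadata:-1::system"
done > gpfs1-disk.txt

# Show both files to confirm the contents
cat gpfs-disk.txt gpfs1-disk.txt
```

The resulting files can then be passed to mmcrfs with -F, exactly as in the commands above.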
aixlpar1 : / # df -g Filesystem GB blocks Free %Used Iused %Iused Mounted on /dev/hd4 1.00 0.89 12% 5211 3% / /dev/hd2 3.31 0.96 71% 53415 18% /usr /dev/hd9var 2.00 1.70 16% 5831 2% /var /dev/hd3 2.00 1.36 33% 177 1% /tmp /dev/hd1 2.00 2.00 1% 219 1% /home /proc - - - - - /proc /dev/hd10opt 1.00 0.79 21% 3693 2% /opt /dev/local 1.00 0.97 3% 333 1% /usr/local /dev/loglv 1.00 1.00 1% 54 1% /var/log /dev/tsmlog 1.00 1.00 1% 7 1% /var/tsm/log /dev/hd11admin 0.12 0.12 1% 13 1% /admin /dev/optIBMlv 2.00 1.99 1% 17 1% /opt/IBM /dev/gpfs1 150.00 147.69 2% 4041 3% /gpfs1 /dev/gpfs0 150.00 147.81 2% 4041 7% /gpfs The mmdsh command can be used here to quickly check the file system status on all the nodes. aixlpar1 : / # mmdsh df -g | grep gpfs aixlpar2: /dev/gpfs0 150.00 147.81 2% 4041 7% /gpfs aixlpar2: /dev/gpfs1 150.00 147.69 2% 4041 3% /gpfs1 aixlpar1: /dev/gpfs1 150.00 147.69 2% 4041 3% /gpfs1 aixlpar1: /dev/gpfs0 150.00 147.81 2% 4041 7% /gpfs If more detailed information is required, the mmdf command can be used. 
aixlpar1 : /gpfs # mmdf gpfs0 --block-size=auto disk disk size failure holds holds free free name group metadata data in full blocks in fragments --------------- ------------- -------- -------- ----- -------------------- ------------ Disks in storage pool: system (Maximum disk size allowed is 422 GB) nsd01 50G -1 yes yes 49.27G ( 99%) 872K ( 0%) nsd02 50G -1 yes yes 49.27G ( 99%) 936K ( 0%) nsd03 50G -1 yes yes 49.27G ( 99%) 696K ( 0%) ------------- -------------------- ------------------- (pool total) 150G 147.8G ( 99%) 2.445M ( 0%) ============= ==================== =================== (total) 150G 147.8G ( 99%) 2.445M ( 0%) Inode Information ----------------- Number of used inodes: 4040 Number of free inodes: 62008 Number of allocated inodes: 66048 Maximum number of inodes: 66048 aixlpar1 : /gpfs # mmdf gpfs1 --block-size=auto disk disk size failure holds holds free free name group metadata data in full blocks in fragments --------------- ------------- -------- -------- ----- -------------------- Disks in storage pool: system (Maximum disk size allowed is 784 GB) nsd04 50G -1 yes yes 49.55G ( 99%) 1.938M ( 00%) nsd05 50G -1 yes yes 49.56G ( 99%) 992K ( 0%) nsd06 50G -1 yes yes 49.56G ( 99%) 1.906M ( 00%) ------------- -------------------- ------------------- (pool total) 150G 148.7G ( 99%) 4.812M ( 00%) ============= ==================== =================== (total) 150G 148.7G ( 99%) 4.812M ( 00%) Inode Information ----------------- Number of used inodes: 4040 Number of free inodes: 155704 Number of allocated inodes: 159744 Maximum number of inodes: 159744 Node quorum with tiebreaker disks Tiebreaker disks are recommended when you have a two-node cluster or you have a cluster where all of the nodes are SAN-attached to a common set of logical unit numbers (LUNs) and you want to continue to serve data with a single surviving node. Typically, tiebreaker disks are only used in two-node clusters. 
Tiebreaker disks are not special NSDs; you can use any NSD as a tiebreaker disk. In this example, we chose three (out of six) NSD devices as tiebreaker disks. We stopped GPFS on all nodes and configured the cluster accordingly. # mmshutdown -a # mmchconfig tiebreakerDisks="nsd01;nsd03;nsd05" # mmstartup -a Cluster daemon status There are two GPFS daemons (processes) that remain active while GPFS is active ( mmfsd64 and runmmfs). # ps -ef | grep mmfs root 4784176 5505220 0 May 20 - 0:27 /usr/lpp/mmfs/bin/aix64/mmfsd64 root 5505220 1 0 May 20 - 0:00 /usr/lpp/mmfs/bin/mmksh /usr/lpp/mmfs/bin/runmmfs You can use the mmgetstate command to view the status of the GPFS daemons on all the nodes in the cluster. # mmgetstate -aLs Node number Node name Quorum Nodes up Total nodes GPFS state Remarks ------------------------------------------------------------------------------------ 1 aixlpar2a 1* 2 2 active quorum node 2 aixlpar1a 1* 2 2 active quorum node Summary information --------------------- Number of nodes defined in the cluster: 2 Number of local nodes active in the cluster: 2 Number of remote nodes joined in this cluster: 0 Number of quorum nodes defined in the cluster: 2 Number of quorum nodes active in the cluster: 2 Quorum = 1*, Quorum achieved Summary Congratulations! You've just configured your first GPFS cluster. In this article, you've learnt how to build a simple two-node GPFS cluster on AIX. This type of configuration can be easily deployed to support clustered workload with high availability requirements, for example an MQ multi-instance cluster. GPFS offers many configuration options; you can spend a lot of time planning for a GPFS cluster. If you are seriously considering a GPFS deployment, I encourage you to read all of the available GPFS documentation in the Resources section of this article. Resources The following resources were referenced during the creation of this article..
http://www.ibm.com/developerworks/aix/library/au-aix-building-two-node-gpfs-cluster/index.html
Thanks, guys! Great work!

Max, Chris, thanks! I updated the documentation here:

Thanks Max! Added the shade plugin; now the jar can run.

> I think that something is wrong in the bundle-plugin configuration

The packaged jar contains duplicate entries for different packages in /org/apache/cxf/, and probably for others. Maybe you should use the Maven Shade plugin; here is the example I know!

I got down the wrong rabbit hole, thanks for the help, both of you heh...

Max,

> ... That exception mapper was lost after the transition from Jersey to CXF, so we had a 500 error instead of 415.

Right. The good thing is that we know the cause, and as I indicated, we will get to the optional class scanning support.

Update: Max, sure, we can talk on Jabber, please share your id with me on #cxf or post here

- crap, just saw Max's comment. I'm going to let this sit for a while and make sure Max and I can fully run this, before closing the issue. We're close though!

- for the intent of this issue, I think that the functionality is complete. We can open up new issues to track further improvements and bugs. Thanks Max, Sergey, Ingo, and others who have contributed, and to Jukka for the original idea and spec!

We have another problem with the Tika server. We combine all dependency jars into one big jar that can be run via 'java -jar tika-server.jar'. It includes Tika with all parsers, the web server, etc. When I try to run it, I get the following exception:

SEVERE: Can't start
org.apache.cxf.service.factory.ServiceConstructionException
	at org.apache.cxf.jaxrs.JAXRSServerFactoryBean.create(JAXRSServerFactoryBean.java:190)
	at org.apache.tika.server.TikaServerCli.main(TikaServerCli.java:92)
Caused by: org.apache.cxf.BusException: No DestinationFactory was found for the namespace.
	at org.apache.cxf.transport.DestinationFactoryManagerImpl.getDestinationFactory(DestinationFactoryManagerImpl.java:126)
	at org.apache.cxf.endpoint.ServerImpl.initDestination(ServerImpl.java:88)
	at org.apache.cxf.endpoint.ServerImpl.<init>(ServerImpl.java:72)
	at org.apache.cxf.jaxrs.JAXRSServerFactoryBean.create(JAXRSServerFactoryBean.java:151)
	... 1 more

I think that something is wrong in the bundle-plugin configuration.

... but the test failed because TikaExceptionMapper was not added to the providers list.

Max, you're totally right! In r1306883, I committed some cleanup, removing the FIXME and uncommenting @Test, and all tests pass. I'm going to mark this issue as resolved now! We can track further progress and updates in other issues. Thanks for the help here!

> FYI I do not understand how having TikaExceptionMapper registered can result in 415 being returned, I'm looking at it and seeing no traces of 415, can you clarify please?

I'll try to explain. Tika server's resources can handle any input mime-type. When we do not specify a mime type in our PUT request (or specify something generic like 'application/octet-stream'), Tika uses its own mime-type detector to detect the type and choose a parser. When we specify a mime-type, it skips the detection stage and chooses the parser that handles the specified document type. When we can't handle the specified mime-type, when we can't detect it, or when we do not have a parser for that type, we throw WebApplicationException(Response.Status.UNSUPPORTED_MEDIA_TYPE) - the 415 code. Tika's parser framework wraps that exception in a TikaException. TikaExceptionMapper unwraps it:

if (e.getCause() != null && e.getCause() instanceof WebApplicationException) {
    return ((WebApplicationException) e.getCause()).getResponse();
}

That exception mapper was lost after the transition from Jersey to CXF, so we had a 500 error instead of 415.

PS: maybe we can speak Russian on jabber?
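The unwrapping logic quoted above is plain cause-chain traversal. Here is a self-contained sketch of the same pattern using only JDK types; IllegalStateException stands in for the JAX-RS WebApplicationException, which is not available outside a JAX-RS runtime:

```java
public class UnwrapDemo {
    // Walk the cause chain and return the first throwable of the target type,
    // or null if none is found -- the same idea as TikaExceptionMapper above.
    static Throwable unwrap(Throwable t, Class<? extends Throwable> target) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (target.isInstance(c)) {
                return c;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // A stand-in for a TikaException wrapping a WebApplicationException
        Exception inner = new IllegalStateException("415 UNSUPPORTED_MEDIA_TYPE");
        Exception wrapped = new RuntimeException("parse failed", inner);
        Throwable found = unwrap(wrapped, IllegalStateException.class);
        System.out.println(found.getMessage());
    }
}
```

A real mapper registered with CXF would return the unwrapped exception's response to the client instead of printing it.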
Hey Max: The cool part is that we reduce a bunch of the Maven dependencies with CXF and we are eating our own dog food. CXF implementation looks much heavier than Jersey, look at "mvn dependency:tree" I guess here I was talking more about simply only having to rely on one Maven dependency tag in the tika-server pom.xml for cxf-rt-frontend-jars, rather than jersey server + core, and the other dependencies we used to have. If you look at the pom.xml, the deps are now reduced. That's what I was thinking (maybe a side effect?) > I do not completely understand your discussion about 415, but the test failed because TikaExceptionMapper was not added to providers list. What do you not understand ? FYI I do not understand how having TikaExceptionMapper registered can result in 415 being returned, I'm looking at it and seeing no traces of 415, can you clarify please ? > (by the way, maybe CXF supports classpath scanning like Jersey?) No it does not yet. It was a very specific decision - IMHO the random class scanning is impractical in many cases and causes more troubles than it's worth and if of little use when the custom providers have to be configured in the per-endpoint specific way as in case with most interesting applications. 
However I do accept that for simple mappers it can make sense, though I'm not sure what is simpler, restricting the packages to scan or just go and register required providers , I prefer the latter, but please see# In CXF 2.6.0 FIQL search extensions got moved to the new module, so it is time to optionally support it > CXF implementation looks much heavier than Jersey, look at "mvn dependency:tree" I guess I'd be happy to discuss this issue elsewhere, but since you brought it up, here is the 2.6.0-SNAPSHOT one: [INFO] org.apache.cxf:cxf-rt-frontend-jaxrs:jar:2.6.0-SNAPSHOT [INFO] +- org.apache.aries.blueprint:org.apache.aries.blueprint.core:jar:0.3.1:provided [INFO] | +- org.apache.aries.blueprint:org.apache.aries.blueprint.api:jar:0.3.1:provided [INFO] | +- org.apache.aries:org.apache.aries.util:jar:0.3:provided [INFO] | +- org.slf4j:slf4j-api:jar:1.6.2:provided (version managed from 1.5.11) [INFO] | +- org.apache.aries.testsupport:org.apache.aries.testsupport.unit:jar:0.3:provided [INFO] | - org.apache.aries.proxy:org.apache.aries.proxy.api:jar:0.3:provided [INFO] +- org.osgi:org.osgi.core:jar:4.2.0:provided [INFO] +- junit:junit:jar:4.8.2:test ************* [INFO] +- org.apache.cxf:cxf-api:jar:2.6.0-SNAPSHOT:compile [INFO] | +- org.codehaus.woodstox:woodstox-core-asl:jar:4.1.2:runtime [INFO] | | - org.codehaus.woodstox:stax2-api:jar:3.1.1:runtime [INFO] | +- org.apache.ws.xmlschema:xmlschema-core:jar:2.0.1:compile [INFO] | +- org.apache.geronimo.specs:geronimo-javamail_1.4_spec:jar:1.7.1:compile [INFO] | - wsdl4j:wsdl4j:jar:1.6.2:compile [INFO] +- org.apache.cxf:cxf-rt-core:jar:2.6.0-SNAPSHOT:compile [INFO] | - com.sun.xml.bind:jaxb-impl:jar:2.1.13:compile ************** [INFO] +- org.springframework:spring-core:jar:3.0.7.RELEASE:provided [INFO] | +- org.springframework:spring-asm:jar:3.0.7.RELEASE:provided [INFO] | - commons-logging:commons-logging:jar:1.1.1:provided [INFO] +- org.springframework:spring-context:jar:3.0.7.RELEASE:provided [INFO] | +- 
org.springframework:spring-aop:jar:3.0.7.RELEASE:provided [INFO] | | - aopalliance:aopalliance:jar:1.0:provided [INFO] | +- org.springframework:spring-beans:jar:3.0.7.RELEASE:provided [INFO] | - org.springframework:spring-expression:jar:3.0.7.RELEASE:provided ***************** [INFO] +- javax.ws.rs:jsr311-api:jar:1.1.1:compile [INFO] +- org.apache.cxf:cxf-rt-bindings-xml:jar:2.6.0-SNAPSHOT:compile [INFO] +- org.apache.cxf:cxf-rt-transports-http:jar:2.6.0-SNAPSHOT:compile ****************** [INFO] +- org.apache.geronimo.specs:geronimo-servlet_3.0_spec:jar:1.0:provided [INFO] +- org.apache.cxf:cxf-rt-transports-local:jar:2.6.0-SNAPSHOT:test [INFO] +- org.apache.cxf:cxf-rt-databinding-jaxb:jar:2.6.0-SNAPSHOT:test [INFO] | - com.sun.xml.bind:jaxb-xjc:jar:2.1.13:test [INFO] +- org.apache.cxf:cxf-rt-transports-http-jetty:jar:2.6.0-SNAPSHOT:test [INFO] | +- org.eclipse.jetty:jetty-server:jar:7.5.4.v20111024:test [INFO] | | +- org.eclipse.jetty:jetty-continuation:jar:7.5.4.v20111024:test [INFO] | | - org.eclipse.jetty:jetty-http:jar:7.5.4.v20111024:test [INFO] | | - org.eclipse.jetty:jetty-io:jar:7.5.4.v20111024:test [INFO] | | - org.eclipse.jetty:jetty-util:jar:7.5.4.v20111024:test [INFO] | - org.eclipse.jetty:jetty-security:jar:7.5.4.v20111024:test [INFO] +- org.slf4j:slf4j-jdk14:jar:1.6.2:test [INFO] +- org.easymock:easymock:jar:3.1:test [INFO] | +- cglib:cglib-nodep:jar:2.2.2:test [INFO] | - org.objenesis:objenesis:jar:1.2:test [INFO] - org.apache.cxf:cxf-rt-databinding-aegis:jar:2.6.0-SNAPSHOT:test Note the strong dependencies surrounded by '************', this is all we have. FYI the wsdl4j dependency is most likely can be excluded - few people have confirmed it I agree it is more monolitic in the 2.5.x. We are doing a major split starting from 2.6 The cool part is that we reduce a bunch of the Maven dependencies with CXF and we are eating our own dog food. 
CXF implementation looks much heavier than Jersey, look at "mvn dependency:tree" I do not completely understand your discussion about 415, but the test failed because TikaExceptionMapper was not added to providers list (by the way, maybe CXF supports classpath scanning like Jersey?). Hi, here is the thread on the CXF list to do with handling 415:. Re Accept: I think that the client code needs to have an idea about the format of the data it expects back thus CXF WebClient will try to set some specific default. FYI, proxy-based clients will analyze @Produces/@Consumes. Also the idea of the client having to know what it expects back is finding its way into JAX-RS 2.0 client api too. Update: WebClient (trunk/2.5.3-SNAPSHOT) will only default Accept to application/xml if a specific custom class is expected to be populated on return, if Response is expected back then no action is taken OK, I give up for now. I disabled the 415 test that isn't passing. After researching this for hours, and working with Paul Ramirez (thanks for the help Paul), we basically found the following things to be true: - Jersey automatically sets Accept to something like / which IMHO is more sensible than CXF which sets it to an XML accept type (which causes the resource to not even find the path in test415) - For whatever reason, if you set accept to "xxx/xxx" instead of checks up front like it seems Jersey did, CXF will let the call get all the way to the UnpackerResource#unpack method and then cause the Tika AutoDetectParser to fail. Jersey seemed to have caught this. I have no clue why. We mucked around with different accept and type calls and got it to send 200 OK back and parse fine (e.g., if you set the accept to / and type to APPLICATION_MSWORD – but that defeats the purpose of the test. If you send in xxx/xxx, it seems like the JAX RS service should send back a 415. I need some massive help from anyone that knows CXF to figure this out. I have to step away from this for now. 
For now all tests pass, they are cleaned up using CXF client (with HttpClient removed), and I disabled test415. Any help to get 415 working with CXF is welcomed. Even if we have to modify UnpackerResource to do the check. I know that Sergey is watching this one (from CXF ville so would love some help here!) Hey Max, in r1305940, I committed the latest patch with those 3 tests disabled in UnpackagerResource for now. We can fix them and wrap this up and until I do so, I'll leave the issue open. Help is welcomed! Thanks, Max, see latest patch. I'm close now: ------------------------------------------------------- T E S T S ------------------------------------------------------- Running org.apache.tika.server.MetadataResourceTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.281 sec Running org.apache.tika.server.TikaResourceTest Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec Running org.apache.tika.server.UnpackerResourceTest Tests run: 10, Failures: 1, Errors: 2, Skipped: 0, Time elapsed: 2.012 sec <<< FAILURE! Results : Failed tests: test415(org.apache.tika.server.UnpackerResourceTest): expected:<415> but was:<406> Tests in error: testTarDocPicture(org.apache.tika.server.UnpackerResourceTest): Invalid byte 0 at offset 0 in '{NUL}{NUL}{NUL}{NUL}{NUL}{NUL}{NUL}' len=8 testText(org.apache.tika.server.UnpackerResourceTest): Stream closed 2 failures, 1 error, and the rest pass. Any ideas? Chris, there is two providers in my code that process this Map's. It is ZipWriter and TarWriter: I think now that it was not good idea to use Map class directly, it is better to introduce more specific interface - a lot closer. Unpacker tests are failing. Max, how did Jersey deal with the Map<String,byte[]> that you are returning in UnpackerResource? I don't see any @Providers in Jersey that natively know how to deal with this data structure, nor do I see any @Provider classes that you have written to take care of it. 
How was Jersey dealing with this? - ok tests passing, mostly. Will finish tomorrow morning! - Max FYI my current progress. I'm trying to get the unit tests rewritten but they are failing right now. Check out MetadataResource to see. The cool part is that we reduce a bunch of the Maven dependencies with CXF and we are eating our own dog food. I will go to the CXF lists tomorrow with my question about the failing unit tests. Chris, that is not enough just to change dependency. There is TikaServerCli class that configures Jetty + Jersey combo, it depends jersey classes <dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-frontend-jaxrs</artifactId> <version>2.3.1</version> </dependency> Max, see above. That will take care of the transitive dependencies for JAX-RS, including the API, etc. I'm not sure of a replacement for the test portions of the Jersey code. If you are +1 with the above, I'd like to commit it to the tika-server/pom.xml file. Hey Max, I don't have objections to moving forward re-enabling the module. How about we use CXF like I suggested though? I will try a commit to the POM shortly that will add in the CXF JaxRS dependencies. Let's try that. testExeDOCX(org.apache.tika.server.UnpackerResourceTest): PUT returned a response status of 204 No Content I could not reproduce this problem on current trunk version I found that Jersey dependencies are on Maven Central now (). I'm going to synchronize tika-server with our production code and enable it in default build after tika 1.1 release if there is no objections - push out to 1.2 Ingo, I applied your patch to my local copy, but am still seeing test failures: INFO: Stopping the Grizzly Web Container... Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.341 sec <<< FAILURE! 
Results : Tests in error: testExeDOCX(org.apache.tika.server.UnpackerResourceTest): PUT returned a response status of 204 No Content Tests run: 11, Failures: 0, Errors: 1, Skipped: 0 Got it running after fixing Eclipse's complaints in pom.xml and updating the mvn dependencies With the latest version (1.1-trunk) of Tika, I tried building tika-server and got this unit test failure: ------------------------------------------------------- T E S T S ------------------------------------------------------- Running org.apache.tika.server.TikaResourceTest Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.374 sec Running org.apache.tika.server.UnpackerResourceTest Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.617 sec <<< FAILURE! Running org.apache.tika.server.MetadataResourceTest Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.346 sec Results : Tests in error: testExeDOCX(org.apache.tika.server.UnpackerResourceTest) Tests run: 11, Failures: 0, Errors: 1, Skipped: 0 Here are the surefire-reports: [chipotle:~/src/tika/trunk] mattmann% more tika-server/target/surefire-reports/org.apache.tika.server.UnpackerResourceTest.txt ------------------------------------------------------------------------------- Test set: org.apache.tika.server.UnpackerResourceTest ------------------------------------------------------------------------------- Tests run: 6, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.613 sec <<< FAILURE! testExeDOCX(org.apache.tika.server.UnpackerResourceTest) Time elapsed: 1.418 sec <<< ERROR! com.sun.jersey.api.client.UniformInterfaceException: PUT returned a response status of 204 No Content at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:528) at com.sun.jersey.api.client.ClientResponse.getEntity(ClientResponse.java:506) at com.sun.jersey.api.client.WebResource.handle(WebResource.java:674) at com.sun.jersey.api.client.WebResource.put(WebResource.java:221) I'm looking into it. 
Here I go, I am going to try and integrate Apache CXF and swap out Jersey here. Wish me luck! - push out to 1.1: prep for 1.0. - pushing to 1.0. assign to me. I'd like to shepherd this through with CXF. I'll make time in the next week. I updated tika-server component. I replaced Grizzly web server with Jetty (that is available from maven central repository). I'm going to try to replace Jersey with Wink later Reopening until we figure out what to do with the references to the dev.java.net repositories. Earlier we had problems with such references to non-standard Maven repositories and I wouldn't like to have this issue block another release. In revision 1079922 I removed the tika-server component from the default build, which should allow us to release Tika even with the dev.java.net dependencies in place (we just can't deploy tika-server to Maven central then). There were also some test failures due apparently to some dependency version mismatch. See for details. Added my implementation in revision 1074088 Out of curiosity, why not just have a simple webapp (war) that uses Tika that reads the InputStream and spits back the data in whatever format is needed/specified? Sure, it requires a servlet container, but is that really a big deal? Just asking because it seems a tiny bit simpler than using Netty or Mina or HttpComponents or embedded Jetty or Grizzly. I uploaded my implementation on GitHub for preview: (please look at README file for build instructions and usage examples). Please comment it. My 2c on this: +1 to using JAX RS RE: the actual implementation, I used Apache CXF for OODT and it's basically a jar drop in (or MVN pom.xml update) single dependency. Wink is still Incubating right? I made HTTP-server with Jersey (JAX-RS) and embedded Glassfish (or Grizzly?) for text, metadata and binary attachment extraction. I has very simple REST-style interface I think we can contribute it to Tika project. 
Also I can try to replace Glassfish and Jersey with Tomcat and Apache Wink if it is required. What do you think? I updated documentation in wiki
https://issues.apache.org/jira/browse/TIKA-593?focusedCommentId=13241277&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Beginnings

I had found a problem in need of a PHP userland solution! I set to work right away, quickly releasing a 1.0.0 version. Little did I know this marked the beginning of a long road for a small package that would become popular and widely used.

What is a UUID?

A UUID aims to be practically unique such that information may be uniquely identified across distributed systems, without central coordination of identifiers. A UUID is a 128-bit value (16 octets), usually displayed as 32 hexadecimal digits. RFC 4122 defines five versions:

- Version 1 is constructed from the current timestamp and local machine MAC address
- Version 2 is the DCE Security version; it is similar to version 1, but RFC 4122 does not explicitly define it, so it is left out of most implementations
- Version 3 is constructed from a namespace and an MD5 hash of a name; given the same namespace and name, the UUID generated will always be the same
- Version 4 is randomly-generated and is probably the most common version used
- Version 5 is the same as version 3, but it uses SHA-1 hashing instead; it is the preferred version for name-based UUIDs

What's In a Name?

What's in a name? That which we call a rose
By any other name would smell as sweet.

The library's original vendor name was Rhumsaa, not Ramsey, and people didn't know how to pronounce it. I received complaints about not remembering how to spell it. I heard from others who didn't realize I was the package maintainer. One developer even encountered a problem and assumed it was a result of my "GitHub username change;" they thought I had changed my GitHub username from rhumsaa to ramsey, breaking the location of the package in Packagist.

So, what does rhumsaa mean, after all? As I wrote in my "Wild Garlic" post, "[T]he word rhumsaa is a Manx word derived from the Old Norse hrams-á, meaning 'wild garlic river.' In English, this word is ramsey." I was attempting to be clever with my vendor name, and it caused a lot of confusion.

As we were deep in the middle of development on the 3.x series, I decided a vendor name change from 2.x to 3.x might be a good idea and mitigate a lot of this confusion.
I asked on Twitter and opened Issue #48 to solicit feedback from the community for the name change. In the end, I made the decision to change the vendor name to Ramsey. I first updated my other Rhumsaa packages (i.e. ramsey/twig-codeblock, ramsey/vnderror, etc.) and then I changed the name of ramsey/uuid for the 3.x series.

I tried to make the transition as easy as possible. I'm sure there are better ways to handle changes like this, and in retrospect, I probably should have forked the package to allow projects to use both rhumsaa/uuid and ramsey/uuid together, similar to how the Guzzle project addressed a similar namespace and package name change. Nevertheless, I've only heard from a handful of those who've encountered problems with the upgrade or couldn't upgrade yet due to other packages using the older 2.x series.

When UUIDs Collide

Shortly before the GA release of 3.0.0, I received a troublesome bug report. Issue #80 purported to show that version 4 UUID collisions were occurring on a regular basis, even in small-scale tests, and as I mentioned earlier, this should not be probable. After several more corroborating reports, we were faced with a conundrum. Since I couldn't reproduce the issue, and no one could produce a sufficient reproduction script, the issue sat around for a long time. Every couple of weeks or so, someone would chime in to ask the status or confirm they had seen collisions. It began to scare people, and I was worried that community confidence in the library was degrading. I was actually stressed by the whole situation; I wanted my library to be useful and dependable.

Finally, after many months attempting to identify the culprit—I was certain it wasn't inside the library's code, since ramsey/uuid relies on external random generators—I had a conversation with Willem-Jan Zijderveld and Anthony Ferrara in the #phpc channel on Freenode IRC. Willem-Jan pointed us to the OpenSSL random fork-safety issue, where the OpenSSL project explains:
They go on to say that OpenSSL cannot fix the fork-safety problem because it's not in a position to do so.

OpenSSL was the culprit. More specifically, the culprit was the use of openssl_random_pseudo_bytes() in forked child processes, as is the case when running PHP under Apache or PHP-FPM. The process IDs were wrapping, so new children would produce the same random sequences as earlier children that had the same process IDs.

Discovering this launched discussions about what to do with OpenSSL in the paragonie/random_compat library. After that project decided to drop OpenSSL as a fallback for generating random bytes, I decided to require paragonie/random_compat as a dependency and use random_bytes() as the default random generator for ramsey/uuid. I then released versions 2.9.0 and 3.3.0 to solve this problem in both the 2.x and 3.x series.

ramsey/uuid 2.9.0 & 3.3.0 fix the UUID collision issue caused by OpenSSL. All users should upgrade. Thanks!

— Ben Ramsey (@ramsey) March 22, 2016

It's interesting to note that @SwiftOnSecurity picked up on the issue and posted about it:

Fascinating investigation into developers getting tons of UUID collisions in small datasets (via @CiPHPerCoder)

— SecuriTay (@SwiftOnSecurity) March 26, 2016

3.4.1 and Beyond

ramsey/uuid has undergone many changes since its 1.0.0 release. That very first release had severe limitations, due to the math involved, and some grievous bugs because of that math. I required that everyone using the library use a 64-bit system, and I failed to account for the unsignedness of the integers. Since all PHP integers are signed, this led to serious problems in generating UUIDs. The 2.x series followed about seven months later, supporting both 64-bit and 32-bit systems and handling the unsignedness of UUID integers through a BC math wrapper library, moontoast/math.
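As an aside on the collision bug above: the core failure mode is that two processes starting from identical PRNG state emit identical byte streams. A minimal Python sketch of that effect, using Python's seedable random module as a hypothetical stand-in for OpenSSL's per-process PRNG state:

```python
import random

# Model two forked children whose entropy source ended up seeded
# identically (as happened when process IDs wrapped under PHP-FPM).
child_a = random.Random(1234)
child_b = random.Random(1234)

# Each "child" draws 16 bytes of UUID material.
bytes_a = bytes(child_a.getrandbits(8) for _ in range(16))
bytes_b = bytes(child_b.getrandbits(8) for _ in range(16))

# Identical state in, identical "random" bytes out: a guaranteed collision.
assert bytes_a == bytes_b
```

A version 4 UUID is only as unique as the randomness behind it, which is why switching the default generator to random_bytes() resolved the reports.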
We—for it really was a community effort—made many improvements and enhancements over the course of the 2.x series, but it was clear that more flexibility was desired, and this led to the changes in the 3.x series.

The 3.x series ushered in a great deal of flexibility through interfaces and dependency injection. While the standard public API was left unchanged, the guts of the library were completely transformed to allow anyone to use their own random generator, time provider, MAC address provider, and more.

Now, as the library matures beyond the 3.4.1 version, I'm looking ahead to the 4.x series and how it will further improve the library with more flexibility and closer adherence to RFC 4122, while providing some facilities to optimize UUIDs in databases, and more. Here are a handful of the issues I'm considering for 4.0.0. You can read more, and may submit your own, on the ramsey/uuid GitHub issues page.

- Store UUIDs in a way optimized for InnoDB
- Consider supporting version 2 DCE Security UUIDs
- Set the lowest PHP version requirement to 5.6
- Use a math library other than moontoast/math
- Create DateTime with nanoseconds

How To Use It

The ramsey/uuid library provides a static interface to create immutable UUID objects for RFC 4122 variant UUIDs of versions 1, 3, 4, and 5. The preferred installation method is Composer:

composer require ramsey/uuid

After installation, simply require Composer's autoloader (or use your own, or one provided by your framework of choice) and begin using the library right away, without any setup:

$uuid = \Ramsey\Uuid\Uuid::uuid4();
echo $uuid->toString();

The library will make some decisions about your environment and choose the best options for generating random or time-based UUIDs, but these are configurable.
For example, if you wish to use Anthony Ferrara's RandomLib library as the random generator, you may configure the library to do so:

$factory = new \Ramsey\Uuid\UuidFactory();
$factory->setRandomGenerator(new \Ramsey\Uuid\Generator\RandomLibAdapter());
\Ramsey\Uuid\Uuid::setFactory($factory);

$uuid = \Ramsey\Uuid\Uuid::uuid4();

If you wish to provide your own random generator, you may do so by implementing Ramsey\Uuid\Generator\RandomGeneratorInterface and setting your object as the random generator to use.

Likewise, the library supports configuring the time provider. If you'd like to use the PECL uuid extension, for example, to generate time-based UUIDs, this is possible:

$factory = new \Ramsey\Uuid\UuidFactory();
$factory->setTimeGenerator(new \Ramsey\Uuid\Generator\PeclUuidTimeGenerator());
\Ramsey\Uuid\Uuid::setFactory($factory);

$uuid = \Ramsey\Uuid\Uuid::uuid1();

There are a variety of other ways to configure ramsey/uuid. This example configures the library to generate a version 4 COMB sequential UUID with the timestamp as the first 48 bits:

$factory = new \Ramsey\Uuid\UuidFactory();
$factory->setCodec(new \Ramsey\Uuid\Codec\TimestampFirstCombCodec(
    $factory->getUuidBuilder()
));
$factory->setRandomGenerator(new \Ramsey\Uuid\Generator\CombGenerator(
    $factory->getRandomGenerator(),
    $factory->getNumberConverter()
));
\Ramsey\Uuid\Uuid::setFactory($factory);

$uuid = \Ramsey\Uuid\Uuid::uuid4();

Thanks

I couldn't wrap up this post without thanking a few key project contributors. Were it not for the efforts of these folks, ramsey/uuid would not be the great library it is today.

I want to first thank Marijn Huizendveld. Marijn submitted the first pull requests to ramsey/uuid and contributed the Doctrine ORM integration that I later split out into the separate ramsey/uuid-doctrine library. It was Marijn's participation that got me excited about collaborating on an open source project and continuing the work.

I owe a debt of gratitude to Thibaud Fabre for his instrumental work in taking ramsey/uuid to version 3. He set out to re-architect the library, providing the interfaces and structure for codecs, generators, providers, and more.
I've learned a lot about organizing software, object-oriented programming, and dependency injection from his involvement.

Most recently, Jessica Mauerhan has been a force for improving our test suite, raising overall test coverage and adding tests for internal bits that were exercised but not fully tested. I've learned a great deal about testing from her contributions.

Last but definitely not least, there are many more without whose contributions ramsey/uuid would be a lesser library. I am grateful to you all for your hard work and help in making ramsey/uuid an awesome library.

- List of ramsey/uuid contributors
- List of ramsey/uuid-doctrine contributors
- List of ramsey/uuid-console contributors

Cheers!

ramsey/uuid – Ben Ramsey
Quoting Eric W. Biederman (ebiederm@xmission.com):

> "Serge E. Hallyn" <serue@us.ibm.com> writes:
>
> >> >> Index: 2.6.20/fs/autofs4/waitq.c
> >> >> ===================================================================
> >> >> --- 2.6.20.orig/fs/autofs4/waitq.c
> >> >> +++ 2.6.20/fs/autofs4/waitq.c
> >> >> @@ -292,8 +292,8 @@ int autofs4_wait(struct autofs_sb_info *
> >> >>  	wq->ino = autofs4_get_ino(sbi);
> >> >>  	wq->uid = current->uid;
> >> >>  	wq->gid = current->gid;
> >> >> -	wq->pid = current->pid;
> >> >> -	wq->tgid = current->tgid;
> >> >> +	wq->pid = pid_nr(task_pid(current));
> >> >> +	wq->tgid = pid_nr(task_tgid(current));
> >> >>  	wq->status = -EINTR; /* Status return if interrupted */
> >> >>  	atomic_set(&wq->wait_ctr, 2);
> >> >>  	mutex_unlock(&sbi->wq_mutex);
> >>
> >> I have a concern with this bit as my quick review said the wait queue
> >> persists, and if so we should cache the struct pid pointer, not the
> >> pid_t value. Heck, the whole pid_nr(task_xxx(current)) idiom I find
> >> very suspicious.
> >
> > Based just on what I see right here I agree it seems like we would want
> > to store a ref to the pid, not store the pid_nr(pid) output, so in this
> > context it is suspicious.
>
> So that far we are in agreement.
>
> > OTOH if you're saying that using pid_nr(task_pid(current)) anywhere
> > should always be 'wrong', then please explain why, as I think we have a
> > disagreement on the meanings of the structs involved. In other words,
> > at some point I expect the only way to get a "pid number" out of a task
> > would be using this exact idiom, "pid_nr(task_pid(current))".
>
> Dealing with the current process is very common, and
> "pid_nr(task_pid(current))" is very long winded. Therefore I think it
> makes sense to have a specialized helper for that case.
>
> I don't think "current->pid" and "current->tgid" are necessarily
> wrong.

True, current->pid can probably always be legitimately taken as the pid
number in the current task's cloning namespace.
But task->pid is wrong. So if, as you say, it's worth caching (not saying
I doubt you, just that I haven't verified), then ideally we could cache
current->pid but only access it using current_pid(). Does that seem worth
doing?

In any case, certainly adding a task_pid_nr() helper which for starters
returns pid_nr(task_pid(task)) seems reasonable. Note that Suka's about
ready to send a new iteration of the pidns patchset, so I'd like this to
be considered something to clean up on top of that patchset.

-serge

> For "process_session(current)" and "process_group(current)" I think
> they are fine, but we might optimize them to something like
> "current_session()" and "current_group()".
>
> The important part is that we have clearly detectable idioms for
> finding the pid values, so we can find the users and audit the code.
> Having a little more change so that the problem cases don't compile
> when they come from a patch that hasn't caught up yet with the changes
> is also useful.
>
> The only advantage I see in making everything go through something
> like pid_nr(task_pid(current)) is that we don't have the problem of
> storing the pid value twice. However, if we have short-hand helper
> functions for that case it will still work and we won't be horribly
> wordy.
>
> Further, I don't know how expensive pid_nr is going to be; I don't
> think it will be very expensive. But I still think it may be
> reasonable to cache the answers for the current process on the
> task_struct. Fewer cache lines and all of that jazz.
>
> Mostly I just think pid_nr(task_pid(xxx)) looks ugly, is rarely needed,
> and is frequently associated with a bad conversion.
>
> Eric

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at
read the FAQ at